Dataset fields: id (string, 2–8 chars), url (string, 31–117 chars), title (string, 1–71 chars), text (string, 153–118k chars), topic (string, 4 classes), section (string, 4–49 chars), sublist (string, 9 classes).
472293
https://en.wikipedia.org/wiki/Yolk
Yolk
Among animals that produce eggs, the yolk (also known as the vitellus) is the nutrient-bearing portion of the egg whose primary function is to supply food for the development of the embryo. Some types of egg contain no yolk, for example because they are laid in situations where the food supply is sufficient (such as in the body of the host of a parasitoid) or because the embryo develops in the parent's body, which supplies the food, usually through a placenta. Reproductive systems in which the mother's body supplies the embryo directly are said to be matrotrophic; those in which the embryo is supplied by yolk are said to be lecithotrophic. In many species, such as all birds and most reptiles and insects, the yolk takes the form of a special storage organ constructed in the reproductive tract of the mother. In many other animals, especially very small species such as some fish and invertebrates, the yolk material is not in a special organ, but inside the egg cell. As stored food, yolks are often rich in vitamins, minerals, lipids and proteins. The proteins function partly as food in their own right, and partly in regulating the storage and supply of the other nutrients. For example, in some species the amount of yolk in an egg cell affects the developmental processes that follow fertilization. The yolk is not living cell material like protoplasm, but largely passive material, that is to say deutoplasm. The food material and associated control structures are supplied during oogenesis. Some of the material is stored more or less in the form in which the maternal body supplied it, partly as processed by dedicated non-germ tissues in the egg, while part of the biosynthetic processing into its final form happens in the oocyte itself. Apart from animals, other organisms, such as algae, especially oogamous ones, can also accumulate resources in their female gametes. In gymnosperms, the remains of the female gametophyte also serve as a food supply, and in flowering plants, the endosperm does. Avian egg yolk In avian eggs, the yolk is usually a hue of yellow. It is spherical and is suspended in the egg white (known alternatively as albumen or glair/glaire) by one or two spiral bands of tissue called the chalazae. The yolk mass, together with the ovum proper (after fertilization, the embryo), is enclosed by the vitelline membrane, whose structure is different from that of a cell membrane. The yolk is mostly extracellular to the oolemma, not being accumulated inside the cytoplasm of the egg cell (as occurs in frogs), contrary to the claim that the avian ovum (in the strict sense) and its yolk are a single giant cell. After fertilization, the cleavage of the embryo leads to the formation of the germinal disc. As food, the chicken egg yolk is a major source of vitamins and minerals. It contains all of the egg's fat and cholesterol, and nearly half of the protein. If left intact when an egg is fried, the yellow yolk surrounded by a flat blob of egg white creates a distinctive "sunny-side up" form. Mixing the two components together before cooking results in a yellow (from pale yellow to almost orange, depending on the breed of hen) mass, as in omelets and scrambled eggs. Uses The developing embryo inside the egg uses the yolk as sustenance. It is at times separated from the egg white for cooking, is frequently employed as an emulsifier, and is used in mayonnaise, custard, hollandaise sauce, crème brûlée, avgolemono and ovos moles. It is used in painting as a component of traditional egg tempera. 
It is used in the production of egg yolk agar plate medium, useful in testing for the presence of Clostridium perfringens. Egg yolk contains an antibody called immunoglobulin Y (IgY). The antibody transfers from the laying hen to the egg yolk by passive immunity to protect both embryo and hatchling from microorganism invasion. Egg yolk can be used to make liqueurs such as Advocaat or eggnog. Egg yolk is used to extract egg oil, which has various cosmetic, nutritional, and medicinal uses. Composition of chicken egg yolk The yolk makes up about 33% of the liquid weight of the egg; it contains about three times the energy content of the egg white, mostly due to its fat content. All of the fat-soluble vitamins (A, D, E and K) are found in the egg yolk. Egg yolk is one of the few foods naturally containing vitamin D. The composition (by weight) of the most prevalent fatty acids in egg yolk typically is: unsaturated fatty acids: oleic acid, 47%; linoleic acid, 16%; palmitoleic acid, 5%; linolenic acid, 2%; saturated fatty acids: palmitic acid, 23%; stearic acid, 4%; myristic acid, 1%. Egg yolk is a source of lecithin, as well as egg oil, for cosmetic and pharmaceutical applications. Based on weight, egg yolk contains about 9% lecithin. The yellow color is due to lutein and zeaxanthin, which are yellow or orange carotenoids known as xanthophylls. Yolk proteins The yolk's different proteins have distinct roles. Phosvitins are important in sequestering calcium, iron, and other cations for the developing embryo. Phosvitins are among the most phosphorylated (10%) proteins in nature; the high concentration of phosphate groups provides efficient metal-binding sites in clusters. Lipovitellins are involved in lipid and metal storage, and contain a heterogeneous mixture of about 16% (w/w) noncovalently bound lipid, most being phospholipid. Lipovitellin-1 contains two chains, LV1N and LV1C. Yolk vitamins and minerals Yolks hold more than 90% of the calcium, iron, phosphorus, zinc, thiamine, vitamin B6, folate, vitamin B12, and pantothenic acid of the egg. In addition, yolks contain all of the egg's fat-soluble vitamins (A, D, E, and K), as well as all of the essential fatty acids. A single yolk from a large egg contains roughly 22 mg of calcium, 66 mg of phosphorus, 9.5 micrograms of selenium, and 19 mg of potassium, according to the USDA. Double-yolk eggs Double-yolk eggs occur when ovulation happens too rapidly, or when one yolk becomes joined with another yolk. These eggs may be the result of a young hen's reproductive cycle not yet being synchronized. Double-yolked eggs seldom lead to successful hatchlings without human intervention, as the chicks interfere with each other's hatching process and do not survive. Higher-order yolks are rare, though hens are known to occasionally lay even triple-yolk eggs. Yolkless eggs Eggs without yolks are known as "dwarf" or "wind" eggs, or by the archaic term "cock egg". Such an egg is most often a pullet's first effort, produced before her laying mechanism is fully ready. Mature hens rarely lay a yolkless egg, but sometimes a piece of reproductive tissue breaks away and passes down the tube. Such a scrap of tissue may stimulate the egg-producing glands to react as though it were a yolk and wrap it in albumen, membranes, and a shell as it travels through the egg tube. This is usually what causes an egg to contain a small particle of grayish tissue instead of a yolk. 
Since these eggs contain no yolk, and therefore cannot hatch, they were traditionally believed to have been laid by roosters. This type of egg occurs in many varieties of fowl and has been found in chickens, both standard and bantams, guineas, and coturnix quail. Yolk color The color of an egg yolk is directly influenced by the makeup of the chicken feed. Egg yolk color is generally more yellow when given a feed containing a large component of yellow, fat-soluble pigments, such as the carotenes in dark green plant material, for example alfalfa. Although much emphasis is put onto the color of the egg yolk, it does not reliably reflect the nutritional value of an egg. For example, some of the natural pigments that produce a rich yolk color are xanthophylls without much nutritional value, rather than the carotenoids that act as provitamin A in the body. Also, a diet rich in vitamin A itself, but without A-provitamins or xanthophylls, can produce practically colourless yolks that are just as nutritious as any richly colored yolks. Yolks, particularly from free-range eggs, can be of a wide range of colors, ranging from nearly white, through yellow and orange, to practically red, or even olive green, depending on the pigments in their feed. Feeding fowl large amounts of capsicum peppers, for example, tends to result in red or deep orange yolks. This has nothing to do with adding colors such as cochineal to eggs in cooking. In fish All bony fish, some sharks and rays have yolk sacs at some stage of development, with all oviparous fish retaining the sac after hatching. Lamniform sharks are ovoviviparous, in that their eggs hatch in utero; in addition to eating unfertilized eggs, unborn sharks participate in intrauterine-cannibalism: stronger pups consume their weaker womb-mates. In crustaceans The yolk in crustacean eggs is essential for embryonic development, serving as a nutrient reservoir. In decapod crustaceans, the primary yolk precursor protein is apolipocrustacein (apoCr), which differs from the traditional vitellogenins (Vtgs) found in most oviparous animals. ApoCr shares greater structural and evolutionary similarity with insect apolipophorin II/I (apoLp-II/I) and vertebrate apolipoprotein B (apoB), distinguishing it from other members of the large lipid transfer protein (LLTP) superfamily. ApoCr is a large glycolipoprotein, approximately 2,600 amino acids long, with conserved structural domains characteristic of LLTPs. These domains include an N-terminal lipid transfer module, a DUF1081 domain exclusive to apoLp-II/I and apoB, and a von Willebrand factor type D domain at the C-terminal. Additionally, it features a subtilisin-like cleavage site, a trait shared with apoLp-II/I. Evolutionary analyses reveal that apoCr is phylogenetically closer to apoLp-II/I than to Vtg proteins, indicating a distinct lineage for crustacean yolk proteins. In decapods, apoCr is typically expressed in both the ovary and hepatopancreas, supporting its dual roles in lipid metabolism and yolk formation. In some species, gene duplication events have resulted in multiple apoCr variants with tissue-specific functions.
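As a rough back-of-the-envelope sketch of the composition figures quoted above (yolk about 33% of the egg's liquid weight, lecithin about 9% of the yolk, and the stated fatty-acid percentages), the snippet below applies those percentages to a hypothetical egg. The 60 g liquid egg weight is an assumed example value, not a figure from the article.

```python
# Illustrative only: applies the composition percentages quoted above to an
# assumed egg weight. Percentages are taken from the text; the 60 g is made up.

FATTY_ACID_PROFILE = {        # % of yolk fatty acids, by weight (from the text)
    "oleic acid": 47,
    "linoleic acid": 16,
    "palmitoleic acid": 5,
    "linolenic acid": 2,
    "palmitic acid": 23,
    "stearic acid": 4,
    "myristic acid": 1,
}

def yolk_breakdown(liquid_egg_weight_g: float) -> dict:
    """Estimate yolk and lecithin mass from the quoted percentages."""
    yolk_g = 0.33 * liquid_egg_weight_g   # yolk ~33% of the egg's liquid weight
    lecithin_g = 0.09 * yolk_g            # lecithin ~9% of the yolk by weight
    return {"yolk_g": round(yolk_g, 1), "lecithin_g": round(lecithin_g, 1)}

if __name__ == "__main__":
    print(yolk_breakdown(60.0))                     # assumed 60 g liquid weight
    print(sum(FATTY_ACID_PROFILE.values()), "% of fatty acids accounted for")
```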
Biology and health sciences
Animal reproduction
Biology
472479
https://en.wikipedia.org/wiki/Scutigera%20coleoptrata
Scutigera coleoptrata
Scutigera coleoptrata, also known as the house centipede, is a species of centipede that is typically yellowish-grey and has up to 15 pairs of long legs. Originating in the Mediterranean region, it has spread to other parts of the world, where it can live in human homes. It is an insectivore; it kills and eats other arthropods, such as insects and arachnids. Etymology In 1758, Carl Linnaeus described the species in the tenth edition of his Systema Naturae, giving the name Scolopendra coleoptrata, writing that it has a "coleopterated thorax" (similar to a coleopter). In 1801, Jean-Baptiste Lamarck separated Scutigera from Scolopendra, calling this species Scutigera coleoptrata. The word Scutigera comes from the Latin scutum ("shield") and gerere ("to carry"), because of the shape of the plates on the back of the chilopod. Morphology The body of an adult Scutigera coleoptrata is typically in length, although larger specimens are sometimes encountered. Up to 15 pairs of long legs are attached to the rigid body. Together with the antennae, they give the centipede an appearance of being in length. The delicate legs enable it to reach surprising speeds of up to running across floors, up walls and along ceilings. Its body is yellowish-grey and has three dark dorsal stripes running down its length; the legs also have dark stripes. S. coleoptrata has developed automimicry in that its tail-like hind legs present the appearance of antennae. When the centipede is at rest, it is not easy to tell its cranial end from its caudal end. Unlike most other centipedes, house centipedes and their close relatives have well-developed faceted eyes. Reproduction and development House centipedes lay their eggs in spring. In a laboratory observation of 24 house centipedes, an average of 63 and a maximum of 151 eggs were laid. As with many other arthropods, the larvae look like miniature versions of the adult, albeit with fewer legs. Young centipedes have four pairs of legs when they are hatched. They gain a new pair with the first molting, and two pairs with each of their five subsequent moltings. Adults with 15 pairs of legs retain that number through three more molting stages (sequence 4-5-7-9-11-13-15-15-15-15 pairs). House centipedes live anywhere from three to seven years, depending on the environment. They can start breeding in their third year. To begin mating, the male and female circle around each other. They initiate contact with their antennae. The male deposits his sperm on the ground and the female then uses it to fertilize her eggs. Behavior and ecology House centipedes feed on spiders, bed bugs, termites, cockroaches, silverfish, ants, and other household arthropods. They administer venom through forcipules. These are not part of their mandibles, so strictly speaking they sting rather than bite. They are mostly nocturnal hunters. Despite their developed eyes, they seem to rely mostly on their antennae when hunting. Their antennae are sensitive to both smells and tactile information. They use both their mandibles and their legs for holding prey. This way they can deal with several small insects at the same time. To capture prey they either jump onto it or use their legs in a technique described as "lassoing". Using their legs to beat prey has also been described. Like other centipedes, they can stridulate. In a feeding study, S. coleoptrata showed the ability to distinguish between possible prey, avoiding dangerous insects. They also adapted their feeding pattern to the type of hazard the prey might pose to them. 
For wasps, they retreat after applying the venom to give it time to take effect. When the centipede is in danger of becoming prey itself, it can detach any legs that have become trapped. House centipedes have been observed to groom their legs by curling around and grooming them with their forcipules. In 1902, C. L. Marlatt, an entomologist with the United States Department of Agriculture, wrote a brief description of the house centipede: Habitat Outdoors, house centipedes prefer to live in cool, damp places. Centipede respiratory systems do not provide any mechanism for shutting the spiracles, and that is why they need an environment that protects them from dehydration and excessive cold. Most live outside, primarily under large rocks, piles of wood or leaves, in barkdust and especially in compost piles. They often emerge from hiding during the watering of gardens or flowerbeds. These centipedes can be found in almost any part of the house, although they are usually encountered in dark or dimly lit areas such as basements and garages. Inside the home, they can be found in bathrooms and lavatories, which tend to be humid, but they can also be found in drier places like offices, bedrooms and dining rooms. They are usually seen crawling along the ground or floor, but they are capable of climbing walls. The greatest likelihood of encountering them is in spring, when they emerge due to warmer weather and in autumn/fall, when the cooling weather forces them to seek shelter in human habitats. Distribution Scutigera coleoptrata is indigenous to the Mediterranean region, but it has spread through much of Europe, Asia, North America and South America. It has also been introduced to Australia. Biological details The faceted eyes of S. coleoptrata are sensitive to daylight and very sensitive to ultraviolet light. They were shown to be able to visually distinguish between different mutations of Drosophila melanogaster. How this ability fits with its nocturnal lifestyle and underground natural habitat is still under study. They do not instantly change direction when light is suddenly shone at them, but will retreat to a darker hiding spot. Some of the plates covering the body segments fused and became smaller during the evolution to the current state of S. coleoptrata. The resulting mismatch between body segments and dorsal plates (tergites) is the cause for this centipede's rigid body. Tergites 10 and 11 are not fully developed and segment 18 does not have a sternite. This model deviates from descriptions by Lewis who identified only 7 tergites and 15 segments. Another feature that sets S. coleoptrata apart from other centipedes is that their hemolymph was found to contain proteins for transporting oxygen. The mitochondrial genome of S. coleoptrata has been sequenced. This opened up discussions on the taxonomy and phylogeny of this and related species. Interaction with humans Unlike its shorter-legged but larger tropical cousins, S. coleoptrata can live its entire life inside a building, usually on the ground levels of homes. While many homeowners may be unsettled by house centipedes due to their speed and appearance, they pose little to no threat towards humans, and are often beneficial as they catch other, more harmful pests, such as cockroaches. They are not aggressive and usually flee when disturbed or revealed from cover. Sting attempts are therefore rare unless the centipede is cornered or aggressively handled. 
Its small forcipules have difficulty penetrating skin, and even successful stings produce only mild, localized pain and swelling, similar to a bee sting. Allergic reactions to centipede stings have been reported, but these are rare; most stings heal quickly and without complication.
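The leg-count progression given in the reproduction and development passage above (four pairs at hatching, one new pair at the first molting, two pairs at each of the five subsequent moltings, then three further stages with no change) can be reproduced with a short sketch; the function below is illustrative only and assumes nothing beyond what the text states.

```python
# Illustrative sketch of the post-embryonic leg-pair sequence described above:
# hatchlings have 4 pairs, gain 1 pair at the first molting, 2 pairs at each of
# the five following moltings, then pass through three more stages at 15 pairs.

def leg_pair_sequence() -> list[int]:
    pairs = [4]                   # pairs of legs at hatching
    pairs.append(pairs[-1] + 1)   # first molting adds one pair
    for _ in range(5):            # five subsequent moltings add two pairs each
        pairs.append(pairs[-1] + 2)
    for _ in range(3):            # three further stages keep 15 pairs
        pairs.append(pairs[-1])
    return pairs

# Matches the sequence quoted in the text: 4-5-7-9-11-13-15-15-15-15 pairs.
assert leg_pair_sequence() == [4, 5, 7, 9, 11, 13, 15, 15, 15, 15]
```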
Biology and health sciences
Myriapoda
Animals
472608
https://en.wikipedia.org/wiki/Depressant
Depressant
Depressants, colloquially known as "downers" or central nervous system (CNS) depressants, are drugs that lower neurotransmission levels, decrease the electrical activity of brain cells, or reduce arousal or stimulation in various areas of the brain. Some specific depressants do influence mood, either positively (e.g., opioids) or negatively, but depressants often have no clear impact on mood (e.g., most anticonvulsants). In contrast, stimulants, or "uppers", increase mental alertness, making stimulants the opposite drug class from depressants. Antidepressants are defined by their effect on mood, not on general brain activity, so they form an orthogonal category of drugs. Depressants are closely related to sedatives as a category of drugs, with significant overlap. The terms may sometimes be used interchangeably or may be used in somewhat different contexts. Depressants are widely used throughout the world as prescription medicines and illicit substances. Alcohol is a very prominent depressant. When depressants are used, effects often include ataxia, anxiolysis, pain relief, sedation or somnolence, cognitive or memory impairment, as well as, in some instances, euphoria, dissociation, muscle relaxation, lowered blood pressure or heart rate, respiratory depression, and anticonvulsant effects. Depressants sometimes also act to produce anesthesia. Other depressants can include drugs like Xanax (a benzodiazepine) and a number of opioids. Gabapentinoids like gabapentin and pregabalin are depressants and have anticonvulsant and anxiolytic effects. Most anticonvulsants, like lamotrigine and phenytoin, are depressants. Carbamates, such as meprobamate, are depressants that are similar to barbiturates. Anesthetics are generally depressants; examples include ketamine and propofol. Depressants exert their effects through a number of different pharmacological mechanisms, the most prominent of which include facilitation of GABA and inhibition of glutamatergic or monoaminergic activity. Other examples are chemicals that modify the electrical signaling inside the body, the most prominent of which are bromides and channel blockers. Indications Depressants are used medicinally to relieve the following symptoms and disorders: Anxiety disorders such as: Generalized anxiety Social anxiety Panic attacks Insomnia Obsessive–compulsive disorder Seizures Convulsions Depression Pain Types Alcohol An alcoholic beverage is a drink that contains alcohol (known formally as ethanol), an anesthetic that has been used as a psychoactive drug for several millennia. Ethanol is the oldest recreational drug still used by humans. Ethanol can cause alcohol intoxication when consumed. Alcoholic beverages are divided into three general classes for taxation and regulation of production: beers, wines, and spirits (distilled beverages). They are legally consumed in most countries around the world. More than 100 countries have laws regulating their production, sale, and consumption. The most common way to measure intoxication for legal or medical purposes is through blood alcohol content (also called blood alcohol concentration or blood alcohol level). It is usually expressed as a percentage of alcohol in the blood in units of mass of alcohol per volume of blood, or mass of alcohol per mass of blood, depending on the country. For instance, in North America, a blood alcohol content of 0.10 g/dL means that there are 0.10 g of alcohol for every dL of blood (i.e., mass per volume is used there). 
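As a small illustration of the unit convention described above (mass of alcohol per volume of blood, as used in North America), the sketch below simply divides grams by decilitres. The 5 g and 50 dL inputs are arbitrary assumed values chosen to reproduce the 0.10 g/dL example from the text; this is a unit illustration, not a pharmacokinetic model.

```python
# Illustrative only: expresses blood alcohol content the way the text describes
# for North America, i.e. grams of alcohol per decilitre of blood (g/dL).

def bac_g_per_dl(alcohol_g: float, blood_volume_dl: float) -> float:
    """Blood alcohol content as mass of alcohol per volume of blood."""
    return alcohol_g / blood_volume_dl

# Assumed example numbers: 5 g of alcohol in 50 dL (5 L) of blood gives
# 0.10 g/dL, the example value quoted in the text.
print(f"{bac_g_per_dl(5.0, 50.0):.2f} g/dL")
```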
Barbiturates Barbiturates were once popular treatments for insomnia, anxiety, and seizures, although their popularity has waned in recent decades. Barbiturates are sometimes used recreationally; they cause dependence and severe withdrawal, and they have a high risk of fatal overdose due to respiratory depression. By the late 1950s, concerns over the mounting social costs associated with barbiturates prompted a concerted effort to find alternative medications. Most people still using barbiturates today do so for the prevention of seizures or, in mild form, for relief from the symptoms of migraines. One barbiturate that remains in use for seizure disorders is phenobarbital. Benzodiazepines A benzodiazepine (sometimes colloquially "benzo"; often abbreviated "BZD") is a drug whose core chemical structure is the fusion of a benzene ring and a diazepine ring. The first such drug, chlordiazepoxide (Librium), was discovered accidentally by Leo Sternbach in 1955 and made available in 1960 by Hoffmann–La Roche, which has also marketed the benzodiazepine diazepam (Valium) since 1963. Benzodiazepines enhance the effect of the neurotransmitter gamma-aminobutyric acid (GABA) at the GABAA receptor, resulting in sedative, hypnotic (sleep-inducing), anxiolytic (anti-anxiety), anticonvulsant, and muscle relaxant properties. High doses of shorter-acting benzodiazepines induce anterograde amnesia, which may be helpful for surgical and procedural anesthesia to reduce patient recall. Midazolam is often used in anesthesiology. These properties make benzodiazepines useful in treating anxiety, insomnia, agitation, seizures, muscle spasms, alcohol withdrawal, and as a premedication for medical or dental procedures. Benzodiazepines are categorized as either short-, intermediate-, or long-acting. Short- and intermediate-acting benzodiazepines are preferred for the treatment of insomnia; longer-acting benzodiazepines are recommended for the treatment of anxiety. In general, benzodiazepines are safe and effective in the short term, although cognitive impairments and paradoxical effects such as aggression or behavioral disinhibition occasionally occur. A minority of patients react to benzodiazepines with paradoxical agitation. Long-term use is controversial due to adverse psychological and cognitive effects, decreasing effectiveness, dependence, and benzodiazepine withdrawal syndrome, following withdrawal after long-term use. The elderly are at an increased risk of experiencing both short- and long-term adverse effects. There is controversy concerning the safety of benzodiazepines in pregnancy. While they are not major teratogens, uncertainty remains as to whether they cause cleft palate in a small number of babies and whether neurobehavioral effects occur as a result of prenatal exposure; they are known to cause withdrawal symptoms in the newborn. Benzodiazepines can be overdosed and cause dangerous deep unconsciousness. However, they are much less toxic than their predecessors, barbiturates, and death rarely results when a benzodiazepine is the only drug taken; however, when combined with other central nervous system depressants such as alcohol and opiates, the potential for toxicity and fatal overdose increases. Benzodiazepines are commonly misused and taken in combination with other addictive drugs. In addition, all benzodiazepines are listed in the Beers List, which is significant in clinical practice. Cannabis Cannabis is often considered either in its own unique category or as a mild psychedelic. 
The chemical compound tetrahydrocannabinol (THC), which is found in cannabis, has many depressant effects, such as muscle relaxation, sedation, decreased alertness, and tiredness. Contrary to the previous statement, activation of the CB1 receptor by cannabinoids causes an inhibition of GABA, the exact opposite of what CNS depressants do. Carbamates Carbamates are a class of depressants, or "tranquilizers", that are synthesized from urea. Carbamates have anxiolytic, muscle relaxant, anticonvulsant, hypnotic, antihypertensive, and analgesic effects. They have other uses, like muscle tremors, agitation, and alcohol withdrawal. Their muscle relaxant effects are useful for strains, sprains, and muscle injuries combined with rest, physical therapy, and other measures. The effects, synthesis, and mechanism of action of carbamates are very similar to those of barbiturates. There are many different types of carbamates: some only produce anxiolytic and hypnotic effects, while others only have anticonvulsant effects. Side effects of carbamates include drowsiness, dizziness, headache, diarrhea, nausea, flatulence, liver failure, poor coordination, nystagmus, abuse, dizziness, weakness, nervousness, euphoria, overstimulation, and dependence. Uncommon but potentially severe adverse reactions include hypersensitivity reactions such as Stevens–Johnson syndrome, embryo-fetal toxicity, stupor, and coma. It is not recommended to use most carbamates, like carisoprodol, for a long time, as physical and psychological dependence do occur. Meprobamate was launched in 1955. It quickly became the first popular psychotropic drug in America, becoming popular in Hollywood and gaining fame for its seemingly miraculous effects. It has since been marketed under more than 100 trade names, including Amepromat, Quivet, and Zirpon. Carisoprodol, which metabolizes into meprobamate and is still used mainly for its muscle relaxant effects, can potentially be abused. Its mechanism of action is very similar to that of barbiturates, alcohol, methaqualone, and benzodiazepines. Carisoprodol allosterically modulates and directly activates the human α1β2γ2 GABAAR (GABAA) in the central nervous system, similar to barbiturates. This causes chloride channels to open, allowing chloride to flood into the neuron. This slows down communication between neurons and the nervous system. Unlike benzodiazepines, which increase the frequency of the chloride channel opening, carisoprodol increases the duration of channel opening when GABA is bound. GABA is the main inhibitory neurotransmitter in the nervous system, which causes its depressant effects. Carbamates are fatal in overdose, which is why many have been replaced with benzodiazepines. Symptoms are similar to a barbiturate overdose and typically include difficulty thinking, poor coordination, decreased levels of consciousness, and a decreased effort to breathe (respiratory depression). An overdose is more likely to be fatal when mixed with another depressant that suppresses breathing. Physical and psychological dependence does happen with long-term use of carbamates, particularly carisoprodol. Today, carisoprodol is only used in the short term for muscle pain, particularly back pain. Discontinuation after long-term use could be very intense and even possibly fatal. Withdrawal can resemble barbiturate, alcohol, or benzodiazepine withdrawal, as they all have a similar mechanism of action. 
Discontinuation symptoms include confusion, disorientation, delirium, hallucinations (auditory and visual), insomnia, decreased appetite, anxiety, psychomotor agitation, pressured speech, tremor, tachycardia, and seizures, which could be fatal. Carbamates gained widespread use in the 1950s, alongside barbiturates. While their popularity has gradually waned due to concerns over overdose and dependence potential, newer derivatives of carbamates continue to be developed. Among these is Felbamate, an anticonvulsant that was approved in 1993 and is commonly used today. It is a GABAA positive allosteric modulator and blocks the NR2B subunit of the NMDA receptor. Other carbamates block sodium channels. Phenprobamate was used as an anxiolytic and is still sometimes used in Europe for general anesthesia and for treating muscle cramps and spasticity. Methocarbamol is a popular drug that is commonly known as Robaxin and is over-the-counter in some countries. It is a carbamate with muscle relaxant effects. Tetrabamate is a controversial drug that is a combination of febarbamate, difebarbamate, and phenobarbital. It is marketed in Europe and has been largely, but not completely, discontinued. On 4 April 1997, after over 30 years of use due to reports of hepatitis and acute liver failure, the use of the drug was restricted. Carisoprodol, known as "Soma", is still commonly used today for its muscle relaxant effects. It is also very commonly abused around the world. It is a Schedule IV substance in the United States. Approved: Carisoprodol/Meprobamate/Tybamate (Soma/Miltown, Solacen) (muscle relaxant, anxiolytic, tranquilizer) Difebarbamate (Atrium, Sevrium) (tranquilizer) Emylcamate (Striatran) (anxiolytic and muscle relaxant) Ethinamate (Valamin, Valmid) (sedative-hypnotic) Febarbamate/Phenobamate (Solium, Tymium) (anxiolytic and tranquilizer) Felbamate (Felbatol) (anticonvulsant) Hexapropymate (Merinax) (hypnotic-sedative) Mebutamate (Capla, Dormate) (anxiolytic, sedative, antihypertensive) Phenprobamate (Gamaquil, Isotonil) (muscle relaxant, sedative, anxiolytic, anticonvulsant, anesthesia) Procymate (Equipax) (sedative, anxiolytic) Styramate (Sinaxamol) (muscle relaxant, anticonvulsant) Tetrabamate (febarbamate, difebarbamate, phenobarbital) (Atrium, G Tril, Sevrium) (for anxiety, alcohol withdrawal, muscle tremors, agitation, depression) Although a drug may be approved, that does not necessarily mean it is still being used today. Not approved: Carisbamate (anticonvulsant) Clocental (hypnotic) Cyclarbamate (muscle relaxant and tranquilizer) Lorbamate (muscle relaxant and tranquilizer) Nisobamate (tranquilizer) Pentabamate (tranquilizer) Gabapentinoids Gabapentinoids are a unique and relatively novel class of depressants that selectively bind to the auxiliary α2δ subunit (CACNA2D1 and CACNA2D2) site of certain VDCCs and thereby act as inhibitors of α2δ subunit-containing voltage-gated calcium channels. α2δ is nicknamed the "gabapentin receptor". At physiologic or resting membrane potential, VDCCs are normally closed. They are activated (opened) at depolarized membrane potentials, which is the source of the "voltage-gated" epithet. Gabapentinoids bind to the α1 and α2 sites of the α2δ subunit family. Gabapentin is the prototypical gabapentinoid. The α2δ is found on L-type calcium channels, N-type calcium channels, P/Q-type calcium channels, and R-type calcium channels throughout the central and peripheral nervous systems. 
α2δ is located on presynaptic neurons and affects calcium channel trafficking and kinetics, initiates extracellular signaling cascades and gene expression, and promotes excitatory synaptogenesis through thrombospondin 1. Gabapentinoids are not direct channel blockers; rather, they disrupt the regulatory function of α2δ and its interactions with other proteins. Most of the effects of gabapentinoids are mediated by the high-voltage activated N and P/Q-type calcium channels. P/Q-type calcium channels are mainly found in the cerebellum (Purkinje neurons), which may be responsible for the ataxic adverse effect of gabapentinoids, while N-type calcium channels are located throughout the central and peripheral nervous systems. N-type calcium channels are mainly responsible for the analgesic effects of gabapentinoids. Ziconotide, a non-gabapentinoid ω-conotoxin peptide, binds to the N-type calcium channels and has analgesic effects 1000 times stronger than morphine. Gabapentinoids are selective for the α2δ site but non-selective when they bind to the calcium channel complex. They act on the α2δ site to lower the release of many excitatory and pro-nociceptive neurochemicals, including glutamate, substance P, calcitonin gene-related peptide (CGRP), and more. Gabapentinoids are absorbed from the intestines mainly by the large neutral amino acid transporter 1 (LAT1, SLC7A5) and the excitatory amino acid transporter 3 (EAAT3). They are one of the few drugs that use these amino acid transporters. Gabapentinoids are structurally similar to the branched-chain amino acids L-leucine and L-isoleucine, both of which also bind to the α2δ site. Branched-chain amino acids like l-leucine, l-isoleucine, and l-valine have many functions in the central nervous system. They modify large neutral amino acid (LNAA) transport at the blood–brain barrier and reduce the synthesis of neurotransmitters derived from aromatic amino acids, notably serotonin from tryptophan and catecholamines from tyrosine and phenylalanine. This may be relevant to the pharmacology of gabapentinoids. Gabapentin was designed by researchers at Parke-Davis to be an analogue of the neurotransmitter GABA that could more easily cross the blood–brain barrier and was first described in 1975 by Satzinger and Hartenstein. Gabapentin was first approved for epilepsy, mainly as an add-on treatment for partial seizures. Gabapentinoids are GABA analogues, but they do not bind to the GABA receptors, convert into GABA or another GABA receptor agonist in vivo, or directly modulate GABA transport or metabolism. Phenibut and baclofen, two structurally related compounds, are exceptions, as they mainly act on the GABA B receptor. Gabapentin, but not pregabalin, has been found to activate voltage-gated potassium channels (KCNQ), which might potentiate its depressant qualities. Despite this, gabapentinoids mimic GABA activity by inhibiting neurotransmission. Gabapentinoids prevent delivery of the calcium channels to the cell membrane and disrupt interactions of α2δ with NMDA receptors, AMPA receptors, neurexins, and thrombospondins. Some calcium channel blockers of the dihydropyridine class are used for hypertension to weakly block α2δ. Gabapentinoids have anxiolytic, anticonvulsant, antiallodynic, antinociceptive, and possibly muscle relaxant properties. Pregabalin and gabapentin are used in epilepsy, mainly partial seizures (focal). Gabapentinoids are not effective for generalized seizures. 
They are also used for postherpetic neuralgia, neuropathic pain associated with diabetic neuropathy, fibromyalgia, generalized anxiety disorder, and restless legs syndrome. Pregabalin and gabapentin have many off-label uses, including insomnia, alcohol and opioid withdrawal, smoking cessation, social anxiety disorder, bipolar disorder, attention deficit hyperactivity disorder, chronic pain, hot flashes, tinnitus, migraines, and more. Baclofen is primarily used for the treatment of spastic movement disorders, especially in instances of spinal cord injury, cerebral palsy, and multiple sclerosis. Phenibut is used in Russia, Ukraine, Belarus, and Latvia to treat anxiety and improve sleep, as in the treatment of insomnia. It is also used for various other indications, including the treatment of asthenia, depression, alcoholism, alcohol withdrawal syndrome, post-traumatic stress disorder, stuttering, tics, vestibular disorders, Ménière's disease, dizziness, and the prevention of motion sickness and anxiety before or after surgical procedures or painful diagnostic tests. Phenibut, like other GABA B agonists, is also sometimes used by bodybuilders to increase the human growth hormone. Reuters reported on 25 March 2010 that "Pfizer Inc violated a United States racketeering law by improperly promoting the epilepsy drug Neurontin (gabapentin). Under the Racketeer Influenced and Corrupt Organizations Act, the penalty is automatically tripled, so the finding will cost Pfizer $141 million." The case stems from a claim from Kaiser Foundation Health Plan Inc. that "it was misled into believing Neurontin was effective for off-label treatment of migraines, bipolar disorder and other conditions. Pfizer argued that Kaiser physicians "still recommend the drug for those uses" and that "the insurer's website also still lists Neurontin as a drug for neuropathic pain." In some cases, gabapentinoids are abused and provide similar effects to alcohol, benzodiazepines, and gamma-hydroxybutyric acid (GHB). The FDA placed a black box warning on Neurontin (gabapentin) and Lyrica (pregabalin) for serious breathing problems. Mixing gabapentinoids with opioids, benzodiazepines, barbiturates, GHB, alcohol, or any other depressant is potentially deadly. Common side effects of gabapentinoids include drowsiness, dizziness, weakness, increased appetite, urinary retention, shortness of breath, involuntary eye movements (nystagmus), memory issues, uncontrollable jerking motions, auditory hallucinations, erectile dysfunction, and myoclonic seizures. An overdose of gabapentinoids usually consists of severe drowsiness, severe ataxia, blurred vision, slurred speech, severe uncontrollable jerking motions, and anxiety. Like most anticonvulsants, pregabalin and gabapentin have an increased risk of suicidal thoughts and behaviors. Gabapentinoids, like all calcium channel blockers, are known to cause angioedema. Taking them with an ACE inhibitor can increase the toxic effects of gabapentinoids. They may also enhance the fluid-retaining effect of certain anti-diabetic agents (thiazolidinediones). It is not known if they cause gingival enlargement like other calcium channel blockers. Gabapentinoids are excreted by the kidney, mostly in their original form. Gabapentinoids can build up in the body when someone has renal failure. This usually presents itself as myoclonus and an altered mental state. It is unclear if it is safe to use gabapentinoids during pregnancy, with some studies showing potential harm. 
Physical or physiological dependence does occur during the long-term use of gabapentinoids. Following abrupt or rapid discontinuation of pregabalin and gabapentin, people report withdrawal symptoms like insomnia, headache, nausea, diarrhea, flu-like symptoms, anxiety, depression, pain, hyperhidrosis, seizures, psychomotor agitation, confusion, disorientation, and gastrointestinal complaints. Acute withdrawal from baclofen and phenibut may also cause auditory and visual hallucinations, as well as acute psychosis. Baclofen withdrawal can be more intense if it is administered intrathecally or for long periods of time. If baclofen or phenibut is used for long periods of time, it can resemble intense benzodiazepine, GHB, or alcohol withdrawal. To minimize withdrawal symptoms, baclofen or phenibut should be tapered down slowly. Abrupt withdrawal from phenibut or baclofen could possibly be life-threatening because of its mechanism of action. Abrupt withdrawal can cause rebound seizures and severe agitation. Approved: Gabapentin (Neurontin) Gabapentin enacarbil (Horizant, Regnite) Gabapentin Extended-Release (Gralise) Pregabalin (Lyrica) Phenibut (Anvifen, Fenibut, Noofen) Baclofen (Lioresal) Mirogabalin (Tarlige) (Japan only) Not approved: Imagabalin Tolibut 4-Flurophenibut HSK16149 Trans-4 and cis-4-[18F] fluorogabapentin (α2δ PET Imaging) 4-Methylpregabalin PD-217,014 Atagabalin Arbaclofen Saclofen Endogenous (not gabapentinoids), endogenous BCAA amino acids that bind to α2δ): Calcium Isoleucine Leucine Valine Aspartate Other α2δ ligands: Phenylalanine NP-118809 Gababutin Ziconotide (approved for pain) Ethanol Dextrothyroxine (agonist of α2δ instead of inhibiting it) Ethioninie Suloctidil Terodiline Bepridil Gamma-hydroxybutyric acid Gamma-hydroxybutyric acid, or "GHB", is a GABA analogue that is a naturally occurring neurotransmitter and depressant drug. It is also naturally found in small amounts in some alcoholic beverages alongside ethanol. GHB is the prototypical substance among a couple of GHB receptor modulators. GHB has been used as a general anesthetic and as a treatment for cataplexy, narcolepsy, and alcoholism. The sodium salt of GHB, sodium oxybate, is commonly used today for narcolepsy, sudden muscle weakness, and excessive daytime sleepiness. It is sold under the brand name Xyrem. As a depressant, GHB would worsen narcolepsy and muscle weakness. But in low doses, GHB mainly affects the GHB receptor, an excitatory receptor that releases dopamine and glutamate, giving GHB stimulant effects, the opposite of a depressant. But in large doses, GHB activates the GABAB receptor, an inhibitory receptor in the central nervous system, which overpowers the excitatory effects, thus causing central nervous system depression. Some antipsychotics are agonists of the GHB receptor. GHB can usually be found in either sodium, potassium, magnesium, or calcium salts. Xywav is a medication that is a mixture of all GHB salts and is used to treat the same conditions as Xyrem. Both Xywav and Xyrem are Schedule III and have a black box warning for central nervous system depressant effects (hypoventilation and bradycardia) and for their very high potential for abuse. Overdose on GHB is fatal with or without mixing other CNS depressants. Death from a GHB overdose is usually caused by respiratory depression, seizures, or coma. GHB is used illegally as an intoxicant, an aphrodisiac, and an athletic-performance enhancer. 
It is a popular club drug in some parts of the world due to its powerful aphrodisiac and euphoric effects. Similarly to phenibut and baclofen, it is used by bodybuilders to increase the human growth hormone due to GABAB activation. It has also been reportedly used as a date-rape drug. This caused it to be a Schedule I substance in the United States, Canada, and other countries. Xyrem, which is GHB in its sodium form, is Schedule III in the United States, Canada, and other countries. In low doses, GHB mainly binds to the GHB receptor and weakly binds to the GABAB receptor. The GHB receptor is an excitatory G protein-coupled receptor (GPCR). Its endogenous ligand is GHB, since GHB is also a neurotransmitter. It is also a transporter for vitamin B2. The existence of a specific GHB receptor was predicted by observing the action of GHB and related compounds that primarily act on the GABAB receptor but also exhibit a range of effects that were found not to be produced by GABAB activity and so were suspected of being produced by a novel and, at the time, unidentified receptor target. At higher doses, seizures are very common. This is thought to be mediated through an increased Na+/K+ current and the increased release of dopamine and glutamate. GHB can also cause absence seizures; the mechanism is currently not known but it is believed to be due to interactions with the GABAB receptor. It is being investigated if endogenous GHB is responsible for non-convulsive seizures in humans. GHB withdrawal is very intense. Physical dependence develops quickly. It is also highly psychologically addictive. It shares some similarities with the withdrawal of gabapentinoids phenibut and baclofen due to the activation of the GABAB receptor. It features a typical depressant withdrawal syndrome that mimics alcohol withdrawal. Symptoms include delirium, tremor, anxiety, tachycardia, insomnia, hypertension, confusion, sweating, severe agitation which may require restraint, auditory and visual hallucinations, and possibly death from tonic-clonic seizures. Baclofen and phenibut are very effective for withdrawal and are preferred by patients over benzodiazepines for treatment of withdrawal. GHB receptor modulators: GHB receptor agonists: Gamma-hydroxybutyric acid (GHB, Xyrem) - Calcium oxybate, magnesium oxybate, sodium oxybate (Xyrem), potassium oxybate (Xywav is a mixture of all these salts.) 
3-hydroxycyclopent-1-enecarboxylic acid (HOCPCA); γ-hydroxycrotonic acid, trans-4-hydroxycrotonic acid (GHC, T-HCA); amisulpride, levosulpiride, sulpiride, and sultopride (antipsychotic GHB receptor ligands); 3-chloropropanoic acid (UMB66); 3-phenylpropyloxybutyric acid (UMB72); 4-benzyloxybutyric acid (UMB73); 4-hydroxy-4-naphthylbutanoic acid (UMB86); 5-hydroxypentanoate (UMB58); gamma-(4-methoxybenzyl)-gamma-hydroxybutyric acid (NCS-435); 4-(4-chlorophenyl)-4-hydroxy-2-butanoic acid (NCS-356); 3-hydroxyphenylacetic acid (3-HPA); catechin and monastrol (positive allosteric modulators). Prodrugs that metabolize into GHB: γ-hydroxyvaleric acid (GHV) and gamma-valerolactone (γ-valerolactone, GVL; a prodrug to GHV); 1,4-butanediol (1,4-BD); 1,4-butanediol acetate (DABD); ethyl acetoxy butanoate (EAB); aceburic acid (GHB acetate); gamma-butyrolactone (γ-butyrolactone, GBL); 2-furanone, γ-crotonolactone (GCL); gamma-hydroxybutyraldehyde (γ-hydroxybutyraldehyde, GHBAL). GHB receptor antagonists: NCS-382; gabazine. Some GHB receptor modulators bind only to the GHB receptor, while others bind to both the GHB and GABAB receptors. Nonbenzodiazepines Nonbenzodiazepines, sometimes referred to as Z-drugs, are a class of hypnotic depressants that are mainly used to treat insomnia and sometimes anxiety. They are pharmacologically similar to benzodiazepines but structurally unrelated to them. They positively modulate the benzodiazepine site of the GABAA receptor, the chief inhibitory receptor of the central nervous system, just as benzodiazepines do. Nonbenzodiazepines bind to the benzodiazepine site on the GABAA receptor to keep the chloride channel open. This causes chloride from the extracellular space to flood into the neuron. Since chloride has a negative charge, it causes the neuron to rest and cease firing. This results in a relaxing and depressant effect on the central nervous system. Common nonbenzodiazepines like zolpidem and zopiclone are highly effective for insomnia but carry many risks and side effects. Sleeping pills, including zopiclone, have been associated with an increased risk of death. Adverse reactions are as follows: "taste disturbance (some report a metallic-like taste); less commonly, nausea, vomiting, dizziness, drowsiness, dry mouth, headache; rarely, amnesia, confusion, depression, hallucinations, nightmares; very rarely, lightheadedness, incoordination, paradoxical effects, and sleep-walking are also reported." Some users of nonbenzodiazepines have sleepwalked and committed murders or have been involved in motor vehicle accidents. Unlike benzodiazepines, nonbenzodiazepines carry a risk of hallucinations and sleepwalking. Like benzodiazepines, they can cause anterograde amnesia. Nonbenzodiazepines should not be discontinued abruptly if taken for more than a few weeks due to the risk of rebound withdrawal effects and acute withdrawal reactions, which may resemble those seen during benzodiazepine withdrawal. Treatment usually entails gradually reducing the dosage over a period of weeks or several months, depending on the individual, dosage, and length of time the drug has been taken. If this approach fails, a crossover to an equivalent dose of a long-acting benzodiazepine (such as chlordiazepoxide or, preferably, diazepam) can be tried, followed by a gradual reduction in dosage. 
In extreme cases and, in particular, where severe addiction and/or abuse are manifested, inpatient detoxification may be required, with flumazenil as a possible detoxification tool. Opioids/opiates Opioids are substances that act on opioid receptors to reduce pain. Medically, they are primarily used for pain relief, including anesthesia. Opioids also cause euphoria and are highly abused. Opioids and opiates are not the same. Opiates refer to natural opioids such as morphine and codeine. Opioids refer to all natural, semisynthetic, and synthetic opioids, like heroin and oxycodone. Contrary to popular misconception, opioids are not depressants in the classical sense. They do produce central nervous system depression, but they also excite certain areas of the central nervous system. To remain true to the term "depressant", opioids cannot be classified as such. For opioid agonists and opium derivatives, these are classified differently. These drugs are more correctly identified as "analgesic" or "narcotic". However, they do have depressant actions nonetheless. There are three principal classes of opioid receptors: μ, κ, δ (mu, kappa, and delta), although up to seventeen have been reported, and include the ε, ι, λ, and ζ (epsilon, iota, lambda, and zeta) receptors. Conversely, σ (sigma) receptors are no longer considered to be opioid receptors because their activation is not reversed by the opioid inverse-agonist naloxone. The nociception opioid peptide receptor (NOP) (ORL1) is an opioid receptor that is involved in pain responses, anxiety, movement, reward, hunger, memory, and much more. It plays a major role in the development of tolerance to μ-opioid agonists. When "pain" occurs, a signal gets sent from the site of possible injury. This signal goes up the spinal cord into the brain, where it is perceived as a negative emotion known as nociception or "hurt". In the central nervous system, the spine is connected to the brain by a structure called the brain stem. The brain stem is the first part of the brain that develops in a mammal out of the neural crest. It is also the oldest part of the brain and controls many automatic functions such as consciousness, breathing, heart rate, digestion, and many more. Opioid receptors are specialized pain-blocking receptors. They bind a wide range of hormones, peptides, and much more. Although they are found everywhere in the central nervous system, they are highly concentrated in the brain stem. Depending on the receptor, activation of them has the ability to stop pain from making its way to the brain and being perceived as pain. Hence, opioids do not actually "stop" pain; they simply stop you from knowing you are in pain. Pain and the ability to modify it based on an organism's environment is an evolutionary advantage, and it has been shown that it can help an organism escape and survive certain situations where they may otherwise be immobilized due to pain and injury. The midbrain nuclei of the brain stem, with structures like the periaqueductal gray, reticular formation, and rostromedial tegmental nucleus, are responsible for the majority of the physical and psychological effects of endogenous and exogenous opioids. The μ-opioid receptor is responsible for the analgesic, euphoric, and adverse effects of opioids. The μ-opioid receptor is a G protein-coupled receptor. When the μ-opioid receptor is activated, it causes pain relief, euphoria, constipation, constricted pupils, itching, and nausea. 
The μ-opioid is located in the gastrointestinal tract, which controls peristalsis. This causes constipation, which can be extremely problematic and distressing. Activation of this receptor also causes relaxation of voluntary and involuntary muscles, which can cause side effects like trouble urinating and swallowing. The μ-opioid receptor can also reduce androgens, thus decreasing libido and sexual function. The receptor is also known to cause "musical anhedonia". The receptor plays a critical role in feeding. The palatability of food is determined by opioid receptor-related processes in the nucleus accumbens and ventral pallidum. The opioid processes involve mu opioid receptors and are present in the rostromedial shell of the nucleus accumbens on its spiny neurons. This area has been called the "opioid eating site". The μ-opioid receptor has many endogenous ligands, including endorphin. The κ-opioid receptor (KOR) is a G protein-coupled receptor located in the central nervous system. KOR is also a G protein-coupled receptor. Humans and some other primates have a higher density of kappa receptors than most other animals. KOR is responsible for nociception, consciousness, motor control, and mood. Dysregulation of this receptor system has been implicated in alcohol and drug addiction. The endogenous ligand for KOR is dynorphin. The activation of KOR usually causes dysphoria, hence the name dynorphin. The intoxicating plant Salvia divinorum contains salvinorin A, an alkaloid that is a potent and selective κ-opioid receptor agonist. This causes powerful hallucinations. Antagonizing the κ-opioid receptor may be able to treat depression, anxiety, stress, addiction, and alcoholism. The third receptor is the δ-opioid receptor (DOR). The delta receptor is the least studied of the three main opioid receptors. It is a G protein-coupled receptor, and its endogenous ligand is deltorphin. The activation of DOR may have antidepressant effects. δ-opioid agonists can produce respiratory depression at very high doses; at lower doses, they have the opposite effect. High doses of a δ-opioid agonist can cause seizures, although not all delta agonists produce this effect. Activation of the delta receptor is usually stimulating instead of sedating like most opioids. The nociception opioid peptide receptor (NOP) is involved in the regulation of numerous brain activities, particularly instinctive emotional behaviors and pain. NOP is a G protein-coupled receptor. The nociception receptor controls a wide range of biological functions, including nociception, food intake, memory processes, cardiovascular and renal functions, spontaneous locomotor activity, gastrointestinal motility, anxiety, and the control of neurotransmitter release at peripheral and central sites. An opioid overdose is fatal. A person overdosing on opioids or opiates is presented with respiratory depression, a lethal condition that can cause hypoxia from slow and shallow breathing. Mixing opioids with another depressant, such as benzodiazepines or alcohol, increases the chance of an overdose and respiratory depression. Opioid overdose causes a decreased level of consciousness, pinpoint pupils, and respiratory depression. Other symptoms include seizures and muscle spasms. Opioids activate μ-opioid receptors in specific regions of the central nervous system associated with respiratory regulation. They activate μ-opioid receptors in the medulla and pons. They are located in the brain stem, which connects to the spine. 
This area has a high density of μ-opioid receptors as they block pain going up from the spine into the brain. These areas are the oldest and most primitive parts of the brain. They control automatic functions such as breathing and digestion. Opioids stop this process and cause respiratory depression and constipation. The brain stem no longer detects carbon dioxide in the blood, so it does not initiate the inhalation reflex, usually resulting in hypoxia. Some overdose victims, however, die from cardiovascular failure or asphyxiation from choking on their vomit. Naloxone is a μ-opioid receptor antagonist, meaning instead of activating the μ-opioid receptor, it disrupts the functioning of the receptor. Since naloxone is powerful and highly selective for the μ-opioid receptor, it can knock powerful opioids like fentanyl off the receptor and block another ligand from binding to the receptor, thus stopping an overdose. A person dependent on opioids may go into precipitated withdrawal when naloxone is used. Since naloxone blocks any endogenous or exogenous opioids from binding to the μ-opioid receptor. This may cause a person to immediately go into withdrawal after naloxone is used. This can cause withdrawal symptoms like cold sweats and diarrhea. Opioids activate μ-opioid receptors in the rostromedial tegmental nucleus (RMTg). The rostromedial tegmental nucleus is a GABAergic nucleus that functions as a "master brake" for the midbrain dopamine system. The RMTg possesses robust functional and structural links to the dopamine pathways. Opioids decrease the release of GABA, thus disinhibiting the GABAergic brake on dopamine networks. GABA is an inhibitory neurotransmitter, meaning it either blocks or decreases the potential of neuron firing. This causes large amounts of dopamine to be released, as it is no longer blocked by GABA. Disinhibition of GABA may be responsible for causing seizures, an uncommon adverse effect of opioids. GABAergic disinhibition is also why opioids are not considered true depressants. This excitement of dopaminergic pathways causes the euphoria of opioids. This causes major positive reinforcing effects in the brain, instructing it to do it again. The RMTg is also responsible for the development of tolerance and addiction. Psychostimulants also excite this pathway. Fentanyl is very commonly cut into other substances sold on the street. Fentanyl is used to increase the potency of substances, thus making the user spend more money on the laced substance. Codeine is a weaker natural opiate that is usually used for bronchitis, diarrhea, and post-operative pain. It is very easy to overdose on these substances, especially if the user has no tolerance. Natural opiates (derived from papaver somniferum and opium) Morphine (MS Contin) Codeine (Tylenol No. 3) Papaverine (Pavabid) Noscapine (Narcotine) Thebaine Oripavine Narceine Semi-synthetic morphinan opioids (derived from thebaine): Oxycodone (OxyContin) Heroin (Diamorphine) Hydrocodone (Vicodin) Oxymorphone (Opana) Hydromorphone (Dilaudid) Buprenorphine (Suboxone) Naloxone (Narcan) Fully synthetic opioids: Fentanyl (Duragesic) Tramadol (Ultram) Methadone (Dolophine) Pethidine (Demerol) Ketobemidone (Ketogan) Pentazocine (Talwin) Carfentanil (Wildnil) Loperamide (Imodium) Dextropropoxyphene (Darvocet) Tapentadol (Nucynta) Dextropropoxyphene (Darvocet) Others: Mitragyna speciosa (Kratom) (indole alkaloid) Piperidinediones Piperidinediones are a class of depressants that are not used anymore. 
Some piperidinediones are used for other purposes, such as treating breast cancer. The piperidinedione class is structurally very similar to the barbiturates. Some piperidinediones include glutethimide, methyprylon, pyrithyldione, glutarimide, and aminoglutethimide. The first three (glutethimide, methyprylon, and pyrithyldione) are central nervous system depressants. The piperidinedione depressants, specifically glutethimide, are positive modulators of the GABAA receptor's anion channel. The drug increases inhibitory GABAergic tone and causes neuro-inhibition of the cortical and limbic systems, observed clinically as a sedative-hypnotic effect. Glutethimide is also a potent inducer of the CYP2D6 enzyme in the liver. This enzyme is responsible for metabolizing many drugs, from beta blockers to antidepressants to opioids and opiates. Because of its effects on opioid metabolism, it was widely abused in combination with opioids such as codeine. Codeine must be metabolized to morphine in the liver to have its psychoactive and analgesic effects. Mixing codeine with glutethimide allowed more codeine to be converted into morphine in the body, thus increasing its effect. These combinations were known as "hits", "cibas and codeine", and "dors and 4s". Glutethimide was believed to be safer than barbiturates, but many people died from the drug. Demand was high in the United States at one point. Production of glutethimide was discontinued in the US in 1993 and in several eastern European countries, most notably Hungary, in 2006. Glutethimide withdrawal is intense and resembles barbiturate withdrawal. It features the hallucinations and delirium typical of depressant withdrawal. In the 1970s, there were reports of neonatal withdrawal from glutethimide. Infants born to mothers addicted to glutethimide appeared well initially, then developed symptoms about 5 days later, including overactivity, restlessness, tremors, hyperreflexia, hypotonia, vasomotor instability, incessant crying, and general irritability. Glutethimide withdrawal featured severe agitation, tremors, and seizures, which could be fatal. Overdose causes stupor, coma, and/or respiratory depression.
Methyprylon (Dimerin, Methyprylone, Noctan, Noludar)
Pyrithyldione (Presidon, Pyridion, Pyridione, Pyrithyldion, Pyrithyldione)
Piperidione (Ascron, Dihyprylon, Dihyprylone, Sedulon, Tusseval) (withdrawn before approval)
Glutethimide (Doriden)
Quinazolinone
Quinazolinones are a class of depressants that are rarely used anymore. Quinazolinones have powerful sedative, hypnotic, and anxiolytic effects. The quinazolinone core structure is very similar to that of some antibiotics. The main mechanism of action of quinazolinones is binding to the GABAA receptor. They do not bind to the ethanol, barbiturate, neurosteroid, or benzodiazepine site; instead, they bind to a site at the interface between the GABRB2 (β2) and GABRA1 (α1) subunits of the GABAA receptor. The anesthetic etomidate and the anticonvulsant loreclezole may also bind to this site. Overdosing on a quinazolinone sometimes causes effects that are the opposite of the sedation these drugs normally produce. An overdose can involve hyperreflexia, vomiting, kidney failure, delirium, hypertonia, coma, myoclonic twitches, somnolence, euphoria, muscular hyperactivity, agitated delirium, tachycardia, and tonic-clonic seizures. In 1982, 2,764 people visited US emergency rooms after overdosing on quinazolinones, specifically methaqualone. Mixing quinazolinones with another depressant is possibly fatal.
Death from a quinazolinone overdose usually results from cardiac or respiratory arrest. An overdose resembles a barbiturate or carbamate overdose. Quinazolinone withdrawal occurs when someone who has become dependent on a quinazolinone ceases usage. Quinazolinone withdrawal resembles ethanol, barbiturate, benzodiazepine, and carbamate withdrawal. It usually consists of restlessness, nausea and vomiting, decreased appetite, tachycardia, insomnia, tremor, hallucinations, delirium, confusion, and seizures; potentially fatal features include an EEG photoparoxysmal response, myoclonic twitches, fever, muscle spasms, and irritability. Methaqualone hydrochloride and quinazolinone anxiolytics and hypnotics are referred to as "quaaludes", "ludes", and "disco biscuits". Methaqualone was very commonly abused in the Western world during the 1960s and 1970s. Methaqualone was mainly prescribed for insomnia, as it was thought to be safer than barbiturates and carbamates. Methaqualone became widely abused, including by celebrities, after its introduction in 1965. Methaqualone was first synthesized in India in 1951 by Indra Kishore Kacker and Syed Husain Zaheer, who were searching for new antimalarial medications. The drug name "Quaalude" (methaqualone) is a portmanteau, combining the words "quiet interlude". Methaqualone was discontinued in the United States in 1985, mainly due to its psychological addictiveness, widespread abuse, and illegal recreational use. Nonbenzodiazepines and benzodiazepines are now used to treat insomnia instead. Methaqualone is now a Schedule I substance. Some quinazolinone analogues are still sold online; they carry a risk of seizures. Large doses of methaqualone can cause euphoria, disinhibition, increased sexuality and sociability, muscle relaxation, anxiolysis, and sedation. Today, methaqualone is widely abused in South Africa. Many celebrities have used quinazolinones, most notably methaqualone. Bill Cosby admitted to casual sex involving the recreational use of methaqualone. The 18-year-old actress Anissa Jones died from an overdose of cocaine, PCP, methaqualone, and the barbiturate Seconal. Billy Murcia, a drummer for the rock band New York Dolls, died at 21 when he drowned in a bathtub while overdosing on heroin and methaqualone. Cloroqualone was a quinazolinone that bound to the GABAA and sigma-1 receptors. It had useful cough suppressant effects and weaker sedative effects than methaqualone, but was ultimately withdrawn due to its potential for abuse and overdose. Diproqualone is a quinazolinone that is still used today. Diproqualone has sedative, anxiolytic, antihistamine, and analgesic properties, resulting from its agonist activity at the β subtype of the GABAA receptor, antagonist activity at all histamine receptors, inhibition of the cyclooxygenase-1 enzyme, and possibly its agonist activity at both the sigma-1 receptor and sigma-2 receptor. Diproqualone is used primarily for the treatment of inflammatory pain associated with osteoarthritis and rheumatoid arthritis; it is used more rarely for treating insomnia, anxiety, and neuralgia. Diproqualone is the only analogue of methaqualone that is still in widespread clinical use, due to its useful anti-inflammatory and analgesic effects along with the sedative and anxiolytic actions common to other drugs of this class.
There are still some concerns about the potential of diproqualone for abuse and overdose; it is sold not as a pure drug but as the camphosulfonate salt in combination mixtures with other medicines such as ethenzamide. Etaqualone is a quinazolinone-class depressant. It has sedative, hypnotic, muscle relaxant, and central nervous system depressant properties. It was highly abused and had a high risk of overdose. Users would snort or smoke etaqualone as the free base or the hydrochloride salt. Methylmethaqualone is an analogue of methaqualone with similar hypnotic and sedative effects. Methylmethaqualone differs from methaqualone by 4-methylation on the phenyl ring. It produces convulsions at only slightly above the effective sedative dose. It appears to have been sold on the black market in Germany as a designer-drug analogue of methaqualone. Nitromethaqualone is a quinazolinone depressant with roughly ten times the hypnotic and sedative potency of methaqualone.
Quinazolinones:
Alfoqualone (Arofuto)
Cloroqualone
Diproqualone
Etaqualone (Aolan, Athinazone, Ethinazone)
Mebroqualone (MBQ)
Mecloqualone (Nubarene, Casfen)
Methaqualone (Quaalude, Sopor, Mandrax)
Methylmethaqualone
Nitromethaqualone
SL-164 (Dicloqualone, DCQ)
Miscellaneous
Alpha and beta blockers (carvedilol, propranolol, atenolol, etc.)
Anticholinergics (atropine, hyoscyamine, scopolamine, etc.)
Anticonvulsants (topiramate, carbamazepine, lamotrigine, etc.)
Antihistamines (diphenhydramine, doxylamine, promethazine, etc.)
Antipsychotics (haloperidol, chlorpromazine, clozapine, etc.)
Hypnotics (zolpidem, zopiclone, chloral hydrate, eszopiclone, etc.)
Muscle relaxants (baclofen, phenibut, carisoprodol, cyclobenzaprine, etc.)
Sedatives (gamma-hydroxybutyrate, etc.)
Combining multiple depressants
Combining multiple depressants can be very dangerous because their combined depressant effects on the central nervous system have been proposed to increase exponentially rather than linearly. This characteristic makes depressants a common choice for deliberate overdoses in the case of suicide. The use of alcohol or benzodiazepines along with the usual dose of heroin is often the cause of overdose deaths in opiate addicts.
Biology and health sciences
General concepts_2
Health
472640
https://en.wikipedia.org/wiki/Flying%20fish
Flying fish
The Exocoetidae are a family of marine ray-finned fish in the order Beloniformes, known colloquially as flying fish or flying cod. About 64 species are grouped in seven genera. While they do not "fly" in the same way a bird does, flying fish can make powerful, self-propelled leaps out of the water, where their long, wing-like fins enable gliding for considerable distances above the water's surface. The main reason for this behavior is thought to be escape from underwater predators, which include swordfish, mackerel, tuna, and marlin, among others, though their periods of flight expose them to attack by avian predators such as frigate birds. Barbados is known as "the land of the flying fish", and the fish is one of the national symbols of the country. The Exocet missile is named after them, as variants are launched from underwater and take a low trajectory, skimming the surface, before striking their targets.
Etymology
The term Exocoetidae is both the scientific name and the general name in Latin for a flying fish. The suffix -idae, common for indicating a family, follows the root of the Latin word , a transliteration of the Ancient Greek name . This means literally 'sleeping outside', from , 'outside', and , 'bed', 'resting place', with the verb root , 'to lie down' (as in reclining, not telling an untruth), so named because flying fish were believed to leave the water to sleep ashore, or because flying fish sometimes fly and strand themselves in boats.
Taxonomy
The Exocoetidae are divided into four subfamilies and seven genera:
Subfamily Exocoetinae
Genus Exocoetus
Subfamily Fodiatorinae
Genus Fodiator
Subfamily Parexocoetinae
Genus Parexocoetus
Subfamily Cypsellurinae
Genus Cheilopogon
Genus Cypselurus
Genus Hirundichthys
Genus Prognichthys
Distribution and description
Flying fish live in all of the oceans, particularly in tropical and warm subtropical waters. They are commonly found in the epipelagic zone, the top layer of the ocean to a depth of about . Numerous morphological features give flying fish the ability to leap above the surface of the ocean. One such feature is fully broadened neural arches, which act as insertion sites for connective tissues and ligaments in a fish's skeleton. Fully broadened neural arches act as more stable and sturdier sites for these connections, creating a strong link between the vertebral column and cranium. A steady glide improves flight duration and keeps the fish above the water; an unsteady glide shortens flight duration, though not by much compared with a steady glide, and the outcome also varies with the fish's energy expenditure. The broadened neural arches ultimately allow a rigid and sturdy vertebral column (body) that is beneficial in flight. Having a rigid body during glided flight gives the flying fish aerodynamic advantages, increasing its speed and improving its aim. Furthermore, flying fish have developed vertebral columns and ossified caudal complexes. These features provide the majority of strength to the flying fish, allowing them to physically lift their bodies out of water and glide remarkable distances. These additions also reduce the flexibility of the flying fish, allowing them to perform powerful leaps without buckling midair. At the end of a glide, they fold their pectoral fins to re-enter the sea, or drop their tails into the water to push against it and lift off for another glide, possibly changing direction. The curved profile of the "wing" is comparable to the aerodynamic shape of a bird wing.
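The glide distances described above can be sanity-checked with a simple, idealized energy argument: in an unpowered glide the launch kinetic energy is dissipated by drag, giving a range of roughly the lift-to-drag ratio times v0²/(2g). The sketch below is only illustrative; the launch speed and lift-to-drag ratio are assumed round numbers, not measured values from the literature.

```python
import math

def glide_range(v0_m_per_s: float, lift_to_drag: float, g: float = 9.81) -> float:
    """Idealized glide range: launch kinetic energy divided by drag force.

    For a shallow, unpowered glide with lift ~ weight and drag ~ weight / (L/D),
    range ~ (L/D) * v0^2 / (2 g).  This ignores ground effect, updrafts, and the
    tail-dipping "taxiing" that real flying fish use to extend their flights.
    """
    return lift_to_drag * v0_m_per_s ** 2 / (2.0 * g)

# Assumed illustrative values (not measurements): ~18 m/s launch speed, L/D ~ 4.
print(f"Estimated glide range: {glide_range(18.0, 4.0):.0f} m")  # roughly 66 m
```

With these assumed numbers the estimate lands in the same tens-of-metres range as the glides reported for flying fish, which is the point of the exercise rather than a precise prediction.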
The fish is able to increase its time in the air by flying straight into or at an angle to the direction of updrafts created by a combination of air and ocean currents. Species of the genus Exocoetus have one pair of fins and streamlined bodies to optimize for speed, while Cypselurus spp. have flattened bodies and two pairs of fins, which maximize their time in the air. From 1900 to the 1930s, flying fish were studied as possible models for the development of airplanes. The Exocoetidae feed mainly on plankton. Predators include dolphins, tuna, marlin, birds, squid, and porpoises.
Flight measurements
In May 2008, a Japanese television crew (NHK) filmed a flying fish (dubbed "Icarfish") off the coast of Yakushima Island, Japan. The fish spent 45 seconds in flight. The previous record was 42 seconds. The flights of flying fish are typically around , though they can use updrafts at the leading edge of waves to cover distances up to . They can travel at speeds of more than . Maximum altitude is above the surface of the sea. Flying fish often accidentally land on the decks of smaller vessels.
Fishery and cuisine
Flying fish are commercially fished in Japan, Vietnam, and China by gillnetting, and in Indonesia and India by dipnetting. In Japanese cuisine, the fish is often preserved by drying for use as fish stock for dashi broth. The roe of Cheilopogon agoo, or Japanese flying fish, is used to make some types of sushi, and is known as . It is also a staple in the diet of the Tao people of Orchid Island, Taiwan. Flying fish is part of the national dish of Barbados, cou-cou and flying fish. The taste is close to that of a sardine. Flying fish roe is known as "cau-cau" in southern Peru, and is used to make several local dishes. In the Solomon Islands, the fish are caught while they are flying, using nets held from outrigger canoes. They are attracted to the light of torches, so fishing is done only when no moonlight is available.
Importance
Barbados
Barbados is known as "the land of the flying fish", and the fish is one of the national symbols of the country. Once abundant, the fish migrated between the warm, coral-filled Atlantic Ocean surrounding the island of Barbados and the plankton-rich outflows of the Orinoco River in Venezuela. Just after the completion of the Bridgetown Harbor / Deep Water Harbor in Bridgetown, Barbados saw an increase in ship visits, linking the island to the world, and the overall health of the coral reefs surrounding Barbados suffered due to ship-based pollution. Additionally, Barbadian overfishing pushed the fish closer to the Orinoco delta, and they no longer return to Barbados in large numbers. Today, the flying fish migrate only as far north as Tobago, around southwest of Barbados. Despite the change, flying fish remain a coveted delicacy. Many aspects of Barbadian culture center around the flying fish; it is depicted on coins, as sculptures in fountains, in artwork, and as part of the official logo of the Barbados Tourism Authority. Additionally, the Barbadian coat of arms features a pelican and a dolphinfish on either side of the shield, but the dolphinfish resembles a flying fish. Furthermore, artistic renditions and holograms of the flying fish are present within the Barbadian passport.
Maritime disputes
Flying fish have also been gaining in popularity on other islands, fueling several maritime disputes.
In 2006, a tribunal constituted under the United Nations Convention on the Law of the Sea fixed the maritime boundaries between Barbados and Trinidad and Tobago, settling a flying fish dispute that had gradually raised tensions between the neighbours. The ruling stated that both countries must preserve stocks for the future. Barbadian fishers still follow the flying fish southward.
Indonesia
Makassar fishermen in south Sulawesi have been catching flying fish (torani) in special boats called patorani for centuries, developing their own sailing traditions along the way. These fishermen were able to sail as far as the Kimberley region in the west of Australia, reaching the indigenous people there. The Indosiar television channel also prominently featured a flying fish in its logo during commercial breaks and in its idents from 2000 to 2012.
Prehistoric analogues
The oldest known fossils of flying or gliding fish are those of the extinct family Thoracopteridae, dating back to the Middle Triassic, 235–242 million years ago. However, these are thought to be basal neopterygians and are not related to modern flying fish, the wing-like pectoral fins having evolved convergently in the two lineages. The Cheirothricidae of the Late Cretaceous similarly evolved wing-like pectoral fins that were likely also used for gliding, but are indeterminate eurypterygians; they are possibly Aulopiformes, which would make them most closely related to lizardfish.
Biology and health sciences
Fishes
null
472645
https://en.wikipedia.org/wiki/Video%20camera
Video camera
A video camera is an optical instrument that captures videos, as opposed to a movie camera, which records images on film. Video cameras were initially developed for the television industry but have since become widely used for a variety of other purposes. Video cameras are used primarily in two modes. The first, characteristic of much early broadcasting, is live television, where the camera feeds real time images directly to a screen for immediate observation. A few cameras still serve live television production, but most live connections are for security, military/tactical, and industrial operations where surreptitious or remote viewing is required. In the second mode the images are recorded to a storage device for archiving or further processing; for many years, videotape was the primary format used for this purpose, but was gradually supplanted by optical disc, hard disk, and then flash memory. Recorded video is used in television production, and more often surveillance and monitoring tasks in which unattended recording of a situation is required for later analysis. Types and uses Modern video cameras have numerous designs and use: Professional video cameras, such as those used in television production, may be television studio-based or mobile in the case of an electronic field production (EFP). Such cameras generally offer extremely fine-grained manual control for the camera operator, often to the exclusion of automated operation. They usually use three sensors to separately record red, green and blue. Camcorders combine a camera and a VCR or other recording device in one unit; these are mobile, and were widely used for television production, home movies, electronic news gathering (ENG) (including citizen journalism), and similar applications. Since the transition to digital video cameras, most cameras have in-built recording media and as such are also camcorders. Action cameras often have 360° recording capabilities. Closed-circuit television (CCTV) generally uses pan–tilt–zoom cameras (PTZ), for security, surveillance, and/or monitoring purposes. Such cameras are designed to be small, easily hidden, and able to operate unattended; those used in industrial or scientific settings are often meant for use in environments that are normally inaccessible or uncomfortable for humans, and are therefore hardened for such hostile environments (e.g. radiation, high heat, or toxic chemical exposure). Webcams are video cameras that stream a live video feed to a computer. Many smartphones have built-in video cameras and even high-end smartphones can capture video in 4K resolution. Special camera systems are used for scientific research, e.g. on board a satellite or a space probe, in artificial intelligence and robotics research, and in medical use. Such cameras are often tuned for non-visible radiation for infrared (for night vision and heat sensing) or X-ray (for medical and video astronomy use). History The earliest video cameras were based on the mechanical Nipkow disk and used in experimental broadcasts through the 1910s–1930s. All-electronic designs based on the video camera tube, such as Vladimir Zworykin's Iconoscope and Philo Farnsworth's image dissector, supplanted the Nipkow system by the 1930s. 
These remained in wide use until the 1980s, when cameras based on solid-state image sensors such as the charge-coupled device (CCD) and later CMOS active-pixel sensor (CMOS sensor) eliminated common problems with tube technologies such as image burn-in and streaking and made digital video workflow practical, since the output of the sensor is digital so it does not need conversion from analog. The basis for solid-state image sensors is metal–oxide–semiconductor (MOS) technology, which originates from the invention of the MOSFET (MOS field-effect transistor) at Bell Labs in 1959. This led to the development of semiconductor image sensors, including the CCD and later the CMOS active-pixel sensor. The first semiconductor image sensor was the charge-coupled device, invented at Bell Labs in 1969, based on MOS capacitor technology. The NMOS active-pixel sensor was later invented at Olympus in 1985, which led to the development of the CMOS active-pixel sensor at NASA's Jet Propulsion Laboratory in 1993. Practical digital video cameras were also enabled by advances in video compression, due to the impractically high memory and bandwidth requirements of uncompressed video. The most important compression algorithm in this regard is the discrete cosine transform (DCT), a lossy compression technique that was first proposed in 1972. Practical digital video cameras were enabled by DCT-based video compression standards, including the H.26x and MPEG video coding standards introduced from 1988 onwards. The transition to digital television gave a boost to digital video cameras. By the early 21st century, most video cameras were digital cameras. With the advent of digital video capture, the distinction between professional video cameras and movie cameras has disappeared as the intermittent mechanism has become the same. Nowadays, mid-range cameras exclusively used for television and other work (except movies) are termed professional video cameras. Recording media Early video could not be directly recorded. The first somewhat successful attempt to directly record video was in 1927 with John Logie Baird’s disc based Phonovision. The discs were unplayable with the technology of the time although later advances allowed the video to be recovered in the 1980s. The first experiments with using tape to record a video signal took place in 1951. The first commercially released system was Quadruplex videotape produced by Ampex in 1956. Two years later Ampex introduced a system capable of recording colour video. The first recording systems designed to be mobile (and thus usable outside the studio) were the Portapak systems starting with the Sony DV-2400 in 1967. This was followed in 1981 by the Betacam system where the tape recorder was built into the camera making a camcorder. Lens mounts While some video cameras have built in lenses others use interchangeable lenses connected via a range of mounts. Some like Panavision PV and Arri PL are designed for movie cameras while others like Canon EF and Sony E come from still photography. A further set of mounts like S-mount exist for applications like CCTV.
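The discrete cosine transform mentioned above is the workhorse of the H.26x and MPEG families: frames are split into small blocks, transformed, and the coefficients are quantized so that most of them round to zero. A minimal sketch of that idea follows, using SciPy's DCT routines; the block values and quantization step are made-up illustrative numbers, not taken from any real codec.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block: np.ndarray, q_step: float = 20.0):
    """Toy DCT compression of one 8x8 block: transform, quantize, reconstruct."""
    coeffs = dctn(block, norm="ortho")          # 2-D type-II DCT
    quantized = np.round(coeffs / q_step)       # coarse quantization -> many zeros
    reconstructed = idctn(quantized * q_step, norm="ortho")
    return quantized, reconstructed

# A smooth synthetic 8x8 block standing in for a patch of a video frame.
x, y = np.meshgrid(np.arange(8), np.arange(8))
block = 128 + 40 * np.cos(x / 3) + 20 * np.sin(y / 4)

quantized, recon = compress_block(block)
print("nonzero coefficients kept:", np.count_nonzero(quantized), "of 64")
print("max reconstruction error:", np.abs(block - recon).max())
```

Real codecs add motion compensation, entropy coding, and perceptually tuned quantization matrices on top of this, but the energy-compaction step shown here is the part the DCT itself provides.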
Technology
Media and communication
null
472786
https://en.wikipedia.org/wiki/Quercus%20virginiana
Quercus virginiana
Quercus virginiana, also known as the southern live oak, is an evergreen oak tree endemic to the Southeastern United States. Though many other species are loosely called live oak, the southern live oak is particularly iconic of the Old South. Many very large and old specimens of live oak can be found today in the Deep South region of the United States. Description Although live oaks retain their leaves nearly year-round, they are not true evergreens. Live oaks drop their leaves immediately before new leaves emerge in the spring. Occasionally, senescing leaves may turn yellow or contain brown spots in the winter, leading to the mistaken belief that the tree has oak wilt, whose symptoms typically occur in the summer. A live oak's defoliation may occur sooner in marginal climates or in dry or cold winters. The bark is dark, thick, and furrowed longitudinally. The leaves are stiff and leathery, with the tops shiny dark green and the bottoms pale gray and very tightly tomentose, simple and typically flattish with bony-opaque margins, with a length of and a width of , borne alternately. The male flowers are green hanging catkins with lengths of . The acorns are small, , oblong in shape (ovoid or oblong-ellipsoid), shiny and tan-brown to nearly black, often black at the tips, and borne singly or in clusters. Depending on the growing conditions, live oaks vary from a shrub-size to large and spreading tree-size: typical open-grown trees reach in height, with a limb spread of nearly . Their lower limbs often sweep down towards the ground before curving up again. They can grow at severe angles; Native Americans used to bend saplings over so that they would grow at extreme angles, to serve as trail markers. The southern live oak has a deep taproot that anchors it when young and eventually develops into an extensive and widespread root system. This, along with its low center of gravity and other factors, makes the southern live oak extremely resistant to strong sustained winds, such as those seen in hurricanes. Taxonomy Quercus virginiana is placed in the southern live oaks section of the genus Quercus (section Virentes). A large number of common names are used for this tree, including "Virginia live oak", "bay live oak", "scrub live oak", "plateau oak", "plateau live oak", "escarpment live oak", and (in Spanish) "roble". It is also often just called "live oak" within its native area, but the full name "southern live oak" helps to distinguish it from other live oaks, a general term for any evergreen species of oak. This profusion of common names partly reflects an ongoing controversy about the classification of various live oaks, in particular its near relatives. Some authors recognize as distinct species the forms others consider to be varieties of Quercus virginiana. Notably, the following two taxa, treated as species in the Flora of North America, are treated as varieties of southern live oak by the United States Forest Service: the escarpment live oak, Quercus fusiformis (Q. virginiana var. fusiformis) and the sand live oak, Quercus geminata (Q. virginiana var. geminata). Matters are further complicated by southern live oaks hybridizing with both of the above two species, and also with the dwarf live oak (Q. minima), swamp white oak (Q. bicolor), Durand oak (Q. durandii), overcup oak (Q. lyrata), bur oak (Q. macrocarpa), and post oak (Q. stellata). 
Distribution and habitat Live oak can be found in the wild growing and reproducing on the lower coastal plain of the Gulf of Mexico and lower East Coast of the United States. Its native range begins in southeast Virginia, and then continues south in a narrow band through North Carolina along the coast to the interior South Carolina coast, where its range begins to expand farther inland. The range of live oak continues to expand inland as it moves south, growing across southern Georgia and covering all of Florida south to the northernmost Florida Keys. Live oak grows along the Florida panhandle to Mobile Bay, then westward across the southernmost two tiers of counties in Mississippi. Live oak grows across the southern third of Louisiana, except for some barrier islands and scattered parts of the most southern parishes. Live oak's range continues into Texas and narrows to hug the coast until just past Port Lavaca, Texas. There is a misconception that the southern live oak reaches its northwestern limit in the granite massifs and canyons in Southwestern Oklahoma. However, this actually belongs to the closely related and much more cold-hardy Escarpment Live Oak (Quercus fusiformis), a rare remnant from the last glaciation also found around Norman, Oklahoma. Along the coastal plain of the Gulf of Mexico and south Atlantic United States, live oak is found in both single and mixed species forests, dotting the savannas, and as occasional clumps in the grasslands along the lower coastal plain. Live oak grows in soils ranging from heavy textures (clay loams), to sands with layers of organic materials or fine particles. Live oak can be found dominating some maritime forests, especially where fire periodicity and duration are limited. Live oak is found on higher topographic sites as well as hammocks in marshes and swamps. In general, southern live oak hugs the coastline and is rarely found more than above sea level. Live oaks grow across a wide range of sites with many moisture regimes – ranging from dry to moist. Live oak will survive well on both dry sites and in wet areas, effectively handling short duration flooding if water is moving and drainage is good. Good soil drainage is a key resource component for sustained live oak growth. The usual precipitation range is of water per year, preferably in spring and summer. Soil is usually acidic, ranging between pH of 5.5 and 6.5. A live oak on Tyler Avenue in Annapolis, Maryland or one on Cherrywood Lane in Bowie, Maryland is the northernmost known mature specimen, although a number of saplings can be found growing around nearby Towson. Multiple healthy young examples can be found in the Bolton Hill neighborhood of Baltimore. Ecology One source states that the southern live oak responds "with vigorous growth to plentiful moisture on well-drained soil." They tend to survive fire, because often a fire will not reach their crowns. Even if a tree is burned, its crowns and roots usually survive the fire and sprout vigorously. Furthermore, live oak forests discourage entry of fire from adjacent communities because they provide dense cover that discourages the growth of a flammable understory. They can withstand occasional floods and hurricanes, and are resistant to salt spray and moderate soil salinity. Although they grow best in well-drained sandy soils and loams, they will also grow in clay. 
The branches frequently support other plant species such as rounded clumps of ball moss (Tillandsia recurvata), thick drapings of Spanish moss (Tillandsia usneoides), resurrection fern (Pleopeltis polypodioides), and parasitic mistletoe. Cultivation Southern live oak is cultivated in warmer climates as a specimen tree or for shade in the southern United States (zone 8 and south), Nuevo León and Tamaulipas states in Mexico, and in the warmer parts of the United States, Europe, and Australia. Cultivation is relatively simple, as southern live oak seedlings grow fast with ample soil moisture. Planting depth has little effect on the success of the tree. After a few years live oak needs only occasional supplemental water. Southern live oak is very long lived, and there are many specimens that are more than 400 years old in the deep southern United States. The southern live oak is reliably hardy to USDA Hardiness Zone 8a, which places its northern limit for long-term cultivation inland around Atlanta, Memphis, and Washington, D.C. Uses Live oak wood is hard, heavy, and difficult to work with, but very strong. In the days of wooden ships, live oaks were the preferred source of the framework timbers of the ship, using the natural trunk and branch angles for their strength. The frame of was constructed from southern live oak wood harvested from St. Simons Island, Georgia, and the density of the wood grain allowed it to survive cannon fire, thus earning her the nickname "Old Ironsides". Even today, the U.S. Navy continues to own extensive live oak tracts. The primary uses for southern live oaks today are providing food and shelter for wildlife. Among the animals for which live oak acorns are an important food source are the bobwhite quail, the threatened Florida scrub jay, the wood duck, yellow-bellied sapsucker, wild turkey, black bear, various species of squirrel, and the white-tailed deer. The tree crown is very dense, making it valuable for shade, and the species provides nest sites for many mammal species. Native Americans extracted a cooking oil from the acorns, used all parts of live oak for medicinal purposes, leaves for making rugs, and bark for dyes. The roots of seedlings sometimes form starchy, edible tubers. People in past centuries harvested and fried these tubers for human consumption much as one might use a potato. In 1937, the southern live oak was designated the official state tree of Georgia (U.S. state). Famous specimens The Seven Sisters Oak, estimated to be between 500 and 1,000 years old, is the largest certified southern live oak tree. The Angel Oak on Johns Island, South Carolina, near Charleston is estimated to be 400–500 years old. It has a trunk circumference of , height of and limb spread of . The Big Tree is an estimated 1,000-year-old southern live oak located in Rockport, Texas, the largest live oak in Texas. The Boyington Oak, an approximately 180-year-old southern live oak in Mobile, Alabama, that is known for the folklore surrounding its origin. The Cellon Oak, with a circumference of , a height of , and an average crown spread of , is the largest recorded live oak tree in Florida. It is used as the logo of Alachua County, Florida. The Duffie Oak, a more than 300-year-old southern live oak in Mobile, Alabama, has a trunk circumference of , height of and limb spread of . It is the oldest living landmark in the city. 
The Emancipation Oak, on the campus of Hampton University in Virginia, is listed as one of the "Ten Great Trees of the World" by the National Geographic Society. The Century Tree , planted in 1891 on the campus of Texas A&M University in College Station, Texas, is a campus landmark and has been declared a Famous Tree of Texas by the Texas Forest Service. The Evangeline Oak in St. Martinville, Louisiana The Friendship Oak is a 500-year-old southern live oak located on the Gulf Park campus of the University of Southern Mississippi in Long Beach, Mississippi. The Lover's Oak in Brunswick, Georgia, is estimated to be 900 years old. Lanier's Oak in Brunswick, Georgia, where poet Sidney Lanier was inspired to write "The Marshes of Glynn" The Treaty Oak in Austin, Texas The Treaty Oak in Jacksonville, Florida The Bland Oak in Sydney, Australia, is one of the oldest trees in the city and the largest oak tree in the country, planted in the 1840s by inventor and politician William Bland. The Airlie Oak in Wilmington, NC dates to about 1545. It is the largest Live Oak in North Carolina, with a circumference of over . The Big Oak in Thomasville, Georgia. The Baranoff Oak in Safety Harbor, Florida is reportedly the oldest live oak in Pinellas County, Florida and is estimated to be between 300 and 500 years old. McDonogh Oak in City Park, New Orleans, LA is around 800 years old and several beams have been erected to support the trees’ limbs. The tree was named in honor of John McDonogh who donated City Park's original 100 acres in 1854.
Biology and health sciences
Fagales
Plants
472877
https://en.wikipedia.org/wiki/Prior%20probability
Prior probability
A prior probability distribution of an uncertain quantity, simply called the prior, is its assumed probability distribution before some evidence is taken into account. For example, the prior could be the probability distribution representing the relative proportions of voters who will vote for a particular politician in a future election. The unknown quantity may be a parameter of the model or a latent variable rather than an observable variable. In Bayesian statistics, Bayes' rule prescribes how to update the prior with new information to obtain the posterior probability distribution, which is the conditional distribution of the uncertain quantity given new data. Historically, the choice of priors was often constrained to a conjugate family of a given likelihood function, so that it would result in a tractable posterior of the same family. The widespread availability of Markov chain Monte Carlo methods, however, has made this less of a concern. There are many ways to construct a prior distribution. In some cases, a prior may be determined from past information, such as previous experiments. A prior can also be elicited from the purely subjective assessment of an experienced expert. When no information is available, an uninformative prior may be adopted as justified by the principle of indifference. In modern applications, priors are also often chosen for their mechanical properties, such as regularization and feature selection. The prior distributions of model parameters will often depend on parameters of their own. Uncertainty about these hyperparameters can, in turn, be expressed as hyperprior probability distributions. For example, if one uses a beta distribution to model the distribution of the parameter p of a Bernoulli distribution, then: p is a parameter of the underlying system (Bernoulli distribution), and α and β are parameters of the prior distribution (beta distribution); hence hyperparameters. In principle, priors can be decomposed into many conditional levels of distributions, so-called hierarchical priors. Informative priors An informative prior expresses specific, definite information about a variable. An example is a prior distribution for the temperature at noon tomorrow. A reasonable approach is to make the prior a normal distribution with expected value equal to today's noontime temperature, with variance equal to the day-to-day variance of atmospheric temperature, or a distribution of the temperature for that day of the year. This example has a property in common with many priors, namely, that the posterior from one problem (today's temperature) becomes the prior for another problem (tomorrow's temperature); pre-existing evidence which has already been taken into account is part of the prior and, as more evidence accumulates, the posterior is determined largely by the evidence rather than any original assumption, provided that the original assumption admitted the possibility of what the evidence is suggesting. The terms "prior" and "posterior" are generally relative to a specific datum or observation. Strong prior A strong prior is a preceding assumption, theory, concept or idea upon which, after taking account of new information, a current assumption, theory, concept or idea is founded. A strong prior is a type of informative prior in which the information contained in the prior distribution dominates the information contained in the data being analyzed. 
The Bayesian analysis combines the information contained in the prior with that extracted from the data to produce the posterior distribution which, in the case of a "strong prior", would be little changed from the prior distribution. Weakly informative priors A weakly informative prior expresses partial information about a variable, steering the analysis toward solutions that align with existing knowledge without overly constraining the results and preventing extreme estimates. An example is, when setting the prior distribution for the temperature at noon tomorrow in St. Louis, to use a normal distribution with mean 50 degrees Fahrenheit and standard deviation 40 degrees, which very loosely constrains the temperature to the range (10 degrees, 90 degrees) with a small chance of being below -30 degrees or above 130 degrees. The purpose of a weakly informative prior is for regularization, that is, to keep inferences in a reasonable range. Uninformative priors An uninformative, flat, or diffuse prior expresses vague or general information about a variable. The term "uninformative prior" is somewhat of a misnomer. Such a prior might also be called a not very informative prior, or an objective prior, i.e. one that is not subjectively elicited. Uninformative priors can express "objective" information such as "the variable is positive" or "the variable is less than some limit". The simplest and oldest rule for determining a non-informative prior is the principle of indifference, which assigns equal probabilities to all possibilities. In parameter estimation problems, the use of an uninformative prior typically yields results which are not too different from conventional statistical analysis, as the likelihood function often yields more information than the uninformative prior. Some attempts have been made at finding a priori probabilities, i.e. probability distributions in some sense logically required by the nature of one's state of uncertainty; these are a subject of philosophical controversy, with Bayesians being roughly divided into two schools: "objective Bayesians", who believe such priors exist in many useful situations, and "subjective Bayesians" who believe that in practice priors usually represent subjective judgements of opinion that cannot be rigorously justified (Williamson 2010). Perhaps the strongest arguments for objective Bayesianism were given by Edwin T. Jaynes, based mainly on the consequences of symmetries and on the principle of maximum entropy. As an example of an a priori prior, due to Jaynes (2003), consider a situation in which one knows a ball has been hidden under one of three cups, A, B, or C, but no other information is available about its location. In this case a uniform prior of p(A) = p(B) = p(C) = 1/3 seems intuitively like the only reasonable choice. More formally, we can see that the problem remains the same if we swap around the labels ("A", "B" and "C") of the cups. It would therefore be odd to choose a prior for which a permutation of the labels would cause a change in our predictions about which cup the ball will be found under; the uniform prior is the only one which preserves this invariance. If one accepts this invariance principle then one can see that the uniform prior is the logically correct prior to represent this state of knowledge. 
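The invariance argument can be written out in one line; the notation below is generic and adds nothing beyond the three cups already introduced.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Permutation invariance of the three-cup problem: relabelling the cups
% leaves the state of knowledge unchanged, so the prior must be symmetric.
\[
  p(A) = p(B) = p(C), \qquad p(A) + p(B) + p(C) = 1
  \;\Longrightarrow\; p(A) = p(B) = p(C) = \tfrac{1}{3}.
\]
\end{document}
```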
This prior is "objective" in the sense of being the correct choice to represent a particular state of knowledge, but it is not objective in the sense of being an observer-independent feature of the world: in reality the ball exists under a particular cup, and it only makes sense to speak of probabilities in this situation if there is an observer with limited knowledge about the system. As a more contentious example, Jaynes published an argument based on the invariance of the prior under a change of parameters that suggests that the prior representing complete uncertainty about a probability should be the Haldane prior p−1(1 − p)−1. The example Jaynes gives is of finding a chemical in a lab and asking whether it will dissolve in water in repeated experiments. The Haldane prior gives by far the most weight to and , indicating that the sample will either dissolve every time or never dissolve, with equal probability. However, if one has observed samples of the chemical to dissolve in one experiment and not to dissolve in another experiment then this prior is updated to the uniform distribution on the interval [0, 1]. This is obtained by applying Bayes' theorem to the data set consisting of one observation of dissolving and one of not dissolving, using the above prior. The Haldane prior is an improper prior distribution (meaning that it has an infinite mass). Harold Jeffreys devised a systematic way for designing uninformative priors as e.g., Jeffreys prior p−1/2(1 − p)−1/2 for the Bernoulli random variable. Priors can be constructed which are proportional to the Haar measure if the parameter space X carries a natural group structure which leaves invariant our Bayesian state of knowledge. This can be seen as a generalisation of the invariance principle used to justify the uniform prior over the three cups in the example above. For example, in physics we might expect that an experiment will give the same results regardless of our choice of the origin of a coordinate system. This induces the group structure of the translation group on X, which determines the prior probability as a constant improper prior. Similarly, some measurements are naturally invariant to the choice of an arbitrary scale (e.g., whether centimeters or inches are used, the physical results should be equal). In such a case, the scale group is the natural group structure, and the corresponding prior on X is proportional to 1/x. It sometimes matters whether we use the left-invariant or right-invariant Haar measure. For example, the left and right invariant Haar measures on the affine group are not equal. Berger (1985, p. 413) argues that the right-invariant Haar measure is the correct choice. Another idea, championed by Edwin T. Jaynes, is to use the principle of maximum entropy (MAXENT). The motivation is that the Shannon entropy of a probability distribution measures the amount of information contained in the distribution. The larger the entropy, the less information is provided by the distribution. Thus, by maximizing the entropy over a suitable set of probability distributions on X, one finds the distribution that is least informative in the sense that it contains the least amount of information consistent with the constraints that define the set. For example, the maximum entropy prior on a discrete space, given only that the probability is normalized to 1, is the prior that assigns equal probability to each state. 
And in the continuous case, the maximum entropy prior given that the density is normalized with mean zero and unit variance is the standard normal distribution. The principle of minimum cross-entropy generalizes MAXENT to the case of "updating" an arbitrary prior distribution with suitable constraints in the maximum-entropy sense. A related idea, reference priors, was introduced by José-Miguel Bernardo. Here, the idea is to maximize the expected Kullback–Leibler divergence of the posterior distribution relative to the prior. This maximizes the expected posterior information about X when the prior density is p(x); thus, in some sense, p(x) is the "least informative" prior about X. The reference prior is defined in the asymptotic limit, i.e., one considers the limit of the priors so obtained as the number of data points goes to infinity. In the present case, the KL divergence between the prior and posterior distributions is given by Here, is a sufficient statistic for some parameter . The inner integral is the KL divergence between the posterior and prior distributions and the result is the weighted mean over all values of . Splitting the logarithm into two parts, reversing the order of integrals in the second part and noting that does not depend on yields The inner integral in the second part is the integral over of the joint density . This is the marginal distribution , so we have Now we use the concept of entropy which, in the case of probability distributions, is the negative expected value of the logarithm of the probability mass or density function or Using this in the last equation yields In words, KL is the negative expected value over of the entropy of conditional on plus the marginal (i.e. unconditional) entropy of . In the limiting case where the sample size tends to infinity, the Bernstein-von Mises theorem states that the distribution of conditional on a given observed value of is normal with a variance equal to the reciprocal of the Fisher information at the 'true' value of . The entropy of a normal density function is equal to half the logarithm of where is the variance of the distribution. In this case therefore where is the arbitrarily large sample size (to which Fisher information is proportional) and is the 'true' value. Since this does not depend on it can be taken out of the integral, and as this integral is over a probability space it equals one. Hence we can write the asymptotic form of KL as where is proportional to the (asymptotically large) sample size. We do not know the value of . Indeed, the very idea goes against the philosophy of Bayesian inference in which 'true' values of parameters are replaced by prior and posterior distributions. So we remove by replacing it with and taking the expected value of the normal entropy, which we obtain by multiplying by and integrating over . This allows us to combine the logarithms yielding This is a quasi-KL divergence ("quasi" in the sense that the square root of the Fisher information may be the kernel of an improper distribution). Due to the minus sign, we need to minimise this in order to maximise the KL divergence with which we started. The minimum value of the last equation occurs where the two distributions in the logarithm argument, improper or not, do not diverge. This in turn occurs when the prior distribution is proportional to the square root of the Fisher information of the likelihood function. 
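This is precisely the defining property of the Jeffreys prior; the display below states it in standard notation, with the Bernoulli proportion as a worked example (both are textbook results, not specific to this article's derivation).

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}
% Jeffreys prior: proportional to the square root of the Fisher information.
\[
  p(\theta) \propto \sqrt{I(\theta)}, \qquad
  I(\theta) = -\,\mathbb{E}_{x \mid \theta}\!\left[
      \frac{\partial^2}{\partial \theta^2} \log f(x \mid \theta) \right].
\]
% Worked Bernoulli example: f(x|theta) = theta^x (1-theta)^{1-x} gives
% I(theta) = 1 / (theta (1 - theta)), hence
\[
  p(\theta) \propto \theta^{-1/2} (1-\theta)^{-1/2}.
\]
\end{document}
```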
Hence in the single parameter case, reference priors and Jeffreys priors are identical, even though Jeffreys had a very different rationale. Reference priors are often the objective prior of choice in multivariate problems, since other rules (e.g., Jeffreys' rule) may result in priors with problematic behavior. Objective prior distributions may also be derived from other principles, such as information or coding theory (see e.g. minimum description length) or frequentist statistics (so-called probability matching priors). Such methods are used in Solomonoff's theory of inductive inference. Methods for constructing objective priors have recently been introduced in bioinformatics, especially for inference in cancer systems biology, where sample size is limited and a vast amount of prior knowledge is available. These methods use an information-theoretic criterion, such as the KL divergence or the log-likelihood function, for binary supervised learning problems and mixture model problems. Philosophical problems associated with uninformative priors concern the choice of an appropriate metric, or measurement scale. Suppose we want a prior for the running speed of a runner who is unknown to us. We could specify, say, a normal distribution as the prior for his speed, but alternatively we could specify a normal prior for the time he takes to complete 100 metres, which is proportional to the reciprocal of the first prior. These are very different priors, but it is not clear which is to be preferred. Jaynes' method of transformation groups can answer this question in some situations. Similarly, if asked to estimate an unknown proportion between 0 and 1, we might say that all proportions are equally likely, and use a uniform prior. Alternatively, we might say that all orders of magnitude for the proportion are equally likely, and use the logarithmic prior, which is the uniform prior on the logarithm of the proportion. The Jeffreys prior attempts to solve this problem by computing a prior which expresses the same belief no matter which metric is used. The Jeffreys prior for an unknown proportion p is p^(−1/2)(1 − p)^(−1/2), which differs from Jaynes' recommendation. Priors based on notions of algorithmic probability are used in inductive inference as a basis for induction in very general settings. Practical problems associated with uninformative priors include the requirement that the posterior distribution be proper. The usual uninformative priors on continuous, unbounded variables are improper. This need not be a problem if the posterior distribution is proper. Another issue of importance is that if an uninformative prior is to be used routinely, i.e., with many different data sets, it should have good frequentist properties. Normally a Bayesian would not be concerned with such issues, but it can be important in this situation. For example, one would want any decision rule based on the posterior distribution to be admissible under the adopted loss function. Unfortunately, admissibility is often difficult to check, although some results are known (e.g., Berger and Strawderman 1996). The issue is particularly acute with hierarchical Bayes models; the usual priors (e.g., Jeffreys' prior) may give badly inadmissible decision rules if employed at the higher levels of the hierarchy.
Improper priors
Let events A1, ..., An be mutually exclusive and exhaustive.
If Bayes' theorem is written as then it is clear that the same result would be obtained if all the prior probabilities P(Ai) and P(Aj) were multiplied by a given constant; the same would be true for a continuous random variable. If the summation in the denominator converges, the posterior probabilities will still sum (or integrate) to 1 even if the prior values do not, and so the priors may only need to be specified in the correct proportion. Taking this idea further, in many cases the sum or integral of the prior values may not even need to be finite to get sensible answers for the posterior probabilities. When this is the case, the prior is called an improper prior. However, the posterior distribution need not be a proper distribution if the prior is improper. This is clear from the case where event B is independent of all of the Aj. Statisticians sometimes use improper priors as uninformative priors. For example, if they need a prior distribution for the mean and variance of a random variable, they may assume p(m, v) ~ 1/v (for v > 0) which would suggest that any value for the mean is "equally likely" and that a value for the positive variance becomes "less likely" in inverse proportion to its value. Many authors (Lindley, 1973; De Groot, 1937; Kass and Wasserman, 1996) warn against the danger of over-interpreting those priors since they are not probability densities. The only relevance they have is found in the corresponding posterior, as long as it is well-defined for all observations. (The Haldane prior is a typical counterexample.) By contrast, likelihood functions do not need to be integrated, and a likelihood function that is uniformly 1 corresponds to the absence of data (all models are equally likely, given no data): Bayes' rule multiplies a prior by the likelihood, and an empty product is just the constant likelihood 1. However, without starting with a prior probability distribution, one does not end up getting a posterior probability distribution, and thus cannot integrate or compute expected values or loss. See for details. Examples Examples of improper priors include: The uniform distribution on an infinite interval (i.e., a half-line or the entire real line). Beta(0,0), the beta distribution for α=0, β=0 (uniform distribution on log-odds scale). The logarithmic prior on the positive reals (uniform distribution on log scale). These functions, interpreted as uniform distributions, can also be interpreted as the likelihood function in the absence of data, but are not proper priors. Prior probability in statistical mechanics While in Bayesian statistics the prior probability is used to represent initial beliefs about an uncertain parameter, in statistical mechanics the a priori probability is used to describe the initial state of a system. The classical version is defined as the ratio of the number of elementary events (e.g. the number of times a die is thrown) to the total number of events—and these considered purely deductively, i.e. without any experimenting. In the case of the die if we look at it on the table without throwing it, each elementary event is reasoned deductively to have the same probability—thus the probability of each outcome of an imaginary throwing of the (perfect) die or simply by counting the number of faces is 1/6. Each face of the die appears with equal probability—probability being a measure defined for each elementary event. 
The result is different if we throw the die twenty times and ask how many times (out of 20) the number 6 appears on the upper face. In this case time comes into play and we have a different type of probability depending on time or the number of times the die is thrown. On the other hand, the a priori probability is independent of time—you can look at the die on the table as long as you like without touching it and you deduce the probability for the number 6 to appear on the upper face is 1/6. In statistical mechanics, e.g. that of a gas contained in a finite volume , both the spatial coordinates and the momentum coordinates of the individual gas elements (atoms or molecules) are finite in the phase space spanned by these coordinates. In analogy to the case of the die, the a priori probability is here (in the case of a continuum) proportional to the phase space volume element divided by , and is the number of standing waves (i.e. states) therein, where is the range of the variable and is the range of the variable (here for simplicity considered in one dimension). In 1 dimension (length ) this number or statistical weight or a priori weighting is . In customary 3 dimensions (volume ) the corresponding number can be calculated to be . In order to understand this quantity as giving a number of states in quantum (i.e. wave) mechanics, recall that in quantum mechanics every particle is associated with a matter wave which is the solution of a Schrödinger equation. In the case of free particles (of energy ) like those of a gas in a box of volume such a matter wave is explicitly where are integers. The number of different values and hence states in the region between is then found to be the above expression by considering the area covered by these points. Moreover, in view of the uncertainty relation, which in 1 spatial dimension is these states are indistinguishable (i.e. these states do not carry labels). An important consequence is a result known as Liouville's theorem, i.e. the time independence of this phase space volume element and thus of the a priori probability. A time dependence of this quantity would imply known information about the dynamics of the system, and hence would not be an a priori probability. Thus the region when differentiated with respect to time yields zero (with the help of Hamilton's equations): The volume at time is the same as at time zero. One describes this also as conservation of information. In the full quantum theory one has an analogous conservation law. In this case, the phase space region is replaced by a subspace of the space of states expressed in terms of a projection operator , and instead of the probability in phase space, one has the probability density where is the dimensionality of the subspace. The conservation law in this case is expressed by the unitarity of the S-matrix. In either case, the considerations assume a closed isolated system. This closed isolated system is a system with (1) a fixed energy and (2) a fixed number of particles in (c) a state of equilibrium. If one considers a huge number of replicas of this system, one obtains what is called a microcanonical ensemble. It is for this system that one postulates in quantum statistics the "fundamental postulate of equal a priori probabilities of an isolated system." This says that the isolated system in equilibrium occupies each of its accessible states with the same probability. 
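Written symbolically, with Ω(E) denoting the number of accessible microstates at energy E (standard notation, introduced here only for illustration), the postulate reads as follows.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Postulate of equal a priori probabilities: an isolated system in
% equilibrium, with \Omega(E) microstates accessible at energy E,
% occupies each accessible microstate r with the same probability.
\[
  P_r =
  \begin{cases}
    1/\Omega(E) & \text{if state } r \text{ is accessible,}\\
    0           & \text{otherwise.}
  \end{cases}
\]
\end{document}
```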
This fundamental postulate therefore allows us to equate the a priori probability to the degeneracy of a system, i.e. to the number of different states with the same energy. Example The following example illustrates the a priori probability (or a priori weighting) in (a) classical and (b) quantal contexts. A priori probability and distribution functions In statistical mechanics (see any textbook) one derives the so-called distribution functions for various statistics. In the case of Fermi–Dirac statistics and Bose–Einstein statistics these functions are respectively f_i^FD = 1/(exp((ε_i − ε_0)/kT) + 1) and f_i^BE = 1/(exp((ε_i − ε_0)/kT) − 1). These functions are derived for (1) a system in dynamic equilibrium (i.e. under steady, uniform conditions) with (2) a total (and huge) number of particles (this condition determines the constant ε_0), and (3) total energy, i.e. with each of the particles having the energy ε_i. An important aspect in the derivation is the taking into account of the indistinguishability of particles and states in quantum statistics, i.e. particles and states there do not carry labels. In the case of fermions, like electrons, obeying the Pauli principle (only one particle per state or none allowed), one has therefore 0 ≤ f_i^FD ≤ 1. Thus f_i^FD is a measure of the fraction of states actually occupied by electrons at energy ε_i and temperature T. On the other hand, the a priori probability g_i is a measure of the number of wave mechanical states available. Hence the actual number of particles at energy ε_i is n_i = f_i^FD g_i. Since n_i is constant under uniform conditions (as many particles as flow out of a volume element also flow in steadily, so that the situation in the element appears static), i.e. independent of time, and g_i is also independent of time as shown earlier, we obtain df_i^FD/dt = 0. Expressing this equation in terms of its partial derivatives, one obtains the Boltzmann transport equation. How do position coordinates enter here, when no mention was made above of electric or other fields? With no such fields present we have the Fermi–Dirac distribution as above, but with such fields present the distribution acquires this additional dependence on position.
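For reference, the distribution functions above can be evaluated numerically. The sketch below (Python; the temperature and ε_0 values are arbitrary illustrative choices, not from the text) shows that the Fermi–Dirac occupation stays between 0 and 1, as the Pauli principle requires, while the Bose–Einstein occupation is only defined for energies above the constant ε_0 and grows without bound near it.

import math

def fermi_dirac(eps, eps0, kT):
    return 1.0 / (math.exp((eps - eps0) / kT) + 1.0)

def bose_einstein(eps, eps0, kT):
    return 1.0 / (math.exp((eps - eps0) / kT) - 1.0)   # defined for eps > eps0

kT, eps0 = 0.025, 1.0         # illustrative values on an electron-volt scale
for eps in (0.90, 1.00, 1.10):
    print(eps, fermi_dirac(eps, eps0, kT))
# roughly 0.98, 0.5, 0.02: always between 0 and 1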
Mathematics
Statistics
null
473046
https://en.wikipedia.org/wiki/Primulaceae
Primulaceae
The Primulaceae ( ), commonly known as the primrose family (but not related to the evening primrose family), are a family of herbaceous and woody flowering plants including some favourite garden plants and wildflowers. Most are perennial though some species, such as scarlet pimpernel, are annuals. Previously one of three families in the order Primulales, it underwent considerable generic re-alignment once molecular phylogenetic methods were used for taxonomic classification. The order was then submerged in a much enlarged order Ericales and became a greatly enlarged Primulaceae sensu lato (s.l). In this new classification of the Angiosperm Phylogeny Group, each of the Primulales families was reduced to the rank of subfamily of Primulaceae s.l. The original Primulaceae (Primulaceae sensu stricto or s.s.) then became subfamily Primuloideae, and one genus (Maesa) was raised to the rank of a separate subfamily, making four in all. Description The family shares a number of characteristics, including haplostemonous flowers having the same number of petals and stamens, sympetalous corolla having the petals united, stamens opposite the petals, free central placentation, bitegmic (two layered) ovules and nuclear endosperm formation. Stems Primulaceae are mostly herbaceous, having no woody stem, except that some form cushions (spreading mats a few inches high) and their stems are stiffened by lignin. The stems can grow upright (erect) or spread out horizontally and then turn upright (decumbent). Leaves Leaves are simple, being directly attached to the stem by a petiole (stalk), but unlike the leaves of most flowering plants they have no stipules. The petiole is short or the leaf tapers gradually towards the base. Leaf arrangement is typically alternate but some are opposite or whorled, and there is generally a rosette at the base of the stem. The edges are toothed (dentate) or sawtoothed. New leaves in the bud are usually involute (rolled towards the upper surface) or conduplicate (folded upwards), but a few species roll downwards. Flowers Each flower is bisexual, having both stamens and carpels. They have radial symmetry; the petals can be separate or partially or fully fused together to form a tube-shaped corolla that opens up at the mouth to form a bell-like shape (as in item 8 in the figure) or a flat-faced flower. In most of the families of Ericales, stamens alternate with lobes, but in Primulaceae there is a stamen opposite each petal. The calyx has 4 to 9 lobes and persists after flowering. They are grouped in unbranched, indeterminate clusters such as racemes, spikes, corymbs or umbels. Reproductive anatomy The fruit of Primulaceae begins as an ovary and inside it are the future seeds (ovules). These are attached to a central axis without any partitions between them (an arrangement called free central placentation; see item 7 in the figure), and they are bitegmic (having a double protective layer around each ovule). Unlike in most other families of Ericales, both layers form the opening at the top (the micropyle). Seeds and fruit As seeds develop, an endosperm grows around the embryo through free division of nuclei without forming walls (nuclear endosperm formation). The embryo forms a pair of short, narrow cotyledons (item 10 in the figure). Usually multiple seeds are in a capsule that is carried on a straight stalk (pedicel or scape). After it matures, it splits apart, releasing the seeds ballistically. Taxonomy History The taxonomic history of Primulaceae has been long and complex. 
The botanical authority for the family name is given to August Batsch (1794), as Batsch ex Borkh, using the term Primulae with six genera, the valid description being subsequently given by Borkhausen (1797). Some earlier authors attributed the name to Ventenat (1799), as Primulaceae Vent., who described the Primulacées, but Batsch had precedence. Linnaeus (1753) placed Primula and related primuloid genera in the Hexandria Monogynia (six stamens one pistil) in his sexual classification based on reproductive characteristics. Jussieu arranged Linnaeus' genera in a hierarchical system of ranks based on the relative value of a much wider range of characteristics. In his Genera plantarum (1789) he organised the primuloid genera into two Ordo (families), within a class (VIII) he called Dicotyledones Monopetalae Corolla Hypogyna, based on the cotyledons (two), form of the petals (fused), and position of the corolla with respect to the ovary (below). Jussieu's families were the Lysimachiae, including Primula and Theophrasta and the Sapotae, including Myrsine, these being the three main lineages in modern understanding. The most complete treatment of the Primulaceae family, with nearly 1,000 species arranged into 22 genera, was by Pax and Knuth in 1905 in the Engler system. They divided the family into five tribes (and several subtribes); Androsaceae, Cyclamineae, Lysimachieae, Samoleae and Corideae. Many systems since have lacked consistency, but generally recognised two major groups as either tribes or subfamilies, the Lysimachieae and Primuleae (the Androsaceae of Pax and Knuth), with the largest genera being Primula, Lysimachia and Androsace. In the Cronquist system (1988), Cronquist included the three closely related families, Primulaceae, Myrsinaceae and Theophrastaceae in the order Primulales, of subclass Dilleniidae, based on morphological characteristics, in particular, ovaries with free-central placentation, a feature considered synapomorphic. His circumscription of Primulaceae included about 800 species. Molecular phylogenetics These three families were referred to as the primuloid families. With the later development of molecular phylogenetic methods, the Primulales were found to be more closely related to other families within the Ericales, and the three primuloid families were subsequently absorbed into an expanded Ericales (Ericales sensu lato or s.l.), making 24 families within that order, where the primuloids formed a monophyletic clade. It was also apparent that Myrsinaceae were paraphyletic, unless the genus Maesa was segregated and elevated to become a new monogeneric family, Maesaceae, but also that Primulaceae were probably paraphyletic. In the first consensus taxonomic classification, the Angiosperm Phylogeny Group (APG 1998), these proposals were recognised by including Primulaceae within Ericales, as Eudicots, forming one of three clades in the Asterids (Asteridae). Maesa was formally segregated in 2000. Further changes came from analysis of DNA sequence data. This led to the move of genera (primarily terrestrial non-basal-rosette) from Primulaceae to Mysinaceae and Theophrastaceae. At that time Primulaceae was considered to consist of nine tribes (Primuleae, Androsaceae, Ardisiandreae, Lysimachieae, Glauceae, Anagallideae, Corideae, Cyclamineae, and Samoleae). Notably, Lysimachieae and three smaller tribes, Corideae, Cyclamineae and Ardisiandreae, were transferred to Myrsinaceae, and Samoleae to Theophrastaceae. This enlarged Myrsinaceae is distinguished as Myrsinaceae s.l. 
in comparison to the previous smaller family, Myrsinaceae s.s. (less Maesa). Some authors preferred to raise Samoleae to its own family, Samolaceae, but this has not been accepted by subsequent authors, placing it within Theophrastaceae, while recognising its distinct position within that grouping. These transfers, to preserve monophyly at the family level essentially left two tribes remaining in Primulaceae, the Primuleae and Androsaceae, with about 15 genera sharing a number of common characteristics. These additional changes were reflected in the 2003 revision of the APG system (APG II), where the now four primuloid families were among 23 in Ericales. This restricted Primulaceae sensu stricto (s.s.) consisted of three groups: The Primulae, including Primula, the largest genus; the Androsaceae, including Androsace, the second largest genus; together with a small third group containing Soldanella, Hottonia, Omphalogramma and Bryocarpum. The APG third classification system (APG III, 2009) discussed all the taxonomic challenges arising from the phylogenetic studies, and placed all primuloid genera into one large Primulaceae s.l., corresponding to Cronquist's Primulales. They stated that "The biggest problem for APG III was the question of how to treat Primulaceae and their immediate relatives, a closely related group that in the past has often been recognized as a separate order". The decision to treat all genera as a single family was based on the observation that the new circumscriptions had little in the way of apomorphies, but the entire group had numerous synapomorphies and were easy to recognise. This resulted in an Ericales with 22 families. Consequently, the four primuloid families were reduced to the rank of subfamilies within Primulaceae s.l. Phylogeny Primulaceae s.l. sensu APG III form part of the speciose (species rich) Asterid order Ericales s.l., with about 12,000 species and 22 families as per APG IV. Ericales is one of four major clades within the asterids, where it is sister to the euasterids. The phylogenetic structure of Ericales, as shown in the following cladogram, consists of seven major suprafamilial clades (e.g. balsaminoids, styracoids) and a group of "core" Ericales. Within the eracalean families, Primulaceae s.l. is shown as a sister group to Ebenaceae, and both are sister to Sapotaceae. These three families make up the primuloid clade. Evolution and biogeography The fossil record of Primulaceae s.l. is sparse, but the crown group has been estimated as c. 46-61 million years old. The crown primuloids have been dated to c. 102 mya, with Primulaceae/Ebenaceae divergence at 80 mya. Crown ages for the Primulaceae subfamilies vary from 24 mya for the Maesoideae, the basal group, to 70 mya for the Theophrastoideae. The primuloids probably originated in a shared Neotropical/Indo-Malaysian ancestral range, with the Primulaceae/Ebenaceae clade occupying the neotropics. Theophrastoideae is nearly all neotropical with a more recent migration out of the realm found in the aquatic Samolus genus. The divergence between Theophrastoideae and Primuloideae-Myrsinoideae at 70 mya represents a vicariant event between the Neotropics and the Palearctic in the case of the latter. The Primuloideae originating in the Palearctic, persisted till the last 16 mya, when it started to shift into the Nearctic. Subdivision The three former families of the Primulales, together with the segregated Maesaceae, have been re-circumscribed into the broadly defined Primulaceae sensu lato (s.l.) 
The two uniting features of this family are a free central placenta and one stamen opposite each of the corolla lobes. The cladogram below shows the infrafamilial phylogenetic relationships, together with the subfamilial crown ages. Maesoideae forms the basal group, while Primuloideae and Myrsinoideae are in a sister group relationship. Christenhusz et al. (2016, 2017) list 53 genera and 2,790 species, varying from 1 in Maesoideae to 38 in Myrsinoideae, with 8 in Theophrastoideae and the remaining 6 in Primuloideae. Byng (2014) and Plants of the World Online list 55 accepted genera. The generic limits of Myrsinoideae are not fully resolved and the status of a number of genera is under revision. Subfamilies Etymology The Primulaceae are named for their nominative and type genus, Primula. Linnaeus used this name to reflect its place among the first flowers of spring, given the primrose's vernacular Latin name of primula veris (), primula (feminine diminutive primus, first + veris (genitive ver, spring). Distribution and habitat Distribution is cosmopolitan. Cultivation The British National Collection of Double Primroses is held at Glebe Garden, at North Petherwin, in North Cornwall.
Biology and health sciences
Ericales
null
473326
https://en.wikipedia.org/wiki/Poise%20%28unit%29
Poise (unit)
The poise (symbol P) is the unit of dynamic viscosity (absolute viscosity) in the centimetre–gram–second system of units (CGS). It is named after Jean Léonard Marie Poiseuille (see Hagen–Poiseuille equation). The centipoise (1 cP = 0.01 P) is more commonly used than the poise itself. Dynamic viscosity has dimensions of mass/(length⋅time), that is, g/(cm⋅s) in CGS units. The analogous unit in the International System of Units is the pascal-second (Pa⋅s): 1 P = 1 g/(cm⋅s) = 0.1 kg/(m⋅s) = 0.1 Pa⋅s. The poise is often used with the metric prefix centi- because the viscosity of water at 20 °C (standard conditions for temperature and pressure) is almost exactly 1 centipoise. A centipoise is one hundredth of a poise, or one millipascal-second (mPa⋅s) in SI units (1 cP = 10−3 Pa⋅s = 1 mPa⋅s). The CGS symbol for the centipoise is cP. The abbreviations cps, cp, and cPs are sometimes seen. Liquid water has a viscosity of 0.00890 P at 25 °C at a pressure of 1 atmosphere (0.00890 P = 0.890 cP = 0.890 mPa⋅s).
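A minimal set of conversion helpers (a Python sketch based on the factors above; the numerical examples simply restate the water viscosities already quoted):

def poise_to_pascal_second(p):
    return 0.1 * p             # 1 P = 0.1 Pa·s

def centipoise_to_pascal_second(cp):
    return 1.0e-3 * cp         # 1 cP = 1 mPa·s = 0.001 Pa·s

print(poise_to_pascal_second(0.00890))      # water at 25 °C: 8.9e-4 Pa·s
print(centipoise_to_pascal_second(1.0))     # water at 20 °C: about 1e-3 Pa·s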
Physical sciences
Viscosity
Basics and measurement
473493
https://en.wikipedia.org/wiki/Quercus%20macrocarpa
Quercus macrocarpa
Quercus macrocarpa, the bur oak or burr oak, is a species of oak tree native to eastern North America. It is in the white oak section, Quercus sect. Quercus, and is also called mossycup oak, mossycup white oak, blue oak, or scrub oak. The acorns are the largest of any North American oak (thus the species name macrocarpa, from Ancient Greek "large" and "fruit"), and are important food for wildlife. Description Quercus macrocarpa is a large deciduous tree growing up to , rarely , in height, and is one of the most massive oaks with a trunk diameter of up to . It is one of the slowest-growing oaks, with a growth rate of per year when young. However, one source states that a well-established tree can grow up to per year. A 20-year-old tree will be about tall if grown in full sun. Naturally occurring saplings in forests will typically be older. Bur oaks commonly get to be 200 to 300 years old, and may live up to 400 years. The bark is gray with distinct vertical ridges. The leaves are long and broad, variable in shape, with a lobed margin. Most often, the basal two-thirds is narrower and deeply lobed, while the apical third is wider and has shallow lobes or large teeth. They usually do not show strong fall color, although fine golden hues are occasionally seen. The flowers are greenish-yellow catkins, produced in the spring. The acorns are very large, long and broad, having a large cup that wraps much of the way around the nut, with large overlapping scales and often a fringe at the edge of the cup. The wood when sawn transversely shows the characteristic annual rings formed by secondary thickening. Bur oak is sometimes confused with other members of the white oak section, such as Quercus bicolor (swamp white oak), Quercus lyrata (overcup oak), and Quercus alba (white oak). It hybridises with several other species of oaks. Varieties Two varieties are accepted in Kew's Plants of the World Online: Quercus macrocarpa var. macrocarpa Quercus macrocarpa var. depressa Distribution and habitat Quercus macrocarpa is widespread in the Atlantic coastal plain from New Brunswick to North Carolina, west as far as Alberta, eastern Montana, Wyoming, and northeastern New Mexico. The vast majority of the populations are found in the eastern Great Plains, the Mississippi–Missouri–Ohio Valley, and the Great Lakes region. Bur oaks primarily grow in a temperate climate on the western oak–hickory forested regions in the United States and into Canada. It commonly grows in the open, away from dense forest canopy. For this reason, it is an important tree on the eastern prairies, often found near waterways in otherwise more forested areas, where there is a break in the canopy. It is drought resistant, possibly because of its long taproot. At the end of the growing season, a one-year sapling may have a taproot deep and a lateral root spread of . The West Virginia state champion bur oak has a trunk diameter of almost . Large bur oaks, older than 12 years, are fire-tolerant because of their thick bark. One of the bur oak's most common habitats, especially in Midwestern United States, is the oak savanna, where fires often occurred in early spring or late fall. Without fires, bur oak is often succeeded by other tree and shrub species that are more shade-tolerant. Older bur oaks may survive in dense woodlands for 80 years, until they are weakened by wood-rot fungi in the lower branches killed by shade, and by 100 to 110 years, they are often snapped by wind storms. 
Ecology The acorns are the largest of any North American oak and are an important wildlife food; American black bears sometimes tear off branches to get them. However, heavy nut crops are borne only every few years. In this evolutionary strategy, known as masting, the large seed crop every few years overwhelms the ability of seed predators to eat the acorns, thus ensuring the survival of some seeds. Other wildlife, such as deer and porcupine, eat the leaves, twigs and bark. Cattle are heavy browsers in some areas. The bur oak is the only known foodplant of Bucculatrix recognita caterpillars. Many species of arthropods form galls on the leaves and twigs, including Aceria querci (a mite) and numerous cynipid wasps: Acraspis macrocarpae, Acraspis villosa, Andricus chinquapin, Andricus dimorphus, Andricus foliaformis, Andricus flavohirtus, Andricus quercuspetiolicola, Callirhytis flavipes, Disholcaspis quercusmamma, Neuroterus floccosus, Neuroterus saltarius, Neuroterus umbilicatus, Philonix nigra, and Phylloteras poculum. Diseases Bur oak blight is caused by a fungal pathogen Tubakia iowensis. It forms black pustules on the petioles and causes leaf discoloration and death, making the tree more susceptible to other secondary issues such as Armillaria root rot or Agrilus bilineatus (two-lined chestnut borer). Cultivation Quercus macrocarpa is cultivated by plant nurseries for use in gardens, in parks, and on urban sidewalks. Among the white oaks, it is one of the most tolerant of urban conditions, and is one of the fastest-growing of the group. It has been planted in many climates, ranging northwards to Anchorage, Alaska, and as far south as Mission, Texas. It withstands chinook conditions in Calgary, Alberta. It is drought tolerant. Coppicing has been shown to produce superior growth. Uses The wood of Quercus macrocarpa is commercially valuable; it is durable, used for flooring, fence posts, cabinets, and barrels. The acorns can be eaten boiled and raw. Native Americans have used the astringent bark to treat wounds, sores, rashes, and diarrhea. Culture Many places are named after the burr oak, such as Burr Oak State Park in Ohio, the cities of Burr Oak, Iowa and Burr Oak, Kansas, and the village of Burr Oak, Michigan. Burr Oaks (1947) is a volume of poetry by Richard Eberhart.
Biology and health sciences
Fagales
Plants
473514
https://en.wikipedia.org/wiki/Generalized%20coordinates
Generalized coordinates
In analytical mechanics, generalized coordinates are a set of parameters used to represent the state of a system in a configuration space. These parameters must uniquely define the configuration of the system relative to a reference state. The generalized velocities are the time derivatives of the generalized coordinates of the system. The adjective "generalized" distinguishes these parameters from the traditional use of the term "coordinate" to refer to Cartesian coordinates. An example of a generalized coordinate would be to describe the position of a pendulum using the angle of the pendulum relative to vertical, rather than by the x and y position of the pendulum. Although there may be many possible choices for generalized coordinates for a physical system, they are generally selected to simplify calculations, such as the solution of the equations of motion for the system. If the coordinates are independent of one another, the number of independent generalized coordinates is defined by the number of degrees of freedom of the system. Generalized coordinates are paired with generalized momenta to provide canonical coordinates on phase space. Constraints and degrees of freedom Generalized coordinates are usually selected to provide the minimum number of independent coordinates that define the configuration of a system, which simplifies the formulation of Lagrange's equations of motion. However, it can also occur that a useful set of generalized coordinates may be dependent, which means that they are related by one or more constraint equations. Holonomic constraints For a system of particles in 3D real coordinate space, the position vector of each particle can be written as a 3-tuple in Cartesian coordinates: Any of the position vectors can be denoted where labels the particles. A holonomic constraint is a constraint equation of the form for particle which connects all the 3 spatial coordinates of that particle together, so they are not independent. The constraint may change with time, so time will appear explicitly in the constraint equations. At any instant of time, any one coordinate will be determined from the other coordinates, e.g. if and are given, then so is . One constraint equation counts as one constraint. If there are constraints, each has an equation, so there will be constraint equations. There is not necessarily one constraint equation for each particle, and if there are no constraints on the system then there are no constraint equations. So far, the configuration of the system is defined by quantities, but coordinates can be eliminated, one coordinate from each constraint equation. The number of independent coordinates is . (In dimensions, the original configuration would need coordinates, and the reduction by constraints means ). It is ideal to use the minimum number of coordinates needed to define the configuration of the entire system, while taking advantage of the constraints on the system. These quantities are known as generalized coordinates in this context, denoted . It is convenient to collect them into an -tuple which is a point in the configuration space of the system. They are all independent of one other, and each is a function of time. Geometrically they can be lengths along straight lines, or arc lengths along curves, or angles; not necessarily Cartesian coordinates or other standard orthogonal coordinates. There is one for each degree of freedom, so the number of generalized coordinates equals the number of degrees of freedom, . 
A degree of freedom corresponds to one quantity that changes the configuration of the system, for example the angle of a pendulum, or the arc length traversed by a bead along a wire. If it is possible to find from the constraints as many independent variables as there are degrees of freedom, these can be used as generalized coordinates. The position vector of particle is a function of all the generalized coordinates (and, through them, of time), and the generalized coordinates can be thought of as parameters associated with the constraint. The corresponding time derivatives of are the generalized velocities, (each dot over a quantity indicates one time derivative). The velocity vector is the total derivative of with respect to time and so generally depends on the generalized velocities and coordinates. Since we are free to specify the initial values of the generalized coordinates and velocities separately, the generalized coordinates and velocities can be treated as independent variables. Non-holonomic constraints A mechanical system can involve constraints on both the generalized coordinates and their derivatives. Constraints of this type are known as non-holonomic. First-order non-holonomic constraints have the form An example of such a constraint is a rolling wheel or knife-edge that constrains the direction of the velocity vector. Non-holonomic constraints can also involve next-order derivatives such as generalized accelerations. Physical quantities in generalized coordinates Kinetic energy The total kinetic energy of the system is the energy of the system's motion, defined as in which · is the dot product. The kinetic energy is a function only of the velocities , not the coordinates themselves. By contrast an important observation is which illustrates the kinetic energy is in general a function of the generalized velocities, coordinates, and time if the constraints also vary with time, so . In the case the constraints on the particles are time-independent, then all partial derivatives with respect to time are zero, and the kinetic energy is a homogeneous function of degree 2 in the generalized velocities. Still for the time-independent case, this expression is equivalent to taking the line element squared of the trajectory for particle , and dividing by the square differential in time, , to obtain the velocity squared of particle . Thus for time-independent constraints it is sufficient to know the line element to quickly obtain the kinetic energy of particles and hence the Lagrangian. It is instructive to see the various cases of polar coordinates in 2D and 3D, owing to their frequent appearance. In 2D polar coordinates , in 3D cylindrical coordinates , in 3D spherical coordinates , Generalized momentum The generalized momentum "canonically conjugate to" the coordinate is defined by If the Lagrangian does not depend on some coordinate , then it follows from the Euler–Lagrange equations that the corresponding generalized momentum will be a conserved quantity, because the time derivative is zero implying the momentum is a constant of the motion; Examples Bead on a wire For a bead sliding on a frictionless wire subject only to gravity in 2d space, the constraint on the bead can be stated in the form , where the position of the bead can be written , in which is a parameter, the arc length along the curve from some point on the wire. This is a suitable choice of generalized coordinate for the system. 
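Returning to the polar-coordinate kinetic energy mentioned above, the sketch below (using SymPy, which is an assumption of this illustration, not part of the article) builds the 2D polar kinetic energy from the Cartesian velocities and extracts the momentum conjugate to the angle, which turns out to be the angular momentum m r² θ̇ (here the potential is velocity-independent, so differentiating the kinetic energy suffices).

import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
r = sp.Function('r')(t)
theta = sp.Function('theta')(t)

x = r * sp.cos(theta)
y = r * sp.sin(theta)
T = sp.simplify(sp.Rational(1, 2) * m * (sp.diff(x, t)**2 + sp.diff(y, t)**2))
print(T)                        # m*(r'^2 + r^2*theta'^2)/2

# Momentum conjugate to theta: substitute plain symbols for the time
# derivatives so we can differentiate with respect to them.
rdot, thetadot = sp.symbols('rdot thetadot')
T_plain = T.subs({sp.diff(r, t): rdot, sp.diff(theta, t): thetadot})
p_theta = sp.diff(T_plain, thetadot)
print(p_theta)                  # m*r(t)**2*thetadot, the angular momentum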
Only one coordinate is needed instead of two, because the position of the bead can be parameterized by one number, , and the constraint equation connects the two coordinates and ; either one is determined from the other. The constraint force is the reaction force the wire exerts on the bead to keep it on the wire, and the non-constraint applied force is gravity acting on the bead. Suppose the wire changes its shape with time, by flexing. Then the constraint equation and position of the particle are respectively which now both depend on time due to the changing coordinates as the wire changes its shape. Notice time appears implicitly via the coordinates and explicitly in the constraint equations. Simple pendulum The relationship between the use of generalized coordinates and Cartesian coordinates to characterize the movement of a mechanical system can be illustrated by considering the constrained dynamics of a simple pendulum. A simple pendulum consists of a mass hanging from a pivot point so that it is constrained to move on a circle of radius . The position of the mass is defined by the coordinate vector measured in the plane of the circle such that is in the vertical direction. The coordinates and are related by the equation of the circle that constrains the movement of . This equation also provides a constraint on the velocity components, Now introduce the parameter , that defines the angular position of from the vertical direction. It can be used to define the coordinates and , such that The use of to define the configuration of this system avoids the constraint provided by the equation of the circle. Notice that the force of gravity acting on the mass is formulated in the usual Cartesian coordinates, where is the acceleration due to gravity. The virtual work of gravity on the mass as it follows the trajectory is given by The variation can be computed in terms of the coordinates and , or in terms of the parameter , Thus, the virtual work is given by Notice that the coefficient of is the -component of the applied force. In the same way, the coefficient of is known as the generalized force along generalized coordinate , given by To complete the analysis consider the kinetic energy of the mass, using the velocity, so, D'Alembert's form of the principle of virtual work for the pendulum in terms of the coordinates and are given by, This yields the three equations in the three unknowns, , and . Using the parameter , those equations take the form which becomes, or This formulation yields one equation because there is a single parameter and no constraint equation. This shows that the parameter is a generalized coordinate that can be used in the same way as the Cartesian coordinates and to analyze the pendulum. Double pendulum The benefits of generalized coordinates become apparent with the analysis of a double pendulum. For the two masses , let define their two trajectories. These vectors satisfy the two constraint equations, and The formulation of Lagrange's equations for this system yields six equations in the four Cartesian coordinates and the two Lagrange multipliers that arise from the two constraint equations. Now introduce the generalized coordinates that define the angular position of each mass of the double pendulum from the vertical direction. In this case, we have The force of gravity acting on the masses is given by, where is the acceleration due to gravity. 
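The simple-pendulum reduction worked through above can also be carried out mechanically. The sketch below (SymPy-based, an illustration rather than anything from the article) uses θ as the single generalized coordinate and obtains one equation of motion, with no constraint equation and no Lagrange multiplier.

import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, g, l = sp.symbols('m g l', positive=True)
theta = sp.Function('theta')(t)

# Cartesian position of the bob expressed through the generalized coordinate.
x = l * sp.sin(theta)
y = -l * sp.cos(theta)

T = sp.Rational(1, 2) * m * (sp.diff(x, t)**2 + sp.diff(y, t)**2)
V = m * g * y
L = sp.simplify(T - V)

# A single Euler-Lagrange equation, equivalent to theta'' = -(g/l)*sin(theta).
print(euler_equations(L, theta, t))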
Therefore, the virtual work of gravity on the two masses as they follow the trajectories is given by The variations can be computed to be Thus, the virtual work is given by and the generalized forces are Compute the kinetic energy of this system to be Euler–Lagrange equation yield two equations in the unknown generalized coordinates given by and The use of the generalized coordinates provides an alternative to the Cartesian formulation of the dynamics of the double pendulum. Spherical pendulum For a 3D example, a spherical pendulum with constant length free to swing in any angular direction subject to gravity, the constraint on the pendulum bob can be stated in the form where the position of the pendulum bob can be written in which are the spherical polar angles because the bob moves in the surface of a sphere. The position is measured along the suspension point to the bob, here treated as a point particle. A logical choice of generalized coordinates to describe the motion are the angles . Only two coordinates are needed instead of three, because the position of the bob can be parameterized by two numbers, and the constraint equation connects the three coordinates so any one of them is determined from the other two. Generalized coordinates and virtual work The principle of virtual work states that if a system is in static equilibrium, the virtual work of the applied forces is zero for all virtual movements of the system from this state, that is, for any variation . When formulated in terms of generalized coordinates, this is equivalent to the requirement that the generalized forces for any virtual displacement are zero, that is . Let the forces on the system be be applied to points with Cartesian coordinates , then the virtual work generated by a virtual displacement from the equilibrium position is given by where denote the virtual displacements of each point in the body. Now assume that each depends on the generalized coordinates then and The terms are the generalized forces acting on the system. Kane shows that these generalized forces can also be formulated in terms of the ratio of time derivatives, where is the velocity of the point of application of the force . In order for the virtual work to be zero for an arbitrary virtual displacement, each of the generalized forces must be zero, that is
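The same machinery handles the double pendulum discussed above. The sketch below (again SymPy, an illustrative assumption) produces the two coupled Euler–Lagrange equations in the generalized coordinates θ1 and θ2 directly, instead of six equations in four Cartesian coordinates plus two Lagrange multipliers.

import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m1, m2, l1, l2, g = sp.symbols('m1 m2 l1 l2 g', positive=True)
th1 = sp.Function('theta1')(t)
th2 = sp.Function('theta2')(t)

# Positions of the two masses expressed through the two angles.
x1, y1 = l1 * sp.sin(th1), -l1 * sp.cos(th1)
x2, y2 = x1 + l2 * sp.sin(th2), y1 - l2 * sp.cos(th2)

T = (sp.Rational(1, 2) * m1 * (sp.diff(x1, t)**2 + sp.diff(y1, t)**2)
     + sp.Rational(1, 2) * m2 * (sp.diff(x2, t)**2 + sp.diff(y2, t)**2))
V = m1 * g * y1 + m2 * g * y2
L = sp.simplify(T - V)

for eq in euler_equations(L, [th1, th2], t):
    print(sp.simplify(eq))      # two coupled equations of motion in theta1, theta2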
Physical sciences
Classical mechanics
Physics
473979
https://en.wikipedia.org/wiki/Mutually%20orthogonal%20Latin%20squares
Mutually orthogonal Latin squares
In combinatorics, two Latin squares of the same size (order) are said to be orthogonal if when superimposed the ordered paired entries in the positions are all distinct. A set of Latin squares, all of the same order, all pairs of which are orthogonal is called a set of mutually orthogonal Latin squares. This concept of orthogonality in combinatorics is strongly related to the concept of blocking in statistics, which ensures that independent variables are truly independent with no hidden confounding correlations. "Orthogonal" is thus synonymous with "independent" in that knowing one variable's value gives no further information about another variable's likely value. An older term for a pair of orthogonal Latin squares is Graeco-Latin square, introduced by Euler. Graeco-Latin squares A Graeco-Latin square or Euler square or pair of orthogonal Latin squares of order over two sets and (which may be the same), each consisting of symbols, is an arrangement of cells, each cell containing an ordered pair , where is in and is in , such that every row and every column contains each element of and each element of exactly once, and that no two cells contain the same ordered pair. The arrangement of the -coordinates by themselves (which may be thought of as Latin characters) and of the -coordinates (the Greek characters) each forms a Latin square. A Graeco-Latin square can therefore be decomposed into two orthogonal Latin squares. Orthogonality here means that every pair from the Cartesian product occurs exactly once. Orthogonal Latin squares were studied in detail by Leonhard Euler, who took the two sets to be }, the first upper-case letters from the Latin alphabet, and }, the first lower-case letters from the Greek alphabet—hence the name Graeco-Latin square. Existence When a Graeco-Latin square is viewed as a pair of orthogonal Latin squares, each of the Latin squares is said to have an orthogonal mate. In an arbitrary Latin square, a selection of positions, one in each row and one in each column whose entries are all distinct is called a transversal of that square. Consider one symbol in a Graeco-Latin square. The positions containing this symbol must all be in different rows and columns, and furthermore the other symbol in these positions must all be distinct. Hence, when viewed as a pair of Latin squares, the positions containing one symbol in the first square correspond to a transversal in the second square (and vice versa). A given Latin square of order n possesses an orthogonal mate if and only if it has n disjoint transversals. The Cayley table (without borders) of any group of odd order forms a Latin square which possesses an orthogonal mate. Thus Graeco-Latin squares exist for all odd orders as there are groups that exist of these orders. Such Graeco-Latin squares are said to be group based. Euler was able to construct Graeco-Latin squares of orders that are multiples of four, and seemed to be aware of the following result. No group based Graeco-Latin squares can exist if the order is an odd multiple of two (that is, equal to 4 + 2 for some positive integer ). History Although recognized for his original mathematical treatment of the subject, orthogonal Latin squares predate Euler. In the form of an old puzzle involving playing cards, the construction of a 4 x 4 set was published by Jacques Ozanam in 1725. 
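As a concrete check of the definitions above, the following sketch (Python; the two order-3 squares are a standard illustrative pair, not taken from the article) verifies the Latin property of two squares and tests their orthogonality by superimposing them and collecting the ordered pairs.

def is_latin(square):
    n = len(square)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in square)
    cols_ok = all({square[r][c] for r in range(n)} == symbols for c in range(n))
    return rows_ok and cols_ok

def are_orthogonal(a, b):
    n = len(a)
    pairs = {(a[r][c], b[r][c]) for r in range(n) for c in range(n)}
    return len(pairs) == n * n       # every ordered pair occurs exactly once

A = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
B = [[0, 1, 2], [2, 0, 1], [1, 2, 0]]
print(is_latin(A), is_latin(B), are_orthogonal(A, B))   # True True True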
The problem was to take all aces, kings, queens and jacks from a standard deck of cards, and arrange them in a 4 x 4 grid such that each row and each column contained all four suits as well as one of each face value. This problem has several solutions. A common variant of this problem was to arrange the 16 cards so that, in addition to the row and column constraints, each diagonal contains all four face values and all four suits as well. According to Martin Gardner, who featured this variant of the problem in his November 1959 Mathematical Games column, the number of distinct solutions was incorrectly stated to be 72 by Rouse Ball. This mistake persisted for many years until the correct value of 144 was found by Kathleen Ollerenshaw. Each of the 144 solutions has eight reflections and rotations, giving 1152 solutions in total. The 144×8 solutions can be categorized into the following two equivalence classes: For each of the two solutions, 24×24 = 576 solutions can be derived by permuting the four suits and the four face values, independently. No permutation will convert the two solutions into each other, because suits and face values are different. Thirty-six officers problem A problem similar to the card problem above was circulating in St. Petersburg in the late 1700s and, according to folklore, Catherine the Great asked Euler to solve it, since he was residing at her court at the time. This problem is known as the thirty-six officers problem, and Euler introduced it as follows: Euler was unable to solve the problem, but in this work he demonstrated methods for constructing Graeco-Latin squares where is odd or a multiple of 4. Observing that no order two square exists and being unable to construct an order six square, he conjectured that none exist for any oddly even number The non-existence of order six squares was confirmed in 1901 by Gaston Tarry through a proof by exhaustion. However, Euler's conjecture resisted solution until the late 1950s, but the problem has led to important work in combinatorics. In 1959, R.C. Bose and S. S. Shrikhande constructed some counterexamples (dubbed the Euler spoilers) of order 22 using mathematical insights. Then E. T. Parker found a counterexample of order 10 using a one-hour computer search on a UNIVAC 1206 Military Computer while working at the UNIVAC division of Remington Rand (this was one of the earliest combinatorics problems solved on a digital computer). In April 1959, Parker, Bose, and Shrikhande presented their paper showing Euler's conjecture to be false for all Thus, Graeco-Latin squares exist for all orders except In the November 1959 edition of Scientific American, Martin Gardner published this result. The front cover is the 10 × 10 refutation of Euler's conjecture. Thirty-six entangled officers problem Extensions of mutually orthogonal Latin squares to the quantum domain have been studied since 2017. In these designs, instead of the uniqueness of symbols, the elements of an array are quantum states that must be orthogonal to each other in rows and columns. In 2021, an Indian-Polish team of physicists (Rather, Burchardt, Bruzda, Rajchel-Mieldzioć, Lakshminarayan, and Życzkowski) found an array of quantum states that provides an example of mutually orthogonal quantum Latin squares of size 6; or, equivalently, an arrangement of 36 officers that are entangled. 
This setup solves a generalization of the 36 Euler's officers problem, as well as provides a new quantum error detection code, allowing to encode a 6-level system into a three 6-level system that certifies occurrence of one error. Examples of mutually orthogonal Latin squares (MOLS) A set of Latin squares of the same order such that every pair of squares are orthogonal (that is, form a Graeco-Latin square) is called a set of mutually orthogonal Latin squares (or pairwise orthogonal Latin squares) and usually abbreviated as MOLS or MOLS(n) when the order is made explicit. For example, a set of MOLS(4) is given by: And a set of MOLS(5): While it is possible to represent MOLS in a "compound" matrix form similar to the Graeco-Latin squares, for instance, {| class="wikitable" |- | 1,1,1,1 | 2,2,2,2 | 3,3,3,3 | 4,4,4,4 | 5,5,5,5 |- | 2,3,5,4 | 3,4,1,5 | 4,5,2,1 | 5,1,3,2 | 1,2,4,3 |- | 3,5,4,2 | 4,1,5,3 | 5,2,1,4 | 1,3,2,5 | 2,4,3,1 |- | 4,2,3,5 | 5,3,4,1 | 1,4,5,2 | 2,5,1,3 | 3,1,2,4 |- | 5,4,2,3 | 1,5,3,4 | 2,1,4,5 | 3,2,5,1 | 4,3,1,2 |} for the MOLS(5) example above, it is more typical to compactly represent the MOLS as an orthogonal array (see below). In the examples of MOLS given so far, the same alphabet (symbol set) has been used for each square, but this is not necessary as the Graeco-Latin squares show. In fact, totally different symbol sets can be used for each square of the set of MOLS. For example, is a representation of the compounded MOLS(5) example above where the four MOLS have the following alphabets, respectively: the background color: black, maroon, teal, navy, and silver the foreground color: white, red, lime, blue, and yellow the text: fjords, jawbox, phlegm, qiviut, and zincky the typeface family: serif, sans-serif, monospaced, cursive, and slab-serif. The above table therefore allows for testing five values in each of four different dimensions in only 25 observations instead of 625 (= 54) observations required in a full factorial design. Since the five words cover all 26 letters of the alphabet between them, the table allows for examining each letter of the alphabet in five different typefaces and color combinations. The number of mutually orthogonal Latin squares The mutual orthogonality property of a set of MOLS is unaffected by Permuting the rows of all the squares simultaneously, Permuting the columns of all the squares simultaneously, and Permuting the entries in any square, independently. Using these operations, any set of MOLS can be put into standard form, meaning that the first row of every square is identical and normally put in some natural order, and one square has its first column also in this order. The MOLS(4) and MOLS(5) examples at the start of this section have been put in standard form. By putting a set of MOLS() in standard form and examining the entries in the second row and first column of each square, it can be seen that no more than squares can exist. A set of − 1 MOLS() is called a complete set of MOLS. Complete sets are known to exist when is a prime number or power of a prime (see Finite field construction below). However, the number of MOLS that may exist for a given order is not known for general , and is an area of research in combinatorics. Projective planes A set of − 1 MOLS() is equivalent to a finite affine plane of order (see Nets below). As every finite affine plane is uniquely extendable to a finite projective plane of the same order, this equivalence can also be expressed in terms of the existence of these projective planes. 
As mentioned above, complete sets of MOLS() exist if is a prime or prime power, so projective planes of such orders exist. Finite projective planes with an order different from these, and thus complete sets of MOLS of such orders, are not known to exist. The only general result on the non-existence of finite projective planes is the Bruck–Ryser theorem, which says that if a projective plane of order exists and or ≡ 2 (mod 4), then must be the sum of two (integer) squares. This rules out projective planes of orders 6 and 14 for instance, but does not guarantee the existence of a plane when satisfies the condition. In particular, = 10 satisfies the conditions, but no projective plane of order 10 exists, as was shown by a very long computer search, which in turn implies that there do not exist nine MOLS of order 10. No other existence results are known. the smallest order for which the existence of a complete set of MOLS is undetermined is 12. McNeish's theorem The minimum number of MOLS() is known to be 2 for all except for = 2 or 6, where it is 1. However, more can be said, namely, MacNeish's Theorem: If is the factorization of the integer into powers of distinct primes then MacNeish's theorem does not give a very good lower bound, for instance if ≡ 2 (mod 4), that is, there is a single 2 in the prime factorization, the theorem gives a lower bound of 1, which is beaten if > 6. On the other hand, it does give the correct value when is a power of a prime. For general composite numbers, the number of MOLS is not known. The first few values starting with = 2, 3, 4... are 1, 2, 3, 4, 1, 6, 7, 8, ... . The smallest case for which the exact number of MOLS() is not known is = 10. From the Graeco-Latin square construction, there must be at least two and from the non-existence of a projective plane of order 10, there are fewer than nine. However, no set of three MOLS(10) has ever been found even though many researchers have attempted to discover such a set. For large enough , the number of MOLS is greater than , thus for every , there are only a finite number of such that the number of MOLS is . Moreover, the minimum is 6 for all > 90. Finite field construction A complete set of MOLS() exists whenever is a prime or prime power. This follows from a construction that is based on a finite field GF(), which only exist if is a prime or prime power. The multiplicative group of GF() is a cyclic group, and so, has a generator, λ, meaning that all the non-zero elements of the field can be expressed as distinct powers of λ. Name the elements of GF() as follows: α0 = 0, α1 = 1, α2 = λ, α3 = λ2, ..., α-1 = λ-2. Now, λ-1 = 1 and the product rule in terms of the α's is αα = α, where = + -1 (mod -1). The Latin squares are constructed as follows, the ()th entry in Latin square L (with ≠ 0) is L() = α + αα, where all the operations occur in GF(). In the case that the field is a prime field ( = a prime), where the field elements are represented in the usual way, as the integers modulo , the naming convention above can be dropped and the construction rule can be simplified to L() = + , where ≠ 0 and , and are elements of GF() and all operations are in GF(). The MOLS(4) and MOLS(5) examples above arose from this construction, although with a change of alphabet. Not all complete sets of MOLS arise from this construction. The projective plane that is associated with the complete set of MOLS obtained from this field construction is a special type, a Desarguesian projective plane. 
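For the prime case of the construction just described, the rule simplifies to L_k(i, j) = k·i + j (mod p). The sketch below (Python, with the illustrative choice p = 5) builds the p − 1 squares and confirms that every pair is orthogonal, giving a complete set of MOLS(5).

from itertools import combinations

def mols_prime(p):
    # L_k(i, j) = (k*i + j) mod p for k = 1, ..., p-1 (p must be prime)
    return [[[(k * i + j) % p for j in range(p)] for i in range(p)]
            for k in range(1, p)]

def are_orthogonal(a, b):
    n = len(a)
    return len({(a[r][c], b[r][c]) for r in range(n) for c in range(n)}) == n * n

squares = mols_prime(5)
print(len(squares))                                                    # 4
print(all(are_orthogonal(a, b) for a, b in combinations(squares, 2)))  # True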
There exist non-Desarguesian projective planes and their corresponding complete sets of MOLS can not be obtained from finite fields. Orthogonal array An orthogonal array, OA(), of strength two and index one is an array ( ≥ 2 and ≥ 1, integers) with entries from a set of size such that within any two columns of (strength), every ordered pair of symbols appears in exactly one row of (index). An OA( + 2, ) is equivalent to MOLS(). For example, the MOLS(4) example given above and repeated here, can be used to form an OA(5,4): {| class="wikitable" |- ! r ! c ! L1 ! L2 ! L3 |- | 1 | 1 | 1 | 1 | 1 |- | 1 | 2 | 2 | 2 | 2 |- | 1 | 3 | 3 | 3 | 3 |- | 1 | 4 |4 |4 |4 |- | 2 | 1 | 2 | 4 | 3 |- |2 |2 |1 |3 |4 |- |2 |3 |4 |2 |1 |- |2 |4 |3 |1 |2 |- |3 |1 |3 |2 |4 |- |3 |2 |4 |1 |3 |- |3 |3 |1 |4 |2 |- |3 |4 |2 |3 |1 |- |4 |1 |4 |3 |2 |- |4 |2 |3 |4 |1 |- |4 |3 |2 |1 |4 |- |4 |4 |1 |2 |3 |} where the entries in the columns labeled r and c denote the row and column of a position in a square and the rest of the row for fixed r and c values is filled with the entry in that position in each of the Latin squares. This process is reversible; given an OA(,) with ≥ 3, choose any two columns to play the r and c roles and then fill out the Latin squares with the entries in the remaining columns. More general orthogonal arrays represent generalizations of the concept of MOLS, such as mutually orthogonal Latin cubes. Nets A (geometric) ()-net is a set of 2 elements called points and a set of subsets called lines or blocks each of size with the property that two distinct lines intersect in at most one point. Moreover, the lines can be partitioned into parallel classes (no two of its lines meet) each containing lines. An ( + 1, )-net is an affine plane of order . A set of MOLS() is equivalent to a ( + 2, )-net. To construct a ( + 2, )-net from MOLS(), represent the MOLS as an orthogonal array, OA( + 2, ) (see above). The ordered pairs of entries in each row of the orthogonal array in the columns labeled and , will be considered to be the coordinates of the 2 points of the net. Each other column (that is, Latin square) will be used to define the lines in a parallel class. The lines determined by the column labeled Li will be denoted by lij. The points on lij will be those with coordinates corresponding to the rows where the entry in the Li column is . There are two additional parallel classes, corresponding to the and columns. The lines j and j consist of the points whose first coordinates are , or second coordinates are respectively. This construction is reversible. For example, the OA(5,4) in the above section can be used to construct a (5,4)-net (an affine plane of order 4). 
The points on each line are given by (each row below is a parallel class of lines): {| class="wikitable" |- |11: |(1,1) (2,2) (3,3) (4,4) |12: |(1,2) (2,1) (3,4) (4,3) |13: |(1,3) (2,4) (3,1) (4,2) |14: |(1,4) (2,3) (3,2) (4,1) |- |21: |(1,1) (2,4) (3,2) (4,3) |22: |(1,2) (2,3) (3,1) (4,4) |23: |(1,3) (2,2) (3,4) (4,1) |24: |(1,4) (2,1) (3,3) (4,2) |- |31: |(1,1) (2,3) (3,4) (4,2) |32: |(1,2) (2,4) (3,3) (4,1) |33: |(1,3) (2,1) (3,2) (4,4) |34: |(1,4) (2,2) (3,1) (4,3) |- |1: |(1,1) (1,2) (1,3) (1,4) |2: |(2,1) (2,2) (2,3) (2,4) |3: |(3,1) (3,2) (3,3) (3,4) |4: |(4,1) (4,2) (4,3) (4,4) |- | 1: |(1,1) (2,1) (3,1) (4,1) | 2: |(1,2) (2,2) (3,2) (4,2) | 3: |(1,3) (2,3) (3,3) (4,3) | 4: |(1,4) (2,4) (3,4) (4,4) |} Transversal designs A transversal design with groups of size and index λ, denoted T[, λ; ], is a triple () where: is a set of varieties; } is a family of -sets (called groups, but not in the algebraic sense) which form a partition of ; is a family of -sets (called blocks) of varieties such that each -set in intersects each group in precisely one variety, and any pair of varieties which belong to different groups occur together in precisely λ blocks in . The existence of a T[,1;] design is equivalent to the existence of -2 MOLS(). A transversal design T[,1;] is the dual incidence structure of an ()-net. That is, it has points and 2 blocks. Each point is in blocks; each block contains points. The points fall into equivalence classes (groups) of size so that two points in the same group are not contained in a block while two points in different groups belong to exactly one block. For example, using the (5,4)-net of the previous section we can construct a T[5,1;4] transversal design. The block associated with the point () of the net will be denoted ij. The points of the design will be obtained from the following scheme: i ↔ , j ↔ 5, and ij ↔ 5 + . The points of the design are thus denoted by the integers 1, ..., 20. The blocks of the design are: {| class="wikitable" |- |11: |6 11 16 1 5 |22: |6 13 19 2 10 |33: |6 14 17 3 15 |44: |6 12 18 4 20 |- |12: |7 12 17 1 10 |21: |7 14 18 2 5 |34: |7 13 16 3 20 |43: |7 11 19 4 15 |- |13: |8 13 18 1 15 |24: |8 11 17 2 20 |31: |8 12 19 3 5 |42: |8 14 16 4 10 |- |14: |9 14 19 1 20 |23: |9 12 16 2 15 |32: |9 11 18 3 10 |41: |9 13 17 4 5 |} The five "groups" are: {| class="wikitable" |- |6 7 8 9 |- |11 12 13 14 |- |16 17 18 19 |- |1 2 3 4 |- |5 10 15 20 |} Graph theory A set of MOLS() is equivalent to an edge-partition of the complete ( + 2)-partite graph Kn,...,n into complete subgraphs of order + 2. Applications Mutually orthogonal Latin squares have a great variety of applications. They are used as a starting point for constructions in the statistical design of experiments, tournament scheduling, and error correcting and detecting codes. Euler's interest in Graeco-Latin squares arose from his desire to construct magic squares. The French writer Georges Perec structured his 1978 novel Life: A User's Manual around a 10×10 Graeco-Latin square.
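Returning to the orthogonal-array correspondence described earlier, the sketch below (Python; it reuses the prime-construction squares of order 5 as an illustrative input) lists one row (r, c, L1[r][c], L2[r][c], ...) per cell and verifies the strength-two property: every pair of columns contains each ordered pair of symbols exactly once.

from itertools import combinations

def mols_to_oa(squares):
    n = len(squares[0])
    return [[r, c] + [sq[r][c] for sq in squares]
            for r in range(n) for c in range(n)]

def has_strength_two(oa, n):
    for c1, c2 in combinations(range(len(oa[0])), 2):
        pairs = {(row[c1], row[c2]) for row in oa}
        if len(pairs) != n * n:
            return False
    return True

squares = [[[(k * i + j) % 5 for j in range(5)] for i in range(5)]
           for k in (1, 2, 3, 4)]
oa = mols_to_oa(squares)               # an OA(6, 5): 25 rows, 6 columns
print(len(oa), len(oa[0]), has_strength_two(oa, 5))   # 25 6 True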
Mathematics
Combinatorics
null
474119
https://en.wikipedia.org/wiki/Rhamphorhynchus
Rhamphorhynchus
Rhamphorhynchus (, from Ancient Greek rhamphos meaning "beak" and rhynchus meaning "snout") is a genus of long-tailed pterosaurs in the Jurassic period. Less specialized than contemporary, short-tailed pterodactyloid pterosaurs such as Pterodactylus, it had a long tail, stiffened with ligaments, which ended in a characteristic soft-tissue tail vane. The mouth of Rhamphorhynchus housed needle-like teeth, which were angled forward, with a curved, sharp, beak-like tip lacking teeth, indicating a diet mainly of fish; indeed, fish and cephalopod remains are frequently found in Rhamphorhynchus abdominal contents, as well as in their coprolites. Although fragmentary fossil remains possibly belonging to Rhamphorhynchus have been found in England, Tanzania, and Spain, the best preserved specimens come from the Solnhofen limestone of Bavaria, Germany. Many of these fossils preserve not only the bones but impressions of soft tissues, such as wing membranes and probably pycnofibers. Scattered teeth believed to belong to Rhamphorhynchus have been found in Portugal as well. History and classification Early research The classification and taxonomy of Rhamphorhynchus, like many pterosaur species known since the Victorian era, is complex, with a long history of reclassification under a variety of names, often for the same specimens. The first named specimen of Rhamphorhynchus was brought to the attention of Samuel Thomas von Sömmerring by the collector Georg Graf zu Münster in 1825. Von Sömmerring concluded that it belonged to an ancient bird. When further preparation uncovered teeth, Graf zu Münster sent a cast to Professor Georg August Goldfuss, who recognised it as a pterosaur. Like most pterosaurs described in the mid 19th century, Rhamphorhynchus was originally considered to be a species of Pterodactylus. However, at the time, many scientists incorrectly considered Ornithocephalus to be the valid name for Pterodactylus. This specimen of Rhamphorhynchus was therefore originally named Ornithocephalus Münsteri. This was first mentioned in 1830 by Graf zu Münster himself. However, the description making the name valid was given by Goldfuss in an 1831 follow-up to Münster's short paper. Note that the ICZN later ruled that non-standard Latin characters, such as ü, would not be allowed in scientific names, and the spelling münsteri was emended to muensteri by Richard Lydekker in 1888. In 1839, Münster described another specimen that he considered to belong to Ornithocephalus (i.e. Pterodactylus), with a distinctive long tail. He named it Ornithocephalus longicaudus, meaning "long tail", to differentiate it from the specimens with short tails (the true specimens of Pterodactylus). In 1845, Hermann von Meyer officially emended the original species Ornithocephalus münsteri to Pterodactylus münsteri, since the name Pterodactylus had been by that point recognized as having priority over Ornithocephalus. In a subsequent 1846 paper describing a new species of long-tailed 'pterodactyl', von Meyer decided that the long-tailed forms of Pterodactylus were different enough from the short-tailed forms to warrant placement in a subgenus, and he named his new species Pterodactylus (Rhamphorhynchus) gemmingi after a specimen owned by collector Captain Carl Eming von Gemming that was later by von Gemming sold for three hundred guilders to the Teylers Museum in Haarlem. 
It was not until 1847 that von Meyer elevated Rhamphorhynchus to a full-fledged genus, and officially included in it both long-tailed species of Pterodactylus known at the time, R. longicaudus (the original species preserving a long tail) and R. gemmingi. The type species of Rhamphorhynchus is R. longicaudus; its type specimen or holotype was also sold to the Teylers Museum, where it still resides as TM 6924. The original species, Pterodactylus münsteri, remained misclassified until a re-evaluation was published by Richard Owen in an 1861 book, in which he renamed it as Rhamphorhynchus münsteri. Modern research The type specimen of R. muensteri, described by Münster and Goldfuss, was lost during World War II. When a type specimen is lost or deemed too poorly preserved, a new specimen, or neotype, may be designated in its place. Peter Wellnhofer declined to designate a neotype in his 1975 review of the genus, because a number of high-quality casts of the original specimen were still available in museum collections. These can serve as plastotypes. By the 1990s (and following Wellnhofer's consolidation of many previously named species), about five species of Rhamphorhynchus were recognized from the Solnhofen limestone of Germany, with a few others having been named from Africa, Spain, and the UK based on fragmentary remains. Most of the Solnhofen species were differentiated based on their relative size and size-related features, such as the relative length of the skull. In 1995, pterosaur researcher Chris Bennett published an extensive review of the currently recognized German species. Bennett concluded that all the supposedly distinct German species were actually different year-classes of a single species, R. muensteri, representing distinct age groups, with the smaller species being juveniles and the larger adults. Bennett's paper did not cover the British and African species, though he suggested that these should be considered indeterminate members of the family Rhamphorhynchidae and not necessarily species of Rhamphorhynchus itself. Despite the reduction of the genus to a single species, the type species remains R. longicaudus. In 2015, a new species of Rhamphorhynchus, R. etchesi, was named for associated remains of a left and right wing from the Kimmeridge Clay in the United Kingdom; the name commemorates the discoverer, Steve Etches, a local collector of the fossils of the Kimmeridge Clay. It is distinguished from other species of Rhamphorhynchus by "the unique length ratio between wing phalanx 1 and wing phalanx 2". Phylogeny The cladogram below is the result of a large phylogenetic analysis published by Brian Andres & Timothy Myers in 2013. The species R. muensteri was recovered within the family Rhamphorhynchidae, sister taxon to both Cacibupteryx and Nesodactylus. Description The largest known specimen of Rhamphorhynchus muensteri (catalog number NHMUK PV OR 37002) has an estimated wingspan of . A very large, fragmentary rhamphorhynchid specimen from Ettling in Germany may also belong to the genus, in which case Rhamphorhynchus would be the largest known non-pterodactyloid pterosaur and one of the largest pterosaurs known from the Jurassic. This specimen represents an individual around 180% the size of the next largest specimen of the genus, with an estimated wingspan of over 3 metres.
Skull Contrary to a 1927 report by pterosaur researcher Ferdinand Broili, Rhamphorhynchus lacked any bony or soft tissue crest, as seen in several species of contemporary small pterodactyloid pterosaurs. Broili claimed to have found a two-millimeter-tall crest made of thin bone that ran much of the skull's length in one Rhamphorhynchus specimen, evidenced by an impression in the surrounding rock and a few small fragments of the crest itself. However, subsequent examination of this specimen by Wellnhofer in 1975 and Bennett in 2002 using both visible and ultraviolet light found no trace of a crest; both concluded that Broili was mistaken. The supposed crest, they concluded, was simply an artifact of preservation. The teeth of Rhamphorhynchus intermesh when the jaw is closed and are suggestive of a piscivorous diet. There are twenty teeth in the upper jaws and fourteen in the lower jaws. Paleobiology Life history Traditionally, the large size variation between specimens of Rhamphorhynchus has been taken to represent species variation. However, in a 1995 paper, Bennett argued that these "species" actually represent year-classes of a single species, Rhamphorhynchus muensteri, from flaplings to adults. Following from this interpretation, Bennett found several notable changes that occurred in R. muensteri as the animal aged. The smallest known Rhamphorhynchus specimen has a wingspan of only ; however, it is likely that even such a small individual was capable of flight. Bennett examined two possibilities for hatchlings: that they were altricial, requiring some period of parental care before leaving the nest, or that they were precocial, hatching with sufficient size and ability for flight. If precocial, Bennett suggested that clutches would be small, with only one or two eggs laid per clutch, to compensate for the relatively large size of the hatchlings. Bennett did not speculate on which possibility was more likely, though the discovery of a pterosaur embryo (Avgodectes) with strongly ossified bones suggests that pterosaurs in general were precocial, able to fly soon after hatching with minimal parental care. This theory was contested by a histological study of Rhamphorhynchus that showed the initial rapid growth was followed by a prolonged period of slow growth. Juvenile Rhamphorhynchus had relatively short skulls with large eyes, and the toothless beak-like tips of the jaws were shorter in juveniles than in adults, with rounded, blunt lower jaw tips eventually becoming slender and pointed as the animals grew. Adult Rhamphorhynchus also developed a strong upward "hook" at the end of the lower jaw. The number of teeth remained constant from juvenile to adult, though the teeth became relatively shorter and stockier as the animals grew, possibly to accommodate larger and more powerful prey. The pelvic and pectoral girdles fused as the animals aged, with full pectoral fusion attained by one year of age. The shape of the tail vane also changed across various age classes of Rhamphorhynchus. In juveniles, the vane was shallow relative to the tail and roughly oval, or "lancet-shaped". As growth progressed, the tail vane became diamond-shaped, and finally triangular in the largest individuals. In 2020, published ontogenetic analyses indicated that Rhamphorhynchus could fly soon after hatching, supporting the theory of precociality in the species. The study supported the conclusion that juveniles may have occupied different sequential niches throughout their growth as they matured.
A 2024 paper by David Hone and Skye McDavid described a complete, three-dimensional specimen of Rhamphorhynchus, NHMUK PV OR 37002, noted for its abnormally large size. With an estimated wingspan of , it is over a third larger than the next largest specimen, and more than 60% larger than most individuals. Prior research had even considered the specimen to represent a distinct species, R. longiceps, named in 1902 by Arthur Smith Woodward. Hone and McDavid, however, interpret it as representing the same species as other known specimens, R. muensteri, finding the identifying traits of the species to be present and explaining differences as growth-related. The majority of known specimens show a lack of complete bone fusion and are interpreted as immature, a conclusion of Bennett's 1995 study; the "R. longiceps" specimen and other abnormally large individuals are then interpreted as rare preservations of fully grown adults. Overall, the specimen was noted to be proportionally similar to other individuals, but informative of various changes in later life. The toothless end of the snout was noted to be shorter and deeper than in younger specimens, with the jaw as a whole being narrower relative to its length. The back of the skull, meanwhile, was wider, and the lower temporal fenestrae had radically changed shape from a slitlike hole to a wide trapezoidal one. The teeth are thin and flattened side to side, whereas those of typical younger individuals are round in cross-section. The unchanged wing proportions of the animal were noted as being unexpected, as a larger animal would have altered biomechanical needs; this may indicate differences in flight behaviour in larger individuals. Similarly, an altered shape of the indicates a potentially weakened ability to launch into flight from the water. Taken together, the modified skull anatomy and altered flight behaviour support the idea of ecological differences across life stages. Especially large and old individuals may have shifted their diet away from fish and cephalopods, with the more powerful skull and cutting teeth potentially used to hunt tetrapods in more terrestrial settings. This could also explain the lack of such adults within the marine Solnhofen beds, and the large size of NHMUK PV OR 37002 may have in fact been especially unusual amongst unpreserved individuals living in other environments.
Because there is no evidence for either in Rhamphorhynchus, Bennett considered his findings consistent with an ectothermic metabolism, though he recommended more studies needed to be done. Cold-blooded Rhamphorhynchus, Bennett suggested, may have basked in the sun or worked their muscles to accumulate enough energy for bouts of flight, and cooled to ambient temperature when not active to save energy, like modern reptiles. Swimming Though Rhamphorhynchus is often depicted as an aerial piscivore, recent evidence suggests that, much like most modern aquatic birds, it probably foraged while swimming. Like several pteranodontians it has hatchet-shaped deltopectoral crests, a short torso and short legs, all features associated with water based launching in pterosaurs. Its feet are broad and large, being useful for propulsion, and the predicted floating position is adequate by pterosaur standards. The animal's ability to swim may account for the genus' generally excellent fossil record, being in a position where preservation would be much easier. Sexual dimorphism Both Koh Ting-Pong and Peter Wellnhofer recognized two distinct groups among adult Rhamphorhynchus muensteri, differentiated by the proportions of the neck, wing, and hind limbs, but particularly in the ratio of skull to humerus length. Both researchers noted that these two groups of specimens were found in roughly a 1:1 ratio, and interpreted them as different sexes. Bennett tested for sexual dimorphism in Rhamphorhynchus by using a statistical analysis, and found that the specimens did indeed group together into small-headed and large-headed sets. However, without any known variation in the actual form of the bones or soft tissue (morphological differences), he found the case for sexual dimorphism inconclusive. A 2024 study by Habib and Hone et al., suggests that the high degree of tail variation in mature specimens may represent increased sexual selection in Rhamphorhynchus, though it is equally likely a result of reduced flight constraint. Head orientation In 2003, a team of researchers led by Lawrence Witmer studied the brain anatomy of several types of pterosaurs, including Rhamphorhynchus muensteri, using endocasts of the brain they retrieved by performing CAT scans of fossil skulls. Using comparisons to modern animals, they were able to estimate various physical attributes of pterosaurs, including relative head orientation during flight and coordination of the wing membrane muscles. Witmer and his team found that Rhamphorhynchus held its head parallel to the ground due to the orientation of the osseous labyrinth of the inner ear, which helps animals detect balance. In contrast, pterodactyloid pterosaurs, such as Anhanguera, appear to have normally held their heads at a downward angle, both in flight and while on the ground. Daily activity patterns Comparisons between the scleral rings of Rhamphorhynchus and modern birds and reptiles suggest that it may have been nocturnal, and may have had activity patterns similar to those of modern nocturnal seabirds. This may also indicate niche partitioning with contemporary pterosaurs inferred to be diurnal, such as Scaphognathus and Pterodactylus. Ecology Several limestone slabs have been discovered in which fossils of Rhamphorhynchus are found in close association with the ganoid fish Aspidorhynchus. In one of these specimens, the jaws of an Aspidorhynchus pass through the wings of the Rhamphorhynchus specimen. 
The Rhamphorhynchus also has the remains of a small fish, possibly Leptolepides, in its throat. This slab, cataloged as WDC CSG 255, may represent two levels of predation; one by Rhamphorhynchus and one by Aspidorhynchus. In a 2012 description of WDC CSG 255, researchers proposed that the Rhamphorhynchus individual had just caught a Leptolepides while it was swimming. As the Leptolepides was travelling down its pharynx, a large Aspidorhynchus would have attacked from below the water, accidentally puncturing the left wing membrane of the Rhamphorhynchus with its sharp rostrum in the process. The teeth in its snout were ensnared in the fibrous tissue of the wing membrane, and as the fish thrashed to release itself the left wing of Rhamphorhynchus was pulled backward into the distorted position seen in the fossil. The encounter resulted in the death of both individuals, most likely because the two animals sank into an anoxic layer in the water body, depriving the fish of oxygen. The two may have been preserved together as the weight of the head of Aspidorhynchus held down the much lighter body of Rhamphorhynchus. Putative coprolites have been also found in association with specimens of Rhamphorhynchus muensteri, and probable gut contents in other specimens include fragmentary remains of fish and indeterminate vertebrates. "Odontorhynchus" "Odontorhynchus" aculeatus was based on a skull with lower jaws that is now lost. This set of jaws supposedly differed in having two teeth united at the tip of the lower jaw, and none at the tip of the upper jaw. The skull was , making it a small form. Stolley, who described the specimen in 1936, argued that R. longicaudus also should be reclassified in the genus "Odontorhynchus". Both Koh and Wellnhofer rejected this idea, arguing instead that "Odontorhynchus" was a junior synonym of R. longicaudus. Bennett agreed with their assessments, and included both "Odontorhynchus" and R. longicaudus as synonyms of R. muensteri.
Biology and health sciences
Pterosaurs
Animals
474404
https://en.wikipedia.org/wiki/High-pressure%20area
High-pressure area
A high-pressure area, high, or anticyclone, is an area near the surface of a planet where the atmospheric pressure is greater than the pressure in the surrounding regions. Highs are middle-scale meteorological features that result from interplays between the relatively larger-scale dynamics of an entire planet's atmospheric circulation. The strongest high-pressure areas result from masses of cold air which spread out from polar regions into cool neighboring regions. These highs weaken once they extend out over warmer bodies of water. Weaker—but more frequently occurring—are high-pressure areas caused by atmospheric subsidence: air becomes cool enough to precipitate out its water vapor, and large masses of cooler, drier air descend from above. Within high-pressure areas, winds flow from where the pressure is highest, at the center of the area, towards the periphery where the pressure is lower. However, the direction is not straight from the center outwards, but curved due to the Coriolis effect from Earth's rotation. Viewed from above, the wind direction is bent in the direction opposite to the planet's rotation; the same deflection, acting on air converging into low-pressure areas, produces the characteristic spiral shape of tropical cyclones, otherwise known as hurricanes and typhoons. On English-language weather maps, high-pressure centers are identified by the letter H. Weather maps in other languages may use different letters or symbols. Wind circulation in the northern and southern hemispheres The direction of wind flow around an atmospheric high-pressure area and a low-pressure area, as seen from above, depends on the hemisphere. High-pressure systems rotate clockwise in the Northern Hemisphere and counterclockwise in the Southern Hemisphere; low-pressure systems rotate in the opposite sense in each hemisphere. High-pressure systems in the temperate latitudes generally bring warm weather in summer, when the amount of heat received from the Sun during daytime exceeds what is lost at night, and cold weather in winter when the amount of heat lost at night exceeds what is gained during daytime. In the Southern Hemisphere the result is similar. Australia and the southern cone of South America get hot, dry summer weather from the subtropical ridge and cooler, wetter winter weather as cold fronts from the southern oceans take over. The term cyclone was coined by Henry Piddington of the British East India Company to describe the devastating storm of December 1789 in Coringa, India. A cyclone forms around a low-pressure area. Anticyclone, the term for the kind of weather around a high-pressure area, was coined in 1877 by Francis Galton. A simple rule is that for high-pressure areas, where air generally flows from the center outward, the Coriolis force imparted to the circulating air by the Earth's rotation turns it in the direction opposite to the Earth's apparent rotation as viewed from above the hemisphere's pole. So, both the Earth and the winds around a low-pressure area rotate counter-clockwise in the Northern Hemisphere, and clockwise in the Southern; the winds around a high rotate in the opposite sense in each hemisphere. These results derive from the Coriolis effect. Formation High-pressure areas form due to downward motion through the troposphere, the atmospheric layer where weather occurs. Preferred areas within a synoptic flow pattern in higher levels of the troposphere are beneath the western side of troughs.
On weather maps, these areas show converging winds (isotachs), also known as convergence, near or above the level of non-divergence, which is near the 500 hPa pressure surface, about midway up through the troposphere and at about half the atmospheric pressure found at the surface. High-pressure systems are also called anticyclones. On English-language weather maps, high-pressure centers are identified by the letter H, within the isobar with the highest pressure value. On constant pressure upper level charts, the high is located within the highest height line contour. Typical conditions Highs are frequently associated with light winds at the surface and subsidence through the lower portion of the troposphere. In general, subsidence will dry out an air mass by adiabatic, or compressional, heating. Thus, high pressure typically brings clear skies. During the day, since no clouds are present to reflect sunlight, there is more incoming shortwave solar radiation and temperatures rise. At night, the absence of clouds means that outgoing longwave radiation (i.e. heat energy from the surface) is not absorbed, giving cooler diurnal low temperatures in all seasons. When surface winds become light, the subsidence produced directly under a high-pressure system can lead to a buildup of particulates in urban areas under the ridge, leading to widespread haze. If the low-level relative humidity rises towards 100 percent overnight, fog can form. Strong, vertically shallow high-pressure systems moving from higher latitudes to lower latitudes in the northern hemisphere are associated with continental arctic air masses. Once arctic air moves over an unfrozen ocean, the air mass modifies greatly over the warmer water and takes on the character of a maritime air mass, which reduces the strength of the high-pressure system. When extremely cold air moves over relatively warm oceans, polar lows can develop. However, warm and moist (or maritime tropical) air masses that move poleward from tropical sources are slower to modify than arctic air masses. In climatology The horse latitudes, roughly at the 30th parallel, are the source of warm high-pressure systems. As the hot air closer to the equator rises, it cools, losing moisture; it is then transported poleward, where it descends, creating the high-pressure area. This is part of the Hadley cell circulation and is known as the subtropical ridge or subtropical high. It follows the track of the sun over the year, expanding north (south in the Southern Hemisphere) in spring and retreating south (north in the Southern Hemisphere) in fall. The subtropical ridge is a warm core high-pressure system, meaning it strengthens with height. Many of the world's deserts are caused by these climatological high-pressure systems. Some climatological high-pressure areas acquire regionally based names. The land-based Siberian High often remains quasi-stationary for more than a month during the most frigid time of the year, making it unique in that regard. It is also somewhat larger and more persistent than its counterpart in North America. Surface winds accelerating out of this high down valleys along the western Pacific Ocean coastline cause the winter monsoon. Arctic high-pressure systems such as the Siberian High are cold core, meaning that they weaken with height. The influence of the Azores High, also known as the Bermuda High, brings fair weather over much of the North Atlantic Ocean and mid to late summer heat waves in western Europe.
Along its southerly periphery, the clockwise circulation often impels easterly waves, and tropical cyclones that develop from them, across the ocean towards landmasses in the western portion of ocean basins during the hurricane season. The highest barometric pressure ever recorded on Earth was measured in Tosontsengel, Zavkhan, Mongolia on 19 December 2001. A particularly hot summer, such as that of 2003, in which the subtropical ridge expands more than usual, can bring heat waves as far north as Scandinavia. Conversely, while Europe had record-breaking summer heat in 2003 due to a particularly strong subtropical ridge, its counterpart in North America was unusually weak, and conditions across the continent that spring and summer were wet, with temperatures well below normal. Connection to wind Wind flows from areas of high pressure to areas of low pressure. This is due to density differences between the two air masses. Since stronger high-pressure systems contain cooler or drier air, the air mass is more dense and flows towards areas that are warm or moist, which are in the vicinity of low-pressure areas in advance of their associated cold fronts. The stronger the pressure difference, or pressure gradient, between a high-pressure system and a low-pressure system, the stronger the wind. The Coriolis force caused by the Earth's rotation is what gives winds within high-pressure systems their clockwise circulation in the northern hemisphere (as the wind moves outward and is deflected right from the center of high pressure) and counterclockwise circulation in the southern hemisphere (as the wind moves outward and is deflected left from the center of high pressure). Friction with land slows down the wind flowing out of high-pressure systems and causes wind to flow more outward than would be the case in the absence of friction. This results in the 'actual wind' or 'true wind', including ageostrophic corrections, which add to the geostrophic wind that is characterized by flow parallel to the isobars.
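The relationship between the pressure gradient and wind speed described above can be illustrated with the geostrophic approximation, in which the pressure gradient force balances the Coriolis force and the flow runs parallel to the isobars. The short Python sketch below is a rough illustration, not an operational formula: the air density, latitude, and isobar spacing are assumed round example values, and friction (the ageostrophic correction mentioned above) is ignored.

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate in rad/s
RHO = 1.225        # assumed near-surface air density in kg/m^3

def geostrophic_wind(delta_p_pa, distance_m, latitude_deg):
    """Geostrophic wind speed V = (1 / (rho * f)) * dp/dn,
    where f = 2 * Omega * sin(latitude) is the Coriolis parameter."""
    f = 2.0 * OMEGA * math.sin(math.radians(latitude_deg))
    return (delta_p_pa / distance_m) / (RHO * f)

# Example: isobars 2 hPa (200 Pa) apart, spaced 100 km apart, at 45 degrees latitude.
speed = geostrophic_wind(delta_p_pa=200.0, distance_m=100_000.0, latitude_deg=45.0)
print(f"Geostrophic wind ~ {speed:.1f} m/s")  # roughly 16 m/s; surface friction reduces this
```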
Physical sciences
Atmospheric circulation
null
3543585
https://en.wikipedia.org/wiki/Joule-second
Joule-second
The joule-second (symbol J⋅s or J s) is the unit of action and of angular momentum in the International System of Units (SI), equal to the product of an SI derived unit, the joule (J), and an SI base unit, the second (s). The joule-second also appears in quantum mechanics within the definition of the Planck constant. Angular momentum is the product of an object's moment of inertia, in units of kg⋅m2, and its angular velocity, in units of rad⋅s−1. This product of moment of inertia and angular velocity yields kg⋅m2⋅s−1, or the joule-second. The Planck constant represents the energy of a wave, in units of joule, divided by the frequency of that wave, in units of s−1. This quotient of energy and frequency also yields the joule-second (J⋅s). Base units In SI base units the joule-second becomes the kilogram-meter squared per second, or kg⋅m2⋅s−1. Dimensional analysis of the joule-second yields M L2 T−1. Note the denominator of seconds (s) in the base units. Confusion with joules per second The joule-second (J⋅s) should not be confused with joules per second (J/s) or watts (W). In physical processes, when the unit of time appears in the denominator of a ratio, the described process occurs at a rate. For example, in discussions about speed, an object like a car travels a known number of kilometers over a known time, and the car's speed is measured in the unit kilometer per hour (km/h). In physics, work per time describes a system's power, with the unit watt (W), which is equal to joules per second (J/s).
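Both routes to the joule-second described above, moment of inertia times angular velocity and energy divided by frequency, can be checked with simple arithmetic. The Python sketch below is illustrative only: the disk mass, radius, spin rate, and light frequency are arbitrary example values, while the Planck constant is its exact SI value.

```python
# Angular momentum of a uniform spinning disk: L = I * omega, with I = (1/2) * m * r^2.
mass_kg = 2.0           # assumed disk mass
radius_m = 0.1          # assumed disk radius
omega_rad_per_s = 30.0  # assumed angular velocity

moment_of_inertia = 0.5 * mass_kg * radius_m ** 2       # kg*m^2
angular_momentum = moment_of_inertia * omega_rad_per_s  # kg*m^2*s^-1, i.e. J*s
print(f"L = {angular_momentum:.3f} J*s")

# Planck relation: E = h * f, so h = E / f carries units of J / s^-1 = J*s.
h = 6.62607015e-34      # Planck constant in J*s (exact by definition in the SI)
frequency_hz = 5.0e14   # assumed frequency of a visible-light wave
energy_j = h * frequency_hz
print(f"E = {energy_j:.3e} J, E / f = {energy_j / frequency_hz:.3e} J*s")
```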
Physical sciences
Energy
Basics and measurement
3546942
https://en.wikipedia.org/wiki/Domestic%20pigeon
Domestic pigeon
The domestic pigeon (Columba livia domestica or Columba livia forma domestica) is a pigeon subspecies that was derived from the rock dove or rock pigeon. The rock pigeon is the world's oldest domesticated bird. Mesopotamian cuneiform tablets mention the domestication of pigeons more than 5,000 years ago, as do Egyptian hieroglyphics. Pigeons were most likely domesticated in the Mediterranean at least 2000–5000 years ago, and may have been domesticated earlier as a food source. Some research suggests that domestication occurred as early as 10,000 years ago. Pigeons have held historical importance to humans as food, pets, holy animals, and messengers. Due to their homing ability, pigeons have been used to deliver messages, including during the world wars. Despite this, city pigeons, which are feral birds, are generally seen as pests, mainly due to their droppings. Feral pigeons are considered invasive in many parts of the world, though they have a positive impact on wild bird populations, serving as an important prey species for birds of prey. History of domestication Despite the long history of pigeons, little is known about the specifics of their initial domestication. Which subspecies of C. livia was the progenitor of domestics, exactly when, how many times, where and how they were domesticated, and how they spread, remains unknown. Their fragile bones and similarity to wild birds make the fossil record a poor tool for their study. Thus most of what is known comes from written accounts, which almost certainly do not cover the first stages of domestication. The earliest recorded mention of pigeons comes from Mesopotamia some 5,000 years ago. Pigeon Valley in Cappadocia has rock formations that were carved into ancient dovecotes. Ancient Egyptians kept vast quantities of them, and would sacrifice tens of thousands at a time for ritual purposes. Akbar the Great traveled with a coterie of thousands of pigeons. The domestic pigeon was brought to the Americas by European colonists as an easy source of food and as messengers. Around the 18th century, European interest in fancy pigeons began, and breeders there greatly expanded the variety of pigeons, importing birds from the Middle East and South Asia and mixing different breeds to create new ones. Because domestic and feral pigeons have extensively interbred with wild rock doves, genetically pure wild-type pigeons may not exist anymore, or are nearly extinct. This frequent admixture further muddies the true origins of pigeons. Genetics From a genetic perspective, there are two loose ancestral clades of pigeons, but there is striking genetic homogeneity due to frequent interbreeding and human directed cross-breeding; pigeon fanciers often do not enforce breed standards, unlike with dogs. The first ancestral clade contains pigeons with exaggerated crops, tails, and manes; the second contains tumblers (the most diverse group), homing pigeons, owl pigeons, and those with exaggerated wattles. Over the millennia of human interaction with pigeons, a multitude of pigeon breeds have been created, which differ in either plumage or body structure. Markings A wild-type pigeon is closest in markings to the rock dove, which possesses a gradienting, slate-grey head and body with a green-purple iridescent neck, and ash-grey wings and tail with dark, often black, barring. 
Due to millennia of selective breeding, including crossing with other Columba species, domestic pigeons possess major variations in plumage; often two birds from the same clutch may be of different color. The domestic pigeon possesses three main colors: the wild-type blue, brown, and ash-red. This variation in color is linked to the parents' sex chromosomes; as animals with the ZW chromosome system, cockbirds possess the color genes from both parents, while hens only inherit their father's color and patterns. Additionally, there is some dominance observed; ash-red is dominant over the other two base colors, while blue is dominant over brown. Recessive red is a unique color which is inherited differently from the three base ones; it is distinct from ash-red in that the bird is always a uniform chestnut color. Another important aspect of pigeon markings is the pattern on the wing coverts, which exists in four variants: wild-type bar, check, T-check, and barless. T-check is the most dominant pattern, followed by check, barred, and the least dominant barless pattern. Additionally, the modifiers spread and dilute affect the expression of the color; the spread gene spreads the color of the bird's tail to its entire body, while dilute lightens the bird's overall color, as if it were a dye being diluted to reduce its saturation. Crest A recessive allele in the EphB2 gene controls the crested-feather mutation in domestic pigeons. Pigeons with two copies of the crest allele grow neck and head feathers that point towards the top of the head, unlike other feathers that point towards the tail. Additionally, bacterial growth analysis suggests that crested pigeons have reduced bacterial-killing abilities due to reduced kinase activity. Pigeons may express the crest gene differently depending on their genetic heritage; two squabs from the same brood descending from the same pair may have one bird develop a peak crest, and the other a wild-type smooth head. Foot feathering Pigeons with feathers growing on their hind feet have differently expressed genes: a hindlimb-development gene called PITX1 is less active than normal, and a forelimb-development gene called Tbx5 that normally develops the wings is also active in the feet, causing both feather growth and larger leg bones. The cause of these changes is a change in the regulatory sequences of DNA that control the expression of the Pitx1 and the Tbx5 genes, rather than mutations in the genes themselves. Pigeon foot feathering has been speculated to use pathways similar to those of extinct microraptorian dinosaurs, although in pigeons the foot feathering does not form an airfoil. Hybridization There is strong evidence that some divergences in appearance between the wild-type rock dove and domestic pigeons, such as checkered wing patterns and red/brown coloration, may be due to introgression by cross-breeding with the speckled pigeon. Domestic pigeons may be crossed with the ringneck dove (Streptopelia risoria) to create offspring, but the offspring are not fertile. Life history Reproduction Domestic pigeons reproduce exactly as wild rock pigeons do: settling in a safe, cool nook, building a flimsy stick nest, and laying two eggs that are incubated for a little longer than two weeks. A pigeon keeper may select breeding partners, but in an open loft the birds choose their own mate. Both sexes of pigeons are extremely protective of their eggs and young, and often defend them vigorously from nest predators, including their human keepers.
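The sex-linked base-color inheritance described in the Markings section above (cocks carry two Z-linked color alleles, hens carry only the single Z allele inherited from their father, and ash-red is dominant over blue, which is dominant over brown) can be illustrated with a small sketch. The Python below is a deliberately simplified model: it ignores recessive red, the pattern locus, and modifiers such as spread and dilute, and the example cross is hypothetical.

```python
# Simplified model of Z-linked base color in pigeons (ZW sex determination).
DOMINANCE = ["ash-red", "blue", "brown"]  # earlier entries dominate later ones

def expressed_color(z_alleles):
    """Return the most dominant allele among those carried on the Z chromosome(s)."""
    return min(z_alleles, key=DOMINANCE.index)

def possible_offspring(cock_z_alleles, hen_z_allele):
    """Enumerate offspring of a cock (ZZ) x hen (ZW) cross under this model."""
    results = []
    for z_from_father in cock_z_alleles:
        # Sons (ZZ) receive one Z from each parent; daughters (ZW) receive their
        # single Z from the father, so they show whichever allele he passes on.
        results.append(("cock", expressed_color((z_from_father, hen_z_allele))))
        results.append(("hen", expressed_color((z_from_father,))))
    return results

# Hypothetical example: an ash-red cock carrying blue, crossed to a blue hen.
for sex, color in possible_offspring(("ash-red", "blue"), "blue"):
    print(sex, color)
```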
Baby pigeons are called squabs, squeakers, or peeps, the latter two being a reference to their cry when begging for food. Initially, the squabs are fed by their parents with crop milk (or, when human-raised, an appropriate substitute); some breeds are bred into such debilitating forms that they may require human intervention to produce squabs successfully, which necessitates feeding their chicks with special squab formula or fostering them under another pair of pigeons. As they grow and become more mobile and alert, their parents transition them to their adult food of seeds and grains, and after fledging the chicks will follow their parents to the communal feeding ground: areas with plentiful forage that a pigeon flock uses. Here the chicks gain their independence and integrate into pigeon society. Domestic pigeons were selected to breed faster than their wild ancestors; a lack of a breeding season, abundance of food in a domestic setting, and swift maturity (squabs fledge in about a month, and often have already bred and fledged a few clutches of their own before reaching a year in age) lead to swift population growth of pigeons in the flock. This fact, and the number of pigeons lost in races or intentionally released, leads to exponential growth in free-living, feral populations. Pigeon-related illness Pigeon breeders sometimes suffer from an ailment known as bird fancier's lung or pigeon lung. A form of hypersensitivity pneumonitis, pigeon lung is caused by the inhalation of the avian proteins found in feathers and dung. It can sometimes be combated by wearing a filtered mask. Other pigeon-related pathogens causing lung disease are Chlamydophila psittaci (which causes psittacosis), Histoplasma capsulatum (which causes histoplasmosis) and Cryptococcus neoformans, which causes cryptococcosis. Uses For food Pigeons bred for meat are generally referred to as a meat or utility breed. The term "squab" can refer either to young birds or to the meat harvested from them; these birds grow to a very large size in the nest before they fledge and are able to fly; during this stage of development they are often fattier and seen as being tastier than the fully-flighted adults. Squabs during this stage are valued as food; in Neolithic and early agricultural communities they were an easy and reliable source of protein, the birds requiring only reliable sources of grains and water (which they independently foraged for) to enter breeding condition, and the rock formations they nested in would have made for attractive dwellings for early humans. Pigeon meat, both from squabs and from adult birds, is still a source of protein for people worldwide. Breeds of pigeons harvested for their meat during adulthood are collectively known as utility pigeons. For commercial meat production a breed of large white pigeon, the King pigeon, has been developed by selective breeding. Homing pigeons Homing pigeons are a specialized type of pigeon bred for navigation and speed. Originally developed through selective breeding to carry messages, members of this variety of pigeon are still used in the sport of pigeon racing and the ceremony of releasing white doves at social events. These breeds of domestic pigeons, especially when trained, are able to return to the home loft if released at a location that they have never visited before and that may be up to away. This ability of a pigeon to return home from a foreign location necessitates two sorts of information. The first, called "map sense", is their geographic location.
The second, "compass sense" is the bearing they need to fly from their new location to reach their home. Both of these senses, however, respond to a number of different cues in different situations. The most popular conception of how pigeons are able to do this is that they are able to sense the Earth's magnetic field with tiny magnetic tissues in their head (magnetoception). Another theory is that pigeons have compass sense, which uses the position of the sun, along with an internal clock, to work out direction. However, studies have shown that if magnetic disruption or clock changes disrupt these senses, the pigeon can still manage to get home. The variability in the effects of manipulations to these sense of the pigeons indicates that there is more than one cue on which navigation is based and that map sense appears to rely on a comparison of available cues. Other potential cues used include The use of a sun compass Nocturnal navigation by stars Visual landmark map Navigation by infrasound map Polarised light compass Olfactory stimuli (see also olfactory navigation) Display Flying/sporting Pigeons are also kept by enthusiasts for the enjoyment of Flying/Sporting competitions. Unlike racers, these birds are not released far from their home lofts; breeds such as tipplers are bred for the ability to hover above the loft for hours at a time. Their ability to hover for a long time shows the ability of the keeper to select for endurance. Wild pigeons naturally somersault when evading aerial predators such as large-bodied falcons; they are naturally selected by the extreme speeds that some stooping falcons reach (over 320 km/h (200 mph)), being able to dodge this attack at the last second. Tumbler and roller pigeons are bred to enhance this ability; some birds have been recorded to be able to somersault on the ground and land on its feet, and some breeds are even deliberately bred to a point where the rolling ability is debilitative, being wholly unable to fly due to it. Exhibition breeds Pigeon fanciers developed many exotic forms of pigeon through selective breeding. Perhaps the simplest form of display pigeon are those of white plumage, either truly albino or merely white-feathered; these white birds were seen as holy animals or heralds of peace and are well represented in both ancient and contemporary culture. As pigeonkeepers accrued more experience, they started selecting for increasingly more unusual features in their birds; features such as unusual plumage patterns and colors, various crests, foot feathering, altered stance and proportion, or unusual behaviors are well represented in extant pigeon breeds. These birds are generally classed as fancy pigeons. Pigeon shows are conventions where pigeon fanciers and breeders meet to compete and trade their fancy pigeons. The various pigeon breeds dubbed "American show" were developed specifically by pigeon show frequenters pursuing a certain show standard determined by the National Pigeon Association. Fanciers compete against each other at exhibitions or shows and the different forms or breeds are judged to a standard to decide who has the best bird. 
There are many fancy or ornamental breeds of pigeons: among them are the English carrier pigeons, a variety of pigeon with prominent wattles and an almost vertical stance, the Duchess breed, which has as a prominent characteristic feet that are completely covered by a sort of fan of feathers, the fantails with a fan of tail feathers like a peacock, and the Voorburg Shield Cropper which are bred to inflate their crops in an attempt to seduce the human judges like they would another pigeon. Experimentation Domestic pigeons are model organisms commonly used in laboratory experiments relating to biology; often to test medicines and chemical substances, or in cognitive sciences. Pigeons have been trained to distinguish between cubist and impressionist paintings. In Project Sea Hunt, a US coast guard search and rescue project in the 1970s/1980s, pigeons were shown to be more effective than humans in spotting shipwreck victims at sea. Research in pigeons is widespread, encompassing shape and texture perception, exemplar and prototype memory, category-based and associative concepts, and many more unlisted here (see pigeon intelligence). Pigeons are able to acquire orthographic processing skills, which form part of the ability to read, and basic numerical skills equivalent to those shown in primates. Relation to humans Domestic pigeons, especially the leucistic and albinistic specimens commonly referred to as "white doves", have had a long history in symbolism. Charles Darwin was famously requested to write a book on pigeons during the process of writing his book, On the Origin of Species. He would subsequently take on pigeon fancying himself, which would ultimately lead to another book; The Variation of Animals and Plants Under Domestication. Illegal predator killing by enthusiasts In the United States, some pigeon keepers illegally trap and kill hawks and falcons to protect their pigeons. However, it is legal in some places. In American pigeon-related organizations, some enthusiasts have openly shared their experiences of killing hawks and falcons, although this is frowned upon by some fanciers. Some of the major clubs condone this practice. It is estimated that almost 1,000 birds of prey have been killed in Oregon and Washington, and that 1,000–2,000 are killed in southern California annually. In June 2007, three Oregon men were indicted with misdemeanour violations of the Migratory Bird Treaty Act for killing birds of prey. Seven Californians and a Texan were also charged in the case. In the West Midlands region of the United Kingdom pigeon fanciers have been blamed for a trap campaign to kill peregrine falcons. Eight illegal spring-loaded traps were found close to peregrine nests and at least one of the birds died. The steel traps are thought to have been set as part of a "concerted campaign" to kill as many of the birds as possible in the West Midlands. Feral pigeons Many domestic birds have escaped or been released over the years, and have given rise to the feral pigeon. As a result of natural variation, feral pigeons demonstrate a wide variety of plumages, ranging from closely resembling wild rock doves, to patterns directly inherited from their domestic ancestors, though over time a population tends to homogenize and adopt a plumage that suits their environment, such as camouflaging against black asphalt. The scarcity of the pure wild species is partly due to interbreeding with feral birds. 
Domestic pigeons can often be distinguished from feral pigeons because they usually have a metal or plastic band around one (sometimes both) legs which shows, by a number on it, that they are registered to an owner. Feral pigeons bear striking genetic resemblance to homing pigeons, supporting the idea that most feral pigeons trace their origins to homing pigeons who did not find their way home, or were otherwise sired by homing pigeons. The huge numbers of birds released in pigeon races and loft owners breaking down their lofts and leaving the pigeons to fend for themselves may be a significant factor in the persistence of urban pigeons.
Biology and health sciences
Pigeons
Animals
4771644
https://en.wikipedia.org/wiki/Necturus
Necturus
Necturus is a genus of aquatic salamanders in the family Proteidae. Species of the genus are native to the eastern United States and Canada. They are commonly known as waterdogs and mudpuppies. The common mudpuppy (N. maculosus) is probably the best-known species – as an amphibian with gill slits, it is often dissected in comparative anatomy classes. The common mudpuppy has the largest distribution of any fully aquatic salamander in North America. Taxonomy The genus Necturus is under scrutiny by herpetologists. The relationship between the species is still being studied. In 1991, Collins recommended N. maculosus louisianensis be elevated to full species status as N. louisianensis. Originally described by Viosca as a species, it is usually considered a subspecies of the common mudpuppy (N. maculosus). However, the interpretation of Collins was not widely followed. A 2018 study identified two lineages (Great Lakes and Mississippi River), but did not draw conclusions about species vs. subspecies status ("Our limited samples are consistent with either interpretation." pg. 360). Currently, the Society for the Study of Reptiles and Amphibians considers the Red River mudpuppy to be a subspecies of N. maculosus, but notes that "its taxonomic status requires further research." Species There are seven or eight species: {| class="wikitable" |- ! Image !! Scientific name !! Common name !! Distribution |- | || Necturus alabamensis Viosca, 1937 || Alabama waterdog ||Alabama. |- | || Necturus beyeri Viosca, 1937 synonym: N. lodingi Viosca, 1937 || western waterdog (formerly the Gulf Coast waterdog) or Mobile mudpuppy. These two names have been recognised as independent species in the past. ||Alabama, Louisiana, Mississippi, and Texas. |- | || Necturus lewisi Brimley, 1924 || Neuse River waterdog ||North Carolina. |- | ||Necturus maculosus louisianensis Viosca, 1938 || Red River mudpuppy. Currently considered a subspecies of N. maculosus.||southeastern Kansas, southern Missouri, northeastern Oklahoma, Arkansas, and northcentral Louisiana. |- | || Necturus maculosus (Rafinesque, 1818) || common mudpuppy || southern section of Canada, as far south as Georgia. |- | ||Necturus moleri Guyer et al., 2020 || Apalachicola waterdog ||southeastern Alabama, the Panhandle of Florida, and southwestern to north-central Georgia. |- | || Necturus mounti Guyer et al., 2020 || Escambia waterdog ||southern Alabama and the Panhandle of Florida. |- | || Necturus punctatus (Gibbes, 1850) || dwarf waterdog ||from southeastern Virginia to southcentral Georgia. |- |} Nota bene: A binomial authority in parentheses indicates that the species was originally described in a genus other than Necturus. Two known fossil species, N. krausei and an unnamed species, are respectively known from the Paleocene of Saskatchewan and from Florida during the Pleistocene. Description Necturus are paedomorphic: adults retain larval-like morphology with external gills, two pairs of gill slits, and no eyelids. They are moderately robust and have two pairs of short but well-developed limbs and a large, laterally compressed tail. Lungs are present but small. Typical adult size is in total length, but Necturus maculosus is larger and may reach . N. maculosus is brown to gray on its back with bluish black spots. There may be spots on its belly; individuals range from heavily spotted to having no spotting at all. There are dark red bushy gills. Four toes are present per hindlimb.
Reproduction Females lay eggs under rocks and other large cover objects in late spring and early summer. Females guard nests at least until the eggs hatch. Females forage while nest-guarding, but they may eat some of their eggs as a source of energy if other food sources are not readily available. Larvae are believed to stay under the rock as late as November. Ecology Necturus occur in surface waters, preferentially with clear water and rocky substrates without silt. N. maculosus live in lakes, rivers, streams, and creeks. They favor shallow waters with low temperatures from autumn to early spring. They are most active in cold temperatures, specifically between 9.1 and 20.2 degrees Celsius. During the day, N. maculosus seeks refuge under rocks or logs and plant debris. They forage during the night and eat a variety of prey, but have a preference for crayfish. During the winter and spring, N. maculosus will also eat fish. N. maculosus are good indicators of ecosystem health. This species has frequently been harmed via bycatch events (primarily passive ice fishing), chemical pollutants, and siltation. Amphibian chytrid fungus (Bd) has been known to affect captive N. maculosus, but it is currently unknown whether it has affected wild N. maculosus.
Biology and health sciences
Salamanders and newts
Animals
4774419
https://en.wikipedia.org/wiki/Herpes%20simplex%20virus
Herpes simplex virus
Herpes simplex virus 1 (cold sores) and 2 (genital herpes) (HSV-1 and HSV-2), also known by their taxonomic names Human alphaherpesvirus 1 and Human alphaherpesvirus 2, are two members of the human Herpesviridae family, a set of viruses that produce viral infections in the majority of humans. Both HSV-1 and HSV-2 are very common and contagious. They can be spread when an infected person begins shedding the virus. As of 2016, about 67% of the world population under the age of 50 had HSV-1. In the United States, about 47.8% and 11.9% are estimated to have HSV-1 and HSV-2, respectively, though actual prevalence may be much higher. Because it can be transmitted through any intimate contact, it is one of the most common sexually transmitted infections. Symptoms Many of those who are infected never develop symptoms. Symptoms, when they occur, may include watery blisters in the skin of any location of the body, or in mucous membranes of the mouth, lips, nose, genitals, or eyes (herpes simplex keratitis). Lesions heal with a scab characteristic of herpetic disease. Sometimes, the viruses cause mild or atypical symptoms during outbreaks. However, they can also cause more troublesome forms of herpes simplex. As neurotropic and neuroinvasive viruses, HSV-1 and -2 persist in the body by hiding from the immune system in the cell bodies of neurons, particularly in sensory ganglia. After the initial or primary infection, some infected people experience sporadic episodes of viral reactivation or outbreaks. In an outbreak, the virus in a nerve cell becomes active and is transported via the neuron's axon to the skin, where virus replication and shedding occur and may cause new sores. Transmission HSV-1 and HSV-2 are transmitted by contact with an infected person who has reactivations of the virus. HSV 1 and HSV-2 are periodically shed, most often asymptomatically. In a study of people with first-episode genital HSV-1 infection from 2022, genital shedding of HSV-1 was detected on 12% of days at 2 months and declined significantly to 7% of days at 11 months. Most genital shedding was asymptomatic; genital and oral lesions and oral shedding were rare. Most sexual transmissions of HSV-2 occur during periods of asymptomatic shedding. Asymptomatic reactivation means that the virus causes atypical, subtle, or hard-to-notice symptoms that are not identified as an active herpes infection, so acquiring the virus is possible even if no active HSV blisters or sores are present. In one study, daily genital swab samples detected HSV-2 at a median of 12–28% of days among those who had an outbreak, and 10% of days among those with asymptomatic infection (no prior outbreaks), with many of these episodes occurring without visible outbreak ("subclinical shedding"). In another study, 73 subjects were randomized to receive valaciclovir 1 g daily or placebo for 60 days each in a two-way crossover design. A daily swab of the genital area was self-collected for HSV-2 detection by polymerase chain reaction, to compare the effect of valaciclovir versus placebo on asymptomatic viral shedding in immunocompetent, HSV-2 seropositive subjects without a history of symptomatic genital herpes infection. The study found that valaciclovir significantly reduced shedding during subclinical days compared to placebo, showing a 71% reduction; 84% of subjects had no shedding while receiving valaciclovir versus 54% of subjects on placebo. About 88% of patients treated with valaciclovir had no recognized signs or symptoms versus 77% for placebo. 
For HSV-2, subclinical shedding may account for most of the transmission. Studies on discordant partners (one infected with HSV-2, one not) show that the transmission rate is approximately 5–8.9 per 10,000 sexual contacts, with condom usage greatly reducing the risk of acquisition. Atypical symptoms are often attributed to other causes, such as a yeast infection. HSV-1 is often acquired orally during childhood. It may also be sexually transmitted, including contact with saliva, such as kissing and oral sex. Historically HSV-2 was primarily a sexually transmitted infection, but rates of HSV-1 genital infections have been increasing for the last few decades. Both viruses may also be transmitted vertically during natural childbirth. However, the risk of transmission is minimal if the mother has no symptoms nor exposed blisters during delivery. The risk is considerable when the mother is infected with the virus for the first time during late pregnancy, reflecting a high viral load. While most viral STDs can not be transmitted through objects as the virus dies quickly outside of the body, HSV can survive for up to 4.5 hours on surfaces and can be transmitted through use of towels, toothbrushes, cups, cutlery, etc. Herpes simplex viruses can affect areas of skin exposed to contact with an infected person. An example of this is herpetic whitlow, which is a herpes infection on the fingers; it was commonly found on dental surgeon's hands before the routine use of gloves when treating patients. Shaking hands with an infected person does not transmit this disease. Genital infection of HSV-2 increases the risk of acquiring HIV. Virology HSV has been a model virus for many studies in molecular biology. For instance, one of the first functional promoters in eukaryotes was discovered in HSV (of the thymidine kinase gene) and the virion protein VP16 is one of the most-studied transcriptional activators. Viral structure Animal herpes viruses all share some common properties. The structure of herpes viruses consists of a relatively large, double-stranded, linear DNA genome encased within an icosahedral protein cage called the capsid, which is wrapped in a lipid bilayer called the envelope. The envelope is joined to the capsid through a tegument. This complete particle is known as the virion. HSV-1 and HSV-2 each contain at least 74 genes (or open reading frames, ORFs) within their genomes, although speculation over gene crowding allows as many as 84 unique protein coding genes by 94 putative ORFs. These genes encode a variety of proteins involved in forming the capsid, tegument and envelope of the virus, as well as controlling the replication and infectivity of the virus. These genes and their functions are summarized in the table below. The genomes of HSV-1 and HSV-2 are complex and contain two unique regions called the long unique region (UL) and the short unique region (US). Of the 74 known ORFs, UL contains 56 viral genes, whereas US contains only 12. Transcription of HSV genes is catalyzed by RNA polymerase II of the infected host. Immediate early genes, which encode proteins, for example, ICP22 that regulate the expression of early and late viral genes, are the first to be expressed following infection. Early gene expression follows, to allow the synthesis of enzymes involved in DNA replication and the production of certain envelope glycoproteins. Expression of late genes occurs last; this group of genes predominantly encodes proteins that form the virion particle. 
Five proteins from (UL) form the viral capsid - UL6, UL18, UL35, UL38, and the major capsid protein UL19. Cellular entry Entry of HSV into a host cell involves several glycoproteins on the surface of the enveloped virus binding to their transmembrane receptors on the cell surface. Many of these receptors are then pulled inwards by the cell, which is thought to open a ring of three gHgL heterodimers stabilizing a compact conformation of the gB glycoprotein so that it springs out and punctures the cell membrane. The envelope covering the virus particle then fuses with the cell membrane, creating a pore through which the contents of the viral envelope enters the host cell. The sequential stages of HSV entry are analogous to those of other viruses. At first, complementary receptors on the virus and the cell surface bring the viral and cell membranes into proximity. Interactions of these molecules then form a stable entry pore through which the viral envelope contents are introduced to the host cell. The virus can also be endocytosed after binding to the receptors, and the fusion could occur at the endosome. In electron micrographs, the outer leaflets of the viral and cellular lipid bilayers have been seen merged; this hemifusion may be on the usual path to entry or it may usually be an arrested state more likely to be captured than a transient entry mechanism. In the case of a herpes virus, initial interactions occur when two viral envelope glycoproteins called glycoprotein C (gC) and glycoprotein B (gB) bind to a cell surface polysaccharide called heparan sulfate. Next, the major receptor binding protein, glycoprotein D (gD), binds specifically to at least one of three known entry receptors. These cell receptors include herpesvirus entry mediator (HVEM), nectin-1 and 3-O sulfated heparan sulfate. The nectin receptors usually produce cell-cell adhesion, to provide a strong point of attachment for the virus to the host cell. These interactions bring the membrane surfaces into mutual proximity and allow for other glycoproteins embedded in the viral envelope to interact with other cell surface molecules. Once bound to the HVEM, gD changes its conformation and interacts with viral glycoproteins H (gH) and L (gL), which form a complex. The interaction of these membrane proteins may result in a hemifusion state. gB interaction with the gH/gL complex creates an entry pore for the viral capsid. gB interacts with glycosaminoglycans on the surface of the host cell. Genetic inoculation After the viral capsid enters the cellular cytoplasm, it starts to express viral protein ICP27. ICP27 is a regulator protein that causes disruption in host protein synthesis and utilizes it for viral replication. ICP27 binds with a cellular enzyme Serine-Arginine Protein Kinase 1, SRPK1. Formation of this complex causes the SRPK1 shift from the cytoplasm to the nucleus, and the viral genome gets transported to the cell nucleus. Once attached to the nucleus at a nuclear entry pore, the capsid ejects its DNA contents via the capsid portal. The capsid portal is formed by 12 copies of the portal protein, UL6, arranged as a ring; the proteins contain a leucine zipper sequence of amino acids, which allow them to adhere to each other. Each icosahedral capsid contains a single portal, located in one vertex. The DNA exits the capsid in a single linear segment. 
Immune evasion HSV evades the immune system through interference with MHC class I antigen presentation on the cell surface, by blocking the transporter associated with antigen processing (TAP) induced by the secretion of ICP-47 by HSV. In the host cell, TAP transports digested viral antigen epitope peptides from the cytosol to the endoplasmic reticulum, allowing these epitopes to be combined with MHC class I molecules and presented on the surface of the cell. Viral epitope presentation with MHC class I is a requirement for the activation of cytotoxic T-lymphocytes (CTLs), the major effectors of the cell-mediated immune response against virally infected cells. ICP-47 prevents the initiation of a CTL-response against HSV, allowing the virus to survive for a protracted period in the host. HSV usually produces cytopathic effect (CPE) within 24–72 hours post-infection in permissive cell lines which is observed by classical plaque formation. However, HSV-1 clinical isolates have also been reported that did not show any CPE in Vero and A549 cell cultures over several passages with low levels of virus protein expression. Probably these HSV-1 isolates are evolving towards a more "cryptic" form to establish chronic infection thereby unravelling yet another strategy to evade the host immune system, besides neuronal latency. Replication Following the infection of a cell, a cascade of herpes virus proteins, called immediate-early, early, and late, is produced. Research using flow cytometry on another member of the herpes virus family, Kaposi's sarcoma-associated herpesvirus, indicates the possibility of an additional lytic stage, delayed-late. These stages of lytic infection, particularly late lytic, are distinct from the latency stage. In the case of HSV-1, no protein products are detected during latency, whereas they are detected during the lytic cycle. The early proteins transcribed are used in the regulation of genetic replication of the virus. On entering the cell, an α-TIF protein joins the viral particle and aids in immediate-early transcription. The virion host shutoff protein (VHS or UL41) is very important to viral replication. This enzyme shuts off protein synthesis in the host, degrades host mRNA, helps in viral replication, and regulates gene expression of viral proteins. The viral genome immediately travels to the nucleus, but the VHS protein remains in the cytoplasm. The late proteins form the capsid and the receptors on the surface of the virus. Packaging of the viral particles — including the genome, core, and capsid - occurs in the nucleus of the cell. Here, concatemers of the viral genome are separated by cleavage and are placed into formed capsids. HSV-1 undergoes a process of primary and secondary envelopment. The primary envelope is acquired by budding into the inner nuclear membrane of the cell. This then fuses with the outer nuclear membrane. The virus acquires its final envelope by budding into cytoplasmic vesicles. Latent infection HSVs may persist in a quiescent but persistent form known as latent infection, notably in neural ganglia. The HSV genome circular DNA resides in the cell nucleus as an episome. HSV-1 tends to reside in the trigeminal ganglia, while HSV-2 tends to reside in the sacral ganglia, but these are historical tendencies only. During latent infection of a cell, HSVs express latency-associated transcript (LAT) RNA. LAT regulates the host cell genome and interferes with natural cell death mechanisms. 
By maintaining the host cells, LAT expression preserves a reservoir of the virus, which allows subsequent, usually symptomatic, periodic recurrences or "outbreaks" characteristic of non-latency. Whether or not recurrences are symptomatic, viral shedding occurs to infect a new host. A protein found in neurons may bind to herpes virus DNA and regulate latency. Herpes virus DNA contains a gene for a protein called ICP4, which is an important transactivator of genes associated with lytic infection in HSV-1. Elements surrounding the gene for ICP4 bind a protein known as the human neuronal protein neuronal restrictive silencing factor (NRSF) or human repressor element silencing transcription factor (REST). When bound to the viral DNA elements, histone deacetylation occurs atop the ICP4 gene sequence to prevent initiation of transcription from this gene, thereby preventing transcription of other viral genes involved in the lytic cycle. Another HSV protein reverses the inhibition of ICP4 protein synthesis. ICP0 dissociates NRSF from the ICP4 gene and thus prevents silencing of the viral DNA. Genome The HSV genome spans about 150,000 bp and consists of two unique segments, named unique long (UL) and unique short (US), as well as terminal inverted repeats found to the two ends of them named repeat long (RL) and repeat short (RS). There are also minor "terminal redundancy" (α) elements found on the further ends of RS. The overall arrangement is RL-UL-RL-α-RS-US-RS-α with each pair of repeats inverting each other. The whole sequence is then encapsulated in a terminal direct repeat. The long and short parts each have their own origins of replication, with OriL located between UL28 and UL30 and OriS located in a pair near the RS. As the L and S segments can be assembled in any direction, they can be inverted relative to each other freely, forming various linear isomers. Gene expression HSV genes are expressed in 3 temporal classes: immediate early (IE or α), early (E or ß), and late (γ) genes. However, the progression of viral gene expression is rather gradual than in clearly distinct stages. Immediate early genes are transcribed right after infection and their gene products activate transcription of the early genes. Early gene products help to replicate the viral DNA. Viral DNA replication, in turn, stimulates the expression of the late genes, encoding the structural proteins. Transcription of the immediate early (IE) genes begins right after virus DNA enters the nucleus. All virus genes are transcribed by the host RNA polymerase II. Although host proteins are sufficient for virus transcription, viral proteins are necessary for the transcription of certain genes. For instance, VP16 plays an important role in IE transcription and the virus particle brings it into the host cell, so that it does not need to be produced first. Similarly, the IE proteins RS1 (ICP4), UL54 (ICP27), and ICP0 promote the transcription of the early (E) genes. Like IE genes, early gene promoters contain binding sites for cellular transcription factors. One early protein, ICP8, is necessary for both transcription of late genes and DNA replication. Later in the life cycle of HSV, the expression of immediate early and early genes is shut down. This is mediated by specific virus proteins, e.g. ICP4, which represses itself by binding to elements in its promoter. As a consequence, the down-regulation of ICP4 levels leads to a reduction of early and late gene expression, as ICP4 is important for both. 
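The four linear genome isomers described above arise because the long (L) and short (S) segments can each be assembled in either orientation. A minimal sketch in Python (an illustrative representation only: the flanking repeats are omitted and inversion is marked symbolically, this is not a biological data format) enumerates them:

# Minimal sketch: enumerate the four linear isomers of the HSV genome produced
# by independently inverting the long (UL) and short (US) segments described
# above. Flanking repeats are omitted and "(inv)" marks an inverted segment.
from itertools import product

L_SEGMENT, S_SEGMENT = "UL", "US"

isomers = [
    (L_SEGMENT + ("(inv)" if flip_l else "")) + "-" + (S_SEGMENT + ("(inv)" if flip_s else ""))
    for flip_l, flip_s in product((False, True), repeat=2)
]
print(len(isomers), isomers)
# 4 ['UL-US', 'UL-US(inv)', 'UL(inv)-US', 'UL(inv)-US(inv)']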
Importantly, HSV shuts down host cell RNA, DNA, and protein synthesis to direct cellular resources to virus production. First, the virus protein vhs induces the degradation of existing mRNAs early in infection. Other viral genes impede cellular transcription and translation. For instance, ICP27 inhibits RNA splicing, so that virus mRNAs (which are usually not spliced) gain an advantage over host mRNAs. Finally, virus proteins destabilize certain cellular proteins involved in the host cell cycle, so that both cell division and host cell DNA replication are disturbed in favor of virus replication. Evolution The herpes simplex 1 genomes can be classified into six clades. Four of these occur in East Africa, one in East Asia and one in Europe and North America. This suggests that the virus may have originated in East Africa. The most recent common ancestor of the Eurasian strains appears to have evolved ~60,000 years ago. The East Asian HSV-1 isolates have an unusual pattern that is currently best explained by the two waves of migration responsible for the peopling of Japan. Herpes simplex 2 genomes can be divided into two groups: one is globally distributed and the other is mostly limited to sub-Saharan Africa. The globally distributed genotype has undergone four ancient recombinations with herpes simplex 1. It has also been reported that HSV-1 and HSV-2 can have contemporary and stable recombination events in hosts simultaneously infected with both pathogens. All of the cases involve HSV-2 acquiring parts of the HSV-1 genome, sometimes changing parts of its antigen epitopes in the process. The mutation rate has been estimated to be ~1.38×10⁻⁷ substitutions per site per year. In the clinical setting, mutations in either the thymidine kinase gene or the DNA polymerase gene have caused resistance to aciclovir; however, most of these mutations occur in the thymidine kinase gene rather than the DNA polymerase gene. Another analysis has estimated the mutation rate in the herpes simplex 1 genome to be 1.82×10⁻⁸ nucleotide substitutions per site per year, and placed the most recent common ancestor of this virus ~710,000 years ago. The time of divergence between herpes simplex 1 and 2 has also been estimated. Treatment Similar to other members of the Herpesviridae, the herpes simplex viruses establish latent lifelong infection, and thus cannot be eradicated from the body with current treatments. Treatment usually involves general-purpose antiviral drugs that interfere with viral replication, reduce the physical severity of outbreak-associated lesions, and lower the chance of transmission to others. Studies of vulnerable patient populations have indicated that daily use of antivirals such as aciclovir and valaciclovir can reduce reactivation rates. The extensive use of antiherpetic drugs has led to the development of some drug resistance, which in turn may lead to treatment failure. Therefore, new sources of drugs are being broadly investigated to address the problem. In January 2020, a comprehensive review article was published that demonstrated the effectiveness of natural products as promising anti-HSV drugs. Pyrithione, a zinc ionophore, has shown antiviral activity against herpes simplex. Alzheimer's disease In 1979, it was reported that there is a possible link between HSV-1 and Alzheimer's disease in people with the epsilon4 allele of the gene APOE. HSV-1 appears to be particularly damaging to the nervous system and increases one's risk of developing Alzheimer's disease.
The virus interacts with the components and receptors of lipoproteins, which may lead to the development of Alzheimer's disease. This research identifies HSVs as the pathogen most clearly linked to the establishment of Alzheimer's. According to a study done in 1997, without the presence of the gene allele, HSV-1 does not appear to cause any neurological damage or increase the risk of Alzheimer's. However, a more recent prospective study published in 2008 with a cohort of 591 people showed a statistically significant difference between patients with antibodies indicating recent reactivation of HSV and those without these antibodies in the incidence of Alzheimer's disease, without direct correlation to the APOE-epsilon4 allele. The trial had a small sample of patients who did not have the antibody at baseline, so the results should be viewed as highly uncertain. In 2011, Manchester University scientists showed that treating HSV1-infected cells with antiviral agents decreased the accumulation of β-amyloid and tau protein and also decreased HSV-1 replication. A 2018 retrospective study from Taiwan on 33,000 patients found that being infected with herpes simplex virus increased the risk of dementia 2.56 times (95% CI: 2.3-2.8) in patients not receiving anti-herpetic medications (2.6 times for HSV-1 infections and 2.0 times for HSV-2 infections). However, HSV-infected patients who were receiving anti-herpetic medications (e.g., acyclovir, famciclovir, ganciclovir, idoxuridine, penciclovir, tromantadine, valaciclovir, or valganciclovir) showed no elevated risk of dementia compared to patients uninfected with HSV. Multiplicity reactivation Multiplicity reactivation (MR) is the process by which viral genomes containing inactivating damage interact within an infected cell to form a viable viral genome. MR was originally discovered with the bacterial virus bacteriophage T4 but was subsequently also found with pathogenic viruses including influenza virus, HIV-1, adenovirus simian virus 40, vaccinia virus, reovirus, poliovirus, and herpes simplex virus. When HSV particles are exposed to doses of a DNA-damaging agent that would be lethal in single infections but are then allowed to undergo multiple infections (i.e. two or more viruses per host cell), MR is observed. Enhanced survival of HSV-1 due to MR occurs upon exposure to different DNA damaging agents, including methyl methanesulfonate, trimethylpsoralen (which causes inter-strand DNA cross-links), and UV light. After treatment of genetically marked HSV with trimethylpsoralen, recombination between the marked viruses increases, suggesting that trimethylpsoralen damage stimulates recombination. MR of HSV appears to partially depend on the host cell recombinational repair machinery since skin fibroblast cells defective in a component of this machinery (i.e. cells from Bloom's syndrome patients) are deficient in MR. These observations suggest that MR in HSV infections involves genetic recombination between damaged viral genomes resulting in the production of viable progeny viruses. HSV-1, upon infecting host cells, induces inflammation and oxidative stress. Thus it appears that the HSV genome may be subjected to oxidative DNA damage during infection, and that MR may enhance viral survival and virulence under these conditions. Use as an anti-cancer agent Modified Herpes simplex virus is considered as a potential therapy for cancer and has been extensively clinically tested to assess its oncolytic (cancer-killing) ability. 
Interim overall survival data from Amgen's phase 3 trial of a genetically attenuated herpes virus suggests efficacy against melanoma. Use in neuronal connection tracing Herpes simplex virus is also used as a transneuronal tracer defining connections among neurons by traversing synapses. Other related outcomes HSV-2 is the most common cause of Mollaret's meningitis. HSV-1 can lead to potentially fatal cases of herpes simplex encephalitis. Herpes simplex viruses have also been studied in the central nervous system disorders such as multiple sclerosis, but research has been conflicting and inconclusive. Following a diagnosis of genital herpes simplex infection, patients may develop an episode of profound depression. In addition to offering antiviral medication to alleviate symptoms and shorten their duration, physicians must also address the mental health impact of a new diagnosis. Providing information on the very high prevalence of these infections, their effective treatments, and future therapies in development may provide hope to patients who are otherwise demoralized. HSV infection was found to increase all-cause mortality in Denmark: 19.3% excess one-year mortality for HSV-1 and 5.3% for HSV-2 in the first year of infection. Additionally, lower employment rates and higher disability pension rates were observed. Research There exist commonly used vaccines to some herpesviruses, such as the veterinary vaccine HVT/LT (Turkey herpesvirus vector laryngotracheitis vaccine). However, it prevents atherosclerosis (which histologically mirrors atherosclerosis in humans) in target animals vaccinated. The only human vaccines available for herpesviruses are for Varicella zoster virus, given to children around their first birthday to prevent chickenpox (varicella), or to adults to prevent an outbreak of shingles (herpes zoster). There is, however, no human vaccine for herpes simplex viruses. As of 2022, there are active pre-clinical and clinical studies underway on herpes simplex in humans; vaccines are being developed for both treatment and prevention.
Biology and health sciences
Specific viruses
Health
23836476
https://en.wikipedia.org/wiki/Hypocarnivore
Hypocarnivore
A hypocarnivore is an animal whose diet consists of less than 30% meat, with the majority made up of fungi, fruits, and other plant material. Examples of living hypocarnivores are the grizzly bear (Ursus arctos horribilis), black bear (Ursus americanus), binturong (Arctictis binturong) and kinkajou (Potos flavus). The evolutionary division of carnivory into three groups, including hypercarnivore and mesocarnivore, appears to have occurred about 40 million years ago (mya). The term hypocarnivory is used with increasing frequency in describing early Canidae evolution, and reliance upon that survival strategy has a documented history in North American Borophaginae during the Miocene (23.03 to 5.33 mya). Twenty-five species of hypocarnivore are documented as co-occurring on the North American continent 30 mya. A shift from hyper- to hypocarnivory occurred at least three times among the Oligocene and Miocene canids Oxetocyon, Phlaocyon, and Cynarctus. Large hypocarnivores (Ursus) were rare and developed in the mid-to-late Miocene–Pliocene as the Borophaginae became extinct. Dentition Examination of dentition shows that post-carnassial molar volume expands in hypocarnivores while decreasing in hypercarnivores. Prohesperocyon (38–33.9 mya) displayed a shift in the relative proportion between slicing and grinding functions, indicative of a dietary shift away from vertebrate foods to one including fruits.
Biology and health sciences
Ethology
Biology
23836952
https://en.wikipedia.org/wiki/Mesocarnivore
Mesocarnivore
A mesocarnivore is an animal whose diet consists of 30–70% meat, with the balance consisting of non-vertebrate foods, which may include insects, fungi, fruits, other plant material and whatever food is available to them. Mesocarnivores come from a large group of mammalian carnivores and range from small to medium sized, often less than fifteen kilograms; the human is a notable exception. Mesocarnivores are seen today among the Canidae (coyotes, foxes), Viverridae (civets), Mustelidae (martens, tayra), Procyonidae (ringtail, raccoon), Mephitidae (skunks), and Herpestidae (some mongooses). The red fox is the most common of the mesocarnivores in Europe and reaches high population densities in the areas it inhabits. In North America, some mesocarnivores are in danger of being over-hunted for their pelts. This has led to efforts to protect and conserve the mesocarnivores in the area, which have been largely successful thus far. Since the elimination of apex predators, these animals have played an essential role in the function and structure of the ecosystem. Evolution Mesocarnivores, as part of the mammalian carnivore family, play a large role in the ecosystem through their effects on prey and their impact on its functionality and structure. They are an important part of ecological function, as their small to medium size allows them to disperse seeds that hypercarnivores cannot. Mesocarnivores transport seeds across open spaces, carrying them as far as one kilometre and typically dispersing them within 600 to 750 metres of each other. They can influence other native carnivores through predation and competition in the ecosystem, which can lead to a reduction or possible extinction of prey species and affect geographical distribution, changing the structure of the ecosystem. Mesocarnivores also serve other ecological roles, such as their position in the food web and disease mitigation. Mesocarnivores' habitats are changing rapidly due to urbanisation, habitat fragmentation and deforestation; the resulting habitat loss threatens their survival and can cause declines in species. Some mesocarnivores have adapted very quickly to these constantly changing habitat conditions compared to others, for example the coyote (Canis latrans) in northeastern North America. Many carnivores have varied locomotor abilities and can readily adapt to a range of habitats and food sources. Characteristics Behaviour and activity In some mesocarnivores, including the masked palm civet and hog badger, activity patterns peak during the night. Mesocarnivores' activity levels change with season and climate; temperature and the rate of plant growth may affect their activity patterns. Masked palm civets in China do not appear often in the winter months (December to February) and are less active then. Mesocarnivores' behaviour and characteristics vary by species. For example, coyotes are pack animals and form strong family relations. Mesocarnivores communicate with each other through behaviours that organise mating systems, parental care and other social interactions. Carnivores also use their senses, especially olfaction, to communicate with other animals and within the pack. Mesocarnivores perform a wide range of movements, and different species achieve different types of locomotion.
For example, otters (Lutrinae) are specialised for swimming but find it difficult to move on land. Other carnivores can improve their locomotion through behavioural modifications; for example, the red wolf demonstrates group hunting behaviour, running down and hunting prey as a pack in a way that cannot be done individually. Carnivores with limbs adapted for running may run, gallop or pace to move quickly and cover long distances. The gait these carnivorous mammals use depends on their species and size. The structure of a carnivore is designed to catch and kill prey. Feeding behaviours Mesocarnivores tend to be nocturnal, hunting for prey at night when they are most active. Mesocarnivores' feeding behaviours are driven mainly by prey availability. They feed on small mammals, including a range of mice and squirrels such as the northern grasshopper mouse, Ord's kangaroo rat and the thirteen-lined ground squirrel. Other examples of mesocarnivore prey are the black-tailed jackrabbit and the desert cottontail. Both large and small mammals, as well as various herbivores, are taken as prey, depending on what food is most readily available. Without apex predators, there is less interspecific competition in the food chain between mesocarnivores, allowing them to widen their scavenging options. As mesocarnivores are scavengers, they will eat any food that is accessible to them. For example, the yellow-throated marten and Siberian weasel change their feeding behaviour in winter, when few fruits are available, and switch to small mammal prey. Mesocarnivores interact closely with other mammals through competition and intraguild predation. Interspecific competition is a vital part of species and community structure, and can lead to "exploitation competition" and "interference competition" with other species. Dentition Mesocarnivore cheek teeth are heterodont and their different shapes reflect distinct functions. Incisors and canines are used to apprehend food and kill prey, pointed premolars pierce and hold prey, and molars are involved in both slicing and crushing functions. The slicing function of the molars is produced by occlusion between the carnassials, the lower first molar, and the upper fourth premolar. Mesocarnivores are first represented by the Miacidae and are best exemplified by Prohesperocyon, which on each side of the jaw has three incisors, one canine and four premolars above, with two molars above and three molars below. Taxonomy Many animals in the wild are considered mesocarnivores, such as species of lynx, bobcat, American marten, fisher, river otter, American mink, coyote, red fox, gray fox, raccoon, striped skunk and weasels. Individual species' diets may vary, depending on the season and what food can be sourced. Mesocarnivore mammals play a large role in the ecosystem, shaping the ecological community and its environment. Example species Coyote (Canis latrans) The coyote (Canis latrans) is a species native to North America. Coyotes can live up to fourteen years, with their size ranging from 81–94 cm (32 to 37 in) head to body, and weigh 9–23 kg (20–50 pounds). Coyotes' diet mostly consists of mammals, fruits, birds, grass and insects. They are also hunters and will eat any readily available prey, including rabbits, fish and lambs.
Wild coyotes have a keen sense of smell for detecting prey, as well as excellent vision. They are pack animals and hunt prey as a pack, especially in the fall and winter. River otter (Lontra canadensis) The river otter is one of North America's native animals. River otters have an average lifespan of 8 to 9 years, a body length of 56–80 cm (22–32 in) head to body, and weigh 5–13 kg (11–30 pounds). The river otter's habitat is in water and on land. They create a burrow near the water as their den and adapt easily to other aquatic habitats. They hunt during the night, finding whatever food is readily available to them. River otters are strong swimmers and stay active during winter. Raccoon (Procyon lotor) There are several raccoon species, all native to the Americas. Their physical characteristics include short limbs, a pointed snout and small upright ears, with a body length of 75–90 cm (30–35 in). Raccoons' weight varies from 10–20 kg (22–44 pounds), and they have a furry coat in shades of black, grey and brown. These mesocarnivores catch the majority of their food in water, including crayfish, frogs and other aquatic animals, as well as feeding on rodents and plant material. Some species of raccoon include the Barbados raccoon (P. gloveralleni), Tres Marías raccoon (P. insularis), Bahaman raccoon (P. maynardi), Guadeloupe raccoon (P. minor) and Cozumel raccoon (P. pygmaeus). Mongoose (Herpestidae) Mongooses are a family of mesocarnivores found mainly in Africa, southern Asia and southern Europe. They are known for their predatory attacks on snakes. The meerkat is part of the mongoose family. Mongooses have short legs, a pointed snout, small ears and a long tail. Their fur ranges from grey to brown, with flecks of lighter grey. Mongooses range in size from the smallest, the dwarf mongoose, at 17–24 cm (7–10 in) in body length, to the largest at 48–74 cm (19–29 in). Dwarf mongooses have a tail approximately 15–20 cm (6–8 in) long, while larger mongooses have a longer tail, up to 40 cm (16 in) long. Red fox (Vulpes vulpes) The red fox is a member of the fox family found in Europe, Asia, Africa and North America. Its body length is usually approximately 90–105 cm (35–41 in), of which 30–40 cm (12–16 in) is tail, and it stands about 40 cm (16 in) tall. Many adult red foxes weigh 5–7 kg (11–15 pounds) and can reach up to 14 kg (31 pounds). The red fox has a soft, thin undercoat and long guard hairs in shades of orange, red and brown, with black ears and legs and white on the tip of its tail and on its chest. Red foxes live in a range of habitats, including grasslands, forests, mountains and deserts. Striped skunk (Mephitis mephitis) The striped skunk is a mesocarnivore species found in the United States. It ranges in size from 20–25 cm (8–10 in) from head to body, with a 12–38 cm (5–15 in) tail. Striped skunks weigh between 200 g and 6 kg (7 ounces to 13 pounds) and have an average lifespan of 3 years. They are adaptable animals that live in forests, woodlands and grasslands. These mesocarnivores can be easily recognized by their black fur with a thin white stripe from the nose to the forehead.
Two thick white stripes run along the sides of their back and continue onto their furry, bushy, grey-shaded tail. Striped skunks are known for their defensive spray, an oily liquid released from glands that leaves a foul odour on their predators. Marten (Martes spp.) Martens are mesocarnivores found in Canada, the United States, Africa, Asia and Europe. There are many different species of marten. They vary in size and in colour from yellow to shades of dark brown, and have short legs, small round ears, slender bodies and thick coats. Their body length ranges from 35–65 cm (14–26 in), with a long tail of 23–46 cm (9–18 in), depending on the species, and they weigh 1–2 kg (2–4 pounds). Some species of marten include the American marten, pine marten, stone marten, yellow-throated marten, and Nilgiri marten.
Biology and health sciences
Ethology
Biology
23837739
https://en.wikipedia.org/wiki/International%20Prototype%20of%20the%20Kilogram
International Prototype of the Kilogram
The International Prototype of the Kilogram (referred to by metrologists as the IPK or Le Grand K; sometimes called the ur-kilogram, or urkilogram, particularly by German-language authors writing in English:30) is an object whose mass was used to define the kilogram from 1889, when it replaced the Kilogramme des Archives, until 2019, when it was replaced by a new definition of the kilogram based entirely on physical constants. During that time, the IPK and its duplicates were used to calibrate all other kilogram mass standards on Earth. The IPK is a roughly golfball-sized object made of a platinum–iridium alloy known as "Pt10Ir", which is 90% platinum and 10% iridium (by mass) and is machined into a right-circular cylinder with height equal to its diameter of about 39millimetres to reduce its surface area. The addition of 10% iridium improved upon the all-platinum Kilogramme des Archives by greatly increasing hardness while still retaining platinum's many virtues: extreme resistance to oxidation, extremely high density (almost twice as dense as lead and more than 21 times as dense as water), satisfactory electrical and thermal conductivities, and low magnetic susceptibility. By 2018, the IPK underpinned the definitions of four of the seven SI base units: the kilogram itself, plus the mole, ampere, and candela (whose definitions at the time referenced the gram, newton, and watt respectively) as well as the definitions of every named SI derived unit except the hertz, becquerel, degree Celsius, gray, sievert, farad, ohm, siemens, henry, radian and steradian. The IPK and its six sister copies are stored at the International Bureau of Weights and Measures (known by its French-language initials BIPM) in an environmentally monitored safe in the lower vault located in the basement of the BIPM's Pavillon de Breteuil in Saint-Cloud on the outskirts of Paris (see External images, below, for photographs). Three independently controlled keys are required to open the vault. Official copies of the IPK were made available to other nations to serve as their national standards. These were compared to the IPK roughly every 40 years, thereby providing traceability of local measurements back to the IPK. Creation The Metre Convention was signed on 20 May 1875 and further formalised the metric system (a predecessor to the SI), quickly leading to the production of the IPK. The IPK is one of three cylinders made in London in 1879 by Johnson Matthey, which continued to manufacture nearly all of the national prototypes as needed until the new definition of the kilogram came into effect in 2019. In 1883, the mass of the IPK was found to be indistinguishable from that of the Kilogramme des Archives made eighty-four years prior, and was formally ratified as the kilogram by the 1st CGPM in 1889. Copies of the IPK The IPK and its various copies are given the following designations in the literature: The IPK itself, stored in the BIPM's vault in Saint-Cloud, France. Six sister copies: K1, 7, 8(41), 32, 43 and 47. Stored in the same vault at the BIPM. Ten working copies: eight (9, 31, 42′, 63, 77, 88, 91, and 650) for routine use and two (25 and 73) for special use. Kept in the BIPM's calibration laboratory in Saint-Cloud, France. 
National prototypes, stored in Argentina (30), Australia (44 and 87), Austria (49), Belgium (28 and 37), Brazil (66), Canada (50 and 74), China (60 and 64; 75 in Hong Kong), Czech Republic (67), Denmark (48), Egypt (58), Finland (23), France (35), Germany (52, 55 and 70), Hungary (16), India (57), Indonesia (46), Israel (71), Italy (5 and 76), Japan (6, 94 and E59), Kazakhstan, Kenya (95), Mexico (21, 90 and 96), Netherlands (53), North Korea (68), Norway (36), Pakistan (93), Poland (51), Portugal (69), Romania (2), Russia (12 and 26), Serbia (11 and 29), Singapore (83), Slovakia (41 and 65), South Africa (56), South Korea (39, 72 and 84), Spain (24 and 3), Sweden (40 and 86), Switzerland (38 and 89), Taiwan (78), Thailand (80), Turkey (54), United Kingdom (18, 81 and 82), and the United States (20, 4, 79, 85 and 92). Some additional copies are held by non-national organisations, such as the French Academy of Sciences in Paris (34) and the Istituto di Metrologia G. Colonnetti in Turin (62). Stability of the IPK Before 2019, by definition, the error in the measured value of the IPK's mass was exactly zero; the mass of the IPK was the kilogram. However, any changes in the IPK's mass over time could be deduced by comparing its mass to that of its official copies stored throughout the world, a rarely undertaken process called "periodic verification". The only three verifications occurred in 1889, 1948, and 1989. For instance, the US owns five 10% iridium (Pt10Ir) kilogram standards, two of which, K4 and K20, are from the original batch of 40 replicas distributed in 1884. The K20 prototype was designated as the primary national standard of mass for the US. Both of these, as well as those from other nations, are periodically returned to the BIPM for verification. Great care is exercised when transporting prototypes; in 1984, the K4 and K20 prototypes were hand-carried in the passenger section of separate commercial flights. None of the replicas has a mass precisely equal to that of the IPK; their masses are calibrated and documented as offset values. For instance, K20, the US's primary standard, originally had an official mass of 1 kg − 39 µg in 1889; that is to say, K20 was 39 µg less than the IPK. A verification performed in 1948 showed a slightly different mass, while the latest verification, performed in 1989, shows a mass precisely identical to its original 1889 value. Quite unlike transient variations such as this, the US's check standard, K4, has persistently declined in mass relative to the IPK—and for an identifiable reason: check standards are used much more often than primary standards and are prone to scratches and other wear. K4 was originally delivered in 1889 with an official mass slightly below that of the IPK, was calibrated at a lower value as of 1989, and was lower still ten years later; over a period of 110 years, K4 measurably lost mass relative to the IPK. Beyond the simple wear that check standards can experience, the mass of even the carefully stored national prototypes can drift relative to the IPK for a variety of reasons, some known and some unknown. Since the IPK and its replicas are stored in air (albeit under two or more nested bell jars), they gain mass through adsorption of atmospheric contamination onto their surfaces. Accordingly, they are cleaned in a process the BIPM developed between 1939 and 1946 known as "the BIPM cleaning method", which comprises firmly rubbing with a chamois soaked in equal parts ether and ethanol, followed by steam cleaning with bi-distilled water, and allowing the prototypes to settle for several days before verification.
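Calibration results for prototypes such as K20 and K4, described above, are recorded not as absolute masses but as offsets from the nominal kilogram. A minimal bookkeeping sketch in Python; only the 1889 value for K20 comes from the text above, and the 1948 figure below is a hypothetical placeholder, since the documented value is not reproduced in this article:

# Sketch of prototype mass bookkeeping: each verification records an offset
# from the nominal 1 kg, in micrograms, relative to the IPK.
verifications_K20 = {
    1889: -39.0,   # "1 kg minus 39 µg" (from the text above)
    1948: -19.0,   # hypothetical placeholder for the intermediate value
    1989: -39.0,   # identical to the original 1889 value (a transient variation)
}

years = sorted(verifications_K20)
for earlier, later in zip(years, years[1:]):
    drift = verifications_K20[later] - verifications_K20[earlier]
    print(f"{earlier} -> {later}: drift of {drift:+.1f} µg relative to the IPK")

mass_1989_kg = 1 + verifications_K20[1989] * 1e-9   # 1 µg = 1e-9 kg
print(f"documented mass in 1989: {mass_1989_kg:.9f} kg")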
Before the BIPM's published report in 1994 detailing the relative change in mass of the prototypes, different standards bodies used different techniques to clean their prototypes. The NIST's practice before then was to soak and rinse its two prototypes first in benzene, then in ethanol, and then to clean them with a jet of bi-distilled water steam. Cleaning the prototypes removes between 5 and 60 µg of contamination, depending largely on the time elapsed since the last cleaning, and a second cleaning can remove up to 10 µg more. After cleaning—even when they are stored under their bell jars—the IPK and its replicas immediately begin gaining mass again. The BIPM even developed a model of this gain and concluded that it averaged 1.11 µg per month for the first 3 months after cleaning and then decreased to an average of about 1 µg per year thereafter. Since check standards like K4 are not cleaned for routine calibrations of other mass standards—a precaution to minimise the potential for wear and handling damage—the BIPM's model of time-dependent mass gain has been used as an "after cleaning" correction factor. Because the first forty official copies are made of the same alloy as the IPK and are stored under similar conditions, periodic verification using a number of replicas—especially the national primary standards, which are rarely used—can convincingly demonstrate the stability of the IPK. What has become clear after the third periodic verification, performed between 1988 and 1992, is that the masses of the entire worldwide ensemble of prototypes have been slowly but inexorably diverging from each other. It is also clear that the IPK lost mass, perhaps a few tens of micrograms, over the last century, and possibly significantly more, in comparison to its official copies. The reason for this drift has eluded physicists who have dedicated their careers to the SI unit of mass. No plausible mechanism has been proposed to explain either a steady decrease in the mass of the IPK, or an increase in that of its replicas dispersed throughout the world. Moreover, there are no technical means available to determine whether or not the entire worldwide ensemble of prototypes suffers from even greater long-term trends upwards or downwards, because their mass "relative to an invariant of nature is unknown at a level below 1000 μg over a period of 100 or even 50 years". Given the lack of data identifying which of the world's kilogram prototypes has been most stable in absolute terms, it is equally valid to state that the first batch of replicas has, as a group, gained an average of about 25 μg over one hundred years in comparison to the IPK. What is known specifically about the IPK is that it exhibits a short-term instability in its after-cleaned mass over a period of about a month. The precise reason for this short-term instability is not understood, but it is thought to entail surface effects: microscopic differences between the prototypes' polished surfaces, possibly aggravated by hydrogen absorption due to catalysis of the volatile organic compounds that slowly deposit onto the prototypes as well as the hydrocarbon-based solvents used to clean them. It has been possible to rule out many explanations of the observed divergences in the masses of the world's prototypes proposed by scientists and the general public.
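The time-dependent mass-gain model mentioned above, roughly 1.11 µg per month for the first three months after cleaning and about 1 µg per year thereafter, can be sketched as a simple piecewise correction. The linear piecewise form below is an illustrative assumption, not the BIPM's published formula:

# Piecewise sketch of the "after cleaning" mass-gain correction described above:
# about 1.11 µg/month for the first 3 months, then about 1 µg/year. Illustrative only.
def mass_gain_after_cleaning_ug(months: float) -> float:
    """Estimated mass gained, in µg, a given number of months after cleaning."""
    if months <= 3:
        return 1.11 * months
    return 1.11 * 3 + 1.0 * (months - 3) / 12.0

for m in (1, 3, 12, 24):
    print(f"{m:>2} months after cleaning: about {mass_gain_after_cleaning_ug(m):.2f} µg gained")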
The BIPM's FAQ explains, for example, that the divergence is dependent on the amount of time elapsed between measurements, and not on the number of times the prototype or its copies have been cleaned or on possible changes in gravity or environment. Reports published in 2013 by Peter Cumpson of Newcastle University, based on X-ray photoelectron spectroscopy of samples that were stored alongside various prototype kilograms, suggested that one source of the divergence between the various prototypes could be traced to mercury that had been absorbed by the prototypes being in the proximity of mercury-based instruments. The IPK has been stored within centimetres of a mercury thermometer since at least as far back as the late 1980s. In this Newcastle University work, six platinum weights made in the nineteenth century were all found to have mercury at the surface, the most contaminated of which had the equivalent of 250 µg of mercury when scaled to the surface area of a kilogram prototype. The increasing divergence in the masses of the world's prototypes and the short-term instability in the IPK prompted research into improved methods to obtain a smooth surface finish using diamond turning on newly manufactured replicas, and was one of the reasons for the redefinition of the kilogram. Dependency of the SI on the IPK The stability of the IPK was crucial because the kilogram underpinned much of the SI as defined and structured until 2019. The majority of SI units with special names are derived units, meaning they are defined simply by multiplying, dividing or (in one case) offsetting relative to other, more basic, units. For instance, the newton is defined as the force necessary to accelerate one kilogram at one metre per second squared. If the mass of the IPK were to change slightly, then the newton would also change proportionally. In turn, the pascal, the SI unit of pressure, is defined in terms of the newton, and this chain of dependency follows through to many other SI units of measure. For instance, the joule, the SI unit of energy, is defined as that expended when a force of one newton acts through one metre, and the next to be affected is the SI unit of power, the watt, which is one joule per second: N = kg⋅m/s², Pa = N/m² = kg/(m⋅s²), J = N⋅m = kg⋅m²/s², W = J/s = N⋅m/s = kg⋅m²/s³. Furthermore, prior to the revision the SI base unit of electric current, the ampere (A), was defined as the current needed to produce a force of 0.2 μN between two parallel wires 1 m apart for every metre of their length. Substituting these parameters into Ampère's force law, with the magnetic force constant kA = 10⁻⁷ N/A², gives 2 kA⋅A²/m = 0.2 μN/m, or A² = N/(10⁷ kA), making the magnitude of the ampere proportional to the square root of the newton and hence of the mass of the IPK. The base unit of amount of substance, the mole, was defined prior to the revision as the number of atoms in 12 grams of carbon-12, and the base unit of luminous intensity, the candela, was defined as the luminous intensity of a 540 THz (green) light source with a radiant intensity of 1/683 watt per steradian. Hence the magnitudes of the mole and candela were proportional to the mass of the IPK.
These dependencies then extend to many chemical, photometric, and electrical units: kat = mol/s, lm = cd⋅sr, lx = lm/m² = cd⋅sr/m², C = A⋅s = √(N/(10⁷ kA))⋅s, V = W/A = J/C = √(10⁷ kA⋅N)⋅m/s, Wb = V⋅s = J/A = √(10⁷ kA⋅N)⋅m, and T = Wb/m² = √(10⁷ kA⋅N)/m. The SI derived units whose values were not susceptible to changes in the mass of the IPK were either dimensionless quantities, derived entirely from the second, metre, or kelvin, or were defined as the ratio of two quantities, both of which were related in the same way to the mass of the IPK, for example: Ω = V/A = √(10⁷ kA⋅N)⋅(m/s)/√(N/(10⁷ kA)) = 10⁷ kA⋅m/s. Here the newtons in the numerator and the denominator exactly cancel out when calculating the value of the ohm. Similarly: F = C/V = s²/(10⁷ kA⋅m), Gy = J/kg = m²/s², S = 1/Ω = s/(10⁷ kA⋅m), H = Ω⋅s = 10⁷ kA⋅m. Because the magnitude of many of the units composing the SI system of measurement was until 2019 defined by its mass, the quality of the IPK was diligently protected to preserve the integrity of the SI system. However, the average mass of the worldwide ensemble of prototypes and the mass of the IPK have likely diverged further in the years since the third periodic verification. Further, the world's national metrology laboratories must wait for the fourth periodic verification to confirm whether the historical trends persisted. Insulating effects of practical realisations Fortunately, definitions of the SI units are quite different from their practical realisations. For instance, the metre is defined as the distance light travels in a vacuum during a time interval of 1/299,792,458 of a second. However, the metre's practical realisation typically takes the form of a helium–neon laser, and the metre's length is delineated—not defined—as a certain number of wavelengths of light from this laser. Now suppose that the official measurement of the second was found to have drifted by a few parts per billion (it is actually extremely stable, with a reproducibility of a few parts in 10¹⁵). There would be no automatic effect on the metre, because the second—and thus the metre's length—is abstracted via the laser comprising the metre's practical realisation. Scientists performing metre calibrations would simply continue to measure out the same number of laser wavelengths until an agreement was reached to do otherwise. The same is true with regard to the real-world dependency on the kilogram: if the mass of the IPK was found to have changed slightly, there would be no automatic effect upon the other units of measure, because their practical realisations provide an insulating layer of abstraction. Any discrepancy would eventually have to be reconciled, though, because the virtue of the SI system is its precise mathematical and logical harmony amongst its units. If the IPK's value had been definitively proven to have changed, one solution would have been to simply redefine the kilogram as being equal to the mass of the IPK plus an offset value, similarly to what had previously been done with its replicas; e.g., "the kilogram is equal to the mass of the IPK plus 42 ppb" (equivalent to 42 μg). The long-term solution to this problem, however, was to liberate the SI system from its dependency on the IPK by developing a practical realisation of the kilogram that can be reproduced in different laboratories by following a written specification. The units of measure in such a practical realisation would have their magnitudes precisely defined and expressed in terms of only physical constants. While major portions of the SI system are still based on the kilogram, the kilogram is now in turn based on invariant, universal constants of nature.
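To summarise the dependency chain described in this section, the sensitivity of each pre-2019 unit to a small fractional change in the mass of the IPK can be written as an exponent: 1 for units directly proportional to the kilogram, 1/2 for the ampere and the units built on it, and 0 where the newtons cancel. A minimal numerical sketch in Python, under the simplifying assumption that the second and the metre are unaffected:

# Sketch: sensitivity of pre-2019 SI units to a hypothetical fractional drift in
# the mass of the IPK. Exponents follow the derivations above (A proportional to
# sqrt(N); the newtons cancel in the ohm); the second and metre are assumed fixed.
kg_exponent = {
    "N": 1.0, "Pa": 1.0, "J": 1.0, "W": 1.0,              # directly proportional to the kilogram
    "A": 0.5, "C": 0.5, "V": 0.5, "Wb": 0.5, "T": 0.5,    # proportional to sqrt(N)
    "Ohm": 0.0, "F": 0.0, "S": 0.0, "H": 0.0, "Gy": 0.0,  # newton dependence cancels
}

delta = 50e-9   # hypothetical drift of the IPK: 50 parts per billion
for unit, k in kg_exponent.items():
    change = (1 + delta) ** k - 1
    print(f"{unit:>3}: fractional change of about {change:.2e}")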
Physical sciences
Measurement systems
Basics and measurement
10614384
https://en.wikipedia.org/wiki/Platypterygius
Platypterygius
Platypterygius is a historically paraphyletic genus of platypterygiine ichthyosaur from the Cretaceous period. It was historically used as a wastebasket taxon, and most species within Platypterygius are likely undiagnostic at the genus or species level or represent distinct genera; the genus has even been argued to be invalid. While fossils referred to Platypterygius have been found on several continents, the holotype specimen was found in Germany. Description As Platypterygius contains multiple species not especially close to each other, little can be said in terms of shared characteristics. According to an analysis by Fischer (2012), all anatomical features used to unify Platypterygius species are either not actually present in each species, or much more widespread among unrelated ophthalmosaurs. Generally, species referred to this genus were large-bodied macropredators, based on their robust dentition. This is also supported by P. australis having been found with remains of sea turtles and birds (specifically, of the genus Nanantius) in its guts, as well as by an unidentified pterosaur fossil with tooth marks that may be from this genus. In 1998, Arkhangelsky published length estimates for P. platydactylus and "P." americanus, and in 2010 Zammit and colleagues estimated the length of "P." australis. Discovery and species The type species of Platypterygius was described in 1922 based on remains found in upper Aptian strata around Hannover, Germany, that had previously been described as a species of Ichthyosaurus (I. platydactylus) in 1907 by Ferdinand Broili. These remains, however, were not adequately described and, to complicate matters further, were destroyed during World War II. In the time since its discovery, however, Platypterygius became a catch-all genus for Cretaceous ichthyosaurs, creating the misconstrued view of post-Jurassic ichthyosaurs as being a single global genus lacking in diversity. Later research conducted in the 2000s and 2010s has repeatedly shown this to be false, with all of the autapomorphies previously used to define Platypterygius either not being present in all assigned species or also being present in other ophthalmosaurids. As the holotype was destroyed, a redescription of the material attempting to identify valid autapomorphies is out of the question, which leaves the genus in a problematic state. Furthermore, the inclusion of later-described genera of Cretaceous platypterygiine ichthyosaurs has shown Platypterygius to be paraphyletic, with the different species not clading closely to one another. Subsequently, many redescriptions of referred Platypterygius species have found them to be their own distinct genera. One notable attempt at revising Platypterygius was conducted by Arkhangel'sky in 1998, who split the genus into three new subgenera: Longirostria (including the Australian "P." longmani, a synonym of "P." australis, and the Argentinian "P." hauthali), Tenuirostria ("P." americanus) and Pervushovisaurus (which included the newly described "P." bannovkensis). Platypterygius platydactylus, "P." kiprianoffi and "P." hercynicus were placed in the subgenus Platypterygius. "Platypterygius" bannovkensis was eventually elevated to its own genus Pervushovisaurus in 2014, utilizing Arkhangel'sky's proposed subgenus name, and "P." campylodon was also assigned to this genus by a study published in 2016. "P." kiprianoffi has also been assigned to P. campylodon (now in Pervushovisaurus).
Simbirskiasaurus was originally described in 1985 and later sunk into Platypterygius before being declared distinct in the same paper as Pervushovisaurus. "Platypterygius" ochevi, described in 2008 by Arkhangel'sky et al., was found to be a junior synonym of Maiaspondylus cantabrigiensis, and in 2021 "Platypterygius" sachicarum was redescribed by Cortés et al. as Kyhytysuka sachicarum. The inclusion of the oldest species, "P." hauthali, has been argued to require reinvestigation, as no skull is known that can be attributed to it. Because of this, recent analyses of ichthyosaur classification omit this species. In 2024, "P." hauthali was reclassified back into its original genus, Myobradypterygius. Accepted species Platypterygius platydactylus Platypterygius americanus (=Tenuirostria) Platypterygius australis (=Longirostria) Platypterygius hercynicus Formerly assigned species Pervushovisaurus bannovkensis Pervushovisaurus campylodon Simbirskiasaurus birjukovi Plutoniosaurus bedengensis Maiaspondylus cantabrigiensis (senior synonym of Platypterygius ochevi) Kyhytysuka sachicarum (formerly Platypterygius sachicarum) Myobradypterygius hauthali Classification The following cladogram shows the internal relationships of ophthalmosaurian ichthyosaurs according to an analysis performed by Zverkov and Jacobs (2020), which shows that P. americanus is too distantly related to the other three species to be retained in the same genus.
Biology and health sciences
Prehistoric marine reptiles
Animals
10618229
https://en.wikipedia.org/wiki/Bowstring
Bowstring
A bowstring joins the two ends of the bow stave and launches the arrow. Desirable properties include light weight, strength, resistance to abrasion, and resistance to water. Mass has most effect at the center of the string: a given amount of extra mass in the middle of the string slows the arrow about as much as a considerably larger amount at the ends. String forms Most bowstrings may be described as either simple, reverse-twisted, or looped. Simple strings may be made of any fiber, twisted into a single cord. Such strings have been used in many parts of the world and are still effective and fairly quick to make. However, they tend to be weaker for their weight, and they may also come apart if not kept constantly under tension. They are normally secured to the bow by a knot/round turn and two half-hitches at each end. Reverse-twisted strings are traditional in Europe and North America for most natural materials. Linen and hemp fiber have been widely used. The form is also used for modern materials. A reverse-twisted string is made of separate bundles, each bundle individually twisted in one direction; the entire group of bundles is then twisted in the other direction. The result tends to be stronger for its weight than a simple or looped string, and holds together better than a simple string. Unlike some looped strings, the full thickness of the string passes around the nocks on the ends of the bow, where wear is usually greatest. Additional threads may also be laid in at the nocking points for the bow stave and for the arrow, which are sites of likely wear. The string may be secured to the bow by a knot at each end, usually a timber hitch, also known as the bowyer's knot. The traditional "Flemish" string has a laid-in loop at one end, which is easier than most knots to fit over the nock of the bow when stringing and unstringing. It is more trouble to make; the short length, towards one end, that will form the loop is reverse-twisted first. The ends of each bundle are then laid into the main length of the bundles, which are reverse-twisted in turn. The Japanese bowstring is made by reverse-twisting in different directions in the core and outer layers of the string. See Kyūdō. Looped strings are made of one or more continuous loops of material. Modern strings are often made as a single continuous loop: this is then served to give the final form. Disadvantages include the lesser amount of fiber at the ends, where wear is most likely; this may be overcome by serving the string. In many parts of Asia, traditional strings have a single loop in the center, with the ends made of separate lengths tied on using a special knot. This design allows extra fiber to be used at the ends, where weight is less important and wear more likely. String materials Traditional materials include linen, hemp, other vegetable fibers, hair, sinew, silk, and rawhide. Almost any fiber may be used in an emergency. Natural fibers would be very unusual on a modern recurve bow or compound bow, but are still effective and still used on traditional wooden or composite bows. Sinew and hide strings may be seriously affected by water. The author of Arab Archery suggests the hide of a young, emaciated camel. Njál's saga describes the refusal of a wife, Hallgerður, to cut her hair to make an emergency bowstring for her husband, Gunnar Hámundarson, who is then killed. Widely used modern materials are stronger for their weight than any natural material, and most are unaffected by water.
They include: Dacron (stretch about 2.6%), a commonly used polyester material. Because of its durability and stretch, Dacron is commonly used on beginners' equipment, wooden bows, and older bows. The relatively high stretch causes less shock to the bow, which is an important consideration for wooden-handled recurves. Dacron strings are easy to maintain and can last several years. Liquid crystal polymers such as Kevlar and Vectran (stretch about 0.8%) are polymer materials with a higher density and smaller diameter than Dacron, which results in a somewhat faster arrow speed. Ultra-high-molecular-weight polyethylenes, such as Spectra and Dyneema (stretch about 1.0%), have been used since the 1990s. They are lighter, and therefore faster, than Kevlar, and have a much longer life. Modern strings are often made from composite fibres—such as a mixture of Vectran and Dyneema—to gain the advantages of both. Serving Serving a bowstring refers to the use of an additional thread, commonly wrapped round the main string at the nocking points where abrasion is most likely, and also used on looped strings to keep the two sides of the loop together.
Technology
Archery
null
10624594
https://en.wikipedia.org/wiki/Krypton
Krypton
Krypton (from 'the hidden one') is a chemical element; it has symbol Kr and atomic number 36. It is a colorless, odorless noble gas that occurs in trace amounts in the atmosphere and is often used with other rare gases in fluorescent lamps. Krypton is chemically inert. Krypton, like the other noble gases, is used in lighting and photography. Krypton light has many spectral lines, and krypton plasma is useful in bright, high-powered gas lasers (krypton ion and excimer lasers), each of which resonates and amplifies a single spectral line. Krypton fluoride also makes a useful laser medium. From 1960 to 1983, the official definition of the metre was based on the wavelength of one spectral line of krypton-86, because of the high power and relative ease of operation of krypton discharge tubes. History Krypton was discovered in Britain in 1898 by William Ramsay, a Scottish chemist, and Morris Travers, an English chemist, in residue left from evaporating nearly all components of liquid air. Neon was discovered by a similar procedure by the same workers just a few weeks later. William Ramsay was awarded the 1904 Nobel Prize in Chemistry for discovery of a series of noble gases, including krypton. In 1960, the International Bureau of Weights and Measures defined the meter as 1,650,763.73 wavelengths of light emitted in the vacuum corresponding to the transition between the 2p10 and 5d5 levels in the isotope krypton-86. This agreement replaced the 1889 international prototype meter, which was a metal bar located in Sèvres. This also made obsolete the 1927 definition of the ångström based on the red cadmium spectral line, replacing it with 1 Å = 10−10 m. The krypton-86 definition lasted until the October 1983 conference, which redefined the meter as the distance that light travels in vacuum during 1/299,792,458 s. Characteristics Krypton is characterized by several sharp emission lines (spectral signatures) the strongest being green and yellow. Krypton is one of the products of uranium fission. Solid krypton is white and has a face-centered cubic crystal structure, which is a common property of all noble gases (except helium, which has a hexagonal close-packed crystal structure). Isotopes Naturally occurring krypton in Earth's atmosphere is composed of five stable isotopes, plus one isotope (78Kr) with such a long half-life (9.2×1021 years) that it can be considered stable. (This isotope has the third-longest known half-life among all isotopes for which decay has been observed; it undergoes double electron capture to 78Se). In addition, about thirty unstable isotopes and isomers are known. Traces of 81Kr, a cosmogenic nuclide produced by the cosmic ray irradiation of 80Kr, also occur in nature: this isotope is radioactive with a half-life of 230,000 years. Krypton is highly volatile and does not stay in solution in near-surface water, but 81Kr has been used for dating old (50,000–800,000 years) groundwater. 85Kr is an inert radioactive noble gas with a half-life of 10.76 years. It is produced by the fission of uranium and plutonium, such as in nuclear bomb testing and nuclear reactors. 85Kr is released during the reprocessing of fuel rods from nuclear reactors. Concentrations at the North Pole are 30% higher than at the South Pole due to convective mixing. Chemistry Like the other noble gases, krypton is chemically highly unreactive. 
The rather restricted chemistry of krypton in the +2 oxidation state parallels that of the neighboring element bromine in the +1 oxidation state; due to the scandide contraction it is difficult to oxidize the 4p elements to their group oxidation states. Until the 1960s no noble gas compounds had been synthesized. Following the first successful synthesis of xenon compounds in 1962, synthesis of krypton difluoride () was reported in 1963. In the same year, was reported by Grosse, et al., but was subsequently shown to be a mistaken identification. Under extreme conditions, krypton reacts with fluorine to form KrF2 according to the following equation: Kr + F2 -> KrF2 Krypton gas in a krypton fluoride laser absorbs energy from a source, causing the krypton to react with fluorine gas, producing the exciplex krypton fluoride, a temporary complex in an excited energy state: 2Kr + F2 -> 2KrF The complex can undergo spontaneous or stimulated emission, reducing its energy state to a metastable, but highly repulsive ground state. The ground state complex quickly dissociates into unbound atoms: 2KrF -> 2Kr + F2 The result is an exciplex laser which radiates energy at 248 nm, near the ultraviolet portion of the spectrum, corresponding with the energy difference between the ground state and the excited state of the complex. Compounds with krypton bonded to atoms other than fluorine have also been discovered. There are also unverified reports of a barium salt of a krypton oxoacid. ArKr+ and KrH+ polyatomic ions have been investigated and there is evidence for KrXe or KrXe+. The reaction of with produces an unstable compound, , that contains a krypton-oxygen bond. A krypton-nitrogen bond is found in the cation [HC≡N–Kr–F], produced by the reaction of with [HC≡NH][AsF] below −50 °C. HKrCN and HKrC≡CH (krypton hydride-cyanide and hydrokryptoacetylene) were reported to be stable up to 40 K. Krypton hydride (Kr(H2)4) crystals can be grown at pressures above 5 GPa. They have a face-centered cubic structure where krypton octahedra are surrounded by randomly oriented hydrogen molecules. Natural occurrence Earth has retained all of the noble gases that were present at its formation except helium. Krypton's concentration in the atmosphere is about 1 ppm. It can be extracted from liquid air by fractional distillation. The amount of krypton in space is uncertain, because measurement is derived from meteoric activity and solar winds. The first measurements suggest an abundance of krypton in space. Applications Krypton's multiple emission lines make ionized krypton gas discharges appear whitish, which in turn makes krypton-based bulbs useful in photography as a white light source. Krypton is used in some photographic flashes for high speed photography. Krypton gas is also combined with mercury to make luminous signs that glow with a bright greenish-blue light. Krypton is mixed with argon in energy efficient fluorescent lamps, reducing the power consumption, but also reducing the light output and raising the cost. Krypton costs about 100 times as much as argon. Krypton (along with xenon) is also used to fill incandescent lamps to reduce filament evaporation and allow higher operating temperatures. Krypton's white discharge is sometimes used as an artistic effect in gas discharge "neon" tubes. 
Krypton produces much higher light power than neon in the red spectral line region, and for this reason, red lasers for high-power laser light-shows are often krypton lasers with mirrors that select the red spectral line for laser amplification and emission, rather than the more familiar helium-neon variety, which could not achieve the same multi-watt outputs. The krypton fluoride laser is important in nuclear fusion energy research in confinement experiments. The laser has high beam uniformity, short wavelength, and the spot size can be varied to track an imploding pellet. In experimental particle physics, liquid krypton is used to construct quasi-homogeneous electromagnetic calorimeters. A notable example is the calorimeter of the NA48 experiment at CERN containing about 27 tonnes of liquid krypton. This usage is rare, since liquid argon is less expensive. The advantage of krypton is a smaller Molière radius of 4.7 cm, which provides excellent spatial resolution with little overlapping. The other parameters relevant for calorimetry are: radiation length of X0=4.7 cm, and density of 2.4 g/cm3. Krypton-83 has application in magnetic resonance imaging (MRI) for imaging airways. In particular, it enables the radiologist to distinguish between hydrophobic and hydrophilic surfaces containing an airway. Although xenon has potential for use in computed tomography (CT) to assess regional ventilation, its anesthetic properties limit its fraction in the breathing gas to 35%. A breathing mixture of 30% xenon and 30% krypton is comparable in effectiveness for CT to a 40% xenon fraction, while avoiding the unwanted effects of a high partial pressure of xenon gas. The metastable isotope krypton-81m is used in nuclear medicine for lung ventilation/perfusion scans, where it is inhaled and imaged with a gamma camera. Krypton-85 in the atmosphere has been used to detect clandestine nuclear fuel reprocessing facilities in North Korea and Pakistan. Those facilities were detected in the early 2000s and were believed to be producing weapons-grade plutonium. Krypton-85 is a medium lived fission product and thus escapes from spent fuel when the cladding is removed. Krypton is used occasionally as an insulating gas between window panes. SpaceX Starlink uses krypton as a propellant for their electric propulsion system. Precautions Krypton is considered to be a non-toxic asphyxiant. Being lipophilic, krypton has a significant anaesthetic effect (although the mechanism of this phenomenon is still not fully clear, there is good evidence that the two properties are mechanistically related), with narcotic potency seven times greater than air, and breathing an atmosphere of 50% krypton and 50% natural air (as might happen in the locality of a leak) causes narcosis in humans similar to breathing air at four times atmospheric pressure. This is comparable to scuba diving at a depth of and could affect anyone breathing it.
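As a numerical aside (not part of the original article text), two figures quoted above, the 1960 metre definition and the 248 nm krypton fluoride laser emission, can be checked with a few lines of Python; the physical constants used are standard values, and the calculation is only a back-of-the-envelope illustration.

# Back-of-the-envelope checks for two numbers quoted in this entry.
h = 6.62607015e-34    # Planck constant, J*s (standard value)
c = 299_792_458       # speed of light, m/s (exact by definition)
eV = 1.602176634e-19  # joules per electronvolt

# 1960 metre definition: 1 m = 1,650,763.73 wavelengths of the krypton-86 line,
# so the wavelength of that line is simply the reciprocal.
kr86_wavelength = 1 / 1_650_763.73                               # metres
print(f"Kr-86 reference line: {kr86_wavelength * 1e9:.2f} nm")   # about 605.78 nm

# KrF excimer emission at 248 nm: photon energy E = h*c / wavelength.
photon_energy = h * c / 248e-9
print(f"KrF photon energy: {photon_energy / eV:.1f} eV")         # about 5.0 eV, near-ultraviolet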
Physical sciences
Chemical elements_2
null
19666626
https://en.wikipedia.org/wiki/Lemming
Lemming
A lemming is a small rodent, usually found in or near the Arctic in tundra biomes. Lemmings form the subfamily Arvicolinae (also known as Microtinae) together with voles and muskrats, which form part of the superfamily Muroidea, which also includes rats, mice, hamsters and gerbils. A longstanding myth holds that they exhibit herd mentality and jump off cliffs, committing mass suicide. Description and habitat Lemmings measure around in length and weigh around . Lemmings are quite rounded in shape, with brown and black, long, soft fur. They have a very short tail, a stubby, hairy snout, short legs and small ears. They have a flattened claw on the first digit of their front feet, which helps them to dig in the snow. They are herbivorous, feeding mostly on mosses and grasses. They also forage through the snow surface to find berries, leaves, shoots, roots, bulbs, and lichens. Lemmings choose their preferred dietary vegetation disproportionately to its occurrence in their habitat. They digest grasses and sedges less effectively than related voles. Like other rodents, they have incisors that grow continuously, allowing them to feed on much tougher forage. Lemmings do not hibernate through the harsh northern winter. They remain active, finding food by burrowing through the snow. These rodents live in large tunnel systems beneath the snow in winter, which protect them from predators. Their burrows have rest areas, toilet areas and nesting rooms. They make nests out of grasses, feathers, and muskox wool (qiviut). In the spring, they move to higher ground, where they live on mountain heaths or in forests, continuously breeding before returning in autumn to the tundra. Behaviour Like many other rodents, lemmings have periodic population booms and then disperse in all directions, seeking food and shelter their natural habitats cannot provide. The Norway lemming and West Siberian lemming are two of the few vertebrates which reproduce so quickly that their population fluctuations are chaotic, rather than following linear growth to a carrying capacity or regular oscillations. Why lemming populations fluctuate with such great variance roughly every four years, before numbers drop to near extinction, is not known. Lemming behaviour and appearance are markedly different from those of other rodents, which are inconspicuously coloured and try to conceal themselves from their predators. Lemmings, by contrast, are conspicuously coloured and behave aggressively toward predators and even human observers. The lemming defence system is thought to be based on aposematism (warning display). Fluctuations in the lemming population affect the behaviour of predators, and may fuel irruptions of birds of prey such as snowy owls to areas further south. For many years, the population of lemmings was believed to change with the population cycle, but now some evidence suggests their predators' populations, particularly those of the stoat, may be more closely involved in changing the lemming population. Misconceptions Misconceptions about lemmings go back many centuries. In 1532, the geographer Jacob Ziegler of Bavaria proposed the theory that the creatures fell out of the sky during stormy weather and then died suddenly when the grass grew in spring. This description was contradicted by natural historian Ole Worm, who accepted that lemmings could fall out of the sky, but claimed that they had been brought over by the wind rather than created by spontaneous generation. 
Worm published dissections of a lemming, which showed that they are anatomically similar to most other rodents such as voles and hamsters, and the work of Carl Linnaeus proved that they had a natural origin. Lemmings have become the subject of a widely popular misconception that they are driven to commit mass suicide when they migrate by jumping off cliffs or drowning in bodies of water. It is true that the local population of some lemmings fluctuates. Contrary to the myth, it is not a deliberate mass suicide, in which animals voluntarily choose to die, but rather a result of their migratory behavior. Driven by strong biological urges, some species of lemmings may migrate in large groups when population density becomes too great. Thus, the unexplained fluctuations in the population of Norwegian lemmings helped give rise to the popular stereotype of the suicidal lemmings, particularly after this behaviour was staged in the Walt Disney documentary White Wilderness in 1958. The misconception itself is much older, dating back to at least the late 19th century. In the August 1877 issue of Popular Science Monthly, apparently suicidal lemmings are presumed to be swimming in the Atlantic Ocean in search of the submerged continent of Lemuria. Classification Order Rodentia Superfamily Muroidea Family Cricetidae Subfamily Arvicolinae: voles, lemmings, and related species Tribe Dicrostonychini Dicrostonyx Northern collared lemming (D. groenlandicus) Ungava collared lemming (D. hudsonius) Nelson's collared lemming (D. nelsoni) Ogilvie Mountains collared lemming (D. nunatakensis) Richardson's collared lemming (D. richardsoni) Arctic lemming (D. torquatus) Unalaska collared lemming (D. unalascensis) Tribe Lemmini Lemmus Amur lemming (L. amurensis) Norway lemming (L. lemmus) Beringian lemming (L. nigripes) East Siberian lemming (L. paulus) West Siberian lemming (L. sibiricus) North American brown lemming (L. trimucronatus) Myopus Wood lemming (M. schisticolor) Synaptomys Northern bog lemming (S. borealis) Southern bog lemming (S. cooperi) Tribe Lagurini Eolagurus Yellow steppe lemming (E. luteus) Przewalski's steppe lemming (E. przewalskii) Lagurus Steppe lemming (L. lagurus) In popular culture and media The misconception of lemming "mass suicide" is long-standing and has been popularized by a number of factors. The myth was mentioned in "The Marching Morons", a 1951 short story by Cyril M. Kornbluth. In 1955, Disney Studio illustrator Carl Barks drew an Uncle Scrooge adventure comic with the title "The Lemming with the Locket". This comic, which was inspired by a 1953 American Mercury article, showed massive numbers of lemmings jumping over Norwegian cliffs. Lemmings also appear in Arthur C. Clarke's 1953 short story "The Possessed", where their suicidal urges are attributed to the lingering consciousness of an alien group mind, which had inhabited the species in the prehistoric past. Perhaps the most influential and infamous presentation of the myth was the 1958 Disney film White Wilderness, which won an Academy Award for Documentary Feature and in which producers threw lemmings off a cliff to their deaths to fake footage of a "mass suicide", as well as faked scenes of mass migration. 
A Canadian Broadcasting Corporation documentary, Cruel Camera, found the lemmings used for White Wilderness were flown from Hudson Bay to Calgary, Alberta, Canada, where, far from "casting themselves bodily out into space" (as the film's narrator states), they were, in fact, dumped off the cliff by the camera crew from a truck. Because of the limited number of lemmings at their disposal, which in any case were the wrong subspecies, the migration scenes were simulated using tight camera angles and a large, snow-covered turntable. The song "Lemmings (Including 'Cog')" from the 1971 album Pawn Hearts by progressive rock band Van der Graaf Generator is about a person who sees their loved ones "crashing on quite blindly to the sea". The 1976 album "Howlin' Wind," which introduced Graham Parker and the Rumour, includes the song "Don't Ask Me Questions," whose lyrics include the lines, "I see the thousands screamin'/Rushin' for the cliffs/Just like lemmings/Into the sea." The 1983 song "Synchronicity II" by The Police makes an allusion to the supposed suicidal tendencies of lemmings in its reference to commuters "packed like lemmings into shiny metal boxes, contestants in a suicidal race." In 1991, a puzzle-platform video game called Lemmings was released, in which the player must save a certain percentage of the titular small humanoid creatures as they march heedlessly through a dangerous environment. The game and its sequels had sold 4 million copies by 1995. In the Russian film Terra Nova (2008), lemmings eat the food stocks of a group of prisoners on the island of Novaya Zemlya, causing cannibalism among the colony's population. Lemmings are the main characters of the 2016 French animated television series Grizzy and the Lemmings. As a humorous allusion to the popular myth, the series frequently features lemmings jumping down from elevated platforms. In the animated Disney film Zootopia (2016), lemmings are employed as investment bankers at Lemmings Brothers, a play on Lehman Brothers, the bank that went bankrupt in 2008.
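As an illustrative aside on the chaotic population fluctuations described earlier in this entry: a minimal discrete-time model is enough to show how a population that reproduces very quickly can swing erratically instead of settling at a carrying capacity or cycling regularly. The sketch below uses a generic Ricker map with arbitrary growth rates; it is a textbook toy model, not a model fitted to lemming data.

from math import exp

# Generic Ricker map: N(t+1) = N(t) * exp(r * (1 - N(t)/K)).
# Low growth rates settle at the carrying capacity K; high growth rates
# produce erratic boom-and-bust dynamics. Illustrative only.
def ricker_step(n, r, K=1.0):
    return n * exp(r * (1 - n / K))

def simulate(r, n0=0.1, generations=15):
    n, trajectory = n0, []
    for _ in range(generations):
        n = ricker_step(n, r)
        trajectory.append(round(n, 3))
    return trajectory

print("r = 1.5 (settles near the carrying capacity):", simulate(1.5))
print("r = 3.0 (erratic boom-and-bust fluctuations):", simulate(3.0))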
Biology and health sciences
Rodents
Animals
19673093
https://en.wikipedia.org/wiki/Matter
Matter
In classical physics and general chemistry, matter is any substance that has mass and takes up space by having volume. All everyday objects that can be touched are ultimately composed of atoms, which are made up of interacting subatomic particles, and in everyday as well as scientific usage, matter generally includes atoms and anything made up of them, and any particles (or combination of particles) that act as if they have both rest mass and volume. However it does not include massless particles such as photons, or other energy phenomena or waves such as light or heat. Matter exists in various states (also known as phases). These include classical everyday phases such as solid, liquid, and gas – for example water exists as ice, liquid water, and gaseous steam – but other states are possible, including plasma, Bose–Einstein condensates, fermionic condensates, and quark–gluon plasma. Usually atoms can be imagined as a nucleus of protons and neutrons, and a surrounding "cloud" of orbiting electrons which "take up space". However, this is only somewhat correct because subatomic particles and their properties are governed by their quantum nature, which means they do not act as everyday objects appear to act – they can act like waves as well as particles, and they do not have well-defined sizes or positions. In the Standard Model of particle physics, matter is not a fundamental concept because the elementary constituents of atoms are quantum entities which do not have an inherent "size" or "volume" in any everyday sense of the word. Due to the exclusion principle and other fundamental interactions, some "point particles" known as fermions (quarks, leptons), and many composites and atoms, are effectively forced to keep a distance from other particles under everyday conditions; this creates the property of matter which appears to us as matter taking up space. For much of the history of the natural sciences, people have contemplated the exact nature of matter. The idea that matter was built of discrete building blocks, the so-called particulate theory of matter, appeared in both ancient Greece and ancient India. Early philosophers who proposed the particulate theory of matter include the ancient Indian philosopher Kanada (c. 6th–century BCE or after), pre-Socratic Greek philosopher Leucippus (~490 BCE), and pre-Socratic Greek philosopher Democritus (~470–380 BCE). Related concepts Comparison with mass Matter should not be confused with mass, as the two are not the same in modern physics. Matter is a general term describing any 'physical substance'. By contrast, mass is not a substance but rather an extensive property of matter and other substances or systems; various types of mass are defined within physics – including but not limited to rest mass, inertial mass, relativistic mass, and mass–energy. While there are different views on what should be considered matter, the mass of a substance has exact scientific definitions. Another difference is that matter has an "opposite" called antimatter, but mass has no opposite—there is no such thing as "anti-mass" or negative mass, so far as is known, although scientists do discuss the concept. Antimatter has the same (i.e. positive) mass property as its normal matter counterpart. Different fields of science use the term matter in different, and sometimes incompatible, ways. Some of these ways are based on loose historical meanings from a time when there was no reason to distinguish mass from simply a quantity of matter. 
As such, there is no single universally agreed scientific meaning of the word "matter". Scientifically, the term "mass" is well-defined, but "matter" can be defined in several ways. Sometimes in the field of physics "matter" is simply equated with particles that exhibit rest mass (i.e., that cannot travel at the speed of light), such as quarks and leptons. However, in both physics and chemistry, matter exhibits both wave-like and particle-like properties, the so-called wave–particle duality. Relation with chemical substance Definition Based on atoms A definition of "matter" based on its physical and chemical structure is: matter is made up of atoms. Such atomic matter is also sometimes termed ordinary matter. As an example, deoxyribonucleic acid molecules (DNA) are matter under this definition because they are made of atoms. This definition can be extended to include charged atoms and molecules, so as to include plasmas (gases of ions) and electrolytes (ionic solutions), which are not obviously included in the atoms definition. Alternatively, one can adopt the protons, neutrons, and electrons definition. Based on protons, neutrons and electrons A definition of "matter" more fine-scale than the atoms and molecules definition is: matter is made up of what atoms and molecules are made of, meaning anything made of positively charged protons, neutral neutrons, and negatively charged electrons. This definition goes beyond atoms and molecules, however, to include substances made from these building blocks that are not simply atoms or molecules, for example electron beams in an old cathode ray tube television, or white dwarf matter—typically, carbon and oxygen nuclei in a sea of degenerate electrons. At a microscopic level, the constituent "particles" of matter such as protons, neutrons, and electrons obey the laws of quantum mechanics and exhibit wave–particle duality. At an even deeper level, protons and neutrons are made up of quarks and the force fields (gluons) that bind them together, leading to the next definition. Based on quarks and leptons As seen in the above discussion, many early definitions of what can be called "ordinary matter" were based upon its structure or "building blocks". On the scale of elementary particles, a definition that follows this tradition can be stated as: "ordinary matter is everything that is composed of quarks and leptons", or "ordinary matter is everything that is composed of any elementary fermions except antiquarks and antileptons". The connection between these formulations follows. Leptons (the most famous being the electron), and quarks (of which baryons, such as protons and neutrons, are made) combine to form atoms, which in turn form molecules. Because atoms and molecules are said to be matter, it is natural to phrase the definition as: "ordinary matter is anything that is made of the same things that atoms and molecules are made of". (However, notice that one also can make from these building blocks matter that is not atoms or molecules.) Then, because electrons are leptons, and protons and neutrons are made of quarks, this definition in turn leads to the definition of matter as being "quarks and leptons", which are two of the four types of elementary fermions (the other two being antiquarks and antileptons, which can be considered antimatter as described later). Carithers and Grannis state: "Ordinary matter is composed entirely of first-generation particles, namely the [up] and [down] quarks, plus the electron and its neutrino." 
(Higher generations particles quickly decay into first-generation particles, and thus are not commonly encountered.) This definition of ordinary matter is more subtle than it first appears. All the particles that make up ordinary matter (leptons and quarks) are elementary fermions, while all the force carriers are elementary bosons. The W and Z bosons that mediate the weak force are not made of quarks or leptons, and so are not ordinary matter, even if they have mass. In other words, mass is not something that is exclusive to ordinary matter. The quark–lepton definition of ordinary matter, however, identifies not only the elementary building blocks of matter, but also includes composites made from the constituents (atoms and molecules, for example). Such composites contain an interaction energy that holds the constituents together, and may constitute the bulk of the mass of the composite. As an example, to a great extent, the mass of an atom is simply the sum of the masses of its constituent protons, neutrons and electrons. However, digging deeper, the protons and neutrons are made up of quarks bound together by gluon fields (see dynamics of quantum chromodynamics) and these gluon fields contribute significantly to the mass of hadrons. In other words, most of what composes the "mass" of ordinary matter is due to the binding energy of quarks within protons and neutrons. For example, the sum of the mass of the three quarks in a nucleon is approximately , which is low compared to the mass of a nucleon (approximately ). The bottom line is that most of the mass of everyday objects comes from the interaction energy of its elementary components. The Standard Model groups matter particles into three generations, where each generation consists of two quarks and two leptons. The first generation is the up and down quarks, the electron and the electron neutrino; the second includes the charm and strange quarks, the muon and the muon neutrino; the third generation consists of the top and bottom quarks and the tau and tau neutrino. The most natural explanation for this would be that quarks and leptons of higher generations are excited states of the first generations. If this turns out to be the case, it would imply that quarks and leptons are composite particles, rather than elementary particles. This quark–lepton definition of matter also leads to what can be described as "conservation of (net) matter" laws—discussed later below. Alternatively, one could return to the mass–volume–space concept of matter, leading to the next definition, in which antimatter becomes included as a subclass of matter. Based on elementary fermions (mass, volume, and space) A common or traditional definition of matter is "anything that has mass and volume (occupies space)". For example, a car would be said to be made of matter, as it has mass and volume (occupies space). The observation that matter occupies space goes back to antiquity. However, an explanation for why matter occupies space is recent, and is argued to be a result of the phenomenon described in the Pauli exclusion principle, which applies to fermions. Two particular examples where the exclusion principle clearly relates matter to the occupation of space are white dwarf stars and neutron stars, discussed further below. Thus, matter can be defined as everything composed of elementary fermions. 
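To give a rough quantitative sense of how the exclusion principle makes dense matter resist compression, the snippet below evaluates the standard non-relativistic textbook formula for the degeneracy pressure of a cold electron gas, P = ((3*pi^2)^(2/3) / 5) * (hbar^2 / m_e) * n^(5/3), at an electron density chosen purely for illustration (a round number broadly typical of white dwarf interiors, not a figure taken from this article).

from math import pi

# Non-relativistic electron degeneracy pressure (standard textbook formula).
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
n = 1e36                 # electron number density, per m^3 (illustrative round number)

pressure = ((3 * pi**2) ** (2 / 3) / 5) * (hbar**2 / m_e) * n ** (5 / 3)
print(f"Degeneracy pressure: {pressure:.1e} Pa")
# Roughly 2e22 Pa, about 17 orders of magnitude above atmospheric pressure,
# even with the gas formally at zero temperature.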
Although we do not encounter them in everyday life, antiquarks (such as the antiproton) and antileptons (such as the positron) are the antiparticles of the quark and the lepton, are elementary fermions as well, and have essentially the same properties as quarks and leptons, including the applicability of the Pauli exclusion principle which can be said to prevent two particles from being in the same place at the same time (in the same state), i.e. makes each particle "take up space". This particular definition leads to matter being defined to include anything made of these antimatter particles as well as the ordinary quark and lepton, and thus also anything made of mesons, which are unstable particles made up of a quark and an antiquark. In general relativity and cosmology In the context of relativity, mass is not an additive quantity, in the sense that one cannot add the rest masses of particles in a system to get the total rest mass of the system. In relativity, usually a more general view is that it is not the sum of rest masses, but the energy–momentum tensor that quantifies the amount of matter. This tensor gives the rest mass for the entire system. Matter, therefore, is sometimes considered as anything that contributes to the energy–momentum of a system, that is, anything that is not purely gravity. This view is commonly held in fields that deal with general relativity such as cosmology. In this view, light and other massless particles and fields are all part of matter. Structure In particle physics, fermions are particles that obey Fermi–Dirac statistics. Fermions can be elementary, like the electron—or composite, like the proton and neutron. In the Standard Model, there are two types of elementary fermions: quarks and leptons, which are discussed next. Quarks Quarks are massive particles of spin-, implying that they are fermions. They carry an electric charge of − e (down-type quarks) or + e (up-type quarks). For comparison, an electron has a charge of −1 e. They also carry colour charge, which is the equivalent of the electric charge for the strong interaction. Quarks also undergo radioactive decay, meaning that they are subject to the weak interaction. Baryonic Baryons are strongly interacting fermions, and so are subject to Fermi–Dirac statistics. Amongst the baryons are the protons and neutrons, which occur in atomic nuclei, but many other unstable baryons exist as well. The term baryon usually refers to triquarks—particles made of three quarks. Also, "exotic" baryons made of four quarks and one antiquark are known as pentaquarks, but their existence is not generally accepted. Baryonic matter is the part of the universe that is made of baryons (including all atoms). This part of the universe does not include dark energy, dark matter, black holes or various forms of degenerate matter, such as those that compose white dwarf stars and neutron stars. Microwave light seen by Wilkinson Microwave Anisotropy Probe (WMAP) suggests that only about 4.6% of that part of the universe within range of the best telescopes (that is, matter that may be visible because light could reach us from it) is made of baryonic matter. About 26.8% is dark matter, and about 68.3% is dark energy. The great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10 per cent of the ordinary matter contribution to the mass–energy density of the universe. 
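To attach rough numbers to the earlier point that most of the mass of ordinary matter comes from interaction (binding) energy rather than from the rest masses of the quarks themselves, the sketch below uses commonly quoted values for the up and down quark masses and the proton mass. Quark masses are scheme-dependent, so these figures are indicative only and are not taken from this article.

# Rough mass budget of a proton (uud), using commonly quoted values.
m_up = 2.2        # MeV/c^2, up quark (indicative)
m_down = 4.7      # MeV/c^2, down quark (indicative)
m_proton = 938.3  # MeV/c^2

valence_mass = 2 * m_up + m_down
print(f"Sum of valence quark masses: {valence_mass:.1f} MeV/c^2")
print(f"Fraction of the proton mass: {valence_mass / m_proton:.1%}")
# About 1%; the remaining ~99% is dominated by the QCD binding energy
# of the gluon fields and quark motion, as discussed above.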
Hadronic Hadronic matter can refer to 'ordinary' baryonic matter, made from hadrons (baryons and mesons), or quark matter (a generalisation of atomic nuclei), i.e. the 'low' temperature QCD matter. It includes degenerate matter and the result of high energy heavy nuclei collisions. Degenerate In physics, degenerate matter refers to the ground state of a gas of fermions at a temperature near absolute zero. The Pauli exclusion principle requires that only two fermions can occupy a quantum state, one spin-up and the other spin-down. Hence, at zero temperature, the fermions fill up sufficient levels to accommodate all the available fermions—and in the case of many fermions, the maximum kinetic energy (called the Fermi energy) and the pressure of the gas becomes very large, and depends on the number of fermions rather than the temperature, unlike normal states of matter. Degenerate matter is thought to occur during the evolution of heavy stars. The demonstration by Subrahmanyan Chandrasekhar that white dwarf stars have a maximum allowed mass because of the exclusion principle caused a revolution in the theory of star evolution. Degenerate matter includes the part of the universe that is made up of neutron stars and white dwarfs. Strange Strange matter is a particular form of quark matter, usually thought of as a liquid of up, down, and strange quarks. It is contrasted with nuclear matter, which is a liquid of neutrons and protons (which themselves are built out of up and down quarks), and with non-strange quark matter, which is a quark liquid that contains only up and down quarks. At high enough density, strange matter is expected to be color superconducting. Strange matter is hypothesized to occur in the core of neutron stars, or, more speculatively, as isolated droplets that may vary in size from femtometers (strangelets) to kilometers (quark stars). Two meanings In particle physics and astrophysics, the term is used in two ways, one broader and the other more specific. The broader meaning is just quark matter that contains three flavors of quarks: up, down, and strange. In this definition, there is a critical pressure and an associated critical density, and when nuclear matter (made of protons and neutrons) is compressed beyond this density, the protons and neutrons dissociate into quarks, yielding quark matter (probably strange matter). The narrower meaning is quark matter that is more stable than nuclear matter. The idea that this could happen is the "strange matter hypothesis" of Bodmer and Witten. In this definition, the critical pressure is zero: the true ground state of matter is always quark matter. The nuclei that we see in the matter around us, which are droplets of nuclear matter, are actually metastable, and given enough time (or the right external stimulus) would decay into droplets of strange matter, i.e. strangelets. Leptons Leptons are particles of spin-, meaning that they are fermions. They carry an electric charge of −1 e (charged leptons) or 0 e (neutrinos). Unlike quarks, leptons do not carry colour charge, meaning that they do not experience the strong interaction. Leptons also undergo radioactive decay, meaning that they are subject to the weak interaction. Leptons are massive particles, therefore are subject to gravity. Phases In bulk, matter can exist in several different forms, or states of aggregation, known as phases, depending on ambient pressure, temperature and volume. 
A phase is a form of matter that has a relatively uniform chemical composition and physical properties (such as density, specific heat, refractive index, and so forth). These phases include the three familiar ones (solids, liquids, and gases), as well as more exotic states of matter (such as plasmas, superfluids, supersolids, Bose–Einstein condensates, ...). A fluid may be a liquid, gas or plasma. There are also paramagnetic and ferromagnetic phases of magnetic materials. As conditions change, matter may change from one phase into another. These phenomena are called phase transitions and are studied in the field of thermodynamics. In nanomaterials, the vastly increased ratio of surface area to volume results in matter that can exhibit properties entirely different from those of bulk material, and not well described by any bulk phase (see nanomaterials for more details). Phases are sometimes called states of matter, but this term can lead to confusion with thermodynamic states. For example, two gases maintained at different pressures are in different thermodynamic states (different pressures), but in the same phase (both are gases). Antimatter Antimatter is matter that is composed of the antiparticles of those that constitute ordinary matter. If a particle and its antiparticle come into contact with each other, the two annihilate; that is, they may both be converted into other particles with equal energy in accordance with Albert Einstein's equation . These new particles may be high-energy photons (gamma rays) or other particle–antiparticle pairs. The resulting particles are endowed with an amount of kinetic energy equal to the difference between the rest mass of the products of the annihilation and the rest mass of the original particle–antiparticle pair, which is often quite large. Depending on which definition of "matter" is adopted, antimatter can be said to be a particular subclass of matter, or the opposite of matter. Antimatter is not found naturally on Earth, except very briefly and in vanishingly small quantities (as the result of radioactive decay, lightning or cosmic rays). This is because antimatter that came to exist on Earth outside the confines of a suitable physics laboratory would almost instantly meet the ordinary matter that Earth is made of, and be annihilated. Antiparticles and some stable antimatter (such as antihydrogen) can be made in tiny amounts, but not in enough quantity to do more than test a few of its theoretical properties. There is considerable speculation both in science and science fiction as to why the observable universe is apparently almost entirely matter (in the sense of quarks and leptons but not antiquarks or antileptons), and whether other places are almost entirely antimatter (antiquarks and antileptons) instead. In the early universe, it is thought that matter and antimatter were equally represented, and the disappearance of antimatter requires an asymmetry in physical laws called CP (charge–parity) symmetry violation, which can be obtained from the Standard Model, but at this time the apparent asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physics. Possible processes by which it came about are explored in more detail under baryogenesis. Formally, antimatter particles can be defined by their negative baryon number or lepton number, while "normal" (non-antimatter) matter particles have positive baryon or lepton number. These two classes of particles are the antiparticle partners of one another. 
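As a concrete worked example of the annihilation just described (using standard constants, and assuming the simplest two-photon channel), the energy released when an electron and a positron annihilate follows directly from mass–energy equivalence, E = m*c^2:

# Electron-positron annihilation into two photons, via E = m*c^2.
m_e = 9.1093837015e-31   # rest mass of the electron (and positron), kg
c = 299_792_458          # speed of light, m/s
eV = 1.602176634e-19     # joules per electronvolt

total_energy = 2 * m_e * c**2     # both rest masses are converted
per_photon = total_energy / 2     # shared equally between the two photons
print(f"Energy per photon: {per_photon / eV / 1e3:.0f} keV")        # about 511 keV
print(f"Total energy released: {total_energy / eV / 1e6:.3f} MeV")  # about 1.022 MeV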
In October 2017, scientists reported further evidence that matter and antimatter, equally produced at the Big Bang, are identical, should completely annihilate each other and, as a result, the universe should not exist. This implies that there must be something, as yet unknown to scientists, that either stopped the complete mutual destruction of matter and antimatter in the early forming universe, or that gave rise to an imbalance between the two forms. Conservation Two quantities that can define an amount of matter in the quark–lepton sense (and antimatter in an antiquark–antilepton sense), baryon number and lepton number, are conserved in the Standard Model. A baryon such as the proton or neutron has a baryon number of one, and a quark, because there are three in a baryon, is given a baryon number of 1/3. So the net amount of matter, as measured by the number of quarks (minus the number of antiquarks, which each have a baryon number of −1/3), which is proportional to baryon number, and number of leptons (minus antileptons), which is called the lepton number, is practically impossible to change in any process. Even in a nuclear bomb, none of the baryons (protons and neutrons of which the atomic nuclei are composed) are destroyed—there are as many baryons after as before the reaction, so none of these matter particles are actually destroyed and none are even converted to non-matter particles (like photons of light or radiation). Instead, nuclear (and perhaps chromodynamic) binding energy is released, as these baryons become bound into mid-size nuclei having less energy (and, equivalently, less mass) per nucleon compared to the original small (hydrogen) and large (plutonium etc.) nuclei. Even in electron–positron annihilation, there is no net matter being destroyed, because there was zero net matter (zero total lepton number and baryon number) to begin with before the annihilation—one lepton minus one antilepton equals zero net lepton number—and this net amount matter does not change as it simply remains zero after the annihilation. In short, matter, as defined in physics, refers to baryons and leptons. The amount of matter is defined in terms of baryon and lepton number. Baryons and leptons can be created, but their creation is accompanied by antibaryons or antileptons; and they can be destroyed by annihilating them with antibaryons or antileptons. Since antibaryons/antileptons have negative baryon/lepton numbers, the overall baryon/lepton numbers are not changed, so matter is conserved. However, baryons/leptons and antibaryons/antileptons all have positive mass, so the total amount of mass is not conserved. Further, outside of natural or artificial nuclear reactions, there is almost no antimatter generally available in the universe (see baryon asymmetry and leptogenesis), so particle annihilation is rare in normal circumstances. Dark Ordinary matter, in the quarks and leptons definition, constitutes about 4% of the energy of the observable universe. The remaining energy is theorized to be due to exotic forms, of which 23% is dark matter and 73% is dark energy. In astrophysics and cosmology, dark matter is matter of unknown composition that does not emit or reflect enough electromagnetic radiation to be observed directly, but whose presence can be inferred from gravitational effects on visible matter. Observational evidence of the early universe and the Big Bang theory require that this matter have energy and mass, but not be composed of ordinary baryons (protons and neutrons). 
The commonly accepted view is that most of the dark matter is non-baryonic in nature. As such, it is composed of particles as yet unobserved in the laboratory. Perhaps they are supersymmetric particles, which are not Standard Model particles but relics formed at very high energies in the early phase of the universe and still floating about. Energy In cosmology, dark energy is the name given to the source of the repelling influence that is accelerating the rate of expansion of the universe. Its precise nature is currently a mystery, although its effects can reasonably be modeled by assigning matter-like properties such as energy density and pressure to the vacuum itself. Exotic Exotic matter is a concept of particle physics, which may include dark matter and dark energy but goes further to include any hypothetical material that violates one or more of the properties of known forms of matter. Some such materials might possess hypothetical properties like negative mass. Historical and philosophical study Classical antiquity (c. 600 BCE–c. 322 BCE) In ancient India, the Buddhist, Hindu, and Jain philosophical traditions each posited that matter was made of atoms (paramanu, pudgala) that were "eternal, indestructible, without parts, and innumerable" and which associated or dissociated to form more complex matter according to the laws of nature. They coupled their ideas of soul, or lack thereof, into their theory of matter. The strongest developers and defenders of this theory were the Nyaya-Vaisheshika school, with the ideas of the Indian philosopher Kanada being the most followed. Buddhist philosophers also developed these ideas in late 1st-millennium CE, ideas that were similar to the Vaisheshika school, but ones that did not include any soul or conscience. Jain philosophers included the soul (jiva), adding qualities such as taste, smell, touch, and color to each atom. They extended the ideas found in early literature of the Hindus and Buddhists by adding that atoms are either humid or dry, and this quality cements matter. They also proposed the possibility that atoms combine because of the attraction of opposites, and the soul attaches to these atoms, transforms with karma residue, and transmigrates with each rebirth. In ancient Greece, pre-Socratic philosophers speculated the underlying nature of the visible world. Thales (c. 624 BCE–c. 546 BCE) regarded water as the fundamental material of the world. Anaximander (c. 610 BCE–c. 546 BCE) posited that the basic material was wholly characterless or limitless: the Infinite (apeiron). Anaximenes (flourished 585 BCE, d. 528 BCE) posited that the basic stuff was pneuma or air. Heraclitus (c. 535 BCE–c. 475 BCE) seems to say the basic element is fire, though perhaps he means that all is change. Empedocles (c. 490–430 BCE) spoke of four elements of which everything was made: earth, water, air, and fire. Meanwhile, Parmenides argued that change does not exist, and Democritus argued that everything is composed of minuscule, inert bodies of all shapes called atoms, a philosophy called atomism. All of these notions had deep philosophical problems. Aristotle (384 BCE–322 BCE) was the first to put the conception on a sound philosophical basis, which he did in his natural philosophy, especially in Physics book I. He adopted as reasonable suppositions the four Empedoclean elements, but added a fifth, aether. Nevertheless, these elements are not basic in Aristotle's mind. 
Rather they, like everything else in the visible world, are composed of the basic principles matter and form. The word Aristotle uses for matter, ὕλη (hyle or hule), can be literally translated as wood or timber, that is, "raw material" for building. Indeed, Aristotle's conception of matter is intrinsically linked to something being made or composed. In other words, in contrast to the early modern conception of matter as simply occupying space, matter for Aristotle is definitionally linked to process or change: matter is what underlies a change of substance. For example, a horse eats grass: the horse changes the grass into itself; the grass as such does not persist in the horse, but some aspect of it—its matter—does. The matter is not specifically described (e.g., as atoms), but consists of whatever persists in the change of substance from grass to horse. Matter in this understanding does not exist independently (i.e., as a substance), but exists interdependently (i.e., as a "principle") with form and only insofar as it underlies change. It can be helpful to conceive of the relationship of matter and form as very similar to that between parts and whole. For Aristotle, matter as such can only receive actuality from form; it has no activity or actuality in itself, similar to the way that parts as such only have their existence in a whole (otherwise they would be independent wholes). Age of Enlightenment French philosopher René Descartes (1596–1650) originated the modern conception of matter. He was primarily a geometer. Unlike Aristotle, who deduced the existence of matter from the physical reality of change, Descartes arbitrarily postulated matter to be an abstract, mathematical substance that occupies space: For Descartes, matter has only the property of extension, so its only activity aside from locomotion is to exclude other bodies: this is the mechanical philosophy. Descartes makes an absolute distinction between mind, which he defines as unextended, thinking substance, and matter, which he defines as unthinking, extended substance. They are independent things. In contrast, Aristotle defines matter and the formal/forming principle as complementary principles that together compose one independent thing (substance). In short, Aristotle defines matter (roughly speaking) as what things are actually made of (with a potential independent existence), but Descartes elevates matter to an actual independent thing in itself. The continuity and difference between Descartes's and Aristotle's conceptions is noteworthy. In both conceptions, matter is passive or inert. In the respective conceptions matter has different relationships to intelligence. For Aristotle, matter and intelligence (form) exist together in an interdependent relationship, whereas for Descartes, matter and intelligence (mind) are definitionally opposed, independent substances. Descartes's justification for restricting the inherent qualities of matter to extension is its permanence, but his real criterion is not permanence (which equally applied to color and resistance), but his desire to use geometry to explain all material properties. Like Descartes, Hobbes, Boyle, and Locke argued that the inherent properties of bodies were limited to extension, and that so-called secondary qualities, like color, were only products of human perception. English philosopher Isaac Newton (1643–1727) inherited Descartes's mechanical conception of matter. 
In the third of his "Rules of Reasoning in Philosophy", Newton lists the universal qualities of matter as "extension, hardness, impenetrability, mobility, and inertia". Similarly in Optics he conjectures that God created matter as "solid, massy, hard, impenetrable, movable particles", which were "...even so very hard as never to wear or break in pieces". The "primary" properties of matter were amenable to mathematical description, unlike "secondary" qualities such as color or taste. Like Descartes, Newton rejected the essential nature of secondary qualities. Newton developed Descartes's notion of matter by restoring to matter intrinsic properties in addition to extension (at least on a limited basis), such as mass. Newton's use of gravitational force, which worked "at a distance", effectively repudiated Descartes's mechanics, in which interactions happened exclusively by contact. Though Newton's gravity would seem to be a power of bodies, Newton himself did not admit it to be an essential property of matter. Carrying the logic forward more consistently, Joseph Priestley (1733–1804) argued that corporeal properties transcend contact mechanics: chemical properties require the capacity for attraction. He argued matter has other inherent powers besides the so-called primary qualities of Descartes, et al. 19th and 20th centuries Since Priestley's time, there has been a massive expansion in knowledge of the constituents of the material world (viz., molecules, atoms, subatomic particles). In the 19th century, following the development of the periodic table, and of atomic theory, atoms were seen as being the fundamental constituents of matter; atoms formed molecules and compounds. The common definition in terms of occupying space and having mass is in contrast with most physical and chemical definitions of matter, which rely instead upon its structure and upon attributes not necessarily related to volume and mass. At the turn of the nineteenth century, the knowledge of matter began a rapid evolution. Aspects of the Newtonian view still held sway. James Clerk Maxwell discussed matter in his work Matter and Motion. He carefully separates "matter" from space and time, and defines it in terms of the object referred to in Newton's first law of motion. However, the Newtonian picture was not the whole story. In the 19th century, the term "matter" was actively discussed by a host of scientists and philosophers, and a brief outline can be found in Levere. A textbook discussion from 1870 suggests matter is what is made up of atoms:Three divisions of matter are recognized in science: masses, molecules and atoms. A Mass of matter is any portion of matter appreciable by the senses. A Molecule is the smallest particle of matter into which a body can be divided without losing its identity. An Atom is a still smaller particle produced by division of a molecule. Rather than simply having the attributes of mass and occupying space, matter was held to have chemical and electrical properties. In 1909 the famous physicist J. J. Thomson (1856–1940) wrote about the "constitution of matter" and was concerned with the possible connection between matter and electrical charge. In the late 19th century with the discovery of the electron, and in the early 20th century, with the Geiger–Marsden experiment discovery of the atomic nucleus, and the birth of particle physics, matter was seen as made up of electrons, protons and neutrons interacting to form atoms. 
There then developed an entire literature concerning the "structure of matter", ranging from the "electrical structure" in the early 20th century, to the more recent "quark structure of matter", introduced as early as 1992 by Jacob with the remark: "Understanding the quark structure of matter has been one of the most important advances in contemporary physics." In this connection, physicists speak of matter fields, and speak of particles as "quantum excitations of a mode of the matter field". And here is a quote from de Sabbata and Gasperini: "With the word 'matter' we denote, in this context, the sources of the interactions, that is spinor fields (like quarks and leptons), which are believed to be the fundamental components of matter, or scalar fields, like the Higgs particles, which are used to introduced mass in a gauge theory (and that, however, could be composed of more fundamental fermion fields)." Protons and neutrons however are not indivisible: they can be divided into quarks. And electrons are part of a particle family called leptons. Both quarks and leptons are elementary particles, and were in 2004 seen by authors of an undergraduate text as being the fundamental constituents of matter. These quarks and leptons interact through four fundamental forces: gravity, electromagnetism, weak interactions, and strong interactions. The Standard Model of particle physics is currently the best explanation for all of physics, but despite decades of efforts, gravity cannot yet be accounted for at the quantum level; it is only described by classical physics (see Quantum gravity and Graviton) to the frustration of theoreticians like Stephen Hawking. Interactions between quarks and leptons are the result of an exchange of force-carrying particles such as photons between quarks and leptons. The force-carrying particles are not themselves building blocks. As one consequence, mass and energy (which to our present knowledge cannot be created or destroyed) cannot always be related to matter (which can be created out of non-matter particles such as photons, or even out of pure energy, such as kinetic energy). Force mediators are usually not considered matter: the mediators of the electric force (photons) possess energy (see Planck relation) and the mediators of the weak force (W and Z bosons) have mass, but neither are considered matter either. However, while these quanta are not considered matter, they do contribute to the total mass of atoms, subatomic particles, and all systems that contain them. Summary The modern conception of matter has been refined many times in history, in light of the improvement in knowledge of just what the basic building blocks are, and in how they interact. The term "matter" is used throughout physics in a wide variety of contexts: for example, one refers to "condensed matter physics", "elementary matter", "partonic" matter, "dark" matter, "anti"-matter, "strange" matter, and "nuclear" matter. In discussions of matter and antimatter, the former has been referred to by Alfvén as koinomatter (Gk. common matter). It is fair to say that in physics, there is no broad consensus as to a general definition of matter, and the term "matter" usually is used in conjunction with a specifying modifier. The history of the concept of matter is a history of the fundamental length scales used to define matter. Different building blocks apply depending upon whether one defines matter on an atomic or elementary particle level. 
One may use a definition that matter is atoms, or that matter is hadrons, or that matter is leptons and quarks depending upon the scale at which one wishes to define matter.
Physical sciences
Physics
null
12873937
https://en.wikipedia.org/wiki/Bleach
Bleach
Bleach is the generic name for any chemical product that is used industrially or domestically to remove color from (i.e. to whiten) fabric or fiber (in a process called bleaching) or to disinfect after cleaning. It often refers specifically to a dilute solution of sodium hypochlorite, also called "liquid bleach". Many bleaches have broad-spectrum bactericidal properties, making them useful for disinfecting and sterilizing. They are used in swimming pool sanitation to control bacteria, viruses, and algae and in many places where sterile conditions are required. They are also used in many industrial processes, notably in the bleaching of wood pulp. Bleaches also have other minor uses, like removing mildew, killing weeds, and increasing the longevity of cut flowers. Bleaches work by reacting with many colored organic compounds, such as natural pigments, and turning them into colorless ones. While most bleaches are oxidizing agents (chemicals that can remove electrons from other molecules), some are reducing agents (that donate electrons). Chlorine, a powerful oxidizer, is the active agent in many household bleaches. Since pure chlorine is a toxic corrosive gas, these products usually contain hypochlorite, which releases chlorine. "Bleaching powder" usually refers to a formulation containing calcium hypochlorite. Oxidizing bleaching agents that do not contain chlorine are usually based on peroxides, such as hydrogen peroxide, sodium percarbonate, and sodium perborate. These bleaches are called "non-chlorine bleach", "oxygen bleach", or "color-safe bleach". Reducing bleaches have niche uses, such as sulfur dioxide, which is used to bleach wool, either as gas or from solutions of sodium dithionite, and sodium borohydride. Bleaches generally react with many other organic substances besides the intended colored pigments, so they can weaken or damage natural materials like fibers, cloth, and leather, and intentionally applied dyes, such as the indigo of denim. For the same reason, ingestion of the products, breathing of the fumes, or contact with skin or eyes can cause bodily harm and damage health. History The earliest form of bleaching involved spreading fabrics and cloth out in a bleachfield to be whitened by the action of the Sun and water. In the 17th century, there was a significant cloth bleaching industry in Western Europe, using alternating alkaline baths (generally lye) and acid baths (such as lactic acid from sour milk, and later diluted sulfuric acid). The whole process lasted up to six months. Chlorine-based bleaches, which shortened that process from months to hours, were invented in Europe in the late 18th century. Swedish chemist Carl Wilhelm Scheele discovered chlorine in 1774, and in 1785 Savoyard scientist Claude Berthollet recognized that it could be used to bleach fabrics. Berthollet also discovered sodium hypochlorite, which became the first commercial bleach, named Eau de Javel ("Javel water") after the borough of Javel, near Paris, where it was produced. Scottish chemist and industrialist Charles Tennant proposed in 1798 a solution of calcium hypochlorite as an alternative for Javel water, and patented bleaching powder (solid calcium hypochlorite) in 1799. Around 1820, French chemist Antoine Germain Labarraque discovered the disinfecting and deodorizing ability of hypochlorites and was instrumental in popularizing their use for such purpose. 
His work greatly improved medical practice, public health, and the sanitary conditions in hospitals, slaughterhouses, and all industries dealing with animal products. Louis Jacques Thénard first produced hydrogen peroxide in 1818 by reacting barium peroxide with nitric acid. Hydrogen peroxide was first used for bleaching in 1882, but did not become commercially important until after 1930. Sodium perborate as a laundry bleach has been used in Europe since the early twentieth century, and became popular in North America in the 1980s. Mechanism of action Whitening Colors of natural organic materials typically arise from organic pigments, such as beta carotene. Chemical bleaches work in one of two ways: An oxidizing bleach works by breaking the chemical bonds that make up the chromophore. This changes the molecule into a different substance that either does not contain a chromophore or contains a chromophore that does not absorb visible light. This is the mechanism of bleaches based on chlorine, and also of oxygen anions, which react through an initial nucleophilic attack. A reducing bleach works by converting double bonds in the chromophore into single bonds. This eliminates the ability of the chromophore to absorb visible light. This is the mechanism of bleaches based on sulfur dioxide. Sunlight acts as a bleach through a process leading to similar results: high-energy photons of light, often in the violet or ultraviolet range, can disrupt the bonds in the chromophore, rendering the resulting substance colorless. Extended exposure often leads to massive discoloration, usually reducing the colors to white or a very faded blue. Antimicrobial efficacy The broad-spectrum effectiveness of most bleaches is due to their general chemical reactivity against organic compounds, rather than the selective inhibitory or toxic actions of antibiotics. They irreversibly denature or destroy many proteins, including prions, making them extremely versatile disinfectants. Hypochlorite bleaches in low concentration were also found to attack bacteria by interfering with heat shock proteins on their walls. According to a 2013 Home Hygiene and Health report, using bleach, whether chlorine- or peroxide-based, significantly increases the germicidal efficiency of laundry even at low temperatures (30–40 °C), which makes it possible to eliminate viruses, bacteria, and fungi from a variety of clothing in a home setting. Types of bleaches Most industrial and household bleaches belong to three broad classes: Chlorine-based bleaches, whose active agent is chlorine, usually from the decomposition of some chlorine compound like hypochlorite or chloramine. Peroxide-based bleaches, whose active agent is oxygen, almost always from the decomposition of a peroxide compound like hydrogen peroxide. Sulfur dioxide-based bleaches, whose active agent is sulfur dioxide, possibly from the decomposition of some oxosulfur anion. Chlorine-based bleaches Chlorine-based bleaches are found in many household "bleach" products, as well as in specialized products for hospitals, public health, water chlorination, and industrial processes. The grade of chlorine-based bleaches is often expressed as percent active chlorine. One gram of 100% active chlorine bleach has the same bleaching power as one gram of elemental chlorine (a worked example appears after the list of bleach types below). The most common chlorine-based bleaches are: Sodium hypochlorite (NaOCl), usually as a 3–6% solution in water, usually called "liquid bleach" or just "bleach". Historically called "Javel water" (eau de Javel).
It is used in many households to whiten laundry, disinfect hard surfaces in kitchens and bathrooms, treat water for drinking, and keep swimming pools free of infectious agents. Bleaching powder (formerly known as "chlorinated lime"), usually a mixture of calcium hypochlorite (Ca(OCl)2), calcium hydroxide (slaked lime, Ca(OH)2), and calcium chloride (CaCl2) in variable amounts. Sold as a white powder or in tablets, it is used in many of the same applications as sodium hypochlorite but is more stable and contains more available chlorine. Chlorine gas (Cl2). It is used as a disinfectant in water treatment, especially to make drinking water and in large public swimming pools. It was used extensively to bleach wood pulp, but this use has decreased significantly due to environmental concerns. Chlorine dioxide (ClO2). This unstable gas is generated in situ or stored as dilute aqueous solutions. It finds large-scale applications for the bleaching of wood pulp, fats and oils, cellulose, flour, textiles, beeswax, skin, and in a number of other industries. Other examples of chlorine-based bleaches, used mostly as disinfectants, are monochloramine, halazone, and sodium dichloroisocyanurate. Peroxide-based bleaches Peroxide-based bleaches are characterized by the peroxide chemical group, namely two oxygen atoms connected by a single bond (–O–O–). This bond is easily broken, giving rise to very reactive oxygen species, which are the active agents of this type of bleach. The main products in this class are: Hydrogen peroxide (H2O2). It is used, for example, to bleach wood pulp and hair or to prepare other bleaching agents like perborates, percarbonates, peracids, etc. Sodium percarbonate (2Na2CO3·3H2O2), an adduct of hydrogen peroxide and sodium carbonate ("soda ash" or "washing soda", Na2CO3). Dissolved in water, it yields a solution of the two products that combines the degreasing action of the carbonate with the bleaching action of the peroxide. Sodium perborate (NaBO3). Dissolved in water it forms some hydrogen peroxide, but also the perborate anion, which can perform nucleophilic oxidation. Peracetic (peroxoacetic) acid (CH3CO3H). Generated in situ by some laundry detergents, and also marketed for use in industrial and agricultural disinfection and water treatment. Benzoyl peroxide ((C6H5CO)2O2). It is used in topical medications for acne and to bleach flour. Ozone (O3). While not properly a peroxide, its mechanism of action is similar. It is used in the manufacture of paper products, especially newsprint and white kraft paper. Potassium persulfate (K2S2O8) and other persulfate salts. Alongside ammonium and sodium persulfate, it is common in hair-lightening products. Permanganate salts such as potassium permanganate (KMnO4). In the food industry, other oxidizing products like bromates are used as flour bleaching and maturing agents. Reducing bleaches Sodium dithionite (also known as sodium hydrosulfite) is one of the most important reductive bleaching agents. It is a white crystalline powder with a weak sulfurous odor. It can be obtained by reacting sodium bisulfite with zinc. 2 NaHSO3 + Zn → Na2S2O4 + Zn(OH)2 It is used as such in some industrial dyeing processes to eliminate excess dye, residual oxide, and unintended pigments and for bleaching wood pulp. Reaction of sodium dithionite with formaldehyde produces Rongalite. Na2S2O4 + 2 CH2O + H2O → NaHOCH2SO3 + NaHOCH2SO2 Rongalite is used in bleaching wood pulp, cotton, wool, leather and clay.
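The "percent active chlorine" grading mentioned above lends itself to a short worked example. The sketch below is an illustration only, not drawn from the article: it uses rounded molar masses and the usual convention that one mole of hypochlorite ion carries the oxidizing power of one mole of Cl2.

# Illustrative calculation of "percent active chlorine" for common hypochlorites.
# Rounded molar masses (g/mol) are supplied as assumptions; one mole of OCl- is
# counted as one mole of Cl2-equivalent oxidizing power.
M_CL2 = 70.9
compounds = {
    "sodium hypochlorite NaOCl": (74.4, 1),      # (molar mass, Cl2 equivalents per mole)
    "calcium hypochlorite Ca(OCl)2": (143.0, 2),
}
for name, (molar_mass, cl2_equiv) in compounds.items():
    active_cl = 100 * cl2_equiv * M_CL2 / molar_mass
    print(f"{name}: ~{active_cl:.0f}% active chlorine")
# Prints roughly 95% for NaOCl and 99% for Ca(OCl)2, consistent with bleaching
# powder delivering more available chlorine per gram than pure sodium hypochlorite,
# before either is diluted into a household product.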
Photographic bleach In negative film processing, silver halide grains are associated with couplers which, on development, produce metallic silver and a colored image. The silver is 'bleached' to a soluble form in a solution of ferric EDTA, which is then dissolved in 'fix', a solution of sodium or ammonium thiosulfate. The procedure is the same for paper processing except that the EDTA and thiosulfate are mixed in 'bleachfix'. In reversal processing, residual silver in the emulsion after the first development is converted to a soluble silver salt using a chemical bleach, most commonly EDTA. A conventional fixer then dissolves the converted silver but leaves the unexposed silver halide intact. This unexposed halide is then exposed to light or chemically treated so that a second development produces a positive image. In color and chromogenic film, this also generates a dye image in proportion to the silver. Photographic bleaches are also used in black-and-white photography to selectively remove silver and so reduce silver density in negatives or prints. In such cases, the bleach composition is typically an acid solution of potassium dichromate. Environmental impact A Risk Assessment Report (RAR) on sodium hypochlorite, conducted by the European Union under Regulation EEC 793/93, concluded that this substance is safe for the environment in all its current, normal uses. This is due to its high reactivity and instability. The disappearance of hypochlorite in the natural aquatic environment is practically immediate, reaching concentrations as low as 10⁻²² μg/L or less in all emission scenarios within a short time. In addition, it was found that while volatile chlorine species may be relevant in some indoor scenarios, they have a negligible impact in open environmental conditions. Further, the role of hypochlorite pollution is assumed to be negligible in soils. Industrial bleaching agents can be sources of concern. For example, the use of elemental chlorine in the bleaching of wood pulp produces organochlorines and persistent organic pollutants, including dioxins. According to an industry group, the use of chlorine dioxide in these processes has reduced the dioxin generation to under-detectable levels. However, the respiratory risk from chlorine and highly toxic chlorinated byproducts still exists. A European study conducted in 2008 indicated that sodium hypochlorite and organic chemicals (e.g., surfactants, fragrances) contained in several household cleaning products can react to generate chlorinated volatile organic compounds (VOCs). Some of these chlorinated compounds, which are emitted during cleaning applications, are toxic and probable human carcinogens. The study showed that indoor air concentrations significantly increase (8–52 times for chloroform and 1–1170 times for carbon tetrachloride, above baseline quantities in the household) during the use of bleach-containing products. The increase in chlorinated volatile organic compound concentrations was the lowest for plain bleach and the highest for the products in the form of "thick liquid and gel". The significant increases observed in indoor air concentrations of several chlorinated VOCs (especially carbon tetrachloride and chloroform) indicate that bleach use may be a source that could be important in terms of inhalation exposure to these compounds.
While the authors suggested that using these cleaning products may significantly increase the cancer risk, this conclusion appears to be hypothetical: The highest level cited for a concentration of carbon tetrachloride (seemingly of highest concern) is 459 micrograms per cubic meter, translating to 0.073 ppm (parts per million), or 73 ppb (parts per billion). The OSHA-allowable time-weighted average concentration over eight hours is 10 ppm, almost 140 times higher; the OSHA highest allowable peak concentration (a 5-minute exposure within any 4-hour period) is 200 ppm, twice as high as the reported highest peak level (from the headspace of a bottle of a sample of bleach plus detergent). Disinfection Sodium hypochlorite solution (3–6%, common household bleach) is typically diluted for safe use when disinfecting surfaces and when used to treat drinking water. A weak solution of 2% household bleach in warm water is typical for sanitizing smooth surfaces before the brewing of beer or wine. US government regulations (21 CFR 178 Subpart C) allow food processing equipment and food contact surfaces to be sanitized with solutions containing bleach, provided that the solution is allowed to drain adequately before contact with food and that the solutions do not exceed 200 parts per million (ppm) available chlorine (for example, one tablespoon of typical household bleach containing 5.25% sodium hypochlorite, per gallon of water). A 1-in-47 dilution of household bleach with water (1 part bleach to 47 parts water: e.g. one teaspoon of bleach in a cup of water, or 21 ml per litre, or about 1/3 cup of bleach in a gallon of water) is effective against many bacteria and some viruses in homes. Even "scientific-grade", commercially produced disinfection solutions such as Virocidin-X usually have sodium hypochlorite as their sole active ingredient, though they also contain surfactants (to prevent beading) and fragrances (to conceal the bleach smell). See hypochlorous acid for a discussion of the mechanism for disinfectant action. An oral rinse with a 0.05% dilute solution of household bleach has been shown to treat gingivitis. Color-safe bleach Color-safe bleach is a solution with hydrogen peroxide as the active ingredient (for stain removal) rather than sodium hypochlorite or chlorine. It also has chemicals in it that help brighten colors. Though hydrogen peroxide is used for sterilization purposes and water treatment, its ability to disinfect laundry is limited because the concentration of hydrogen peroxide in laundry products is lower than what is used in other applications. Health hazards The safety of bleaches depends on the compounds present and their concentration. Generally speaking, the ingestion of bleaches will cause damage to the esophagus and stomach, possibly leading to death. On contact with the skin or eyes, bleach causes irritation, drying, and potentially burns. Inhalation of bleach fumes can cause mild irritation of the upper airways. Personal protective equipment should always be used when using bleach. Bleach should never be mixed with vinegar or other acids, as this will create highly toxic chlorine gas, which can cause severe burns internally and externally. Mixing bleach with ammonia similarly produces chloramine gas, which can burn the lungs. Mixing bleach with rubbing alcohol or acetone makes chloroform, while mixing with hydrogen peroxide results in an exothermic and potentially explosive chemical reaction that releases oxygen.
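Both quantitative claims above can be sanity-checked with a little arithmetic. The sketch below is a back-of-envelope check only; the molar volume, molar mass, tablespoon and gallon volumes, and the bleach density it uses are standard reference values supplied here as assumptions, not figures taken from the article.

# 1) 459 micrograms per cubic meter of carbon tetrachloride expressed in ppm by volume.
M_CCL4 = 153.8        # g/mol, molar mass of CCl4 (standard value, assumed)
V_MOLAR = 24.45       # L/mol, molar volume of an ideal gas at 25 C and 1 atm (assumed)
ug_per_m3 = 459
ppm = ug_per_m3 * V_MOLAR / M_CCL4 / 1000
print(f"{ppm:.3f} ppm, i.e. about {ppm * 1000:.0f} ppb")   # ~0.073 ppm, ~73 ppb

# 2) One tablespoon of 5.25% sodium hypochlorite bleach per gallon of water.
tbsp_ml = 14.8                # one US tablespoon (assumed)
gallon_g = 3785               # one US gallon of water, ~3785 g (assumed)
density = 1.1                 # g/mL, approximate density of household bleach (assumed)
naocl_fraction = 0.0525       # 5.25% NaOCl by weight
cl2_per_naocl = 70.9 / 74.4   # grams of available chlorine per gram of NaOCl
available_cl_g = tbsp_ml * density * naocl_fraction * cl2_per_naocl
print(f"~{1e6 * available_cl_g / gallon_g:.0f} ppm available chlorine")   # on the order of 200 ppm

The first check reproduces the article's 73 ppb figure; the second lands near the 200 ppm regulatory ceiling, which is presumably why the tablespoon-per-gallon rule is quoted.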
False claims as a cure Miracle Mineral Supplement (MMS), also promoted as "Master Mineral Solution" or "Chlorine Dioxide Solution" or CDS, to evade restrictions by online retail platforms, is a bleach solution that has been fraudulently promoted as a cure-all since 2006. Its main active ingredient is sodium chlorite, which is "activated" with citric acid to form chlorine dioxide. In an attempt to evade health regulations, its inventor, Jim Humble, a former Scientologist, founded the Genesis II Church of Health and Healing, which considers MMS as its sacrament.
Technology
Food, water and health
null
23672379
https://en.wikipedia.org/wiki/Agricultural%20machinery
Agricultural machinery
Agricultural machinery relates to the mechanical structures and devices used in farming or other agriculture. There are many types of such equipment, from hand tools and power tools to tractors and the farm implements that they tow or operate. Machinery is used in both organic and nonorganic farming. Especially since the advent of mechanised agriculture, agricultural machinery is an indispensable part of how the world is fed. Agricultural machinery can be regarded as part of wider agricultural automation technologies, which includes the more advanced digital equipment and agricultural robotics. While robots have the potential to automate the three key steps involved in any agricultural operation (diagnosis, decision-making and performing), conventional motorized machinery is used principally to automate only the performing step where diagnosis and decision-making are conducted by humans based on observations and experience. History The Industrial Revolution With the coming of the Industrial Revolution and the development of more complicated machines, farming methods took a great leap forward. Instead of harvesting grain by hand with a sharp blade, wheeled machines cut a continuous swath. Instead of threshing the grain by beating it with sticks, threshing machines separated the seeds from the heads and stalks. The first tractors appeared in the late 19th century. Steam power Power for agricultural machinery was originally supplied by ox or other domesticated animals. With the invention of steam power came the portable engine, and later the traction engine, a multipurpose, mobile energy source that was the ground-crawling cousin to the steam locomotive. Agricultural steam engines took over the heavy pulling work of oxen, and were also equipped with a pulley that could power stationary machines via the use of a long belt. The steam-powered machines were low-powered by today's standards but because of their size and their low gear ratios, they could provide a large drawbar pull. The slow speed of steam-powered machines led farmers to comment that tractors had two speeds: "slow, and damn slow". Internal combustion engines The internal combustion engine; first the petrol engine, and later diesel engines; became the main source of power for the next generation of tractors. These engines also contributed to the development of the self-propelled combine harvester and thresher, or the combine harvester (also shortened to 'combine'). Instead of cutting the grain stalks and transporting them to a stationary threshing machine, these combines cut, threshed, and separated the grain while moving continuously throughout the field. Agricultural machinery types Tractors Tractors do the majority of work on a modern farm. They are used to push/pull implements—machines that till the ground, plant seeds, and perform other tasks. Tillage implements prepare the soil for planting by loosening the soil and killing weeds or competing plants. The best-known is the plow, the ancient implement that was upgraded in 1838 by John Deere. Plows are now used less frequently in the U.S. than formerly, with offset disks used instead to turn over the soil, and chisels used to gain the depth needed to retain moisture. Combines Combine is a machine designed to efficiently harvest a variety of grain crops. The name derives from its combining four separate harvesting operations—reaping, threshing, gathering, and winnowing—into a single process. 
Among the crops harvested with a combine are wheat, rice, oats, rye, barley, corn (maize), sorghum, soybeans, flax (linseed), sunflowers and rapeseed. Planters The most common type of seeder is called a planter, and spaces seeds out evenly in long rows, which are usually two to three feet apart. Some crops are planted by drills, which put out much more seed in rows less than a foot apart, blanketing the field with crops. Transplanters automate the task of transplanting seedlings to the field. With the widespread use of plastic mulch, plastic mulch layers, transplanters, and seeders lay down long rows of plastic, and plant through them automatically. Sprayers After planting, other agricultural machinery such as self-propelled sprayers can be used to apply fertilizer and pesticides. Sprayer application protects crops from weeds, fungal diseases, and insect pests by applying herbicides, fungicides, and insecticides. Spraying herbicide or planting a cover crop are two ways to manage weed growth. Balers and other agriculture implements Hay balers can be used to tightly package grass or alfalfa into a storable form for the winter months. Modern irrigation relies on machinery. Engines, pumps and other specialized gear provide water quickly and in high volumes to large areas of land. Similar types of equipment such as agriculture sprayers can be used to deliver fertilizers and pesticides. Besides the tractor, other vehicles have been adapted for use in farming, including trucks, airplanes, and helicopters, for purposes ranging from transporting crops and making equipment mobile to aerial spraying and livestock herd management. New technology and the future The basic technology of agricultural machines has changed little in the last century. Though modern harvesters and planters may do a better job or be slightly tweaked from their predecessors, the combine of today still cuts, threshes, and separates grain in the same way it has always been done. However, technology is changing the way that humans operate the machines, as computer monitoring systems, GPS locators and self-steer programs allow the most advanced tractors and implements to be more precise and less wasteful in the use of fuel, seed, or fertilizer. In the foreseeable future, there may be mass production of driverless tractors, which use GPS maps and electronic sensors. Agricultural automation The Food and Agriculture Organization of the United Nations (FAO) defines agricultural automation as the use of machinery and equipment in agricultural operations to improve their diagnosis, decision-making, or performance, reducing the drudgery of agricultural work and improving the timeliness, and potentially the precision, of agricultural operations. The technological evolution in agriculture has been a journey from manual tools to animal traction, then to motorized mechanization, and further to digital equipment. This progression has culminated in the use of robotics with artificial intelligence (AI). Motorized mechanization, for instance, automates operations like ploughing, seeding, fertilizing, milking, feeding, and irrigating, thereby significantly reducing manual labor. With the advent of digital automation technologies, it has become possible to automate diagnosis and decision-making. For instance, autonomous crop robots can harvest and seed crops, and drones can collect information to help automate input applications. Tractors, on the other hand, can be transformed into automated vehicles that can sow fields independently.
A 2023 report by the United States Department of Agriculture (USDA) revealed that over 50% of corn, cotton, rice, sorghum, soybeans, and winter wheat in the United States is planted using automated guidance systems. These systems, which utilize technology to autonomously steer farm equipment, only require supervision from a farmer. This is a clear example of how agricultural automation is being implemented in real-world farming scenarios. Open source agricultural equipment Many farmers are upset by their inability to fix the new types of high-tech farm equipment. This is due mostly to companies using intellectual property law to prevent farmers from having the legal right to fix their equipment (or gain access to the information to allow them to do it). In October 2015 an exemption was added to the DMCA to allow inspection and modification of the software in cars and other vehicles including agricultural machinery. The Open Source Agriculture movement counts different initiatives and organizations, such as Farm Labs, a network in Europe; l'Atelier Paysan, a French cooperative that teaches farmers how to build and repair their tools; and Ekylibre, an open-source company that provides farmers in France with open source software (SaaS) to manage farming operations. In the United States, the MIT Media Lab's Open Agriculture Initiative seeks to foster "the creation of an open-source ecosystem of technologies that enable and promote transparency, networked experimentation, education, and hyper-local production". It develops the Personal Food Computer, an educational project to create a "controlled environment agriculture technology platform that uses robotic systems to control and monitor climate, energy, and plant growth inside of a specialized growing chamber". It includes the development of Open Phenom, an open source library with open data sets for climate recipes, which link the phenotype response of plants (taste, nutrition) to the environmental, biological, genetic and resource-related variables necessary for cultivation (inputs). Plants with the same genetics can naturally vary in color, size, texture, growth rate, yield, flavor, and nutrient density according to the environmental conditions in which they are produced. Manufacturers Active AGCO Agrale Al-Ghazi Tractors Algerian Tractors Company Arbos ARGO SpA Carraro Agritalia Case IH Challenger Tractors Claas CNH Industrial Daedong Deutz-Fahr Escorts Limited Fendt Goldoni Iseki Jacto JCB John Deere Kharkiv Tractor Plant Kirov Plant Kubota Lamborghini Trattori Landini Lindner LS Mtron Mahindra Tractors Massey Ferguson McCormick Tractors Millat Tractors Minsk Tractor Works Mitsubishi Agricultural Machinery New Holland Agriculture Pronar Shibaura Sonalika Tractors SAME SAS Motors SDF Group Stara Steyr TAFE TYM Ursus SA Valpadana Valtra Versatile Yanmar YTO Group Zetor Zoomlion Balwaan Agri Former Allis-Chalmers Case Corporation Ferguson-Brown Company Fiat Trattori Ford International Harvester Leyland Tractors Massey-Harris Renault Agriculture
Technology
Basics_2
null
23672633
https://en.wikipedia.org/wiki/Wattle%20and%20daub
Wattle and daub
Wattle and daub is a composite building method used for making walls and buildings, in which a woven lattice of wooden strips called "wattle" is "daubed" with a sticky material usually made of some combination of wet soil, clay, sand, and straw. Wattle and daub has been used for at least 6,000 years and is still an important construction method in many parts of the world. Many historic buildings include wattle and daub construction. History The wattle and daub technique has been used since the Neolithic period. It was common for houses of Linear pottery and Rössen cultures of middle Europe, but is also found in Western Asia (Çatalhöyük, Shillourokambos) as well as in North America (Mississippian culture) and South America (Brazil). In Africa it is common in the architecture of traditional houses such as those of the Ashanti people. Its usage dates back at least 6,000 years. There are suggestions that construction techniques such as lath and plaster and even cob may have evolved from wattle and daub. Fragments from prehistoric wattle and daub buildings have been found in Africa, Europe, Mesoamerica and North America. Evidence for wattle and daub (or "wattle and reed") fire pits, storage bins, and buildings shows up in Egyptian archaeological sites such as Merimda and El Omari, dating back to the 5th millennium BCE, predating the use of mud brick and continuing to be the preferred building material until about the start of the First Dynasty. It continued to flourish well into the New Kingdom and beyond. Vitruvius refers to it as being employed in Rome. A review of English architecture especially reveals that the sophistication of this craft is dependent on the various styles of timber frame housing. The wattle and plaster process has been replaced in modern architecture by brick and mortar or by lath and plaster, a common building material for wall and ceiling surfaces, in which a series of nailed wooden strips are covered with plaster smoothed into a flat surface. In many regions this building method has itself been overtaken by drywall construction using plasterboard sheets. Wattle The wattle is made by weaving thin branches (either whole, or more usually split) or slats between upright stakes. The wattle may be made as loose panels, slotted between timber framing to make infill panels, or made in place to form the whole of a wall. In different regions, the material of wattle can be different. For example, at the Mitchell Site on the northern outskirts of the city of Mitchell, South Dakota, willow has been found as the wattle material of the walls of the house. Reeds and vines can also be used as wattle material. The origin of the term wattle describing a group of acacias in Australia, is derived from the common use of acacias as wattle in early Australian European settlements. Daub Daub is usually created from a mixture of ingredients from three categories: binders, aggregates and reinforcement. Binders hold the mix together and can include clay, lime, chalk dust and limestone dust. Aggregates give the mix its bulk and dimensional stability through materials such as mud, sand, crushed chalk and crushed stone. Reinforcement is provided by straw, hair, hay or other fibrous materials, and helps to hold the mix together as well as to control shrinkage and provide flexibility. The daub may be mixed by hand, or by treading – either by humans or livestock. It is then applied to the wattle and allowed to dry, and often then whitewashed to increase its resistance to rain. 
Sometimes there can be more than one layer of daub. At the Mitchell Site, the anterior of the house had double layers of burned daub. Styles of infill panels There were two popular choices for wattle and daub infill paneling: close-studded paneling and square paneling. Close-studding Close-studding panels create a much narrower space between the timbers: anywhere from 7 to 16 inches (18 to 40 cm). For this style of panel, weaving is too difficult, so the wattles run horizontally and are known as ledgers. The ledgers are sprung into each upright timber (stud) through a system of augered holes on one side and short chiseled grooves along the other. The holes (along with holes of square paneling) are drilled at a slight angle towards the outer face of each stud. This allows room for upright hazels to be tied to ledgers from the inside of the building. The horizontal ledgers are placed every two to three feet (0.6 to 0.9 metres) with whole hazel rods positioned upright top to bottom and lashed to the ledgers. These hazel rods are generally tied a finger-width apart with 6–8 rods each with a 16-inch (40 cm) width. Gaps allow key formation for drying. Square panels Square panels are large, wide panels typical of some later timber-frame houses. These panels may be square in shape, or sometimes triangular to accommodate arched or decorative bracing. This style requires the wattles to be woven for better support of the daub. To insert wattles in a square panel several steps are required. First, a series of evenly spaced holes are drilled along the middle of the inner face of each upper timber. Next, a continuous groove is cut along the middle of each inner face of the lower timber in each panel. Vertical slender timbers, known as staves, are then inserted and these hold the whole panel within the timber frame. The staves are positioned into the holes and then sprung into the grooves. They must be placed with sufficient gaps to weave the flexible horizontal wattles. Applications In some places or cultures, the technique of wattle and daub was used with different materials and thus has different names. Pug and pine In the early days of the colonisation of South Australia, in areas where substantial timber was unavailable, pioneers' cottages and other small buildings were frequently constructed with light vertical timbers, which may have been "native pine" (Callitris or Casuarina spp.), driven into the ground, the gaps being stopped with pug (kneaded clay and grass mixture). Another term for this construction is palisade and pug. Mud and stud "Mud and stud" is a similar process to wattle and daub, with a simple frame consisting only of upright studs joined by cross rails at the tops and bottoms. Thin staves of ash were attached, then daubed with a mixture of mud, straw, hair and dung. The style of building was once common in Lincolnshire. Pierrotage, columbage Pierrotage is the infilling material used in French Vernacular architecture of the Southern United States to infill between half-timbering with diagonal braces, which is similar to daub. It is usually made of lime mortar clay mixed with small stones. It is also called bousillage or bouzillage, especially in French Vernacular architecture of Louisiana of the early 1700s. The materials of bousillage are Spanish moss or clay and grass. Bousillage also refers to the type of brick molded with the same materials and used as infilling between posts. Columbage refers to the timber-framed construction with diagonal bracing of the framework. 
Pierrotage or bousillage is the material used to fill the spaces between the structural timbers. Bajarreque Bajarreque is a wall constructed with the technique of wattle and daub. The wattle here is made of bagasse, and the daub is a mix of clay and straw. Jacal Jacal can refer to a type of crude house in the southwestern US whose walls are built with wattle and daub. Closely spaced upright sticks or poles driven into the ground with small branches (wattle) interwoven between them make the structural frame of the wall. Mud or adobe clay (daub) covers the outside. To provide additional weather protection, the wall is usually plastered.
Technology
Building materials
null
18600440
https://en.wikipedia.org/wiki/Osmosis
Osmosis
Osmosis (, ) is the spontaneous net movement or diffusion of solvent molecules through a selectively-permeable membrane from a region of high water potential (region of lower solute concentration) to a region of low water potential (region of higher solute concentration), in the direction that tends to equalize the solute concentrations on the two sides. It may also be used to describe a physical process in which any solvent moves across a selectively permeable membrane (permeable to the solvent, but not the solute) separating two solutions of different concentrations. Osmosis can be made to do work. Osmotic pressure is defined as the external pressure required to prevent net movement of solvent across the membrane. Osmotic pressure is a colligative property, meaning that the osmotic pressure depends on the molar concentration of the solute but not on its identity. Osmosis is a vital process in biological systems, as biological membranes are semipermeable. In general, these membranes are impermeable to large and polar molecules, such as ions, proteins, and polysaccharides, while being permeable to non-polar or hydrophobic molecules like lipids as well as to small molecules like oxygen, carbon dioxide, nitrogen, and nitric oxide. Permeability depends on solubility, charge, or chemistry, as well as solute size. Water molecules travel through the plasma membrane, tonoplast membrane (vacuole) or organelle membranes by diffusing across the phospholipid bilayer via aquaporins (small transmembrane proteins similar to those responsible for facilitated diffusion and ion channels). Osmosis provides the primary means by which water is transported into and out of cells. The turgor pressure of a cell is largely maintained by osmosis across the cell membrane between the cell interior and its relatively hypotonic environment. History Some kinds of osmotic flow have been observed since ancient times, e.g., on the construction of Egyptian pyramids. Jean-Antoine Nollet first documented observation of osmosis in 1748. The word "osmosis" descends from the words "endosmose" and "exosmose", which were coined by French physician René Joachim Henri Dutrochet (1776–1847) from the Greek words ἔνδον (éndon "within"), ἔξω (éxō "outer, external"), and ὠσμός (ōsmós "push, impulsion"). In 1867, Moritz Traube invented highly selective precipitation membranes, advancing the art and technique of measurement of osmotic flow. Description Osmosis is the movement of a solvent across a semipermeable membrane toward a higher concentration of solute. In biological systems, the solvent is typically water, but osmosis can occur in other liquids, supercritical liquids, and even gases. When a cell is submerged in water, the water molecules pass through the cell membrane from an area of low solute concentration to high solute concentration. For example, if the cell is submerged in saltwater, water molecules move out of the cell. If a cell is submerged in freshwater, water molecules move into the cell. When the membrane has a volume of pure water on both sides, water molecules pass in and out in each direction at exactly the same rate. There is no net flow of water through the membrane. Osmosis can be demonstrated when potato slices are added to a high salt solution. The water from inside the potato moves out to the solution, causing the potato to shrink and to lose its 'turgor pressure'. The more concentrated the salt solution, the bigger the loss in size and weight of the potato slice. 
Chemical gardens demonstrate the effect of osmosis in inorganic chemistry. Mechanism The mechanism responsible for driving osmosis has commonly been represented in biology and chemistry texts as either the dilution of water by solute (resulting in lower concentration of water on the higher solute concentration side of the membrane and therefore a diffusion of water along a concentration gradient) or by a solute's attraction to water (resulting in less free water on the higher solute concentration side of the membrane and therefore net movement of water toward the solute). Both of these notions have been conclusively refuted. The diffusion model of osmosis is rendered untenable by the fact that osmosis can drive water across a membrane toward a higher concentration of water. The "bound water" model is refuted by the fact that osmosis is independent of the size of the solute molecules—a colligative property—or how hydrophilic they are. It is difficult to describe osmosis without a mechanical or thermodynamic explanation, but essentially there is an interaction between the solute and water that counteracts the pressure that otherwise free solute molecules would exert. One fact to take note of is that heat from the surroundings is able to be converted into mechanical energy (water rising). Many thermodynamic explanations go into the concept of chemical potential and how the function of the water on the solution side differs from that of pure water due to the higher pressure and the presence of the solute counteracting such that the chemical potential remains unchanged. The virial theorem demonstrates that attraction between the molecules (water and solute) reduces the pressure, and thus the pressure exerted by water molecules on each other in solution is less than in pure water, allowing pure water to "force" the solution until the pressure reaches equilibrium. Role in living things Osmotic pressure is the main agent of support in many plants. The osmotic entry of water raises the turgor pressure exerted against the cell wall, until it equals the osmotic pressure, creating a steady state. When a plant cell is placed in a solution that is hypertonic relative to the cytoplasm, water moves out of the cell and the cell shrinks. In doing so, the cell becomes flaccid. In extreme cases, the cell becomes plasmolyzed – the cell membrane disengages with the cell wall due to lack of water pressure on it. When a plant cell is placed in a solution that is hypotonic relative to the cytoplasm, water moves into the cell and the cell swells to become turgid. Osmosis also plays a vital role in human cells by facilitating the movement of water across cell membranes. This process is crucial for maintaining proper cell hydration, as cells can be sensitive to dehydration or overhydration. In human cells, osmosis is essential for maintaining the balance of water and solutes, ensuring optimal cellular function. Imbalances in osmotic pressure can lead to cellular dysfunction, highlighting the importance of osmosis in sustaining the health and integrity of human cells. In certain environments, osmosis can be harmful to organisms. Freshwater and saltwater aquarium fish, for example, will quickly die should they be placed in water of a maladaptive salinity. The osmotic effect of table salt to kill leeches and slugs is another example of a way osmosis can cause harm to organisms. Suppose an animal or plant cell is placed in a solution of sugar or salt in water. 
If the medium is hypotonic relative to the cell cytoplasm, the cell will gain water through osmosis. If the medium is isotonic, there will be no net movement of water across the cell membrane. If the medium is hypertonic relative to the cell cytoplasm, the cell will lose water by osmosis. This means that if a cell is put in a solution which has a solute concentration higher than its own, it will shrivel, and if it is put in a solution with a lower solute concentration than its own, the cell will swell and may even burst. Factors Osmotic pressure Osmosis may be opposed by increasing the pressure in the region of high solute concentration with respect to that in the low solute concentration region. The force per unit area, or pressure, required to prevent the passage of water (or any other high-liquidity solution) through a selectively permeable membrane and into a solution of greater concentration is equivalent to the osmotic pressure of the solution, or turgor. Osmotic pressure is a colligative property, meaning that the property depends on the concentration of the solute, but not on its content or chemical identity. Osmotic gradient The osmotic gradient is the difference in concentration between two solutions on either side of a semipermeable membrane, and is used to tell the difference in percentages of the concentration of a specific particle dissolved in a solution. Usually the osmotic gradient is used while comparing solutions that have a semipermeable membrane between them allowing water to diffuse between the two solutions, toward the hypertonic solution (the solution with the higher concentration). Eventually, the force of the column of water on the hypertonic side of the semipermeable membrane will equal the force of diffusion on the hypotonic (the side with a lesser concentration) side, creating equilibrium. When equilibrium is reached, water continues to flow, but it flows both ways in equal amounts as well as force, therefore stabilizing the solution. Variation Reverse osmosis Reverse osmosis is a separation process that uses pressure to force a solvent through a semi-permeable membrane that retains the solute on one side and allows the pure solvent to pass to the other side, forcing it from a region of high solute concentration through a membrane to a region of low solute concentration by applying a pressure in excess of the osmotic pressure. This process is known primarily for its role in turning seawater into drinking water, when salt and other unwanted substances are ridded from the water molecules. Forward osmosis Osmosis may be used directly to achieve separation of water from a solution containing unwanted solutes. A "draw" solution of higher osmotic pressure than the feed solution is used to induce a net flow of water through a semi-permeable membrane, such that the feed solution becomes concentrated as the draw solution becomes dilute. The diluted draw solution may then be used directly (as with an ingestible solute like glucose), or sent to a secondary separation process for the removal of the draw solute. This secondary separation can be more efficient than a reverse osmosis process would be alone, depending on the draw solute used and the feedwater treated. Forward osmosis is an area of ongoing research, focusing on applications in desalination, water purification, water treatment, food processing, and other areas of study. Future developments in osmosis Future developments in osmosis and osmosis research hold promise for a range of applications. 
Researchers are exploring advanced materials for more efficient osmotic processes, leading to improved water desalination and purification technologies. Additionally, the integration of osmotic power generation, where the osmotic pressure difference between saltwater and freshwater is harnessed for energy, presents a sustainable and renewable energy source with significant potential. Furthermore, the field of medical research is looking at innovative drug delivery systems that utilize osmotic principles, offering precise and controlled administration of medications within the body. As technology and understanding in this field continue to evolve, the applications of osmosis are expected to expand, addressing various global challenges in water sustainability, energy generation, and healthcare.
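The article leans on osmotic pressure as a colligative property but never writes down a formula. As an illustration only (not stated in the article), the van 't Hoff relation Π = iMRT is the standard dilute-solution estimate; the sketch below applies it to a roughly seawater-strength salt solution, with the concentration, temperature, and van 't Hoff factor chosen as example assumptions.

# Illustrative osmotic-pressure estimate via the van 't Hoff relation: pi = i * M * R * T.
# All numbers are example assumptions, not values from the article.
R = 0.08314    # L*bar/(mol*K), gas constant
T = 298.0      # K, about room temperature
i = 2          # van 't Hoff factor for NaCl (dissociates into Na+ and Cl-)
M = 0.6        # mol/L, roughly seawater-strength NaCl

pi_bar = i * M * R * T
print(f"osmotic pressure ~ {pi_bar:.0f} bar (~{pi_bar * 0.9869:.0f} atm)")
# ~30 bar: reverse-osmosis desalination has to apply pressure well above this
# value to push water back across the membrane against the osmotic gradient.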
Physical sciences
Fluid mechanics
null
18600477
https://en.wikipedia.org/wiki/Hiccup
Hiccup
A hiccup (scientific name singultus, from Latin for "sob, hiccup"; also spelled hiccough) is an involuntary contraction (myoclonic jerk) of the diaphragm that may repeat several times per minute. The hiccup is an involuntary action involving a reflex arc. Once triggered, the reflex causes a strong contraction of the diaphragm followed about a quarter of a second later by closure of the epiglottis, a structure inside of the throat, which results in the "hic" sound. Hiccups may occur individually, or they may occur in bouts. The rhythm of the hiccup, or the time between hiccups, tends to be relatively constant. A bout of hiccups generally resolves itself without intervention, although many home remedies are often used to attempt to shorten the duration. Medical treatment is occasionally necessary in cases of chronic hiccups. Incidence Hiccups affect people of all ages, even being observed in utero. They become less frequent with advancing age. Intractable hiccups, lasting more than a month, are more common in adults. While males and females are affected equally often, men are more likely to develop protracted and intractable hiccups. Along with humans, hiccups have been studied and observed in cats, rats, rabbits, dogs, and horses. Signs and symptoms A hiccup consists of a single or a series of breathing diaphragm spasms, of variable spacing and duration, and a brief (less than one half second), unexpected, shoulder, abdomen, throat, or full body tremor. Causes Pathophysiological causes Food stuck in the esophagus Swallowing air excessively Gastroesophageal reflux Hiatal hernia Rapid eating Alcohol or carbonated beverages Spicy foods Opiate drug use Laughing vigorously or for a long time Hiccups may be triggered by a number of common human conditions. In rare cases, they can be a sign of serious medical problems such as myocardial infarction. Pre-phrenic nucleus irritation of medulla Kidney failure CNS disorders Stroke Multiple sclerosis Meningitis Nerve damage Damage to the vagus nerve after surgery Other known associations Although no clear pathophysiological mechanism has been described, hiccups is known to have been the initial symptom of Plasmodium vivax malaria in at least one documented case. Evolutionary theories The burping reflex hypothesis A leading hypothesis is that hiccups evolved to facilitate greater milk consumption in young mammals. The coordination of breathing and swallowing during suckling is a complicated process. Some air inevitably enters the stomach, occupying space that could otherwise be optimally used for calorie-rich milk. The hypothesis suggests that the presence of an air bubble in the stomach stimulates the sensory (afferent) limb of the reflex through receptors in the stomach, esophagus and along the underside of the diaphragm. This triggers the active part of the hiccup (efferent limb), sharply contracting the muscles of breathing and relaxing the muscles of the esophagus, then closing the vocal cords to prevent air from entering the lungs. This creates suction in the chest, pulling air from the stomach up into the esophagus. As the respiratory muscles relax the air is expelled through the mouth, effectively "burping" the animal. There are a number of characteristics of hiccups that support this theory. The burping of a suckling infant may increase its capacity for milk by more than 15–25%, bringing a significant survival advantage. 
There is a strong tendency for infants to get hiccups, and although the reflex persists throughout life it decreases in frequency with age. The location of the sensory nerves that trigger the reflex suggests it is a response to a condition in the stomach. The component of the reflex that suppresses peristalsis in the esophagus while the airway is being actively blocked suggests the esophagus is involved. Additionally, hiccups are only described in mammals, the group of animals that share the trait of suckling their young. Phylogenetic hypothesis An international respiratory research group composed of members from Canada, France, and Japan proposed that the hiccup is an evolutionary remnant of earlier amphibian respiration. Amphibians such as tadpoles gulp air and water across their gills via a rather simple motor reflex akin to mammalian hiccuping. The motor pathways that enable hiccuping form early during fetal development, before the motor pathways that enable normal lung ventilation form. Thus, the hiccup is evolutionarily antecedent to modern lung respiration. Additionally, this group (C. Straus et al.) points out that hiccups and amphibian gulping are inhibited by elevated CO2 and may be stopped by GABAB receptor agonists, illustrating a possible shared physiology and evolutionary heritage. These proposals may explain why premature infants spend 2.5% of their time hiccuping, possibly gulping like amphibians, as their lungs are not yet fully formed. The phylogenetic hypothesis may explain hiccups as an evolutionary remnant, held over from our amphibious ancestors. Duration Episodes of hiccups usually last under 30 minutes. Prolonged attacks, while rare, can be serious. Root causes of prolonged hiccups episodes are difficult to diagnose. Such attacks can cause significant morbidity and even death. An episode lasting more than a few minutes is termed a bout; a bout of over 48 hours is termed persistent or protracted. Hiccups lasting longer than a month are termed intractable. In many cases, only a single hemidiaphragm, usually the left one, is affected, although both may be involved. Treatment Hiccups are normally waited out, as fits will usually pass quickly. Folk cures for hiccups are common and varied. Hiccups are treated medically only in severe and persistent (termed "intractable") cases. Numerous medical remedies exist but no particular treatment is known to be especially effective, generally because of a lack of high-quality evidence. A vagus nerve stimulator has been used with an intractable case of hiccups. "It sends rhythmic bursts of electricity to the brain by way of the vagus nerve, which passes through the neck. The Food and Drug Administration approved the vagus nerve stimulator in 1997 as a way to control seizures in some patients with epilepsy." In one person, persistent digital rectal massage coincided with terminating intractable hiccups. Folk remedies There are many folk remedies for hiccups, including headstanding, drinking a glass of water upside-down, being frightened by someone, breathing into a bag, eating a large spoonful of peanut butter and placing sugar on or under the tongue. Acupressure, either through actual function or placebo effect, may cure hiccups in some people. For example, one technique is to relax the chest and shoulders and find the deepest points of the indentations directly below the protrusions of the collarbones. The index or middle fingers are inserted into the indents and pressed firmly for sixty seconds as long, deep breaths are taken. 
A simple treatment involves increasing the partial pressure of CO2 and inhibiting diaphragm activity by holding one's breath or rebreathing into a paper bag. Other potential remedies suggested by NHS Choices include pulling the knees up to the chest and leaning forward, sipping ice-cold water and swallowing some granulated sugar. A breathing exercise called supra-supramaximal inspiration (SSMI) has been shown to stop persistent hiccups. It combines the three principles of hypercapnia, diaphragm immobilization, and positive airway pressure. First, the subject must exhale completely, then take a deep breath. Then, they must hold their breath for ten seconds. After ten seconds, they must take another small breath without exhaling, then hold their breath for five seconds. Again, without exhaling, they must take another small breath and hold their breath for five seconds. Upon exhaling, the hiccups should be gone. Drinking through a straw with the ears plugged is a folk remedy that can be successful. In 2021 a scientific tool with a similar basis was tested on 249 hiccups subjects; the results were published in the Journal of the American Medical Association (JAMA). This device is named FISST (Forced Inspiratory Suction and Swallow Tool) and branded as "HiccAway". This study supports the use of FISST as an option to stop transient hiccups, with more than 90% of participants reporting better results than home remedies. HiccAway stops hiccups by forceful suction that is being generated by diaphragm contraction (phrenic nerve activity), followed by swallowing the water, which requires epiglottis closure. Society and culture The word hiccup itself was created through imitation. The alternative spelling of hiccough results from the association with the word cough. American Charles Osborne (1894–1991) had hiccups for 68 years, from 1922 to 1990, and was entered in the Guinness World Records as the man with the longest attack of hiccups, an estimated 430 million hiccups. In 2007, Florida teenager Jennifer Mee gained media fame for hiccuping around 50 times per minute for more than five weeks. British singer Christopher Sands hiccupped an estimated 10 million times in a 27-month period from February 2007 to May 2009. His condition, which meant that he could hardly eat or sleep, was eventually discovered to be caused by a tumor on his brain stem pushing on nerves causing him to hiccup every two seconds, 12 hours a day. His hiccups stopped in 2009 following surgery. In Baltic, German, Hungarian, Indian, Romanian, Slavic, Turkish, Greek and Albanian tradition, as well as among some tribes in Kenya, for example in the folklore of the Luo people, it is said that hiccups occur when the person experiencing them is being talked about by someone not present.
Biology and health sciences
Symptoms and signs
Health
18600953
https://en.wikipedia.org/wiki/Shield%20mantis
Shield mantis
Shield mantis, hood mantis (or hooded mantis) and leaf mantis (or leafy mantis) are common names for certain praying mantises with an extended thorax aiding it in camouflage and leaf mimicry. The terms are used for species in the following genera: Asiadodis Choeradodis Rhombodera Tamolanica
Biology and health sciences
Insects: General
Animals
18600991
https://en.wikipedia.org/wiki/Raccoon
Raccoon
The raccoon ( or , Procyon lotor), also spelled racoon and sometimes called the common raccoon or northern raccoon to distinguish it from the other species, is a mammal native to North America. It is the largest of the procyonid family, having a body length of , and a body weight of . Its grayish coat mostly consists of dense underfur, which insulates it against cold weather. The animal's most distinctive features include its extremely dexterous front paws, its facial mask, and its ringed tail, which are common themes in the mythologies of the Indigenous peoples of the Americas surrounding the species. The raccoon is noted for its intelligence, and studies show that it is able to remember the solution to tasks for at least three years. It is usually nocturnal and omnivorous, eating about 40% invertebrates, 33% plants, and 27% vertebrates. The original habitats of the raccoon are deciduous and mixed forests, but due to their adaptability, they have extended their range to mountainous areas, coastal marshes, and urban areas, where some homeowners consider them to be pests. As a result of escapes and deliberate introductions in the mid-20th century, raccoons are now also distributed across central Europe, the Caucasus, and Japan. In Europe, the raccoon has been included on the list of Invasive Alien Species of Union Concern since 2016. This implies that this species cannot be imported, bred, transported, commercialized, or intentionally released into the environment in the whole of the European Union. Though previously thought to be generally solitary, there is now evidence that raccoons engage in sex-specific social behavior. Related females often share a common area, while unrelated males live together in groups of up to four raccoons in order to maintain their positions against foreign males during the mating season and against other potential invaders. Home range sizes vary anywhere from for females in cities, to for males in prairies. After a gestation period of about 65 days, two to five young known as "kits" are born in spring. The kits are subsequently raised by their mother until dispersal in late fall. Although captive raccoons have been known to live over 20 years, their life expectancy in the wild is only 1.8 to 3.1 years. In many areas, hunting and vehicular injury are the two most common causes of death. Etymology Names for the species include the common raccoon, North American raccoon, and northern raccoon. In various North American native languages, the reference to the animal's manual dexterity, or use of its hands is the source for the names. The word raccoon was adopted into English from the native Powhatan term meaning 'animal that scratches with its hands', as used in the Colony of Virginia. It was recorded on John Smith's list of Powhatan words as , and on that of William Strachey as . It has also been identified as a reflex of a Proto-Algonquian root , meaning '[the] one who rubs, scrubs and scratches with its hands'. The word is sometimes spelled as racoon. In Spanish, the raccoon is called , derived from the Nahuatl of the Aztecs, meaning '[the] one who takes everything in its hands'. Its Latin name, procyon lotor, literally means 'before-dog washer'. The genus Procyon was named by Gottlieb Conrad Christian Storr. The animal's observed habit of "washing" or "dousing" (see below) is the source of its name in other languages. For example, the French "raton laveur" means "washing rat". 
The colloquial abbreviation coon is used in words like coonskin for fur clothing and in phrases like old coon as a self-designation of trappers. In the 1830s, the United States Whig Party used the raccoon as an emblem, causing them to be pejoratively known as "coons" by their political opponents, who saw them as too sympathetic to African-Americans. Soon after that the term became an ethnic slur, especially in use between 1880 and 1920 (see coon song), and the term is still considered offensive. Dogs bred to hunt raccoons are called coonhound and coon dog. Taxonomy In the first decades after its discovery by the members of the expedition of Christopher Columbus, who were the first Europeans to leave a written record about the species, taxonomists thought the raccoon was related to many different species, including dogs, cats, badgers and particularly bears. Carl Linnaeus, the father of modern taxonomy, placed the raccoon in the genus Ursus, first as Ursus cauda elongata ('long-tailed bear') in the second edition of his Systema Naturae (1740), then as Ursus Lotor ('washer bear') in the tenth edition (1758–59). In 1780, Gottlieb Conrad Christian Storr placed the raccoon in its own genus Procyon, which can be translated as either 'before the dog' or 'doglike'. It is also possible that Storr had its nocturnal lifestyle in mind and chose the star Procyon as eponym for the species. Evolution Based on fossil evidence from Russia and Bulgaria, the first known members of the family Procyonidae lived in Europe in the late Oligocene about 25 million years ago. Similar tooth and skull structures suggest procyonids and weasels share a common ancestor, but molecular analysis indicates a closer relationship between raccoons and bears. After the then-existing species crossed the Bering Strait at least six million years later in the early Miocene, the center of its distribution was probably in Central America. Coatis (Nasua and Nasuella) and raccoons (Procyon) have been considered to share common descent from a species in the genus Paranasua present between 5.2 and 6.0 million years ago. This assumption, based on morphological comparisons of fossils, conflicts with a 2006 genetic analysis which indicates raccoons are more closely related to ringtails. Unlike other procyonids, such as the crab-eating raccoon (Procyon cancrivorus), the ancestors of the common raccoon left tropical and subtropical areas and migrated farther north about 2.5 million years ago, in a migration that has been confirmed by the discovery of fossils in the Great Plains dating back to the middle of the Pliocene. Its most recent ancestor was likely Procyon rexroadensis, a large Blancan raccoon from the Rexroad Formation characterized by its narrow back teeth and large lower jaw. Subspecies As of 2005, Mammal Species of the World recognizes 22 subspecies of raccoons. Four of these subspecies living only on small Central American and Caribbean islands were often regarded as distinct species after their discovery. These are the Bahamian raccoon and Guadeloupe raccoon, which are very similar to each other; the Tres Marias raccoon, which is larger than average and has an angular skull; and the extinct Barbados raccoon. Studies of their morphological and genetic traits in 1999, 2003 and 2005 led all these island raccoons to be listed as subspecies of the common raccoon in Mammal Species of the World's third edition. 
A fifth island raccoon population, the Cozumel raccoon, which weighs only and has notably small teeth, is still regarded as a separate species. The four smallest raccoon subspecies, with a typical weight of , live along the southern coast of Florida and on the adjacent islands; an example is the Ten Thousand Islands raccoon (Procyon lotor marinus). Most of the other 15 subspecies differ only slightly from each other in coat color, size and other physical characteristics. The two most widespread subspecies are the eastern raccoon (Procyon lotor lotor) and the Upper Mississippi Valley raccoon (Procyon lotor hirtus). Both share a comparatively dark coat with long hairs, but the Upper Mississippi Valley raccoon is larger than the eastern raccoon. The eastern raccoon occurs in all U.S. states and Canadian provinces to the north of South Carolina and Tennessee. The adjacent range of the Upper Mississippi Valley raccoon covers all U.S. states and Canadian provinces to the north of Louisiana, Texas, and New Mexico. The taxonomic identity of feral raccoons inhabiting Central Europe, Caucasia and Japan is unknown, as the founding populations consisted of uncategorized specimens from zoos and fur farms. Description Physical characteristics Head to hindquarters, raccoons measure between , not including the bushy tail which can measure between , but is usually not much longer than . The shoulder height is between . The body weight of an adult raccoon varies considerably with habitat, making the raccoon one of the most variably sized mammals. It can range from , but is usually between . The smallest specimens live in southern Florida, while those near the northern limits of the raccoon's range tend to be the largest. Males are usually 15 to 20% heavier than females. At the beginning of winter, a raccoon can weigh twice as much as in spring because of fat storage. The largest recorded wild raccoon weighed and measured in total length, by far the largest size recorded for a procyonid. The most characteristic physical feature of the raccoon is the area of black fur around the eyes, which contrasts sharply with the surrounding white face coloring. This is reminiscent of a "bandit's mask" and has thus enhanced the animal's reputation for mischief. The slightly rounded ears are also bordered by white fur. Raccoons are assumed to recognize the facial expression and posture of other members of their species more quickly because of the conspicuous facial coloration and the alternating light and dark rings on the tail. The dark mask may also reduce glare and thus enhance night vision. On other parts of the body, the long and stiff guard hairs, which shed moisture, are usually colored in shades of gray and, to a lesser extent, brown. Raccoons with a very dark coat are more common in the German population because individuals with such coloring were among those initially released to the wild. The dense underfur, which accounts for almost 90% of the coat, insulates against cold weather and is composed of long hairs. The raccoon, whose method of locomotion is usually considered to be plantigrade, can stand on its hind legs to examine objects with its front paws. As raccoons have short legs compared to their compact torso, they are usually not able either to run quickly or jump great distances. Their top speed over short distances is . Raccoons can swim with an average speed of about and can stay in the water for several hours.
For climbing down a tree headfirst—an unusual ability for a mammal of its size—a raccoon rotates its hind feet so they are pointing backwards. Raccoons have a dual cooling system to regulate their temperature; that is, they are able to both sweat and pant for heat dissipation. Raccoon skulls have a short and wide facial region and a voluminous braincase. The facial length of the skull is less than the cranial, and their nasal bones are short and quite broad. The auditory bullae are inflated in form, and the sagittal crest is weakly developed. The dentition—40 teeth—is adapted to their omnivorous diet: the carnassials are not as sharp and pointed as those of a full-time carnivore, but the molars are not as wide as those of a herbivore. The penis bone of males is about long and strongly bent at the front end, and its shape can be used to distinguish juvenile males from mature males. Seven of the thirteen identified vocal calls are used in communication between the mother and her kits, one of these being the birdlike twittering of newborns. Senses The most important sense for the raccoon is its sense of touch. The "hypersensitive" front paws are protected by a thin horny layer that becomes pliable when wet. The five digits of the paws have no webbing between them, which is unusual for a carnivoran. Almost two-thirds of the area responsible for sensory perception in the raccoon's cerebral cortex is specialized for the interpretation of tactile impulses, more than in any other studied animal. They are able to identify objects before touching them with vibrissae located above their sharp, nonretractable claws. The raccoon's paws lack an opposable thumb; thus, it does not have the agility of the hands of primates. There is no observed negative effect on tactile perception when a raccoon stands in water below 10 °C (50 °F) for hours. Raccoons are thought to be color blind or at least poorly able to distinguish color, though their eyes are well-adapted for sensing green light. Although their accommodation of 11 dioptres is comparable to that of humans and they see well in twilight because of the tapetum lucidum behind the retina, visual perception is of subordinate importance to raccoons because of their poor long-distance vision. In addition to being useful for orientation in the dark, their sense of smell is important for intraspecific communication. Glandular secretions (usually from their anal glands), urine and feces are used for marking. With their broad auditory range, they can perceive tones up to 50–85 kHz as well as quiet noises, like those produced by earthworms underground. Intelligence Zoologist Clinton Hart Merriam described raccoons as "clever beasts", noting that "in certain directions their cunning surpasses that of the fox". The animal's intelligence gave rise to the epithet "sly coon". Only a few studies have been undertaken to determine the mental abilities of raccoons, most of them based on the animal's sense of touch. In a study by the ethologist H. B. Davis in 1908, raccoons were able to open 11 of 13 complex locks in fewer than 10 tries and had no problems repeating the action when the locks were rearranged or turned upside down. Davis concluded that they understood the abstract principles of the locking mechanisms and their learning speed was equivalent to that of rhesus macaques. Studies in 1963, 1973, 1975 and 1992 concentrating on raccoon memory showed that they can remember the solutions to tasks for at least three years. In a study by B.
Pohl in 1992, raccoons were able to instantly differentiate between identical and different symbols three years after the short initial learning phase. Stanislas Dehaene reports in his book The Number Sense that raccoons can distinguish boxes containing two or four grapes from those containing three. In research by Suzana Herculano-Houzel and other neuroscientists, raccoons have been found to be comparable to primates in density of neurons in the cerebral cortex, which they have proposed to be a neuroanatomical indicator of intelligence. Behavior Social behavior Studies in the 1990s by the ethologists Stanley D. Gehrt and Ulf Hohmann suggest that raccoons engage in sex-specific social behaviors and are not typically solitary, as was previously thought. Related females often live in a so-called "fission-fusion society"; that is, they share a common area and occasionally meet at feeding or resting grounds. Unrelated males often form loose male social groups to maintain their position against foreign males during the mating season—or against other potential invaders. Such a group does not usually consist of more than four individuals. Since some males show aggressive behavior towards unrelated kits, mothers will isolate themselves from other raccoons until their kits are big enough to defend themselves. With respect to these three different modes of life prevalent among raccoons, Hohmann called their social structure a "three-class society". Samuel I. Zeveloff, professor of zoology at Weber State University and author of the book Raccoons: A Natural History, is more cautious in his interpretation and concludes at least the females are solitary most of the time and, according to Erik K. Fritzell's study in North Dakota in 1978, males in areas with low population densities are solitary as well. The shape and size of a raccoon's home range varies depending on age, sex, and habitat, with adults claiming areas more than twice as large as juveniles. While the size of home ranges in the habitat of North Dakota's prairies lie between for males and between for females, the average size in a marsh at Lake Erie was . Irrespective of whether the home ranges of adjacent groups overlap, they are most likely not actively defended outside the mating season if food supplies are sufficient. Odor marks on prominent spots are assumed to establish home ranges and identify individuals. Urine and feces left at shared raccoon latrines may provide additional information about feeding grounds, since raccoons were observed to meet there later for collective eating, sleeping and playing. Concerning the general behavior patterns of raccoons, Gehrt points out that "typically you'll find 10 to 15 percent that will do the opposite" of what is expected. Diet Though usually nocturnal, the raccoon is sometimes active in daylight to take advantage of available food sources. Its diet consists of about 40% invertebrates, 33% plant material and 27% vertebrates. Since its diet consists of such a variety of different foods, Zeveloff argues the raccoon "may well be one of the world's most omnivorous animals". While its diet in spring and early summer consists mostly of insects, worms, and other animals already available early in the year, it prefers fruits and nuts, such as acorns and walnuts, which emerge in late summer and autumn, and represent a rich calorie source for building up fat needed for winter. Contrary to popular belief, raccoons only occasionally eat active or large prey, such as birds and mammals. 
They prefer prey that is easier to catch, specifically crayfish, insects, fish, amphibians and bird eggs. Raccoons are voracious predators of eggs and hatchlings in both bird and reptile nests, to such a degree that, for threatened prey species, raccoons may need to be removed from the area or nests may need to be relocated to mitigate the effects of their predation (e.g., in the case of some globally threatened turtles). When food is plentiful, raccoons can develop strong individual preferences for specific foods. In the northern parts of their range, raccoons go into a winter rest, reducing their activity drastically as long as a permanent snow cover makes searching for food difficult. Dousing One aspect of raccoon behavior is so well known that it gives the animal part of its scientific name, Procyon lotor: lotor is Latin for 'washer'. In the wild, raccoons often dabble for underwater food near the shoreline. They then often pick up the food item with their front paws to examine it and rub the item, sometimes to remove unwanted parts. This gives the appearance of the raccoon "washing" the food. The tactile sensitivity of raccoons' paws is increased if this rubbing action is performed underwater, since the water softens the hard layer covering the paws. However, the behavior observed in captive raccoons in which they carry their food to water to "wash" or douse it before eating has not been observed in the wild. Naturalist Georges-Louis Leclerc, Comte de Buffon, believed that raccoons do not have adequate saliva production to moisten food, thereby necessitating dousing, but this hypothesis is now considered to be incorrect. Captive raccoons douse their food more frequently when a watering hole with a layout similar to a stream is not farther away than . The widely accepted theory is that dousing in captive raccoons is a fixed action pattern from the dabbling behavior performed when foraging at shores for aquatic foods. This is supported by the observation that aquatic foods are doused more frequently. Cleaning dirty food does not seem to be a reason for "washing". Reproduction Raccoons usually mate in a period triggered by increasing daylight between late January and mid-March. However, there are large regional differences which are not completely explicable by solar conditions. For example, while raccoons in southern states typically mate later than average, the mating season in Manitoba also peaks later than usual in March and extends until June. During the mating season, males restlessly roam their home ranges in search of females in an attempt to court them during the three- to four-day period when conception is possible. These encounters will often occur at central meeting places. Copulation, including foreplay, can last over an hour and is repeated over several nights. The weaker members of a male social group are also assumed to get the opportunity to mate, since the stronger ones cannot mate with all available females. In a study in southern Texas during the mating seasons from 1990 to 1992, about one third of all females mated with more than one male. If a female does not become pregnant or if she loses her kits early, she will sometimes become fertile again 80 to 140 days later. After usually 63 to 65 days of gestation (although anywhere from 54 to 70 days is possible), a litter of typically two to five young is born. The average litter size varies widely with habitat, ranging from 2.5 in Alabama to 4.8 in North Dakota.
Larger litters are more common in areas with a high mortality rate, due, for example, to hunting or severe winters. While male yearlings usually reach their sexual maturity only after the main mating season, female yearlings can compensate for high mortality rates and may be responsible for about 50% of all young born in a year. Males have no part in raising young. The kits (also called "cubs") are blind and deaf at birth, but their mask is already visible against their light fur. The birth weight of the roughly -long kits is between . Their ear canals open after around 18 to 23 days, a few days before their eyes open for the first time. Once the kits weigh about , they begin to explore outside the den, consuming solid food for the first time after six to nine weeks. After this point, their mother suckles them with decreasing frequency; they are usually weaned by 16 weeks. In the fall, after their mother has shown them dens and feeding grounds, the juvenile group splits up. While many females will stay close to the home range of their mother, males can sometimes move more than away. This is considered an instinctive behavior, preventing inbreeding. However, mother and offspring may share a den during the first winter in cold areas. Life expectancy Captive raccoons have been known to live for more than 20 years. However, the species' life expectancy in the wild is only 1.8 to 3.1 years, depending on local conditions such as traffic volume, hunting, and weather severity. It is not unusual for only half of the young born in one year to survive a full year. After this point, the annual mortality rate drops to between 10% and 30%. Young raccoons are vulnerable to losing their mother and to starvation, particularly in long and cold winters. The most frequent natural cause of death in the North American raccoon population is distemper, which can reach epidemic proportions and kill most of a local raccoon population. In areas with heavy vehicular traffic and extensive hunting, these factors can account for up to 90% of all deaths of adult raccoons. The most important natural predators of the raccoon are bobcats, coyotes, and great horned owls, the latter mainly preying on young raccoons but capable of killing adults in some cases. In Florida, they have been reported to fall victim to larger carnivores like American black bears and cougars, and these species may also be a threat on occasion in other areas. Where still present, gray wolves may still occasionally take raccoons as a supplemental prey item. Also in the southeast, they are among the favored prey for adult American alligators. On occasion, both bald and golden eagles will prey on raccoons. In the tropics, raccoons are known to fall prey to smaller eagles such as ornate hawk-eagles and black hawk-eagles, although it is not clear whether adults or merely juvenile raccoons are taken by these. In rare cases of overlap, they may fall victim to carnivores ranging from species averaging smaller than themselves, such as fishers, to those as large and formidable as jaguars in Mexico. In their introduced range in the former Soviet Union, their main predators are wolves, lynxes and Eurasian eagle-owls. However, predation is not a significant cause of death, especially because larger predators have been exterminated in many areas inhabited by raccoons.
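The mortality figures quoted at the start of this section can be made concrete with a short, illustrative calculation. The sketch below (in Python) assumes, following the text above, that only about half of a cohort survives its first year and that annual mortality thereafter lies between 10% and 30%; the cohort model itself, the six-year horizon and the variable names are illustrative assumptions rather than anything taken from the source.

```python
# Illustrative survivorship sketch (assumed toy model, not from the source).
# Rates drawn from the text above: ~50% of kits survive their first year;
# adult annual mortality is between 10% and 30%.

def survivorship(first_year_survival: float, adult_mortality: float, years: int = 6):
    """Fraction of a newborn cohort still alive at the end of each year."""
    alive = 1.0
    curve = []
    for year in range(1, years + 1):
        alive *= first_year_survival if year == 1 else (1.0 - adult_mortality)
        curve.append((year, alive))
    return curve

if __name__ == "__main__":
    for label, mortality in (("harsher conditions (30% adult mortality)", 0.30),
                             ("milder conditions (10% adult mortality)", 0.10)):
        print(label)
        for year, alive in survivorship(0.5, mortality):
            print(f"  after year {year}: {alive:.0%} of the cohort still alive")
```

Even under the milder assumption, fewer than a third of the cohort remains after six years, which is consistent with the short wild life expectancy quoted above despite the much longer potential lifespan in captivity.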
Range Habitat Although they have thrived in sparsely wooded areas in recent decades, raccoons depend on vertical structures to climb when they feel threatened; they therefore avoid open terrain and areas with high concentrations of beech trees, as beech bark is too smooth to climb. Tree hollows in old oaks or other trees and rock crevices are preferred by raccoons as sleeping, winter and litter dens. If such dens are unavailable or accessing them is inconvenient, raccoons use burrows dug by other mammals, dense undergrowth or tree crotches. In a study in the Solling range of hills in Germany, more than 60% of all sleeping places were used only once, but those used at least ten times accounted for about 70% of all uses. Since amphibians, crustaceans, and other animals around the shore of lakes and rivers are an important part of the raccoon's diet, lowland deciduous or mixed forests abundant with water and marshes sustain the highest population densities. While population densities range from 0.5 to 3.2 animals per square kilometer (1.3 to 8.3 animals per square mile) in prairies and do not usually exceed 6 animals per square kilometer (15.5 animals per square mile) in upland hardwood forests, more than 20 raccoons per square kilometer (51.8 animals per square mile) can live in lowland forests and marshes. Distribution in North America Raccoons are common throughout North America from Canada to Panama, where the subspecies Procyon lotor pumilus coexists with the crab-eating raccoon (Procyon cancrivorus). The population on Hispaniola was exterminated as early as 1513 by Spanish colonists who hunted them for their meat. Raccoons were also exterminated in Cuba and Jamaica, where the last sightings were reported in 1687. The Barbados raccoon became extinct relatively recently, in 1964. When they were still considered separate species, the Bahamas raccoon, Guadeloupe raccoon and Tres Marias raccoon were classified as endangered by the IUCN in 1996. There is archeological evidence that in pre-Columbian times raccoons were numerous only along rivers and in the woodlands of the Southeastern United States. As raccoons were not mentioned in earlier reports of pioneers exploring the central and north-central parts of the United States, their initial spread may have begun a few decades before the 20th century. Since the 1950s, raccoons have expanded their range from Vancouver Island—formerly the northernmost limit of their range—far into the northern portions of the four south-central Canadian provinces. New habitats which have recently been occupied by raccoons (aside from urban areas) include mountain ranges, such as the Western Rocky Mountains, prairies and coastal marshes. After a population explosion starting in the 1940s, the estimated number of raccoons in North America in the late 1980s was 15 to 20 times higher than in the 1930s, when raccoons were comparatively rare. Urbanization, the expansion of agriculture, deliberate introductions, and the extermination of natural predators of the raccoon have probably caused this increase in abundance and distribution. Distribution outside North America As a result of escapes and deliberate introductions in the mid-20th century, the raccoon is now distributed in several European and Asian countries. Sightings have occurred in all the countries bordering Germany, which hosts the largest population outside of North America. Another stable population exists in northern France, where several pet raccoons were released by members of the U.S. Air Force near the Laon-Couvron Air Base in 1966. Furthermore, raccoons have been known to be in the area around Madrid since the early 1970s.
In 2013, the city authorized "the capture and death of any specimen". It is also present in Italy, with one self-sustaining population in Lombardy. About 1,240 animals were released in nine regions of the former Soviet Union between 1936 and 1958 for the purpose of establishing a population to be hunted for their fur. Two of these introductions were successful – one in the south of Belarus between 1954 and 1958, and another in Azerbaijan between 1941 and 1957. With a seasonal harvest of between 1,000~1,500 animals, in 1974 the estimated size of the population distributed in the Caucasus region was around 20,000 animals and the density was four animals per square kilometer (10 animals per square mile). Distribution in Japan In Japan, up to 1,500 raccoons were imported as pets each year after the success of the anime series Rascal the Raccoon (1977). In 2004, the descendants of discarded or escaped animals lived in 42 of 47 prefectures. The range of raccoons in the wild in Japan grew from 17 prefectures in 2000 to all 47 prefectures in 2008. It is estimated that raccoons cause thirty million yen (~$275,000) of agricultural damage on Hokkaido alone. Distribution in Germany In Germany – where the raccoon is called the (literally, 'wash-bear' or 'washing bear') due to its habit of "dousing" food in water – two pairs of pet raccoons were released into the German countryside at the Edersee reservoir in the north of Hesse in April 1934 by a forester upon request of their owner, a poultry farmer. He released them two weeks before receiving permission from the Prussian hunting office to "enrich the fauna". Several prior attempts to introduce raccoons in Germany had been unsuccessful. A second population was established in eastern Germany in 1945 when 25 raccoons escaped from a fur farm at Wolfshagen (today district of Altlandsberg), east of Berlin, after an air strike. The two populations are parasitologically distinguishable: 70% of the raccoons of the Hessian population are infected with the roundworm Baylisascaris procyonis, but none of the Brandenburgian population is known to have the parasite. In the Hessian region, there were an estimated 285 raccoons in 1956, which increased to over 20,000 in 1970; in 2008 there were between 200,000 and 400,000 raccoons in the whole of Germany. By 2012 it was estimated that Germany now had more than a million raccoons. The raccoon was once a protected species in Germany, but has been declared a game animal in 14 of the 16 German states since 1954. Hunters and environmentalists argue the raccoon spreads uncontrollably, threatens protected bird species, and supersedes indigenous competitors. This view is opposed by the zoologist Frank-Uwe Michler, who finds no evidence that a high population density of raccoons leads to negative effects on the biodiversity of an area. Hohmann holds that extensive hunting cannot be justified by the absence of natural predators, because predation is not a significant cause of death in the North American raccoon population. The raccoon is extensively hunted in Germany as it is seen as an invasive species and pest. In the 1990s, only about 400 raccoons were hunted yearly. This increased dramatically over the next quarter-century: during the 2015–2016 hunting season, 128,100 raccoons were hunted, 60 percent of them in the state of Hesse. Distribution in the former Soviet Union Experiments in acclimatising raccoons into the Soviet Union began in 1936, and were repeated a further 25 times until 1962. 
Overall, 1,222 individuals were released, 64 of which came from zoos and fur farms (38 of them having been imports from western Europe). The remainder originated from a population previously established in Transcaucasia. The range of Soviet raccoons was never single or continuous, as they were often introduced to different locations far from each other. All introductions into the Russian Far East failed; melanistic raccoons were released on Petrov Island near Vladivostok and some areas of southern Primorsky Krai, but died. In Central Asia, raccoons were released in Kyrgyzstan's Jalal-Abad Province, though they were later recorded as "practically absent" there in January 1963. A large and stable raccoon population (yielding 1,000~1,500 catches a year) was established in Azerbaijan after an introduction to the area in 1937. Raccoons apparently survived an introduction near Terek, along the Sulak River into the Dagestani lowlands. Attempts to settle raccoons on the Kuban River's left tributary and Kabardino-Balkaria were unsuccessful. A successful acclimatization occurred in Belarus, where three introductions (consisting of 52, 37, and 38 individuals in 1954 and 1958) took place. By January 1963, 700 individuals were recorded in the country. Urban raccoons Due to its adaptability, the raccoon has been able to use urban areas as a habitat. The first sightings were recorded in a suburb of Cincinnati in the 1920s. Since the 1950s, raccoons have been present in metropolitan areas like Washington, DC, Chicago, Toronto, and New York City. Since the 1960s, Kassel has hosted Europe's first and densest population in a large urban area, with about 50 to 150 animals per square kilometer (130 to 390 animals per square mile), a figure comparable to those of urban habitats in North America. Home range sizes of urban raccoons are only 3 to 40 hectares (7.5 to 100 acres) for females and 8 to 80 hectares (20 to 200 acres) for males. In small towns and suburbs, many raccoons sleep in a nearby forest after foraging in the settlement area. Fruit and insects in gardens and leftovers in municipal waste are easily available food sources. Furthermore, a large number of additional sleeping areas exist in these areas, such as hollows in old garden trees, cottages, garages, abandoned houses, and attics. The percentage of urban raccoons sleeping in abandoned or occupied houses varies from 15% in Washington, DC (1991) to 43% in Kassel (2003). Health Raccoons can carry rabies, a lethal disease caused by the neurotropic rabies virus carried in the saliva and transmitted by bites. Its spread began in Florida and Georgia in the 1950s and was facilitated by the introduction of infected individuals to Virginia and North Dakota in the late 1970s. Of the 6,940 documented rabies cases reported in the United States in 2006, 2,615 (37.7%) were in raccoons. The U.S. Department of Agriculture, as well as local authorities in several U.S. states and Canadian provinces, has developed oral vaccination programs to fight the spread of the disease in endangered populations. Only one human fatality has been reported after transmission of the rabies virus strain commonly known as "raccoon rabies". Among the main symptoms for rabies in raccoons are a generally sickly appearance, impaired mobility, abnormal vocalization, and aggressiveness. There may be no visible signs at all, however, and most individuals do not show the aggressive behavior seen in infected canids; rabid raccoons will often retire to their dens instead. 
Organizations like the U.S. Forest Service encourage people to stay away from animals with unusual behavior or appearance, and to notify the proper authorities, such as an animal control officer from the local health department. Since healthy animals, especially nursing mothers, will occasionally forage during the day, daylight activity is not a reliable indicator of illness in raccoons. Unlike rabies and at least a dozen other pathogens carried by raccoons, distemper, an epizootic virus, does not affect humans. This disease is the most frequent natural cause of death in the North American raccoon population and affects individuals of all age groups. For example, 94 of 145 raccoons died during an outbreak in Clifton, Ohio, in 1968. It may be accompanied by inflammation of the brain (encephalitis), causing the animal to display rabies-like symptoms. In Germany, the first eight cases of distemper were reported in 2007. Some of the most important bacterial diseases which affect raccoons are leptospirosis, listeriosis, tetanus, and tularemia. Although internal parasites weaken their immune systems, well-fed individuals can carry a great many roundworms in their digestive tracts without showing symptoms. The eggs of the roundworm Baylisascaris procyonis, which are present in the feces and seldom cause severe illness in humans, can be ingested when cleaning raccoon latrines without wearing breathing protection. While not endemic, the worm Trichinella does infect raccoons, and undercooked raccoon meat has caused trichinosis in humans. The trematode Metorchis conjunctus can also infect raccoons. Relationship with humans Conflicts Raccoons have become notorious in urban areas for consuming food waste. They possess impressive problem-solving abilities and can break into all but the most secure food waste bins, which has earned them the derisive nickname trash panda. The presence of raccoons in close proximity to humans may be undesirable, as raccoon droppings (like those of most wild animals) contain parasites and other disease vectors. Raccoon roundworm is of particular concern to public health. It can be contracted in humans by accidental ingestion or inhalation of the eggs, which are present in the feces of infected raccoons. While usually harmless to the host, it causes progressive neurological damage in humans, and is eventually fatal if untreated. It is found in about 60% of adult raccoons. The general presence of raccoons in an area is not typically of concern, but nests or droppings found within or near structures should be destroyed. Roundworm eggs are very robust and bleach alone is insufficient; burning or treatment with hot solutions of sodium hydroxide is required. The keeping of raccoons as pets is illegal in some jurisdictions due to these risks. The increasing number of raccoons in urban areas has resulted in diverse reactions in humans, ranging from outrage at their presence to deliberate feeding. Some wildlife experts and most public authorities caution against feeding wild animals because they might become increasingly obtrusive and dependent on humans as a food source. Other experts challenge such arguments and give advice on feeding raccoons and other wildlife in their books. Raccoons without a fear of humans are a concern to those who attribute this trait to rabies, but scientists point out this behavior is much more likely to be a behavioral adjustment to living in habitats with regular contact with humans for many generations.
Raccoons usually do not prey on domestic cats and dogs, but isolated cases of killings have been reported. Attacks on pets may also target their owners. While overturned waste containers and raided fruit trees are just a nuisance to homeowners, it can cost several thousand dollars to repair damage caused by the use of attic space as dens. Relocating or killing raccoons without a permit is forbidden in many urban areas on grounds of animal welfare. These methods usually only solve problems with particularly wild or aggressive individuals, since adequate dens are either known to several raccoons or will quickly be rediscovered. Loud noises, flashing lights, and unpleasant odors have proven particularly effective in driving away a mother and her kits before they would normally leave the nesting place (when the kits are about eight weeks old). Typically, though, only precautionary measures to restrict access to food waste and den sites are effective in the long term. Among all fruits and crops cultivated in agricultural areas, sweet corn in its milk stage is particularly popular among raccoons. In a two-year study by Purdue University researchers, published in 2004, raccoons were responsible for 87% of the damage to corn plants. Like other predators, raccoons searching for food can break into poultry houses to feed on chickens, ducks, their eggs, or food. Mythology, arts, and entertainment In the mythology of the Indigenous peoples of the Americas, the raccoon is the subject of folk tales. Stories such as "How raccoons catch so many crayfish" from the Tuscarora centered on its skills at foraging. In other tales, the raccoon played the role of the trickster which outsmarts other animals, like coyotes and wolves. Among others, the Dakota believe the raccoon has natural spirit powers, since its mask resembles the facial paintings, two-fingered swashes of black and white, used during rituals to connect to spirit beings. The Aztecs linked supernatural abilities especially to females, whose commitment to their young was associated with the role of wise women in their society. The raccoon also appears in Native American art across a wide geographic range. Petroglyphs with engraved raccoon tracks were found in Lewis Canyon, Texas; at the Crow Hollow petroglyph site in Grayson County, Kentucky; and in river drainages near Tularosa, the San Francisco River of New Mexico and Arizona. The meaning and significance of the Raccoon Priests Gorget, which features a stylized carving of a raccoon and was found at the Spiro Mounds, Oklahoma, remains unknown. Hunting and fur trade The fur of raccoons is used for clothing, especially for coats and coonskin caps. At present, it is the material used for the inaccurately named "sealskin" cap worn by the Royal Fusiliers of Great Britain. Sporrans made of raccoon pelt and hide have sometimes been used as part of traditional Scottish highland men's apparel since the 18th century, especially in North America. Such sporrans may or may not be of the "full-mask" type. Historically, Native American tribes not only used the fur for winter clothing, but also used the tails for ornament. The famous Sioux leader Spotted Tail took his name from a raccoon skin hat with the tail attached he acquired from a fur trader. Since the late 18th century, various types of scent hounds, called coonhounds, which are able to tree animals have been bred in the United States. 
In the 19th century, when coonskins occasionally even served as a means of payment, several thousand raccoons were killed each year in the United States. This number rose quickly when automobile coats became popular after the turn of the 20th century. In the 1920s, wearing a raccoon coat was regarded as a status symbol among college students. Attempts to breed raccoons in fur farms in the 1920s and 1930s in North America and Europe turned out not to be profitable, and farming was abandoned after prices for long-haired pelts dropped in the 1940s. Although raccoons had become rare in the 1930s, at least 388,000 were killed during the hunting season of 1934–1935. After persistent population increases began in the 1940s, the seasonal coon hunting harvest reached about one million animals in 1946–1947 and two million in 1962–1963. The broadcast of three television episodes about the frontiersman Davy Crockett and the film Davy Crockett, King of the Wild Frontier in 1954 and 1955 led to a high demand for coonskin caps in the United States, although it is unlikely either Crockett or the actor who played him, Fess Parker, actually wore a cap made from raccoon fur. The seasonal hunt reached an all-time high of 5.2 million animals in 1976–1977 and ranged between 3.2 and 4.7 million for most of the 1980s. In 1982, the average pelt price was $20. As of 1987, the raccoon was identified as the most important wild furbearer in North America in terms of revenue. In the first half of the 1990s, the seasonal hunt dropped from 1.9 million to 0.9 million due to decreasing pelt prices. Food While primarily hunted for their fur, raccoons were also a source of food for Native Americans and early American settlers. According to Ernest Thompson Seton, young specimens killed without a fight are palatable, whereas old raccoons caught after a lengthy battle are inedible. Raccoon meat was extensively eaten during the early years of California, where it was sold in the San Francisco market for $1–3 apiece. American slaves occasionally ate raccoon at Christmas, but it was not necessarily a dish of the poor or rural. The first edition of The Joy of Cooking, released in 1931, contained a recipe for preparing raccoon, and US President Calvin Coolidge's pet raccoon Rebecca was originally sent to be served at the White House Thanksgiving Dinner. Although the idea of eating raccoons may seem repulsive to most mainstream consumers, who see them as endearing, cute, or vermin, several thousand raccoons are still eaten each year in the United States, primarily in the Southern United States. Some people tout the taste of the meat. Other uses In addition to the fur and meat, the raccoon baculum (penis bone) has had numerous traditional uses in the Southern United States and beyond. Indigenous people used the bones as pipe-cleaning tools. The bones were used by moonshine distillers to guide the flow of whiskey from the drip tube to the bottle. With their tips filed down, the bones were used as toothpicks under the moniker "coon rods". In hoodoo, the folk magic of the American South, the baculum is sometimes worn as an amulet for love or luck. The bones also have decorative uses (e.g. on the trademark hat of stock car racer Richard Petty or as earrings by actresses Sarah Jessica Parker and Vanessa Williams). Pet raccoons Raccoons are sometimes kept as pets, which is discouraged by many experts because the raccoon is not a domesticated species.
Raccoons may act unpredictably and aggressively and it is extremely difficult to teach them to obey commands. In places where keeping raccoons as pets is not forbidden, such as in Wisconsin and other U.S. states, an exotic pet permit may be required. One notable pet raccoon was Rebecca, kept by US president Calvin Coolidge. Their propensity for unruly behavior exceeds that of captive skunks, and they are even less trustworthy when allowed to roam freely. Because of their intelligence and nimble forelimbs, even inexperienced raccoons are easily capable of unscrewing jars, uncorking bottles and opening door latches, with more experienced specimens having been recorded to open door knobs. Sexually mature raccoons often show aggressive natural behaviors such as biting during the mating season. Neutering them at around five or six months of age decreases the chances of aggressive behavior developing. Raccoons can become obese and suffer from other disorders due to poor diet and lack of exercise. When fed with cat food over a long time period, raccoons can develop gout. With respect to the research results regarding their social behavior, it is now required by law in Austria and Germany to keep at least two individuals to prevent loneliness. Raccoons are usually kept in a pen (indoor or outdoor), also a legal requirement in Austria and Germany, rather than in the apartment where their natural curiosity may result in damage to property. When orphaned, it is possible for kits to be rehabilitated and reintroduced to the wild. However, it is uncertain whether they readapt well to life in the wild. Feeding unweaned kits with cow's milk rather than a kitten replacement milk or a similar product can be dangerous to their health. Local and indigenous names
https://en.wikipedia.org/wiki/Leech
Leech
Leeches are segmented parasitic or predatory worms that comprise the subclass Hirudinea within the phylum Annelida. They are closely related to the oligochaetes, which include the earthworm, and like them have soft, muscular segmented bodies that can lengthen and contract. Both groups are hermaphrodites and have a clitellum, but leeches typically differ from the oligochaetes in having suckers at both ends and ring markings that do not correspond with their internal segmentation. The body is muscular and relatively solid; the coelom, the spacious body cavity found in other annelids, is reduced to small channels. The majority of leeches live in freshwater habitats, while some species can be found in terrestrial or marine environments. The best-known species, such as the medicinal leech, Hirudo medicinalis, are hematophagous, attaching themselves to a host with a sucker and feeding on blood, having first secreted the peptide hirudin to prevent the blood from clotting. The jaws used to pierce the skin are replaced in other species by a proboscis which is pushed into the skin. A minority of leech species are predatory, mostly preying on small invertebrates. The eggs are enclosed in a cocoon, which in aquatic species is usually attached to an underwater surface; members of one family, Glossiphoniidae, exhibit parental care, with the eggs being brooded by the parent. In terrestrial species, the cocoon is often concealed under a log, in a crevice or buried in damp soil. Almost seven hundred species of leech are currently recognised, of which some hundred are marine, ninety terrestrial and the remainder freshwater. Leeches have been used in medicine from ancient times until the 19th century to draw blood from patients. In modern times, leeches find medical use in the treatment of joint diseases such as epicondylitis and osteoarthritis, extremity vein diseases, and in microsurgery, while hirudin is used as an anticoagulant drug to treat blood-clotting disorders. The leech appears in the biblical Book of Proverbs as an archetype of insatiable greed. The term "leech" is used to characterise a person who takes without giving, living at the expense of others. Diversity and phylogeny Some 680 species of leech have been described, of which around 100 are marine, 480 freshwater and the remainder terrestrial. Among Euhirudinea, the true leeches, the smallest is about long, and the largest is the giant Amazonian leech, Haementeria ghilianii, which can reach . Except for Antarctica, leeches are found throughout the world but are at their most abundant in temperate lakes and ponds in the northern hemisphere. The majority of freshwater leeches are found in the shallow, vegetated areas on the edges of ponds, lakes and slow-moving streams; very few species tolerate fast-flowing water. In their preferred habitats, they may occur in very high densities; in a favourable environment with water high in organic pollutants, over 10,000 individuals were recorded per square metre (over 930 per square foot) under rocks in Illinois. Some species aestivate during droughts, burying themselves in the sediment, and can lose up to 90% of their bodyweight and still survive. Among the freshwater leeches are the Glossiphoniidae, dorso-ventrally flattened animals mostly parasitic on vertebrates such as turtles, and unique among annelids in both brooding their eggs and carrying their young on the underside of their bodies.
The terrestrial Haemadipsidae are mostly native to the tropics and subtropics, while the aquatic Hirudinidae have a wider global range; both of these feed largely on mammals, including humans. A distinctive family is the Piscicolidae, marine or freshwater ectoparasites chiefly of fish, with cylindrical bodies and usually well-marked, bell-shaped, anterior suckers. Not all leeches feed on blood; the Erpobdelliformes, freshwater or amphibious, are carnivorous and equipped with a relatively large, toothless mouth to ingest insect larvae, molluscs, and other annelid worms, which are swallowed whole. In turn, leeches are prey to fish, birds, and invertebrates. The name for the subclass, Hirudinea, comes from the Latin hirudo (genitive hirudinis), a leech; the element -bdella found in many leech group names is from the Greek βδέλλα bdella, also meaning leech. The name Les hirudinées was given by Jean-Baptiste Lamarck in 1818. Leeches were traditionally divided into two infraclasses, the Acanthobdellidea (primitive leeches) and the Euhirudinea (true leeches). The Euhirudinea are divided into the proboscis-bearing Rhynchobdellida and the rest, including some jawed species, the "Arhynchobdellida", without a proboscis. The phylogenetic tree of the leeches and their annelid relatives is based on molecular analysis (2019) of DNA sequences. Both the former classes "Polychaeta" (bristly marine worms) and "Oligochaeta" (including the earthworms) are paraphyletic: in each case the complete groups (clades) would include all the other groups shown below them in the tree. The Branchiobdellida are sister to the leech clade Hirudinida, which approximately corresponds to the traditional subclass Hirudinea. The main subdivision of leeches is into the Rhynchobdellida and the Arhynchobdellida, though the Acanthobdella are sister to the clade that contains these two groups. Evolution The most ancient annelid group consists of the free-living polychaetes that evolved in the Cambrian period, being plentiful in the Burgess Shale about 500 million years ago. Oligochaetes evolved from polychaetes and the leeches branched off from the oligochaetes. The oldest leech fossils are from the middle Permian period around 266 million years ago; an unpublished study also describes a possible leech from the Virgilian (Late Carboniferous) of New Mexico. A fossil with external ring markings found in Silurian strata in Wisconsin has sometimes been identified as a leech, but this assignment remains putative and contentious, and the animal has also been interpreted as a member of Cycloneuralia. Anatomy and physiology Leeches show a remarkable similarity to each other in morphology, very different from typical annelids which are cylindrical with a fluid-filled space, the coelom (body cavity). In leeches, most of the coelom is filled with botryoidal tissue, a loose connective tissue composed of clusters of cells of mesodermal origin. The remaining body cavity has been reduced to four slender longitudinal channels. Typically, the body is dorso-ventrally flattened and tapers at both ends. Longitudinal and circular muscles in the body wall are supplemented by diagonal muscles, giving the leech the ability to adopt a large range of body shapes and show great flexibility. Most leeches have a sucker at both the anterior (front) and posterior (back) ends, but some primitive leeches have a single sucker at the back.
Like most annelids (with a few exceptions such as Sipuncula, Echiura and Diurodrilus), the leech is a segmented animal, but unlike other annelids, the segmentation is masked by secondary external ring markings (annuli). The number of annulations varies, both between different regions of the body and between species. In one species, the body surface is divided into 102 annuli. All leech species, however, have 32 segments, called somites (34 if two head segments, which have a different organization, are counted). Of these segments, the first five are designated as the head and include the anterior brain, several ocelli (eyespots) dorsally and the sucker ventrally. The following 21 mid-body segments each contain a nerve ganglion, and between them contain two reproductive organs, a single female gonopore and nine pairs of testes. The last seven segments contain the posterior brain and are fused to form the animal's tail sucker. The septa that separate the body segments in the majority of annelids, and the mesenteries that in turn divide each segment into a left and right half, have been lost in leeches, except in the primitive genus Acanthobdella, which still has some septa and mesenteries. The body wall consists of a cuticle, an epidermis and a thick layer of fibrous connective tissue in which are embedded the circular muscles, the diagonal muscles and the powerful longitudinal muscles. There are also dorso-ventral muscles. In leeches, the original blood vascular system has been lost and replaced by the modified coelom known as the haemocoelomic system, and the coelomic fluid, called the haemocoelomic fluid, has taken over the role of blood. The haemocoelomic channels run the full length of the body, the two main ones being on either side. Part of the lining epithelium consists of chloragogen cells which are used for the storage of nutrients and in excretion. There are 10 to 17 pairs of metanephridia (excretory organs) in the mid-region of the leech. From these, ducts typically lead to a urinary bladder, which empties to the outside at a nephridiopore. Reproduction and development Leeches are hermaphrodites, with the male reproductive organs, the testes, maturing first and the ovaries later. In hirudinids, a pair will line up with the clitellar regions in contact, with the anterior end of one leech pointing towards the posterior end of the other; this results in the male gonopore of one leech being in contact with the female gonopore of the other. The penis passes a spermatophore into the female gonopore and sperm is transferred to, and probably stored in, the vagina. Some jawless leeches (Rhynchobdellida) and proboscisless leeches (Arhynchobdellida) lack a penis, and in these, sperm is passed from one individual to another by hypodermic injection. The leeches intertwine and grasp each other with their suckers. A spermatophore is pushed by one through the integument of the other, usually into the clitellar region. The sperm is liberated and passes to the ovisacs, either through the coelomic channels or interstitially through specialist "target tissue" pathways. Some time after copulation, the small, relatively yolkless eggs are laid. In most species, an albumin-filled cocoon is secreted by the clitellum and receives one or more eggs as it passes over the female gonopore. In the case of the North American Erpobdella punctata, the clutch size is about five eggs, and some ten cocoons are produced.
Each cocoon is fixed to a submerged object, or in the case of terrestrial leeches, deposited under a stone or buried in damp soil. The cocoon of Hemibdella soleae is attached to a suitable fish host. The glossiphoniids brood their eggs, either by attaching the cocoon to the substrate and covering it with their ventral surface, or by securing the cocoon to their ventral surface, and even carrying the newly hatched young to their first meal. When breeding, most marine leeches leave their hosts and become free-living in estuaries. Here they produce their cocoons, after which the adults of most species die. When the eggs hatch, the juveniles seek out potential hosts when these approach the shore. Leeches mostly have an annual or biannual life cycle. Feeding and digestion About three-quarters of leech species are parasites that feed on the blood of a host, while the remainder are predators. Leeches either have a pharynx that they can protrude, commonly called a proboscis, or a pharynx that they cannot protrude, which in some groups is armed with jaws. In the proboscisless leeches, the jaws (if any) of Arhynchobdellids are at the front of the mouth, and have three blades set at an angle to each other. In feeding, these slice their way through the skin of the host, leaving a Y-shaped incision. Behind the blades is the mouth, located ventrally at the anterior end of the body. It leads successively into the pharynx, a short oesophagus, a crop (in some species), a stomach and a hindgut, which ends at an anus located just above the posterior sucker. The stomach may be a simple tube, but the crop, when present, is an enlarged part of the midgut with a number of pairs of ceca that store ingested blood. The leech secretes an anticoagulant, hirudin, in its saliva which prevents the blood from clotting before ingestion. A mature medicinal leech may feed only twice a year, taking months to digest a blood meal. The bodies of predatory leeches are similar, though instead of a jaw many have a protrusible proboscis, which for most of the time they keep retracted into the mouth. Such leeches are often ambush predators that lie in wait until they can strike prey with the proboscises in a spear-like fashion. Predatory leeches feed on small invertebrates such as snails, earthworms and insect larvae. The prey is usually sucked in and swallowed whole. Some Rhynchobdellida however suck the soft tissues from their prey, making them intermediate between predators and blood-suckers. Blood-sucking leeches use their anterior suckers to connect to hosts for feeding. Once attached, they use a combination of mucus and suction to stay in place while they inject hirudin into the hosts' blood. In general, blood-feeding leeches are non host-specific, and do little harm to their host, dropping off after consuming a blood meal. Some marine species however remain attached until it is time to reproduce. If present in great numbers on a host, these can be debilitating, and in extreme cases, cause death. Leeches are unusual in that they do not produce certain digestive enzymes such as amylases, lipases or endopeptidases. A deficiency of these enzymes and of B complex vitamins is compensated for by enzymes and vitamins produced by endosymbiotic microflora. In Hirudo medicinalis, these supplementary factors are produced by an obligatory mutualistic relationship with the bacterial species, Aeromonas veronii. Non-bloodsucking leeches, such as Erpobdella octoculata, are host to more bacterial symbionts. 
In addition, leeches produce intestinal exopeptidases which remove amino acids from the long protein molecules one by one, possibly aided by proteases from endosymbiotic bacteria in the hindgut. This evolutionary choice of exopeptic digestion in Hirudinea distinguishes these carnivorous clitellates from oligochaetes, and may explain why digestion in leeches is so slow. Nervous system A leech's nervous system is formed of a few large nerve cells. Their large size makes leeches convenient as model organisms for the study of invertebrate nervous systems. The main nerve centre consists of the cerebral ganglion above the gut and another ganglion beneath it, with connecting nerves forming a ring around the pharynx a little way behind the mouth. A nerve cord runs backwards from this in the ventral coelomic channel, with 21 pairs of ganglia in segments six to 26. In segments 27 to 33, other paired ganglia fuse to form the caudal ganglion. Several sensory nerves connect directly to the cerebral ganglion; there are sensory and motor nerve cells connected to the ventral nerve cord ganglia in each segment. Leeches have between two and ten pigment spot ocelli, arranged in pairs towards the front of the body. There are also sensory papillae arranged in a lateral row in one annulation of each segment. Each papilla contains many sensory cells. Some rhynchobdellids have the ability to change colour dramatically by moving pigment in chromatophore cells; this process is under the control of the nervous system but its function is unclear as the change in hue seems unrelated to the colour of the surroundings. Leeches can detect touch, vibration, movement of nearby objects, and chemicals secreted by their hosts; freshwater leeches crawl or swim towards a potential host standing in their pond within a few seconds. Species that feed on warm-blooded hosts move towards warmer objects. Many leeches avoid light, though some blood feeders move towards light when they are ready to feed, presumably increasing the chances of finding a host. Gas exchange Leeches live in damp surroundings and in general respire through their body wall. The exception to this is in the Piscicolidae, where branching or leaf-like lateral outgrowths from the body wall form gills. Some rhynchobdellid leeches have an extracellular haemoglobin pigment, but this only provides for about half of the leech's oxygen transportation needs, the rest occurring by diffusion. Movement Leeches move using their longitudinal and circular muscles in a modification of the locomotion by peristalsis, self-propulsion by alternately contracting and lengthening parts of the body, seen in other annelids such as earthworms. They use their posterior and anterior suckers (one on each end of the body) to enable them to progress by looping or inching along, in the manner of geometer moth caterpillars. The posterior end is attached to the substrate, and the anterior end is projected forward peristaltically by the circular muscles until it touches down, as far as it can reach, and the anterior end is attached. Then the posterior end is released, pulled forward by the longitudinal muscles, and reattached; then the anterior end is released, and the cycle repeats. Leeches explore their environment with head movements and body waving. The Hirudinidae and Erpobdellidae can swim rapidly with up-and-down or sideways undulations of the body; the Glossiphoniidae in contrast are poor swimmers and curl up and fall to the sediment below when disturbed. 
Stories of leeches jumping have persisted for over a century; in 2024 footage was finally captured showing Chtonobdella fallax jumping. Interactions with humans Bites Leech bites are generally alarming rather than dangerous, though a small percentage of people have severe allergic or anaphylactic reactions and require urgent medical care. Symptoms of these reactions include red blotches or an itchy rash over the body, swelling around the lips or eyes, a feeling of faintness or dizziness, and difficulty in breathing. An externally attached leech will detach and fall off of its own accord when it is satiated on blood, which may take from twenty minutes to a few hours; bleeding from the wound may continue for some time. Internal attachments, such as inside the nose, are more likely to require medical intervention. Bacteria, viruses, and protozoan parasites from previous blood sources can survive within a leech for months, so leeches could potentially act as vectors of pathogens. Nevertheless, only a few cases of leeches transmitting pathogens to humans have been reported. Leech saliva is commonly believed to contain anaesthetic compounds to numb the bite area, but some authorities disagree. Although morphine-like substances have been found in leeches, they have been found in the neural tissues, not the salivary tissues. They are used by the leeches in modulating their own immunocytes and not for anaesthetising bite areas on their hosts. Depending on the species and size, leech bites can be barely noticeable or they can be fairly painful. Medical use The medicinal leech Hirudo medicinalis, and some other species, have been used for clinical bloodletting for at least 2,500 years: Ayurvedic texts describe their use for bloodletting in ancient India. In ancient Greece, bloodletting was practised according to the theory of humours found in the Hippocratic Corpus of the fifth century BC, which maintained that health depended on a balance of the four humours: blood, phlegm, black bile and yellow bile. Bloodletting using leeches enabled physicians to restore balance if they considered blood was present in excess. Pliny the Elder reported in his Natural History that the horse leech could drive elephants mad by climbing up inside their trunks to drink blood. Pliny also noted the medicinal use of leeches in ancient Rome, stating that they were often used for gout, and that patients became addicted to the treatment. In Old English, lǣce was the name for a physician as well as for the animal, though the words had different origins, and lǣcecraft, leechcraft, was the art of healing. William Wordsworth's 1802 poem "Resolution and Independence" describes one of the last of the leech-gatherers, people who travelled Britain catching leeches from the wild, causing a sharp decline in their abundance, though leeches remain numerous in Romney Marsh. By 1863, British hospitals had switched to imported leeches, some seven million being imported to hospitals in London that year. In the nineteenth century, demand for leeches was sufficient for hirudiculture, the farming of leeches, to become commercially viable. Leech usage declined with the demise of humoral theory, but made a small-scale comeback in the 1980s with the advent of microsurgery, where venous congestion can arise due to inefficient venous drainage. Leeches can reduce swelling in the tissues and promote healing, helping in particular to restore circulation after microsurgery to reattach body parts. 
Other clinical applications include varicose veins, muscle cramps, thrombophlebitis, and joint diseases such as epicondylitis and osteoarthritis. Leech secretions contain several bioactive substances with anti-inflammatory, anticoagulant and antimicrobial effects. One active component of leech saliva is a small protein, hirudin. It is widely used as an anticoagulant drug to treat blood-clotting disorders, and manufactured by recombinant DNA technology. In 2012 and 2018, Ida Schnell and colleagues trialled the use of Haemadipsa leeches to gather data on the biodiversity of their mammalian hosts in the tropical rainforest of Vietnam, where it is hard to obtain reliable data on rare and cryptic mammals. They showed that mammal mitochondrial DNA, amplified by the polymerase chain reaction, can be identified from a leech's blood meal for at least four months after feeding. They detected Annamite striped rabbit, small-toothed ferret-badger, Truong Son muntjac, and serow in this way. Water pollution Exposure to synthetic estrogen as used in contraceptive medicines, which may enter freshwater ecosystems from municipal wastewater, can affect leeches' reproductive systems. Although not as sensitive to these compounds as fish, leeches showed physiological changes after exposure, including longer sperm sacs and vaginal bulbs, and decreased epididymis weight.
Biology and health sciences
Lophotrochozoa
null
790967
https://en.wikipedia.org/wiki/Hepatitis%20C%20virus
Hepatitis C virus
The hepatitis C virus (HCV) is a small (55–65 nm in size), enveloped, positive-sense single-stranded RNA virus of the family Flaviviridae. The hepatitis C virus is the cause of hepatitis C and some cancers such as liver cancer (hepatocellular carcinoma, abbreviated HCC) and lymphomas in humans. Taxonomy The hepatitis C virus belongs to the genus Hepacivirus, a member of the family Flaviviridae. Before 2011, it was considered to be the only member of this genus. However a member of this genus has been discovered in dogs: canine hepacivirus. There is also at least one virus in this genus that infects horses. Several additional viruses in the genus have been described in bats and rodents. Structure The hepatitis C virus particle consists of a lipid membrane envelope that is 55 to 65 nm in diameter. Two viral envelope glycoproteins, E1 and E2, are embedded in the lipid envelope. They take part in viral attachment and entry into the cell. Within the envelope is an icosahedral core that is 33 to 40 nm in diameter. Inside the core is the RNA material of the virus. E1 and E2 glycoproteins E1 and E2 are covalently bonded when embedded in the envelope of HCV and are stabilized by disulfide bonds. E2 is globular and seems to protrude 6 nm out from the envelope membrane according to electron microscope images. These glycoproteins play an important role in the interactions hepatitis C has with the immune system. A hypervariable region, the hypervariable region 1 (HVR1) can be found on the E2 glycoprotein. HVR1 is flexible and quite accessible to surrounding molecules. HVR1 helps E2 shield the virus from the immune system. It prevents CD81 from latching onto its respective receptor on the virus. In addition, E2 can shield E1 from the immune system. Although HVR1 is quite variable in amino acid sequence, this region has similar chemical, physical, and conformational characteristics across many E2 glycoproteins. Genome Hepatitis C virus has a positive sense single-stranded RNA genome. The genome consists of a single open reading frame that is 9,600 nucleotide bases long. This single open reading frame is translated to produce a single protein product, which is then further processed to produce smaller active proteins. This is why on publicly available databases, such as the European Bioinformatics Institute, the viral proteome only consists of 2 proteins. At the 5′ and 3′ ends of the RNA are the untranslated regions (UTR), that are not translated into proteins but are important to translation and replication of the viral RNA. The 5′ UTR has a ribosome binding site or internal ribosome entry site (IRES) that initiates the translation of a very long protein containing about 3,000 amino acids. The core domain of the HCV IRES contains a four-way helical Holliday junction that is integrated within a predicted pseudoknot. The conformation of this core domain constrains the open reading frame's orientation for positioning on the 40S ribosomal subunit. The large pre-protein is later cleaved by cellular and viral proteases into the 10 smaller proteins that allow viral replication within the host cell, or assemble into the mature viral particles. Structural proteins made by the hepatitis C virus include Core protein, E1 and E2; nonstructural proteins include NS2, NS3, NS4A, NS4B, NS5A, and NS5B. Molecular biology The proteins of this virus are arranged along the genome in the following order: N terminal-core-envelope (E1)–E2–p7-nonstructural protein 2 (NS2)–NS3–NS4A–NS4B–NS5A–NS5B–C terminal. 
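The arithmetic linking the single ~9,600-nucleotide open reading frame, the roughly 3,000-amino-acid polyprotein, and the ten mature proteins listed above can be sketched in a few lines. The figures and the gene order below are simply those quoted in this article; exact lengths vary between genotypes, so this is an illustrative sketch rather than a reference annotation.

```python
# Illustrative sketch only: relates the ORF length quoted above to the
# approximate polyprotein size, and lists the cleavage products in genome order.

ORF_LENGTH_NT = 9_600   # single open reading frame, nucleotides (approximate)
NT_PER_CODON = 3

approx_codons = ORF_LENGTH_NT // NT_PER_CODON
print(f"~{approx_codons} codons, consistent with a polyprotein of roughly 3,000 amino acids")

# Order of mature proteins along the genome (N-terminus to C-terminus), as in the text.
gene_order = ["Core", "E1", "E2", "p7",
              "NS2", "NS3", "NS4A", "NS4B", "NS5A", "NS5B"]
print(f"{len(gene_order)} mature proteins:", "-".join(gene_order))
```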
Generation of the mature nonstructural proteins (NS2 to NS5B) relies on the activity of viral proteinases. The NS2/NS3 junction is cleaved by a metal-dependent autocatalytic proteinase encoded within NS2 and the N-terminus of NS3. The remaining cleavages downstream from this site are catalysed by a serine protease also contained within the N-terminal region of NS3. The core protein has 191 amino acids and can be divided into three domains on the basis of hydrophobicity: domain 1 (residues 1–117) contains mainly basic residues with two short hydrophobic regions; domain 2 (residues 118–174) is less basic and more hydrophobic and its C-terminus is at the end of p21; domain 3 (residues 175–191) is highly hydrophobic and acts as a signal sequence for E1 envelope protein. Both envelope proteins (E1 and E2) are highly glycosylated and important in cell entry. E1 serves as the fusogenic subunit and E2 acts as the receptor binding protein. E1 has 4–5 N-linked glycans and E2 has 11 N-glycosylation sites. NS1 (p7) protein is dispensable for viral genome replication but plays a critical role in virus morphogenesis. This protein is a 63 amino acid membrane-spanning protein which locates itself in the endoplasmic reticulum. Cleavage of p7 is mediated by the endoplasmic reticulum's signal peptidases. Two transmembrane domains of p7 are connected by a cytoplasmic loop and are oriented towards the endoplasmic reticulum's lumen. NS2 protein is a 21–23 kilodalton (kDa) transmembrane protein with protease activity. NS3 is a 67 kDa protein whose N-terminal has serine protease activity and whose C-terminal has NTPase/helicase activity. It is located within the endoplasmic reticulum and forms a heterodimeric complex with NS4A—a 54 amino acid membrane protein that acts as a cofactor of the proteinase. NS4B is a small (27 kDa) hydrophobic integral membrane protein with four transmembrane domains. It is located within the endoplasmic reticulum and plays an important role in the recruitment of other viral proteins. It induces morphological changes to the endoplasmic reticulum, forming a structure termed the membranous web. NS5A is a hydrophilic phosphoprotein which plays an important role in viral replication, modulation of cell signaling pathways and the interferon response. It is known to bind to endoplasmic reticulum-anchored human VAP proteins. The NS5B protein (65 kDa) is the viral RNA-dependent RNA polymerase. NS5B has the key function of replicating the HCV's viral RNA by using the viral positive sense RNA strand as its template and catalyzes the polymerization of ribonucleoside triphosphates (rNTP) during RNA replication. Crystal structures of the NS5B polymerase have been determined in several crystalline forms, based on the same consensus sequence BK (HCV-BK, genotype 1). The structure can be represented by a right hand shape with fingers, palm, and thumb. The encircled active site, unique to NS5B, is contained within the palm structure of the protein. Recent studies of the structure of the NS5B protein from genotype 1b strain J4 (HC-J4) indicate the presence of an active site where control of nucleotide binding and initiation of de novo RNA synthesis may occur. De novo synthesis adds the primers necessary for initiation of RNA replication. Current research attempts to bind structures to this active site to alter its functionality in order to prevent further viral RNA replication. An 11th protein has also been described. 
This protein is encoded by a +1 frameshift in the capsid gene. It appears to be antigenic but its function is unknown. Replication Replication of HCV involves several steps. The virus replicates mainly in the hepatocytes of the liver, where it is estimated that each infected cell produces approximately fifty virions (virus particles) per day, with a calculated total of one trillion virions generated daily. The virus may also replicate in peripheral blood mononuclear cells, potentially accounting for the high levels of immunological disorders found in chronically infected HCV patients. In the liver, the HCV particles are brought into the hepatic sinusoids by blood flow. These sinusoids neighbor hepatocyte cells. HCV is able to pass through the endothelium of the sinusoids and make its way to the basolateral surface of the hepatocyte cells. HCV has a wide variety of genotypes and mutates rapidly due to a high error rate on the part of the virus' RNA-dependent RNA polymerase. The mutation rate produces so many variants of the virus that it is considered a quasispecies rather than a conventional virus species. Entry into host cells occurs through complex interactions between virions, especially through their glycoproteins, and cell-surface molecules CD81, LDL receptor, SR-BI, DC-SIGN, Claudin-1, and Occludin. The envelope of HCV is similar to very low-density lipoproteins (VLDL) and low-density lipoproteins (LDL). Because of this similarity, the virus is thought to be able to associate with apolipoproteins. It could surround itself with lipoproteins, partially covering up E1 and E2. Recent research indicates that these apolipoproteins interact with scavenger receptor B1 (SR-B1). SR-B1 is able to remove lipids from the lipoproteins around the virus to better allow for HVR1 contact. Claudin 1, which is a tight-junction protein, and CD81 link to create a complex, priming them for later HCV infection processes. As the immune system is triggered, macrophages increase the amount of TNF-α around the hepatocytes which are being infected. This triggers the migration of occludin, which is another tight-junction protein, to the basolateral membrane. The HCV particle is then ready to enter the cell. These interactions lead to the endocytosis of the viral particle. This process is aided by clathrin proteins. Once inside an early endosome, the endosome and the viral envelope fuse and the RNA is allowed into the cytoplasm. HCV takes over portions of the intracellular machinery to replicate. The HCV genome is translated to produce a single protein of around 3,011 amino acids. The polyprotein is then proteolytically processed by viral and cellular proteases to produce three structural (virion-associated) and seven nonstructural (NS) proteins. Alternatively, a frameshift may occur in the Core region to produce an alternate reading frame protein (ARFP). HCV encodes two proteases, the NS2 cysteine autoprotease and the NS3-4A serine protease. The NS proteins then recruit the viral genome into an RNA replication complex, which is associated with rearranged cytoplasmic membranes. RNA replication takes place via the viral RNA-dependent RNA polymerase NS5B, which produces a negative strand RNA intermediate. The negative strand RNA then serves as a template for the production of new positive strand viral genomes. Nascent genomes can then be translated, further replicated or packaged within new virus particles. The virus replicates on intracellular lipid membranes. 
The endoplasmic reticulum in particular is deformed into uniquely shaped membrane structures termed 'membranous webs'. These structures can be induced by sole expression of the viral protein NS4B. The core protein associates with lipid droplets and utilises microtubules and dyneins to alter their location to a perinuclear distribution. Release from the hepatocyte may involve the VLDL secretory pathway. Another hypothesis states that the viral particle may be secreted from the endoplasmic reticulum through the endosomal sorting complex required for transport (ESCRT) pathway. This pathway is normally utilized to bud vesicles out of the cell. The only limitation to this hypothesis is that the pathway is normally used for cellular budding, and it is not known how HCV would commandeer the ESCRT pathway for use with the endoplasmic reticulum. Genotypes Based on genetic differences between HCV isolates, the hepatitis C virus species is classified into six genotypes (1–6) with several subtypes within each genotype (represented by lowercase letters). Subtypes are further broken down into quasispecies based on their genetic diversity. Genotypes differ by 30–35% of the nucleotide sites over the complete genome. The difference in genomic composition of subtypes of a genotype is usually 20–25%. Subtypes 1a and 1b are found worldwide and cause 60% of all cases. Clinical importance Genotype is clinically important in determining potential response to interferon-based therapy and the required duration of such therapy. Genotypes 1 and 4 are less responsive to interferon-based treatment than are the other genotypes (2, 3, 5 and 6). The duration of standard interferon-based therapy for genotypes 1 and 4 is 48 weeks, whereas treatment for genotypes 2 and 3 is completed in 24 weeks. Sustained virological responses occur in 70% of genotype 1 cases, ~90% of genotypes 2 and 3, ~65% of genotype 4 and ~80% of genotype 6. In addition, people of African descent are much less likely to respond to treatment when infected with genotypes 1 or 4. The substantial proportion of this lack of response to treatment is proposed to be caused by a single-nucleotide polymorphism (SNP) on chromosome 19 of the human genome that is predictive of treatment success. HCV genotypes 1 and 4 have been distributed endemically in overlapping areas of West and Central Africa, infecting for centuries human populations carrying the genetic polymorphism in question. This has prompted scientists to suggest that the protracted persistence of HCV genotypes 1 and 4 in people of African origin is an evolutionary adaptation of HCV over many centuries to these populations’ immunogenetic responses. Infection with one genotype does not confer immunity against others, and concurrent infection with two strains is possible. In most of these cases, one of the strains outcompetes the other in a short time. This finding may be useful in treatment, in replacing strains non-responsive to medication with others easier to treat. Recombination When two viruses infect the same cell, genetic recombination may occur. Although infrequent, HCV recombination has been observed between different genotypes, between subtypes of the same genotype and even between strains of the same subtype. Epidemiology Hepatitis C virus is predominantly a blood-borne virus, with very low risk of sexual or vertical transmission. Because of this mode of spread the key groups at risk are intravenous drug users (IDUs), recipients of blood products and sometimes patients on haemodialysis. 
Another common setting for transmission of HCV is intra-hospital (nosocomial) transmission, when practices of hygiene and sterilization are not correctly followed in the clinic. A number of cultural or ritual practices have been proposed as a potential historical mode of spread for HCV, including circumcision, genital mutilation, ritual scarification, traditional tattooing and acupuncture. It has also been argued that given the extremely prolonged periods of persistence of HCV in humans, even very low and undetectable rates of mechanical transmission via biting insects may be sufficient to maintain endemic infection in the tropics, where people receive a large number of insect bites. Evolution Identification of the origin of this virus has been difficult but genotypes 1 and 4 appear to share a common origin. A Bayesian analysis suggests that the major genotypes diverged about 300–400 years ago from the common ancestor virus. The minor genotypes diverged about 200 years ago from their major genotypes. All of the extant genotypes appear to have evolved from genotype 1 subtype 1b. A study of genotype 6 strains suggests an earlier date of evolution: approximately 1,100 to 1,350 years Before Present. The estimated rate of mutation was 1.8 × 10−4. An experimental study estimated the mutation rate at 2.5–2.9 × 10−3 base substitutions per site per year. This genotype may be the ancestor of the other genotypes. A study of European, US and Japanese isolates suggested that the date of origin of genotype 1b was approximately in the year 1925. The estimated dates of origin of types 2a and 3a were 1917 and 1943 respectively. The time of divergence of types 1a and 1b was estimated to be 200–300 years. A study of genotypes 1a and 1b estimated the dates of origin to be 1914–1930 for type 1a and 1911–1944 for type 1b. Both types 1a and 1b underwent massive expansions in their effective population size between 1940 and 1960. The expansion of HCV subtype 1b preceded that of subtype 1a by at least 16 years. Both types appear to have spread from the developed world to the developing world. The genotype 2 strains from Africa can be divided into four clades that correlate with their country of origin: (1) Cameroon and Central African Republic (2) Benin, Ghana and Burkina Faso (3) Gambia, Guinea, Guinea-Bissau and Senegal (4) Madagascar. There is also strong evidence for the dissemination of HCV genotype 2 from West Africa to the Caribbean by the trans-Atlantic slave trade. Genotype 3 is thought to have its origin in South East Asia. The dates from these various countries suggest that this virus may have evolved in South East Asia and was spread to West Africa by traders from Western Europe. It was later introduced into Japan once that country's self-imposed isolation was lifted. Once introduced to a country, its spread has been influenced by many local factors including blood transfusions, vaccination programmes, intravenous drug use and treatment regimes. Given the reduction in the rate of spread once screening for HCV in blood products was implemented in the 1990s, it would seem that previously blood transfusion was an important method of spread. Additional work is required to determine the dates of evolution of the various genotypes and the timing of their spread across the globe. Vaccination Unlike hepatitis A and B, there is currently no vaccine to prevent hepatitis C infection. Current research The study of HCV has been hampered by the narrow host range of HCV. 
The use of replicons has been successful but these have only been recently discovered. HCV, as with most RNA viruses, exists as a viral quasispecies, making it very difficult to isolate a single strain or receptor type for study. Current research is focused on small-molecule inhibitors of the viral protease, RNA polymerase and other nonstructural genes. Two agents—boceprevir by Merck and telaprevir by Vertex Pharmaceuticals, both inhibitors of NS3 protease—were approved for use on May 13, 2011, and May 23, 2011, respectively. A possible association between low vitamin D levels and a poor response to treatment has been reported. In vitro work has shown that vitamin D may be able to reduce viral replication. While this work looks promising, the results of clinical trials are pending. However, it has been proposed that vitamin D supplementation is important in addition to standard treatment, in order to enhance treatment response. Naringenin, a flavonoid found in grapefruit and other fruits and herbs, has been shown to block the assembly of intracellular infectious viral particles without affecting intracellular levels of the viral RNA or protein. Other agents that are under investigation include nucleoside and nucleotide analogue inhibitors and non-nucleoside inhibitors of the RNA-dependent RNA polymerase, inhibitors of NSP5A, and host-targeted compounds such as cyclophilin inhibitors and silibinin. Sofosbuvir for use against chronic hepatitis C infection was approved by the FDA on December 6, 2013. It has been reported to be the first drug that has demonstrated safety and efficacy to treat certain types of HCV infection without the need for co-administration of interferon. On November 22, 2013, the FDA approved simeprevir for use in combination with peginterferon-alfa and ribavirin. Simeprevir has been approved in Japan for the treatment of chronic hepatitis C infection, genotype 1. There is also current experimental research on non-drug-related therapies. Oxymatrine, for example, is a root extract from plants found in Asia that has been reported to have antiviral activity against HCV in cell cultures and animal studies. Small and promising human trials have shown beneficial results and no serious side effects, but they were too small to generalize conclusions. On October 5, 2020, it was announced that Harvey J. Alter, Michael Houghton, and Charles M. Rice had been awarded the 2020 Nobel Prize in Physiology or Medicine for the discovery of HCV.
Biology and health sciences
Specific viruses
Health
792246
https://en.wikipedia.org/wiki/Square%20Kilometre%20Array
Square Kilometre Array
The Square Kilometre Array (SKA) is an intergovernmental international radio telescope project being built in Australia (low-frequency) and South Africa (mid-frequency). The combining infrastructure, the Square Kilometre Array Observatory (SKAO), and headquarters are located at the Jodrell Bank Observatory in the United Kingdom. The SKA cores are being built in the southern hemisphere, where the view of the Milky Way galaxy is the best and radio interference is at its least. Conceived in the 1990s, and further developed and designed by the late 2010s, it will have, when completed, a total collecting area of approximately one square kilometre. It will operate over a wide range of frequencies and its size will make it 50 times more sensitive than any other radio instrument. If built as planned, it should be able to survey the sky more than ten thousand times faster than before. With receiving stations extending out to a distance of at least 3,000 km from a concentrated central core, it will exploit radio astronomy's ability to provide the highest-resolution images in all astronomy. The SKAO consortium was founded in Rome in March 2019 by seven initial member countries, with several others subsequently joining; by 2021 the consortium had 14 members. This international organisation is tasked with building and operating the facility. The project has two phases of construction: the current SKA1, commonly just called SKA, and a possible later significantly enlarged phase sometimes called SKA2. The construction phase of the project began on 5 December 2022 in both South Africa and Australia. History The Square Kilometre Array (SKA) was originally conceived in 1991 with an international working group set up in 1993. This led to the signing of the first Memorandum of Agreement in 2000. In the early days of planning, China vied to host the SKA, proposing to build several large dishes in the natural limestone depressions (karst) that dimple its southwestern provinces; China called their proposal Kilometer-square Area Radio Synthesis Telescope (KARST). Australia's first radio quiet zone was established by the Australian Communications and Media Authority on 11 April 2005 specifically to protect and maintain the current "radio-quietness" of the main Australian SKA site at the Murchison Radio-astronomy Observatory. The project has two phases of construction: the current SKA1, commonly just called SKA, and a possible later significantly enlarged phase sometimes called SKA2. PrepSKA commenced in 2008, leading to a full SKA design in 2012. Construction of Phase 1 is intended to provide an operational array, with Phase 2 expanding it to the full design. In April 2011, Jodrell Bank Observatory of the University of Manchester, in Cheshire, England was announced as the location for the project headquarters. In November 2011, the SKA Organisation was formed as an intergovernmental organisation and the project moved from a collaboration to an independent, not-for-profit company. In February 2012, a former Australian SKA Committee chairman raised concerns with South African media about risks at the Australian candidate site, particularly in terms of cost, mining interference and land agreements. SKA Australia stated that all points had been addressed in the site bid. In March 2012 it was reported that the SKA Site Advisory Committee had concluded in a confidential report in February that the South African bid was stronger. 
However, a scientific working group was set up to explore possible implementation options for the two candidate host regions, and on 25 May 2012 it was announced that it had been determined that the SKA would be split over the South African and other African sites, and the Australian and New Zealand sites. While New Zealand remained a member of the SKA Organisation in 2014, it appeared that no SKA infrastructure was likely to be located in New Zealand. In April 2015, the headquarters of the SKA project were chosen to be located at the Jodrell Bank Observatory in the UK, officially opened in July 2019. Initial construction contracts began in 2018. On 12 March 2019, the Square Kilometre Array Observatory (SKAO) was founded in Rome by seven initial member countries: Australia, China, Italy, the Netherlands, Portugal, South Africa and the United Kingdom. India and Sweden were expected to follow shortly, and eight other countries have expressed interest in joining in the future. This international organisation was tasked with building and operating the facility. By mid-2019, scientific observations with the fully completed array were expected to start no earlier than 2027. In July 2019, New Zealand withdrew from the project. At that time, five precursor facilities were already operating: MeerKAT and the Hydrogen Epoch of Reionization Array (HERA) in South Africa, the Australian SKA Pathfinder (ASKAP) and Murchison Widefield Array (MWA) in Western Australia and the International LOFAR Telescope, spread across Europe with a core in the Netherlands. The construction phase of the project began on 5 December 2022 in Australia and South Africa, with delegations from each of the eight countries leading the project attending ceremonies to celebrate the event. The Australian part of the project comprises 100,000 antennas built across an extensive area, also in the Murchison region, in the traditional lands of the Wajarri Aboriginal people. The site has been given a name in the Wajarri language. The Department of Atomic Energy (DAE) in India and UK Research and Innovation (UKRI) are investigating the possibility of establishing supercomputing facilities to handle data from the Square Kilometre Array radio telescope. The UK and India are part of the team developing the computational processing for the SKA radio telescope. On 3 January 2024, the Indian government approved joining the SKA project, accompanied by a financial commitment of ₹1,250 crore, which marks the initial step towards ratification as a member state. Description The SKA will combine the signals received from thousands of small antennas spread over a distance of several thousand kilometres to simulate a single giant radio telescope capable of extremely high sensitivity and angular resolution, using a technique called aperture synthesis. Some of the sub-arrays of the SKA will also have a very large field-of-view (FOV), making it possible to survey very large areas of sky at once. One innovative development is the use of focal-plane arrays using phased-array technology to provide multiple FOVs. This will greatly increase the survey speed of the SKA and enable several users to observe different pieces of the sky simultaneously, which is useful for (e.g.) monitoring multiple pulsars. The combination of a very large FOV with high sensitivity means that the SKA will be able to compile extremely large surveys of the sky considerably faster than any other telescope. 
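The payoff of aperture synthesis over such long baselines can be illustrated with the standard diffraction relation θ ≈ 1.22 λ/B, where B is the maximum baseline. The sketch below plugs in round numbers taken from this article (the 50–350 MHz SKA-low band and a maximum extent of about 3,000 km); it is an order-of-magnitude illustration, not an official SKA performance specification.

```python
import math

C = 3.0e8  # speed of light in m/s

def resolution_arcsec(freq_hz: float, baseline_m: float) -> float:
    """Approximate diffraction-limited angular resolution of an interferometer."""
    wavelength_m = C / freq_hz
    theta_rad = 1.22 * wavelength_m / baseline_m
    return math.degrees(theta_rad) * 3600.0  # radians -> arcseconds

BASELINE_M = 3_000e3  # ~3,000 km maximum extent, as quoted in the article

for freq_mhz in (50, 350):
    arcsec = resolution_arcsec(freq_mhz * 1e6, BASELINE_M)
    print(f"{freq_mhz:>3} MHz over 3,000 km -> ~{arcsec:.2f} arcsec resolution")
```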
The combined SKA will provide a wide range of coverage, with Australia's Murchison Widefield Array providing low-frequency coverage and South Africa's MeerKAT providing mid-frequency coverage. There will be continuous frequency coverage from 50 MHz to 14 GHz in the first two phases of its construction. Phase 1: Providing ~10% of the total collecting area at low and mid frequencies by 2023 (SKA1). Phase 2: Completion of the full array (SKA2) at low and mid frequencies by 2030. The frequency range from 50 MHz to 14 GHz, spanning more than two decades, cannot be realised using one design of antenna and so the SKA will comprise separate sub-arrays of different types of antenna elements that will make up the SKA-low, SKA-mid and survey arrays: SKA-low array: a phased array of simple dipole antennas to cover the frequency range from 50 to 350 MHz. These will be grouped in 40 m diameter stations each containing 256 vertically oriented dual-polarisation dipole elements. Stations will be arranged with 75% located within a 2 km diameter core and the remaining stations situated on three spiral arms, extending out to a radius of 50 km. SKA-mid array: an array of several thousand dish antennas (around 200 to be built in Phase 1) to cover the frequency range 350 MHz to 14 GHz. It is expected that the antenna design will follow that of the Allen Telescope Array using an offset Gregorian design having a height of 15 metres and a width of 12 metres. SKA-survey array: a compact array of parabolic dishes of 12–15 metres diameter each for the medium-frequency range, each equipped with a multi-beam, phased array feed with a large field of view and several receiving systems covering about 350 MHz – 4 GHz. The survey sub-array was removed from the SKA1 specification following a "rebaselining" exercise in 2015. The area covered by the SKA – extending out to ~3000 km – will comprise three regions: A central region containing about 5 km diameter cores of SKA-mid antennas (South Africa) and SKA-low dipoles (Western Australia). These central regions will contain approximately half of the total collecting area of the SKA arrays. A mid region extending out to 180 km. This will contain dishes and pairs of SKA-mid and SKA-low stations. In each case they will be randomly placed within the area with the density of dishes and stations falling off towards the outer part of the region. An outer region from 180 km to 3000 km. This will comprise five spiral arms, along which dishes of SKA-mid, grouped into stations of 20 dishes, will be located. The separation of the stations increases towards the outer ends of the spiral arms. Costs The SKA was estimated to cost €1.8 billion in 2014, including €650 million for Phase 1, which represented about 10% of the planned capability of the entire telescope array. There have been numerous delays and rising costs over the nearly 30-year history of the intergovernmental project. More recently, the whole project was reported to be worth around A$3 billion. 
Members In February 2021, the members of the SKAO consortium were: Australia: Department of Industry and Science Canada: National Research Council China: National Astronomical Observatories of the Chinese Academy of Sciences France: French National Centre for Scientific Research Germany: Max-Planck-Gesellschaft India: National Centre for Radio Astrophysics Italy: National Institute for Astrophysics Portugal: Portugal Space South Africa: National Research Foundation Spain: Institute of Astrophysics of Andalusia Sweden: Onsala Space Observatory Switzerland: École Polytechnique Fédérale de Lausanne The Netherlands: Netherlands Organisation for Scientific Research United Kingdom: Science and Technology Facilities Council In all, 16 countries were involved in the project. SKA locations The headquarters of the SKA are located at the University of Manchester's Jodrell Bank Observatory in Cheshire, England, while the telescopes will be installed in Australia and South Africa. Suitable sites for the SKA telescope must be in unpopulated areas with guaranteed very low levels of man-made radio interference. Four sites were initially proposed in South Africa, Australia, Argentina and China. After considerable site evaluation surveys, Argentina and China were dropped and the other two sites were shortlisted (with New Zealand joining the Australian bid, and 8 other African countries joining the South African bid): Australia The core site is located at the Murchison Radio-astronomy Observatory (MRO) at Mileura Station near Boolardy in the state of Western Australia, north-east of Geraldton South Africa The core site is located at the Meerkat National Park, at an elevation of about 1000 metres, in the Karoo area of the arid Northern Cape Province. There are also distant stations in Botswana, Ghana, Kenya, Madagascar, Mauritius, Mozambique, Namibia and Zambia. Precursors, pathfinders and design studies Many groups are working globally to develop the technology and techniques required for the SKA. Their contributions to the international SKA project are classified as either: Precursors, Pathfinders or Design Studies. Precursor facility: A telescope on one of the two SKA candidate sites, carrying out SKA-related activity. Pathfinder: A telescope or programme carrying out SKA-related technology, science and operations activity. Design Study: A study of one or more major sub-systems of the SKA design, including the construction of prototypes. Precursor facilities Australian SKA Pathfinder (ASKAP) The Australian SKA Pathfinder, or ASKAP, is an A$100 million project which built a telescope array of thirty-six twelve-metre dishes. It employs advanced, innovative technologies such as phased array feeds to give a wide field of view (30 square degrees). ASKAP was built by CSIRO at the Murchison Radio-astronomy Observatory site, located near Boolardy in the mid-west region of Western Australia. All 36 antennas and their technical systems were officially opened in October 2012. MeerKAT MeerKAT is a South African project consisting of an array of sixty-four 13.5-metre diameter dishes as a world class science instrument, and was also built to help develop technology for the SKA. 
KAT-7, a seven-dish engineering and science testbed instrument for MeerKAT in the Meerkat National Park near Carnarvon in the Northern Cape Province of South Africa, was commissioned in 2012. MeerKAT itself was up and running by May 2018, when all sixty-four 13.5-metre diameter (44.3 feet) dish antennae were completed, with verification tests then underway to ensure the instruments were functioning correctly. The dishes are equipped with a number of high performance single pixel feeds to cover frequencies from 580 MHz up to 14 GHz. Murchison Widefield Array (MWA) The Murchison Widefield Array is a low-frequency radio array operating in the frequency range 80–300 MHz that began upgraded operation in 2018 at the Murchison Radio-astronomy Observatory site in Western Australia. Hydrogen Epoch of Reionization Array (HERA) The HERA array is located in South Africa's Meerkat National Park. It is designed to study highly redshifted atomic hydrogen emission emitted prior to, and during, the epoch of reionization. Pathfinders APERture Tile in Focus (Apertif) Very Long Baseline Interferometry Electronic MultiBeam Radio Astronomy ConcEpt e-MERLIN Expanded Very Large Array Long Wavelength Array SKA Molonglo Prototype (SKAMP) NenuFAR Giant Metrewave Radio Telescope Allen Telescope Array The Allen Telescope Array in California uses innovative 6.1 m offset Gregorian dishes equipped with wide band single feeds covering frequencies from 500 MHz to 11 GHz. The 42-element array in operation by 2017 is to be extended to 350 elements. The dish design has explored methods of low-cost manufacture. LOFAR The International LOFAR Telescope—a €150 million Dutch-led project—is a novel low-frequency phased aperture array spread over northern Europe. An all-electronic telescope covering low frequencies from 10 to 240 MHz, it came online from 2009 to 2011. In 2017, LOFAR was developing crucial processing techniques for the SKA. Because of its baselines of up to 2000 km, it can make images with sub-arcsecond angular resolution over a wide field of view. Such high-resolution imaging at low frequencies is unique and will be more than an order of magnitude better than SKA1-LOW. Design studies Aperture Array Verification Programme Canadian SKA Program Preparatory Study for the SKA Square Kilometre Array Design Studies (SKADS) Electronic MultiBeam Radio Astronomy ConcEpt (EMBRACE) BEST Data challenges The amount of sensory data collected poses a huge storage problem, and will require real-time signal processing to reduce the raw data to relevant derived information. In mid-2011 it was estimated the array could generate an exabyte a day of raw data, which could be compressed to around 10 petabytes. China, a founding member of the project, has designed and constructed the first prototype of the regional data processing centre. An Tao, head of the SKA group of the Shanghai Astronomical Observatory, stated, "It will generate data streams far beyond the total Internet traffic worldwide." The Tianhe-2 supercomputer was used in 2016 to train the software. The processing of the project will be performed on Chinese-designed and -manufactured Virtex-7 processors by Xilinx, integrated into platforms by the CSIRO. China has pushed for a unified beamforming design that has led other major countries to drop out of the project. Canada continues to use Altera Stratix-10 processors (by Intel). 
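Setting the hardware questions aside for a moment, the headline estimate above (roughly an exabyte of raw data per day, reduced to around 10 petabytes) already implies a demanding sustained data rate. The short sketch below simply restates those quoted figures and derives the implied reduction factor and throughput; it is illustrative arithmetic, not a published SKA system specification.

```python
# Illustrative arithmetic based on the mid-2011 estimates quoted above.
EXABYTE = 10**18        # bytes
PETABYTE = 10**15       # bytes
SECONDS_PER_DAY = 86_400

raw_bytes_per_day = 1 * EXABYTE        # ~1 EB/day of raw data
stored_bytes_per_day = 10 * PETABYTE   # ~10 PB/day after reduction

reduction_factor = raw_bytes_per_day / stored_bytes_per_day
raw_rate_tbit_s = raw_bytes_per_day * 8 / SECONDS_PER_DAY / 1e12

print(f"Implied reduction factor: ~{reduction_factor:.0f}x")
print(f"Implied sustained raw rate: ~{raw_rate_tbit_s:.0f} Tbit/s")
```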
Export restrictions complicate this hardware picture: under the US embargo, it is illegal for any US company to export high-end Intel FPGAs or any related CSP design details or firmware to China, which will severely limit cooperation. Technology Development Project (TDP) The Technology Development Project, or TDP, is a project to specifically develop dish and feed technology for the SKA. It is operated by a consortium of universities and was completed in 2012. Project risks and opposition Priority astronomical sites in South Africa are protected against potential risks by the Astronomy Geographic Advantage Act of 2007. Put in place specifically to support the South African SKA bid, it outlaws all activities that could endanger scientific operation of core astronomical instruments. In 2010, concerns were raised over the will to enforce this law when Royal Dutch Shell applied to explore the Karoo for shale gas using hydraulic fracturing, an activity that would have the potential to increase radio interference at the site. An identified remote station location for the southern African array in Mozambique was subject to flooding and excluded from the project, despite the SKA Site Selection Committee's technical analysis reporting that all African remote stations could implement flood mitigation solutions. During 2014, South Africa experienced a month-long strike action by the National Union of Metalworkers (NUMSA), which added to delays in the installation of dishes. The largest risk to the overall project is probably its budget, which up until 2014 had not been committed. There has been opposition to the project from farmers, businesses, and individuals in South Africa since the project's inception. The advocacy group Save the Karoo has stated that the radio quiet zone would create further unemployment in the South African region where unemployment is already above 32%. Farmers had stated that the agriculture-based economy in the Karoo would collapse if they were forced to sell their land. Key projects The capabilities of the SKA will be designed to address a wide range of questions in astrophysics, fundamental physics, cosmology and particle astrophysics as well as extending the range of the observable universe. A number of key science projects that have been selected for implementation via the SKA are listed below. Extreme tests of general relativity For almost one hundred years, Albert Einstein's general theory of relativity has precisely predicted the outcome of every experiment made to test it. Most of these tests, including the most stringent ones, have been carried out using radio astronomical measurements. By using pulsars as cosmic gravitational wave detectors, or timing pulsars found orbiting black holes, astronomers will be able to examine the limits of general relativity such as the behaviour of spacetime in regions of extremely curved space. The goal is to reveal whether Einstein was correct in his description of space, time and gravity, or whether alternatives to general relativity are needed to account for these phenomena. Galaxies, cosmology, dark matter and dark energy The sensitivity of the SKA in the 21 cm hydrogen line will map a billion galaxies out to the edge of the observable Universe. The large-scale structure of the cosmos thus revealed will give constraints to determine the processes resulting in galaxy formation and evolution. Imaging hydrogen throughout the Universe will provide a three-dimensional picture of the first ripples of structure that formed individual galaxies and clusters. 
This may also allow the measurement of effects hypothetically caused by dark energy and causing the increasing rate of expansion of the universe. The cosmological measurements enabled by SKA galaxy surveys include testing models of dark energy, gravity, the primordial universe, and fundamental cosmology, and they are summarised in a series of papers available online. Epoch of re-ionization The SKA is intended to provide observational data from the so-called Dark Ages (between 300,000 years after the Big Bang when the universe became cool enough for hydrogen to become neutral and decouple from radiation) and the time of First Light (a billion years later when young galaxies are seen to form for the first time and hydrogen becomes ionized again). By observing the primordial distribution of gas, the SKA should be able to see how the Universe gradually lit up as its stars and galaxies formed and then evolved. This period of the Dark Ages, culminating in First Light, is considered the first chapter in the cosmic story of creation, and the resolving power required to see this event is the reason for the Square Kilometre Array's design. To see back to First Light requires a telescope 100 times more powerful than the biggest radio telescopes currently in the world, taking up 1 million square metres of collecting area, or one square kilometre. Cosmic magnetism It is still not possible to answer basic questions about the origin and evolution of cosmic magnetic fields, but it is clear that they are an important component of interstellar and intergalactic space. By mapping the effects of magnetism on the radiation from very distant galaxies, the SKA will investigate the form of cosmic magnetism and the role it has played in the evolving Universe. Search for extraterrestrial life This key science program, called "Cradle of Life", will focus on three objectives: observing protoplanetary discs in habitable zones, searching for prebiotic chemistry, and contributing to the search for extraterrestrial intelligence (SETI). The SKA will be able to probe the habitable zone of Sun-like protostars, where Earth-like planets or moons are most likely to have environments favourable for the development of life. The signatures of forming Earth-like planets imprinted on circumstellar dust may be the most conspicuous evidence of their presence and evolution, and may even detect planets capable of supporting life. Astrobiologists will also use the SKA to search for complex organic compounds (carbon-containing chemicals) in outer space, including amino acids, by identifying spectral lines at specific frequencies. The SKA will be capable of detecting extremely weak radio emission "leakage" from nearby extraterrestrial civilizations, if they exist.
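Returning to the hydrogen-line science above, the connection between the SKA's frequency bands and the epochs it can probe follows from the relation ν_obs = ν_rest / (1 + z) for the 21 cm line, whose rest frequency is about 1420 MHz. The sketch below applies that standard textbook relation to the 50–350 MHz SKA-low band quoted earlier in this article; the resulting redshifts are illustrative and do not come from an SKA observing specification.

```python
REST_FREQ_MHZ = 1420.4  # rest-frame frequency of the neutral-hydrogen 21 cm line

def redshift_at(observed_mhz: float) -> float:
    """Redshift that brings the 21 cm line to the given observed frequency."""
    return REST_FREQ_MHZ / observed_mhz - 1.0

# Band edges quoted earlier: SKA-low covers 50-350 MHz.
for freq_mhz in (350.0, 100.0, 50.0):
    print(f"{freq_mhz:>5.0f} MHz  ->  21 cm signal from redshift z ~ {redshift_at(freq_mhz):.1f}")
```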
Technology
Ground-based observatories
null
793565
https://en.wikipedia.org/wiki/Highland
Highland
Highlands or uplands are areas of high elevation such as a mountainous region, elevated mountainous plateau or high hills. Generally, upland refers to a range of hills, typically from up to , while highland is usually reserved for ranges of low mountains. However, the two terms are interchangeable and also include regions that are transitional between hilly and mountainous terrain. Highlands internationally Probably the best-known area officially or unofficially referred to as highlands in the Anglosphere is the Scottish Highlands in northern Scotland, the mountainous region north and west of the Highland Boundary Fault. The Highland council area is a local government area in the Scottish Highlands and Britain's largest local government area. Other highland or upland areas reaching 400 m or higher in the United Kingdom include the Southern Uplands in Scotland, the Pennines, North York Moors, Dartmoor and Exmoor in England, and the Cambrian Mountains in Wales. Many countries and regions also have areas referred to as highlands. These include parts of Afghanistan, Tibet, Ethiopia, Canada, Kenya, Eritrea, Yemen, Ghana, Nigeria, Papua New Guinea, Syria, Turkey and Cantabria. Similar terms used in other countries include high country, used in New Zealand, New South Wales, Victoria, Tasmania and Southern Queensland in Australia, and parts of the United States (notably Western North Carolina), highveld, used in South Africa and Roof of the World, used for Tibet. The central Afghan highlands are in the center of Afghanistan, mostly located between 2,000 and 3,000 m above sea level. They have a very cold winter, and a short and cool summer. These highlands have mountain pastures during summer (sardsīr), watered by many small streams and rivers. There are also pastures available during winter in the neighboring warm lowlands (garmsīr), which makes the region ideal for seasonal transhumance. The highlands in Australia are often above the elevation of 500 m. These areas often receive snowfall in winter. Most of the highlands lead up to large alpine or sub-alpine mountainous regions such as the Australian Alps, Snowy Mountains, Great Dividing Range, Northern Tablelands and Blue Mountains. The most mountainous region of Tasmania is the Central Highlands area, which covers most of the central-western parts of the state. Many of these areas are highly elevated alpine regions. The Ozarks cover nearly , making it the most extensive highland region between the Appalachians and Rockies. This region contains some of the oldest rocks in North America. The spine of the mountains stretches across the island of New Guinea, forming the densely populated highlands of Papua New Guinea, and the Highland Papua, Indonesia. The Central Highlands of Sri Lanka are rain forests, where the elevation reaches 2,500 m (8,200 ft) above sea level. The Sri Lanka montane rain forests represent the montane and submontane moist forests above 1,000 m (3,300 ft) in the central highlands and in the Knuckles mountain range. Half of Sri Lanka's endemic flowering plants and 51 percent of the endemic vertebrates are restricted to this ecoregion. The highlands of Iceland cover about 40% of the country and are mostly inhospitable to humans. They are generally considered to be any land above 500 m. The mountainous natural region of the Thai highlands is found in Northern Thailand. The Cameron Highlands is a highland area and hill station in Pahang, Malaysia. 
Shillong in India in the state of Meghalaya is a hill station that is surrounded by highlands. Officers of the British Raj referred to Shillong as "The Scotland of the East". Other planets Highland continents—or terrae—are areas of topographically unstable terrain, with high peaks and valleys. They resemble highlands on Earth, but the term is applied to much larger areas on other planets. They can be found on Mercury, Venus, Mars, and the Moon.
Physical sciences
Landforms: General
Earth science
793635
https://en.wikipedia.org/wiki/Eoarchean
Eoarchean
The Eoarchean ( ; also spelled Eoarchaean) is the first era of the Archean Eon of the geologic record. It spans 431 million years, from the end of the Hadean Eon 4031 Mya to the start of the Paleoarchean Era 3600 Mya. Some estimates place the beginnings of life on Earth in this era, while others place them earlier. Evidence of archaea and cyanobacteria dates to 3500 Mya, comparatively shortly after the Eoarchean. At that time, the atmosphere was without oxygen and the pressure values ranged from 10 to 100 bar (around 10 to 100 times the atmospheric pressure today). Chronology The Eoarchean Era was formerly officially unnamed and informally referred to as the first part of the Early Archean Eon (which is now an obsolete name) alongside the Paleoarchean Era. The International Commission on Stratigraphy now officially recognizes the Eoarchean Era as the first part of the Archaean Eon, preceded by the Hadean Eon, during which the Earth is believed to have been essentially molten. The Eoarchaean's lower boundary or starting point of 4.031 Gya (4031 million years ago) is officially recognized by the International Commission on Stratigraphy. The name comes from two Greek words: (dawn) and (ancient). The first supercontinent candidate Vaalbara appeared around the end of this period at about . Geology The beginning of the Eoarchean is characterized by heavy asteroid bombardment within the Inner Solar System: the Late Heavy Bombardment. The largest Eoarchean rock formation is the Isua Greenstone Belt on the south-west coast of Greenland, which dates from 3.8 billion years ago. The Acasta Gneiss within the Canadian Shield has been dated to 4,031 Ma and is therefore the oldest preserved rock formation. In 2008, another rock formation was discovered in the Nuvvuagittuq Greenstone Belt in northern Québec, Canada, which has been dated to be . These formations are presently under intense investigation. Oxygen isotope ratios show that the hydrological cycle had begun by the early Eoarchaean and possibly earlier. Carbonate precipitation (caused by heating of sea water by hydrothermal vents) acted as an important sink regulating the concentration of carbon dioxide in the atmosphere during this era. Atmosphere Apatite from Greenland dated to 3,850 million years old shows evidence of carbon-12 enrichment. This has sparked a debate over whether there might have been photosynthetic life before 3.8 billion years ago. Proposed subdivisions Eoarchean Era — 4031–3600 Mya Acastan Period — 4031–3810 Mya Isuan Period — 3810–3600 Mya
Physical sciences
Geological timescale
Earth science
793825
https://en.wikipedia.org/wiki/Goose%20bumps
Goose bumps
Goose bumps, goosebumps or goose pimples (also called chill bumps) are the bumps on a person's skin at the base of body hairs which may involuntarily develop when a person is tickled, cold or experiencing strong emotions such as fear, euphoria or sexual arousal. The formation of goose bumps in humans under stress is considered by some to be a vestigial reflex, though visible piloerection is associated with changes in skin temperature in humans. The reflex of producing goose bumps is known as piloerection or the pilomotor reflex, or, more traditionally, horripilation. It occurs in many mammals; a prominent example is porcupines, which raise their quills when threatened, or sea otters when they encounter sharks or other predators. Anatomy and biology Goose bumps are created when tiny muscles at the base of each hair, known as arrector pili muscles, contract and pull the hair straight up. The reflex is started by the sympathetic nervous system, which is responsible for many fight-or-flight responses. The muscle cells connected to the hair follicle have been visualized by actin immunofluorescence. Arrector pili muscle Arrector pili muscles (APM) are smooth muscles which connect the basement membrane to the hair follicle. When these muscles contract, they increase the trapping of air at the surface of the skin, which in turn aids the body's thermoregulation. It used to be believed that each APM was connected to an individual hair follicle. More recent studies have disproved this and shown that multiple hair follicles can be connected to a single APM. Between the hair follicle and the APM there are lobules which form an angular shape. These lobules are sebaceous gland lobules, which are supported by the APM. Hair follicle Hair follicles have four parts: the bulb, the suprabulbar area, the isthmus and the infundibulum. The bulb is the part responsible for the growth of the rest of the hair follicle. As a response to cold In animals covered with fur or hair, the erect hairs trap air to create a layer of insulation. Goose bumps can also be a response to anger or fear: the erect hairs make the animal appear larger, in order to intimidate enemies. This can be observed in the intimidation displays of chimpanzees, some New World monkeys like the cotton-top tamarin, in stressed mice and rats, and in frightened cats. In humans In humans, goose bumps can even extend to piloerection as a reaction to hearing nails scratch on a chalkboard, or feeling or remembering strong and positive emotions (e.g., after winning a sports event), or while watching a horror film. Some people can deliberately evoke goose bumps in themselves without any external trigger. This is called "voluntarily generated piloerection." Further research is needed on such people. Goose bumps are accompanied by a specific physiological response pattern that is thought to indicate the emotional state of being moved. In humans, goose bumps occur everywhere on the body, including the legs, neck, and other areas of the skin that have hair. In some people, they even occur on the face or head. Goose bumps tend to occur across the whole body, especially when elicited by thermal or emotional stimuli, and only locally when elicited via tactile stimuli. Piloerection is also a classic symptom of some diseases, such as temporal lobe epilepsy, some brain tumors, and autonomic hyperreflexia. Goose bumps can also be caused by withdrawal from opiates such as heroin.
A skin condition that mimics goose bumps in appearance is keratosis pilaris. Causes Extreme temperatures Goose bumps can be experienced in the presence of sudden cold, for example on entering a cold environment, as the skin attempts to re-balance its surface temperature quickly. The stimulus of cold surroundings causes the tiny muscles (arrector pili muscles) attached to each hair follicle to contract. This contraction causes the hair strands to stand straight, the purpose of which is to aid quicker drying by moving water clinging to the hair upward and away from the skin, where it can evaporate. Intense emotion The emotional correlates of piloerection in humans are not well understood. People often say they feel their "hair standing on end" when they are frightened or in awe. Music Most research using musical stimuli has focused on self-reported "chills", which are a subjective experience, unlike piloerection, which is an objectively quantifiable physiological reaction. However, research has shown that self-reported piloerection does not correspond to observed piloerection. Thus, research on the chills should not be considered to extend to the physiological phenomenon of piloerection. Ingestion Medications and herbal supplements that affect body temperature and blood flow may cause piloerection. For example, one of the commonly reported side effects of the intake of yohimbine is piloerection. Opiate withdrawal Piloerection is one of the signs of opioid withdrawal. The term "cold turkey", meaning abrupt withdrawal from a drug, may derive from the goose bumps that occur during abrupt withdrawal from opioids; these resemble the skin of a refrigerated plucked turkey. Voluntary control An unknown proportion of people may consciously initiate the sensation and physiological signs of piloerection. The ability is discovered spontaneously, appears to be innate, and is not known to be learnable or acquirable. Those with the ability are frequently unaware that not everyone can do it. The ability appears to correlate with personality traits associated with openness to experience. Etymology The term "goose bumps" derives from the phenomenon's association with goose skin. Goose feathers grow from pores in the epidermis that resemble human hair follicles. When a goose's feathers are plucked, its skin has protrusions where the feathers were, and these bumps are what the human phenomenon resembles. It is not clear why the goose in particular was chosen in English (and German, Greek, Icelandic, Italian, Swedish, Danish, Norwegian, Polish and Czech), as most other birds share this same anatomical feature. Other languages may use a different species. For example, the hen or chicken is used in Vietnamese, Korean, Japanese, Cantonese, Finnish, Dutch, Luxembourgish, French, Spanish, Portuguese, Romanian, and Galician; Irish uses both; Hebrew, the duck; the ants (referred to as "murashki", alluding to the feeling of ants crawling on one's skin) in Ukrainian and Russian; and a variety of synonyms in Mandarin. Some authors have applied "goose bumps" to the symptoms of sexually transmitted diseases. "Bitten by a Winchester goose" was a common euphemism for having contracted syphilis in the 16th century. "Winchester geese" was the nickname for the prostitutes of south London, licensed by the Bishop of Winchester in the area around his London palace.
Biology and health sciences
Symptoms and signs
Health
793975
https://en.wikipedia.org/wiki/Horizontal%20branch
Horizontal branch
The horizontal branch (HB) is a stage of stellar evolution that immediately follows the red-giant branch in stars whose masses are similar to the Sun's. Horizontal-branch stars are powered by helium fusion in the core (via the triple-alpha process) and by hydrogen fusion (via the CNO cycle) in a shell surrounding the core. The onset of core helium fusion at the tip of the red-giant branch causes substantial changes in stellar structure, resulting in an overall reduction in luminosity, some contraction of the stellar envelope, and the surface reaching higher temperatures. Discovery Horizontal branch stars were discovered with the first deep photographic photometric studies of globular clusters and were notable for being absent from all open clusters that had been studied up to that time. The horizontal branch is so named because in low-metallicity star collections like globular clusters, HB stars lie along a roughly horizontal line in a Hertzsprung–Russell diagram. Because the stars of one globular cluster are all at essentially the same distance from us, their apparent magnitudes all have the same relationship to their absolute magnitudes, and thus absolute-magnitude-related properties are plainly visible on an H-R diagram confined to stars of that cluster, undiffused by distance and thence magnitude uncertainties. Evolution After exhausting their core hydrogen, stars leave the main sequence and begin fusion in a hydrogen shell around the helium core and become giants on the red-giant branch. In stars with masses up to 2.3 times the mass of the Sun the helium core becomes a region of degenerate matter that does not contribute to the generation of energy. It continues to grow and increase in temperature as the hydrogen fusion in the shell contributes more helium. If the star has more than about 0.5 solar masses, the core eventually reaches the temperature necessary for the fusion of helium into carbon through the triple-alpha process. The initiation of helium fusion begins across the core region, which will cause an immediate temperature rise and a rapid increase in the rate of fusion. Within a few seconds the core becomes non-degenerate and quickly expands, producing an event called helium flash. Non-degenerate cores initiate fusion more smoothly, without a flash. The output of this event is absorbed by the layers of plasma above, so the effects are not seen from the exterior of the star. The star now changes to a new equilibrium state, and its evolutionary path switches from the red-giant branch (RGB) onto the horizontal branch of the Hertzsprung–Russell diagram. Stars initially between about and have larger helium cores that do not become degenerate. Instead their cores reach the Schönberg–Chandrasekhar mass at which they are no longer in hydrostatic or thermal equilibrium. They then contract and heat up, which triggers helium fusion before the core becomes degenerate. These stars also become hotter during core helium fusion, but they have different core masses and hence different luminosities from HB stars. They vary in temperature during core helium fusion and perform a blue loop before moving to the asymptotic giant branch. Stars more massive than about also ignite their core helium smoothly, and also go on to burn heavier elements as a red supergiant. Stars remain on the horizontal branch for around 100 million years, becoming slowly more luminous in the same way that main sequence stars increase luminosity as the virial theorem shows. 
When their core helium is eventually exhausted, they progress to helium shell burning on the asymptotic giant branch (AGB). On the AGB they become cooler and much more luminous. Horizontal branch morphology Stars on the horizontal branch all have very similar core masses, following the helium flash. This means that they have very similar luminosities, and on a Hertzsprung–Russell diagram plotted by visual magnitude the branch is horizontal. The size and temperature of an HB star depends on the mass of the hydrogen envelope remaining around the helium core. Stars with larger hydrogen envelopes are cooler. This creates the spread of stars along the horizontal branch at constant luminosity. The temperature variation effect is much stronger at lower metallicity, so old clusters usually have more pronounced horizontal branches. Although the horizontal branch is named because it consists largely of stars with approximately the same absolute magnitude across a range of temperatures, lying in a horizontal bar on a color–magnitude diagrams, the branch is far from horizontal at the blue end. The horizontal branch ends in a "blue tail" with hotter stars having lower luminosity, occasionally with a "blue hook" of extremely hot stars. It is also not horizontal when plotted by bolometric luminosity, with hotter horizontal branch stars being less luminous than cooler ones. The hottest horizontal-branch stars, referred to as extreme horizontal branch, have temperatures of 20,000–30,000 K. This is far beyond what would be expected for a normal core helium burning star. Theories to explain these stars include binary interactions, and "late thermal pulses", where a thermal pulse that asymptotic giant branch (AGB) stars experience regularly, occurs after fusion has ceased and the star has entered the superwind phase. These stars are "born again" with unusual properties. Despite the bizarre-sounding process, this is expected to occur for 10% or more of post-AGB stars, although it is thought that only particularly late thermal pulses create extreme horizontal-branch stars, after the planetary nebular phase and when the central star is already cooling towards a white dwarf. The RR Lyrae gap Globular cluster CMDs (Color-Magnitude diagrams) generally show horizontal branches that have a prominent gap in the HB. This gap in the CMD incorrectly suggests that the cluster has no stars in this region of its CMD. The gap occurs at the instability strip, where many pulsating stars are found. These pulsating horizontal-branch stars are known as RR Lyrae variable stars and they are obviously variable in brightness with periods of up to 1.2 days. It requires an extended observing program to establish the star's true (that is, averaged over a full period) apparent magnitude and color. Such a program is usually beyond the scope of an investigation of a cluster's color–magnitude diagram. Because of this, while the variable stars are noted in tables of a cluster's stellar content from such an investigation, these variable stars are not included in the graphic presentation of the cluster CMD because data adequate to plot them correctly are unavailable. This omission often results in the RR Lyrae gap seen in many published globular cluster CMDs. Different globular clusters often display different HB morphologies, by which is meant that the relative proportions of HB stars existing on the hotter end of the RR Lyr gap, within the gap, and to the cooler end of the gap varies sharply from cluster to cluster. 
The underlying cause of different HB morphologies is a long-standing problem in stellar astrophysics. Chemical composition is one factor (usually in the sense that more metal-poor clusters have bluer HBs), but other stellar properties like age, rotation and helium content have also been suggested as affecting HB morphology. This has sometimes been called the "Second Parameter Problem" for globular clusters, because there exist pairs of globular clusters which seem to have the same metallicity yet have very different HB morphologies; one such pair is NGC 288 (which has a very blue HB) and NGC 362 (which has a rather red HB). The label "second parameter" acknowledges that some unknown physical effect is responsible for HB morphology differences in clusters that seem otherwise identical. Relationship to the red clump A related class of stars is the clump giants, those belonging to the so-called red clump, which are the relatively younger (and hence more massive) and usually more metal-rich population I counterparts to HB stars (which belong to population II). Both HB stars and clump giants are fusing helium to carbon in their cores, but differences in the structure of their outer layers result in the different types of stars having different radii, effective temperatures, and color. Since color index is the horizontal coordinate in a Hertzsprung–Russell diagram, the different types of star appear in different parts of the CMD despite their common energy source. In effect, the red clump represents one extreme of horizontal-branch morphology: all the stars are at the red end of the horizontal branch, and may be difficult to distinguish from stars ascending the red-giant branch for the first time.
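As a brief illustration of the point made in the Discovery section above, that all stars in one globular cluster share essentially the same distance and therefore the same offset between apparent and absolute magnitude, the sketch below applies the standard distance-modulus relation m − M = 5 log10(d / 10 pc). The cluster distance and the absolute magnitudes used are assumed, round values chosen only for illustration:

import math

def apparent_magnitude(absolute_mag, distance_pc):
    # Distance modulus: m - M = 5 * log10(d / 10 pc)
    return absolute_mag + 5 * math.log10(distance_pc / 10.0)

distance_pc = 10_000  # a hypothetical globular cluster at 10 kpc
for m_abs in (0.4, 0.5, 0.6):  # absolute magnitudes typical of HB stars (assumed values)
    print(m_abs, "->", round(apparent_magnitude(m_abs, distance_pc), 2))
# Every cluster member receives the same +15 magnitude offset, so stars of nearly equal
# absolute magnitude line up as a horizontal bar in the cluster's colour-magnitude diagram.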
Physical sciences
Stellar astronomy
Astronomy
794439
https://en.wikipedia.org/wiki/Sodium%20sulfate
Sodium sulfate
Sodium sulfate (also known as sodium sulphate or sulfate of soda) is the inorganic compound with formula Na2SO4 as well as several related hydrates. All forms are white solids that are highly soluble in water. With an annual production of 6 million tonnes, the decahydrate is a major commodity chemical product. It is mainly used as a filler in the manufacture of powdered home laundry detergents and in the Kraft process of paper pulping for making highly alkaline sulfides. Forms Anhydrous sodium sulfate, known as the rare mineral thenardite, is used as a drying agent in organic synthesis. Heptahydrate sodium sulfate is a very rare form. Decahydrate sodium sulfate, known as the mineral mirabilite, is widely used by the chemical industry. It is also known as Glauber's salt. History The decahydrate of sodium sulfate is known as Glauber's salt after the Dutch–German chemist and apothecary Johann Rudolf Glauber (1604–1670), who discovered it in Austrian spring water in 1625. He named it sal mirabilis (miraculous salt), because of its medicinal properties: the crystals were used as a general-purpose laxative, until more sophisticated alternatives came about in the 1900s. However, J. Kunckel later alleged that it was known as a secret medicine in Saxony already in the mid-16th century. In the 18th century, Glauber's salt began to be used as a raw material for the industrial production of soda ash (sodium carbonate), by reaction with potash (potassium carbonate). Demand for soda ash increased, and the supply of sodium sulfate had to increase in line. Therefore, in the 19th century, the large-scale Leblanc process, producing synthetic sodium sulfate as a key intermediate, became the principal method of soda-ash production. Chemical properties Sodium sulfate is a typical electrostatically bonded ionic sulfate. The existence of free sulfate ions in solution is indicated by the easy formation of insoluble sulfates when these solutions are treated with Ba2+ or Pb2+ salts: Na2SO4 + BaCl2 → 2 NaCl + BaSO4 (insoluble). Sodium sulfate is unreactive toward most oxidizing or reducing agents. At high temperatures, it can be converted to sodium sulfide by carbothermal reduction (aka thermo-chemical sulfate reduction (TSR), high temperature heating with charcoal, etc.): Na2SO4 + 2 C → Na2S + 2 CO2. This reaction was employed in the Leblanc process, a defunct industrial route to sodium carbonate. Sodium sulfate reacts with sulfuric acid to give the acid salt sodium bisulfate: Na2SO4 + H2SO4 ⇌ 2 NaHSO4. Sodium sulfate displays a moderate tendency to form double salts. The only alums formed with common trivalent metals are NaAl(SO4)2 (unstable above 39 °C) and NaCr(SO4)2, in contrast to potassium sulfate and ammonium sulfate, which form many stable alums. Double salts with some other alkali metal sulfates are known, including Na2SO4·3K2SO4, which occurs naturally as the mineral aphthitalite. Formation of glaserite by reaction of sodium sulfate with potassium chloride has been used as the basis of a method for producing potassium sulfate, a fertiliser. Other double salts include 3Na2SO4·CaSO4, 3Na2SO4·MgSO4 (vanthoffite) and NaF·Na2SO4. Physical properties Sodium sulfate has unusual solubility characteristics in water. Its solubility in water rises more than tenfold between 0 °C and 32.384 °C, where it reaches a maximum of 49.7 g/100 mL. At this point the solubility curve changes slope, and the solubility becomes almost independent of temperature. This temperature of 32.384 °C, corresponding to the release of crystal water and melting of the hydrated salt, serves as an accurate temperature reference for thermometer calibration.
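The solubility figures quoted above are expressed as grams of anhydrous Na2SO4 per 100 mL of water. A minimal sketch, assuming standard molar masses (about 142.04 g/mol for Na2SO4 and 322.20 g/mol for the decahydrate; these values are not taken from this article), converts the solubility maximum into the equivalent mass of Glauber's salt:

M_ANHYDROUS = 142.04    # g/mol, Na2SO4 (standard value, assumed)
M_DECAHYDRATE = 322.20  # g/mol, Na2SO4·10H2O (standard value, assumed)
solubility_anhydrous = 49.7  # g per 100 mL of water at 32.384 °C, as quoted above

# Mass of decahydrate delivering the same amount of Na2SO4; this ignores the water of
# crystallisation that the dissolving hydrate adds to the solvent, so it is only approximate.
solubility_decahydrate = solubility_anhydrous * M_DECAHYDRATE / M_ANHYDROUS
print(f"about {solubility_decahydrate:.0f} g of Glauber's salt per 100 mL of water")  # ≈ 113 g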
Structure Crystals of the decahydrate consist of [Na(OH2)6]+ ions with octahedral molecular geometry. These octahedra share edges such that 8 of the 10 water molecules are bound to sodium and 2 others are interstitial, being hydrogen-bonded to sulfate. These cations are linked to the sulfate anions by hydrogen bonds. The Na–O distances are about 240 pm. Crystalline sodium sulfate decahydrate is also unusual among hydrated salts in having a measurable residual entropy (entropy at absolute zero) of 6.32 J/(K·mol). This is ascribed to its ability to distribute water much more rapidly compared to most hydrates. Production The world production of sodium sulfate, almost exclusively in the form of the decahydrate, amounts to approximately 5.5 to 6 million tonnes annually (Mt/a). In 1985, production was 4.5 Mt/a, half from natural sources, and half from chemical production. After 2000, at a stable level until 2006, natural production had increased to 4 Mt/a, and chemical production decreased to 1.5 to 2 Mt/a, with a total of 5.5 to 6 Mt/a. For all applications, naturally produced and chemically produced sodium sulfate are practically interchangeable. Natural sources Two thirds of the world's production of the decahydrate (Glauber's salt) is from the natural mineral form mirabilite, for example as found in lake beds in southern Saskatchewan. In 1990, Mexico and Spain were the world's main producers of natural sodium sulfate (each around 500,000 tonnes), with Russia, the United States, and Canada at around 350,000 tonnes each. Natural resources are estimated at over 1 billion tonnes. Major producers of 200,000 to 1,500,000 tonnes/year in 2006 included Searles Valley Minerals (California, US), Airborne Industrial Minerals (Saskatchewan, Canada), Química del Rey (Coahuila, Mexico), Minera de Santa Marta and Criaderos Minerales Y Derivados, also known as Grupo Crimidesa (Burgos, Spain), Minera de Santa Marta (Toledo, Spain), Sulquisa (Madrid, Spain), Chengdu Sanlian Tianquan Chemical (Tianquan County, Sichuan, China), Hongze Yinzhu Chemical Group (Hongze District, Jiangsu, China), (Shanxi, China), Sichuan Province Chuanmei Mirabilite (, Dongpo District, Meishan, Sichuan, China), and Kuchuksulphat JSC (Altai Krai, Siberia, Russia). Anhydrous sodium sulfate occurs in arid environments as the mineral thenardite. It slowly turns to mirabilite in damp air. Sodium sulfate is also found as glauberite, a calcium sodium sulfate mineral. Both minerals are less common than mirabilite. Chemical industry About one third of the world's sodium sulfate is produced as a by-product of other processes in the chemical industry. Most of this production is chemically inherent to the primary process, and only marginally economical. Through efforts of the industry, therefore, sodium sulfate production as a by-product is declining. The most important chemical sodium sulfate production is during hydrochloric acid production, either from sodium chloride (salt) and sulfuric acid, in the Mannheim process, or from sulfur dioxide in the Hargreaves process. The resulting sodium sulfate from these processes is known as salt cake. Mannheim: 2 NaCl + H2SO4 → Na2SO4 + 2 HCl. Hargreaves: 4 NaCl + 2 SO2 + O2 + 2 H2O → 2 Na2SO4 + 4 HCl. The second major production of sodium sulfate is through the processes where surplus sodium hydroxide is neutralised by sulfuric acid to obtain sodium sulfate (Na2SO4) by using copper sulfate (CuSO4) (as historically applied on a large scale in the production of rayon by using copper(II) hydroxide). This method is also a regularly applied and convenient laboratory preparation: 2 NaOH + H2SO4 → Na2SO4 + 2 H2O,
ΔH = -112.5 kJ (highly exothermic). In the laboratory it can also be synthesized from the reaction between sodium bicarbonate and magnesium sulfate, by precipitating magnesium carbonate. However, as commercial sources are readily available, laboratory synthesis is not practised often. Formerly, sodium sulfate was also a by-product of the manufacture of sodium dichromate, where sulfuric acid is added to sodium chromate solution forming sodium dichromate, or subsequently chromic acid. Alternatively, sodium sulfate is or was formed in the production of lithium carbonate, chelating agents, resorcinol, ascorbic acid, silica pigments, nitric acid, and phenol. Bulk sodium sulfate is usually purified via the decahydrate form, since the anhydrous form tends to attract iron compounds and organic compounds. The anhydrous form is easily produced from the hydrated form by gentle warming. Major sodium sulfate by-product producers of 50–80 kt/a in 2006 include Elementis Chromium (chromium industry, Castle Hayne, NC, US), Lenzing AG (200 kt/a, rayon industry, Lenzing, Austria), Adisseo (formerly Rhodia, methionine industry, Les Roches-Roussillon, France), Elementis (chromium industry, Stockton-on-Tees, UK), Shikoku Chemicals (Tokushima, Japan) and Visko-R (rayon industry, Russia). Applications Commodity industries With US pricing at $30 per tonne in 1970, up to $90 per tonne for salt cake quality, and $130 for better grades, sodium sulfate is a very cheap material. The largest use is as filler in powdered home laundry detergents, consuming approximately 50% of world production. This use is waning as domestic consumers are increasingly switching to compact or liquid detergents that do not include sodium sulfate. Papermaking Another formerly major use for sodium sulfate, notably in the US and Canada, is in the Kraft process for the manufacture of wood pulp. Organics present in the "black liquor" from this process are burnt to produce heat, needed to drive the reduction of sodium sulfate to sodium sulfide. However, due to advances in the thermal efficiency of the Kraft recovery process in the early 1960s, more efficient sulfur recovery was achieved and the need for sodium sulfate makeup was drastically reduced. Hence, the use of sodium sulfate in the US and Canadian pulp industry declined from 1,400,000 tonnes per year in 1970 to only approximately 150,000 tonnes in 2006. Glassmaking The glass industry provides another significant application for sodium sulfate, the second largest application in Europe. Sodium sulfate is used as a fining agent, to help remove small air bubbles from molten glass. It fluxes the glass, and prevents scum formation of the glass melt during refining. From 1970 to 2006, the glass industry in Europe consumed a stable 110,000 tonnes annually. Textiles Sodium sulfate is important in the manufacture of textiles, particularly in Japan, where this is the largest application. Sodium sulfate is added to increase the ionic strength of the solution and so helps in "levelling", i.e. reducing negative electrical charges on textile fibres so that dyes can penetrate evenly (see the theory of the diffuse double layer (DDL) elaborated by Gouy and Chapman). Unlike the alternative sodium chloride, it does not corrode the stainless steel vessels used in dyeing. In 2006, this application in Japan and the US consumed approximately 100,000 tonnes. Food industry Sodium sulfate is used as a diluent for food colours. It is known as E number additive E514.
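As a rough illustration of the Mannheim stoichiometry given in the production section above, the sketch below estimates the masses of salt cake and hydrochloric acid obtained per tonne of sodium chloride. The molar masses are standard values and the one-tonne feed is an arbitrary example, neither of which comes from this article:

# Mannheim process: 2 NaCl + H2SO4 -> Na2SO4 + 2 HCl
M_NACL, M_NA2SO4, M_HCL = 58.44, 142.04, 36.46  # g/mol, standard values (assumed)

feed_nacl_t = 1.0  # tonnes of NaCl, arbitrary example
na2so4_t = feed_nacl_t * M_NA2SO4 / (2 * M_NACL)  # 1 mol Na2SO4 per 2 mol NaCl
hcl_t = feed_nacl_t * M_HCL / M_NACL              # 2 mol HCl per 2 mol NaCl
print(f"{na2so4_t:.2f} t Na2SO4 and {hcl_t:.2f} t HCl per tonne of NaCl")  # ≈ 1.22 t and 0.62 t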
Heat storage The high heat-storage capacity in the phase change from solid to liquid, and the advantageous phase change temperature of makes this material especially appropriate for storing low-grade solar heat for later release in space heating applications. In some applications the material is incorporated into thermal tiles that are placed in an attic space, while in other applications, the salt is incorporated into cells surrounded by solar–heated water. The phase change allows a substantial reduction in the mass of the material required for effective heat storage (the heat of fusion of sodium sulfate decahydrate is 82 kJ/mol or 252 kJ/kg), with the further advantage of a consistency of temperature as long as sufficient material in the appropriate phase is available. For cooling applications, a mixture with common sodium chloride salt (NaCl) lowers the melting point to . The heat of fusion of NaCl·Na2SO4·10H2O, is actually increased slightly to 286 kJ/kg. Small-scale applications In the laboratory, anhydrous sodium sulfate is widely used as an inert drying agent, for removing traces of water from organic solutions. It is more efficient, but slower-acting, than the similar agent magnesium sulfate. It is only effective below about , but it can be used with a variety of materials since it is chemically fairly inert. Sodium sulfate is added to the solution until the crystals no longer clump together; the two video clips (see above) demonstrate how the crystals clump when still wet, but some crystals flow freely once a sample is dry. Glauber's salt, the decahydrate, is used as a laxative. It is effective for the removal of certain drugs, such as paracetamol (acetaminophen) from the body; thus it can be used after an overdose. In 1953, sodium sulfate was proposed for heat storage in passive solar heating systems. This takes advantage of its unusual solubility properties, and the high heat of crystallisation (78.2 kJ/mol). Other uses for sodium sulfate include de-frosting windows, starch manufacture, as an additive in carpet fresheners, and as an additive to cattle feed. At least one company, Thermaltake, makes a laptop computer chill mat (iXoft Notebook Cooler) using sodium sulfate decahydrate inside a quilted plastic pad. The material slowly turns to liquid and recirculates, equalizing laptop temperature and acting as an insulation. Safety Although sodium sulfate is generally regarded as non-toxic, it should be handled with care. The dust can cause temporary asthma or eye irritation; this risk can be prevented by using eye protection and a paper mask. Transport is not limited, and no Risk Phrase or Safety Phrase applies.
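To put the heat-of-fusion figure quoted above into perspective, the following sketch compares latent-heat storage in Glauber's salt with sensible-heat storage in water. The amount of heat to be stored, the 20 K temperature swing and water's specific heat of 4.18 kJ/(kg·K) are assumptions chosen for illustration, not values from this article:

HEAT_OF_FUSION = 252.0      # kJ/kg for Na2SO4·10H2O, as quoted above
WATER_SPECIFIC_HEAT = 4.18  # kJ/(kg·K), standard value (assumed)

target_energy_kj = 100_000  # 100 MJ of low-grade heat to store; arbitrary example
mass_glauber = target_energy_kj / HEAT_OF_FUSION
mass_water = target_energy_kj / (WATER_SPECIFIC_HEAT * 20)  # water cycled over an assumed 20 K swing

print(f"Glauber's salt needed: {mass_glauber:.0f} kg")         # ≈ 397 kg
print(f"Water needed over a 20 K swing: {mass_water:.0f} kg")  # ≈ 1196 kg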
Physical sciences
Salts
null
795199
https://en.wikipedia.org/wiki/Breast%20milk
Breast milk
Breast milk (sometimes spelled as breastmilk) or mother's milk is milk produced by the mammary glands in the breasts of women. Breast milk is the primary source of nutrition for newborn infants, comprising fats, proteins, carbohydrates, and a varying composition of minerals and vitamins. Breast milk also contains substances that help protect an infant against infection and inflammation, such as symbiotic bacteria and other microorganisms and immunoglobulin A, whilst also contributing to the healthy development of the infant's immune system and gut microbiome. Use and methods of consumption The World Health Organization (WHO) and UNICEF recommend exclusive breastfeeding with breast milk for the first six months of an infant’s life. This period is followed by the incorporation of nutritionally adequate and safe complementary solid foods at six months, a stage when an infant’s nutrient and energy requirements start to surpass what breast milk alone can provide. Continuation of breastfeeding is recommended up to two years of age. This guidance is due to the protective benefits of breast milk, which include less infections such as diarrhea—a protection not afforded by formula milk. Breast milk constitutes the sole source of nutrition for exclusively breastfed newborns, supplying all necessary nutrients for infants up to six months. Beyond this age, breast milk continues to be a source of energy for children up to two years old, providing over half of a child's energy needs up to the age of one and a third of the needs between one and two years of age. Despite the capability of most newborns to latch onto the mother's breast within an hour of birth, globally, sixty percent of infants are not breastfed within this crucial first hour. Breastfeeding within the first hour of life protects the newborn from acquiring infections and reduces risk of death during the neonatal period. Alternatively, breast milk can be expressed using a breast pump and administered via baby bottle, cup, spoon, supplementation drip system, or nasogastric tube. This method is especially beneficial for preterm babies who may initially lack the ability to suck effectively. Using cups to feed expressed breast milk and other supplements results in improved breastfeeding outcomes in terms of both duration and extent, compared with traditional bottle and tube feeding. For mothers unable to produce an adequate supply of breast milk, the use of pasteurized donor human breast milk is a viable option. In the absence of pasteurized donor milk, commercial formula milk is recommended as a secondary alternative. However, unpasteurized breast milk from a source other than the infant's mother, particularly when shared informally, carries the risk of vertically transmitting bacteria, viruses (such as HIV), and other microorganisms from the donor to the infant, rendering it an unsafe alternative. Benefits Breastfeeding offers health benefits to mother and child even after infancy. These benefits include proper heat production and adipose tissue development, a 73% decreased risk of sudden infant death syndrome, increased intelligence, decreased likelihood of contracting middle ear infections, cold and flu resistance, a tiny decrease in the risk of childhood leukemia, lower risk of childhood onset diabetes, decreased risk of asthma and eczema, decreased dental problems, decreased risk of obesity later in life, and a decreased risk of developing psychological disorders, including in adopted children. 
In addition, feeding an infant breast milk is associated with lower insulin levels and higher leptin levels compared with feeding an infant powdered formula. Many of the infection-fighting and immune system related benefits are associated with human milk oligosaccharides. Breastfeeding also provides health benefits for the mother. It assists the uterus in returning to its pre-pregnancy size and reduces post-partum bleeding, through the production of oxytocin (see Production). Breastfeeding can also reduce the risk of breast cancer later in life. Lactation may also reduce the risk of both types of diabetes for both mother and infant. Lactation may specifically protect the infant from developing Type 2 diabetes, as studies have shown that bioactive ingredients in human breast milk could prevent excess weight gain during childhood by contributing to a feeling of energy and satiety. The lower risk of child-onset diabetes may be more applicable to infants born to diabetic mothers. The reason is that while breastfeeding for at least the first six months of life minimizes the risk of type 1 diabetes occurring in the infant, inadequate breastfeeding of an infant prenatally exposed to diabetes was associated with a higher risk of the child developing diabetes later. There are arguments that breastfeeding may contribute to protective effects against the development of type 1 diabetes because the alternative of bottle-feeding may expose infants to unhygienic feeding conditions. Though it is now almost universally recommended, in some countries during the 1950s the practice of breastfeeding went through a period where it was out of vogue and the use of infant formula was considered superior to breast milk. However, it has since been universally recognized that there is no commercial formula that can adequately substitute for breast milk. In addition to the appropriate amounts of carbohydrate, protein, and fat, breast milk provides vitamins, minerals, digestive enzymes, and hormones. Breast milk also contains antibodies and lymphocytes from the mother that may help the baby resist infections. The immune function of breast milk is individualized, as the mother, through her touching and taking care of the baby, comes into contact with pathogens that colonize the baby, and, as a consequence, her body makes the appropriate antibodies and immune cells. At around four months of age, the internal iron supplies of the infant, held in the hepatic cells of the liver, are exhausted. The American Academy of Pediatrics recommends that an iron supplement be introduced at this time. Other health organisations, such as the NHS in the UK, have no such recommendation. Breast milk contains less iron than formula, but the iron is more bioavailable as lactoferrin, which is safer for mothers and children than ferrous sulphate. Both the AAP and the NHS recommend vitamin D supplementation for breastfed infants. Vitamin D can be synthesised by the infant via exposure to sunlight; however, many infants are deficient due to being kept indoors or living in areas with insufficient sunlight. Formula is supplemented with vitamin D for this reason. Production Under the influence of the hormones prolactin and oxytocin, women produce milk after childbirth to feed the baby. The initial milk produced is referred to as colostrum, which is high in the immunoglobulin IgA, which coats the gastrointestinal tract. This helps to protect the newborn until its own immune system is functioning properly.
It also creates a mild laxative effect, expelling meconium and helping to prevent the build-up of bilirubin (a contributory factor in jaundice). Male lactation can occur; the production or administration of the hormone prolactin is necessary to induce lactation (see male lactation). Actual inability to produce enough milk is rare, with studies showing that mothers from malnourished regions still produce amounts of milk of similar quality to that of mothers in developed countries. There are many reasons a mother may not produce enough breast milk. Some of the most common reasons are an improper latch (i.e., the baby does not connect efficiently with the nipple), not nursing or pumping enough to meet supply, certain medications (including estrogen-containing hormonal contraceptives), illness, and dehydration. A rarer reason is Sheehan's syndrome, also known as postpartum hypopituitarism, which is associated with prolactin deficiency and may require hormone replacement. The amount of milk produced depends on how often the mother is nursing and/or pumping: the more the mother nurses her baby or pumps, the more milk is produced. It is beneficial to nurse when the baby wants to nurse rather than on a schedule. A Cochrane review came to the conclusion that a greater volume of milk is expressed whilst listening to relaxing audio during breastfeeding, along with warming and massaging of the breast prior to and during feeding. A greater volume of milk expressed can also be attributed to instances where the mother starts pumping milk sooner, even if the infant is unable to breastfeed. Sodium concentration is higher in hand-expressed milk, when compared with the use of manual and electric pumps, and fat content is higher when the breast has been massaged, in conjunction with listening to relaxing audio. This may be important for low birthweight infants. If pumping, it is helpful to have an electric, high-grade pump so that all of the milk ducts are stimulated. Galactagogues increase milk supply, although even herbal variants carry risks. Non-pharmaceutical methods should be tried first, such as pumping out the mother's breast milk supply often, warming or massaging the breast, as well as starting milk pumping earlier after the child is born if they cannot drink milk at the breast. Composition Breast milk contains fats, proteins, carbohydrates (including lactose and human milk oligosaccharides), and a varying composition of minerals and vitamins. The composition changes over a single feed as well as over the period of lactation. Changes are particularly pronounced in marsupials. During the first few days after delivery, the mother produces colostrum. This is a thin yellowish fluid that is the same fluid that sometimes leaks from the breasts during pregnancy. It is rich in protein and antibodies that provide passive immunity to the baby (the baby's immune system is not fully developed at birth). Colostrum also helps the newborn's digestive system to grow and function properly. Colostrum will gradually change to become mature milk. In the first 3–4 days it will appear thin and watery and will taste very sweet; later, the milk will be thicker and creamier. Human milk quenches the baby's thirst and hunger and provides the proteins, sugar, minerals, and antibodies that the baby needs. In the 1980s and 1990s, lactation professionals (De Cleats) used to make a differentiation between foremilk and hindmilk. But this differentiation causes confusion as there are not two types of milk. 
Instead, as a baby breastfeeds, the fat content very gradually increases, with the milk becoming fattier and fattier over time. The level of Immunoglobulin A (IgA) in breast milk remains high from day 10 until at least 7.5 months post-partum. Human milk contains 0.8–0.9% protein, 4.5% fat, 7.1% carbohydrates, and 0.2% ash (minerals). Carbohydrates are mainly lactose; several lactose-based oligosaccharides (also called human milk oligosaccharides) have been identified as minor components. The fat fraction contains specific triglycerides of palmitic and oleic acid (O-P-O triglycerides), and also lipids with trans bonds (see: trans fat). These trans lipids are vaccenic acid and conjugated linoleic acid (CLA), accounting for up to 6% of the human milk fat. The principal proteins are alpha-lactalbumin, lactoferrin (apo-lactoferrin), IgA, lysozyme, and serum albumin. In an acidic environment such as the stomach, alpha-lactalbumin unfolds into a different form and binds oleic acid to form a complex called HAMLET that kills tumor cells. This is thought to contribute to the protection of breastfed babies against cancer. Non-protein nitrogen-containing compounds, making up 25% of the milk's nitrogen, include urea, uric acid, creatine, creatinine, amino acids, and nucleotides. Breast milk has circadian variations; some of the nucleotides are more commonly produced during the night, others during the day. Mother's milk has been shown to supply endocannabinoids (the natural neurotransmitters that cannabis simulates) 2-arachidonoylglycerol, anandamide, oleoylethanolamide, palmitoylethanolamide, N-arachidonoyl glycine, eicosapentaenoyl ethanolamide, docosahexaenoyl ethanolamide, N-palmitoleoyl-ethanolamine, dihomo-γ-linolenoylethanolamine, N-stearoylethanolamine, prostaglandin F2alpha ethanolamides and prostaglandin F2 ethanolamides, as well as palmitic acid esters of hydroxy-stearic acids (PAHSAs). They may act as an appetite stimulant, but they also regulate appetite so infants do not eat too much. That may be why formula-fed babies have a higher caloric intake than breastfed babies. Breast milk is not sterile but has its own microbiome, containing as many as 600 different species of bacteria, including the beneficial Bifidobacterium breve, B. adolescentis, B. longum, B. bifidum, and B. dentium, which contribute to colonization of the infant gut. As a result, it can be defined as a probiotic food, depending on how one defines "probiotic". Breast milk also contains a variety of somatic cells and stem cells, and the proportion of each cell type differs from individual to individual. The somatic cells are mainly lactocytes and myoepithelial cells derived from the mother's mammary glands. The stem cells found in human breast milk have been shown to be able to differentiate into a variety of other cells involved in the production of bodily tissues, and a small proportion of these cross from the nursing infant's intestinal tract into the bloodstream to reach certain organs and transform into fully functional cells. Because of its diverse population of cells and multifarious functions, researchers have argued that breast milk should be considered a living tissue. Breast milk contains a unique type of sugars, human milk oligosaccharides (HMOs), which were not present in traditional infant formula; however, they are increasingly being added by many manufacturers. HMOs are not digested by the infant but help to make up the intestinal flora.
They act as decoy receptors that block the attachment of disease causing pathogens, which may help to prevent infectious diseases. They also alter immune cell responses, which may benefit the infant. As of 2015 more than a hundred different HMOs have been identified; both the number and composition vary between women and each HMO may have a distinct functionality. The breast milk of diabetic mothers has been shown to have a different composition from that of non-diabetic mothers. It may contain elevated levels of glucose and insulin and decreased polyunsaturated fatty acids. A dose-dependent effect of diabetic breast milk on increasing language delays in infants has also been noted, although doctors recommend that diabetic mothers breastfeed despite this potential risk. Women breastfeeding should consult with their physician regarding substances that can be unwittingly passed to the infant via breast milk, such as alcohol, viruses (HIV or HTLV-1), or medications. Even though most infants infected with HIV contract the disease from breastfeeding, most infants that are breastfed by their HIV positive mothers never contract the disease. While this paradoxical phenomenon suggests that the risk of HIV transmission between an HIV positive mother and her child via breastfeeding is small, studies have also shown that feeding infants with breast milk of HIV-positive mothers can actually have a preventative effect against HIV transmission between the mother and child. This inhibitory effect against the infant contracting HIV is likely due to unspecified factors exclusively present in breast milk of HIV-positive mothers. Most women that do not breastfeed use infant formula, but breast milk donated by volunteers to human milk banks can be obtained by prescription in some countries. In addition, research has shown that women who rely on infant formula could minimize the gap between the level of immunity protection and cognitive abilities a breastfed child benefits from versus the degree to which a bottle-fed child benefits from them. This can be done by supplementing formula-fed infants with bovine milk fat globule membranes (MFGM) meant to mimic the positive effects of the MFGMs which are present in human breast milk. Storage of expressed breast milk Expressed breast milk can be stored. Lipase may cause thawed milk to taste soapy or rancid due to milk fat breakdown. It is still safe to use, and most babies will drink it. Scalding it will prevent rancid taste at the expense of antibodies. It should be stored with airtight seals. Some plastic bags are designed for storage periods of less than 72 hours. Others can be used for up to 12 months if frozen. This table describes safe storage time limits. Comparison to other milks All mammalian species produce milk, but the composition of milk for each species varies widely and other kinds of milk are often very different from human breast milk. As a rule, the milk of mammals that nurse frequently (including human babies) is less rich, or more watery, than the milk of mammals whose young nurse less often. Human milk is noticeably thinner and sweeter than cow's milk. Whole cow's milk contains too little iron, retinol, vitamin E, vitamin C, vitamin D, unsaturated fats or essential fatty acids for human babies. Whole cow's milk also contains too much protein, sodium, potassium, phosphorus and chloride which may put a strain on an infant's immature kidneys. 
In addition, the proteins, fats and calcium in whole cow's milk are more difficult for an infant to digest and absorb than the ones in breast milk. The composition of marsupial and monotreme milk contains essential nutrients, growth factors and immunological properties to support the development of joeys and puggles. Note: Milk is generally fortified with vitamin D in the U.S. and Canada. Non-fortified milk contains only 2 IU per 3.5 oz. Effects of medications and other substances on milk content Almost all medicines, or drugs, pass into breastmilk in small amounts by a concentration gradient. The amount of the drug bound by maternal plasma proteins, the size of the drug molecule, the pH and/or pKa of the drug, and the lipophilicity of the drug all determine whether and how much of the drug will pass into breastmilk. Medications that are mostly non-protein bound, low in molecular weight, and highly lipid-soluble are more likely to enter the breast milk in larger quantities. Some drugs have no effect on the baby and can be used whilst breastfeeding, while other medications may be dangerous and harmful to the infant. Some medications considered generally safe for use by a breastfeeding mother, with a doctor’s or pharmacist’s advice, include simple analgesics or pain killers such as paracetamol/acetaminophen, anti-hypertensives such as the ACE-inhibitors enalapril and captopril, anti-depressants of the SSRI and SNRI classes, and medications for gastroesophageal reflux such as omeprazole and ranitidine. Conversely, there are medications that are known to be toxic to the baby and thus should not be used in breastfeeding mothers, such as chemotherapeutic agents which are cytotoxic like cyclosporine, immunosuppressants like methotrexate, amiodarone, or lithium. Furthermore, drugs of abuse, such as cocaine, amphetamines, heroin, and marijuana cause adverse effects on the infant during breastfeeding. Adverse effects include seizures, tremors, restlessness, and diarrhea. To reduce infant exposure to medications used by the mother, use topical therapy or avoid taking the medication during breastfeeding times when possible. Hormonal products and combined oral contraceptives should be avoided during the early postpartum period as they can interfere with lactation. There are some medications that may stimulate the production of breast milk. These medications may be beneficial in cases where women with hypothyroidism may be unable to produce milk. A Cochrane review looked at the drug domperidone (10 mg three times per day) with results showing a significant increase in volume of milk produced over a period of one to two weeks. However, another review concluded little evidence that use of domperidone and metoclopramide to enhance milk supply works. Instead, non-pharmacological approaches such as support and more frequent breastfeeding may be more efficacious. Finally, there are other substances besides medications that may appear in breast milk. Alcohol use during pregnancy carries a significant risk of serious birth defects, but consuming alcohol after the birth of the infant is considered safe. High caffeine intake by breastfeeding mothers may cause their infants to become irritable or have trouble sleeping. A meta-analysis has shown that breastfeeding mothers who smoke expose their infants to nicotine, which may cause respiratory illnesses, including otitis media in the nursing infant. 
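Drawing on the macronutrient figures quoted in the composition section above (roughly 0.9% protein, 4.5% fat and 7.1% carbohydrate), the energy density of human milk can be estimated with a short sketch. The Atwater factors of 4 kcal/g for protein and carbohydrate and 9 kcal/g for fat are standard nutritional assumptions and do not come from this article:

# Approximate energy density of mature human milk from its macronutrient composition.
composition_g_per_100g = {"protein": 0.9, "fat": 4.5, "carbohydrate": 7.1}  # as quoted above
atwater_kcal_per_g = {"protein": 4, "fat": 9, "carbohydrate": 4}            # standard assumption

kcal_per_100g = sum(composition_g_per_100g[k] * atwater_kcal_per_g[k] for k in composition_g_per_100g)
print(f"roughly {kcal_per_100g:.0f} kcal per 100 g")  # about 72 kcal, broadly consistent with
# commonly cited figures of 65-70 kcal per 100 mL for mature human milk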
Market There is a commercial market for human breast milk, both in the form of a wet nurse service and as a milk product. As a product, breast milk is exchanged by human milk banks, as well as directly between milk donors and customers as mediated by websites on the internet. Human milk banks generally have standardized measures for screening donors and storing the milk, sometimes even offering pasteurization, while milk donors on websites vary in regard to these measures. A study in 2013 came to the conclusion that 74% of breast milk samples from providers found from websites were colonized with gram-negative bacteria or had more than 10,000 colony-forming units/mL of aerobic bacteria. Bacterial growth happens during transit. According to the FDA, bad bacteria in food at room temperature can double every 20 minutes. Human milk is considered to be healthier than cow's milk and infant formula when it comes to feeding an infant in the first six months of life, but only under extreme situations do international health organizations support feeding an infant breast milk from a healthy wet nurse rather than that of its biological mother. One reason is that the unregulated breast milk market is fraught with risks, such as drugs of abuse and prescription medications being present in donated breast milk. The transmission of these substances through breast milk can do more harm than good when it comes to the health outcomes of the infant recipient. Fraud In the United States, the online marketplace for breast milk is largely unregulated and the high premium has encouraged food fraud. Human breast milk may be diluted with other liquids to increase volume including cow’s milk, soy milk, and water, thus undermining its health benefits. A 2015 CBS article cites an editorial led by Dr. Sarah Steele in the Journal of the Royal Society of Medicine, in which they say that "health claims do not stand up clinically and that raw human milk purchased online poses many health risks." CBS found a study from the Center for Biobehavioral Health at Nationwide Children's Hospital in Columbus that "found that 11 out of 102 breast milk samples purchased online were actually blended with cow's milk." The article also explains that milk purchased online may be improperly sanitized or stored, so it may contain food-borne illness and infectious diseases such as hepatitis and HIV. Consumption by adults Restaurants and recipes A minority of people, including restaurateurs Hans Lochen of Switzerland and Daniel Angerer of Austria, who operates a restaurant in New York City, have used human breast milk, or at least advocated its use, as a substitute for cow's milk in dairy products and food recipes. An Icecreamist in London's Covent Garden started selling an ice cream named Baby Gaga in February 2011. Each serving cost £14. All the milk was donated by a Mrs Hiley who earned £15 for every 10 ounces and called it a "great recession beater". The ice cream sold out on its first day. Despite the success of the new flavour, the Westminster Council officers removed the product from the menu to make sure that it was, as they said, "fit for human consumption." Tammy Frissell-Deppe, a family counsellor specialized in attachment parenting, published a book, titled A Breastfeeding Mother's Secret Recipes, providing a lengthy compilation of detailed food and beverage recipes containing human breast milk. 
Human breast milk is not produced or distributed industrially or commercially, because the use of human breast milk as an adult food is considered unusual to the majority of cultures around the world, and most disapprove of such a practice. In Costa Rica, there have been trials to produce human cheese, and custard from human milk, as an alternative to weaning. Bodybuilders While there is no scientific evidence that shows that breast milk is advantageous for adults, according to several 2015 news sources, breast milk is being used by bodybuilders for its nutritional value. In a February 2015 ABC News article, one former competitive body builder said, "It isn't common, but I've known people who have done this. It's certainly talked about quite a bit on the bodybuilding forums on the Internet." Calling bodybuilders "a strange breed of individuals", he said, "Even if this type of thing is completely unsupported by research, they're prone to gym lore and willing to give it a shot if there is any potential effect." At the time the article was written, in the U.S., the price of breast milk procured from milk banks that pasteurize the milk, and have expensive quality and safety controls, was about , and the price in the alternative market online, bought directly from mothers, ranges from , compared to cow's milk at about . Erotic lactation For sexual purposes, some couples have decided to induce lactation outside a pregnancy through a practice called "Erotic lactation". Breast milk contamination Breast milk is oftentimes used as an environmental bioindicator given its ability to accumulate certain chemicals, including organochlorine pesticides. Research has found that certain organic contaminants such as PCBs, organochlorine pesticides, PCDDs, PBDEs, and DDT can contaminate breastmilk. According to research done in 2002, the levels of the organochlorine pesticides, PCBs, and dioxins have declined in breast milk in countries where these chemicals have been banned or otherwise regulated, while levels of PBDEs are rising. Pesticide contamination in breastmilk Pesticides and other toxic substances bioaccumulate; i.e., creatures higher up the food chain will store more of them in their body fat. This is an issue in particular for the Inuit, whose traditional diet is predominantly meat. Studies are looking at the effects of polychlorinated biphenyls and persistent organic pollutants in the body; the breast milk of Inuit mothers is extraordinarily high in toxic compounds. The CDC has provided some resources for breastfeeding mothers to reference for safe medication use, including LactMed, Mother to Baby, and The InfantRisk Center. Contamination effects of organochlorine pesticides on infants When a mother is exposed to organochlorine pesticides (OCP's), her infant can be exposed to these OCP's through breast milk intake. This result is supported by a study done in India, which revealed that in each lactation period there is a loss of OCPs from the mother's body involved in the nursing of their children. A longitudinal study was conducted to assess pesticide residues in human breast milk samples and evaluate the risk-exposure of infants to these pesticides from consumption of mother’s milk in Ethiopia. The estimated daily intake (EDI) of infants in the present study was above provisional tolerable daily intake (PTDI) during the first month of breastfeeding which indicates that there is a health risk for infants consuming breast milk at an early stage of breastfeeding in the study areas. 
Based on these studies, the exposure of women during pregnancy to these OCPs may lead to various health problems for the fetus, such as low birth weight, thyroid hormone disturbance, and neurodevelopmental delay.
Biology and health sciences
Health and fitness: General
Health
795334
https://en.wikipedia.org/wiki/Dislocation
Dislocation
In materials science, a dislocation or Taylor's dislocation is a linear crystallographic defect or irregularity within a crystal structure that contains an abrupt change in the arrangement of atoms. The movement of dislocations allows atoms to slide over each other at low stress levels and is known as glide or slip. The crystalline order is restored on either side of a glide dislocation but the atoms on one side have moved by one position. The crystalline order is not fully restored with a partial dislocation. A dislocation defines the boundary between slipped and unslipped regions of material and as a result, must either form a complete loop, intersect other dislocations or defects, or extend to the edges of the crystal. A dislocation can be characterised by the distance and direction of movement it causes to atoms which is defined by the Burgers vector. Plastic deformation of a material occurs by the creation and movement of many dislocations. The number and arrangement of dislocations influences many of the properties of materials. The two primary types of dislocations are sessile dislocations which are immobile and glissile dislocations which are mobile. Examples of sessile dislocations are the stair-rod dislocation and the Lomer–Cottrell junction. The two main types of mobile dislocations are edge and screw dislocations. Edge dislocations can be visualized as being caused by the termination of a plane of atoms in the middle of a crystal. In such a case, the surrounding planes are not straight, but instead bend around the edge of the terminating plane so that the crystal structure is perfectly ordered on either side. This phenomenon is analogous to half of a piece of paper inserted into a stack of paper, where the defect in the stack is noticeable only at the edge of the half sheet. The theory describing the elastic fields of the defects was originally developed by Vito Volterra in 1907. In 1934, Egon Orowan, Michael Polanyi and G. I. Taylor proposed that the low stresses observed to produce plastic deformation compared to theoretical predictions at the time could be explained in terms of the theory of dislocations. History The theory describing the elastic fields of the defects was originally developed by Vito Volterra in 1907. The term 'dislocation' referring to a defect on the atomic scale was coined by G. I. Taylor in 1934. Prior to the 1930s, one of the enduring challenges of materials science was to explain plasticity in microscopic terms. A simplistic attempt to calculate the shear stress at which neighbouring atomic planes slip over each other in a perfect crystal suggests that, for a material with shear modulus G, the shear strength is given approximately by τ ≈ G/(2π). The shear modulus in metals is typically within the range 20 000 to 150 000 MPa, indicating a predicted shear stress of 3 000 to 24 000 MPa. This was difficult to reconcile with measured shear stresses in the range of 0.5 to 10 MPa. In 1934, Egon Orowan, Michael Polanyi and G. I. Taylor independently proposed that plastic deformation could be explained in terms of the theory of dislocations. Dislocations can move if the atoms from one of the surrounding planes break their bonds and rebond with the atoms at the terminating edge. In effect, a half plane of atoms is moved in response to shear stress by breaking and reforming a line of bonds, one (or a few) at a time. The energy required to break a row of bonds is far less than that required to break all the bonds on an entire plane of atoms at once.
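The size of this discrepancy can be illustrated numerically. A minimal Python sketch, assuming the approximate relation τ ≈ G/(2π) and the shear-modulus range quoted above:

# Rough illustration: theoretical shear strength of a perfect crystal
# compared with typical measured flow stresses (values quoted in the text).
import math

shear_moduli_mpa = [20_000, 150_000]        # typical range of G for metals, MPa

for G in shear_moduli_mpa:
    tau_theoretical = G / (2 * math.pi)     # approximate perfect-crystal strength
    print(f"G = {G:>7} MPa  ->  tau_theory ~ {tau_theoretical:,.0f} MPa")

# Measured shear stresses for plastic flow are only ~0.5-10 MPa, several orders
# of magnitude lower; dislocation glide accounts for the gap.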
Even this simple model of the force required to move a dislocation shows that plasticity is possible at much lower stresses than in a perfect crystal. In many materials, particularly ductile materials, dislocations are the "carrier" of plastic deformation, and the energy required to move them is less than the energy required to fracture the material. Mechanisms A dislocation is a linear crystallographic defect or irregularity within a crystal structure which contains an abrupt change in the arrangement of atoms. The crystalline order is restored on either side of a dislocation but the atoms on one side have moved or slipped. Dislocations define the boundary between slipped and unslipped regions of material and cannot end within a lattice and must either extend to a free edge or form a loop within the crystal. A dislocation can be characterised by the distance and direction of movement it causes to atoms in the lattice which is called the Burgers vector. The Burgers vector of a dislocation remains constant even though the shape of the dislocation may change. A variety of dislocation types exist, with mobile dislocations known as glissile and immobile dislocations called sessile. The movement of mobile dislocations allow atoms to slide over each other at low stress levels and is known as glide or slip. The movement of dislocations may be enhanced or hindered by the presence of other elements within the crystal and over time, these elements may diffuse to the dislocation forming a Cottrell atmosphere. The pinning and breakaway from these elements explains some of the unusual yielding behavior seen with steels. The interaction of hydrogen with dislocations is one of the mechanisms proposed to explain hydrogen embrittlement. Dislocations behave as though they are a distinct entity within a crystalline material where some types of dislocation can move through the material bending, flexing and changing shape and interacting with other dislocations and features within the crystal. Dislocations are generated by deforming a crystalline material such as metals, which can cause them to initiate from surfaces, particularly at stress concentrations or within the material at defects and grain boundaries. The number and arrangement of dislocations give rise to many of the properties of metals such as ductility, hardness and yield strength. Heat treatment, alloy content and cold working can change the number and arrangement of the dislocation population and how they move and interact in order to create useful properties. Generating dislocations When metals are subjected to cold working (deformation at temperatures which are relatively low as compared to the material's absolute melting temperature, i.e., typically less than ) the dislocation density increases due to the formation of new dislocations. The consequent increasing overlap between the strain fields of adjacent dislocations gradually increases the resistance to further dislocation motion. This causes a hardening of the metal as deformation progresses. This effect is known as strain hardening or work hardening. Dislocation density in a material can be increased by plastic deformation by the following relationship: . Since the dislocation density increases with plastic deformation, a mechanism for the creation of dislocations must be activated in the material. 
Three mechanisms for dislocation formation are homogeneous nucleation, grain boundary initiation, and interfaces between the lattice and the surface, precipitates, dispersed phases, or reinforcing fibers. Homogeneous nucleation The creation of a dislocation by homogeneous nucleation is a result of the rupture of the atomic bonds along a line in the lattice. A plane in the lattice is sheared, resulting in two oppositely faced half planes or dislocations. These dislocations move away from each other through the lattice. Since homogeneous nucleation forms dislocations from perfect crystals and requires the simultaneous breaking of many bonds, the energy required for homogeneous nucleation is high. For instance, the stress required for homogeneous nucleation in copper has been shown to be σ/G ≈ 7.4 × 10⁻², where G is the shear modulus of copper (46 GPa). Solving for σ, we see that the required stress is 3.4 GPa, which is very close to the theoretical strength of the crystal. Therefore, in conventional deformation homogeneous nucleation requires a concentrated stress, and is very unlikely. Grain boundary initiation and interface interaction are more common sources of dislocations. Irregularities at the grain boundaries in materials can produce dislocations which propagate into the grain. The steps and ledges at the grain boundary are an important source of dislocations in the early stages of plastic deformation. Frank–Read source The Frank–Read source is a mechanism that is able to produce a stream of dislocations from a pinned segment of a dislocation. Stress bows the dislocation segment, expanding until it creates a dislocation loop that breaks free from the source. Surfaces The surface of a crystal can produce dislocations in the crystal. Due to the small steps on the surface of most crystals, stress in some regions on the surface is much larger than the average stress in the lattice. This stress leads to dislocations. The dislocations are then propagated into the lattice in the same manner as in grain boundary initiation. In single crystals, the majority of dislocations are formed at the surface. The dislocation density 200 micrometres into the surface of a material has been shown to be six times higher than the density in the bulk. However, in polycrystalline materials the surface sources do not have a major effect because most grains are not in contact with the surface. Interfaces The interface between a metal and an oxide can greatly increase the number of dislocations created. The oxide layer puts the surface of the metal in tension because the oxygen atoms squeeze into the lattice, and the oxygen atoms are under compression. This greatly increases the stress on the surface of the metal and consequently the amount of dislocations formed at the surface. The increased amount of stress on the surface steps results in an increase in dislocations formed and emitted from the interface. Dislocations may also form and remain in the interface plane between two crystals. This occurs when the lattice spacings of the two crystals do not match, resulting in a misfit of the lattices at the interface. The stress caused by the lattice misfit is released by forming regularly spaced misfit dislocations. Misfit dislocations are edge dislocations with the dislocation line in the interface plane and the Burgers vector in the direction of the interface normal. Interfaces with misfit dislocations may form e.g. as a result of epitaxial crystal growth on a substrate.
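As a quick check of the homogeneous-nucleation figure quoted earlier in this section, a couple of lines of Python reproduce the 3.4 GPa value for copper; the ratio σ/G ≈ 7.4 × 10⁻² is assumed from the stated result.

# Stress required for homogeneous dislocation nucleation in copper.
G_copper = 46.0     # shear modulus of copper, GPa (value from the text)
ratio = 7.4e-2      # assumed sigma/G ratio consistent with the quoted result

sigma_hom = ratio * G_copper
print(f"Homogeneous nucleation stress ~ {sigma_hom:.1f} GPa")   # about 3.4 GPa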
Irradiation Dislocation loops may form in the damage created by energetic irradiation. A prismatic dislocation loop can be understood as an extra (or missing) collapsed disk of atoms, and can form when interstitial atoms or vacancies cluster together. This may happen directly as a result of single or multiple collision cascades, which results in locally high densities of interstitial atoms and vacancies. In most metals, prismatic dislocation loops are the energetically most preferred clusters of self-interstitial atoms. Interaction and arrangement Geometrically necessary dislocations Geometrically necessary dislocations are arrangements of dislocations that can accommodate a limited degree of plastic bending in a crystalline material. Tangles of dislocations are found at the early stage of deformation and appear as poorly defined boundaries; the process of dynamic recovery leads eventually to the formation of a cellular structure containing boundaries with misorientation lower than 15° (low angle grain boundaries). Pinning Adding pinning points that inhibit the motion of dislocations, such as alloying elements, can introduce stress fields that ultimately strengthen the material by requiring a higher applied stress to overcome the pinning stress and continue dislocation motion. The effects of strain hardening by accumulation of dislocations and the grain structure formed at high strain can be removed by appropriate heat treatment (annealing) which promotes the recovery and subsequent recrystallization of the material. The combined processing techniques of work hardening and annealing allow for control over dislocation density, the degree of dislocation entanglement, and ultimately the yield strength of the material. Persistent slip bands Repeated cycling of a material can lead to the generation and bunching of dislocations surrounded by regions that are relatively dislocation free. This pattern forms a ladder-like structure known as a persistent slip band (PSB). PSBs are so called because they leave marks on the surface of metals that, even when removed by polishing, return at the same place with continued cycling. PSB walls are predominantly made up of edge dislocations. In between the walls, plasticity is transmitted by screw dislocations. Where PSBs meet the surface, extrusions and intrusions form, which, under repeated cyclic loading, can lead to the initiation of a fatigue crack. Movement Glide Dislocations can slip in planes containing both the dislocation line and the Burgers vector, the so-called glide plane. For a screw dislocation, the dislocation line and the Burgers vector are parallel, so the dislocation may slip in any plane containing the dislocation. For an edge dislocation, the dislocation and the Burgers vector are perpendicular, so there is one plane in which the dislocation can slip. Climb Dislocation climb is an alternative mechanism of dislocation motion that allows an edge dislocation to move out of its slip plane. The driving force for dislocation climb is the movement of vacancies through a crystal lattice. If a vacancy moves next to the boundary of the extra half plane of atoms that forms an edge dislocation, the atom in the half plane closest to the vacancy can jump and fill the vacancy. This atom shift moves the vacancy in line with the half plane of atoms, causing a shift, or positive climb, of the dislocation. The process of a vacancy being absorbed at the boundary of a half plane of atoms, rather than created, is known as negative climb.
Since dislocation climb results from individual atoms jumping into vacancies, climb occurs in single atom diameter increments. During positive climb, the crystal shrinks in the direction perpendicular to the extra half plane of atoms because atoms are being removed from the half plane. Since negative climb involves an addition of atoms to the half plane, the crystal grows in the direction perpendicular to the half plane. Therefore, compressive stress in the direction perpendicular to the half plane promotes positive climb, while tensile stress promotes negative climb. This is one main difference between slip and climb, since slip is caused by only shear stress. One additional difference between dislocation slip and climb is the temperature dependence. Climb occurs much more rapidly at high temperatures than low temperatures due to an increase in vacancy motion. Slip, on the other hand, has only a small dependence on temperature. Dislocation avalanches Dislocation avalanches occur when many dislocations move simultaneously. Dislocation velocity Dislocation velocity is largely dependent upon shear stress and temperature, and can often be fit using a power-law function v = Aτ^m, where A is a material constant, τ is the applied shear stress, and m is a constant that decreases with increasing temperature. Increased shear stress will increase the dislocation velocity, while increased temperature will typically decrease the dislocation velocity. Greater phonon scattering at higher temperatures is hypothesized to be responsible for increased damping forces which slow the dislocation movement. Geometry Two main types of mobile dislocations exist: edge and screw. Dislocations found in real materials are typically mixed, meaning that they have characteristics of both. Edge A crystalline material consists of a regular array of atoms, arranged into lattice planes. An edge dislocation is a defect where an extra half-plane of atoms is introduced midway through the crystal, distorting nearby planes of atoms. When enough force is applied from one side of the crystal structure, this extra plane passes through planes of atoms breaking and joining bonds with them until it reaches the grain boundary. The dislocation has two properties: a line direction, which is the direction running along the bottom of the extra half plane, and the Burgers vector, which describes the magnitude and direction of distortion to the lattice. In an edge dislocation, the Burgers vector is perpendicular to the line direction. The stresses caused by an edge dislocation are complex due to its inherent asymmetry. These stresses are described by three equations: σxx = −μb y(3x² + y²)/[2π(1 − ν)(x² + y²)²], σyy = μb y(x² − y²)/[2π(1 − ν)(x² + y²)²], and σxy = μb x(x² − y²)/[2π(1 − ν)(x² + y²)²], where μ is the shear modulus of the material, b is the Burgers vector, ν is Poisson's ratio and x and y are coordinates. These equations suggest a vertically oriented dumbbell of stresses surrounding the dislocation, with compression experienced by the atoms near the "extra" plane, and tension experienced by those atoms near the "missing" plane. Screw A screw dislocation can be visualized by cutting a crystal along a plane and slipping one half across the other by a lattice vector, the halves fitting back together without leaving a defect. If the cut goes only part way through the crystal, and the halves are then slipped, the boundary of the cut is a screw dislocation. It comprises a structure in which a helical path is traced around the linear defect (dislocation line) by the atomic planes in the crystal lattice. In pure screw dislocations, the Burgers vector is parallel to the line direction.
An array of screw dislocations can cause what is known as a twist boundary. In a twist boundary, the misalignment between adjacent crystal grains occurs due to the cumulative effect of screw dislocations within the material. These dislocations cause a rotational misorientation between the adjacent grains, leading to a twist-like deformation along the boundary. Twist boundaries can significantly influence the mechanical and electrical properties of materials, affecting phenomena such as grain boundary sliding, creep, and fracture behavior. The stresses caused by a screw dislocation are less complex than those of an edge dislocation and need only one equation, as symmetry allows one radial coordinate to be used: σθz = μb/(2πr), where μ is the shear modulus of the material, b is the Burgers vector, and r is a radial coordinate. This equation suggests a long cylinder of stress radiating outward from the dislocation line and decreasing with distance. This simple model results in an infinite value at the core of the dislocation (r = 0) and so it is only valid for stresses outside of the core of the dislocation. If the Burgers vector is very large, the core may actually be empty resulting in a micropipe, as commonly observed in silicon carbide. Mixed In many materials, dislocations are found where the line direction and Burgers vector are neither perpendicular nor parallel and these dislocations are called mixed dislocations, consisting of both screw and edge character. They are characterized by χ, the angle between the line direction and Burgers vector, where χ = 90° for pure edge dislocations and χ = 0° for screw dislocations. Partial Partial dislocations leave behind a stacking fault. Two types of partial dislocation are the Frank partial dislocation which is sessile and the Shockley partial dislocation which is glissile. A Frank partial dislocation is formed by inserting or removing a layer of atoms on the {111} plane which is then bounded by the Frank partial. Removal of a close packed layer is known as an intrinsic stacking fault and inserting a layer is known as an extrinsic stacking fault. The Burgers vector is normal to the {111} glide plane so the dislocation cannot glide and can only move through climb. In order to lower the overall energy of the lattice, edge and screw dislocations typically dissociate into a stacking fault bounded by two Shockley partial dislocations. The width of this stacking-fault region is proportional to the stacking-fault energy of the material. The combined effect is known as an extended dislocation and is able to glide as a unit. However, dissociated screw dislocations must recombine before they can cross slip, making it difficult for these dislocations to move around barriers. Materials with low stacking-fault energies have the greatest dislocation dissociation and are therefore more readily cold worked. Stair-rod and the Lomer–Cottrell junction If two glide dislocations that lie on different {111} planes split into Shockley partials and intersect, they will produce a stair-rod dislocation with a Lomer-Cottrell dislocation at its apex. It is called a stair-rod because it is analogous to the rod that keeps carpet in place on a stair. Jog A jog describes the steps of a dislocation line that are not in the glide plane of a crystal structure. A dislocation line is rarely uniformly straight, often containing many curves and steps that can impede or facilitate dislocation movement by acting as pinning points or nucleation points, respectively.
Because jogs are out of the glide plane, under shear they cannot move by glide (movement along the glide plane). They instead must rely on vacancy diffusion facilitated climb to move through the lattice. Away from the melting point of a material, vacancy diffusion is a slow process, so jogs act as immobile barriers at room temperature for most metals. Jogs typically form when two non-parallel dislocations cross during slip. The presence of jogs in a material increases its yield strength by preventing easy glide of dislocations. A pair of immobile jogs in a dislocation will act as a Frank–Read source under shear, increasing the overall dislocation density of a material. When a material's yield strength is increased via dislocation density increase, particularly when done by mechanical work, it is called work hardening. At high temperatures, vacancy facilitated movement of jogs becomes a much faster process, diminishing their overall effectiveness in impeding dislocation movement. Kink Kinks are steps in a dislocation line parallel to glide planes. Unlike jogs, they facilitate glide by acting as a nucleation point for dislocation movement. The lateral spreading of a kink from the nucleation point allows for forward propagation of the dislocation while only moving a few atoms at a time, reducing the overall energy barrier to slip. Example in two dimensions (2D) In two dimensions (2D) only the edge dislocations exist, which play a central role in melting of 2D crystals, but not the screw dislocation. Those dislocations are topological point defects which implies that they cannot be created isolated by an affine transformation without cutting the hexagonal crystal up to infinity (or at least up to its border). They can only be created in pairs with antiparallel Burgers vector. If a lot of dislocations are e. g. thermally excited, the discrete translational order of the crystal is destroyed. Simultaneously, the shear modulus and the Young's modulus disappear, which implies that the crystal is molten to a fluid phase. The orientational order is not yet destroyed (as indicated by lattice lines in one direction) and one finds - very similar to liquid crystals - a fluid phase with typically a six-folded director field. This so-called hexatic phase still has an orientational stiffness. The isotropic fluid phase appears, if the dislocations dissociate into isolated five-folded and seven-folded disclinations. This two step melting is described within the so-called Kosterlitz-Thouless-Halperin-Nelson-Young-theory (KTHNY theory), based on two transitions of Kosterlitz-Thouless-type. Observation Transmission electron microscopy (TEM) Transmission electron microscopy can be used to observe dislocations within the microstructure of the material. Thin foils of material are prepared to render them transparent to the electron beam of the microscope. The electron beam undergoes diffraction by the regular crystal lattice planes into a diffraction pattern and contrast is generated in the image by this diffraction (as well as by thickness variations, varying strain, and other mechanisms). Dislocations have different local atomic structure and produce a strain field, and therefore will cause the electrons in the microscope to scatter in different ways. Note the characteristic 'wiggly' contrast of the dislocation lines as they pass through the thickness of the material in the figure (dislocations cannot end in a crystal, and these dislocations are terminating at the surfaces since the image is a 2D projection). 
Dislocations do not have random structures; the local atomic structure of a dislocation is determined by the Burgers vector. One very useful application of the TEM in dislocation imaging is the ability to experimentally determine the Burgers vector. Determination of the Burgers vector is achieved by what is known as g · b ("g dot b") analysis. When performing dark field microscopy with the TEM, a diffracted spot is selected to form the image (as mentioned before, lattice planes diffract the beam into spots), and the image is formed using only electrons that were diffracted by the plane responsible for that diffraction spot. The vector in the diffraction pattern from the transmitted spot to the diffracted spot is the g vector. The contrast of a dislocation is scaled by a factor of the dot product of this vector and the Burgers vector (g · b). As a result, if the Burgers vector and g vector are perpendicular, there will be no signal from the dislocation and the dislocation will not appear at all in the image. Therefore, by examining different dark field images formed from spots with different g vectors, the Burgers vector can be determined. Other methods Field ion microscopy and atom probe techniques offer methods of producing much higher magnifications (typically 3 million times and above) and permit the observation of dislocations at an atomic level. Where surface relief can be resolved to the level of an atomic step, screw dislocations appear as distinctive spiral features – thus revealing an important mechanism of crystal growth: where there is a surface step, atoms can more easily add to the crystal, and the surface step associated with a screw dislocation is never destroyed no matter how many atoms are added to it. Chemical etching When a dislocation line intersects the surface of a metallic material, the associated strain field locally increases the relative susceptibility of the material to acid etching and an etch pit of regular geometrical format results. In this way, dislocations in silicon, for example, can be observed indirectly using an interference microscope. Crystal orientation can be determined by the shape of the etch pits associated with the dislocations. If the material is deformed and repeatedly re-etched, a series of etch pits can be produced which effectively trace the movement of the dislocation in question. Dislocation forces Forces on dislocations Dislocation motion as a result of external stress on a crystal lattice can be described using virtual internal forces which act perpendicular to the dislocation line. The Peach-Koehler equation, F = (σ · b) × s, can be used to calculate the force per unit length on a dislocation as a function of the Burgers vector, b, the stress, σ, and the sense vector, s. The force per unit length of dislocation is thus a function of the general state of stress, σ, and the sense vector, s. The components of the stress field can be obtained from the Burgers vector, the normal stresses, and the shear stresses. Forces between dislocations The force between dislocations can be derived from the energy of interaction of the dislocations. The work is done by displacing the cut faces parallel to a chosen axis, creating one dislocation in the stress field of another. For the x and y directions, the forces are then found by taking the derivatives of this energy. Free surface forces Dislocations will also tend to move towards free surfaces due to the lower strain energy.
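The Peach-Koehler relation described above can be evaluated directly once the stress tensor, the Burgers vector, and the sense vector are known. A minimal Python sketch, assuming the form F = (σ · b) × s and using made-up illustrative values:

# Peach-Koehler force per unit length on a dislocation: F = (sigma . b) x s
import numpy as np

sigma = np.array([[0.0, 50e6, 0.0],      # illustrative stress tensor, Pa (pure shear)
                  [50e6, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
b = np.array([0.25e-9, 0.0, 0.0])        # Burgers vector, m (edge character along x)
s = np.array([0.0, 0.0, 1.0])            # unit sense (line direction) vector along z

force_per_length = np.cross(sigma @ b, s)   # N per metre of dislocation line
print(force_per_length)                     # glide force along x for this geometry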
This fictitious image force toward the free surface can be expressed for a screw dislocation as F = −μb²/(4πd), where d is the distance of the dislocation from the free surface. The corresponding force for an edge dislocation can be expressed as F = −μb²/(4π(1 − ν)d).
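The stress-field expressions given above for edge and screw dislocations lend themselves to direct numerical evaluation. The sketch below is illustrative only; the material constants are placeholders for a generic metal, and the formulas are the ones reconstructed above.

# Evaluate the elastic stress fields around straight edge and screw dislocations.
import numpy as np

mu = 45e9        # shear modulus, Pa (placeholder value)
nu = 0.30        # Poisson's ratio (placeholder value)
b = 0.25e-9      # magnitude of the Burgers vector, m

def edge_stresses(x, y):
    """sigma_xx, sigma_yy, sigma_xy around an edge dislocation at the origin."""
    D = mu * b / (2 * np.pi * (1 - nu))
    r2 = x**2 + y**2
    sxx = -D * y * (3 * x**2 + y**2) / r2**2
    syy = D * y * (x**2 - y**2) / r2**2
    sxy = D * x * (x**2 - y**2) / r2**2
    return sxx, syy, sxy

def screw_stress(r):
    """sigma_theta_z around a screw dislocation (valid outside the core)."""
    return mu * b / (2 * np.pi * r)

print(edge_stresses(5e-9, 5e-9))   # stresses a few nanometres from an edge dislocation
print(screw_stress(5e-9))          # shear stress 5 nm from a screw dislocation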
Physical sciences
Crystallography
Physics
795403
https://en.wikipedia.org/wiki/Messier%2081
Messier 81
Messier 81 (also known as NGC 3031 or Bode's Galaxy) is a grand design spiral galaxy about 12 million light-years away in the constellation Ursa Major. It has a D25 isophotal diameter of . Because of its relative proximity to the Milky Way galaxy, large size, and active galactic nucleus (which harbors a supermassive black hole of about 70 million solar masses), Messier 81 has been studied extensively by professional astronomers. The galaxy's large size and relatively high brightness also make it a popular target for amateur astronomers. In late February 2022, astronomers reported that M81 may be the source of FRB 20200120E, a repeating fast radio burst. Discovery Messier 81 was first discovered by Johann Elert Bode on 31 December 1774. Thus, it is sometimes referred to as "Bode's Galaxy". In 1779, Pierre Méchain and Charles Messier reidentified Bode's object and listed it in the Messier Catalogue. Visibility The galaxy is to be found approximately 10° northwest of Alpha Ursae Majoris (Dubhe) along with several other galaxies in the Messier 81 Group. Its apparent magnitude, a consequence of its distance, means it requires a good night sky; it rises only very briefly and extremely low from locations near its southernmost limit of visibility on Earth's surface, about the 20th parallel south. Messier 81 and Messier 82 are considered ideal for viewing using binoculars and small telescopes. The two objects are generally not observable to the unaided eye, although highly experienced amateur astronomers may be able to see Messier 81 under exceptional observing conditions with a very dark sky. Telescopes with apertures of or larger are needed to distinguish structures in the galaxy. The galaxy is best observed during April. Interstellar dust Most of the emission at infrared wavelengths originates from interstellar dust. This interstellar dust is found primarily within the galaxy's spiral arms, and it has been shown to be associated with star formation regions. The general explanation is that the hot, short-lived blue stars that are found within star formation regions are very effective at heating the dust and thus enhancing the infrared dust emission from these regions. Globular clusters It is estimated that M81 has 210 ± 30 globular clusters. Supernovae Only one supernova has been detected in Messier 81. The supernova, named SN 1993J, was discovered on 28 March 1993 by F. García in Spain. At the time, it was the second brightest type II supernova observed in the 20th century, peaking at an apparent magnitude of 10.7. The spectral characteristics of the supernova changed over time. Initially, it looked more like a type II supernova (a supernova formed by the explosion of a supergiant star) with strong hydrogen spectral line emission, but later the hydrogen lines faded and strong helium spectral lines appeared, making the supernova look more like a type Ib. Moreover, the variations in SN 1993J's luminosity over time were not like the variations observed in other type II supernovae, but did resemble the variations observed in type Ib supernovae. Hence, the supernova has been classified as a type IIb, a transitory class between type II and type Ib. The scientific results from this supernova suggested that type Ib and Ic supernovae were formed through the explosions of giant stars through processes similar to those taking place in type II supernovae.
Despite the uncertainties in modeling the unusual supernova, it was also used to estimate a very approximate distance of 8.5 ± 1.3 Mly (2.6 ± 0.4 Mpc) to Messier 81. Because it is a local galaxy, the Central Bureau for Astronomical Telegrams (CBAT) tracks novae in M81 along with those in M31 and M33. SMBH In the center of M81 there exists a supermassive black hole (SMBH) with a mass of about . The SMBH is active, with an accretion disk and a one-sided relativistic jet. Observations also suggest that a second SMBH may orbit the primary SMBH with a period of around 30 years. The mass of the secondary SMBH is estimated at about 0.1 times that of the primary. Environment Messier 81 is the largest galaxy in the M81 Group, a group of 34 galaxies in the constellation Ursa Major. At approximately 11.7 Mly (3.6 Mpc) from the Earth, this group and the Local Group, which contains the Milky Way, are relative neighbors in the Virgo Supercluster. Gravitational interactions of M81 with M82 and NGC 3077 have stripped hydrogen gas away from all three galaxies, forming gaseous filamentary structures in the group. Moreover, these interactions have allowed interstellar gas to fall into the centers of M82 and NGC 3077, leading to vigorous star formation or starburst activity there. Distance The distance to Messier 81 has been measured by Freedman et al. to be 3.63 ± 0.34 megaparsecs (11.8 ± 1.1 million light years) by using the Hubble Space Telescope to identify classical Cepheid variables and measure their periods using the period-luminosity relation discovered by Henrietta Swan Leavitt.
Physical sciences
Notable galaxies
Astronomy
795422
https://en.wikipedia.org/wiki/Lipizzan
Lipizzan
The Lipizzan or Lipizzaner is a European breed of riding horse developed in the Habsburg Empire in the sixteenth century. It is of Baroque type, and is powerful, slow to mature and long-lived; the coat is usually gray. The name of the breed derives from that of the village of Lipica, now in Slovenia, which was part of the Habsburg empire at the time the breed was developed and was the site of one of the earliest stud farms established; the stud farm there is still active. The breed has been endangered numerous times by warfare sweeping Europe, including during the War of the First Coalition, World War I, and World War II. The rescue of the Lipizzans during World War II by American troops was made famous by the Disney movie Miracle of the White Stallions. The Lipizzaner is closely associated with the Spanish Riding School of Vienna, Austria, where the horses demonstrate the haute école or "high school" movements of classical dressage, including the highly controlled, stylized jumps and other movements known as the "airs above the ground". These horses are mostly bred at the Piber Federal Stud, near Graz, Austria, and are trained using traditional methods of classical dressage that date back hundreds of years. Eight stallions are recognized as the classic foundation bloodstock of the breed, all foaled in the late eighteenth and early nineteenth centuries. All modern Lipizzans trace their bloodlines to these eight stallions, and all breeding stallions have included in their name the name of the foundation sire of their bloodline. Classic mare lines are also known, with up to 35 recognized by various breed registries. The majority of horses are registered through the member organizations of the Lipizzan International Federation, which covers almost 11,000 horses in 19 countries and at 9 state studs in Europe. Most Lipizzans reside in Europe, with smaller numbers in the Americas, South Africa, and Australia. Lipizzan horse breeding traditions are recognized by UNESCO and inscribed on the Representative List of the Intangible Cultural Heritage of Humanity. Characteristics Most adult Lipizzans measure between . However, horses bred to be closer to the original carriage-horse type are taller, approaching . Lipizzans have a long head, with a straight or slightly convex profile. The jaw is deep, the ears small, the eyes large and expressive, and the nostrils flared. They have a neck that is sturdy yet arched, and withers that are low, muscular, and broad. They are a Baroque horse, with a wide, deep chest, broad croup, and muscular shoulder. The tail is carried high and well set. The legs are well-muscled and strong, with broad joints and well-defined tendons. The feet tend to be small, but are tough. Lipizzan horses tend to mature slowly. However, they live and are active longer than many other breeds, with horses performing the difficult exercises of the Spanish Riding School well into their 20s and living into their 30s. Color Aside from the rare solid-colored horse (usually bay or black), most Lipizzans are gray. Like all gray horses, they have black skin, dark eyes, and as adult horses, a white hair coat. Gray horses, including Lipizzans, are born with a pigmented coat—in Lipizzans, foals are usually bay or black—and become lighter each year as the graying process takes place, with the process being complete between 6 and 10 years of age. Lipizzans are not actually true white horses, but this is a common misconception. A white horse is born white and has unpigmented skin.
Until the eighteenth century, Lipizzans had other coat colors, including dun, bay, chestnut, black, piebald, and skewbald. However, gray is a dominant gene. Gray was the color preferred by the royal family, so the color was emphasized in breeding practices. Thus, in a small breed population when the color was deliberately selected as a desirable feature, it came to be the color of the overwhelming majority of Lipizzan horses. However, it is a long-standing tradition for the Spanish Riding School to have at least one bay Lipizzan stallion in residence, and this tradition is continued through the present day. History The ancestors of the Lipizzan can be traced to around 800 AD. The earliest predecessors of the Lipizzan originated in the seventh century when Barb horses were brought into Spain by the Moors and crossed on native Spanish stock. The result was the Andalusian horse and other Iberian horse breeds. By the sixteenth century, when the Habsburgs ruled both Spain and Austria, a powerful but agile horse was desired both for military uses and for use in the fashionable and rapidly growing riding schools for the nobility of central Europe. Therefore, in 1562, the Habsburg Emperor Maximillian II brought the Spanish Andalusian horse to Austria and founded the court stud at Kladrub. In 1580, his brother, Archduke Charles II, ruler of Inner Austria, established a similar stud at Lipizza (now Lipica), located in modern-day Slovenia, from which the breed obtained its name. When the stud farm was established, Lipizza was located within the municipal limits of Trieste, an autonomous city under Habsburg sovereignty. The name of the village itself derives from the Slovene word lipa, meaning "linden tree." Spanish, Barb, and Arabian stock were crossed at Lipizza, and succeeding generations were crossed with the now-extinct Neapolitan breed from Italy and other Baroque horses of Spanish descent obtained from Germany and Denmark. While breeding stock was exchanged between the two studs, Kladrub specialized in producing heavy carriage horses, while riding and light carriage horses came from the Lipizza stud. Beginning in 1920, the Piber Federal Stud, near Graz, Austria, became the main stud for the horses used in Vienna. Breeding became very selective, allowing only stallions that had proved themselves at the Riding School to stand at stud, and breeding only mares that had passed rigorous performance testing. Foundation horses Today, eight foundation lines for Lipizzans are recognized by various registries, which refer to them as "dynasties". They are divided into two groups. Six trace to classical foundation stallions used in the eighteenth and nineteenth centuries by the Lipizza stud, and two additional lines were not used at Lipizza, but were used by other studs within the historic boundaries of the Habsburg Empire. The six "classical dynasties" are: Pluto: a gray Spanish stallion from the Royal Danish Stud, foaled in 1765 Conversano: a black Neapolitan stallion, foaled in 1767 Maestoso: a gray stallion from the Kladrub stud with a Spanish dam, foaled 1773, descendants today all trace via Maestoso X, foaled in Hungary in 1819 Favory: a dun stallion from the Kladrub stud, foaled in 1779 Neapolitano: a bay Neapolitan stallion from the Polesine, foaled in 1790 Siglavy: a gray Arabian stallion, originally from Syria, foaled in 1810 Two additional stallion lines are found in Croatia, Hungary, and other eastern European countries, as well as in North America. 
They are accepted as equal to the six classical lines by the Lipizzan International Federation. These are: Tulipan: A black stallion of Baroque type and Spanish pedigree foaled about 1800 from the Croatian stud farm of Terezovac, owned by Count Janković-Bésán. Incitato: A stallion of Spanish lines foaled 1802, bred in Transylvania by Count Bethlen, and sold to the Hungarian stud farm Mezőhegyes Several other stallion lines have died out over the years, but were used in the early breeding of the horses. In addition to the foundation stallion lines, there were 20 "classic" mare lines, 14 of which exist today. However, up to 35 mare lines are recognized by various Lipizzan organizations. Traditional naming patterns are used for both stallions and mares, required by Lipizzan breed registries. Stallions traditionally are given two names, with the first being the line of the sire and the second being the name of the dam. For example, "Maestoso Austria" is a horse sired by Maestoso Trompeta out of a mare named Austria. The horse's sire line traces to the foundation sire Maestoso. The names of mares are chosen to be "complementary to the traditional Lipizzan line names" and are required to end in the letter "a". Spanish Riding School The Spanish Riding School uses highly trained Lipizzan stallions in public performances that demonstrate classical dressage movements and training. In 1572, the first Spanish riding hall was built, during the Austrian Empire, and is the oldest of its kind in the world. The Spanish Riding School, though located in Vienna, Austria, takes its name from the original Spanish heritage of its horses. In 1729, Charles VI commissioned the building of the Winter Riding School in Vienna and in 1735, the building was completed that remains the home of the Spanish Riding School today. Wartime preservation The Lipizzans endured several wartime relocations throughout their history, each of which saved the breed from extinction. The first was in March 1797 during the War of the First Coalition, when the horses were evacuated from Lipica. During the journey, 16 mares gave birth to foals. In November 1797, the horses returned to Lipica, but the stables were in ruins. They were rebuilt, but in 1805, the horses were evacuated again when Napoleon invaded Austria. They were being taken care of in Đakovo Stud. They remained away from the stud for two years, returning April 1, 1807, but then, following the Treaty of Schönbrunn in 1809, the horses were evacuated three more times during the unsettled period that followed, resulting in the loss of many horses and the destruction of the written studbooks that documented bloodlines of horses prior to 1700. The horses finally returned to Lipica for good in 1815, where they remained for the rest of the nineteenth century. The first evacuation of the twentieth century occurred in 1915 when the horses were evacuated from Lipica due to World War I and placed at Laxenburg and Kladrub. Following the war, the Austro-Hungarian Empire was broken up, with Lipica becoming part of Italy. Thus, the animals were divided between several different studs in the new postwar nations of Austria, Italy, Hungary, Czechoslovakia, Romania, and Yugoslavia. The nation of Austria kept the stallions of the Spanish Riding School and some breeding stock. By 1920, the Austrian breeding stock was consolidated at Piber. During World War II, the high command of Nazi Germany transferred most of Europe's Lipizzan breeding stock to Hostau, Czechoslovakia. 
The breeding stock was taken from Piber in 1942, and additional mares and foals from other European nations arrived in 1943. The stallions of the Spanish Riding School were evacuated to St. Martins, Austria, from Vienna in January 1945, when bombing raids neared the city and the head of the Spanish Riding School, Colonel Alois Podhajsky, feared the horses were in danger. By spring of 1945, the horses at Hostau were threatened by the advancing Soviet army, which might have slaughtered the animals for horse meat had it captured the facility. The rescue of the Lipizzans by the United States Army, made famous by the Disney movie Miracle of the White Stallions, occurred in two parts: The Third United States Army, under the command of General George S. Patton, was near St. Martins in the spring of 1945 and learned that the Lipizzan stallions were in the area (Letts, Elizabeth. 2016. The Perfect Horse: The Daring U.S. Mission to Rescue the Priceless Stallions Kidnapped by the Nazis). Patton himself was a horseman and, like Podhajsky, had competed in the Olympic Games. On May 7, 1945, Podhajsky put on an exhibition of the Spanish Riding School stallions for Patton and Undersecretary of War Robert P. Patterson, and at its conclusion requested that Patton take the horses under his protection. Meanwhile, the Third Army's United States Second Cavalry, a tank unit under the command of Colonel Charles Reed, had discovered the horses at Hostau, where 400 Allied prisoners of war were also being kept, and had occupied it on April 28, 1945. "Operation Cowboy", as the rescue was known, resulted in the recovery of 1,200 horses, including 375 Lipizzans. Patton learned of the raid, and arranged for Podhajsky to fly to Hostau. On May 12, American soldiers began riding, trucking, and herding the horses 35 miles across the border into Kötzting, Germany. The Lipizzans were eventually settled in temporary quarters in Wimsbach, until the breeding stock returned to Piber in 1952, and the stallions returned to the Spanish Riding School in 1955. In 2005, the Spanish Riding School celebrated the 60th anniversary of Patton's rescue by touring the United States. During the Croatian War of Independence, from 1991 to 1995, the horses at the Lipik stable in Croatia were taken by the Serbs to Novi Sad, Serbia. The horses remained there until 2007, when calls began to be made for them to be returned to their country of origin. In October 2007, 60 horses were returned to Croatia. Modern breed The Lipizzan breed suffered a setback to its population when a viral epidemic hit the Piber Stud in 1983. Forty horses and 8% of the expected foal crop were lost. Since then, the population at the stud has increased. By 1994, 100 mares were at the stud farm and a foal crop of 56 was born in 1993. In 1994, the rate of successful pregnancy and birth of foals increased from 27% to 82%, the result of a new veterinary center. In 1996, a study funded by the European Union Indo-Copernicus Project assessed 586 Lipizzan horses from eight stud farms in Europe, with the goal of developing a "scientifically based description of the Lipizzan horse". A study of the mitochondrial DNA (mtDNA) was performed on 212 of the animals, and those studied were found to contain 37 of the 39 mtDNA haplotypes known in modern horses, meaning that they show a high degree of genetic diversity.
This had been expected, as it was known that the mare families of the Lipizzan included a large number of different breeds, including Arabians, Thoroughbreds, and other European breeds. The Lipizzan International Federation (LIF) is the international governing organization for the breed, composed of many national and private organizations representing the Lipizzan. The organizations work together under the banner of the LIF to promote the breed and maintain standards. As of 2012, almost 11,000 Lipizzans were registered with the LIF; residing with private breeders in 19 countries and at nine state studs in Europe. The largest number are in Europe, with almost 9,000 registered horses, followed by the Americas, with just over 1,700, then Africa and Australia with around 100 horses each. The nine state studs that are part of the LIF represent almost one-quarter of the horses in Europe. Sâmbăta de Jos, in Romania, has the greatest number of horses, with 400, followed by Piber in Austria (360), Lipica in Slovenia (358), Szilvásvárad in Hungary (262), Monterotondo in Italy (230), Đakovo-Lipik in Croatia (220), and Topoľčianky in Slovakia (200). The other two studs are smaller, with stud Vučijak in Bosnia near Prnjavor having 130 horses and Karađorđevo in Serbia having just 30. Educational programs have been developed to promote the breed and foster adherence to traditional breeding objectives. Because of the status of Lipizzans as the only breed of horse developed in Slovenia, via the Lipica stud that is now located within its borders, Lipizzans are recognized in Slovenia as a national animal. For example, a pair of Lipizzans is featured on the 20-cent Slovenian euro coins. Mounted regiments of Carabinieri police in Italy also employ the Lipizzan as one of their mounts. In October 2008, during a visit to Slovenia, a Lipizzan at Lipica, named 085 Favory Canissa XXII, was given to Queen Elizabeth II of the United Kingdom. She decided to leave the animal in the care of the stud farm. Heritage of humanity list On the initiative of Slovenian Ministry of Culture, the tradition of breeding and maintaining a purebred Lipizzaner is recognized by UNESCO and inscribed on the Representative List of the Intangible Cultural Heritage of Humanity as Lipizzan horse breeding traditions since 2022. Inscriptions include state parties Austria, Bosnia and Herzegovina, Croatia, Hungary, Italy, Romania, Slovakia and Slovenia. Training and uses The traditional horse training methods for Lipizzans were developed at the Spanish Riding School and are based on the principles of classical dressage, which in turn traces to the Ancient Greek writer Xenophon, whose works were rediscovered in the sixteenth century. His thoughts on development of horses' mental attitude and psyche are still considered applicable today. Other writers who strongly influenced the training methods of the Spanish Riding School include Federico Grisone, the founder of the first riding academy in Naples, who lived during the sixteenth century, and Antoine de Pluvinel and François Robichon de la Guérinière, two Frenchmen from the seventeenth and eighteenth centuries. The methods for training the Lipizzan stallions at the Spanish Riding School were passed down via an oral tradition until Field Marshal Franz Holbein and Johann Meixner, Senior Rider at the School, published the initial guidelines for the training of horse and rider at the school in 1898. 
In the mid-twentieth century, Alois Podhajsky wrote a number of works that serve as textbooks for many dressage riders today. The principles taught at the Spanish Riding School are based on practices taught to cavalry riders to prepare their horses for warfare. Young stallions come to the Spanish Riding School for training when they are four years old. Full training takes an average of six years for each horse, and schooling is considered complete when they have mastered the skills required to perform the "School Quadrille". There are three progressively more difficult skill sets taught to the stallions, which are: Forward riding, also called straight riding or the Remontenschule, is the name given to the skills taught in the first year of training, where a young horse learns to be saddled and bridled, learns basic commands on a longe line, and then is taught to be ridden, mostly in an arena in simple straight lines and turns, to teach correct responses to the rider's legs and hands while mounted. The main goal during this time is to develop free forward movement in as natural a position as possible. Campaign school, Campagneschule or Campagne, is where the horse learns collection and balance through all gaits, turns, and maneuvers. The horse learns to shorten and lengthen his stride and perform lateral movements to the side, and is introduced to the more complex double bridle. This is the longest training phase and may take several years. High-school dressage, the haute école or Hohe Schule, includes riding the horse with greater collection with increased use of the hindquarters, developing increased regularity, skill, and finesse in all natural gaits. In this period, the horse learns the most advanced movements such as the half-pass, counter-canter, flying change, pirouette, passage, and piaffe. This is also when the horse may be taught the "airs above the ground." This level emphasizes performance with a high degree of perfection.Podhajsky, The Complete Training of Horse and Rider, pp. 25–26 Although the Piber Stud trains mares for driving and under saddle, the Spanish Riding School exclusively uses stallions in its performances. Worldwide, the Lipizzan today competes in dressage and driving, as well as retaining their classic position at the Spanish Riding School. "Airs above the ground" The "airs above the ground" are the difficult "high school" dressage movements made famous by the Lipizzans. The finished movements include: The levade is a position wherein the horse raises up both front legs, standing at a 30° angle entirely on its hind legs in a controlled form that requires a great deal of hindquarter strength. A less difficult but related movement is the pesade, where the horse rises up to a 45° angle. The courbette is a movement where the horse balances on its hind legs and then essentially "hops", jumping with the front legs off the ground and hind legs together. The capriole is a jump in place where the stallion leaps into the air, tucking his forelegs under himself, and kicks out with his hind legs at the top of the jump. Other movements include: The croupade and ballotade are predecessors to the capriole. In the croupade, the horse jumps with both front and hind legs remaining tucked under the body and he does not kick out. In the ballotade, the horse jumps and untucks his hind legs slightly, he does not kick out, but the soles of the hind feet are visible if viewed from the rear. 
The mezair is a series of successive levades in which the horse lowers its forefeet to the ground before rising again on hindquarters, achieving forward motion. This movement is no longer used at the Spanish Riding School. In popular culture Lipizzans have starred or played supporting roles in many movies, TV shows, books, and other media. The 1940 film Florian stars two Lipizzan stallions. It was based on a 1934 novel by Felix Salten. The wife of the film's producer owned the only Lipizzans in the US at the time the movie was made. The rescue during World War II of the Lipizzan stallions is depicted in the 1963 Walt Disney movie Miracle of the White Stallions. The movie was the only live-action, relatively realistic film set against a World War II backdrop that Disney has ever produced. Television programs featuring the Lipizzans include The White Horses'', a 1965 children's television series co-produced by RTV Ljubljana (now RTV Slovenija) of Yugoslavia and BR-TV of Germany, rebroadcast in the United Kingdom. It followed the adventures of a teenaged girl who visits a farm where Lipizzan horses are raised.
Biology and health sciences
Horses
Animals
6243993
https://en.wikipedia.org/wiki/LU%20decomposition
LU decomposition
In numerical analysis and linear algebra, lower–upper (LU) decomposition or factorization factors a matrix as the product of a lower triangular matrix and an upper triangular matrix (see matrix decomposition). The product sometimes includes a permutation matrix as well. LU decomposition can be viewed as the matrix form of Gaussian elimination. Computers usually solve square systems of linear equations using LU decomposition, and it is also a key step when inverting a matrix or computing the determinant of a matrix. It is also sometimes referred to as LR decomposition (factors into left and right triangular matrices). Definitions Let A be a square matrix. An LU factorization refers to the expression of A as the product of two factors – a lower triangular matrix L and an upper triangular matrix U: A = LU. Sometimes factorization is impossible without prior reordering of A to prevent division by zero or uncontrolled growth of rounding errors; hence an alternative expression becomes PAQ = LU, where P and Q are row and column permutation matrices (cf. pivoting). In the lower triangular matrix all elements above the diagonal are zero; in the upper triangular matrix, all the elements below the diagonal are zero. For example, for a 3 × 3 matrix A, its LU decomposition looks like this: [a11 a12 a13; a21 a22 a23; a31 a32 a33] = [l11 0 0; l21 l22 0; l31 l32 l33] [u11 u12 u13; 0 u22 u23; 0 0 u33]. Without a proper ordering or permutations in the matrix, the factorization may fail to materialize. For example, it is easy to verify (by expanding the matrix multiplication) that a11 = l11 u11. If a11 = 0, then at least one of l11 and u11 has to be zero, which implies that either L or U is singular. This is impossible if A is nonsingular (invertible). In terms of operations, zeroing/elimination of the remaining elements of the first column of A involves division of a_i1 by a11, which is impossible if a11 is 0. This is a procedural problem. It can be removed by simply reordering the rows of A so that the first element of the permuted matrix is nonzero. The same problem in subsequent factorization steps can be removed the same way. For numerical stability against rounding errors/division by small numbers it is important to select a pivot of large absolute value (cf. pivoting). A matrix A of side N has N² coefficients, while the two triangular matrices combined contain N² + N coefficients; therefore N coefficients of the matrices L and U are not independent. The usual convention is to set L unitriangular, i.e. with all main diagonal elements equal to one. LU factorization with partial pivoting It turns out that a proper permutation of rows (or columns) to select the column (or row) absolute maximal pivot is sufficient for numerically stable LU factorization, except for known pathological cases. It is called LU factorization with partial pivoting (LUP): PA = LU (or AQ = LU), where L and U are again lower and upper triangular matrices, and P (Q) are the corresponding permutation matrices, which, when left/right-multiplied to A, reorder the rows/columns of A. It turns out that all square matrices can be factorized in this form, and the factorization is numerically stable in practice. This makes LUP decomposition a useful technique in practice. A variant called rook pivoting involves, at each step, a search for a maximum element in the way a rook moves on a chessboard: along a column, then a row, then a column again, and so on, until reaching a pivot that is maximal in both its row and column. It can be proven that for large matrices of random elements its cost of operations at each step is, as with partial pivoting, proportional to the length of the matrix side, unlike the square of the side for full pivoting.
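In practice, an LU factorization with partial pivoting is usually obtained from a library routine rather than written by hand. A minimal sketch, assuming NumPy and SciPy are available:

# LUP factorization of a small matrix using SciPy (A = P @ L @ U).
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

P, L, U = lu(A)                    # P permutation, L unit lower triangular, U upper triangular
print(np.allclose(A, P @ L @ U))   # True: the factors reproduce A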
LU factorization with full pivoting An LU factorization with full pivoting involves both row and column permutations to find the element of maximum absolute value in the whole remaining submatrix: PAQ = LU, where L, U and P are defined as before, and Q is a permutation matrix that reorders the columns of A. Lower-diagonal-upper (LDU) decomposition A lower-diagonal-upper (LDU) decomposition is a decomposition of the form A = LDU, where D is a diagonal matrix, and L and U are unitriangular matrices, meaning that all the entries on the diagonals of L and U are one. Rectangular matrices Above we required that A be a square matrix, but these decompositions can all be generalized to rectangular matrices as well. In that case, L and D are square matrices both of which have the same number of rows as A, and U has exactly the same dimensions as A. Upper triangular should be interpreted as having only zero entries below the main diagonal, which starts at the upper left corner. Similarly, the more precise term for U is that it is the row echelon form of the matrix A. Example We factor a simple 2-by-2 matrix. One way to find the LU decomposition of such a matrix is to solve by inspection the linear equations obtained by expanding the matrix multiplication. This system of equations is underdetermined. In this case any two non-zero elements of the L and U matrices are parameters of the solution and can be set arbitrarily to any non-zero value. Therefore, to find the unique LU decomposition, it is necessary to put some restriction on the L and U matrices. For example, we can conveniently require the lower triangular matrix L to be a unit triangular matrix, so that all the entries of its main diagonal are set to one. The system of equations can then be solved for the remaining entries, and substituting these values into the LU decomposition yields the factorization. Existence and uniqueness Square matrices Any square matrix admits LUP and PLU factorizations. If A is invertible, then it admits an LU (or LDU) factorization if and only if all its leading principal minors are nonzero (for example, an invertible matrix whose (1,1) entry is zero does not admit an LU or LDU factorization). If A is a singular matrix of rank k, then it admits an LU factorization if the first k leading principal minors are nonzero, although the converse is not true. If a square, invertible matrix has an LDU factorization with all diagonal entries of L and U equal to 1, then the factorization is unique. In that case, the LU factorization is also unique if we require that the diagonal of L (or U) consists of ones. In general, any square matrix could have one of the following: a unique LU factorization (as mentioned above); infinitely many LU factorizations if any of the first (n−1) columns are linearly dependent; no LU factorization if the first (n−1) columns are linearly independent and at least one leading principal minor is zero. In Case 3, one can approximate an LU factorization by perturbing a diagonal entry by a small nonzero amount so as to avoid a zero leading principal minor. Symmetric positive-definite matrices If A is a symmetric (or Hermitian, if A is complex) positive-definite matrix, we can arrange matters so that U is the conjugate transpose of L. That is, we can write A as A = LL*. This decomposition is called the Cholesky decomposition. If A is positive definite, then the Cholesky decomposition exists and is unique. Furthermore, computing the Cholesky decomposition is more efficient and numerically more stable than computing some other LU decompositions.
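As a small illustration of the symmetric positive-definite case, the sketch below (in the same spirit as the article's later C code; the example matrix and the function name cholesky are hypothetical and not taken from the article) computes the lower triangular Cholesky factor L with A = L L^T:

#include <math.h>
#include <stdio.h>

#define N 3

/* Cholesky factorization of a symmetric positive-definite matrix:
 * A = L * L^T with L lower triangular and positive diagonal entries.
 * Returns 0 if A is not positive definite. */
static int cholesky(const double A[N][N], double L[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) L[i][j] = 0.0;

    for (int j = 0; j < N; j++) {
        double d = A[j][j];
        for (int k = 0; k < j; k++) d -= L[j][k] * L[j][k];
        if (d <= 0.0) return 0;                /* not positive definite */
        L[j][j] = sqrt(d);
        for (int i = j + 1; i < N; i++) {
            double s = A[i][j];
            for (int k = 0; k < j; k++) s -= L[i][k] * L[j][k];
            L[i][j] = s / L[j][j];
        }
    }
    return 1;
}

int main(void)
{
    /* hypothetical symmetric positive-definite matrix, for illustration only */
    const double A[N][N] = {{4, 2, 2}, {2, 5, 3}, {2, 3, 6}};
    double L[N][N];
    if (cholesky(A, L))
        for (int i = 0; i < N; i++)
            printf("%8.4f %8.4f %8.4f\n", L[i][0], L[i][1], L[i][2]);
    return 0;
}

Because the factor is triangular, the same forward and backward substitution used with LU factors also applies here when solving linear systems.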
General matrices For a (not necessarily invertible) matrix over any field, the exact necessary and sufficient conditions under which it has an LU factorization are known. The conditions are expressed in terms of the ranks of certain submatrices. The Gaussian elimination algorithm for obtaining LU decomposition has also been extended to this most general case. Algorithms Closed formula When an LDU factorization exists and is unique, there is a closed (explicit) formula for the elements of L, D, and U in terms of ratios of determinants of certain submatrices of the original matrix A. In particular, the first diagonal entry of D equals a_11, and each subsequent diagonal entry of D is the ratio of the determinant of the corresponding leading principal submatrix to the determinant of the next smaller leading principal submatrix. Computation of the determinants is computationally expensive, so this explicit formula is not used in practice. Using Gaussian elimination The following algorithm is essentially a modified form of Gaussian elimination. Computing an LU decomposition using this algorithm requires about 2n^3/3 floating-point operations, ignoring lower-order terms. Partial pivoting adds only a quadratic term; this is not the case for full pivoting. Generalized explanation Notation Given an N × N matrix A, define A^(0) as the original, unmodified version of the matrix A. The parenthetical superscript (e.g., A^(n)) denotes the version of the matrix after the n-th step of the elimination. The matrix A^(n) is the matrix in which the elements below the main diagonal have already been eliminated to 0 through Gaussian elimination for the first n columns. Procedure During this process, we gradually modify the matrix A using row operations until it becomes the matrix A^(N−1), in which all the elements below the main diagonal are equal to zero. During this, we will simultaneously create two separate matrices P and L, such that PA = LU. We define the final permutation matrix P as the identity matrix which has had the same rows swapped, in the same order, as the matrix A while it transforms into the matrix A^(N−1). For our matrix A^(n−1), we may start by swapping rows to provide the desired conditions for the n-th column. For example, we might swap rows to perform partial pivoting, or we might do it to set the pivot element on the main diagonal to a non-zero number so that we can complete the Gaussian elimination. For our matrix A^(n−1), we want to set every element below the pivot (the element of the main diagonal in the n-th column) to zero. To set such an element a_in to zero, we subtract from the i-th row a multiple ℓ_in of the n-th row, for each row i > n; the multiplier is ℓ_in = a_in / a_nn, computed from the current, partially eliminated matrix. Once we have performed the row operations for the first N − 1 columns, we have obtained an upper triangular matrix, which is denoted by U. We can also create the lower triangular matrix, denoted by L, by directly inputting the previously calculated values of ℓ_in below its unit diagonal. Example Suppose we are given a 3 × 3 matrix A. We choose to implement partial pivoting and thus swap the first and second rows, so that our matrix A and the first iteration of our matrix P change accordingly. Once we have swapped the rows, we can eliminate the elements below the main diagonal in the first column by subtracting the appropriate multiples ℓ_21 and ℓ_31 of the first row from the rows below it. Once these rows have been subtracted, we have derived the matrix A^(1). Because we are implementing partial pivoting, we swap the second and third rows of our derived matrix and of the current version of our matrix P. Now, we eliminate the element below the main diagonal in the second column by subtracting the multiple ℓ_32 of the second row.
Because no non-zero elements remain below the main diagonal in the current iteration of the matrix after this row subtraction, this step yields our final upper triangular matrix (denoted U) and our final L matrix. After also switching the corresponding rows, we obtain our final P matrix. These matrices now satisfy the relation PA = LU. Relations when no rows are swapped If we did not swap rows at all during this process, we can perform the row operations for each column simultaneously by setting A^(n) = L_n A^(n−1), where L_n is the N × N identity matrix whose n-th column additionally carries the negated multipliers −ℓ_in below the main diagonal. In other words, the lower triangular matrix L_n encodes all the row operations of the n-th step. Performing all the row operations for the first N − 1 columns in this way is equivalent to finding the decomposition A = L_1^(-1) L_2^(-1) ... L_{N−1}^(-1) A^(N−1). Denote the product of the inverses by L, so that A = LU. Now let us compute this product. The inverse of each L_n has a simple form: it is L_n with the signs of its below-diagonal entries flipped, i.e. its n-th column carries the multipliers +ℓ_in. If there are two lower triangular matrices with 1s on the main diagonal, and neither has a non-zero item below the main diagonal in the same column as the other, then we can include all non-zero items at their same locations in the product of the two matrices. Finally, multiplying the inverses together generates the fused matrix denoted L (as previously mentioned), and we obtain the factorization A = LU. It is clear that in order for this algorithm to work, one needs a non-zero pivot a_nn at each step (see the definition of ℓ_in). If this assumption fails at some point, one needs to interchange the n-th row with another row below it before continuing. This is why an LU decomposition in general looks like PA = LU. LU Crout decomposition Note that the decomposition obtained through this procedure is a Doolittle decomposition: the main diagonal of L is composed solely of 1s. If one were to proceed by removing elements above the main diagonal by adding multiples of the columns (instead of removing elements below the diagonal by adding multiples of the rows), we would obtain a Crout decomposition, where the main diagonal of U consists of 1s. Another (equivalent) way of producing a Crout decomposition of a given matrix A is to obtain a Doolittle decomposition of the transpose of A. Indeed, if A^T = L_0 U_0 is the LU decomposition obtained through the algorithm presented in this section, then by taking L = U_0^T and U = L_0^T, we have that A = LU is a Crout decomposition. Through recursion Cormen et al. describe a recursive algorithm for LUP decomposition. Given a matrix A, let P_1 be a permutation matrix that moves a row with a non-zero entry in the first column of A into the first (pivot) position, if such an entry exists; or take P_1 as the identity matrix otherwise. The permuted matrix is then split into its first row and column and a trailing block; the trailing block, updated by a rank-one correction (the Schur complement of the pivot), is decomposed recursively, and the resulting pieces are reassembled into an LUP decomposition of A. Randomized algorithm It is possible to find a low-rank approximation to an LU decomposition using a randomized algorithm. Given an input matrix and a desired low rank k, the randomized LU returns permutation matrices and lower/upper trapezoidal matrices of corresponding sizes, such that with high probability the approximation error is bounded by a constant – depending on the parameters of the algorithm – times the (k+1)-th singular value of the input matrix. Theoretical complexity If two matrices of order n can be multiplied in time M(n), where M(n) ≥ n^a for some a > 2, then an LU decomposition can be computed in time O(M(n)). This means, for example, that an O(n^2.376) algorithm exists based on the Coppersmith–Winograd algorithm. Sparse-matrix decomposition Special algorithms have been developed for factorizing large sparse matrices. These algorithms attempt to find sparse factors L and U.
Ideally, the cost of computation is determined by the number of nonzero entries, rather than by the size of the matrix. These algorithms use the freedom to exchange rows and columns to minimize fill-in (entries that change from an initial zero to a non-zero value during the execution of an algorithm). General treatment of orderings that minimize fill-in can be addressed using graph theory. Applications Solving linear equations Given a system of linear equations in matrix form Ax = b, we want to solve the equation for x, given A and b. Suppose we have already obtained the LUP decomposition of A such that PA = LU, so that LUx = Pb. In this case the solution is done in two logical steps: First, we solve the equation Ly = Pb for y. Second, we solve the equation Ux = y for x. In both cases we are dealing with triangular matrices (L and U), which can be solved directly by forward and backward substitution without using the Gaussian elimination process (however we do need this process or an equivalent one to compute the LU decomposition itself). The above procedure can be repeatedly applied to solve the equation multiple times for different b. In this case it is faster (and more convenient) to do an LU decomposition of the matrix A once and then solve the triangular systems for the different b, rather than using Gaussian elimination each time. The matrices L and U could be thought of as having "encoded" the Gaussian elimination process. The cost of solving a system of linear equations is approximately 2n^3/3 floating-point operations if the matrix A has size n. This makes it twice as fast as algorithms based on QR decomposition, which cost about 4n^3/3 floating-point operations when Householder reflections are used. For this reason, LU decomposition is usually preferred. Inverting a matrix When solving systems of equations, b is usually treated as a vector with a length equal to the height of matrix A. In matrix inversion however, instead of vector b, we have matrix B, where B is an n-by-p matrix, so that we are trying to find a matrix X (also an n-by-p matrix) satisfying AX = B. We can use the same algorithm presented earlier to solve for each column of matrix X. Now suppose that B is the identity matrix of size n. It would follow that the result X must be the inverse of A. Computing the determinant Given the LUP decomposition PA = LU of a square matrix A, the determinant of A can be computed straightforwardly as det(A) = det(P^(-1)) det(L) det(U) = (−1)^S (l_11 l_22 ... l_nn)(u_11 u_22 ... u_nn). The second equation follows from the fact that the determinant of a triangular matrix is simply the product of its diagonal entries, and that the determinant of a permutation matrix is equal to (−1)^S, where S is the number of row exchanges in the decomposition. In the case of LU decomposition with full pivoting, det(A) also equals the right-hand side of the above equation, if we let S be the total number of row and column exchanges. The same method readily applies to LU decomposition without pivoting, by setting P equal to the identity matrix. History The LU decomposition is related to the elimination of linear systems of equations, as e.g. described by Ralston. The solution of N linear equations in N unknowns by elimination was already known to the ancient Chinese. Subsequently, many mathematicians performed and perfected it, yet as the method became relegated to school-level mathematics, few of them left any detailed descriptions. Thus the name Gaussian elimination is only a convenient abbreviation of a complex history. The LU decomposition was introduced by the Polish astronomer Tadeusz Banachiewicz in 1938. To quote: "It appears that Gauss and Doolittle applied the method [of elimination] only to symmetric equations.
More recent authors, for example, Aitken, Banachiewicz, Dwyer, and Crout … have emphasized the use of the method, or variations of it, in connection with non-symmetric problems … Banachiewicz … saw the point … that the basic problem is really one of matrix factorization, or “decomposition” as he called it." Banachiewicz was the first to consider elimination in terms of matrices and in this way formulated LU decomposition, as demonstrated by his graphic illustration. His calculations follow ordinary matrix ones, yet notation deviates in that he preferred to write one factor transposed, to be able to multiply them mechanically column by column, by sliding ruler down consecutive rows of both (using arithmometer). Combined with swapped order of indices his formulae in modern notation read ,, where , primes refer to matrices extended with the last column, and the last component of is -1. Matrix formulae to calculate rows and columns of LU factors by recursion are given in the remaining part of Banachiewicz's paper as Eq. (2.3) and (2.4) (see F90 code example). This paper by Banachiewicz contains both derivation of and factors of respectively non-symmetric and symmetric matrices. They are sometimes confused as later publications tend to tie his name solely with the rediscovery of Cholesky decomposition. Banachiewicz himself can be excused of inaction as already next year he suffered from persecution by occupiers, spending three month in the Sachsenhausen Concentration Camp, on release from which he carried himself from a train his collaborator and co-prisoner Antoni Wilk, who died of exhaustion a week later. Code examples Fortran90 code example Module mlu Implicit None Integer, Parameter :: SP = Kind(1d0) ! set I/O real precision Private Public luban, lusolve Contains Subroutine luban (a,tol,g,h,ip,condinv,detnth) ! By LU decomposition calculates such upper triangles L=G^T, and U=H ! that square A=LU=G^TH. Partial pivoting IP(:) is modern addition. ! Normal use is for square A, however for RHS a already known ! input of (A|a)^T yields (L|y^T)^T where x in L^Tx=y is solution of Ax=a. Real (SP), Intent (In) :: a (:, :) ! input matrix A(m,n), n<=m Real (SP), Intent (In) :: tol ! tolerance for near zero pivot Real (SP), Intent (Out) :: g (size(a,dim=1), size(a,dim=2)), &! L(m,n) h (size(a,dim=2), size(a,dim=2)), &! U(n,n) condinv, & ! 1/cond(A), 0 for singular A detnth ! sign*Abs(det(A))**(1/n) Integer, Intent (Out) :: ip (size(a,dim=2)) ! columns permutation Integer :: k, n, j, l, isig Real (SP) :: tol0, pivmax, pivmin, piv ! n = size (a, dim=2) tol0 = max(tol, 3._SP * epsilon(tol0)) ! use default for tol=0 ! Rectangular A and G are permitted under condition: If (n > size(a, dim=1) .Or. n < 1) Stop 90 Forall (k=1:n) ip(k) = k h=0._SP g=0._SP isig = 1 detnth = 0._SP pivmax = Maxval (Abs (a(1, :))) pivmin = pivmax ! Do k = 1, n ! Banachiewicz (1938) Eq. (2.3) h(k,ip(k:n)) = a(k,ip(k:)) - Matmul(g(k, :k-1),h(:k-1, ip(k:))) ! Find row pivot j = (Maxloc(Abs(h(k, ip(k:))), dim=1)+k-1) If (j /= k) Then ! Swap columns j and k isig = -isig ! Change Det(A) sign because of permutation l = ip(k) ip(k) = ip(j) ip(j) = l End If piv = Abs(h(k, ip(k))) pivmax = Max(piv, pivmax) ! Adjust condinv pivmin = Min(piv, pivmin) If (piv < tol0) Then ! singular matrix isig = 0 pivmax = 1._SP Exit Else ! Account for pivot contribution to Det(A) sign and value If (h(k, ip(k)) < 0._SP) isig = -isig detnth = detnth + Log(piv) End If ! Banachiewicz (1938) Eq. 
(2.4) g(k+1:, k) = (a(k+1:, ip(k))-Matmul(g(k+1:, :k-1),h(:k-1,ip(k)))) & / h(k,ip(k)) g(k,k) = 1._SP End Do detnth = isig * exp(detnth / n) condinv = Abs(isig) * pivmin / pivmax ! Test for square A(n,n) by uncommenting below ! Print *, '|AP-LU| ',Maxval (Abs(a(:,ip(:))-Matmul(g, h(:,ip(:))))) End Subroutine luban Subroutine lusolve(l,u,ip,x) ! Solves Ax=a system using triangle factors LU=A Real (SP), Intent (In) :: l (:, :) ! lower triangle matrix L(n,n) Real (SP), Intent (In) :: u (:, :) ! upper triangle matrix U(n,n) Integer, Intent (In) :: ip (:) ! columns permutation IP(n) Real (SP), Intent (InOut) :: x (:, :) ! X(n,m) for m sets of input ! right hand sides replaced with output unknowns Integer :: n, m, i, j n = size(ip) m = size(x, dim=2) If (n<1.Or.m<1.Or.Any([n,n]/=shape(l)).Or.Any(shape(l)/=shape(u)).Or. & n/=size(x,dim=1)) Stop 91 Do i = 1, m Do j = 1, n x(j,i) = x(j,i)-dot_product(x(:j-1,i),l(j,:j-1)) End Do Do j = n, 1, -1 x(j,i) = (x(j,i)-dot_product(x(j+1:,i),u(j,ip(j+1:)))) / & u(j,ip(j)) End Do End Do End Subroutine lusolve End Module mlu C code example /* INPUT: A - array of pointers to rows of a square matrix having dimension N * Tol - small tolerance number to detect failure when the matrix is near degenerate * OUTPUT: Matrix A is changed, it contains a copy of both matrices L-E and U as A=(L-E)+U such that P*A=L*U. * The permutation matrix is not stored as a matrix, but in an integer vector P of size N+1 * containing column indexes where the permutation matrix has "1". The last element P[N]=S+N, * where S is the number of row exchanges needed for determinant computation, det(P)=(-1)^S */ int LUPDecompose(double **A, int N, double Tol, int *P) { int i, j, k, imax; double maxA, *ptr, absA; for (i = 0; i <= N; i++) P[i] = i; //Unit permutation matrix, P[N] initialized with N for (i = 0; i < N; i++) { maxA = 0.0; imax = i; for (k = i; k < N; k++) if ((absA = fabs(A[k][i])) > maxA) { maxA = absA; imax = k; } if (maxA < Tol) return 0; //failure, matrix is degenerate if (imax != i) { //pivoting P j = P[i]; P[i] = P[imax]; P[imax] = j; //pivoting rows of A ptr = A[i]; A[i] = A[imax]; A[imax] = ptr; //counting pivots starting from N (for determinant) P[N]++; } for (j = i + 1; j < N; j++) { A[j][i] /= A[i][i]; for (k = i + 1; k < N; k++) A[j][k] -= A[j][i] * A[i][k]; } } return 1; //decomposition done } /* INPUT: A,P filled in LUPDecompose; b - rhs vector; N - dimension * OUTPUT: x - solution vector of A*x=b */ void LUPSolve(double **A, int *P, double *b, int N, double *x) { for (int i = 0; i < N; i++) { x[i] = b[P[i]]; for (int k = 0; k < i; k++) x[i] -= A[i][k] * x[k]; } for (int i = N - 1; i >= 0; i--) { for (int k = i + 1; k < N; k++) x[i] -= A[i][k] * x[k]; x[i] /= A[i][i]; } } /* INPUT: A,P filled in LUPDecompose; N - dimension * OUTPUT: IA is the inverse of the initial matrix */ void LUPInvert(double **A, int *P, int N, double **IA) { for (int j = 0; j < N; j++) { for (int i = 0; i < N; i++) { IA[i][j] = P[i] == j ? 1.0 : 0.0; for (int k = 0; k < i; k++) IA[i][j] -= A[i][k] * IA[k][j]; } for (int i = N - 1; i >= 0; i--) { for (int k = i + 1; k < N; k++) IA[i][j] -= A[i][k] * IA[k][j]; IA[i][j] /= A[i][i]; } } } /* INPUT: A,P filled in LUPDecompose; N - dimension. * OUTPUT: Function returns the determinant of the initial matrix */ double LUPDeterminant(double **A, int *P, int N) { double det = A[0][0]; for (int i = 1; i < N; i++) det *= A[i][i]; return (P[N] - N) % 2 == 0 ? 
det : -det; } C# code example public class SystemOfLinearEquations { public double[] SolveUsingLU(double[,] matrix, double[] rightPart, int n) { // decomposition of matrix double[,] lu = new double[n, n]; double sum = 0; for (int i = 0; i < n; i++) { for (int j = i; j < n; j++) { sum = 0; for (int k = 0; k < i; k++) sum += lu[i, k] * lu[k, j]; lu[i, j] = matrix[i, j] - sum; } for (int j = i + 1; j < n; j++) { sum = 0; for (int k = 0; k < i; k++) sum += lu[j, k] * lu[k, i]; lu[j, i] = (1 / lu[i, i]) * (matrix[j, i] - sum); } } // lu = L+U-I // find solution of Ly = b double[] y = new double[n]; for (int i = 0; i < n; i++) { sum = 0; for (int k = 0; k < i; k++) sum += lu[i, k] * y[k]; y[i] = rightPart[i] - sum; } // find solution of Ux = y double[] x = new double[n]; for (int i = n - 1; i >= 0; i--) { sum = 0; for (int k = i + 1; k < n; k++) sum += lu[i, k] * x[k]; x[i] = (1 / lu[i, i]) * (y[i] - sum); } return x; } } MATLAB code example function LU = LUDecompDoolittle(A) n = length(A); LU = A; % decomposition of matrix, Doolittle's Method for i = 1:1:n for j = 1:(i - 1) LU(i,j) = (LU(i,j) - LU(i,1:(j - 1))*LU(1:(j - 1),j)) / LU(j,j); end j = i:n; LU(i,j) = LU(i,j) - LU(i,1:(i - 1))*LU(1:(i - 1),j); end %LU = L+U-I end function x = SolveLinearSystem(LU, B) n = length(LU); y = zeros(size(B)); % find solution of Ly = B for i = 1:n y(i,:) = B(i,:) - LU(i,1:i)*y(1:i,:); end % find solution of Ux = y x = zeros(size(B)); for i = n:(-1):1 x(i,:) = (y(i,:) - LU(i,(i + 1):n)*x((i + 1):n,:))/LU(i, i); end end A = [ 4 3 3; 6 3 3; 3 4 3 ] LU = LUDecompDoolittle(A) B = [ 1 2 3; 4 5 6; 7 8 9; 10 11 12 ]' x = SolveLinearSystem(LU, B) A * x
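For completeness, here is a short driver (not part of the original article; the matrix, right-hand side and tolerance are made up for illustration) showing one way the C routines above – LUPDecompose, LUPSolve and LUPDeterminant – might be called when they are compiled in the same translation unit, together with <math.h> for fabs:

#include <stdio.h>

/* prototypes matching the C code example above */
int LUPDecompose(double **A, int N, double Tol, int *P);
void LUPSolve(double **A, int *P, double *b, int N, double *x);
double LUPDeterminant(double **A, int *P, int N);

int main(void)
{
    /* illustrative 3 x 3 system; note the zero in position (0,0),
     * which partial pivoting handles by a row exchange */
    double row0[] = {0, 5, 7}, row1[] = {4, 2, 1}, row2[] = {2, 7, 9};
    double *A[] = {row0, row1, row2};   /* array of pointers to rows */
    int P[4];                           /* size N + 1, as required above */
    double b[] = {6, 4, 2}, x[3];

    if (!LUPDecompose(A, 3, 1e-12, P))  /* A is overwritten with L-E and U */
        return 1;                       /* matrix is (near) degenerate */
    LUPSolve(A, P, b, 3, x);
    printf("x = %g %g %g, det = %g\n", x[0], x[1], x[2],
           LUPDeterminant(A, P, 3));
    return 0;
}

As described in the comments of LUPDecompose, A is passed as an array of row pointers and is overwritten in place with the packed L and U factors, so the original matrix should be copied first if it is still needed.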
Mathematics
Linear algebra
null
6244916
https://en.wikipedia.org/wiki/Spectral%20energy%20distribution
Spectral energy distribution
A spectral energy distribution (SED) is a plot of energy versus frequency or wavelength of light (not to be confused with a 'spectrum' of flux density vs frequency or wavelength). It is used in many branches of astronomy to characterize astronomical sources. For example, in radio astronomy SEDs are used to show the emission from synchrotron radiation, free-free emission and other emission mechanisms. In infrared astronomy, SEDs can be used to classify young stellar objects. Detector for spectral energy distribution The count rates observed from a given astronomical radiation source have no simple relationship to the flux from that source, such as might be incident at the top of the Earth's atmosphere. This lack of a simple relationship is due in no small part to the complex properties of radiation detectors. These detector properties can be divided into those that merely attenuate the beam (including residual atmosphere between source and detector, absorption in the detector window when present, and the quantum efficiency of the detecting medium) and those that redistribute the beam in detected energy (such as fluorescent photon escape phenomena and the inherent energy resolution of the detector).
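To illustrate the attenuation point in a purely schematic way, the sketch below (in C; all numbers and functional forms are invented for illustration and do not correspond to any real detector) multiplies an assumed power-law photon spectrum by window-absorption and quantum-efficiency factors before summing over energy; energy redistribution, the second class of effects mentioned above, is not modelled:

#include <math.h>
#include <stdio.h>

/* Schematic only: detected count rate as an energy-binned sum of the
 * incident spectrum times purely multiplicative attenuation factors. */
int main(void)
{
    double counts = 0.0;
    for (double E = 1.0; E <= 10.0; E += 0.1) {           /* energy grid, keV */
        double incident   = 10.0 * pow(E, -1.7);           /* assumed power-law flux */
        double window     = exp(-0.3 / E);                 /* toy window absorption */
        double efficiency = 1.0 - exp(-2.0 / E);           /* toy quantum efficiency */
        counts += incident * window * efficiency * 0.1;    /* times bin width */
    }
    printf("detected count rate ~ %.2f counts/s (schematic)\n", counts);
    return 0;
}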
Physical sciences
Basics
Astronomy
6245046
https://en.wikipedia.org/wiki/Spectral%20index
Spectral index
In astronomy, the spectral index of a source is a measure of the dependence of radiative flux density (that is, radiative flux per unit of frequency) on frequency. Given frequency ν in Hz and radiative flux density S_ν in Jy, the spectral index α is given implicitly by S_ν ∝ ν^α. Note that if flux does not follow a power law in frequency, the spectral index itself is a function of frequency. Rearranging the above, we see that the spectral index is given by α = ∂ log S_ν / ∂ log ν. Clearly the power law can only apply over a certain range of frequency, because otherwise the integral over all frequencies would be infinite. The spectral index is also sometimes defined in terms of wavelength λ. In this case, the spectral index α_λ is given implicitly by S_λ ∝ λ^(α_λ), and at a given wavelength it may be calculated by taking the derivative α_λ = ∂ log S_λ / ∂ log λ. The spectral index defined using the flux density per unit wavelength, which we may call α_λ, differs from the index defined using the flux density per unit frequency. The total flux between two frequencies or wavelengths is the same whichever variable is used (S_ν dν = S_λ dλ), which implies that S_λ = (c/λ²) S_ν, so the two indices are simply related, differing in sign and by an offset of 2. The opposite sign convention is sometimes employed, in which the spectral index is given by S_ν ∝ ν^(−α). The spectral index of a source can hint at its properties. For example, using the positive sign convention, the spectral index of the emission from an optically thin thermal plasma is -0.1, whereas for an optically thick plasma it is 2. Therefore, a spectral index of -0.1 to 2 at radio frequencies often indicates thermal emission, while a steep negative spectral index typically indicates synchrotron emission. The observed emission can be affected by several absorption processes that affect the low-frequency emission the most; the reduction in the observed emission at low frequencies might result in a positive spectral index even if the intrinsic emission has a negative index. Therefore, it is not straightforward to associate positive spectral indices with thermal emission. Spectral index of thermal emission At radio frequencies (i.e. in the low-frequency, long-wavelength limit), where the Rayleigh–Jeans law is a good approximation to the spectrum of thermal radiation, intensity is given by B_ν(T) ≈ 2 ν² k T / c², where k is the Boltzmann constant. Taking the logarithm of each side and taking the partial derivative with respect to log ν yields the value 2. Using the positive sign convention, the spectral index of thermal radiation is thus approximately 2 in the Rayleigh–Jeans regime. The spectral index departs from this value at shorter wavelengths, for which the Rayleigh–Jeans law becomes an increasingly inaccurate approximation, tending towards zero as intensity reaches a peak at a frequency given by Wien's displacement law. Because of the simple temperature dependence of radiative flux in the Rayleigh–Jeans regime, the radio spectral index is defined implicitly by S ∝ ν^α T, where T is the brightness temperature.
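As a simple illustration of the definition above, the following sketch (in C; the flux values are hypothetical) estimates the spectral index from measurements at two frequencies using alpha = log(S2/S1) / log(nu2/nu1), which follows directly from S ∝ ν^α:

#include <math.h>
#include <stdio.h>

/* Two-point estimate of the spectral index alpha, using the convention
 * S proportional to nu^alpha (flux density S in Jy, frequency nu in Hz):
 *   alpha = log(S2/S1) / log(nu2/nu1)                                    */
static double spectral_index(double nu1, double s1, double nu2, double s2)
{
    return log(s2 / s1) / log(nu2 / nu1);
}

int main(void)
{
    /* hypothetical measurements of a synchrotron-dominated source */
    double nu1 = 1.4e9, s1 = 2.50;   /* 1.4 GHz, 2.50 Jy */
    double nu2 = 5.0e9, s2 = 1.02;   /* 5.0 GHz, 1.02 Jy */
    printf("alpha = %.2f\n", spectral_index(nu1, s1, nu2, s2));
    return 0;
}

For the numbers shown the result is roughly -0.7, the kind of steep negative index typically associated with synchrotron emission, while values near +2 would indicate optically thick thermal emission.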
Physical sciences
Radio astronomy
Astronomy
12877273
https://en.wikipedia.org/wiki/Testing%20cosmetics%20on%20animals
Testing cosmetics on animals
Cosmetic testing on animals is a type of animal testing used to test the safety and hypoallergenic properties of cosmetic products for use by humans. Since this type of animal testing is often harmful to the animal subjects, it is opposed by animal rights activists and others. Cosmetic animal testing is banned in many parts of the world, including Colombia, the European Union, the United Kingdom, India, and Norway. Cosmetics that have been produced without any testing on animals are sometimes known as "cruelty-free cosmetics". Some popular cruelty-free beauty brands include: E.L.F., Charlotte Tilbury, Farsali, Fenty Beauty, Fenty Skin, Glow Recipe and others. The website "Cruelty-Free Kitty" was created to assess which brands are cruelty-free. Furthermore, some brands have participated in animal testing in the past; however, if they currently do not test on animals, their cosmetics are considered "cruelty-free". Definition Using animal testing in the development of cosmetics may involve testing either a finished product or the individual ingredients of a finished product on animals, often rabbits, as well as mice, rats, monkeys, dogs, guinea pigs and other animals. Cosmetics can be defined as products applied to the body to enhance the body's appearance or to cleanse the body. This includes all hair products, makeup, and skin products. The United States Food and Drug Administration (FDA) continues to endorse animal testing methods. Re-using existing test data obtained from previous animal testing is generally not considered to be cosmetic testing on animals; however, the acceptability of this to opponents of testing is inversely proportional to how recent the data is. Methods Methods of testing cosmetics on animals include various tests that are categorized differently based on which areas the cosmetics will be used for. One new ingredient in any cosmetic product used in these tests could lead to the deaths of at least 1,400 animals. Dermal penetration: Rats are mostly used in this method, which analyzes chemical movement through the skin and the penetration of the chemical into the bloodstream. Dermal penetration is a method that creates a better understanding of skin absorption. Skin sensitization: This is a method that tests for allergic reactions to different chemicals. In some tests, a chemical adjuvant is injected to boost the immune system; this version of the test was typically performed on guinea pigs. In some tests, no chemical adjuvant is injected with the test chemical, or the chemical is applied on a shaved patch of skin. The reaction is then recorded by the appearance of the skin afterward. Acute toxicity: This test is used to determine the danger of exposure to a chemical by mouth, skin, or inhalation. It shows the various dangerous effects of a substance that result from a short period of exposure. Large numbers of rats and mice are injected in Lethal Dose 50 (LD50) tests that continue until half of the test subjects die. Other tests can use a smaller number of animals but can cause convulsions, loss of motor function, and seizures. The animals are often then killed afterward to gather information about the internal effects of the chemicals. Draize test: This is a method of testing that may cause irritation or corrosion to the skin or eyes of animals, dermal sensitization, airway sensitization, endocrine disruption, and the LD50 (which refers to the lethal dose which kills 50% of the treated animals).
Skin corrosivity or irritation: This method assesses the potential of a substance to cause irreversible damage to the skin. It is typically performed on rabbits and involves putting chemicals on a shaved patch of skin. This determines the level of damage to the skin including itching, inflammation, swelling, etc. Alternatives A variety of alternatives to animal testing exists. Cosmetics manufacturers who do not test on animals may use in vitro screens to test for endpoints that can determine the potential risk to humans with very high sensitivity and specificity. Companies such as CeeTox in the USA, acquired by Cyprotex, specialize in such testing, and organizations like the Center for Alternatives to Animal Testing (CAAT), PETA and many others advocate the use of in vitro and other non-animal tests in the development of consumer products. By using safe ingredients from a list of 5,000 that have already been tested, in conjunction with modern methods of cosmetics testing, the need for tests using animals is removed. EpiSkin, EpiDerm, SkinEthic and BioDEpi are lab-made reconstructed artificial human skin models that serve as non-animal alternative testing platforms with histological similarity to native skin tissue. Artificial skin can imitate actual human skin, and cosmetic products can be tested on it. For example, using UV light on EpiSkin can cause it to resemble older skin, and adding melanocytes will turn the skin a darker color. This helped create a spectrum of different skin colors that can then be used to compare the results of sunblock on a variety of people. To address potential issues with other parts of the human body, research companies such as NOTOX have developed a synthetic model of the human liver, the body's main detoxifying organ, to test whether it can break down harmful ingredients and chemicals. Lab-grown tissues are now being used to test chemicals in makeup products. MatTek is one of the companies that do this. It sells small amounts of skin cells to companies to test their products on them. Some of these companies are those that make laundry detergent, makeup, toilet bowl cleaner, anti-aging creams, and tanning lotion. Without these tissues, companies would be testing their products on living animals. Lab-grown tissues are a promising alternative to testing harmful products on animals. One lab was able to grow 11 different types of tissue in a petri dish. The drawback was that the tissues were not fully functional on their own; in fact, many of these tissues only resembled tiny parts of a full-sized human organ, most of which were too small to transplant into humans. The technology could potentially be very useful, but growth time remains a major drawback: ministomachs that took about nine weeks to cultivate in a petri dish formed only "oval-shaped, hollow structures". Research companies can also use body parts and organs taken from animals slaughtered for the meat industry to perform tests such as the Bovine Corneal Opacity and Permeability Test and the Isolated Chicken Eye Test. Many companies have not made the switch to cruelty-free yet for many reasons, one of them being the time it takes for lab-grown tissues to be usable. Animals, on the other hand, can mature quickly. Rats, for example, have a much quicker growth rate: "From birth to adult, rats take about three weeks to mature and begin fending for themselves.
The rodents reach sexual maturity in about five weeks and begin mating soon after to produce the next generation to start the rat life cycle over again". On top of the extremely short time it takes a rat to mature, they can provide us with a complete set of organ systems, not just a paper-thin sheet of cells. Rats can also reproduce, and they do so at a very fast pace "In general, rats produce about seven offspring per litter and can reach up to 14 at times. Typical gestation periods last only a few weeks, allowing each female rat to produce around five litters a year". History The first known tests on animals were done as early as 300 BC. "Writings of ancient civilizations all document the use of animal testing. These civilizations, led by men like Aristotle and Erasistratus, used live animals to test various medical procedures". This testing was important because it led to new discoveries such as how blood circulated and the fact that living beings needed air to survive. The idea of taking an animal and comparing it to how human beings survived was a completely new idea. It would not have existed (at least not as quickly as it did) without our ancestors studying animals and how their bodies worked. "Proving the germ theory of disease was the crowning achievement of the French scientist Louis Pasteur. He was not the first to propose that diseases were caused by microscopic organisms, but the view was controversial in the 19th century and opposed the accepted theory of 'spontaneous generation'". The idea of germs and other microscopic organisms was an entirely new idea and would not have come to be without the use of animals. In 1665, scientists Robert Hooke and Antoni van Leeuwenhoek discovered and studied how germs worked. They published a book about their discovery, which was not accepted by very many people, including the science community, at first. After some time, scientists were able to give animals diseases from microbes and realized that microbes really did exist. From there, they were able to use animals to understand how the disease worked, and the effects it could potentially have on the human body. All of this has led up to something a bit more recent, the use of animals to test beauty products. This has become a very controversial topic in recent years. There are various people who are extremely against the use of animals for this purpose, and for a good reason. "Typically, animal tests for cosmetics include skin and eye irritation tests where chemicals are rubbed onto the shaved skin or dripped into the eyes of rabbits; repeated oral force-feeding studies lasting weeks or months to look for signs of general illness or specific health hazards, such as cancer or birth defects; and even widely condemned "lethal dose" tests, in which animals are forced to swallow massive amounts of a test chemical to determine the dose that causes death". This kind of testing can be vital in finding important information about products, but can be harmful to the animals it is tested on. In 1937, a mistake was made that ended up changing the pharmaceutical industry drastically. A company created a medicine (elixir sulfanilamide) "to treat streptococcal infections", and without any scientific research the medicine was out on shelves. This medicine turned out to be extremely poisonous to people, leading to large poisoning outbreaks followed by over 100 deaths. This epidemic led to a law being passed in 1938, called the U.S. 
Federal Food, Drug, and Cosmetic Act, enforcing more rigorous guidelines on cosmetic products. After this law was passed, companies looked to animals to test their products, in turn, creating the first encounters of cosmetic animal testing. Non-profit organizations Cruelty Free International: Cruelty Free International and its partners manage the certification of all the companies across the world looking to be cruelty-free. Companies producing beauty and household products which do not test their products on animals for any market can request membership of The Leaping Bunny Program, which allows that company to feature Cruelty Free International's Leaping Bunny logo on their products. This program sets global standard of operations and sales. Companies headquartered internationally can obtain certification from Cruelty Free International. Companies headquartered in the United States and Canada can obtain certification from The Coalition for Consumer Information on Cosmetics (CCIC). In 2013, over 500 companies were certified. However, some companies' certifications were revoked after it was discovered they continued to test on animals in Asia. Humane Society International: This is a global animal protection organization that works to help all animals—including animals in laboratories. This organization promotes human-animal interaction to tackle the existence of all cruelty that innocent animals experience. PETA: PETA certifies cosmetics and beauty products as free from animal testing, or as "cruelty-free" (free from animal testing and also vegan). Procedures of animal testing There is a strategy used in animal testing laboratories titled the 'Three R's:' Reduction, refinement, and replacement' (Doke, "Alternatives to Animal Testing: A Review"). Replacement: This provides the opportunity to study the response of cellular models, but in other words, replacement searches for alternatives that could be done rather than testing on animal subjects. Reduction: This approach is built upon the ethics to have a minimal number of animal subjects being tested on for current and later tests. Refinement: This suggests that the planned distress and pain caused to an animal subject be as little as possible. This approach focuses on making a home for the animals before entering testing grounds to elongate the life of laboratory animals. Discomfort in animals causes an imbalance in hormonal levels which creates fluctuating results during testing. Legal requirements and status Due to the strong public backlash against cosmetic testing on animals, most cosmetic manufacturers say their products are not tested on animals. However, they are still required by trading standards and consumer protection laws in most countries to show their products are not toxic and not dangerous to public health. They also need to show that the ingredients are not dangerous in large quantities, such as when in transport or in the manufacturing plant. In some countries, it is possible to meet these requirements without any further tests on animals. Other countries, may require animal testing to meet legal requirements. The United States and Japan are frequently criticized for their insistence on stringent safety measures, which often require animal testing. Some retailers distinguish themselves in the marketplace by their stance on animal testing. 
Legal requirements in Japan Although Japanese law does not require non-medicated cosmetics to be tested on animals, it does not prohibit it either, leaving the decision to individual companies. Animal testing is required when the product contains newly-developed tar colours , ultraviolet ray protective ingredients or preservatives, and when the amount of any ingredient regulated in terms of how much can be added is increased. Japanese brands such as Shiseido and Mandom have ended much, but not all, of their animal testing. However, most other leading cosmetics companies in Japan still test on animals. Jurisdictions with bans Brazil, São Paulo São Paulo in Brazil banned cosmetic animal testing in 2014. Canada In June 2023, the Government of Canada banned the testing of cosmetics on animals, and the sale of cosmetics tested on animals. Amendments to the Food and Drugs Act to end cosmetic animal testing through Bill C-47, the Budget Implementation Act, 2023, No. 1, went into effect on December 22, 2023. Colombia In June 2020, the Senate of the Republic of Colombia approved a resolution banning the commercialization and testing of cosmetics on animals. In August 2020, presidential assent was granted to the resolution, thus effectively banning the testing of cosmetics on animals in Colombia. European Union The European Union (EU) followed suit, after it agreed to phase in a near-total ban on the sale of animal-tested cosmetics throughout the EU from 2009, and to ban cosmetics-related animal testing. Animal testing is regulated in EC Regulation 1223/2009 on cosmetics. Imported cosmetics ingredients tested on animals were phased out for EU consumer markets in 2013 by the ban, but can still be sold to outside of the EU. Norway banned cosmetics animal testing at the same time as the EU. In May 2018, the European Parliament voted for the EU and its Member States to work towards a UN convention against the use of animal testing for cosmetics. European Free Trade Association The four EFTA countries that are not in the EU, i.e. Norway, Liechtenstein, Switzerland, and Iceland, also banned cosmetic testing. Guatemala In 2017, Guatemala banned cosmetic animal testing. India In early 2014, India announced a ban on testing cosmetics on animals in the country, thereby becoming the second country in Asia to do so. Later India banned import of cosmetics tested on animals in November 2014. Israel Israel banned "the import and marketing of cosmetics, toiletries, or detergents that were tested on animals" in 2013. New Zealand In 2015, New Zealand also banned animal testing. However, the ban on testing cosmetics on animals was unlikely to lead to products being stripped from shelves in New Zealand, as around 90 percent of cosmetic products sold in New Zealand were made overseas. Taiwan In 2015, Taiwan launched a bill proposing a ban on cosmetic testing on animals. It passed in 2016 and went into effect in 2019. Shortly before the ban went into effect on 9 November 2019, however, it was noted that most Taiwan cosmetic companies already did not experiment with animals. Turkey Turkey "banned any animal testing for cosmetic products that have already been introduced to the market." UK Animal testing on cosmetics or their ingredients was banned in the UK in 1998. Jurisdictions where prohibitions are considered Association of Southeast Asian Nations The Association of Southeast Asian Nations (ASEAN) is potentially "making strides toward ending cosmetics testing on animals." 
Australia In Australia, the End Cruel Cosmetics Bill was introduced to Parliament in March 2014, which would ban local testing, which generally does not happen there, and importation of cosmetics tested on animals. In 2016 a bill was passed to ban the sale of cosmetics tested on animals, which came into effect in July 2017. United States In March 2014, the Humane Cosmetics Act was introduced to the U.S. Congress. It would ban cosmetic testing on animals and eventually would ban the sale of cosmetics tested on animals. The bill did not advance. Similar bills have been introduced and passed at the state level, and testing cosmetics on animals has been banned in ten US states as of 2023: California, Nevada, Illinois, Hawaii, Maryland, Maine, New Jersey, Virginia, Louisiana, and New York. Mexico On 19 March 2020, the Mexican Senate unanimously passed legislation banning testing cosmetics on animals. The proposed ban now awaits approval from the lower house of the Mexican Congress, the Mexican Chamber of Deputies. South Korea South Korea is also potentially "making strides toward ending cosmetics testing on animals." Other statuses China China passed a law on 30 June 2014 to eliminate the requirement for animal testing of cosmetics. Though domestically-produced ordinary cosmetic goods do not require testing, animal testing is still mandated by law for Chinese-made "cosmeceuticals" (cosmetic goods which make a functional claim) which are available for sale in China. Cosmetics intended solely for export are exempt from the animal testing requirement. As of March 2019, post-market testing (i.e. tests on cosmetics after they hit the market) for finished imported and domestically produced cosmetic products will no longer require animal testing. Chinese law was further amended in April 2020, fully dropping all remaining mandatory animal testing requirements for all cosmetics - both locally produced and imported, instead creating a regulatory 'preference' for non-animal based testing methods in the safety certification of cosmetic products. Russia In 2013, the Russian Ministry of Health stated "Toxicological testing is performed by means of testing for skin allergic reaction or test on mucous tissue/eye area (with use of lab animals) or by use of alternative general toxicology methods (IN VITRO). In this manner the technical regulations include measures which provide an alternative to animal testing".
Physical sciences
Research methods
Basics and measurement
12877572
https://en.wikipedia.org/wiki/Fan%20%28machine%29
Fan (machine)
A fan is a powered machine that creates airflow. A fan consists of rotating vanes or blades, generally made of wood, plastic, or metal, which act on the air. The rotating assembly of blades and hub is known as an impeller, rotor, or runner. Usually, it is contained within some form of housing, or case. This may direct the airflow, or increase safety by preventing objects from contacting the fan blades. Most fans are powered by electric motors, but other sources of power may be used, including hydraulic motors, handcranks, and internal combustion engines. Mechanically, a fan can be any revolving vane or vanes used for producing currents of air. Fans produce air flows with high volume and low pressure (although higher than ambient pressure), as opposed to compressors which produce high pressures at a comparatively low volume. A fan blade will often rotate when exposed to an air-fluid stream, and devices that take advantage of this, such as anemometers and wind turbines, often have designs similar to that of a fan. Typical applications include climate control and personal thermal comfort (e.g., an electric table or floor fan), vehicle engine cooling systems (e.g., in front of a radiator), machinery cooling systems (e.g., inside computers and audio power amplifiers), ventilation, fume extraction, winnowing (e.g., separating chaff from cereal grains), removing dust (e.g. sucking as in a vacuum cleaner), drying (usually in combination with a heat source) and providing draft for a fire. Some fans may be indirectly used for cooling in the case of industrial heat exchangers. While fans are effective at cooling people, they do not cool air. Instead, they work by evaporative cooling of sweat and increased heat convection into the surrounding air due to the airflow from the fans. Thus, fans may become less effective at cooling the body if the surrounding air is near body temperature and contains high humidity. History Fans made with leaves were prevalent in ancient Egypt and India. In ancient India, they were handheld fans made from bamboo strips or other plant fiber that could be rotated or fanned to move air. During British rule, the word punkah came to be used by Anglo-Indians to mean a large swinging flat fan, fixed to the ceiling and pulled by a servant called the punkawallah. For purposes of air conditioning, the Han dynasty craftsman and engineer Ding Huan (fl. 180 CE) invented a manually operated rotary fan with seven wheels that measured 3 m (10 ft) in diameter; in the 8th century, during the Tang dynasty (618–907), the Chinese applied hydraulic power to rotate the fan wheels for air conditioning, while the rotary fan became even more common during the Song dynasty (960–1279). During the Heian period (794–1185) in Japan, fans took on the role of symbolizing social class as well as a mechanical role. The tessen, a Japanese fan used in feudal times, was a dangerous weapon hidden in plain sight in the shape of a regular fan, used by samurai when katanas were not ideal. In the 17th century, the experiments of scientists, including Otto von Guericke, Robert Hooke, and Robert Boyle, established the basic principles of vacuum and airflow. The English architect Sir Christopher Wren applied an early ventilation system in the Houses of Parliament that used bellows to circulate air. Wren's design was the catalyst for much later improvement and innovation. The first rotary fan used in Europe was for mine ventilation during the 16th century, as illustrated by Georg Agricola (1494–1555).
John Theophilus Desaguliers, a British engineer, demonstrated the successful use of a fan system to draw out stagnant air from coal mines in 1727—ventilation was essential in coal mines to prevent asphyxiation—and soon afterward he installed a similar apparatus in Parliament. The civil engineer John Smeaton, and later John Buddle installed reciprocating air pumps in the mines in the North of England, though the machinery was liable to breaking down. Steam In 1849 a 6m radius steam-driven fan, designed by William Brunton, was made operational in the Gelly Gaer Colliery of South Wales. The model was exhibited at the Great Exhibition of 1851. Also in 1851 David Boswell Reid, a Scottish doctor installed four steam-powered fans in the ceiling of St George's Hospital in Liverpool so that the pressure produced by the fans would force the incoming air upward and through vents in the ceiling. Improvements in the technology were made by James Nasmyth, Frenchman Theophile Guibal and J. R. Waddle. Electrical Between 1882 and 1886 Schuyler Wheeler invented a fan powered by electricity. It was commercially marketed by the American firm Crocker & Curtis electric motor company. In 1885 a desktop direct drive electric fan was commercially available by Stout, Meadowcraft & Co. in New York. In 1882, Philip Diehl developed the world's first electric ceiling mounted fan. During this intense period of innovation, fans powered by alcohol, oil, or kerosene were common around the turn of the 20th century. In 1909, KDK of Japan pioneered the invention of mass-produced electric fans for home use. In the 1920s, industrial advances allowed steel fans to be mass-produced in different shapes, bringing fan prices down and allowing more homeowners to afford them. In the 1930s, the first art deco fan (the "Silver Swan") was designed by Emerson. By the 1940s, Crompton Greaves of India became the world's largest manufacturer of electric ceiling fans mainly for sale in India, Asia, and the Middle East. By the 1950s, table and stand fans were manufactured in bright colors and were eye-catching. Window and central air conditioning in the 1960s caused many companies to discontinue production of fans, but in the mid-1970s, with an increasing awareness of the cost of electricity and the amount of energy used to heat and cool homes, turn-of-the-century styled ceiling fans became popular again as both decorative and energy-efficient. In 1998 William Fairbank and Walter K. Boyd invented the high-volume low-speed (HVLS) ceiling fan, designed to reduce energy consumption by using long fan blades rotating at low speed to move a relatively large volume of air. Social implications Before powered fans were widely accessible, their use related to the social divide between social classes. In Britain and China, they were initially only installed in the buildings of Parliament and in noble homes. In Ancient Egypt (3150 BC), servants were required to fan Pharaohs and important figures. In parts of the world such as India, where the temperature reaches above , standing and electric box fans are essential in the business world for customer comfort and an efficient work environment. Fans have become solar-powered, energy-efficient, and battery-powered in places with unreliable energy sources. In South Korea, fans play a part in an old wives tale. 
Many older South Korean citizens believe in the unscientific and unsupported myth of fan death due to excessive use of an electric fan; Korean electric fans usually turn off after a few hours to protect from fan death. Typical room electrical fans consume 50 to 100 watts of power, while air-conditioning units use 500 to 4000 watts; fans use less electricity but do not cool the air, simply providing evaporative cooling of sweat. Commercial fans are louder than AC units and can be disruptively loud. According to the U.S. Consumer Product Safety Commission, reported incidents related to box fans include, fire (266 incidents), potential fire (29 incidents), electrocution (15), electric shock (4 incidents), and electrical hazard (2 incidents). Injuries related to AC units mostly relate to their falling from buildings. Types Mechanical revolving blade fans are made in a wide range of designs. They are used on the floor, table, desk, or hung from the ceiling (ceiling fan) and can be built into a window, wall, roof, etc. Tower fans tend to have smaller blades inside. Electronic systems generating significant heat, such as computers, incorporate fans. Appliances such as hair dryers and space heaters also use fans. They move air in air-conditioning systems and in automotive engines. Fans used for comfort inside a room create a wind chill by increasing the heat transfer coefficient but do not lower temperatures directly. Fans used to cool electrical equipment or in engines or other machines cool the equipment directly by exhausting hot air into the cooler environment outside of the machine so that cooler air flows in. Three main types of fans are used for moving air, axial, centrifugal (also called radial) and cross flow (also called tangential). The American Society of Mechanical Engineers Performance Testing Code 11 (PTC) provides standard procedures for conducting and reporting tests on fans, including those of the centrifugal, axial, and mixed flows. Axial-flow Axial-flow fans have blades that force air to move parallel to the shaft about which the blades rotate. This type of fan is used in a wide variety of applications, ranging from small cooling fans for electronics to the giant fans used in cooling towers. Axial flow fans are applied in air conditioning and industrial process applications. Standard axial flow fans have diameters of 300–400 mm or 1,800–2,000 mm and work under pressures up to 800 Pa. Special types of fans are used as low-pressure compressor stages in aircraft engines. Examples of axial fans are: Table fan: Basic elements of a typical table fan include the fan blade, base, armature, and lead wires, motor, blade guard, motor housing, oscillator gearbox, and oscillator shaft. The oscillator is a mechanism that motions the fan from side to side. The armature axle shaft comes out on both ends of the motor; one end of the shaft is attached to the blade, and the other is attached to the oscillator gearbox. The motor case joins the gearbox to contain the rotor and stator. The oscillator shaft combines the weighted base and the gearbox. A motor housing covers the oscillator mechanism. The blade guard joins the motor case for safety. Domestic extractor fan: Wall- or ceiling-mounted, the domestic extractor fan is employed to remove moisture and stale air from domestic dwellings. Bathroom extractor fans typically utilize a four-inch (100 mm) impeller, while kitchen extractor fans typically use a six-inch (150 mm) impeller as the room is often bigger. 
Axial fans with five-inch (125 mm) impellers are also used in larger bathrooms, though they are much less common. Domestic axial extractor fans are unsuitable for duct runs over 3 m or 4 m, depending on the number of bends in the run, as the increased air pressure in longer pipework inhibits the fan's performance. Continuous-running extractor fans run continuously at a very slow rate, speeding up when necessary, for example when a bathroom light is switched on. At working speed, they are just normal extractor fans. They typically extract 5 to 10 l/sec at continuous speed and use little electricity, 1 or 2 watts, for a low annual cost. Some have humidity sensors to control trickle operation. They have the advantage of ensuring ventilation and preventing the build-up of humidity. Alternatively, a normal extractor fan may be fitted to operate intermittently at full power for the same purpose. In cold weather they may noticeably cool the room they are in, or, if the door is open, the house. Electro-mechanical fans: Among collectors, electro-mechanical fans are rated according to their condition, size, age, and number of blades. Four-blade designs are the most common; five-blade or six-blade designs are rare. The materials from which the components are made, such as brass, are important factors in fan desirability. A ceiling fan is a fan suspended from the ceiling of a room. Most ceiling fans rotate at relatively low speeds and do not have blade guards, because the blades are out of normal reach and guards would be unwieldy. Ceiling fans are used in both residential and industrial/commercial settings. In automobiles, a mechanical or electrically driven fan provides engine cooling and prevents the engine from overheating by blowing or drawing air through a coolant-filled radiator. The fan may be driven with a belt and pulley off the engine's crankshaft, or by an electric motor switched on or off by a thermostatic switch. Computer fans are used for cooling electrical components and in laptop coolers, and fans inside audio power amplifiers help to draw heat away from the electrical components. Variable-pitch fan: A variable-pitch fan is used to precisely control static pressure within supply ducts. The blades are arranged to rotate upon a control-pitch hub while the fan wheel spins at a constant speed. As the hub moves toward the rotor, the blades increase their angle of attack, and an increase in flow results.
Centrifugal
Often called a "squirrel cage" (because of its general similarity in appearance to exercise wheels for pet rodents) or "scroll fan", the centrifugal fan has a moving component (called an impeller) that consists of a central shaft about which a set of blades forming a spiral, or ribs, is positioned. Centrifugal fans blow air at right angles to the intake of the fan and spin the air outwards to the outlet (by deflection and centrifugal force). The impeller rotates, causing air to enter the fan near the shaft and move perpendicularly from the shaft to the opening in the scroll-shaped fan casing. A centrifugal fan produces more pressure for a given air volume, and is used where this is desirable, such as in leaf blowers, blowdryers, air mattress inflators, inflatable structures, climate control in air handling units, and various industrial purposes. They are typically noisier than comparable axial fans (although some types of centrifugal fans, such as those in air handling units, are quieter).
Cross-flow The cross-flow or tangential fan, sometimes known as a tubular fan, was patented in 1893 by Paul Mortier, and is used extensively in heating, ventilation, and air conditioning (HVAC), especially in ductless split air conditioners. The fan is usually long relative to its diameter, so the flow remains approximately two-dimensional away from the ends. The cross-flow fan uses an impeller with forward-curved blades, placed in a housing consisting of a rear wall and a vortex wall. Unlike radial machines, the main flow moves transversely across the impeller, passing the blading twice. The flow within a cross-flow fan may be broken up into three distinct regions: a vortex region near the fan discharge, called an eccentric vortex, the through-flow region, and a paddling region directly opposite. Both the vortex and paddling regions are dissipative, and as a result, only a portion of the impeller imparts usable work on the flow. The cross-flow fan, or transverse fan, is thus a two-stage partial admission machine. The popularity of the crossflow fan in HVAC comes from its compactness, shape, quiet operation, and ability to provide a high-pressure coefficient. Effectively a rectangular fan in terms of inlet and outlet geometry, the diameter readily scales to fit the available space, and the length is adjustable to meet flow rate requirements for the particular application. Common household tower fans are also cross-flow fans. Much of the early work focused on developing the cross-flow fan for both high- and low-flow-rate conditions and resulted in numerous patents. Key contributions were made by Coester, Ilberg and Sadeh, Porter and Markland, and Eck. One interesting phenomenon particular to the cross-flow fan is that, as the blades rotate, the local air incidence angle changes. The result is that in certain positions, the blades act as compressors (pressure increase), while at other azimuthal locations, the blades act as turbines (pressure decrease). Since the flow enters and exits the impeller radially, the crossflow fan has been studied and prototyped for potential aircraft applications. Due to the two-dimensional nature of the flow, the fan can be integrated into a wing for use in both thrust production and boundary-layer control. A configuration that utilizes a crossflow fan located at the wing leading edge is the FanWing design concept initially developed around 1997 and under development by a company of the same name. This design creates lift by deflecting the wake downward due to the rotational direction of the fan, causing a large Magnus force, similar to a spinning leading-edge cylinder. Another configuration utilizing a crossflow fan for thrust and flow control is the propulsive wing, another experimental concept prototype initially developed in the 1990s and 2000s. In this design, the crossflow fan is placed near the trailing edge of a thick wing and draws the air from the wing's suction (top) surface. By doing this, the propulsive wing is nearly stall-free, even at extremely high angles of attack, producing very high lift. However, the fanwing and propulsive wing concepts remain experimental and have only been used for unmanned prototypes. A cross-flow fan is a centrifugal fan in which the air flows straight through the fan instead of at a right angle. The rotor of a cross-flow fan is covered to create a pressure differential. A cross-flow fan has two walls outside the impeller and a thick vortex wall inside. The radial gap decreases in the direction of the impeller rotation. 
The rear wall has a log-spiral profile, while the vortex stabilizer is a thin horizontal wall with a rounded edge. The resultant pressure difference allows air to flow straight through the fan, even though the fan blades counter the flow of air on one side of the rotation. Cross-flow fans give airflow along the entire width of the fan; however, they are noisier than ordinary centrifugal fans. Cross-flow fans are often used in ductless air conditioners, air doors, in some types of laptop coolers, in automobile ventilation systems, and for cooling in medium-sized equipment such as photocopiers. Bladeless fans Dyson Air Multiplier fans introduced to the consumer market in 2009 have popularized a 1981 design by Toshiba that produces a fan that has no exposed fan blades or other visibly moving parts (unless augmented by other features such as for oscillation and directional adjustment). A relatively small quantity of air from a high-pressure-bladed impeller fan, which is contained inside the base rather than exposed, induces the slower flow of a larger airmass through a circular or oval-shaped opening via a low-pressure area created by an airfoil surface shape (the Coandă effect). Air curtains and air doors also utilize this effect to help retain warm or cool air within an otherwise exposed area that lacks a cover or door. Air curtains are commonly used on open-face dairy, freezer, and vegetable displays to help retain chilled air within the cabinet using a laminar airflow circulated across the display opening. The airflow is typically generated by a mechanical fan of any type, as described in this article, and is hidden in the base of the display cabinet. HVAC linear slot diffusers also utilize this effect to increase airflow evenly in rooms compared to registers while reducing the energy used by the air handling unit blower. Installation Fans may be installed in various ways, depending on the application. They are often used in free installations without any housing. There are also some specialised installations. Ducted fan In vehicles, a ducted fan is a method of propulsion in which a fan, propeller or rotor is surrounded by an aerodynamic duct or shroud which enhances its performance to create aerodynamic thrust or lift to transport the vehicle. Jet fan In ventilation systems, a jet fan, also known as an impulse or induction fan, ejects a stream of air that entrains ambient air to circulate the ambient air. The system takes up less space than conventional ventilation ducting and can significantly increase the rates of inflow of fresh air and expulsion of stale air. Noise Fans generate noise from the rapid flow of air around blades and obstacles causing vortexes, and from the motor. Fan noise is roughly proportional to the fifth power of fan speed; halving speed reduces noise by about 15 dB. The perceived loudness of fan noise also depends on the frequency distribution of the noise. This depends on the shape and distribution of moving parts, especially of the blades, and of stationary parts, struts in particular. Like with tire treads, and similar to the principle of acoustic diffusors, an irregular shape and distribution can flatten the noise spectrum, making the noise sound less disturbing. The inlet shape of the fan can also influence the noise levels generated by the fan. Optimal temperature for use The optimal temperature for using a fan to cool down remains uncertain. 
While fans are commonly used to lower body temperature through evaporative cooling, there is a point at which the convection effect of moving air can counteract this benefit. This temperature, at which fan use may become detrimental, is currently unknown. Health organizations offer varying guidance on fan usage in high temperatures. The Centers for Disease Control and Prevention (CDC) advises against fan use when temperatures exceed 32.2 °C (90 °F), while the World Health Organization (WHO) suggests avoiding fan use above 40 °C (104 °F). Recent studies have shed further light on this issue, though their findings are somewhat contradictory. One study found limited additional benefit from fan use above 35 °C (95 °F), while another study reported a 31% reduction in cardiac stress among elderly individuals using fans at 38 °C (100 °F).
Fan motor drive methods
Standalone fans are usually powered by an electric motor, with the blades often attached directly to the motor's output shaft, with no gears or belts. The motor is either hidden in the fan's center hub or extends behind it. For big industrial fans, three-phase asynchronous motors are commonly used; these may be placed near the fan and drive it through a belt and pulleys. Smaller fans are often powered by shaded-pole AC motors, or brushed or brushless DC motors. AC-powered fans usually use mains voltage, while DC-powered fans typically use a low voltage such as 24 V, 12 V, or 5 V. A fan is often driven by a machine that already has a rotating part, rather than being powered separately. This is commonly seen in motor vehicles with internal combustion engines, large cooling systems, locomotives, and winnowing machines, where the fan is connected to the drive shaft directly or through a belt and pulleys. Another common configuration is a dual-shaft motor, where one end of the shaft drives a mechanism, while the other has a fan mounted on it to cool the motor itself. Window air conditioners commonly use a dual-shaft motor to operate separate fans for the interior and exterior parts of the device. Where electrical power or rotating parts are not readily available, other methods may drive fans. High-pressure gases such as steam can drive a small turbine, and high-pressure liquids can drive a Pelton wheel, either of which can provide the rotational drive for a fan. Large, slow-moving energy sources, such as a flowing river, can also power a fan using a water wheel and a series of step-up gears or pulleys to increase the rotational speed to that required for efficient fan operation.
Solar power
Electric fans used for ventilation may be powered by solar panels instead of mains current. This is an attractive option because once the capital costs of the solar panel have been covered, the resulting electricity is free. If ventilation needs are greatest during sunny weather, a solar-powered fan can be suitable. A typical example uses a detached 10-watt solar panel and is supplied with appropriate brackets, cables, and connectors. It can be used to ventilate up to of area and can move air at up to . Because of the wide availability of 12 V brushless DC electric motors and the convenience of wiring such a low voltage, such fans usually operate on 12 volts. The detached solar panel is typically installed in the spot that gets the most sunlight and then connected to the fan mounted as far as away. Other permanently mounted and small portable fans include an integrated (non-detachable) solar panel.
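The figures quoted in this article lend themselves to a quick back-of-the-envelope check. The sketch below is illustrative only: the 50–100 W and 500–4,000 W power figures, the fifth-power noise relationship from the Noise section, and the 10-watt, 12 V solar example come from the text above, while the electricity price, hours of use, and midpoint wattages are assumptions made for the example.

```python
import math

# Running-cost comparison, using the wattages quoted above
# (typical room fan 50-100 W, air-conditioning unit 500-4,000 W).
FAN_WATTS = 75          # assumed midpoint of the 50-100 W range
AC_WATTS = 2000         # assumed midpoint of the 500-4,000 W range
PRICE_PER_KWH = 0.15    # assumed electricity price, in currency units per kWh
HOURS_PER_DAY = 8       # assumed hours of use per day

fan_cost = FAN_WATTS / 1000 * HOURS_PER_DAY * PRICE_PER_KWH
ac_cost = AC_WATTS / 1000 * HOURS_PER_DAY * PRICE_PER_KWH
print(f"fan: {fan_cost:.2f}/day, AC: {ac_cost:.2f}/day "
      f"(about {ac_cost / fan_cost:.0f}x the energy cost)")

# Noise change for a change in fan speed, taking noise as roughly
# proportional to the fifth power of speed (see the Noise section):
# delta_dB = 10 * log10((n2 / n1) ** 5) = 50 * log10(n2 / n1)
def noise_change_db(speed_ratio: float) -> float:
    return 50 * math.log10(speed_ratio)

print(f"halving fan speed: {noise_change_db(0.5):+.1f} dB")  # about -15 dB

# Current available to a 12 V fan from the detached 10-watt solar panel
# described above, ignoring wiring and conversion losses.
PANEL_WATTS = 10
FAN_VOLTS = 12
print(f"panel supplies roughly {PANEL_WATTS / FAN_VOLTS:.2f} A at {FAN_VOLTS} V")
```

On these assumptions the fan consumes a small fraction of the air conditioner's energy, which is the trade-off described above: far cheaper to run, but it only moves air rather than cooling it, and a 10-watt panel's output is enough only for a correspondingly modest 12 V fan.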
Technology
Heating and cooling
null
12878756
https://en.wikipedia.org/wiki/Lake%20island
Lake island
A lake island is any landmass within a lake. It is a type of inland island. Lake islands may form a lake archipelago. Formation Lake islands may form in numerous ways. They may occur through a build-up of sedimentation as shoals, and become true islands through changes in the level of the lake. They may have been originally part of the lake's shore, and been separated from it by erosion, or they may have been left as pinnacles when the lake formed through a raising in the level of a river or other waterway (either naturally, or artificially through the damming of a river or lake). On creation of a glacial lake a moraine can form an island. They may also have formed through earthquake, meteor, or volcanic activity. In the latter case, crater or caldera islands exist, with new volcanic prominences in lakes formed in the craters of larger volcanoes. Other lake islands include ephemeral beds of floating vegetation, and islands artificially formed by human activity. Volcanic crater and caldera lake islands Lakes may sometimes form in the circular depressions of volcanic craters. These craters are typically circular or oval basins around the vent or vents from which magma erupts. A large volcanic eruption sometimes results in the formation of a caldera, caused by the collapse of the magma chamber under the volcano. If enough magma is ejected, the emptied chamber is unable to support the weight of the volcano, and a roughly circular fracture, the ring fault, develops around the edge of the chamber. The centre of the volcano within the ring fracture collapses, creating a ring-shaped depression. Long after the eruption, this caldera may fill with water to become a lake. If volcanic activity continues or restarts, the centre of the caldera may be uplifted in the form of a resurgent dome, to become a crater lake island. Though typically calderas are larger and deeper than craters and form in different ways, a distinction between the two is often ignored in non-technical circumstances and the term crater lake is widely used for the lakes formed in both craters and calderas. The following is a list of large or notable crater lake islands: La Corota Island Flora Sanctuary in the Laguna de la Cocha, Colombia Teodoro Wolf and Yerovi Islands in Cuicocha Lake, Ecuador Teopan Island in Lake Coatepeque, El Salvador Islas Quemadas in Lake Ilopango, El Salvador Samosir Island in Lake Toba, Sumatra, Indonesia Bisentina and Martana Islands in Lake Bolsena, Italy Kamuishu Island in Lake Mashū, Hokkaidō, Japan Nakano Island in Lake Tōya, Hokkaidō, Japan Mokoia Island in Lake Rotorua, North Island, New Zealand Motutaiko Island in Lake Taupō, North Island, New Zealand Two islands in Lake Dakataua, in the caldera of Dakataua, West New Britain Province, Papua New Guinea Volcano Island in Taal Lake, Luzon, Philippines (and Vulcan Point in Crater Lake on Volcano Island) Samang, Chayachy, Serdtse (Heart), Nizkii (Low), and Glinyanii (Clay) Islands in Kurile Lake, Kamchatka, Russia Lahi, Molemole, Si'i, and A'ali Islands in Lake Vai Lahi, Niuafo'ou, Tonga Meke Dağı Island in Meke Golu crater lake, Turkey Horseshoe Island (now submerged) in Mount Katmai's crater lake, Alaska, United States Wizard Island and Phantom Ship in Crater Lake, Oregon, United States Impact crater islands Impact craters, formed by the collision of large meteorites or comets with the Earth, are relatively uncommon, and those which do exist are frequently heavily eroded or deeply buried. Several, however, do contain lakes. 
Where the impact crater is complex, a central peak emerges from the floor of the crater. If a lake is present, this central peak may break the water's surface as an island. In other cases, other geological processes may have caused only a ring-shaped annular lake to remain from an impact, with a large central island taking up the remaining area of the crater. The world's largest impact crater island (and the world's second-largest lake island of any kind) is René-Levasseur Island, in Lake Manicouagan, Canada. The Sanshan Islands of Lake Tai, China, are also examples of impact crater islands, as are the islands in Canada's Clearwater Lakes, and the Slate Islands of Lake Superior, also in Canada. Sollerön Island in Siljan Lake, Sweden, and an unnamed island in Lake Karakul, Tajikistan, was also formed by meteor impact. Floating islands The term floating island is sometimes used for accumulations of vegetation free-floating within a body of water. Due to the lack of currents and tides, these are more frequently found in lakes than in rivers or the open sea. Peaty masses of vegetable matter from shallow lake floors may rise due to the accumulation of gases during decomposition, and will often float for a considerable time, becoming ephemeral islands until the gas has dissipated enough for the vegetation to return to the lake floor. Artificial islands Artificial or man-made islands are islands constructed by human activity rather than formed by natural means. They may be totally created by humans, enlarged from existing islands or reefs, formed by joining small existing islands, or cut from a mainland (for example, by cutting through the isthmus of a peninsula). Artificial islands have a long history, dating back to the crannogs of prehistoric Britain and Ireland, and the traditional floating Uru islands of Lake Titicaca in South America. Notable early artificial islands include the Aztec city of Tenochtitlan, at the site of modern Mexico City. Though technically caused by human activity, islands formed from hilltops by the deliberate flooding of valleys (such as in the creation of hydroelectricity projects and reservoirs) are not normally regarded as artificial islands. Artificial islands are built for numerous uses, ranging from flood protection to immigration or quarantine stations. Other uses for reclaimed artificial islands include expansion of living space or transportation centres in densely populated regions. Agricultural land has also been developed through reclamation of polders in the Netherlands and other low lying countries. Notable modern examples of artificial lake islands include the Dutch polder of Flevopolder in Flevoland, the island of IJburg in Amsterdam, and Flamingo Island in Kamfers Dam, South Africa. At , Flevopolder, in the now-freshwater lake IJsselmeer, is the largest man-made island in the world. Former islands A number of lake islands have disappeared for various reasons. Many lakes have been shrinking, so that some of their islands have become attached to or part of the mainland, such as Vozrozhdeniya Island, Kokaral, Barsa-Kelmes, and others in the Aral Sea; the Bogomerom Archipelago and others in Lake Chad; Shahi Island and others in Lake Urmia; and many others around the world. Other islands are lost by sinking below the lake surface, either by erosion, subsidence, or rising water level. Sunken Island in Otsego Lake is one example. Islands may also be lost by being artificially attached to the mainland, such as Urk in the former Zuider Zee. 
Lists of lake islands
Naturally occurring lake islands by area
There are few naturally occurring lake islands with an area in excess of . Of these, five are located in the large Great Lakes of North America, three are located in the large African Great Lakes, one is located in the largest lake in Central America, one was formed by the world's fourth largest meteorite impact, and one is located in the largest (by volume) lake in the world. Manitoulin Island in Lake Huron, Canada – René-Levasseur Island in the Manicouagan Reservoir, Quebec, Canada – . It became an artificial island when the Manicouagan Reservoir was flooded in 1970, merging Mouchalagane Lake on the western side and Manicouagan Lake on the eastern side. Olkhon in Lake Baikal, Russia – Isle Royale in Lake Superior, United States – Ukerewe Island in Lake Victoria, Tanzania – St. Joseph Island in Lake Huron, Canada – Drummond Island in Lake Huron, United States – Idjwi in Lake Kivu, Democratic Republic of the Congo – Ometepe Island in Lake Nicaragua, Nicaragua – Bugala Island in Lake Victoria, Uganda – St. Ignace Island in Lake Superior, Canada – Note: Soisalo, a body of land in Finland that is surrounded by individual lakes (Kallavesi, Suvasvesi, Kermajärvi, Ruokovesi, Haukivesi and Unnukka) connected by creeks and rivers – rather than sitting within an individual lake – was suggested in a 1987 study as an island, due to being effectively "surrounded by water". Other scientists rebut this claim, noting that the waters surrounding Soisalo are not on the same level, with elevation differences of up to between the surrounding lakes, and that it therefore does not meet the criteria of a true island. Samosir, a body of land in Lake Toba, Indonesia, is a peninsula that is technically surrounded by water only because a canal was built across it, effectively separating it from the mainland. For this reason, it is not a naturally occurring lake island.
Other lake islands larger than
Big Simpson Island in Great Slave Lake, Canada – Blanchet Island in Great Slave Lake, Canada – Rubondo Island in Lake Victoria, Tanzania – Buvuma Island in Lake Victoria, Uganda – The largest island in Sobradinho Reservoir, Brazil – Glover Island in Grand Lake, Canada – Michipicoten Island in Lake Superior, Canada – Preble Island in Great Slave Lake, Canada – Cockburn Island in Lake Huron, Canada – Hurissalo in Lietvesi, Finland – Partalansaari in Haapaselkä, Finland – Teresa Island in Atlin Lake, Canada – Hecla Island in Lake Winnipeg, Canada – Beaver Island in Lake Michigan, United States – Sugar Island in Lake Nicolet – Lake George, United States – Wolfe Island in Lake Ontario, Canada – Viljakansaari in Haapaselkä, Finland – Antelope Island in the Great Salt Lake, United States – Black Island in Lake Winnipeg, Canada – Selaön in Mälaren, Sweden – Bois Blanc Island in Lake Huron, United States – Grand Isle in Lake Champlain, United States – Ukara Island in Lake Victoria, Tanzania –
Tallest lake islands
Islands within lakes recursively
The largest lake on an island is Nettilling Lake on Baffin Island, Canada – . The largest island in a lake is Manitoulin Island in Lake Huron, Canada – . The largest island in a lake on an island is Samosir (a peninsula that is technically "surrounded by water" only because a narrow canal was built across it) in Danau Toba on Sumatra – . The largest lake on an island in a lake is Lake Manitou on Manitoulin Island in Lake Huron – .
The largest lake on an island in a lake on an island is a nameless, approximately lake at which is, itself, on a nameless island in Nettilling Lake on Baffin Island, Canada. The largest island in a lake on an island in a lake is Treasure Island in Mindemoya Lake on Manitoulin Island in Lake Huron. The largest island in a lake on an island in a lake on an island is a nameless, approximately island at , situated within Nettilling Lake on Baffin Island, Canada. Notable island systems and former lake islands Vozrozhdeniya Island in the Aral Sea, Kazakhstan and Uzbekistan – . Originally only , the island grew rapidly from the 1960s until mid-2001, as the shrinking of the Aral Sea caused the water to recede from the land around the original island, until the moment when that same process caused the expanded island to connect to the mainland. By 2014, what used to be an island had become merely a part of the extensive Aralkum Desert. Sääminginsalo in Saimaa, Finland – . Saimaa is sometimes referred to as a "lake system", and Sääminginsalo is surrounded by three separately named lakes (Haukivesi, Puruvesi and Pihlajavesi) that are at the same level, and by an artificial canal, Raikuun kanava, built in the 1750s. Since it is only separated from other land by a canal, it is debatable whether Sääminginsalo can be considered an island. The Pamvotida lake, next to the city of Ioannina, Greece - , has an island with a village. The name of the village is "Nisos", which is the Greek word for island. The village has 219 permanent residents according to the 2011 census. The size of the island is . Islands in artificial lakes Islands of Lake Argyle, some seventy named islands in Lake Argyle, Australia
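The recursive island-in-a-lake-on-an-island records listed above are easier to reason about when the containment relationships are written down as a tree. The sketch below is purely illustrative: the nesting chain follows the Baffin Island example given in this section, the innermost features are unnamed in the source, and the class and function names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Landform:
    """A landmass or lake; `contains` lists the landforms directly inside it."""
    name: str
    kind: str                              # "island" or "lake"
    contains: list["Landform"] = field(default_factory=list)

def deepest_chain(root: Landform) -> list[str]:
    """Return the longest containment chain starting at `root`."""
    best: list[str] = []
    for child in root.contains:
        chain = deepest_chain(child)
        if len(chain) > len(best):
            best = chain
    return [f"{root.name} ({root.kind})"] + best

# The Baffin Island example from the text: a nameless island in a nameless
# lake on a nameless island in Nettilling Lake on Baffin Island.
baffin = Landform("Baffin Island", "island", [
    Landform("Nettilling Lake", "lake", [
        Landform("unnamed island", "island", [
            Landform("unnamed lake", "lake", [
                Landform("unnamed island", "island"),
            ]),
        ]),
    ]),
])

print(" -> ".join(deepest_chain(baffin)))
```

Each additional level of the chain alternates between an island and a lake, which is exactly the pattern the records above enumerate.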
Physical sciences
Oceanic and coastal landforms
Earth science
19681430
https://en.wikipedia.org/wiki/Earthworm
Earthworm
An earthworm is a soil-dwelling terrestrial invertebrate that belongs to the phylum Annelida. The term is the common name for the largest members of the class (or subclass, depending on the author) Oligochaeta. In classical systems, they were placed in the order Opisthopora, since the male pores open posterior to the female pores, although the internal male segments are anterior to the female ones. Theoretical cladistic studies have placed them in the suborder Lumbricina of the order Haplotaxida, but this may change. Other common names for earthworms include "dew-worm", "rainworm", "nightcrawler", and "angleworm" (from their use as angling hookbait). Larger terrestrial earthworms are also called megadriles (which translates to "big worms") as opposed to the microdriles ("small worms") in the semiaquatic families Tubificidae, Lumbriculidae and Enchytraeidae. The megadriles are characterized by a distinct clitellum (more extensive than that of microdriles) and a vascular system with true capillaries. Earthworms are commonly found in moist, compost-rich soil, eating a wide variety of organic matter, including detritus, living protozoa, rotifers, nematodes, bacteria, fungi and other microorganisms. An earthworm's digestive system runs the length of its body. They are one of nature's most important detritivores and coprophages, and also serve as food for many low-level consumers within their ecosystems. Earthworms exhibit an externally segmented tube-within-a-tube body plan with corresponding internal segmentation, and usually have setae on all segments. They have a cosmopolitan distribution wherever soil, water and temperature conditions allow. They have a double transport system made of coelomic fluid that moves within the fluid-filled coelom and a simple, closed circulatory system, and respire (breathe) via cutaneous respiration. As soft-bodied invertebrates, they lack a true skeleton, but their structure is maintained by fluid-filled coelom chambers that function as a hydrostatic skeleton. Earthworms have a central nervous system consisting of two ganglia above the mouth, one on either side, connected to an axial nerve cord running along the body's length to motor neurons and sensory cells in each segment. Large numbers of chemoreceptors are concentrated near the mouth. Circumferential and longitudinal muscles edging each segment let the worm move. Similar sets of muscles line the gut tube, and their actions propel digested food toward the worm's anus. Earthworms are hermaphrodites: each worm carries male and female reproductive organs and genital pores. When mating, two individual earthworms exchange sperm and fertilize each other's ova.
Anatomy
Form and function
Depending on the species, an adult earthworm can be from long and wide to long and over wide, but the typical Lumbricus terrestris grows to about long. Probably the longest worm on confirmed records is Amynthas mekongianus, which extends up to 3 m (10 ft) in the mud along the banks of the 4,350 km (2,703 mi) Mekong River in Southeast Asia. From front to back, the basic shape of the earthworm is a cylindrical tube-within-a-tube, divided into a series of segments (called metameres) that compartmentalize the body. Furrows are generally externally visible on the body, demarcating the segments; dorsal pores and nephridiopores exude a fluid that moistens and protects the worm's surface, allowing it to breathe.
Except for the mouth and anal segments, each segment carries bristlelike hairs called lateral setae used to anchor parts of the body during movement; species may have four pairs of setae on each segment or more than eight sometimes forming a complete circle of setae per segment. Special ventral setae are used to anchor mating earthworms by their penetration into the bodies of their mates. Generally, within a species, the number of segments found is consistent across specimens, and individuals are born with the number of segments they will have throughout their lives. The first body segment (segment number 1) features both the earthworm's mouth and, overhanging the mouth, a fleshy lobe called the prostomium, which seals the entrance when the worm is at rest, but is also used to feel and chemically sense the worm's surroundings. Some species of earthworm can even use the prehensile prostomium to grab and drag items such as grasses and leaves into their burrow. An adult earthworm develops a belt-shaped glandular swelling, called the clitellum, which covers several segments toward the front part of the animal. This is part of the reproductive system and produces egg capsules. The posterior is most commonly cylindrical like the rest of the body, but depending on the species, it may also be quadrangular, octagonal, trapezoidal, or flattened. The last segment is called the periproct; the earthworm's anus, a short vertical slit, is found on this segment. The exterior of an individual segment is a thin cuticle over the skin, commonly pigmented red to brown, which has specialized cells that secrete mucus over the cuticle to keep the body moist and ease movement through the soil. Under the skin is a layer of nerve tissue, and two layers of muscles—a thin outer layer of circular muscle, and a much thicker inner layer of longitudinal muscle. Interior to the muscle layer is a fluid-filled chamber called a coelom that by its pressurization provides structure to the worm's boneless body. The segments are separated from each other by septa (the plural of "septum") which are perforated transverse walls, allowing the coelomic fluid to pass between segments. A pair of structures called nephrostomes are located at the back of each septum; a nephric tubule leads from each nephrostome through the septum and into the following segment. This tubule then leads to the main body fluid filtering organ, the nephridium or metanephridium, which removes metabolic waste from the coelomic fluid and expels it through pores called nephridiopores on the worm's sides; usually, two nephridia (sometimes more) are found in most segments. At the centre of a worm is the digestive tract, which runs straight through from mouth to anus without coiling, and is flanked above and below by blood vessels (the dorsal blood vessel and the ventral blood vessel as well as a subneural blood vessel) and the ventral nerve cord, and is surrounded in each segment by a pair of pallial blood vessels that connect the dorsal to the subneural blood vessels. Many earthworms can eject coelomic fluid through pores in the back in response to stress; the Australian Didymogaster sylvaticus (known as the "blue squirter earthworm") can squirt fluid as high as . Nervous system Central nervous system The CNS consists of a bilobed brain (cerebral ganglia, or supra-pharyngeal ganglion), sub-pharyngeal ganglia, circum-pharyngeal connectives and a ventral nerve cord. Earthworms' brains consist of a pair of pear-shaped cerebral ganglia. 
These are located on the dorsal side of the alimentary canal in the third segment, in a groove between the buccal cavity and pharynx. A pair of circum-pharyngeal connectives from the brain encircle the pharynx and then connect with a pair of sub-pharyngeal ganglia located below the pharynx in the fourth segment. This arrangement means the brain, sub-pharyngeal ganglia and the circum-pharyngeal connectives form a nerve ring around the pharynx. The ventral nerve cord (formed by nerve cells and nerve fibers) begins at the sub-pharyngeal ganglia and extends below the alimentary canal to the most posterior body segment. The ventral nerve cord has a swelling, or ganglion, in each segment, i.e. a segmental ganglion, which occurs from the fifth to the last segment of the body. There are also three giant axons, one medial giant axon (MGA) and two lateral giant axons (LGAs), on the mid-dorsal side of the ventral nerve cord. The MGA is 0.07 mm in diameter and transmits in an anterior-posterior direction at a rate of 32.2 m/s. The LGAs are slightly narrower at 0.05 mm in diameter and transmit in a posterior-anterior direction at 12.6 m/s. The two LGAs are connected at regular intervals along the body and are therefore considered one giant axon.
Peripheral nervous system
Eight to ten nerves arise from the cerebral ganglia to supply the prostomium, buccal chamber and pharynx. Three pairs of nerves arise from the subpharyngeal ganglia to supply the second, third and fourth segments. Three pairs of nerves extend from each segmental ganglion to supply various structures of the segment. The sympathetic nervous system consists of nerve plexuses in the epidermis and alimentary canal. (A plexus is a web of connected nerve cells.) The nerves that run along the body wall pass between the outer circular and inner longitudinal muscle layers of the wall. They give off branches that form the intermuscular plexus and the subepidermal plexus. These nerves connect with the circum-pharyngeal connectives.
Movement
On the surface, crawling speed varies both within and among individuals. Earthworms crawl faster primarily by taking longer "strides" and a greater frequency of strides. Larger Lumbricus terrestris worms crawl at a greater absolute speed than smaller worms. They achieve this by taking slightly longer strides but with slightly lower stride frequencies. Touching an earthworm, which causes a "pressure" response as well as (often) a response to the dehydrating quality of the salt on human skin (toxic to earthworms), stimulates the subepidermal nerve plexus, which connects to the intermuscular plexus and causes the longitudinal muscles to contract. This causes the writhing movements observed when a human picks up an earthworm. This behaviour is a reflex and does not require the CNS; it occurs even if the nerve cord is removed. Each segment of the earthworm has its own nerve plexus. The plexus of one segment is not connected directly to that of adjacent segments. The nerve cord is required to connect the nervous systems of the segments. The giant axons carry the fastest signals along the nerve cord. These are emergency signals that initiate reflex escape behaviours. The lateral giant axons carry signals from the rear to the front of the animal: if the rear of the worm is touched, a signal is rapidly sent forwards, causing the longitudinal muscles in each segment to contract. This causes the worm to shorten very quickly as an attempt to escape from a predator or other potential threat.
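To put the conduction velocities quoted above into perspective, the short sketch below estimates how long an escape signal takes to travel the length of a large worm along each giant-axon pathway. The 32.2 m/s and 12.6 m/s figures come from this section; the 20 cm body length is an assumed example value.

```python
# Approximate travel time of a reflex signal along the whole body,
# using the giant-axon conduction velocities given above.
WORM_LENGTH_M = 0.20   # assumed length of a large earthworm, in metres
SPEEDS_M_PER_S = {
    "medial giant axon (front-to-rear signals)": 32.2,
    "lateral giant axons (rear-to-front signals)": 12.6,
}

for pathway, speed in SPEEDS_M_PER_S.items():
    travel_ms = WORM_LENGTH_M / speed * 1000
    print(f"{pathway}: about {travel_ms:.1f} ms over {WORM_LENGTH_M * 100:.0f} cm")
```

Either pathway carries the signal across the entire body in well under a tenth of a second, which is why the shortening reflex described above appears essentially instantaneous.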
The medial giant axon, the larger and faster of the two types, sends signals from the front to the rear. Stimulation of this pathway causes the earthworm to very quickly retreat (perhaps contracting into its burrow to escape a bird). The presence of a nervous system is essential for an animal to be able to experience nociception or pain. However, other physiological capacities are also required, such as opioid sensitivity and central modulation of responses by analgesics. Enkephalin and α-endorphin-like substances have been found in earthworms. Injections of naloxone (an opioid antagonist) inhibit the escape responses of earthworms. This indicates that opioid substances play a role in sensory modulation, similar to that found in many vertebrates.
Sensory reception
Photosensitivity
Although some worms have eyes, earthworms do not. However, they do have specialized photosensitive cells called "light cells of Hess". These photoreceptor cells have a central intracellular cavity (phaosome) filled with microvilli. As well as the microvilli, there are several sensory cilia in the phaosome which are structurally independent of the microvilli. The photoreceptors are distributed in most parts of the epidermis, but are more concentrated on the back and sides of the worm. A relatively small number occur on the ventral surface of the first segment. They are most numerous in the prostomium, and reduce in density over the first three segments; they are very few in number past the third segment.
Epidermal receptor (Sense organ)
These receptors are abundant and distributed all over the epidermis. Each receptor shows a slightly elevated cuticle which covers a group of tall, slender, columnar receptor cells. These cells bear small hairlike processes at their outer ends, and their inner ends are connected with nerve fibres. The epidermal receptors are tactile in function. They are also concerned with changes in temperature and respond to chemical stimuli. Earthworms are extremely sensitive to touch and mechanical vibration.
Buccal receptor (Sense organ)
These receptors are located only in the epithelium of the buccal chamber. They are chemoreceptors, gustatory and olfactory in function (related to taste and smell), and respond to chemical stimuli.
Digestive system
The gut of the earthworm is a straight tube that extends from the worm's mouth to its anus. It is differentiated into an alimentary canal and associated glands which are embedded in the wall of the alimentary canal itself. The alimentary canal consists of a mouth, buccal cavity (generally running through the first one or two segments of the earthworm), pharynx (running generally about four segments in length), esophagus, crop, gizzard (usually), and intestine. Food enters at the mouth. The pharynx acts as a suction pump; its muscular walls draw in food. In the pharynx, the pharyngeal glands secrete mucus. Food moves into the esophagus, where calcium (from the blood and ingested from previous meals) is pumped in to maintain proper calcium levels in the blood and to regulate the pH of the food. From there the food passes into the crop and gizzard. In the gizzard, strong muscular contractions grind the food with the help of mineral particles ingested along with the food. Once through the gizzard, food continues through the intestine for digestion. The intestine secretes pepsin to digest proteins, amylase to digest polysaccharides, cellulase to digest cellulose, and lipase to digest fats.
Earthworms use, in addition to the digestive proteins, a class of surface active compounds called drilodefensins, which help digest plant material. Instead of being coiled like a mammalian intestine, in the earthworm's intestine a large mid-dorsal, tongue-like fold is present, called a typhlosole, with many folds running along its length, increasing its surface area to increase nutrient absorption. The intestine has its own pair of muscle layers like the body, but in reverse order—an inner circular layer within an outer longitudinal layer. Circulatory system Earthworms have a dual circulatory system in which both the coelomic fluid and a closed circulatory system carry the food, waste, and respiratory gases. The closed circulatory system has five main blood vessels: the dorsal (top) vessel, which runs above the digestive tract; the ventral (bottom) vessel, which runs below the digestive tract; the subneural vessel, which runs below the ventral nerve cord; and two lateroneural vessels on either side of the nerve cord. The dorsal vessel is mainly a collecting structure in the intestinal region. It receives a pair commissural and dorsal intestines in each segment. The ventral vessel branches off to a pair of ventro-tegumentaries and ventro-intestinals in each segment. The subneural vessel also gives out a pair of commissurals running along the posterior surface of the septum. The pumping action on the dorsal vessel moves the blood forward, while the other four longitudinal vessels carry the blood rearward. In segments seven through eleven, a pair of aortic arches ring the coelom and acts as hearts, pumping the blood to the ventral vessel that acts as the aorta. The blood consists of ameboid cells and haemoglobin dissolved in the plasma. The second circulatory system derives from the cells of the digestive system that line the coelom. As the digestive cells become full, they release non-living cells of fat into the fluid-filled coelom, where they float freely but can pass through the walls separating each segment, moving food to other parts and assist in wound healing. Excretory system The excretory system contains a pair of nephridia in every segment, except for the first three and the last ones. The three types of nephridia are: integumentary, septal, and pharyngeal. The integumentary nephridia lie attached to the inner side of the body wall in all segments except the first two. The septal nephridia are attached to both sides of the septa behind the 15th segment. The pharyngeal nephridia are attached to the fourth, fifth and sixth segments. The waste in the coelom fluid from a forward segment is drawn in by the beating of cilia of the nephrostome. From there it is carried through the septum (wall) via a tube which forms a series of loops entwined by blood capillaries that also transfer waste into the tubule of the nephrostome. The excretory wastes are then finally discharged through a pore on the worm's side. Respiration Earthworms have no special respiratory organs. Gases are exchanged through the moist skin and capillaries, where the oxygen is picked up by the haemoglobin dissolved in the blood plasma and carbon dioxide is released. Water, as well as salts, can also be moved through the skin by active transport. Life and physiology At birth, earthworms emerge small but fully formed, lacking only their sex structures which develop in about 60 to 90 days. They attain full size in about one year. 
Scientists estimate that the average lifespan under field conditions is four to eight years, while most garden varieties live only one to two years.
Reproduction
Several common earthworm species are mostly parthenogenetic, meaning that growth and development of embryos happens without fertilization. Among lumbricid earthworms, parthenogenesis arose from sexual relatives many times. Parthenogenesis in some Aporrectodea trapezoides lineages arose 6.4 to 1.1 million years ago from sexual ancestors. A few species exhibit pseudogamous parthenogenesis, meaning that mating is necessary to stimulate reproduction, even though no male genetic material passes to the offspring. Earthworm mating occurs on the surface, most often at night. Earthworms are hermaphrodites; that is, they have both male and female sexual organs. The sexual organs are located in segments 9 to 15. Earthworms have one or two pairs of testes contained within sacs. The two or four pairs of seminal vesicles produce, store and release the sperm via the male pores. Ovaries and oviducts in segment 13 release eggs via female pores on segment 14, while sperm is expelled from segment 15. One or more pairs of spermathecae are present in segments 9 and 10 (depending on the species); these are internal sacs that receive and store sperm from the other worm during copulation. As a result, during copulation segment 15 of one worm releases sperm into segments 9 and 10, and their storage sacs, of its mate. Some species use external spermatophores for sperm transfer. In Hormogaster samnitica and Hormogaster elisae, transcriptome DNA libraries were sequenced and two sex pheromones, Attractin and Temptin, were detected in all tissue samples of both species. Sex pheromones are probably important in earthworms because they live in an environment where chemical signaling may play a crucial role in attracting a partner and in facilitating outcrossing. Outcrossing would provide the benefit of masking the expression of deleterious recessive mutations in progeny (see Complementation). Copulation and reproduction are separate processes in earthworms. The mating pair overlap front ends ventrally and each exchanges sperm with the other. The clitellum becomes very reddish to pinkish in colour. Sometime after copulation, long after the worms have separated, the clitellum (behind the spermathecae) secretes material which forms a ring around the worm. The worm then backs out of the ring, and as it does so, it injects its own eggs and the other worm's sperm into it. Thus each worm becomes the genetic father of some of the offspring (those arising from the sperm it transferred to its mate) and the genetic mother of the rest (those arising from its own egg cells). As the worm slips out of the ring, the ends of the ring seal to form a vaguely onion-shaped incubator (cocoon) in which the embryonic worms develop. Hence fertilization is external. The cocoon is then deposited in the soil. After about three weeks, 2 to 20 offspring hatch, with an average of four. Development is direct, i.e. without the formation of any larval stage.
DNA repair
Exposure of the earthworm Eisenia fetida to ionizing radiation induced DNA strand breaks and oxidized DNA bases. This DNA damage could then be repaired in somatic and spermatogenic cells. Earthworm testis cells are also capable of repairing hydrogen peroxide-induced oxidative DNA adducts.
Locomotion
Earthworms travel underground by means of waves of muscular contractions which alternately shorten and lengthen the body (peristalsis).
The shortened part is anchored to the surrounding soil by tiny clawlike bristles (setae) set along its segmented length. In all the body segments except the first, last and clitellum, there is a ring of S-shaped setae embedded in the epidermal pit of each segment (perichaetine). The whole burrowing process is aided by the secretion of lubricating mucus. As a result of their movement through their lubricated tunnels, worms can make gurgling noises underground when disturbed. Earthworms move through soil by expanding crevices with force; when forces are measured according to body weight, hatchlings can push 500 times their own body weight whereas large adults can push only 10 times their own body weight. Regeneration Earthworms have the ability to regenerate lost segments, but this ability varies between species and depends on the extent of the damage. Stephenson (1930) devoted a chapter of his monograph to this topic, while G. E. Gates spent 20 years studying regeneration in a variety of species. But "because little interest was shown", Gates (1972) published only a few of his findings. These nevertheless show it is theoretically possible to grow two whole worms from a bisected specimen in certain species. Gates's reports included: Eisenia fetida (Savigny, 1826) with head regeneration, in an anterior direction, possible at each intersegmental level back to and including 23/24, while tails were regenerated at any levels behind 20/21; thus two worms may grow from one. Lumbricus terrestris (Linnaeus, 1758) replacing anterior segments from as far back as 13/14 and 16/17 but tail regeneration was never found. Perionyx excavatus (Perrier, 1872) readily regenerated lost parts of the body, in an anterior direction from as far back as 17/18, and in a posterior direction as far forward as 20/21. Lampito mauritii (Kinberg, 1867) with regeneration in anterior direction at all levels back to 25/26 and tail regeneration from 30/31; head regeneration was sometimes believed to be caused by internal amputation resulting from Sarcophaga sp. larval infestation. Criodrilus lacuum (Hoffmeister, 1845) also has prodigious regenerative capacity with 'head' regeneration from as far back as 40/41. An unidentified Tasmanian earthworm shown growing a replacement head has been reported. Taxonomy and distribution Within the world of taxonomy, the stable 'Classical System' of Michaelsen (1900) and Stephenson (1930) was gradually eroded by the controversy over how to classify earthworms, such that Fender and McKey-Fender (1990) went so far as to say, "The family-level classification of the megascolecid earthworms is in chaos." Over the years, many scientists have developed their own classification systems for earthworms, which led to confusion, and these systems have been and still continue to be revised and updated. The classification system used here which was developed by Blakemore (2000), is a modern reversion to the Classical System that is historically proven and widely accepted. Categorization of a megadrile earthworm into one of its taxonomic families under suborders Lumbricina and Moniligastrida is based on such features as the makeup of the clitellum, the location and disposition of the sex features (pores, prostatic glands, etc.), number of gizzards, and body shape. Currently, over 6,000 species of terrestrial earthworms are named, as provided in a species name database, but the number of synonyms is unknown. 
The families, with their known distributions or origins: Acanthodrilidae Ailoscolecidae – the Pyrenees and the southeast USA Almidae – tropical equatorial (South America, Africa, Indo-Asia) Benhamiinae – Ethiopian, Neotropical (a possible subfamily of Octochaetidae) Criodrilidae – southwestern Palaearctic: Europe, Middle East, Russia and Siberia to Pacific coast; Japan (Biwadrilus); mainly aquatic Diplocardiinae/-idae – Gondwanan or Laurasian? (a subfamily of Acanthodrilidae) Enchytraeidae – cosmopolitan but uncommon in tropics (usually classed with Microdriles) Eudrilidae – Tropical Africa south of the Sahara Exxidae – Neotropical: Central America and the Caribbean Glossoscolecidae – Neotropical: Central and South America, Caribbean Haplotaxidae – cosmopolitan distribution (usually classed with Microdriles) Hormogastridae – Mediterranean Kynotidae – Malagasian: Madagascar Lumbricidae – Holarctic: North America, Europe, Middle East, Central Asia to Japan Lutodrilidae – Louisiana the southeast USA Megascolecidae Microchaetidae – Terrestrial in Africa especially South African grasslands Moniligastridae – Oriental and Indian subregion Ocnerodrilidae – Neotropics, Africa; India Octochaetidae – Australasian, Indian, Oriental, Ethiopian, Neotropical Octochaetinae – Australasian, Indian, Oriental (subfamily if Benhamiinae is accepted) Sparganophilidae – Nearctic, Neotropical: North and Central America Tumakidae – Colombia, South America As an invasive species From a total of around 7,000 species, only about 150 species are widely distributed around the world. These are the peregrine or cosmopolitan earthworms. Of the 182 taxa of earthworms found in the United States and Canada, 60 (33%) are introduced species. Ecology Earthworms are classified into three main ecophysiological categories: (1) leaf litter- or compost-dwelling worms that are nonburrowing, live at the soil-litter interface and eat decomposing organic matter (epigeic) e.g. Eisenia fetida; (2) topsoil- or subsoil-dwelling worms that feed (on soil), burrow and cast within the soil, creating horizontal burrows in upper 10–30  cm of soil (endogeic); and (3) worms that construct permanent deep vertical burrows which they use to visit the surface to obtain plant material for food, such as leaves (anecic, meaning "reaching up"), e.g. Lumbricus terrestris. Earthworm populations depend on both physical and chemical properties of the soil, such as temperature, moisture, pH, salts, aeration, and texture, as well as available food, and the ability of the species to reproduce and disperse. One of the most important environmental factors is pH, but earthworms vary in their preferences. Most favour neutral to slightly acidic soils. Lumbricus terrestris is still present in a pH of 5.4, Dendrobaena octaedra at a pH of 4.3 and some Megascolecidae are present in extremely acidic humic soils. Soil pH may also influence the numbers of worms that go into diapause. The more acidic the soil, the sooner worms go into diapause, and remain in diapause the longest time at a pH of 6.4. Earthworms are preyed upon by many species of birds (e.g. robins, starlings, thrushes, gulls, crows), snakes, wood turtles, mammals (e.g. bears, boars, foxes, hedgehogs, pigs, moles) and invertebrates (e.g. ants, flatworms, ground beetles and other beetles, snails, spiders, and slugs). 
Earthworms have many internal parasites, including protozoa, platyhelminthes, mites, and nematodes; they can be found in the worms' blood, seminal vesicles, coelom, or intestine, or in their cocoons (e.g. the mite Histiostoma murchiei is a parasite of earthworm cocoons). The earthworm activity aerates and mixes the soil, and is conducive to mineralization of nutrients and their uptake by vegetation. Certain species of earthworm come to the surface and graze on the higher concentrations of organic matter present there, mixing it with the mineral soil. Because a high level of organic matter mixing is associated with soil fertility, an abundance of earthworms is generally considered beneficial by farmers and gardeners. As long ago as 1881 Charles Darwin wrote: "It may be doubted whether there are many other animals which have played so important a part in the history of the world, as have these lowly organized creatures." Also, while, as the name suggests, the main habitat of earthworms is in soil, they are not restricted to this habitat. The brandling worm Eisenia fetida lives in decaying plant matter and manure. Arctiostrotus vancouverensis from Vancouver Island and the Olympic Peninsula is generally found in decaying conifer logs. Aporrectodea limicola, Sparganophilus spp., and several others are found in mud in streams. Some species are arboreal, some aquatic and some euryhaline (salt-water tolerant) and littoral (living on the sea-shore, e.g. Pontodrilus litoralis). Even in the soil species, special habitats, such as soils derived from serpentine, have an earthworm fauna of their own. Vermicomposting of organic "wastes" and addition of this organic matter to the soil, preferably as a surface mulch, will provide several species of earthworms with their food and nutrient requirements, and will create the optimum conditions of temperature and moisture that will stimulate their activity. Earthworms are environmental indicators of soil health. Earthworms feed on the decaying matter in the soil and analyzing the contents of their digestive tracts gives insight into the overall condition of the soil. The earthworm gut accumulates chemicals, including heavy metals such as cadmium, mercury, zinc, and copper. The population size of the earthworm indicates the quality of the soil, as healthy soil would contain a larger number of earthworms. Environmental impacts The major benefits of earthworm activities to soil fertility for agriculture can be summarized as: Biological: In many soils, earthworms play a major role in the conversion of large pieces of organic matter into rich humus, thus improving soil fertility. This is achieved by the worm's actions of pulling below the surface deposited organic matter such as leaf fall or manure, either for food or to plug its burrow. Once in the burrow, the worm will shred the leaf, partially digest it and mingle it with the earth. Worm casts (see bottom right) can contain 40 percent more humus than the top of soil in which the worm is living. Chemical: In addition to dead organic matter, the earthworm also ingests any other soil particles that are small enough—including sand grains up to —into its gizzard, wherein those minute fragments of grit grind everything into a fine paste which is then digested in the intestine. When the worm excretes this in the form of casts, deposited on the surface or deeper in the soil, minerals and plant nutrients are changed to an accessible form for plants to use. 
Investigations in the United States show that fresh earthworm casts are five times richer in available nitrogen, seven times richer in available phosphates, and 11 times richer in available potassium than the surrounding upper of soil. In conditions where humus is plentiful, the weight of casts produced may be greater than per worm per year. Physical: The earthworm's burrowing creates a multitude of channels through the soil and is of great value in maintaining the soil structure, enabling processes of aeration and drainage. Permaculture co-founder Bill Mollison points out that by sliding in their tunnels, earthworms "act as an innumerable army of pistons pumping air in and out of the soils on a 24-hour cycle (more rapidly at night)". Thus, the earthworm not only creates passages for air and water to traverse the soil, but also modifies the vital organic component that makes a soil healthy (see Bioturbation). Earthworms promote the formation of nutrient-rich casts (globules of soil, stable in soil mucus) that have high soil aggregation and soil fertility and quality. In podzol soils, earthworms can obliterate the characteristic banded appearance of the soil profile by mixing the organic (LFH), eluvial (E) and upper illuvial (B) horizons to create a single dark Ap horizon. Earthworms accelerate nutrient cycling in the soil-plant system through fragmentation & mixing of plant debris – physical grinding & chemical digestion. The earthworm's existence cannot be taken for granted. Dr. W. E. Shewell-Cooper observed "tremendous numerical differences between adjacent gardens", and worm populations are affected by a host of environmental factors, many of which can be influenced by good management practices on the part of the gardener or farmer. Darwin estimated that arable land contains up to of worms, but more recent research has produced figures suggesting that even poor soil may support , whilst rich fertile farmland may have up to , meaning that the weight of earthworms beneath a farmer's soil could be greater than that of the livestock upon its surface. Richly organic topsoil populations of earthworms are much higher – averaging and up to 400 g2 – such that, for the 7 billion of us, each person alive today has support of 7 million earthworms. The ability to break down organic materials and excrete concentrated nutrients makes the earthworm a functional contributor in restoration projects. In response to ecosystem disturbances, some sites have utilized earthworms to prepare soil for the return of native flora. Research from the Station d'écologie Tropicale de Lamto asserts that the earthworms positively influence the rate of macroaggregate formation, an important feature for soil structure. The stability of aggregates in response to water was also found to be improved when constructed by earthworms. Though not fully quantified yet, greenhouse gas emissions of earthworms likely contribute to global warming, especially since top-dwelling earthworms increase the speed of carbon cycles and have been spread by humans into many new geographies. Threats Nitrogenous fertilizers tend to create acidic conditions, which are fatal to the worms, and dead specimens are often found on the surface following the application of substances such as DDT, lime sulphur, and lead arsenate. 
In Australia, changes in farming practices such as the application of superphosphates on pastures and a switch from pastoral farming to arable farming had a devastating effect on populations of the giant Gippsland earthworm, leading to their classification as a protected species. Globally, certain earthworm populations have been devastated by the move away from organic production and by the spraying of synthetic fertilizers and biocides, with at least three species now listed as extinct and many more endangered. Economic impact Various species of worms are used in vermiculture, the practice of feeding organic waste to earthworms so that they decompose it. These are usually Eisenia fetida (or its close relative Eisenia andrei), also known as the brandling worm, tiger worm or red wiggler. They are distinct from soil-dwelling earthworms. In the tropics, the African nightcrawler Eudrilus eugeniae and the Indian blue worm Perionyx excavatus are used. Earthworms are sold all over the world; the market is sizable. According to Doug Collicutt, "In 1980, 370 million worms were exported from Canada, with a Canadian export value of $13 million and an American retail value of $54 million." Earthworms provide an excellent source of protein for fish, fowl, and pigs, but have also been used traditionally for human consumption. Noke is a culinary term used by the Māori of New Zealand to refer to earthworms, which they consider a delicacy for their chiefs.
Biology and health sciences
Lophotrochozoa
null
19681795
https://en.wikipedia.org/wiki/Caranx
Caranx
Caranx is a genus of tropical to subtropical marine fishes in the jack family Carangidae, commonly known as jacks, trevallies and kingfishes. They are moderate- to large-sized, deep-bodied fishes which are distinguished from other carangid genera by specific gill raker, fin ray and dentition characteristics. The genus is represented in the Pacific, Indian and Atlantic Oceans, inhabiting both inshore and offshore regions, ranging from estuaries and bays to deep reefs and offshore islands. All species are powerful predators, taking a variety of fish, crustaceans and cephalopods, while they in turn are prey to larger pelagic fishes and sharks. A number of fish in the genus have a reputation as powerful gamefish and are highly sought by anglers. They often make up a large proportion of the catch in various fisheries, but are generally considered poor to fair table fishes. Taxonomy and naming The genus Caranx is one of 30 currently recognised genera of fish in the jack and horse mackerel family Carangidae, which is part of the order Carangiformes. The genus has long been placed in the subfamily Caranginae (or tribe Carangini), with modern molecular and genetic studies indicating this subdivision is acceptable, and Caranx is well defined as a genus. Phylogenetically, the monotypic genus Gnathanodon is most closely related to Caranx; indeed, its sole member was once classified under Caranx. Caranx was created by the French naturalist Bernard Germain de Lacépède in 1801 to accommodate a new species he had described, Caranx carangua (the crevalle jack), which was later found to be a junior synonym of Scomber hippos, which in turn was transferred to Caranx. The early days of carangid taxonomy had over 100 'species' designated as members of the genus, most of which were synonyms, and a number of genera were created which were later synonymised with Caranx. Caranx took precedence over these other generic names because it was described first, rendering them invalid junior synonyms. Today, after extensive reviews of the family, 18 species are considered valid by the major taxonomic authorities FishBase and ITIS, although many other nominal species cannot be properly validated due to poor descriptions. The fish in the genus are commonly referred to as jacks, trevallies or kingfishes. Like the genus Carangoides, the word Caranx is derived from the French carangue, used for some fishes of the Caribbean. Species The 18 currently recognized extant species in this genus are: Evolution The first representative of Caranx found in the fossil record dates back to the mid-Eocene, a period when many modern perciform lineages appeared. Fossils mostly consist of otoliths, with the bony skeletal material rarely preserved. They are generally found in shallow marine or brackish water sedimentary deposits.
A number of extinct species have been definitively identified and scientifically named, including:
Caranx annectens Stinton, 1980 – Eocene, England
Caranx carangopsis Steindachner, 1859 – Cenozoic, Austria
Caranx daniltshenkoi Bannikov, 1990 – Cenozoic, Russia
Caranx exilis Rueckert-Uelkuemen, 1995 – Cenozoic, Turkey
Caranx extenuatus Stinton, 1980 – Eocene, England
Caranx gigas Rueckert-Uelkuemen, 1995 – Cenozoic, Turkey
Caranx gracilis Kramberger, 1882 – Oligocene-Lower Miocene, Romania
Caranx hagni Rueckert-Uelkuemen, 1995 – Cenozoic, Turkey
Caranx macoveii Pauca, 1929 – Oligocene-Lower Miocene, Romania
Caranx petrodavae Simionescu, 1905 – Oligocene-Lower Miocene, Romania
Caranx praelatus Stinton, 1980 – Eocene, England
Caranx primaevus Eastman, 1904 – Eocene, Italy (may be attributable to its own genus, Eastmanalepes)
Caranx quietus Bannikov, 1990 – Cenozoic, Russia
Description The species in the genus Caranx are all moderately large to very large fishes, growing from around 50 cm in length to a known maximum length of 1.7 m and 80 kg in weight, a size achieved only by the giant trevally, Caranx ignobilis, the largest species of Caranx. In their general body profile, they are similar to a number of other jack genera, having a deep, compressed body with a dorsal profile more convex than the ventral. The dorsal fin is in two parts, the first consisting of 8 spines and the second of one spine and between 16 and 25 soft rays. The anal fin has one or two detached anterior spines, with 1 spine and between 14 and 19 soft rays. The caudal fin is strongly forked. All species have moderate to very strong scutes on the posterior section of their lateral lines. Members of Caranx are all generally silver to grey in colour, with shades of blue or green dorsally, while some species have coloured spots on their flanks. Fin colours range from hyaline to yellow, blue and black. The characteristics that distinguish the genus relate to specific anatomical details, these being a gill raker count between 20 and 31 on the first gill arch, 2 to 4 canines anteriorly positioned in each jaw, and dorsal and anal rays which are never produced into filaments as seen in genera such as Alectis and Carangoides. Distribution and habitat Species from the genus Caranx are distributed throughout the tropical and subtropical waters of the world, inhabiting the Atlantic, Pacific and Indian Oceans. They are known from the coasts of all continents and islands (including remote offshore islands) within this range, and have a fairly even species distribution, with no particular region having an unusually high number of Caranx species. Most species are coastal fish, and very few venture into waters further offshore than the continental shelf; those that do are generally moved by ocean currents. They inhabit a range of environments including sand flats, bays, lagoons, reefs, sea mounts and estuaries. Most species are demersal, or bottom dwelling, in nature, while others are pelagic, moving long distances in the upper water column. Biology and fisheries The level of biological information known about each species in Caranx is generally related to how important they are commercially. All species are predatory fish, taking smaller fish, crustaceans and cephalopods as prey. Most species form schools as juveniles, but generally become more solitary with age. Reproduction and growth have been studied in a number of species, with these characteristics varying greatly between species.
All species in Caranx are of at least minor importance to fisheries, but a number are much more so due to their abundance in certain regions. Most are considered to be gamefish, with some such as the giant trevally and bluefin trevally highly sought after by anglers. They are generally considered poor to fair quality table fishes, and have had a number of ciguatera poisoning cases attributed to them.
Biology and health sciences
Acanthomorpha
Animals
8240558
https://en.wikipedia.org/wiki/Line%E2%80%93line%20intersection
Line–line intersection
In Euclidean geometry, the intersection of a line and a line can be the empty set, a point, or another line. Distinguishing these cases and finding the intersection have uses, for example, in computer graphics, motion planning, and collision detection. In three-dimensional Euclidean geometry, if two lines are not in the same plane, they have no point of intersection and are called skew lines. If they are in the same plane, however, there are three possibilities: if they coincide (are not distinct lines), they have an infinitude of points in common (namely all of the points on either of them); if they are distinct but have the same slope, they are said to be parallel and have no points in common; otherwise, they have a single point of intersection. The distinguishing features of non-Euclidean geometry are the number and locations of possible intersections between two lines and the number of possible lines with no intersections (parallel lines) with a given line. Formulas A necessary condition for two lines to intersect is that they are in the same plane—that is, are not skew lines. Satisfaction of this condition is equivalent to the tetrahedron with vertices at two of the points on one line and two of the points on the other line being degenerate in the sense of having zero volume. For the algebraic form of this condition, see . Given two points on each line First we consider the intersection of two lines and in two-dimensional space, with line being defined by two distinct points and , and line being defined by two distinct points and . The intersection of line and can be defined using determinants. The determinants can be written out as: When the two lines are parallel or coincident, the denominator is zero. Given two points on each line segment The intersection point above is for the infinitely long lines defined by the points, rather than the line segments between the points, and can produce an intersection point not contained in either of the two line segments. In order to find the position of the intersection in respect to the line segments, we can define lines and in terms of first degree Bézier parameters: (where and are real numbers). The intersection point of the lines is found with one of the following values of or , where and with There will be an intersection if and . The intersection point falls within the first line segment if , and it falls within the second line segment if . These inequalities can be tested without the need for division, allowing rapid determination of the existence of any line segment intersection before calculating its exact point. Given two line equations The and coordinates of the point of intersection of two non-vertical lines can easily be found using the following substitutions and rearrangements. Suppose that two lines have the equations and where and are the slopes (gradients) of the lines and where and are the -intercepts of the lines. At the point where the two lines intersect (if they do), both coordinates will be the same, hence the following equality: We can rearrange this expression in order to extract the value of , and so, To find the coordinate, all we need to do is substitute the value of into either one of the two line equations, for example, into the first: Hence, the point of intersection is Note that if then the two lines are parallel and they do not intersect, unless as well, in which case the lines are coincident and they intersect at every point. 
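As a concrete companion to the segment-intersection test just described, here is a minimal Python sketch using the standard determinant form of the first-degree Bézier parameters t and u; the function and point names are this example's own rather than notation taken from the text.

```python
# Sketch: intersection of two 2D line segments via the first-degree Bezier
# parameters t and u described above. The formulas are the standard
# determinant-based expressions for two points on each line.

def segment_intersection(p1, p2, p3, p4, eps=1e-12):
    """Return the intersection of segments p1-p2 and p3-p4, or None.

    Each point is an (x, y) tuple. Returns None when the lines are
    parallel or coincident, or when the infinite-line intersection falls
    outside either segment (t or u not in [0, 1]).
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4

    # Shared denominator; zero means the lines are parallel or coincident.
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < eps:
        return None

    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / denom

    # As noted above, the existence test only needs the signs of the
    # numerators and denominator, so the divisions could be avoided.
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None


if __name__ == "__main__":
    print(segment_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # (1.0, 1.0)
    print(segment_intersection((0, 0), (1, 0), (0, 1), (1, 1)))  # None (parallel)
```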
Using homogeneous coordinates By using homogeneous coordinates, the intersection point of two implicitly defined lines can be determined quite easily. In 2D, every point can be defined as a projection of a 3D point, given as the ordered triple . The mapping from 3D to 2D coordinates is . We can convert 2D points to homogeneous coordinates by defining them as . Assume that we want to find intersection of two infinite lines in 2-dimensional space, defined as and . We can represent these two lines in line coordinates as and . The intersection of two lines is then simply given by If , the lines do not intersect. More than two lines The intersection of two lines can be generalized to involve additional lines. The existence of and expression for the -line intersection problem are as follows. In two dimensions In two dimensions, more than two lines almost certainly do not intersect at a single point. To determine if they do and, if so, to find the intersection point, write the th equation () as and stack these equations into matrix form as where the th row of the matrix is , is the 2 × 1 vector , and the th element of the column vector is . If has independent columns, its rank is 2. Then if and only if the rank of the augmented matrix is also 2, there exists a solution of the matrix equation and thus an intersection point of the lines. The intersection point, if it exists, is given by where is the Moore–Penrose generalized inverse of (which has the form shown because has full column rank). Alternatively, the solution can be found by jointly solving any two independent equations. But if the rank of is only 1, then if the rank of the augmented matrix is 2 there is no solution but if its rank is 1 then all of the lines coincide with each other. In three dimensions The above approach can be readily extended to three dimensions. In three or more dimensions, even two lines almost certainly do not intersect; pairs of non-parallel lines that do not intersect are called skew lines. But if an intersection does exist it can be found, as follows. In three dimensions a line is represented by the intersection of two planes, each of which has an equation of the form Thus a set of lines can be represented by equations in the 3-dimensional coordinate vector : where now is and is . As before there is a unique intersection point if and only if has full column rank and the augmented matrix does not, and the unique intersection if it exists is given by Nearest points to skew lines In two or more dimensions, we can usually find a point that is mutually closest to two or more lines in a least-squares sense. In two dimensions In the two-dimensional case, first, represent line as a point on the line and a unit normal vector , perpendicular to that line. That is, if and are points on line 1, then let and let which is the unit vector along the line, rotated by a right angle. 
The distance from a point to the line is given by And so the squared distance from a point to a line is The sum of squared distances to many lines is the cost function: This can be rearranged: To find the minimum, we differentiate with respect to and set the result equal to the zero vector: so and so In more than two dimensions While is not well-defined in more than two dimensions, this can be generalized to any number of dimensions by noting that is simply the symmetric matrix with all eigenvalues unity except for a zero eigenvalue in the direction along the line providing a seminorm on the distance between and another point giving the distance to the line. In any number of dimensions, if is a unit vector along the th line, then becomes where is the identity matrix, and so General derivation In order to find the intersection point of a set of lines, we calculate the point with minimum distance to them. Each line is defined by an origin and a unit direction vector . The square of the distance from a point to one of the lines is given from Pythagoras: where is the projection of on line . The sum of distances to the square to all lines is To minimize this expression, we differentiate it with respect to . which results in where is the identity matrix. This is a matrix , with solution , where is the pseudo-inverse of . Non-Euclidean geometry In spherical geometry, any two great circles intersect. In hyperbolic geometry, given any line and any point, there are infinitely many lines through that point that do not intersect the given line.
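To illustrate the least-squares construction in the general derivation above, the sketch below computes the point mutually closest to a set of lines in any dimension, each given by an origin and a direction, using the projector matrices and the pseudo-inverse described in the text; the function and variable names are assumptions of this example.

```python
# Sketch of the "nearest point to several lines" least-squares solution:
# each line i is given by an origin a_i and a unit direction d_i, and the
# minimizer p solves  (sum_i (I - d_i d_i^T)) p = sum_i (I - d_i d_i^T) a_i,
# computed here with the pseudo-inverse.

import numpy as np


def nearest_point_to_lines(origins, directions):
    """Least-squares point closest to k lines in n-dimensional space.

    origins: (k, n) array-like, a point on each line.
    directions: (k, n) array-like, direction of each line (any nonzero length).
    """
    origins = np.asarray(origins, dtype=float)
    directions = np.asarray(directions, dtype=float)
    n = origins.shape[1]

    s = np.zeros((n, n))
    c = np.zeros(n)
    for a, d in zip(origins, directions):
        d = d / np.linalg.norm(d)        # unit vector along the line
        m = np.eye(n) - np.outer(d, d)   # projector onto the line's normal space
        s += m
        c += m @ a
    # The pseudo-inverse also handles degenerate cases (e.g. all lines parallel).
    return np.linalg.pinv(s) @ c


if __name__ == "__main__":
    # Two 2D lines, y = 1 and x = 2, intersect at (2, 1).
    print(nearest_point_to_lines([[0, 1], [2, 0]], [[1, 0], [0, 1]]))
```

For lines that actually intersect, this returns their common point; for skew or noisy lines it returns the mutually closest point discussed above.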
Mathematics
Two-dimensional space
null
4779861
https://en.wikipedia.org/wiki/Pouched%20lamprey
Pouched lamprey
The pouched lamprey (Geotria australis), also known as the piharau in New Zealand's North Island, korokoro, kanakana in the South Island, or wide-mouthed lamprey, is a species in the genus Geotria, which is the only genus in the family Geotriidae. The second species in the genus is the Argentinian lamprey (Geotria macrostoma), which was revalidated as a separate species in 2020. The pouched lamprey is native to the southern hemisphere. It spends the early part of its life in fresh water, migrating to the sea as an adult, and returning to fresh water to spawn and die. Description G. australis, like other lampreys, has a thin eel-like body, and grows up to long. It has two low dorsal fins on the back half. Like other lampreys, it has no jaws, only a sucker. The skin is a striking silver in adult lampreys caught fresh from the sea but soon changes to brown after they have been in fresh water for some time, due to deposition of biliverdin. Adult's eyes are relatively small and located on the side of the head. When fully mature, males develop a baggy pouch under their eyes, which may be used to massage and oxygenate their eggs. There have also been suggestions that the pouch in northern hemisphere species has been used by males during breeding times for gathering stones to make a nest. Life cycle The freshwater ammocoete or larval stage of the life cycle are a dull brown in colour for most of their lives. Ammocoetes remain in fresh water for about four years until undergoing a six-month metamorphosis, changing to silver with blue-green stripes. The central nervous system of the pouched lamprey develops notably during metamorphosis to the large-eyed macropthalmia stage, with particularly large increases in the volume of visual areas of the brain. At this point they migrate downstream to the sea. Adults spend some of their lives in the open sea, living as parasites on other fish. They attach themselves to the gills or side of the fish and rasp at the tissues below. Adults return to fresh water to breed, spending up to eighteen months sexually maturing before spawning. Adults have been recorded living up to 105 days after spawning and wrapping themselves around egg masses to provide parental care. Distribution and habitat The pouched lamprey is widespread in the Southern Hemisphere, occurring in New Zealand, Chile, Argentina, the Falkland Islands, South Georgia and the southwest and southeast corners of Australia. It is the only lamprey species found in New Zealand. Threats Lampreys are preyed on by albatrosses, shags, large fish and marine mammals. It has been hypothesised that the apparent decline in lamprey numbers could be caused by the degradation of water quality in lowland waterways. History Pouched lampreys are a traditional Māori delicacy in New Zealand. Traditional methods for catching lampreys included disturbing the lampreys as they ascended waterfalls and capturing them, or by using , which involved placing a weir across larger rivers which led to a hīnaki (woven trap). Some across the Whanganui River were more than across. Pouched lampreys were widely seen in New Zealand in the mid-19th century, and were adopted as a food by European settlers, due to the history of lampreys as delicacies in Europe.
Biology and health sciences
Agnatha
Animals
1871162
https://en.wikipedia.org/wiki/Spin%E2%80%93orbit%20interaction
Spin–orbit interaction
In quantum mechanics, the spin–orbit interaction (also called spin–orbit effect or spin–orbit coupling) is a relativistic interaction of a particle's spin with its motion inside a potential. A key example of this phenomenon is the spin–orbit interaction leading to shifts in an electron's atomic energy levels, due to electromagnetic interaction between the electron's magnetic dipole, its orbital motion, and the electrostatic field of the positively charged nucleus. This phenomenon is detectable as a splitting of spectral lines, which can be thought of as a Zeeman effect product of two effects: the apparent magnetic field seen from the electron perspective due to special relativity and the magnetic moment of the electron associated with its intrinsic spin due to quantum mechanics. For atoms, energy level splitting produced by the spin–orbit interaction is usually of the same order in size as the relativistic corrections to the kinetic energy and the zitterbewegung effect. The addition of these three corrections is known as the fine structure. The interaction between the magnetic field created by the electron and the magnetic moment of the nucleus is a slighter correction to the energy levels known as the hyperfine structure. A similar effect, due to the relationship between angular momentum and the strong nuclear force, occurs for protons and neutrons moving inside the nucleus, leading to a shift in their energy levels in the nucleus shell model. In the field of spintronics, spin–orbit effects for electrons in semiconductors and other materials are explored for technological applications. The spin–orbit interaction is at the origin of magnetocrystalline anisotropy and the spin Hall effect. In atomic energy levels This section presents a relatively simple and quantitative description of the spin–orbit interaction for an electron bound to a hydrogen-like atom, up to first order in perturbation theory, using some semiclassical electrodynamics and non-relativistic quantum mechanics. This gives results that agree reasonably well with observations. A rigorous calculation of the same result would use relativistic quantum mechanics, using the Dirac equation, and would include many-body interactions. Achieving an even more precise result would involve calculating small corrections from quantum electrodynamics. Energy of a magnetic moment The energy of a magnetic moment in a magnetic field is given by where is the magnetic moment of the particle, and is the magnetic field it experiences. Magnetic field We shall deal with the magnetic field first. Although in the rest frame of the nucleus, there is no magnetic field acting on the electron, there is one in the rest frame of the electron (see classical electromagnetism and special relativity). Ignoring for now that this frame is not inertial, we end up with the equation where is the velocity of the electron, and is the electric field it travels through. Here, in the non-relativistic limit, we assume that the Lorentz factor . Now we know that is radial, so we can rewrite . Also we know that the momentum of the electron . Substituting these and changing the order of the cross product (using the identity ) gives Next, we express the electric field as the gradient of the electric potential . Here we make the central field approximation, that is, that the electrostatic potential is spherically symmetric, so is only a function of radius. This approximation is exact for hydrogen and hydrogen-like systems. 
Now we can say that where is the potential energy of the electron in the central field, and is the elementary charge. Now we remember from classical mechanics that the angular momentum of a particle . Putting it all together, we get It is important to note at this point that is a positive number multiplied by , meaning that the magnetic field is parallel to the orbital angular momentum of the particle, which is itself perpendicular to the particle's velocity. Spin magnetic moment of the electron The spin magnetic moment of the electron is where is the spin (or intrinsic angular-momentum) vector, is the Bohr magneton, and is the electron-spin g-factor. Here is a negative constant multiplied by the spin, so the spin magnetic moment is antiparallel to the spin. The spin–orbit potential consists of two parts. The Larmor part is connected to the interaction of the spin magnetic moment of the electron with the magnetic field of the nucleus in the co-moving frame of the electron. The second contribution is related to Thomas precession. Larmor interaction energy The Larmor interaction energy is Substituting in this equation expressions for the spin magnetic moment and the magnetic field, one gets Now we have to take into account Thomas precession correction for the electron's curved trajectory. Thomas interaction energy In 1926 Llewellyn Thomas relativistically recomputed the doublet separation in the fine structure of the atom. Thomas precession rate is related to the angular frequency of the orbital motion of a spinning particle as follows: where is the Lorentz factor of the moving particle. The Hamiltonian producing the spin precession is given by To the first order in , we obtain Total interaction energy The total spin–orbit potential in an external electrostatic potential takes the form The net effect of Thomas precession is the reduction of the Larmor interaction energy by factor of about 1/2, which came to be known as the Thomas half. Evaluating the energy shift Thanks to all the above approximations, we can now evaluate the detailed energy shift in this model. Note that and are no longer conserved quantities. In particular, we wish to find a new basis that diagonalizes both (the non-perturbed Hamiltonian) and . To find out what basis this is, we first define the total angular momentum operator Taking the dot product of this with itself, we get (since and commute), and therefore It can be shown that the five operators , , , , and all commute with each other and with ΔH. Therefore, the basis we were looking for is the simultaneous eigenbasis of these five operators (i.e., the basis where all five are diagonal). Elements of this basis have the five quantum numbers: (the "principal quantum number"), (the "total angular momentum quantum number"), (the "orbital angular momentum quantum number"), (the "spin quantum number"), and (the " component of total angular momentum"). To evaluate the energies, we note that for hydrogenic wavefunctions (here is the Bohr radius divided by the nuclear charge ); and Final energy shift We can now say that where the spin-orbit coupling constant is For the exact relativistic result, see the solutions to the Dirac equation for a hydrogen-like atom. The derivation above calculates the interaction energy in the (momentaneous) rest frame of the electron and in this reference frame there's a magnetic field that's absent in the rest frame of the nucleus. Another approach is to calculate it in the rest frame of the nucleus, see for example George P. 
Fisher: Electric Dipole Moment of a Moving Magnetic Dipole (1971). However, the rest-frame calculation is sometimes avoided, because one has to account for hidden momentum. In solids A crystalline solid (semiconductor, metal etc.) is characterized by its band structure. While on the overall scale (including the core levels) the spin–orbit interaction is still a small perturbation, it may play a relatively more important role if we zoom in to bands close to the Fermi level. The atomic (spin–orbit) interaction, for example, splits bands that would otherwise be degenerate, and the particular form of this spin–orbit splitting (typically of the order of a few to a few hundred millielectronvolts) depends on the particular system. The bands of interest can then be described by various effective models, usually based on some perturbative approach. An example of how the atomic spin–orbit interaction influences the band structure of a crystal is explained in the article about Rashba and Dresselhaus interactions. In crystalline solids containing paramagnetic ions, e.g. ions with an unfilled d or f atomic subshell, localized electronic states exist. In this case, the atomic-like structure of the electronic levels is shaped by intrinsic magnetic spin–orbit interactions and by interactions with crystalline electric fields. This structure is called the fine electronic structure. For rare-earth ions the spin–orbit interactions are much stronger than the crystal electric field (CEF) interactions. The strong spin–orbit coupling makes J a relatively good quantum number, because the first excited multiplet is at least ~130 meV (1500 K) above the primary multiplet. As a result, its thermal population at room temperature (300 K) is negligibly small. In this case, a -fold degenerate primary multiplet split by an external CEF can be treated as the basic contribution to the analysis of such systems' properties. In the case of approximate calculations for the basis , the Hund principles known from atomic physics are applied to determine the primary multiplet: The ground state of the terms' structure has the maximal value allowed by the Pauli exclusion principle. The ground state has a maximal allowed value, with maximal . The primary multiplet has a corresponding when the shell is less than half full, and when it is more than half full. The , and of the ground multiplet are determined by Hund's rules. The ground multiplet is degenerate – its degeneracy is removed by CEF interactions and magnetic interactions. CEF and magnetic interactions somewhat resemble the Stark and Zeeman effects known from atomic physics. The energies and eigenfunctions of the discrete fine electronic structure are obtained by diagonalization of the (2J + 1)-dimensional matrix. The fine electronic structure can be directly detected by many different spectroscopic methods, including inelastic neutron scattering (INS) experiments. In the case of strong cubic CEF interactions (for 3d transition-metal ions), groups of levels form (e.g. T2g, A2g), which are partially split by spin–orbit interactions and (if they occur) lower-symmetry CEF interactions. The energies and eigenfunctions of the discrete fine electronic structure (for the lowest term) are obtained by diagonalization of the (2L + 1)(2S + 1)-dimensional matrix. At zero temperature (T = 0 K) only the lowest state is occupied. The magnetic moment at T = 0 K is equal to the moment of the ground state. This allows the total, spin and orbital moments to be evaluated.
The eigenstates and corresponding eigenfunctions can be found from direct diagonalization of Hamiltonian matrix containing crystal field and spin–orbit interactions. Taking into consideration the thermal population of states, the thermal evolution of the single-ion properties of the compound is established. This technique is based on the equivalent operator theory defined as the CEF widened by thermodynamic and analytical calculations defined as the supplement of the CEF theory by including thermodynamic and analytical calculations. Examples of effective Hamiltonians Hole bands of a bulk (3D) zinc-blende semiconductor will be split by into heavy and light holes (which form a quadruplet in the -point of the Brillouin zone) and a split-off band ( doublet). Including two conduction bands ( doublet in the -point), the system is described by the effective eight-band model of Kohn and Luttinger. If only top of the valence band is of interest (for example when , Fermi level measured from the top of the valence band), the proper four-band effective model is where are the Luttinger parameters (analogous to the single effective mass of a one-band model of electrons) and are angular momentum 3/2 matrices ( is the free electron mass). In combination with magnetization, this type of spin–orbit interaction will distort the electronic bands depending on the magnetization direction, thereby causing magnetocrystalline anisotropy (a special type of magnetic anisotropy). If the semiconductor moreover lacks the inversion symmetry, the hole bands will exhibit cubic Dresselhaus splitting. Within the four bands (light and heavy holes), the dominant term is where the material parameter for GaAs (see pp. 72 in Winkler's book, according to more recent data the Dresselhaus constant in GaAs is 9 eVÅ3; the total Hamiltonian will be ). Two-dimensional electron gas in an asymmetric quantum well (or heterostructure) will feel the Rashba interaction. The appropriate two-band effective Hamiltonian is where is the 2 × 2 identity matrix, the Pauli matrices and the electron effective mass. The spin–orbit part of the Hamiltonian, is parametrized by , sometimes called the Rashba parameter (its definition somewhat varies), which is related to the structure asymmetry. Above expressions for spin–orbit interaction couple spin matrices and to the quasi-momentum , and to the vector potential of an AC electric field through the Peierls substitution . They are lower order terms of the Luttinger–Kohn k·p perturbation theory in powers of . Next terms of this expansion also produce terms that couple spin operators of the electron coordinate . Indeed, a cross product is invariant with respect to time inversion. In cubic crystals, it has a symmetry of a vector and acquires a meaning of a spin–orbit contribution to the operator of coordinate. For electrons in semiconductors with a narrow gap between the conduction and heavy hole bands, Yafet derived the equation where is a free electron mass, and is a -factor properly renormalized for spin–orbit interaction. This operator couples electron spin directly to the electric field through the interaction energy . Oscillating electromagnetic field Electric dipole spin resonance (EDSR) is the coupling of the electron spin with an oscillating electric field. 
Similar to electron spin resonance (ESR), in which electrons can be excited with an electromagnetic wave with the energy given by the Zeeman effect, in EDSR the resonance can be achieved if the frequency is related to the energy band splitting given by the spin–orbit coupling in solids. While in ESR the coupling is obtained via the magnetic part of the EM wave with the electron magnetic moment, EDSR is the coupling of the electric part with the spin and motion of the electrons. This mechanism has been proposed for controlling the spin of electrons in quantum dots and other mesoscopic systems.
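As a numerical illustration of the first-order energy shift discussed in the atomic-levels section above, the following sketch evaluates the standard textbook hydrogen-like result and reproduces the familiar 2p fine-structure splitting of roughly 4.5×10^-5 eV; the formula used, the constants and the function names are assumptions of this example rather than expressions quoted from the text.

```python
# Sketch: first-order spin-orbit shift for a hydrogen-like atom,
#   dE = Ry * Z^4 * alpha^2 / (2 n^3)
#        * [j(j+1) - l(l+1) - 3/4] / [l (l + 1/2) (l + 1)],   valid for l > 0,
# which is the standard textbook form of the first-order result (Thomas half
# included). The constants below are assumed, not quoted from the article.

ALPHA = 7.2973525693e-3   # fine-structure constant
RYDBERG_EV = 13.605693    # Rydberg energy in eV


def spin_orbit_shift_ev(n, l, j, z=1):
    """First-order spin-orbit energy shift (eV) for quantum numbers n, l, j."""
    if l == 0:
        return 0.0  # no orbital angular momentum, no shift at this order
    angular = j * (j + 1) - l * (l + 1) - 0.75           # expectation of 2 L.S / hbar^2
    radial = RYDBERG_EV * z**4 * ALPHA**2 / (2 * n**3)   # prefactor from <1/r^3>
    return radial * angular / (l * (l + 0.5) * (l + 1))


if __name__ == "__main__":
    # Hydrogen 2p: splitting between j = 3/2 and j = 1/2 is ~4.5e-5 eV (~10.9 GHz).
    up = spin_orbit_shift_ev(2, 1, 1.5)
    down = spin_orbit_shift_ev(2, 1, 0.5)
    print(f"2p(3/2) shift: {up:+.2e} eV")
    print(f"2p(1/2) shift: {down:+.2e} eV")
    print(f"2p fine-structure splitting: {up - down:.2e} eV")
```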
Physical sciences
Atomic physics
Physics
1871740
https://en.wikipedia.org/wiki/Ultraviolet%20index
Ultraviolet index
The ultraviolet index, or UV index, is an international standard measurement of the strength of the sunburn-producing ultraviolet (UV) radiation at a particular place and time. It is primarily used in daily and hourly forecasts aimed at the general public. The UV index is designed as an open-ended linear scale, directly proportional to the intensity of UV radiation, and adjusting for wavelength based on what causes human skin to sunburn. The purpose of the UV index is to help people effectively protect themselves from UV radiation, which has health benefits in moderation but in excess causes sunburn, skin aging, DNA damage, skin cancer, immunosuppression, and eye damage, such as cataracts. The scale was developed by Canadian scientists in 1992, and then adopted and standardized by the UN's World Health Organization and World Meteorological Organization in 1994. Public health organizations recommend that people protect themselves (for example, by applying sunscreen to the skin and wearing a hat and sunglasses) if they spend substantial time outdoors when the UV index is 3 or higher; see the table below for more detailed recommendations. Description The UV index is a linear scale that measures the intensity of UV radiation with respect to sunburn. For example, assuming similar spectral power distributions, radiation with a UV index of 12 is twice as intense as radiation at a UV index of 6. For a wide range of timescales, sunburn in response to controlled UV radiation occurs in proportion to the total number of photons delivered, not varying with the intensity or duration of exposure. Therefore, under similar conditions, a person who develops a sunburn after 30 minutes of exposure to UV index 6 radiation would most likely develop a sunburn after 15 minutes of exposure to UV index 12 radiation, since it is twice the intensity but half the duration. This linear scale is unlike other common environmental scales such as decibels or the Richter scale, which are logarithmic (the severity multiplies for each step on the scale, growing exponentially). An index of 0 corresponds to zero UV radiation, as is essentially the case at night. An index of 10 corresponds roughly to midday summer sunlight in the tropics with a clear sky when the UV index was originally designed; now summertime index values in the tens are common for tropical latitudes, mountainous altitudes, areas with ice/water reflectivity and areas with above-average ozone layer depletion. While the UV index can be calculated from a direct measurement of the UV spectral power at a given location, as some inexpensive portable devices are able to approximate, the value given in weather reports is usually a prediction based on a computer model. Although this may be in error (especially when cloud conditions are unexpectedly heavy or light), it is usually within ±1 UV index unit as that which would be measured. When the UV index is presented on a daily basis, it represents UV intensity around the sun's highest point in the day, called solar noon, halfway between sunrise and sunset. This typically occurs between 11:30 and 12:30, or between 12:30 and 13:30 in areas where daylight saving time is being observed. Predictions are made by a computer model that accounts for the effects of sun-earth distance, solar zenith angle, total ozone amount, tropospheric aerosol optical depth, elevation, snow/ice reflectivity and cloud transmission, all of which influence the amount of UV radiation at the surface. 
Technical definition The UV index is a number linearly related to the intensity of sunburn-producing UV radiation at a given point on the Earth's surface. It cannot be simply related to the irradiance (measured in W/m2) because the UV of greatest concern occupies a spectrum of wavelengths from 295 to 325 nm, and shorter wavelengths have already been absorbed a great deal when they arrive at the earth's surface. However, skin damage from sunburn is related to wavelength, the shorter wavelengths being much more damaging. The UV power spectrum (expressed as watts per square meter per nanometer of wavelength) is therefore multiplied by a weighting curve known as the CIE-standard McKinlay–Diffey erythemal action spectrum. There are some older formulas for the spectrum, resulting in differences of up to 2%. The result is integrated over the whole spectrum. This gives a weighted figure called the Diffey-weighted UV irradiance (DUV) or the erythemal dose rate. Since the normalization weight is 1 for wavelengths between 250nm and 298nm, a source of a given DUV irradiance causes roughly as much sunburn as a radiation source emitting those wavelengths at the same intensity, although inaccuracies in the spectrum definition and varying reactions by skin type may mean this relationship does not actually hold. When the index was designed, the typical midday summer sunlight was around 250 mW/m2. Thus, for convenience, the DUV is divided by 25 mW/m2 to produce an index nominally from 0 to 11+, though ozone depletion is now resulting in higher values. To illustrate the spectrum weighting principle, the incident power density in midday summer sunlight is typically 0.6 mW/(nm m2) at 295 nm, 74 mW/(nm m2) at 305 nm, and 478 mW/(nm m2) at 325 nm. (Note the huge absorption that has already taken place in the atmosphere at short wavelengths.) The erythemal weighting factors applied to these figures are 1.0, 0.22, and 0.003 respectively. (Also note the huge increase in sunburn damage caused by the shorter wavelengths; e.g., for the same irradiance, 305 nm is 22% as damaging as 295 nm, and 325 nm is 0.3% as damaging as 295 nm.) Integration of these values using all the intermediate weightings over the full spectral range of 290 nm to 400 nm produces a figure of 264 mW/m2 (the DUV), which is then divided by 25 mW/m2 to give a UV index of 10.6. History After sporadic attempts by various meteorologists to define a "sunburn index" and growing concern about ozone depletion, Environment Canada scientists James B. Kerr, C. Thomas McElroy, and David I. Wardle invented the modern UV index in Toronto, Ontario. Environment Canada launched it as part of the weather forecast on May 27, 1992, making Canada the first country in the world to issue official predictions of UV levels for the next day. Many other countries followed suit with their own UV indices. Initially, the methods of calculating and reporting a UV index varied significantly from country to country. A global UV index, first standardized by the World Health Organization and World Meteorological Organization in 1994, gradually replaced the inconsistent regional versions, specifying not only a uniform calculation method (the Canadian definition) but also standard colors and graphics for visual media. On December 29, 2003, a world-record ground-level UV index of 43.3 was detected at Bolivia's Licancabur volcano, though other scientists dispute readings higher than 26. In 2005, Australia and the United States launched the UV Alert. 
While the two countries have different baseline UV intensity requirements before issuing an alert, their common goal is to raise awareness of the dangers of over-exposure to the Sun on days with intense UV radiation. In 2007, the United Nations honored UV index inventors Kerr, McElroy, and Wardle with the Innovators Award for their far-reaching work on reducing public health risks from UV radiation. In the same year, a survey among meteorologists ranked the development of the UV index as #11 on The Weather Channel's 100 Biggest Weather Moments. In 2022, a mobile phone application that provides localized information on ultraviolet (UV) radiation levels was launched by the World Health Organization (WHO), the World Meteorological Organization (WMO), the United Nations Environment Programme (UNEP) and the International Labour Organization (ILO). Index usage The recommendations below are for average adults with lightly tanned skin (Fitzpatrick scale of skin colour: Type II). Those with darker skin (Type IV+) are more likely to withstand greater sun exposure, while extra precautions are needed for children, seniors, particularly fair-skinned adults, and those who have greater sun sensitivity for medical reasons or from UV exposure in previous days. When the day's predicted UV index is within various numerical ranges, the recommendations for protection are as follows: Some sunshine prediction and advice apps have been released. These use the UV index and Fitzpatrick scale skin type to calculate the maximum exposure time before receiving a sunburn. The Fitzpatrick scale is not sufficient to precisely estimate the minimum radiation dose needed for sunburn. Research has found broad variation within and between populations, e.g. for skin type V subjects the MED in the US is 60–100 mJ/cm2 vs. 120–240 mJ/cm2 in Taiwan. Neglecting weighting, 9 mJ/cm2 is 1 UV index hour.
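To make the weighting-and-integration procedure of the technical definition concrete, here is a small Python sketch that applies the standard CIE erythemal action spectrum (consistent with the 1.0, 0.22 and 0.003 weights quoted above) to a sampled UV spectrum, divides the weighted integral by 25 mW/m2 to obtain a UV index, and converts an index plus a minimal erythemal dose (MED) into an approximate time to sunburn using the 9 mJ/cm2 per UV-index-hour relation; the piecewise weighting formula, the sample spectrum and the example MED are assumptions of this sketch, not values taken from the article.

```python
# Sketch: UV index from a sampled spectral irradiance, plus a rough
# time-to-sunburn estimate. The piecewise weighting is the standard CIE
# (McKinlay-Diffey) erythemal action spectrum, which matches the weights
# quoted above (1.0 at 295 nm, ~0.22 at 305 nm, ~0.003 at 325 nm); the
# sample spectrum and the MED value are illustrative, not measured data.

import numpy as np


def erythemal_weight(wavelength_nm):
    """CIE erythemal action spectrum (dimensionless weight)."""
    w = np.asarray(wavelength_nm, dtype=float)
    weight = np.where(w <= 298.0, 1.0,
             np.where(w <= 328.0, 10.0 ** (0.094 * (298.0 - w)),
                                  10.0 ** (0.015 * (140.0 - w))))
    return np.where((w < 250.0) | (w > 400.0), 0.0, weight)


def uv_index(wavelengths_nm, irradiance_mw_per_m2_nm):
    """Trapezoidal integral of the weighted spectrum (DUV) divided by 25 mW/m^2."""
    wl = np.asarray(wavelengths_nm, dtype=float)
    weighted = erythemal_weight(wl) * np.asarray(irradiance_mw_per_m2_nm, dtype=float)
    duv = np.sum(0.5 * (weighted[1:] + weighted[:-1]) * np.diff(wl))  # mW/m^2
    return duv / 25.0


def minutes_to_burn(uvi, med_mj_per_cm2):
    """Approximate minutes to reach one MED at a given UV index.

    Uses the relation quoted above: neglecting spectral differences,
    one UV-index-hour delivers about 9 mJ/cm^2 of erythemal dose.
    """
    return 60.0 * med_mj_per_cm2 / (9.0 * uvi)


if __name__ == "__main__":
    wl = np.arange(290.0, 401.0, 1.0)
    # Made-up smooth spectrum pinned to the three sample points quoted above.
    spectrum = np.interp(wl, [290, 295, 305, 325, 400],
                             [0.1, 0.6, 74.0, 478.0, 600.0])
    print(f"UV index: {uv_index(wl, spectrum):.1f}")
    print(f"Minutes to a 20 mJ/cm^2 MED at UV index 8: {minutes_to_burn(8, 20):.0f}")
```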
Physical sciences
Other_2
Basics and measurement
1872854
https://en.wikipedia.org/wiki/Biochemical%20cascade
Biochemical cascade
A biochemical cascade, also known as a signaling cascade or signaling pathway, is a series of chemical reactions that occur within a biological cell when initiated by a stimulus. This stimulus, known as a first messenger, acts on a receptor that is transduced to the cell interior through second messengers which amplify the signal and transfer it to effector molecules, causing the cell to respond to the initial stimulus. Most biochemical cascades are series of events, in which one event triggers the next, in a linear fashion. At each step of the signaling cascade, various controlling factors are involved to regulate cellular actions, in order to respond effectively to cues about their changing internal and external environments. An example would be the coagulation cascade of secondary hemostasis, which leads to fibrin formation and thus the initiation of blood coagulation. Another example, the sonic hedgehog signaling pathway, is one of the key regulators of embryonic development and is present in all bilaterians. Signaling proteins give cells information to make the embryo develop properly. When the pathway malfunctions, it can result in diseases like basal cell carcinoma. Recent studies point to the role of hedgehog signaling in regulating adult stem cells involved in maintenance and regeneration of adult tissues. The pathway has also been implicated in the development of some cancers. Drugs that specifically target hedgehog signaling to fight diseases are being actively developed by a number of pharmaceutical companies. Introduction Signaling cascades Cells require fully functional cellular machinery to live. When they belong to complex multicellular organisms, they need to communicate among themselves and work together in order to sustain the organism. This communication between cells triggers intracellular signaling cascades, termed signal transduction pathways, that regulate specific cellular functions. Each signal transduction event begins with a primary extracellular messenger that binds to a transmembrane or nuclear receptor, initiating intracellular signals. The complex formed produces or releases second messengers that integrate and adapt the signal, amplifying it by activating molecular targets, which in turn trigger effectors that lead to the desired cellular response. Transducers and effectors Signal transduction is realized by the activation of specific receptors and the consequent production/delivery of second messengers, such as Ca2+ or cAMP. These molecules operate as signal transducers, triggering intracellular cascades and in turn amplifying the initial signal. Two main signal transduction mechanisms have been identified: via nuclear receptors, or via transmembrane receptors. In the first, the first messenger crosses the cell membrane, binding and activating intracellular receptors localized in the nucleus or cytosol, which then act as transcription factors directly regulating gene expression. This is possible due to the lipophilic nature of those ligands, mainly hormones. In signal transduction via transmembrane receptors, the first messenger binds to the extracellular domain of the transmembrane receptor, activating it. These receptors may have intrinsic catalytic activity, may be coupled to effector enzymes, or may be associated with ion channels.
Therefore, there are four main transmembrane receptor types: G protein coupled receptors (GPCRs), tyrosine kinase receptors (RTKs), serine/threonine kinase receptors (RSTKs), and ligand-gated ion channels (LGICs). Second messengers can be classified into three classes: Hydrophilic/cytosolic – are soluble in water and are localized at the cytosol, including cAMP, cGMP, IP3, Ca2+, cADPR and S1P. Their main targets are protein kinases as PKA and PKG, being then involved in phosphorylation mediated responses. Hydrophobic/membrane-associated – are insoluble in water and membrane-associated, being localized at intermembrane spaces, where they can bind to membrane-associated effector proteins. Examples: PIP3, DAG, phosphatidic acid, arachidonic acid and ceramide. They are involved in regulation of kinases and phosphatases, G protein associated factors and transcriptional factors. Gaseous – can be widespread through cell membrane and cytosol, including nitric oxide and carbon monoxide. Both of them can activate cGMP and, besides of being capable of mediating independent activities, they also can operate in a coordinated mode. Cellular response The cellular response in signal transduction cascades involves alteration of the expression of effector genes or activation/inhibition of targeted proteins. Regulation of protein activity mainly involves phosphorylation/dephosphorylation events, leading to its activation or inhibition. It is the case for the vast majority of responses as a consequence of the binding of the primary messengers to membrane receptors. This response is quick, as it involves regulation of molecules that are already present in the cell. On the other hand, the induction or repression of the expression of genes requires the binding of transcriptional factors to the regulatory sequences of these genes. The transcriptional factors are activated by the primary messengers, in most cases, due to their function as nuclear receptors for these messengers. The secondary messengers like DAG or Ca2+ could also induce or repress gene expression, via transcriptional factors. This response is slower than the first because it involves more steps, like transcription of genes and then the effect of newly formed proteins in a specific target. The target could be a protein or another gene. Examples of biochemical cascades In biochemistry, several important enzymatic cascades and signal transduction cascades participate in metabolic pathways or signaling networks, in which enzymes are usually involved to catalyze the reactions. For example, the tissue factor pathway in the coagulation cascade of secondary hemostasis is the primary pathway leading to fibrin formation, and thus, the initiation of blood coagulation. The pathways are a series of reactions, in which a zymogen (inactive enzyme precursor) of a serine protease and its glycoprotein co-factors are activated to become active components that then catalyze the next reaction in the cascade, ultimately resulting in cross-linked fibrin. Another example, sonic hedgehog signaling pathway, is one of the key regulators of embryonic development and is present in all bilaterians. Different parts of the embryo have different concentrations of hedgehog signaling proteins, which give cells information to make the embryo develop properly and correctly into a head or a tail. When the pathway malfunctions, it can result in diseases like basal cell carcinoma. 
Recent studies point to the role of hedgehog signaling in regulating adult stem cells involved in maintenance and regeneration of adult tissues. The pathway has also been implicated in the development of some cancers. Drugs that specifically target hedgehog signaling to fight diseases are being actively developed by a number of pharmaceutical companies. Most biochemical cascades are series of events, in which one event triggers the next, in a linear fashion. Biochemical cascades include: The Complement system The Insulin Signaling Pathway The Sonic hedgehog Signaling Pathway The Wnt signaling pathway The JAK-STAT signaling pathway The Adrenergic receptor Pathways The Acetylcholine receptor Pathways The Mitogen-activated protein kinase cascade Conversely, negative cascades include events that are in a circular fashion, or can cause or be caused by multiple events. Negative cascades include: Ischemic cascade Cell-specific biochemical cascades Epithelial cells Adhesion is an essential process to epithelial cells so that epithelium can be formed and cells can be in permanent contact with extracellular matrix and other cells. Several pathways exist to accomplish this communication and adhesion with environment. But the main signalling pathways are the cadherin and integrin pathways. The cadherin pathway is present in adhesion junctions or in desmosomes and it is responsible for epithelial adhesion and communication with adjacent cells. Cadherin is a transmembrane glycoprotein receptor that establishes contact with another cadherin present in the surface of a neighbour cell forming an adhesion complex. This adhesion complex is formed by β-catenin and α-catenin, and p120CAS is essential for its stabilization and regulation. This complex then binds to actin, leading to polymerization. For actin polymerization through the cadherin pathway, proteins of the Rho GTPases family are also involved. This complex is regulated by phosphorylation, which leads to downregulation of adhesion. Several factors can induce the phosphorylation, like EGF, HGF or v-Src. The cadherin pathway also has an important function in survival and proliferation because it regulates the concentration of cytoplasmic β-catenin. When β-catenin is free in the cytoplasm, normally it is degraded, however if the Wnt signalling is activated, β-catenin degradation is inhibited and it is translocated to the nucleus where it forms a complex with transcription factors. This leads to activation of genes responsible for cell proliferation and survival. So the cadherin-catenin complex is essential for cell fate regulation. Integrins are heterodimeric glycoprotein receptors that recognize proteins present in the extracellular matrix, like fibronectin and laminin. In order to function, integrins have to form complexes with ILK and Fak proteins. For adhesion to the extracellular matrix, ILK activate the Rac and Cdc42 proteins and leading to actin polymerization. ERK also leads to actin polymerization through activation of cPLA2. Recruitment of FAK by integrin leads to Akt activation and this inhibits pro-apoptotic factors like BAD and Bax. When adhesion through integrins do not occur the pro-apoptotic factors are not inhibited and resulting in apoptosis. Hepatocytes The hepatocyte is a complex and multifunctional differentiated cell whose cell response will be influenced by the zone in hepatic lobule, because concentrations of oxygen and toxic substances present in the hepatic sinusoids change from periportal zone to centrilobular zone10. 
The hepatocytes of the intermediate zone have the appropriate morphological and functional features, since their environment has average concentrations of oxygen and other substances. This specialized cell is capable of:
Regulating glucose metabolism – via cAMP/PKA/TORC (transducers of regulated CREB)/CRE, PIP3/PKB and PLC/IP3 – expression of enzymes for the synthesis, storage and distribution of glucose
Synthesizing acute phase proteins – via JAK/STAT/APRE (acute phase response element) – expression of C-reactive protein, globulin protease inhibitors, complement, coagulation and fibrinolytic system components, and iron homeostasis proteins
Regulating iron homeostasis (acute phase independent) – via Smads/HAMP – hepcidin expression
Regulating lipid metabolism – via LXR/LXRE (LXR response element) – expression of ApoE, CETP, FAS and LPL
Exocrine production of bile salts and other compounds – via LXR/LXRE – expression of CYP7A1 and ABC transporters
Degradation of toxic substances – via LXR/LXRE – expression of ABC transporters
Endocrine production – via JAK/STAT/GHRE (growth hormone response element) – IGF-1 and IGFBP-3 expression; via THR/THRE (thyroid hormone response element) – angiotensinogen expression
Regenerating itself by hepatocyte mitosis – via STAT and Gab1: RAS/MAPK, PLC/IP3 and PI3K/FAK – cell growth, proliferation, survival, invasion and motility
The hepatocyte also regulates other functions, such as constitutive synthesis of proteins (albumin, ALT and AST) that influence the synthesis or activation of other molecules (synthesis of urea and essential amino acids), activation of vitamin D, utilization of vitamin K, expression of the vitamin A transporter and conversion of thyroxine. Neurons Purinergic signalling has an essential role in the interactions between neurons and glial cells, allowing them to detect action potentials and modulate neuronal activity, and contributing to the regulation of intra- and extracellular homeostasis. Besides acting as a purinergic neurotransmitter, ATP acts as a trophic factor in cellular development and growth, being involved in microglia activation and migration, and also in axonal myelination by oligodendrocytes. There are two main types of purinergic receptors, P1 binding to adenosine, and P2 binding to ATP or ADP, presenting different signalling cascades. The Nrf2/ARE signalling pathway has a fundamental role in fighting oxidative stress, to which neurons are especially vulnerable due to their high oxygen consumption and high lipid content. This neuroprotective pathway involves control of neuronal activity by perisynaptic astrocytes and neuronal glutamate release, with the establishment of tripartite synapses. Nrf2/ARE activation leads to a higher expression of enzymes involved in glutathione synthesis and metabolism, which have a key role in the antioxidant response. The LKB1/NUAK1 signalling pathway regulates terminal axon branching in cortical neurons, via the capture of locally immobilized mitochondria. Besides NUAK1, the LKB1 kinase acts on other effector enzymes such as SAD-A/B and MARK, thereby regulating neuronal polarization and axonal growth, respectively. These kinase cascades also implicate Tau and other MAPs. Extended knowledge of these and other neuronal pathways could provide new potential therapeutic targets for several chronic neurodegenerative diseases such as Alzheimer's, Parkinson's and Huntington's disease, as well as amyotrophic lateral sclerosis. Blood cells The blood cells (erythrocytes, leukocytes and platelets) are produced by hematopoiesis. 
The main function of erythrocytes is O2 delivery to the tissues; this transfer occurs by diffusion and is determined by the O2 tension (PO2). The erythrocyte is able to sense the tissue's need for O2 and cause a change in vascular caliber through the pathway of ATP release, which requires an increase in cAMP and is regulated by phosphodiesterase (PDE). This pathway can be triggered via two mechanisms: a physiological stimulus (such as reduced O2 tension) and activation of the prostacyclin receptor (IPR). This pathway includes heterotrimeric G proteins, adenylyl cyclase (AC), protein kinase A (PKA), the cystic fibrosis transmembrane conductance regulator (CFTR), and a final conduit that transports ATP to the vascular lumen (pannexin 1 or the voltage-dependent anion channel (VDAC)). The released ATP acts on purinergic receptors on endothelial cells, triggering the synthesis and release of several vasodilators, such as nitric oxide (NO) and prostacyclin (PGI2). The current model of the leukocyte adhesion cascade includes many sequential steps. The integrin-mediated adhesion of leukocytes to endothelial cells is associated with morphological changes in both leukocytes and endothelial cells, which together support leukocyte migration through the venular walls. Rho and Ras small GTPases are involved in the principal leukocyte signaling pathways underlying chemokine-stimulated integrin-dependent adhesion, and have important roles in regulating cell shape, adhesion and motility. After a vascular injury occurs, platelets are activated by locally exposed collagen (glycoprotein (GP) VI receptor), locally generated thrombin (PAR1 and PAR4 receptors), platelet-derived thromboxane A2 (TxA2) (TP receptor) and ADP (P2Y1 and P2Y12 receptors) that is either released from damaged cells or secreted from platelet dense granules. The von Willebrand factor (VWF) serves as an essential accessory molecule. In general terms, platelet activation initiated by an agonist triggers a signaling cascade that leads to an increase in the cytosolic calcium concentration. Consequently, the integrin αIIbβ3 is activated, and its binding to fibrinogen allows the aggregation of platelets with each other. The increase in cytosolic calcium also leads to shape change and TxA2 synthesis, leading to signal amplification. Lymphocytes The main goal of biochemical cascades in lymphocytes is the secretion of molecules that can suppress altered cells or eliminate pathogenic agents, through proliferation, differentiation and activation of these cells. The antigen receptors therefore play a central role in signal transduction in lymphocytes, because when antigens interact with them they trigger a cascade of signaling events. These receptors, which recognize soluble antigen (B cells) or antigen linked to a molecule on antigen-presenting cells (T cells), do not have long cytoplasmic tails, so they are anchored to signaling proteins that contain a long cytoplasmic tail with a motif that can be phosphorylated (ITAM – immunoreceptor tyrosine-based activation motif), giving rise to different signaling pathways. The antigen receptor and the signaling protein form a stable complex, named BCR or TCR in B or T cells, respectively. The Src family of kinases is essential for signal transduction in these cells, because it is responsible for the phosphorylation of ITAMs. 
Therefore, Lyn and Lck, in B and T lymphocytes respectively, phosphorylate the immunoreceptor tyrosine-based activation motifs after antigen recognition and the conformational change of the receptor, which leads to the binding of the Syk/Zap-70 kinases to the ITAM and their activation. The Syk kinase is specific to B lymphocytes, and Zap-70 is present in T cells. After activation of these enzymes, some adaptor proteins are phosphorylated, such as BLNK (B cells) and LAT (T cells). Once phosphorylated, these proteins become activated and allow the binding of other enzymes that continue the biochemical cascade. One example of a protein that binds to adaptor proteins and becomes activated is PLC, which is very important in lymphocyte signaling pathways. PLC is responsible for PKC activation, via DAG and Ca2+, which leads to phosphorylation of the CARMA1 molecule and formation of the CBM complex. This complex activates the IKK kinase, which phosphorylates IκB and thereby allows the translocation of NF-κB to the nucleus and the transcription of genes encoding cytokines, for example. Other transcription factors like NFAT and the AP1 complex are also important for the transcription of cytokines. The differentiation of B cells into plasma cells is also an example of a signaling mechanism in lymphocytes, induced by a cytokine receptor. In this case, some interleukins bind to a specific receptor, which leads to activation of the MAPK/ERK pathway. Consequently, the BLIMP1 protein is translated and inhibits PAX5, allowing immunoglobulin gene transcription and activation of XBP1 (important for formation of the secretory apparatus and enhancement of protein synthesis). Also, the coreceptors (CD28/CD19) play an important role because they can improve antigen/receptor binding and initiate parallel cascade events, like the activation of PI3 kinase. PIP3 is then responsible for the activation of several proteins, such as Vav (which leads to activation of the JNK pathway and consequently of c-Jun) and Btk (which can also activate PLC). Bones Wnt signaling pathway The Wnt signaling pathway can be divided into canonical and non-canonical branches. The canonical signaling involves binding of Wnt to Frizzled and the LRP5 co-receptor, leading to GSK3 phosphorylation and inhibition of β-catenin degradation, resulting in its accumulation and translocation to the nucleus, where it acts as a transcription factor. The non-canonical Wnt signaling can be divided into the planar cell polarity (PCP) pathway and the Wnt/calcium pathway. It is characterized by binding of Wnt to Frizzled and activation of G proteins, and by an increase in intracellular levels of calcium through mechanisms involving PKC. The Wnt signaling pathway plays a significant role in osteoblastogenesis and bone formation, inducing the differentiation of mesenchymal pluripotent cells into osteoblasts and inhibiting the RANKL/RANK pathway and osteoclastogenesis. RANKL/RANK signaling pathway RANKL is a member of the TNF superfamily of ligands. Through binding to the RANK receptor, it activates various molecules, such as NF-kappa B, MAPK, NFAT and PI3K. The RANKL/RANK signaling pathway regulates osteoclastogenesis, as well as the survival and activation of osteoclasts. Adenosine signaling pathway Adenosine is very relevant in bone metabolism, as it plays a role in the formation and activation of both osteoclasts and osteoblasts. Adenosine acts by binding to purinergic receptors and influencing adenylyl cyclase activity, the formation of cAMP and the activation of PKA. 
Adenosine may have opposite effects on bone metabolism, because while certain purinergic receptors stimulate adenylyl cyclase activity, others have the opposite effect. Under certain circumstances adenosine stimulates bone destruction, and in other situations it promotes bone formation, depending on the purinergic receptor that is being activated. Stem cells Self-renewal and differentiation abilities are exceptional properties of stem cells. These cells can be classified by their differentiation capacity, which progressively decreases with development, as totipotent, pluripotent, multipotent and unipotent. The self-renewal process is highly regulated through control of the cell cycle and of gene transcription. Some signaling pathways, such as LIF/JAK/STAT3 (Leukemia inhibitory factor/Janus kinase/Signal transducer and activator of transcription 3) and BMP/SMADs/Id (Bone morphogenetic proteins/Mothers against decapentaplegic/Inhibitor of differentiation), mediated by transcription factors, epigenetic regulators and other components, are responsible for the expression of self-renewal genes and the inhibition of the expression of differentiation genes, respectively. At the cell cycle level, the mechanisms in somatic stem cells are more complex, and a decrease in self-renewal potential is observed with age. These mechanisms are regulated by the p16Ink4a-CDK4/6-Rb and p19Arf-p53-P21Cip1 signaling pathways. Embryonic stem cells have constitutive cyclin E-CDK2 activity, which hyperphosphorylates and inactivates Rb. This leads to a short G1 phase of the cell cycle, with a rapid G1-S transition and little dependence on mitogenic signals or D cyclins for S phase entry. In fetal stem cells, mitogens promote a relatively rapid G1-S transition through the cooperative action of cyclin D-CDK4/6 and cyclin E-CDK2 to inactivate Rb family proteins. p16Ink4a and p19Arf expression are inhibited by Hmga2-dependent chromatin regulation. Many young adult stem cells are quiescent most of the time. In the absence of mitogenic signals, cyclin-CDKs and the G1-S transition are suppressed by cell cycle inhibitors, including the Ink4 and Cip/Kip family proteins. As a result, Rb is hypophosphorylated and inhibits E2F, promoting quiescence in the G0 phase of the cell cycle. Mitogen stimulation mobilizes these cells into the cycle by activating cyclin D expression. In old adult stem cells, let-7 microRNA expression increases, reducing Hmga2 levels and increasing p16Ink4a and p19Arf levels. This reduces the sensitivity of stem cells to mitogenic signals by inhibiting cyclin-CDK complexes. As a result, either stem cells cannot enter the cell cycle, or cell division slows in many tissues. Extrinsic regulation is exerted by signals from the niche where stem cells are found, which can promote the quiescent state or cell cycle activation in somatic stem cells. Asymmetric division is characteristic of somatic stem cells, maintaining the reservoir of stem cells in the tissue while producing its specialized cells. Stem cells show high therapeutic potential, mainly in hemato-oncologic pathologies, such as leukemia and lymphomas. Small groups of stem cells have been found in tumours and are called cancer stem cells. There is evidence that these cells promote tumour growth and metastasis. Oocytes The oocyte is the female cell involved in reproduction. There is a close relationship between the oocyte and the surrounding follicular cells, which is crucial to the development of both. 
GDF9 and BMP15 produced by the oocyte bind to BMPR2 receptors on follicular cells, activating SMADs 2/3 and ensuring follicular development. Concomitantly, oocyte growth is initiated by the binding of KITL to its receptor KIT on the oocyte, leading to activation of the PI3K/Akt pathway and allowing oocyte survival and development. During embryogenesis, oocytes initiate meiosis and stop in prophase I. This arrest is maintained by elevated levels of cAMP within the oocyte. It has recently been suggested that cGMP cooperates with cAMP to maintain the cell cycle arrest. During meiotic maturation, the LH peak that precedes ovulation activates the MAPK pathway, leading to gap junction disruption and breakdown of communication between the oocyte and the follicular cells. PDE3A is activated and degrades cAMP, leading to cell cycle progression and oocyte maturation. The LH surge also leads to the production of progesterone and prostaglandins that induce the expression of ADAMTS1 and other proteases, as well as their inhibitors. This leads to degradation of the follicular wall, while limiting the damage and ensuring that the rupture occurs in the appropriate location, releasing the oocyte into the fallopian tubes. Oocyte activation depends on fertilization by sperm. It is initiated by sperm attraction, induced by prostaglandins produced by the oocyte, which create a gradient that influences the direction and velocity of the sperm. After fusion with the oocyte, PLCζ of the spermatozoon is released into the oocyte, leading to an increase in Ca2+ levels that activates CaMKII, which degrades MPF, leading to the resumption of meiosis. The increased Ca2+ levels also induce the exocytosis of cortical granules that degrade the ZP receptors used by sperm to penetrate the oocyte, blocking polyspermy. Deregulation of these pathways leads to several diseases, such as oocyte maturation failure syndrome, which results in infertility. Increasing our molecular knowledge of oocyte development mechanisms could improve the outcome of assisted reproduction procedures, facilitating conception. Spermatozoon The spermatozoon is the male gamete. After ejaculation this cell is not mature, so it cannot fertilize the oocyte. To acquire the ability to fertilize the female gamete, it undergoes capacitation and the acrosome reaction in the female reproductive tract. The signaling pathways best described for the spermatozoon involve these processes. The cAMP/PKA signaling pathway leads to sperm cell capacitation; however, adenylyl cyclase in sperm cells is different from that of somatic cells. Adenylyl cyclase in the spermatozoon does not recognize G proteins, so it is stimulated by bicarbonate and Ca2+ ions. It then converts adenosine triphosphate into cyclic AMP, which activates protein kinase A. PKA leads to protein tyrosine phosphorylation. Phospholipase C (PLC) is involved in the acrosome reaction. ZP3 is a glycoprotein present in the zona pellucida that interacts with receptors on the spermatozoon. ZP3 can thus activate G protein-coupled receptors and tyrosine kinase receptors, leading to the production of PLC. PLC cleaves the phospholipid phosphatidylinositol 4,5-bisphosphate (PIP2) into diacylglycerol (DAG) and inositol 1,4,5-trisphosphate. IP3 is released as a soluble structure into the cytosol, while DAG remains bound to the membrane. IP3 binds to IP3 receptors present in the acrosome membrane. In addition, calcium and DAG together work to activate protein kinase C, which goes on to phosphorylate other molecules, leading to altered cellular activity. 
These actions cause an increase in the cytosolic concentration of Ca2+, which leads to dispersion of actin and consequently promotes fusion of the plasma membrane and the outer acrosomal membrane. Progesterone is a steroid hormone produced in the cumulus oophorus. In somatic cells it binds to receptors in the nucleus; in the spermatozoon, however, its receptors are present in the plasma membrane. This hormone activates AKT, which leads to the activation of other protein kinases involved in capacitation and the acrosome reaction. When ROS (reactive oxygen species) are present in high concentrations they can affect the physiology of cells, but when present in moderate concentrations they are important for the acrosome reaction and capacitation. ROS can interact with the cAMP/PKA and progesterone pathways, stimulating them. ROS also interact with the ERK pathway, leading to the activation of Ras, MEK and MEK-like proteins. These proteins activate protein tyrosine kinases (PTK) that phosphorylate various proteins important for capacitation and the acrosome reaction. Embryos Various signalling pathways, such as the FGF, WNT and TGF-β pathways, regulate the processes involved in embryogenesis. FGF (Fibroblast Growth Factor) ligands bind to receptor tyrosine kinases, the FGFRs (Fibroblast Growth Factor Receptors), and form a stable complex with the co-receptors HSPG (Heparan Sulphate Proteoglycans), which promotes autophosphorylation of the intracellular domain of FGFR and consequent activation of four main pathways: MAPK/ERK, PI3K, PLCγ and JAK/STAT. MAPK/ERK (Mitogen-Activated Protein Kinase/Extracellular Signal-Regulated Kinase) regulates gene transcription through successive kinase phosphorylation, and in human embryonic stem cells it helps maintain pluripotency. However, in the presence of Activin A, a TGF-β ligand, it causes the formation of mesoderm and neuroectoderm. Phosphorylation of membrane phospholipids by PI3K (Phosphatidylinositol 3-Kinase) results in activation of AKT/PKB (Protein Kinase B). In embryonic stem cells, this kinase is involved in cell survival and inhibition of apoptosis, cellular growth and maintenance of pluripotency. PLCγ (Phosphoinositide Phospholipase C γ) hydrolyzes membrane phospholipids to form IP3 (inositol trisphosphate) and DAG (diacylglycerol), leading to activation of kinases and regulating morphogenic movements during gastrulation and neurulation. STAT (Signal Transducer and Activator of Transcription) is phosphorylated by JAK (Janus Kinase) and regulates gene transcription, determining cell fates. In mouse embryonic stem cells, this pathway helps maintain pluripotency. The WNT pathway allows β-catenin to function in gene transcription, since the interaction between the WNT ligand and the G protein-coupled receptor Frizzled inhibits GSK-3 (Glycogen Synthase Kinase-3) and thus the formation of the β-catenin destruction complex. Although there is some controversy about the effects of this pathway in embryogenesis, WNT signalling is thought to induce primitive streak, mesoderm and endoderm formation. In the TGF-β (Transforming Growth Factor β) pathway, BMP (Bone Morphogenetic Protein), Activin and Nodal ligands bind to their receptors and activate Smads, which bind to DNA and promote gene transcription. Activin is necessary for mesoderm and especially endoderm differentiation, while Nodal and BMP are involved in embryo patterning. BMP is also responsible for the formation of extra-embryonic tissues before and during gastrulation, and for early mesoderm differentiation, when the Activin and FGF pathways are activated. 
Pathway construction Pathway building has been performed by individual groups studying a network of interest (e.g., an immune signaling pathway) as well as by large bioinformatics consortia (e.g., the Reactome Project) and commercial entities (e.g., Ingenuity Systems). Pathway building is the process of identifying and integrating the entities, interactions, and associated annotations, and populating the knowledge base. Pathway construction can have either a data-driven objective (DDO) or a knowledge-driven objective (KDO). Data-driven pathway construction is used to generate relationship information for genes or proteins identified in a specific experiment, such as a microarray study. Knowledge-driven pathway construction entails the development of a detailed pathway knowledge base for particular domains of interest, such as a cell type, disease, or system. The curation process of a biological pathway entails identifying and structuring content, mining information manually and/or computationally, and assembling a knowledge base using appropriate software tools. For either DDO or KDO pathway construction, the first step is to mine pertinent information from relevant information sources about the entities and interactions. The information retrieved is assembled using appropriate formats, information standards, and pathway building tools to obtain a pathway prototype. The pathway is further refined to include context-specific annotations such as species, cell/tissue type, or disease type. The pathway can then be verified by domain experts and updated by the curators based on appropriate feedback. Recent attempts to improve knowledge integration have led to refined classifications of cellular entities, such as GO, and to the assembly of structured knowledge repositories. Data repositories that contain information regarding sequence data, metabolism, signaling, reactions, and interactions are a major source of information for pathway building. Useful databases include BIND (Biomolecular Interaction Network Database), DIP (Database of Interacting Proteins), GNPV (Genome Network Platform Viewer), HPRD (Human Protein Reference Database), MINT (Molecular Interaction database), MIPS (Munich Information center for Protein Sequences), UNIHI (Unified Human Interactome), OPHID (Online Predicted Human Interaction Database), EcoCyc (Encyclopaedia of E. coli Genes and Metabolism), MetaCyc (a metabolic pathway database), KEGG (Kyoto Encyclopedia of Genes and Genomes), PANTHER (Protein Analysis Through Evolutionary Relationships database), STKE (Signal Transduction Knowledge Environment), PID (the Pathway Interaction Database), and BioPP (Biological Pathway Publisher). A comprehensive list of resources can be found at http://www.pathguide.org. Pathway-related databases and tools KEGG The increasing amount of genomic and molecular information is the basis for understanding higher-order biological systems, such as the cell and the organism, and their interactions with the environment, as well as for medical, industrial and other practical applications. 
The KEGG resource provides a reference knowledge base for linking genomes to biological systems, categorized as building blocks in the genomic space (KEGG GENES), the chemical space (KEGG LIGAND), wiring diagrams of interaction networks and reaction networks (KEGG PATHWAY), and ontologies for pathway reconstruction (the BRITE database). The KEGG PATHWAY database is a collection of manually drawn pathway maps for metabolism, genetic information processing, environmental information processing (such as signal transduction, ligand–receptor interaction and cell communication), various other cellular processes and human diseases, all based on an extensive survey of the published literature. GenMAPP Gene Map Annotator and Pathway Profiler (GenMAPP), a free, open-source, stand-alone computer program, is designed for organizing, analyzing, and sharing genome-scale data in the context of biological pathways. The GenMAPP database supports multiple gene annotations and species, as well as custom species database creation for a potentially unlimited number of species. Pathway resources are expanded by utilizing homology information to translate pathway content between species and by extending existing pathways with data derived from conserved protein interactions and coexpression. New modes of data visualization, including time-course, single nucleotide polymorphism (SNP), and splicing data, have been implemented with the GenMAPP database to support analysis of complex data. GenMAPP also offers innovative ways to display and share data by incorporating HTML export of analyses for entire sets of pathways as organized web pages. In short, GenMAPP provides a means to rapidly interrogate complex experimental data for pathway-level changes in a diverse range of organisms. Reactome Given the genetic makeup of an organism, the complete set of possible reactions constitutes its reactome. Reactome, located at http://www.reactome.org, is a curated, peer-reviewed resource of human biological process and pathway data. The basic unit of the Reactome database is a reaction; reactions are then grouped into causal chains to form pathways. The Reactome data model allows many diverse processes in the human system to be represented, including the pathways of intermediary metabolism, regulatory pathways, signal transduction, and high-level processes such as the cell cycle. Reactome provides a qualitative framework on which quantitative data can be superimposed. Tools have been developed to facilitate custom data entry and annotation by expert biologists, and to allow visualization and exploration of the finished dataset as an interactive process map. Although the primary curational domain is pathways from Homo sapiens, electronic projections of human pathways onto other organisms are regularly created via putative orthologs, thus making Reactome relevant to model organism research communities. The database is publicly available under open source terms, which allows both its content and its software infrastructure to be freely used and redistributed. Studying whole transcriptional profiles and cataloging protein–protein interactions has yielded much valuable biological information, from the genome or proteome to the physiology of an organism, an organ, a tissue or even a single cell. 
The Reactome database thus contains a framework of possible reactions which, when combined with expression and enzyme kinetic data, provides the infrastructure for quantitative models and therefore an integrated view of biological processes, one that links gene products and can be systematically mined with bioinformatics applications. Reactome data are available in a variety of standard formats, including BioPAX, SBML and PSI-MI, which also enables data exchange with other pathway databases, such as the Cyc databases, KEGG and aMAZE, and with molecular interaction databases, such as BIND and HPRD. The next data release will cover apoptosis, including the death receptor signaling pathways and the Bcl2 pathways, as well as pathways involved in hemostasis. Other topics currently under development include several signaling pathways, mitosis, visual phototransduction and hematopoiesis. In summary, Reactome provides high-quality curated summaries of fundamental biological processes in humans in the form of biologist-friendly visualizations of pathway data, and is an open-source project. Pathway-oriented approaches In the post-genomic age, high-throughput sequencing and gene/protein profiling techniques have transformed biological research by enabling comprehensive monitoring of a biological system, yielding lists of differentially expressed genes or proteins that are useful in identifying genes that may have roles in a given phenomenon or phenotype. With DNA microarrays and genome-wide gene engineering tools such as RNA interference, it is possible to screen global gene expression profiles and contribute a wealth of genomic data to the public domain. In turn, the inferences contained in the experimental literature and primary databases are distilled into knowledge bases that consist of annotated representations of biological pathways, recording which individual genes and proteins are involved in biological processes, components, or structures, as well as how and where gene products interact with each other. Pathway-oriented approaches for analyzing microarray data group long lists of individual genes, proteins, and/or other biological molecules into smaller sets of related genes or proteins according to the pathways in which they are involved, which reduces complexity, and have proven useful for connecting genomic data to specific biological processes and systems. Identifying the active pathways that differ between two conditions can have more explanatory power than a simple list of different genes or proteins. In addition, a large number of pathway analysis methods exploit pathway knowledge in public repositories such as Gene Ontology (GO) or the Kyoto Encyclopedia of Genes and Genomes (KEGG), rather than inferring pathways from molecular measurements. Furthermore, different research focuses have given the word "pathway" different meanings. For example, 'pathway' can denote a metabolic pathway involving a sequence of enzyme-catalyzed reactions of small molecules, or a signaling pathway involving a set of protein phosphorylation reactions and gene regulation events. Therefore, the term "pathway analysis" has a very broad application. For instance, it can refer to the analysis of physical interaction networks (e.g., protein–protein interactions), kinetic simulation of pathways, and steady-state pathway analysis (e.g., flux-balance analysis), as well as to the inference of pathways from expression and sequence data. 
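The grouping of gene lists into pathway-level sets described above is most commonly carried out with an over-representation (enrichment) test. The following is a minimal, standard-library-only Python sketch of the underlying hypergeometric calculation; the gene counts used are purely illustrative and are not taken from any of the studies discussed here.

    # Minimal over-representation (enrichment) test for a single pathway.
    # p-value: probability of observing at least `hits` pathway members in a
    # random gene list of size `list_size`, drawn from a background of `total`
    # genes of which `in_pathway` belong to the pathway (hypergeometric tail).
    from math import comb

    def enrichment_p(total, in_pathway, list_size, hits):
        denom = comb(total, list_size)
        return sum(
            comb(in_pathway, k) * comb(total - in_pathway, list_size - k)
            for k in range(hits, min(list_size, in_pathway) + 1)
        ) / denom

    # Illustrative numbers only: 20,000 background genes, a 120-gene pathway,
    # and 300 differentially expressed genes, 12 of which fall in the pathway.
    print(enrichment_p(20000, 120, 300, 12))

A small p-value suggests that the pathway is over-represented in the gene list relative to chance; in practice such p-values are corrected for multiple testing across all pathways tested.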
Several functional enrichment analysis tools and algorithms have been developed to enhance data interpretation. The existing knowledge base-driven pathway analysis methods of each generation have been summarized in the recent literature. Applications of pathway analysis in medicine Colorectal cancer (CRC) The program package MatchMiner was used to scan HUGO names for cloned genes of interest, which were then input into GoMiner, which leveraged GO to identify the biological processes, functions and components represented in the gene profile. Also, the Database for Annotation, Visualization, and Integrated Discovery (DAVID) and the KEGG database can be used for the analysis of microarray expression data and the analysis of each GO biological process (P), cellular component (C), and molecular function (F) ontology. In addition, DAVID tools can be used to analyze the roles of genes in metabolic pathways, to show the biological relationships between genes or gene products, and to represent metabolic pathways. These two databases also provide online bioinformatics tools to combine specific biochemical information on a certain organism and facilitate the interpretation of biological meanings for experimental data (a minimal sketch of programmatic access to KEGG appears below). By using a combined microarray-bioinformatics approach, a potential metabolic mechanism contributing to colorectal cancer (CRC) has been demonstrated. Several environmental factors may be involved in a series of points along the genetic pathway to CRC. These include genes associated with bile acid metabolism, glycolysis metabolism and fatty acid metabolism pathways, supporting the hypothesis that some metabolic alterations observed in colon carcinoma may occur in the development of CRC. Parkinson's disease (PD) Cellular models are instrumental in dissecting a complex pathological process into simpler molecular events. Parkinson's disease (PD) is multifactorial and clinically heterogeneous; the aetiology of the sporadic (and most common) form is still unclear, and only a few molecular mechanisms have been clarified so far in the neurodegenerative cascade. In such a multifaceted picture, it is particularly important to identify experimental models that simplify the study of the different networks of proteins and genes involved. Cellular models that reproduce some of the features of the neurons that degenerate in PD have contributed to many advances in our comprehension of the pathogenic flow of the disease. In particular, the pivotal biochemical pathways (i.e. apoptosis and oxidative stress, mitochondrial impairment and dysfunctional mitophagy, unfolded protein stress and improper removal of misfolded proteins) have been widely explored in cell lines challenged with toxic insults or genetically modified. The central role of α-synuclein has generated many models aiming to elucidate its contribution to the dysregulation of various cellular processes. Classical cellular models appear to be the correct choice for preliminary studies on the molecular action of new drugs or potential toxins and for understanding the role of single genetic factors. Moreover, the availability of novel cellular systems, such as cybrids or induced pluripotent stem cells, offers the chance to exploit the advantages of an in vitro investigation while mirroring more closely the affected cell population. Alzheimer's disease (AD) Synaptic degeneration and death of nerve cells are defining features of Alzheimer's disease (AD), the most prevalent age-related neurodegenerative disorder. 
In AD, neurons in the hippocampus and basal forebrain (brain regions that subserve learning and memory functions) are selectively vulnerable. Studies of postmortem brain tissue from people with AD have provided evidence for increased levels of oxidative stress, mitochondrial dysfunction and impaired glucose uptake in vulnerable neuronal populations. Studies of animal and cell culture models of AD suggest that increased levels of oxidative stress (membrane lipid peroxidation, in particular) may disrupt neuronal energy metabolism and ion homeostasis by impairing the function of membrane ion-motive ATPases and of glucose and glutamate transporters. Such oxidative and metabolic compromise may thereby render neurons vulnerable to excitotoxicity and apoptosis. Recent studies suggest that AD can manifest systemic alterations in energy metabolism (e.g., increased insulin resistance and dysregulation of glucose metabolism). Emerging evidence that dietary restriction can forestall the development of AD is consistent with a major "metabolic" component to these disorders, and provides optimism that these devastating brain disorders of aging may be largely preventable.
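As a small practical complement to the KEGG-based analyses mentioned in the colorectal-cancer example above, the sketch below retrieves a pathway entry and its member genes through the public KEGG REST interface (https://rest.kegg.jp). The identifier hsa05210 is assumed here to denote the human colorectal cancer pathway map; only the Python standard library is used.

    # Sketch: fetching a KEGG pathway and its gene links via the KEGG REST API.
    from urllib.request import urlopen

    BASE = "https://rest.kegg.jp"
    pathway = "hsa05210"  # assumed: KEGG "Colorectal cancer" map for human

    # Flat-text entry describing the pathway (name, class, gene list, references).
    entry = urlopen(f"{BASE}/get/{pathway}").read().decode()
    print(entry.splitlines()[0])  # the ENTRY line

    # Tab-separated links from the pathway to its member genes (hsa:<gene id>).
    links = urlopen(f"{BASE}/link/hsa/{pathway}").read().decode()
    genes = [line.split("\t")[1] for line in links.strip().splitlines()]
    print(len(genes), "genes linked to", pathway)

Gene sets obtained this way can then be fed into an over-representation test such as the one sketched earlier in this article.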
Biology and health sciences
Cell processes
Biology
1874591
https://en.wikipedia.org/wiki/German%20cockroach
German cockroach
The German cockroach (Blattella germanica), colloquially known as the croton bug, is a species of small cockroach. In color it varies from tan to almost black, and it has two dark, roughly parallel streaks on the pronotum running anteroposteriorly from behind the head to the base of the wings. Although B. germanica has wings, it can barely fly, though it may glide when disturbed. Of the few species of cockroach that are domestic pests, it probably is the most widely troublesome example. It is very closely related to the Asian cockroach, and to the casual observer, the two appear nearly identical and may be mistaken for each other. History Previously thought to be a native of Europe, the German cockroach later was considered to have emerged from the region of Ethiopia in Northeast Africa, but recent evidence indicates that it actually originated in South Asia or Southeast Asia, and diverged from Blattella asahinai slightly over 2000 years ago. The cockroach's sensitivity to cold might reflect its origin from such warm climates, and its spread as a domiciliary pest since ancient times has resulted from incidental human transport and shelter. The species now is cosmopolitan in distribution, occurring as a household pest on all continents except Antarctica, and on many major islands as well. It accordingly has been given various names in the cultures of many regions. Biology and pest status The German cockroach occurs widely in human buildings, but is particularly associated with restaurants, food processing facilities, hotels, and institutional establishments such as nursing homes and hospitals. They can survive outside as well, though they are not commonly found in the wild. In cold climates, they occur only near human dwellings, because they cannot survive severe cold. However, German cockroaches have been found as inquilines ("tenants") of human buildings as far north as Alert, Nunavut. Similarly, they have been found as far south as southern Patagonia. Though nocturnal, the German cockroach occasionally appears by day, especially if the population is crowded or has been disturbed. However, sightings are most frequent in the evening, when someone suddenly brings a light into a room deserted after dark, such as a kitchen where they have been scavenging. When excited or frightened, the species emits an unpleasant odor. Diet German cockroaches are omnivorous scavengers. They are attracted particularly to meats, starches, sugars, and fatty foods. Where a shortage of foodstuff exists, they may eat household items such as soap, glue, and toothpaste. In famine conditions, they turn cannibalistic, chewing at each other's wings and legs. The German cockroach is an intermediate host of the acanthocephalan parasite Moniliformis kalahariensis. Reproduction The German cockroach reproduces faster than any other residential cockroach, growing from egg to reproductive adult in roughly 50–60 days under ideal conditions. Once fertilized, a female German cockroach develops an ootheca in her abdomen. The abdomen swells as her eggs develop, until the translucent tip of the ootheca begins to protrude from the end of her abdomen, and by that time the eggs inside are fully sized, about 1/4 inch long with 16 segments. The ootheca, at first translucent, soon turns white and then within a few hours it turns pink, progressively darkening until, some 48 hours later, it attains the dark red-brown of the shell of a chestnut. 
The ootheca has a keel-like ridge along the line where the young emerge, and curls slightly towards that edge as it completes its maturation. A small percentage of the nymphs may hatch while the ootheca is still attached to the female, but the majority emerge some 24 hours after it has detached from the female's body. The newly hatched 3-mm-long black nymphs then progress through six or seven instars before becoming sexually mature, but ecdysis is such a hazardous process that nearly half the nymphs die of natural causes before reaching adulthood. Molted skins and dead nymphs are soon eaten by living nymphs present at the time of molting. Pest control The German cockroach is very successful at establishing an ecological niche in buildings, and is resilient in the face of many pest-control measures. Reasons include: lack of natural predators in a human habitat, prolific reproduction, a short reproductive cycle, the ability to hide in very small refuges, sexual maturity attained within several weeks, and adaptation and resistance to some chemical pesticides. The German cockroach is resistant to 42 active ingredients from most major groups of synthetic insecticides, such as organochlorides, organophosphates, carbamates, synthetic pyrethroids, neonicotinoids, oxadiazines, and phenyl pyrazoles. German cockroach resistance was first observed with chlordane in 1952. Because German cockroaches have a very high number of genes, they can adapt and evolve resistance to pesticides. They also have many receptors for smell and can sense new food sources. German cockroaches are thigmotactic, meaning they prefer confined spaces, and they are small compared to other pest species, so they can hide within small cracks and crevices that are easy to overlook, thereby evading humans and their eradication efforts. Conversely, the seasoned pest controller is alert for cracks and crevices where it is likely to be profitable to place baits or spray surfaces. To be effective, control measures must be comprehensive, sustained, and systematic; survival of just a few eggs is quite enough to regenerate a nearly exterminated pest population within a few generations, and recolonization from surrounding populations often is very rapid, too. Another problem in controlling German cockroaches is the nature of their population behavior. Though they are not social and practice no organized maternal care, females carry oothecae of 18–50 eggs (average about 32) during incubation until just before hatching, instead of dropping them as most other species of cockroaches do. This protects the eggs from certain classes of predation. Then, after hatching, nymphs largely survive by consuming excretions and molts from adults, thereby establishing their own internal microbial populations and avoiding contact with most insecticidal surface treatments and baits. One effective control is insect growth regulators (hydroprene, methoprene, etc.), which act by preventing molting, thus preventing maturation of the various instars. Caulking baseboards and around pipes may prevent the travel of adults from one apartment to another within a building. As an adaptive consequence of pest control by poisoned glucose baits, a strain of German cockroaches has emerged that reacts to glucose as distastefully bitter. They refuse to eat baits sweetened with glucose, which presents an obstacle to their control, given that several common baits use glucose. 
Genome The genome of the German cockroach was published in February 2018 in Nature Ecology and Evolution. The relatively large genome (2.0 Gb) encodes a very high number of proteins; most notably, one group of chemoreceptors, the ionotropic receptors, is particularly numerous. These chemoreceptors possibly allow the German cockroach to detect a broad range of chemical cues from toxins, food, pathogens, and pheromones.
Biology and health sciences
Cockroaches &amp; Termites (Blattodea)
Animals
1254965
https://en.wikipedia.org/wiki/Maroon
Maroon
Maroon is a brownish crimson color that takes its name from the French word marron, meaning chestnut. Marron is also one of the French translations for "brown". Terms describing interchangeable shades, with overlapping RGB ranges, include burgundy, claret, mulberry, and crimson. Different dictionaries define maroon differently. The Cambridge English Dictionary defines maroon as a dark reddish-purple color, while its "American Dictionary" section defines maroon as dark brown-red. The Lexico online dictionary defines maroon as a brownish-red. Similarly, Dictionary.com defines maroon as a dark brownish-red. The Shorter Oxford English Dictionary describes maroon as "a brownish-crimson or claret colour," while the Merriam-Webster online dictionary simply defines it as a dark red. In the sRGB color model for additive color representation, the web color called maroon is created by turning down the brightness of pure red to about one half. It is also noted that maroon is the complement of the web color called teal. Etymology Maroon comes from the French marron ("chestnut"), itself from the Italian marrone, which means both chestnut and brown, ultimately from medieval Greek. The first recorded use of maroon as a color name in English was in 1789. In culture Religion Vajrayana Buddhist monks, such as the Dalai Lama, wear maroon robes. Maroon, along with golden yellow, is worn in the Philippines by Catholic devotees of the Black Nazarene, especially during its procession on 9 January. National symbols Maroon and white are the colors of the Flag of Qatar. The Flag of Phoenix, Arizona, is maroon and white. Maroon, gold, teal and orange are the colors of the Flag of Sri Lanka. The Flag of Latvia has sometimes been called maroon and white, although the officially declared colors were red and white, and in 2009 were amended to carmine and white. Maroon was named as the official color of the state of Queensland, Australia, in November 2003. While the declared shade of maroon in sRGB is R=115, G=24, B=44, Queenslanders display the spirit of the state by wearing all shades of maroon at sporting and cultural events. Politics Maroon is the color of the Dutch far-right political party Forum for Democracy. Military The distinctive maroon beret has been worn by many airborne forces around the world, starting with the British Parachute Regiment (nicknamed the "Maroon Machine") in 1942. It is sometimes referred to as the "red beret." Historically, maroon was the distinguishing color of the Caçadores (rifle) regiments of the Portuguese Army. Business Maroon is the signature color of the Japanese private rail company Hankyu Railway, decided by a vote of women customers in 1923. In the 1990s, Hankyu planned an alternative color as it was developing new vehicles. That plan was called off following opposition by local residents. Music The Famous Maroon Band; Maroon 5; "Maroon" by Taylor Swift. School colors Many universities, colleges, high schools and other educational institutions have maroon as one of their school colors. Popular combinations include maroon and white, maroon and grey, maroon and gold, and maroon and blue. Maroon and White are the official school colors of Texas A&M University. Maroon and Gold are the official school colors of Texas State University. Maroon and Gold are the official school colors of Boston College. Maroon and Gold are the official school colors of the University of Minnesota. Maroon and Gold are the official school colors of Central Michigan University. 
Maroon and Gold are the official school colors of Shimer College, representing Mount Carroll Seminary. Maroon is the official school color of the University of Chicago. The school also employs light and dark gray in its official primary color palette. Maroon and White are the official school colors of Lower Merion High School. Maroon and White are the official school colors of Mississippi State University and the name of the university's alma mater. Maroon and White are the official school colors of Colgate University. Maroon and White are the official school colors of Missouri State University. Maroon and White are the official school colors of Littlefield High School. Maroon and White are the official school colors of Southern Illinois University Carbondale. Maroon and White are the official school colors of the University of Massachusetts Amherst. Maroon and Gold are the official school colors of Arizona State University and the name of the university's fight song. Maroon and Black are the official school colors of Cumberland University. Maroon and Orange are the official school colors of Virginia Tech. Maroon and Orange are the official school colors of Crooms Academy of Information Technology. Maroon and Gold are the official colors of Elon University. Maroon and Blue are the official colors of Port Rex Technical High School in South Africa. Maroon and Forest Green are the primary and complementary institutional colors of the University of the Philippines System. Maroon and Gold are the school colors of the University of Perpetual Help System DALTA in the Philippines. Sports Sports teams often use maroon as one of their identifying colors; as a result, many have received the nickname "Maroons." The University of Chicago Maroons have used the nickname (and the corresponding color) since a vote taken at a meeting of students and faculty on May 5, 1894. The University of the Philippines Fighting Maroons competes in the University Athletic Association of the Philippines and is based in the Diliman Campus. The moniker has been associated with their teams since the 1930s, except for a brief period in the 1960s when they were known as the Parrots. The University of Perpetual Help Altas, playing in the National Collegiate Athletic Association (Philippines), use the color in their primary uniform, derived from their school colors. Heart of Midlothian F.C. have played in predominantly maroon colours since 1877, although they had a maroon badge and trimmings in their first kit from their formation in 1874. Maroon is the official colour of the Italian association football team Torino F.C.; the club's fans are known as I Granata (the Maroons, in Italian). It is also the official colour of Argentina's association football team Club Atlético Lanús (Los Granates). Maroons was the official nickname of the athletic teams representing Mississippi State College, now Mississippi State University, from 1932 until 1961, when it was officially changed to the Bulldogs. Bulldogs had been used as an unofficial nickname as far back as 1905. Maroons is also the common nickname for the Queensland Rugby League team when it plays against the Blues (the New South Welshmen) in an annual competition of three games known as the State of Origin series in Australia. The "maroon and whites" is a nickname for the Manly Warringah Sea Eagles in the Australian National Rugby League. The West Indies cricket team wears all-maroon clothing in limited-overs cricket, whilst in Test cricket they wear maroon cricket caps. 
Galway and Westmeath wear primarily maroon clothing when playing home Gaelic Athletic Association matches. In North American thoroughbred horse racing, the number 14 horse uses a maroon saddle cloth with the number in yellow. In pool, the 7 and 15 billiard balls are traditionally maroon. Commercial variations of maroon Maroon (Crayola) The color designated as maroon in Crayola crayons since 1958 (when it was renamed from dark red) is a bright medium shade of maroon halfway between brown and rose. Rich maroon (maroon (X11)) Rich maroon, i.e. maroon as defined in the X11 color names, is much brighter and more toned toward rose than the HTML/CSS maroon described above. See the chart Color name clashes in the X11 color names article for the colors that differ between HTML/CSS and X11. Mystic maroon Mystic maroon is one of the colors in the special set of metallic Crayola crayons called Silver Swirls, the colors of which were formulated by Crayola in 1990. Although this is supposed to be a metallic color, there is no mechanism for displaying metallic colors on a computer. Dark red The web color dark red is a closely related shade. UP Maroon UP Maroon is the shade used by the University of the Philippines as its primary color.
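As a quick numeric check of the RGB values quoted in this article, the snippet below converts the half-brightness red of the HTML/CSS web color maroon and Queensland's declared sRGB triple (R=115, G=24, B=44) into hexadecimal notation; it is a simple illustration of the arithmetic, not part of any color standard.

    # Convert sRGB triples mentioned above to hex notation.
    def to_hex(r, g, b):
        return f"#{r:02X}{g:02X}{b:02X}"

    web_maroon = (128, 0, 0)           # pure red at roughly half brightness
    queensland_maroon = (115, 24, 44)  # shade declared by Queensland, Australia

    print(to_hex(*web_maroon))         # #800000, the HTML/CSS maroon
    print(to_hex(*queensland_maroon))  # #73182C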
Physical sciences
Colors
Physics
1255740
https://en.wikipedia.org/wiki/Clathrate%20compound
Clathrate compound
A clathrate is a chemical substance consisting of a lattice that traps or contains molecules. The word clathrate is derived from the Latin clathratus, meaning 'with bars, latticed'. Most clathrate compounds are polymeric and completely envelop the guest molecule, but in modern usage clathrates also include host–guest complexes and inclusion compounds. According to IUPAC, clathrates are inclusion compounds "in which the guest molecule is in a cage formed by the host molecule or by a lattice of host molecules." The term refers to many molecular hosts, including calixarenes and cyclodextrins, and even some inorganic polymers such as zeolites. Clathrates can be divided into two categories: clathrate hydrates and inorganic clathrates. Each clathrate is made up of a framework and guests that reside in the framework. The most common clathrate crystal structures are composed of cavities such as dodecahedral, tetrakaidecahedral, and hexakaidecahedral cages. Unlike hydrates, inorganic clathrates have a covalently bonded framework of inorganic atoms, with guests typically consisting of alkali or alkaline earth metals. Owing to the stronger covalent bonding, the cages are often smaller than in hydrates. Guest atoms interact with the host by ionic or covalent bonds. Therefore, partial substitution of guest atoms follows the Zintl rules, so that the charge of the overall compound is conserved. Most inorganic clathrates require full occupancy of their framework cages by guest atoms to be in a stable phase. Inorganic clathrates can be synthesized by direct reaction using ball milling at high temperatures or high pressures. Crystallization from the melt is another common synthesis route. Due to the wide variety of composition of host and guest species, inorganic clathrates are much more chemically diverse and possess a wide range of properties. Most notably, inorganic clathrates range from insulators to superconductors (e.g., Ba8Si46). A common property of inorganic clathrates that has attracted researchers is low thermal conductivity. Low thermal conductivity is attributed to the ability of the guest atom to "rattle" within the host framework. The freedom of movement of the guest atoms scatters the phonons that transport heat. Examples Clathrates have been explored for many applications including: gas storage, gas production, gas separation, desalination, thermoelectrics, photovoltaics, and batteries. Clathrate compounds with formula A8B16X30, where A is an alkaline earth metal, B is a group III element, and X is an element from group IV, have been explored for thermoelectric devices. Thermoelectric materials follow a design strategy called the phonon glass electron crystal concept: low thermal conductivity and high electrical conductivity are desired for efficient energy conversion based on the Seebeck effect (the standard figure of merit for this trade-off is noted at the end of this article). When the guest and host framework are appropriately tuned, clathrates can exhibit low thermal conductivity, i.e., phonon glass behavior, while electrical conductivity through the host framework is undisturbed, allowing clathrates to exhibit electron crystal behavior. Methane clathrates feature a hydrogen-bonded framework contributed by water and guest molecules of methane. Large amounts of methane naturally frozen in this form exist both in permafrost formations and under the ocean sea-bed. Other hydrogen-bonded networks are derived from hydroquinone, urea, and thiourea. A much studied host molecule is Dianin's compound. Hofmann clathrates are coordination polymers with the formula Ni(CN)4·Ni(NH3)2(arene). 
These materials crystallize with small aromatic guests (benzene, certain xylenes), and this selectivity has been exploited commercially for the separation of these hydrocarbons. Metal organic frameworks (MOFs) form clathrates.
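For context on the phonon glass electron crystal strategy mentioned above, thermoelectric performance is conventionally summarized by the dimensionless figure of merit; this is the standard textbook relation rather than a result specific to any particular clathrate:

    \[ ZT = \frac{S^{2}\,\sigma\,T}{\kappa} \]

where S is the Seebeck coefficient, σ the electrical conductivity, κ the total thermal conductivity, and T the absolute temperature. Suppressing κ through guest-atom rattling while leaving σ through the covalent framework largely intact is what raises ZT in clathrate thermoelectrics.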
Physical sciences
Supramolecular chemistry
Chemistry
1256645
https://en.wikipedia.org/wiki/Cycloaddition
Cycloaddition
In organic chemistry, a cycloaddition is a chemical reaction in which "two or more unsaturated molecules (or parts of the same molecule) combine with the formation of a cyclic adduct in which there is a net reduction of the bond multiplicity". The resulting reaction is a cyclization reaction. Many but not all cycloadditions are concerted and thus pericyclic; nonconcerted cycloadditions are not pericyclic. As a class of addition reaction, cycloadditions permit carbon–carbon bond formation without the use of a nucleophile or electrophile. Cycloadditions can be described using two systems of notation. An older but still common notation is based on the size of the linear arrangements of atoms in the reactants. It uses parentheses, (i + j + ...), where the variables are the numbers of linear atoms in each reactant. The product is a cycle of size i + j + .... In this system, the standard Diels-Alder reaction is a (4 + 2)-cycloaddition, the 1,3-dipolar cycloaddition is a (3 + 2)-cycloaddition and cyclopropanation of a carbene with an alkene a (2 + 1)-cycloaddition. A more recent, IUPAC-preferred notation, first introduced by Woodward and Hoffmann, uses square brackets to indicate the number of electrons, rather than carbon atoms, involved in the formation of the product. In the [i + j + ...] notation, the standard Diels-Alder reaction is a [4 + 2]-cycloaddition, while the 1,3-dipolar cycloaddition is also a [4 + 2]-cycloaddition. Thermal cycloadditions and their stereochemistry Thermal cycloadditions are those cycloadditions where the reactants are in the ground electronic state. They usually have (4n + 2) π electrons participating in the starting material, for some integer n. For reasons of orbital symmetry, these reactions occur in a suprafacial-suprafacial manner (syn/syn stereochemistry) in most cases. A very few examples of antarafacial-antarafacial (anti/anti stereochemistry) reactions have also been reported. There are a few examples of thermal cycloadditions which have 4n π electrons (for example the [2 + 2]-cycloaddition). These proceed in a suprafacial-antarafacial sense (syn/anti stereochemistry), such as the cycloaddition reactions of ketene and allene derivatives, in which the orthogonal set of p orbitals allows the reaction to proceed via a crossed transition state, although the analysis of these reactions as [π2s + π2a] is controversial. Strained alkenes like trans-cycloheptene derivatives have also been reported to react in an antarafacial manner in [2 + 2]-cycloaddition reactions. Doering (in a personal communication to Woodward) reported that heptafulvalene and tetracyanoethylene can react in a suprafacial-antarafacial [14 + 2]-cycloaddition. However, this reaction was later found to be stepwise, as it also produced the Woodward-Hoffmann forbidden suprafacial-suprafacial product under kinetic conditions. Erden and Kaufmann had previously found that the cycloaddition of heptafulvalene and N-phenyltriazolinedione also gave both suprafacial-antarafacial and suprafacial-suprafacial products. Photochemical cycloadditions and their stereochemistry Cycloadditions in which 4n π electrons participate can also occur via photochemical activation. Here, one component has an electron promoted from the HOMO (π bonding) to the LUMO (π* antibonding). Orbital symmetry is then such that the reaction can proceed in a suprafacial-suprafacial manner. An example is the DeMayo reaction. Another example is the photochemical dimerization of cinnamic acid. 
The two trans alkenes react head-to-tail, and the isolated isomers are called truxillic acids. Supramolecular effects can influence these cycloadditions: the cycloaddition of trans-1,2-bis(4-pyridyl)ethene is directed by resorcinol in the solid state in 100% yield. Some cycloadditions operate through strained cyclopropane rings instead of π bonds, as these rings have significant π character. For example, an analog of the Diels-Alder reaction is the quadricyclane-DMAD reaction. In the (i + j + ...) cycloaddition notation, i and j refer to the number of atoms involved in the cycloaddition. In this notation, a Diels-Alder reaction is a (4 + 2)-cycloaddition and a 1,3-dipolar addition, such as the first step in ozonolysis, is a (3 + 2)-cycloaddition. The IUPAC-preferred notation, [i + j + ...], however, counts electrons rather than atoms. In this notation, the Diels-Alder reaction and the dipolar reaction both become [4 + 2]-cycloadditions. The reaction between norbornadiene and an activated alkyne is a [2 + 2 + 2]-cycloaddition. Types of cycloaddition Diels-Alder reactions The Diels-Alder reaction is perhaps the most important and commonly taught cycloaddition reaction. Formally it is a [4 + 2]-cycloaddition and exists in a huge range of forms, including the inverse electron-demand Diels–Alder reaction, the hexadehydro Diels–Alder reaction and the related alkyne trimerisation. The reaction can also be run in reverse in the retro-Diels–Alder reaction. Reactions involving heteroatoms are known, including the aza-Diels–Alder reaction and the oxo-Diels–Alder reaction. Huisgen cycloadditions The Huisgen cycloaddition reaction is a (2 + 3)-cycloaddition. Nitrone-olefin cycloaddition The nitrone-olefin cycloaddition is a (3 + 2)-cycloaddition. Cheletropic reactions Cheletropic reactions are a subclass of cycloadditions. The key distinguishing feature of cheletropic reactions is that, on one of the reagents, both new bonds are made to the same atom. The classic example is the reaction of sulfur dioxide with a diene. Other Other cycloaddition reactions exist: (4 + 3)-cycloadditions, [6 + 4]-cycloadditions, [2 + 2] photocycloadditions, metal-centered cycloadditions and [4 + 4] photocycloadditions. Formal cycloadditions Cycloadditions often have metal-catalyzed and stepwise radical analogs; however, these are not, strictly speaking, pericyclic reactions. When charged or radical intermediates are involved in a cycloaddition, or when the cycloaddition product is obtained through a series of reaction steps, the reaction is sometimes called a formal cycloaddition to distinguish it from a true pericyclic cycloaddition. One example of a formal [3 + 3]-cycloaddition between a cyclic enone and an enamine, catalyzed by n-butyllithium, is a Stork enamine / 1,2-addition cascade reaction. Iron-catalyzed [2 + 2] olefin cycloaddition Iron[pyridine(diimine)] catalysts contain a redox-active ligand in which the central iron atom can coordinate with two simple, unfunctionalized olefin double bonds. The catalyst can be written as a resonance hybrid of a structure containing unpaired electrons with the central iron atom in the +II oxidation state and one in which the iron is in the 0 oxidation state. This gives it the flexibility to bind the double bonds as they undergo a cyclization reaction, generating a cyclobutane structure via C-C reductive elimination; alternatively, a cyclobutene structure may be produced by beta-hydrogen elimination.
Efficiency of the reaction varies substantially depending on the alkenes used, but rational ligand design may permit expansion of the range of reactions that can be catalyzed.
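As an illustrative aside (not part of the original article text), the electron counting behind the notation and selection rules discussed above can be sketched in a few lines of Python. The function below is a deliberately simplified reading of the Woodward-Hoffmann rules: a suprafacial-suprafacial cycloaddition is treated as thermally allowed when 4n + 2 π electrons participate and photochemically allowed when 4n participate. The function name and printed examples are illustrative choices, not standard identifiers, and real cases such as the ketene [2 + 2] reactions mentioned above require a fuller orbital analysis.

import sys

# Minimal sketch of a simplified Woodward-Hoffmann electron-count rule for
# suprafacial-suprafacial cycloadditions (illustrative only).
def supra_supra_allowed(pi_electrons, photochemical=False):
    # Thermal reactions: allowed for 4n + 2 electrons; photochemical: allowed for 4n.
    if photochemical:
        return pi_electrons % 4 == 0
    return pi_electrons % 4 == 2

# Diels-Alder, [4 + 2]: 4 + 2 = 6 pi electrons, thermally allowed.
print(supra_supra_allowed(6))                       # True
# [2 + 2]: 4 pi electrons, thermally forbidden supra-supra ...
print(supra_supra_allowed(4))                       # False
# ... but allowed under photochemical activation.
print(supra_supra_allowed(4, photochemical=True))   # True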
Physical sciences
Organic reactions
Chemistry
1257941
https://en.wikipedia.org/wiki/Ornithopoda
Ornithopoda
Ornithopoda () is a clade of ornithischian dinosaurs, called ornithopods (). They represent one of the most successful groups of herbivorous dinosaurs during the Cretaceous. The most primitive members of the group were bipedal and relatively small-sized, while advanced members of the subgroup Iguanodontia became quadrupedal and developed large body size. Their major evolutionary advantage was the progressive development of a chewing apparatus that became the most sophisticated ever developed by a non-avian dinosaur, rivaling that of modern mammals such as the domestic cow. They reached their apex of diversity and ecological dominance in the hadrosaurids (colloquially known as 'duck-bills'), before they were wiped out by the Cretaceous–Paleogene extinction event along with all other non-avian dinosaurs. Members are known worldwide. History of research In 1870, Thomas Henry Huxley listed Iguanodontidae (coined by Edward Drinker Cope a year earlier) as one of his three families of dinosaurs (alongside Megalosauridae and Scelidosauridae), including within it the genera Iguanodon, Hypsilophodon, and Hadrosaurus, in addition to Cetiosaurus and tentatively Stenopelix. The term Ornithopoda was erected by Othniel Charles Marsh in 1881 as part of his then still ongoing investigation of the classification of Dinosauria. It was considered one of the four definite orders of dinosaurs, the others being Theropoda, Sauropoda, and Stegosauria (Hallopoda was considered a possible fifth). He subdivided the order into three families: Camptonotidae, Iguanodontidae, and Hadrosauridae; the first was a new name, whereas the latter two were carried over from the nomenclatures of Huxley and Edward Drinker Cope respectively. Within Camptonotidae he included the European Hypsilophodon and three American taxa he had named himself, Camptonotus, Laosaurus, and Nanosaurus. Camptonotus was renamed Camptosaurus in 1885, as the original name was preoccupied by a cricket; the associated family followed suit, becoming Camptosauridae. In Iguanodontidae, only found in Europe, he included Iguanodon and Vectisaurus. In Hadrosauridae, he included Hadrosaurus, Cionodon, and tentatively Agathaumas. Description Ornithopoda means "bird feet", from the Greek ornis, ornithos ("bird") and pous, podos ("foot"); this is in reference to members' characteristic birdlike feet. They were also characterized by a lack of body armour, the development of a horny beak, an elongated pubis (that eventually extended past the ilium), and the absence of a hole in the lower jaw (the mandibular fenestra). A variety of ornithopods, and related ornithischians, had thin cartilaginous plates along the outside of the ribs; in some cases, these plates mineralized and were fossilized. The function of these intercostal plates is unknown. They have been found with Hypsilophodon, Nanosaurus, Parksosaurus, Talenkauen, Thescelosaurus, and Macrogryphosaurus to date. The early ornithopods were only about 1 metre (3 feet) long, but probably very fast. They had a stiff tail, like the theropods, to help them balance as they ran on their hind legs. Later ornithopods became more adapted to grazing on all fours; their spines curved, and came to resemble the spines of modern ground-feeders such as the bison. As they became more adapted to eating while bent over, they became facultative quadrupeds: still running on two legs, and comfortable reaching up into trees, but spending most of their time walking or grazing on all fours.
The taxonomy of dinosaurs previously ascribed to the Hypsilophodontidae is problematic. The group previously consisted of all non-iguanodontian bipedal ornithischians, but a phylogenetic reappraisal has shown such species to be paraphyletic. As such, the hypsilophodont family is currently represented only by Hypsilophodon. Later ornithopods became larger, but never rivalled the incredible size of the long-necked, long-tailed sauropods. The very largest, such as Shantungosaurus, were as heavy as medium-sized sauropods (up to 23 metric tons/25 short tons), but never grew much beyond 15 metres (50 feet). Classification Historically, most indeterminate ornithischian bipeds were lumped in as ornithopods. Most have since been reclassified. Taxonomy Ornithopoda is usually given the rank of Suborder, within the order Ornithischia. While ranked taxonomy has largely fallen out of favour among dinosaur paleontologists, some researchers have continued to employ such a classification, though sources have differed on what its rank should be. Benton (2004) placed it as an infraorder within the suborder Cerapoda (originally named as an unranked clade), while others, such as Ibiricu et al. 2010, have retained it at its traditional ranking of suborder. Iguanodontia is often listed as an infraorder within a suborder Ornithopoda, though Benton (2004) lists Ornithopoda as an infraorder and does not rank Iguanodontia. Traditionally, iguanodontians were grouped into the superfamily Iguanodontoidea and family Iguanodontidae. However, phylogenetic studies show that the traditional "iguanodontids" are a paraphyletic grade leading up to the hadrosaurs (duck-billed dinosaurs). Groups like Iguanodontoidea are sometimes still used as unranked clades in the scientific literature, though many traditional "iguanodontids" are now included in the more inclusive group Hadrosauroidea. Iguanodontia was originally phylogenetically defined, by Paul Sereno, in 1998, as the most inclusive group containing Parasaurolophus walkeri but not Hypsilophodon foxii. Later, in 2005, he amended the definition to include Thescelosaurus neglectus as a secondary external specifier, alongside Hypsilophodon, accounting for the paraphyletic nature of Hypsilophodontidae. A 2017 study which named and described Burianosaurus noted that the type species Iguanodon bernissartensis must be part of the definition, and that the 2005 definition would, in their analysis, include a far larger group than intended (including Marginocephalia). They proposed an entirely new, node-based definition: the last common ancestor of Iguanodon bernissartensis, Dryosaurus altus, Rhabdodon priscus, and Tenontosaurus tilletti. In 2021, Iguanodontia was given a formal definition under the PhyloCode: "The smallest clade containing Dryosaurus altus, Iguanodon bernissartensis, Rhabdodon priscus, and Tenontosaurus tilletti, provided that it does not include Hypsilophodon foxii." Under this revised definition, Iguanodontia is limited to its traditionally included species, and if it were found to include hypsilophodonts, which were not traditionally considered iguanodontians, it would become an invalid grouping. The slightly less inclusive clade Dryomorpha was named by Paul Sereno in 1986 and given a formal definition in the PhyloCode as "the smallest clade containing Dryosaurus altus and Iguanodon bernissartensis". This group includes basal members such as Hesperonyx, members of the family Dryosauridae, and the derived clade Ankylopollexia. 
Phylogeny In 2021, Ornithopoda was given a formal definition under the PhyloCode: "The largest clade containing Iguanodon bernissartensis but not Pachycephalosaurus wyomingensis and Triceratops horridus." The cladogram below follows a 2024 analysis by Fonseca et al.
Biology and health sciences
Ornithischians
Animals
2601203
https://en.wikipedia.org/wiki/Roof%20shingle
Roof shingle
Roof shingles are a roof covering consisting of individual overlapping elements. These elements are typically flat, rectangular shapes laid in courses from the bottom edge of the roof up, with each successive course overlapping the joints below. Shingles are held by the roof rafters and are made of various materials such as wood, slate, flagstone, metal, plastic, and composite materials such as fibre cement and asphalt shingles. Ceramic roof tiles, which still dominate in Europe and some parts of Asia, are usually called tiles. Roof shingles may deteriorate faster and need to repel more water than wall shingles. They are a very common roofing material in the United States. Etymology and nomenclature Shingle is a corruption of a German word meaning a roofing slate. Shingles historically were called tiles, and shingle was a term applied to wood shingles, as is still mostly the case outside the US. Shingles are laid in courses, usually with each shingle offset from its neighbors. The first course is the starter course and the last is the ridge course (or ridge slates for a slate roof). The ridge is often covered with a ridge cap, board, piece, or roll, sometimes with a special ridge vent material. Overview Roof shingles are almost always highly visible and so are an important aspect of a building's aesthetics in patterns, textures and colors. Roof shingles, like other building materials on vernacular buildings, are typically made of a locally available material. The type of shingle is taken into account before construction because the material affects the roof pitch and construction method: some shingles can be installed on lath, whereas others need solid sheathing (sheeting) on the roof deck. All shingle roofs are installed from the bottom upward, beginning with a starter course and with the edge seams offset to avoid leaks. Many shingle installations benefit from being placed on top of an underlayment material, such as asphalt felt paper, to prevent leaks from wind-driven rain and snow and from ice dams in cold climates. At the ridge, the shingles on one side of the roof simply extend past the ridge, or there is a ridge cap consisting of boards, copper, or lead sheeting. An asphalt shingle roof has flexible asphalt shingles as the ridge cap. Some roof shingles are non-combustible or have a better fire rating than others, which influences their use: some building codes do not allow shingles with less than a class-A fire rating to be used on some types of buildings. Due to the increased fire hazard, wood shingles and organic-based asphalt shingles have become less common than fiberglass-based asphalt shingles. No shingles are watertight, so the minimum recommended roof pitch is 4:12 without additional underlayment materials. Asphalt shingles In the United States, fiberglass-based asphalt shingles are by far the most common material used for residential roofing applications. In Europe, they are called bitumen roof shingles or tile strips, and are much less common. They are easy to install, relatively affordable, last 20 to 60 years and are recyclable in some areas. Asphalt shingles come in numerous styles and colors. The protective nature of paper and fiberglass asphalt shingles primarily comes from the long-chain petroleum hydrocarbons, while wood shingles are protected by natural oils in the cellulose structure. Over time in the hot sun, these oils soften, and when rain falls they are gradually washed out of the shingles.
During rain, more water is channeled along eaves and complex rooflines, and these areas are consequently more prone to erosion than others. Eventually, the loss of the oils causes asphalt shingle fibers to shrink and wood shingles to rot, exposing the nail heads under the shingles. Once the nail heads are exposed, water running down the roof can seep into the building around the nail shank, resulting in rotting of underlying roof building materials and causing moisture damage to ceilings and paint inside. Wood shingles The two basic types of wood shingles are called shingles and shakes. Wood shakes are typically longer and thicker than wood shingles. The main difference is in how they are made, with shingles always being sawn and shakes normally being split, at least on one side. A wood shake is often more textured, as it is split following the natural grain of the wood rather than sawn against it like the shingle. Untreated wood shingles and shakes have long been known as a fire hazard and have been banned in various places, particularly in urban areas where exterior, combustible building materials contribute to devastating fires known as conflagrations. Modern pressure-impregnated, fire-retardant-treated wood shakes and shingles can achieve a Class B fire rating, and can achieve a Class A rating when used in conjunction with specially designed roof assemblies. The use of wooden roof shingles has existed in parts of the world with a long tradition of wooden buildings, especially Scandinavia and Central and Eastern Europe. Nearly all the houses and buildings in colonial Chiloé were built with wood, and roof shingles were extensively employed in Chilota architecture. Stone shingles Slate shingles are also called slate tiles, the usual name outside the US. Slate roof shingles are relatively expensive to install but can last 80 to 400 years depending on the quality of the slate used and how well they are maintained. The material itself deteriorates only slowly, and may be recycled from one building to another. The primary mode of failure in a slate roof is when individual slates lose their peg attachment and begin to slide out of place. This can open up small gaps above each slate. A secondary mode of failure is when the slates themselves begin to break up. The lower parts of a slate may break loose, giving a gap below a slate. Commonly, the small, stressed area above the nail hole may fail, allowing the slate to slip as before. In the worst cases, a slate may simply break in half and be lost altogether. A common repair to slate roofs is to apply 'torching', a mortar fillet underneath the slates, attaching them to the battens. This may be applied either as a repair, to hold slipping slates, or pre-emptively during construction. Where slates are particularly heavy, the roof may begin to split apart along the roofline. This usually follows rot developing in and weakening the internal timbers, often as a result of poor ventilation within the roof space. An important aspect of slate roofs is the use of metal flashing that will last as long as the slates. Slate shingles may be cut in a variety of decorative patterns and are available in several colors. Flagstone shingles are a traditional roofing material. Some stone shingles are fastened in place, but others are simply held by gravity, so the roof pitch cannot be too steep or the stones will slide off the roof. Sandstone has also been used to make shingles.
Gallery of stone shingles Fibre cement shingles Fibre cement shingles are often known by their manufacturer's name, such as Eternit or Transite. Often, the fiber in the cement material was asbestos, the use of which has been banned since the 1980s for health reasons. The removal of shingles containing asbestos requires extra precautions and special disposal methods. Metal shingles Metal shingles are a type of roofing material that offers the appeal of traditional shingles, such as wood, tile, and slate, while providing high fire resistance and durability. They are crafted from durable heavy-gauge aluminum and designed to emulate the classic appearance of traditional slate, cedar shingles, and other materials. Metal shingles are extremely fire-resistant, so they are used in fire-prone areas. Plastic shingles Plastic has been used to produce imitation slate shingles. These are lightweight and durable, but combustible, and they are among the cheapest shingles to have installed. Cedar shingles Cedar shingles are resistant to rot and are commonly available in standard lengths. They gradually fade from their natural wood color to a silvery tone. Types include hand-split resawn shakes, tapersplit shakes and tapersawn shakes. Composite shingles Composite or synthetic shingles are a relatively new type of shingle material made from a blend of materials, including asphalt, fiberglass, and other polymers. These shingles are designed to mimic the look of natural materials such as wood, slate, or clay, and aim to increase durability, strength, and resistance to weather elements relative to these natural materials. Some examples of manufacturers of synthetic or composite roof shingles are DaVinci Roofscapes and Unified Steel. Rubber shingles Rubber shingle roofs are typically made from 95% recycled material from a variety of sources, including recycled tires. They last twice as long as asphalt shingles but are about twice the price of asphalt. They are quieter than most roofs, hail-resistant, and have a high wind rating if there is a tongue-and-groove fitting at the front edge of the shingle design.
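As a worked illustration (an addition to, not part of, the original text) of pitch figures such as the 4:12 minimum cited above for shingles without additional underlayment, a roof pitch written as rise:run in the US convention can be converted to a slope angle with basic trigonometry. The short Python sketch below uses a hypothetical helper name chosen for this example.

import math

# Minimal sketch: convert a rise:run roof pitch (run conventionally 12) into degrees.
def pitch_to_degrees(rise, run=12.0):
    return math.degrees(math.atan(rise / run))

print(round(pitch_to_degrees(4), 1))    # 4:12 pitch -> about 18.4 degrees
print(round(pitch_to_degrees(12), 1))   # 12:12 pitch -> 45.0 degrees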
Technology
Building materials
null
2603832
https://en.wikipedia.org/wiki/Allied%20health%20professions
Allied health professions
Allied health professions (AHPs) are a category of health professionals that provide a range of diagnostic, preventive, therapeutic, and rehabilitative services in connection with health care. While there is no international standard for defining the diversity of allied health professions, they are typically considered those which are distinct from the fields of medicine, nursing and dentistry. In providing care to patients with certain illnesses, AHPs may work in the public or private sector, in hospitals or in other types of facilities, and often in clinical collaboration with other providers having complementary scopes of practice. Allied health professions are usually of smaller size proportional to physicians and nurses. It has been estimated that approximately 30% of the total health workforce worldwide are AHPs. In most jurisdictions, AHPs are subject to health professional requisites including minimum standards for education, regulation and licensing. They must work based on scientific principles and within an evidence based practice model. They may sometimes be considered to perform the role of mid-level practitioners, when having an advanced education and training to diagnose and treat patients, but not the certification of a physician. Allied health professionals are different from alternative medicine practitioners, also sometimes called natural healers, who work outside the conventions of modern biomedicine. Definition The organization of International Chief Health Professions Officers (ICHPO) developed a widely-used definition of the allied health professions: Professions The allied health professions represent a large cluster of health and care service providers, which usually require specific training and/or certification, but which are distinct from the medicine, nursing and dentistry professions. There is a large demand for allied health professionals, especially in rural and medically underserved areas. AHPs are generally considered distinct from other healthcare service providers on the basis of several factors. These factors may include AHPs offering services in ways which support treatments provided by other healthcare professionals (working either in independent autonomous practice or under direct supervision), or by offering services which other healthcare professionals require but do not provide themselves (for example in the use of medical technologies). The precise titles, roles and requisites of AHPs vary considerably from country to country. For the United States, a generic definition is in the Public Health Service Act, including those with "training, in a science relating to health care, [and] who shares in the responsibility for the delivery of health care services or related services" (other than a registered nurse or physician assistant). In South Africa, AHPs are identified and regulated through the Health Professions Council of South Africa (e.g., clinical technologists, dental therapists) or through the Allied Health Professions Council (e.g., massage therapists, chiropractors). Depending on the country and local health care system, the professions that are considered AHPs vary. For example, in some contexts optometrists are not considered AHPs, as the profession has a longer history of primary care practice independent of modern medicine, whereas in others optometrists are identified as falling under the AHP umbrella. 
Similarly, in some health care jurisdictions physiotherapists are not considered AHPs, as they tend to have more autonomy in private practice without the need for medical referral, whereas in other jurisdictions physiotherapists are identified and regulated as AHPs. A limited subset of the following professional areas may be represented, and may be regulated: Training and education Some allied health professions are more specialized, and so must adhere to national training and education standards and their professional scope of practice. Often they must prove their skills through degrees, diplomas, certified credentials, and continuing education. Other allied health professions require no special training or credentials and are trained for their work by their employer through on-the-job training (which would then exclude them from consideration as an allied health profession in a country like Australia). Many allied health jobs are considered career ladder jobs because of the opportunities for advancement within specific fields. Allied health professions can include the use of many skills. Depending on the profession, these may include basic life support; medical terminology, acronyms and spelling; basics of medical law and ethics; understanding of human relations; interpersonal communication skills; counseling skills; computer literacy; ability to document healthcare information; interviewing skills; and proficiency in word processing; database management and electronic dictation. History and growth The explosion of scientific knowledge that followed World War II brought increasingly sophisticated and complex medical diagnostic and treatment procedures. Increasing public demand for medical services combined with higher health care costs provoked a trend toward expansion of service delivery from treating patients in hospitals to widespread provision of care in physician's private and group practices, ambulatory medical and emergency clinics, and mobile clinics and community-based care. Changes in the health industry and emphasis on cost-efficient solutions to health care delivery will continue to encourage expansion of the allied health workforce. The World Health Organization estimates there is currently a worldwide shortage of about 2 million allied health professionals (considering all health workers aside from medical and nursing personnel) needed in order to meet global health goals. In recognition of the growth of the number and diversity of allied health professionals in recent years, the 2008 version of the International Standard Classification of Occupations increased the number of groups dedicated to allied health professions. Depending on the presumed skill level, they may either be identified as "health professionals" or "health associate professionals". For example, new categories have been created for delineating "paramedical practitioners"—grouping professions such as clinical officers, clinical associates, physician assistants, Feldshers, and assistant medical officers—as well as for community health workers; dietitians and nutritionists; audiologists and speech therapists; and others. In developing countries, many national human resources for health strategic plans and international development initiatives are focusing on scaling up training of allied health professions, such as HIV/AIDS counsellors, clinical officers and community health workers, in providing essential preventive and treatment services in ambulatory and community-based care settings. 
With growing demand for ambulatory health care, researchers expect to witness a heavier demand for professions that are employed outside of hospital settings — including allied health. Modern times India In India, the National Commission for Allied and Healthcare Professions identifies and sets quality standards for 56 professions in diagnostics, therapeutics, community health, and biomedical technology (e.g., physiotherapists, radiologists). United Kingdom In the United Kingdom there are 12 distinct professions who are considered allied health professionals; in combination they account for about 6% of the NHS workforce. In 2013 the annual expenditure on services provided by allied health professionals amounted to around £2 billion, although there is a lack of evidence around the extent to which these services improve the quality of care. United States In the United States, the Association of Schools of Allied Health Professionals uses wording from the Public Health Service Act to list those who are considered to be allied health professionals. Professionals who are excluded under the Act from the list of AHPs, although they may possess degrees or diplomas in health sciences, include the following: Employment projections Projections in the United States and many other countries have shown an expected long-term shortage of qualified workers to fill many allied health positions. This is primarily due to expansion of the health industry due to demographic changes (a growing and aging population), large numbers of health workers nearing retirement, the industry's need to be cost efficient, and a lack of sufficient investment in training programs to keep pace with these trends. Studies have also pointed to the need for increased diversity in the allied health workforce to realize a culturally competent health system. Workforce and health care experts anticipate that health services will increasingly be delivered via ambulatory and nursing care settings rather than in hospitals. According to the North American Industry Classification System (NAICS), the health care industry consists of four main sub-sectors, divided by the types of services provided at each facility: Hospitals: primarily provides inpatient health services and may provide some outpatient services as a secondary activity. Ambulatory health care settings: primarily provides outpatient services at facilities such as doctors' offices, outpatient clinics and clinical laboratories. Nursing and residential care facilities: provides residential care, such as community care for the elderly or mental health and substance abuse facilities. Social Assistance: provides services for the elderly and/or disabled, services for the homeless and poor, vocational rehabilitation, or child day care services. In the US, a larger proportion of the allied health care workforce is already employed in ambulatory settings. In California, nearly half (49.4 percent) of the allied health workforce is employed in ambulatory health care settings, compared with 28.7 percent and 21.9 percent employed in hospital and nursing care, respectively. One source reported allied health professionals making up 60 percent of the total US health workforce. Advancements in medical technology also allow for more services that formerly required expensive hospital stays to be delivered via ambulatory care. For example, in California, research has predicted the total consumption of hospital days per person will decline from 4 days in 2010 to 3.2 days in 2020 to 2.5 days in 2030. 
In contrast, the number of ambulatory visits per person will increase from 3.2 visits per person in 2010 to 3.6 visits per person in 2020 to 4.2 visits in 2030.
Biology and health sciences
Fields of medicine
Health
2604065
https://en.wikipedia.org/wiki/Agricultural%20education
Agricultural education
Agricultural education is the systematic and organized teaching, instruction and training (theoretical as well as hands-on, real-world fieldwork-based) available to students, farmers or individuals interested in the science, business and technology of agriculture (animal and plant production) as well as the management of land, environment and natural resources. Agricultural education is part of the curriculum of primary and secondary schools along with tertiary institutions such as colleges, universities and vocational and technical schools. Agricultural education resources are also provided by youth organizations, farm apprenticeships/internships, non-profit organizations, and government agencies/ministries, as well as by agricultural workshops, trainings, shows, fairs, and research institutions. Online/distance learning programs are also available. In institutions, agricultural education serves as preparation for employment or careers in the farming and agricultural sector. Students learn about general principles of land management, soil science and pasture management, as well as the principles of agricultural economics, plant growth (plant physiology and how plants transport materials, reproduce and germinate), crop production (land preparation, cultivation of cash crops, crop selection, planting and maintenance), and crop protection (weed, pest and disease control, integrated pest management and the responsible use of farm chemicals), in addition to livestock anatomy and physiology, production (livestock housing, nutrition and health management for the well-being of animals and optimal production), and breeding. Students who pursue higher education in colleges and universities are provided with more in-depth and focused education so that they can develop expertise in specialized areas such as animal science (physiology, nutrition, reproduction and health aspects of domesticated animals such as dairy cattle, sheep, poultry, etc.), food science (sustainable food, food safety, physiochemical and biological aspects of food, etc.), genetics (animal and plant genetics and genomics and their application in breeding and biotechnology), international agriculture (global perspective on international agribusiness, global food systems, water and energy issues, cropping systems in different regions), farm business management (budgeting, marketing, planning and other skills necessary to manage the financial and business aspects of agricultural operations), and sustainable and organic agriculture. Horticulture, turf grass management, small animal welfare and related subjects can also be taught. The main purposes of agricultural education encompass building a skilled agricultural workforce through training and preparation of future farmers and agricultural professionals, promotion of sustainable and responsible agricultural practices, enhancement of food security, development of cutting-edge agricultural technologists, innovators and leaders, improvement of awareness and understanding of agriculture to bridge the gap between the source of food and the broader community of consumers, contribution to rural economic development and growth, and strengthening the connection between urban and rural agricultural communities. Historically, farming techniques and knowledge were passed down through oral traditions. In the 19th century, agricultural education was formalized as an academic discipline through the Morrill Acts in the United States.
Over the years, it gradually came to encompass a broad range of scientific subjects related to animals, plants and crops, soil, business, food, land, natural resources and environment. In recent decades, agricultural education has been adapted to address the issues of new technology, global perspectives and food security. Recent technological advancements discussed in agricultural education include the integration of precision agriculture, biotechnology, advanced machinery and data-driven approaches to optimize production, reduce resource wastage, improve overall efficiency, and minimize agriculture's ecological footprint. In the future, online learning, interdisciplinary research, community outreach and preparation for diverse career opportunities will also play a crucial role in addressing the evolving challenges of the agricultural sector. Disciplines closely tied to agricultural education include agricultural communications, agricultural leadership, and extension education. In the United States The chief sources of agricultural education in the United States are high schools, community colleges, four-year colleges and universities, youth organizations, and the 10x15 program. History The rapid growth of agricultural education began during the late 19th century. In 1862, the United States Congress created the Department of Agriculture to gather and distribute agricultural information. The Morrill Act, which provided for the land-grant schools, became law that same year. The Hatch Act of 1887 gave federal funds to establish agricultural experiment stations. The first dairy school in the U.S. was created at the University of Wisconsin–Madison in 1891. Government support for agricultural education increased during the 20th century. For example, the Smith-Lever Act of 1914 created what is now the Cooperative Extension System. The Smith-Hughes Act of 1917 and the George-Barden Act of 1946 financed high-school instruction in farming. Woodlawn High School (Woodlawn, Virginia) was the first public high school in the United States to offer agricultural education classes under the Smith-Hughes Act. The Vocational Education Act of 1963 funded training in other fields of agriculture. Agricultural science and education expanded after 1900 in response to a need for more technical knowledge and skill in the use of newly developed agricultural technologies. This development led to the use of modern farming methods that required fewer farmworkers, resulting in larger, corporatized farms and ranches, which in turn increased the need for more agricultural science and education. Other legislation over the last century also shaped agricultural education into what the field is today. The Education for All Handicapped Children Act of 1975 required all public schools to provide a free and appropriate education to all students with disabilities; under this provision, children with disabilities were now allowed to enroll in agricultural classes. The Americans with Disabilities Act, enacted in 1990, further required public schools to give students with disabilities educational opportunities equal to those of all other children in the country, and as a result it increased opportunities for students with disabilities to participate in agricultural classes.
The Educate America Act of 1994 raised benchmark standards for public education at the district level, increasing curriculum and development requirements for all classes, including agricultural ones. The School-to-Work Opportunities Act, also of 1994, required teachers to teach tasks and disciplines that would help their students prepare for employment once they graduated, of which practical education in agriculture was a major part. Finally, No Child Left Behind (the Elementary and Secondary Education Act of 2001) further raised standards for students in public schools and increased requirements of teachers in order to reach these standards, affecting agricultural education as part of the general curriculum of many schools. Elementary school In 2006, Walton Rural Life Center in Walton, Kansas, became the first public elementary school in the United States to base its curriculum around agriculture. Integrating agricultural components into the classroom is one of the challenges that elementary teachers face. They are also often expected to teach with outdated teaching materials. John Block, a former United States Secretary of Agriculture, encouraged agricultural literacy. Agriculture in the Classroom, launched in 1981, was one of Block's initiatives to stress agricultural literacy, and it was soon adopted in every state. Though Agriculture in the Classroom brought agricultural education to all fifty states, elementary instruction had begun in some schools possibly before the 1900s. As elementary agricultural education grew, twenty-one states came to require it by 1915. The required curriculum was evenly split between urban and rural schools. The states that required agricultural education at the elementary level were all midwestern or southern states, both regions rich in agriculture. High schools Agricultural education at the high school level focuses on three main categories: classroom instruction, supervised agricultural experience (SAE), and active involvement in the National FFA Organization (Future Farmers of America). Classroom instruction teaches students the basic concepts of the particular course through hands-on learning and experience. Students are taught the information in the curriculum so that they can understand it and develop skills in applying it to problems that occur in an agricultural setting. The Supervised Agricultural Experience (SAE) portion of the agricultural curriculum requires a student to apply the knowledge gained in classroom instruction to real-life situations. Several topic choices are available, whether in a farm setting, an exploratory setting, entrepreneurship, agribusiness, or research projects. The student chooses a task from one of these topic areas and conducts a research experiment throughout the course of the agricultural class. The teacher is involved in the process and helps guide the student along the way. SAE programs give students the opportunity to take the information learned in the classroom setting and apply it to an agricultural topic that interests them. This portion of an agricultural education gives students an idea of how to work in the real world and solve problems that arise in the field. The FFA is a national organization in which all agricultural classes at the high school level are involved.
The agricultural teacher leads that particular school's FFA chapter and guides students' activities and programs held throughout the year. FFA is an educational program designed to teach students leadership skills in both agricultural settings and everyday life, encourage personal growth, boost self-confidence, build character, encourage healthy lifestyles, and give students opportunities to be part of the agricultural economy. FFA chapters volunteer in communities, conduct banquets for FFA members and their families, raise awareness of agriculture, compete in FFA competitions, and attend national FFA conventions. In some states, the agriculture teacher leads a local Young Farmers Association in monthly meetings. The group may comprise local farmers, citizens, and anyone interested in learning about agriculture and new farming methods. The Young Farmers Association is designed to aid the adoption of agricultural technologies, and it gives agriculture teachers the opportunity to meet local citizens and reach out to the community. Colleges and universities Land-grant universities awarded more than three-quarters of all agricultural degrees in 1988. These state schools receive federal aid under legislation that followed the Morrill Act of 1862, which granted public lands to support agricultural or mechanical education. Land-grant universities have three chief functions: teaching, research, and outreach, or extension. Teaching A bachelor's degree in agricultural education generally leads to employment teaching agriculture up to the high school level or in the agricultural sector. Students are required to complete agriculture classes as well as education classes in order to become qualified to teach. A master's degree is required in order to teach at the college level. The Association for Career and Technical Education (ACTE), the largest national education association dedicated to the advancement of education that prepares youth and adults for careers, provides resources for agricultural education. An agricultural education degree also provides the qualifications to do extension work for universities and agricultural companies and organizations. The following universities provide pathways to complete certification requirements of their home states in secondary agricultural education: Alcorn State University Angelo State University - Texas Auburn University Clemson University Cornell University Colorado State University Illinois State University Louisiana State University Delaware Valley University Michigan State University Middle Tennessee State University Montana State University North Carolina State University North Dakota State University Oregon State University The Pennsylvania State University South Dakota State University The University of Idaho Sam Houston State University - Texas Sul Ross State University - Texas Texas A & M - Kingsville Texas A & M - Commerce Texas A&M University Texas State University Texas Tech University Tarleton State University - Texas University of Arkansas University of Georgia University of Missouri Utah State University Washington State University West Virginia University West Texas A & M - Texas Colleges of agriculture additionally prepare students for careers in all aspects of the food and agricultural system.
Some career choices include food science, veterinary science, farming, ranching, teaching, marketing, agricultural communication, management, and social services. Colleges and universities awarded about 21,000 bachelor's degrees in agriculture per year in 1988, and about 6,000 master's or doctor's degrees. Research Each land-grant university has an agricultural experiment station equipped with laboratories and experimental farms. There, agricultural scientists work to develop better farming methods, solve the special problems of local farmers, and provide new technology. Research published in scholarly journals about agricultural safety is available from the NIOSH-supported National Agricultural Safety Database. The American Dairy Science Association provides research and education scholarships focused on the dairy farm and processing industries. Extension service The Cooperative Extension System is a partnership of the federal, state, and county governments. This service distributes information gathered by the land-grant universities and the U.S. Department of Agriculture to farmers, families, and young people. County extension agents, located in most counties, trained and supported about 3 million volunteer leaders as of 1988. Agents and volunteers carry out extension programs through meetings, workshops, newsletters, radio, television, and visits. Related organizations Professional organizations in the United States related to agricultural education include the American Association for Agricultural Education (AAAE), the Association for Career and Technical Education (ACTE), the National Association of Agricultural Educators (NAAE), and The National Council for Agricultural Education (The Council). 4-H is a youth development program that teaches children about science, leadership, research, and related subjects. With over 6 million members nationwide, it is the largest youth development organization in the United States. 4-H members use hands-on learning to reach goals and help in their communities. Members of 4-H carry out group and individual projects dealing with conservation, food and agriculture, health and safety, and other subjects. The 4-H program in the United States is part of the Cooperative Extension service.
The land grant approach of the USA owes much to the Scottish system in particular. Changes in higher agricultural education around the world today are highlighting both implicit approaches that have hampered development and exceptional advances that have fed the world. The process has been described in one text (below) which takes a global perspective. Agricultural education in other countries resembles that in the United States. Canada has its own 4-H program. Agriculture Canada distributes information on new farming methods and maintains experimental farms, research stations, and research institutions throughout the country. BC Agriculture in the Classroom Foundation operates in the province of British Columbia. In Australia, each state has several agricultural research stations and an extension service. Great Britain has a program of youth clubs, called Young Farmers' Clubs, that resemble 4-H. The Food and Agriculture Organization of the United Nations works to train people throughout the world in modern farming methods. The United States gives technical assistance to farmers in developing nations through its Agency for International Development (AID). The Green School Alliance (GSA), founded in 2007, has been working globally to expand its network of peer-to-peer Green Schools, which focus on teaching sustainability and environmental education. It is a non-profit organization with free and voluntary membership. It has accrued 8,087 member schools from 48 states and 91 countries. Australia As of February 2015, agriculture in Australia employed over 235,300 people in the agriculture, forestry and fishing industry. This industry alone accounts for a 12% share of GDP, earning close to $155 billion a year. Farmers own a combined 135,997 farms covering approximately 61% of the land mass. Given these figures, the agricultural programs in place in schools and universities are very important to the future of the country. Several high schools across the country specialise in agricultural education. These high schools are predominantly set in rural areas with access to land. In the majority of cases, students travel as far as 1000 km to attend, taking up residence at the schools as boarders for the school term. One of the biggest in Australia is Farrer Memorial Agricultural High School in central New South Wales. The Agriculture in Education programme, launched by the Australian government in 2015, helps teachers better understand the products and processes associated with food and fibre production and gives students an opportunity to understand the importance of agriculture in the Australian economy. Topics covered by the materials include designing and making a financial plan for a market garden, free range chicken farming, food security, and sustainable production practices in food and fibre. The agricultural environment has changed enormously over the past 15 years, with greater emphasis on product quality issues, vertical integration from production to consumer, diversity in demand options, and environmental issues (notably drought), as well as welfare and ethical issues. Western Australia In Western Australia, the Western Australian College of Agriculture is the primary provider of agricultural high school education in the state, providing educational opportunities at six campuses located near Cunderdin, Denmark, Esperance, Harvey, Morawa and Narrogin. Each campus has modern facilities on commercial-sized farms and offers Year 10, 11 and 12 programs for male and female students.
The students study a range of School Curriculum and Standards Authority subjects leading to Secondary Graduation and the Western Australian Certificate of Education, and also complete vocational qualifications from Industry Training Packages. The major focus is on the study of agriculture, but the program may also include horticulture, viticulture, equine, aquaculture, forestry, building construction, metals and engineering, and automotive. Each campus offers some specialist programs that can lead to tertiary study, apprenticeships and careers in a range of agriculture-related vocations. Tertiary studies in Perth are available at Curtin University, Murdoch University and Muresk Institute, which offer degrees in agriculture including Agricultural Business Management and Agricultural Science. Western Australia is in a precarious position and faces several challenges, given that agriculture in Australia is affected by an ongoing shortage of labour and of skills. Labour supply is being adversely affected by an ageing workforce, retirements by baby boomers, the seasonal nature of the lower-skilled workforce and an inability to attract sufficient young people to work in the industry. Agricultural educators Otto Frederick Hunziker, Purdue University John Wrightson, Downton Agricultural College Raymond A. Pearson, Cornell University HAS University of Applied Sciences Kasetsart University King Mongkut's Institute of Technology Ladkrabang Wageningen University
Technology
Academic disciplines
null
2604677
https://en.wikipedia.org/wiki/Tabqa%20Dam
Tabqa Dam
The Tabqa Dam (, ; ), or al-Thawra Dam as it is also named (, ; , literally "Revolution Dam"), most commonly known as Euphrates Dam (; ; ), is an earthen dam on the Euphrates, located upstream from the city of Raqqa in Raqqa Governorate, Syria. The city of Al-Thawrah is located immediately south of the dam. The dam is high and long and is the largest dam in Syria. Its construction led to the creation of Lake Assad, Syria's largest water reservoir. The dam was constructed between 1968 and 1973 with help from the Soviet Union. At the same time, an international effort was made to excavate and document as many archaeological remains as possible in the area of the future lake before they would be flooded by the rising water. When the flow of the Euphrates was reduced in 1974 to fill the lake behind the dam, a dispute broke out between Syria and Iraq (which is downstream) that was settled by intervention from Saudi Arabia and the Soviet Union. The dam was originally built to generate hydroelectric power, as well as irrigate lands on both sides of the Euphrates. The dam has not reached its full potential in either of these objectives. Project history In 1927, when Syria was a French mandate, it was proposed to build a dam in the Euphrates near the Syria–Turkey border. After Syria became independent in 1946, the feasibility of the project was studied and shelved. In 1957, the Syrian government reached an agreement with the Soviet Union to build a dam in the Euphrates. In 1960, as part of the United Arab Republic, Syria signed an agreement with West Germany for a financing loan. In 1965, after Syria left the UAR, a new agreement was reached with the Soviet Union. A special government department was created to oversee the construction. In the early 1960s Swedish geomorphologist Åke Sundborg worked as an advisor on the dam project with the task of estimating the amount and fate of sediment entering the dam. Sundborg developed a mathematical model on the projected growth of a river delta in the dam. Originally, the Tabqa Dam was conceived as a dual-purpose dam. The dam would include a hydroelectric power station with eight turbines capable of producing 880 MW in total, and would irrigate an area of on both sides of the Euphrates. Construction of the dam lasted between 1968 and 1973, while the accompanying power station was finished on 8 March 1978. The dam was constructed during the agricultural reform policies of Hafez al-Assad, who had re-routed the Euphrates river for the dam in 1974. The total cost of the dam was US$340 million of which US$100 million was in the form of a loan by the Soviet Union. The Soviet Union also provided technical expertise. During construction, up to 12,000 Syrians and 900 Russian technicians worked on the dam. They were housed in the greatly expanded town near the construction site, which was subsequently renamed Al-Thawrah. To facilitate the project, as well as the construction of irrigation works on the Khabur River, the national railway system (Chemins de Fer Syriens) was extended from Aleppo to the dam, Raqqa, Deir ez-Zor, and eventually Qamishli. Around 4,000 Arab families who had been living in the flooded part of the Euphrates Valley were resettled in other parts of northern Syria, part of a partially implemented plan to establish an "Arab belt" along the borders with Turkey and Iraq in order to separate Kurds in Syria from Turkish and Iraqi Kurdistan. 
Dispute with Iraq In 1974, the authorities started to fill the lake behind the dam by reducing the flow of the Euphrates. Slightly earlier, the Turkish government had started filling the reservoir of the newly constructed Keban Dam, and at the same time the area was hit by significant drought. As a result, Iraq received significantly less water from the Euphrates than normal, and complained that annual Euphrates flow had dropped from in 1973 to in 1975. Iraq asked the Arab League to intervene but Syria argued that it received less water from Turkey as well. As a result, tensions rose; both governments sent troops to the Syria-Iraq border, and the Iraqi government threatened to bomb the Tabqa Dam. Before the dispute could escalate any further, an agreement was reached in 1975 after mediation by Saudi Arabia and the Soviet Union, whereby Syria immediately increased the flow from the dam and agreed to let 60 percent of the Euphrates water that came over the Syria-Turkey border flow into Iraq. In 1987, Turkey, Syria and Iraq signed an agreement by which Turkey was committed to maintain an average Euphrates flow of per second into Syria, which translates into of water per year. Rescue excavations in the Lake Assad region The upper part of the Syrian Euphrates valley has been intensively occupied at least since the Late Natufian period (10,800–9500 BC). Nineteenth- and early twentieth-century European travellers had already noted the presence of numerous archaeological sites in the area that would be flooded by the new reservoir. In order to preserve or at least document as many of these remains as possible, an extensive archaeological rescue programme was initiated during which more than 25 sites were excavated. Between 1963 and 1965, archaeological sites and remains were located with the help of aerial photographs, and a ground survey was carried out as well to determine the periods that were present at each site. Between 1965 and 1970, foreign archaeological missions carried out systematic excavations at the sites of Mureybet (United States), Tell Qannas (Habuba Kabira) (Belgium), Mumbaqa (Germany), Selenkahiye (Netherlands), and Emar (France). With help from UNESCO, two minarets at Mureybet and Meskene were photogrammetrically measured, and a protective glacis was built around the castle Qal'at Ja'bar. The castle was located on a hilltop that would not be flooded, but the lake would turn it in an island. The castle is now connected to the shore by a causeway. In 1971, with support from UNESCO, Syria appealed to the international community to participate in the efforts to salvage as many archaeological remains as possible before the area would disappear under the rising water of Lake Assad. To stimulate foreign participation, the Syrian antiquities law was modified so that foreign missions had the right to claim a part of the artefacts that were found during excavation. As a result, between 1971 and 1977, numerous excavations were carried out in the Lake Assad area by Syrian as well as foreign missions. Syrian archaeologists worked at the sites of Tell al-'Abd, 'Anab al-Safinah, Tell Sheikh Hassan, Qal'at Ja'bar, Dibsi Faraj and Tell Fray. There were missions from the United States on Tell Hadidi (Azu), Dibsi Faraj, Tell Fray and Shams ed-Din-Tannira; from France on Mureybet and Emar; from Italy on Tell Fray; from the Netherlands on Tell Ta'as, Jebel Aruda and Selenkahiye; from Switzerland on Tell al-Hajj; from Great Britain on Abu Hureyra and Tell es-Sweyhat; and from Japan on Tell Roumeila. 
In addition, the minarets of Mureybet and Meskene were moved to higher locations, and Qal'at Ja'bar was further reinforced and restored. Many finds from the excavations are now on display in the National Museum of Aleppo, where a special permanent exhibition is devoted to the finds from the Lake Assad region. Other dams in the Syrian Euphrates valley After the completion of the Tabqa Dam, Syria built two more dams in the Euphrates, both of which were functionally related to the Tabqa Dam. The Baath Dam, located downstream from the Tabqa Dam, was completed in 1986 and functions as a floodwater control to manage the irregular output of the Tabqa Dam and as a hydroelectric power station. The Tishrin Dam, which functions primarily as a hydroelectric power station, has been constructed south from the Syria–Turkey border and filling of the reservoir started in 1999. Its construction was partly motivated by the disappointing performance of the Tabqa Dam. The implementation of a fourth dam between Raqqa and Deir ez-Zor – the Halabiye Dam – was planned in 2009 and an appeal to archaeologists was released to excavate sites that will be flooded by the new reservoir. Recent history On 11 February 2013 the dam was captured by the Syrian opposition in their fight against the government, according to The Syrian Observatory for Human Rights. In 2013, four of the dam's eight turbines were operational and the original staff continued to manage it. Dam workers still received pay from the Syrian Government, and fighting in the area temporarily ceased if repairs were needed. The dam was then captured by the Islamic State of Iraq and the Levant in 2014. SDF efforts to retake parts of the Al-Raqqa and Deir ez-Zor Governorates, including the area immediately surrounding the dam, began in November 2016. Interruptions in power output from the dam due to combat are estimated to have affected up to 40,000 people. In January 2017 the Euphrates rose 10 meters due to heavy precipitation and flow mismanagement, disrupting transportation and flooding farmland downstream. A nearby raid against ISIL by combined SDF and US special forces also impacted the dam's entrance. In March 2017, ISIL warned of the dam's imminent collapse after the towers attached to the dam were bombed by an American B-52 bomber during a joint US/SDF operation to capture it on March 26, 2017. The dam had been on a U.S. no-strike list but was struck by three bombs anyway. The bombing caused critical equipment to fail and the dam to stop functioning. One of the bombs, a bunker buster, failed to detonate. An emergency ceasefire between the Islamic State, US forces, and the Syrian government, otherwise sworn enemies, enabled engineers to make emergency repairs to the dam to prevent it from failing while the Turkish authorities coordinated to close the gates of dams upstream in order to prevent overtopping. A US drone strike killed three of the civilian emergency dam workers shortly thereafter. On March 29 a floodgate was opened by emergency workers, causing flooding downstream which displaced approximately 3,000 people. A second floodgate was opened on April 5, mitigating risk of collapse. If the dam had failed major flooding would have extended past Deir ez-Zor, more than 100 miles downstream. SDF forces announced they captured the dam on 10 May 2017. Characteristics of the dam and the reservoir The Tabqa dam is located on a spot where rocky outcrops on each side of the Euphrates Valley are less than apart. 
The dam is an earth-fill dam that is long, high from the riverbed ( above sea-level), wide at its base and at the top. The hydroelectric power station is located on the southern end of the dam and contains eight Kaplan turbines. The turbines' rotation speed is 125 RPM, and they can potentially generate 103 MW each. Lake Assad is long and on average wide. The reservoir can potentially hold of water, at which size its surface area would be . Annual evaporation is due to the high average summer temperature in northern Syria. This is high compared to reservoirs upstream from Lake Assad. For example, the evaporation at Keban Dam Lake is per year at roughly the same surface area. Neither the Tabqa Dam nor Lake Assad is currently used to its full economic potential. Although the lake can potentially hold , actual capacity is , with a surface area of . The proposed irrigation scheme suffered from a number of problems, including the high gypsum content in the reclaimed soils around Lake Assad, soil salinization, the collapse of canals that distributed the water from Lake Assad, and the unwillingness of farmers to resettle in the reclaimed areas. As a result, only were irrigated from Lake Assad in 1984. In 2000, the irrigated surface had risen to , which is 19 percent of the projected . Due to lower than expected water flow from Turkey, as well as lack of maintenance, the dam generates only 150 MW instead of 800 MW. Lake Assad is the most important source of drinking water to Aleppo, providing the city through a pipeline with of drinking water per year. The lake also supports a fishing industry. Environmental effects Research indicates that the salinity of the Euphrates water in Iraq has increased considerably since the nearly simultaneous construction of the Keban Dam in Turkey and the Tabqa Dam in Syria. This increase can, among other things, be related to the lower discharge of the Euphrates as a result of the construction of the Keban Dam and the dams of the Southeastern Anatolia Project (GAP) in Turkey, and to a lesser degree of the Tabqa Dam in Syria. High-salinity water is less useful for domestic and irrigation purposes. The shore of the lake has developed into an important marshland area. On the southeastern shore, some areas have been reforested with evergreen trees including the Aleppo pine and the Euphrates poplar. Lake Assad is an important wintering location for migratory birds and the government has undertaken measures to protect small areas along the shores of Lake Assad from hunters by downgrading access roads. The island of Jazirat al-Thawra has been designated a nature reserve.
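As a rough illustration of why the lower-than-expected Euphrates flow described above translates directly into lower generation, the standard hydropower relation P = ρ·g·Q·H·η can be sketched as follows. The flow, head and efficiency values in this sketch are illustrative assumptions only, not figures taken from the sources above.

```python
# Rough hydropower sketch: P = rho * g * Q * H * eta
# All numeric inputs below are illustrative assumptions, not Tabqa Dam data.
RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_power_mw(flow_m3s: float, head_m: float, efficiency: float = 0.9) -> float:
    """Electrical output in MW for a given flow, head and overall efficiency."""
    return RHO * G * flow_m3s * head_m * efficiency / 1e6

# Hypothetical example: the same station at a design flow vs. a much lower actual flow.
print(hydro_power_mw(flow_m3s=2000, head_m=50))  # ~880 MW-class output at the assumed design flow
print(hydro_power_mw(flow_m3s=350, head_m=50))   # only ~150 MW-class output at the assumed low flow
```

The point of the sketch is simply that output scales linearly with river flow for a fixed head, so a large reduction in inflow from upstream produces a proportionally large reduction in generation.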
Technology
Dams
null
3552729
https://en.wikipedia.org/wiki/Marine%20engineering
Marine engineering
Marine engineering is the engineering of boats, ships, submarines, and any other marine vessel. Here it is also taken to include the engineering of other ocean systems and structures – referred to in certain academic and professional circles as "ocean engineering". After completing this degree one can join a ship as an officer in the engine department and eventually rise to the rank of chief engineer. This rank is one of the top ranks onboard and is equal to the rank of a ship's captain. Marine engineering is a widely chosen route into the merchant navy as an officer, as it provides opportunities in both onboard and onshore jobs. Marine engineering applies a number of engineering sciences, including mechanical engineering, electrical engineering, electronic engineering, and computer engineering, to the development, design, operation and maintenance of watercraft propulsion and ocean systems. It includes but is not limited to power and propulsion plants, machinery, piping, automation and control systems for marine vehicles of any kind, as well as coastal and offshore structures. History Archimedes is traditionally regarded as the first marine engineer, having developed a number of marine engineering systems in antiquity. Modern marine engineering dates back to the beginning of the Industrial Revolution (early 1700s). In 1807, Robert Fulton successfully used a steam engine to propel a vessel through the water. Fulton's ship used the engine to power a small wooden paddle wheel as its marine propulsion system. The integration of a steam engine into a watercraft to create a marine steam engine was the start of the marine engineering profession. Only twelve years after Fulton's Clermont had her first voyage, the Savannah marked the first sea voyage from America to Europe. Around 50 years later, steam-powered paddle wheels reached their peak with the creation of the Great Eastern, which was as big as one of the cargo ships of today: 700 feet in length and weighing 22,000 tons. Paddle steamers remained the front runners of the steamship industry for the next thirty years, until the next type of propulsion emerged. Training There are several educational paths to becoming a marine engineer, all of which include earning a university or college degree, such as a Bachelor of Engineering (B.Eng. or B.E.), Bachelor of Science (B.Sc. or B.S.), Bachelor of Technology (B.Tech.), Bachelor of Technology Management and Marine Engineering (B.TecMan & MarEng), or a Bachelor of Applied Science (B.A.Sc.) in Marine Engineering. Depending on the country and jurisdiction, to be licensed as a marine engineer, a Master's degree, such as a Master of Engineering (M.Eng.), Master of Science (M.Sc or M.S.), or Master of Applied Science (M.A.Sc.), may be required. Some marine engineers join the profession laterally, entering from other disciplines, like Mechanical Engineering, Civil Engineering, Electrical Engineering, Geomatics Engineering and Environmental Engineering, or from science-based fields, such as Geology, Geophysics, Physics, Geomatics, Earth Science, and Mathematics. To qualify as a marine engineer, those changing professions are required to earn a graduate Marine Engineering degree, such as an M.Eng, M.S., M.Sc., or M.A.Sc., after graduating from a different quantitative undergraduate program.
The fundamental subjects of marine engineering study usually include: Mathematics (calculus, algebra, differential equations, numerical analysis); Geoscience (geochemistry, geophysics, mineralogy, geomatics); Mechanics (rock mechanics, soil mechanics, geomechanics); Thermodynamics (heat transfer, work, mass transfer); Hydrogeology; Fluid mechanics (fluid statics, fluid dynamics); Geostatistics (spatial analysis, statistics); Control engineering (control theory, instrumentation); and Surface mining (open-pit mining). Related Fields Naval architecture In the engineering of seagoing vessels, naval architecture is concerned with the overall design of the ship and its propulsion through the water, while marine engineering ensures that the ship systems function as per the design. Although they are distinct disciplines, naval architects and marine engineers often work side-by-side. Ocean engineering (and combination with Marine engineering) Ocean engineering is concerned with other structures and systems in or adjacent to the ocean, including offshore platforms, coastal structures such as piers and harbors, and other ocean systems such as ocean wave energy conversion and underwater life-support systems. This in fact makes ocean engineering a distinct field from marine engineering, which is concerned with the design and application of shipboard systems specifically. However, on account of its similar nomenclature and multiple overlapping core disciplines (e.g. hydrodynamics, hydromechanics, and materials science), "ocean engineering" sometimes operates under the umbrella term of "marine engineering", especially in industry and academia outside of the U.S. The same combination has been applied to the rest of this article. Oceanography Oceanography is a scientific field concerned with the acquisition and analysis of data to characterize the ocean. Although separate disciplines, marine engineering and oceanography are closely intertwined: marine engineers often use data gathered by oceanographers to inform their design and research, and oceanographers use tools designed by marine engineers (more specifically, oceanographic engineers) to advance their understanding and exploration of the ocean. Mechanical engineering Marine engineering incorporates many aspects of mechanical engineering. One manifestation of this relationship lies in the design of shipboard propulsion systems. Mechanical engineers design the main propulsion plant, the powering and mechanization aspects of the ship functions such as steering, anchoring, cargo handling, heating, ventilation, air conditioning, interior and exterior communication, and other related requirements. Electrical power generation and electrical power distribution systems are typically designed by their suppliers; the only design responsibility of the marine engineer is installation. Furthermore, an understanding of mechanical engineering topics such as fluid dynamics, fluid mechanics, linear wave theory, strength of materials, structural mechanics, and structural dynamics is essential to a marine engineer's repertoire of skills. These and other mechanical engineering subjects serve as an integral component of the marine engineering curriculum. Civil Engineering Civil engineering concepts play an important role in many marine engineering projects such as the design and construction of ocean structures, ocean bridges and tunnels, and port/harbor design.
Coastal engineering Electronics and Robotics Marine engineering often deals in the fields of electrical engineering and robotics, especially in applications related to employing deep-sea cables and UUVs. Deep-sea cables A series of transoceanic fiber optic cables are responsible for connecting much of the world's communication via the internet, carrying as much as 99 percent of total global internet and signal traffic. These cables must be engineered to withstand deep-sea environments that are remote and often unforgiving, with extreme pressures and temperatures as well as potential interference by fishing, trawling, and sea life. UUV autonomy and networks The use of unmanned underwater vehicles (UUVs) stands to benefit from the use of autonomous algorithms and networking. Marine engineers aim to learn how advancements in autonomy and networking can be used to enhance existing UUV technologies and facilitate the development of more capable underwater vehicles. Petroleum Engineering A knowledge of marine engineering proves useful in the field of petroleum engineering, as hydrodynamics and seabed integration serve as key elements in the design and maintenance of offshore oil platforms. Marine construction Marine construction is the process of building structures in or adjacent to large bodies of water, usually the sea. These structures can be built for a variety of purposes, including transportation, energy production, and recreation. Marine construction can involve the use of a variety of building materials, predominantly steel and concrete. Some examples of marine structures include ships, offshore platforms, moorings, pipelines, cables, wharves, bridges, tunnels, breakwaters and docks. Challenges specific to marine engineering Hydrodynamic loading In the same way that civil engineers design to accommodate wind loads on building and bridges, marine engineers design to accommodate a ship or submarine struck by waves millions of times over the course of the vessel's life. These load conditions are also found in marine construction and coastal engineering Stability Any seagoing vessel has the constant need for hydrostatic stability. A naval architect, like an airplane designer, is concerned with stability. What makes the naval architect's job unique is that a ship operates in two fluids simultaneously: water and air. Even after a ship has been designed and put to sea, marine engineers face the challenge of balancing cargo, as stacking containers vertically increases the mass of the ship and shifts the center of gravity higher. The weight of fuel also presents a problem, as the pitch of the ship may cause the liquid to shift, resulting in an imbalance. In some vessels, this offset will be counteracted by storing water inside larger ballast tanks. Marine engineers are responsible for the task of balancing and tracking the fuel and ballast water of a ship. Floating offshore structures have similar constraints. Corrosion The saltwater environment faced by seagoing vessels makes them highly susceptible to corrosion. In every project, marine engineers are concerned with surface protection and preventing galvanic corrosion. Corrosion can be inhibited through cathodic protection by introducing pieces of metal (e.g. zinc) to serve as a "sacrificial anode" in the corrosion reaction. This causes the metal to corrode instead of the ship's hull. 
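To give a sense of the sizing involved in the sacrificial-anode approach just described, the sketch below estimates the anode mass needed to supply a given mean protection current over a design life. It is a simplified illustration only; real designs follow detailed standards such as DNV-RP-B401, and the current demand, design life and anode properties used here are assumed values.

```python
# Minimal sacrificial-anode sizing sketch (simplified; real designs follow
# cathodic-protection standards such as DNV-RP-B401).
# All numeric inputs below are illustrative assumptions, not design data.
def anode_mass_kg(protection_current_a: float,
                  design_life_years: float,
                  capacity_ah_per_kg: float = 780.0,  # rough electrochemical capacity of zinc
                  utilisation: float = 0.8) -> float:
    """Total anode mass needed to supply a mean protection current for the design life."""
    hours = design_life_years * 365 * 24
    return protection_current_a * hours / (capacity_ah_per_kg * utilisation)

# Hypothetical hull demanding a mean 20 A of protection current for 5 years:
print(round(anode_mass_kg(20, 5), 1), "kg of zinc")  # roughly 1400 kg
```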
Another way to prevent corrosion is by sending a controlled amount of low DC current through the ship's hull, thereby changing the hull's electrical charge and delaying the onset of electro-chemical corrosion. Similar problems are encountered in coastal and offshore structures. Anti-fouling Anti-fouling is the process of eliminating obstructive organisms from essential components of seawater systems. Depending on the nature and location of marine growth, this process is performed in a number of different ways: Marine organisms may grow and attach to the surfaces of the outboard suction inlets used to obtain water for cooling systems. Electro-chlorination involves running high electrical current through sea water, altering the water's chemical composition to create sodium hypochlorite, purging any bio-matter. An electrolytic method of anti-fouling involves running electrical current through two anodes (Scardino, 2009). These anodes typically consist of copper and aluminum (or alternatively, iron). The first metal, the copper anode, releases its ions into the water, creating an environment that is too toxic for bio-matter. The second metal, aluminum, coats the inside of the pipes to prevent corrosion. Other forms of marine growth such as mussels and algae may attach themselves to the bottom of a ship's hull. This growth interferes with the smoothness and uniformity of the ship's hull, giving the ship a less hydrodynamic shape and making it slower and less fuel-efficient. Marine growth on the hull can be remedied by using special paint that prevents the growth of such organisms. Pollution control Sulfur emission The burning of marine fuels releases harmful pollutants into the atmosphere. Ships burn marine diesel in addition to heavy fuel oil. Heavy fuel oil, being the heaviest of refined oils, releases sulfur dioxide when burned. Sulfur dioxide emissions have the potential to raise atmospheric and ocean acidity, causing harm to marine life. However, heavy fuel oil may only be burned in international waters due to the pollution created. It is commercially advantageous due to its cost-effectiveness compared to other marine fuels. It was projected that heavy fuel oil would be phased out of commercial use by the year 2020 (Smith, 2018). Oil and water discharge Water, oil, and other substances collect at the bottom of the ship in what is known as the bilge. Bilge water is pumped overboard, but must pass a pollution threshold test of 15 ppm (parts per million) of oil to be discharged. Water is tested and either discharged if clean or recirculated to a holding tank to be separated before being tested again. The tank it is sent back to, the oily water separator, utilizes gravity to separate the fluids due to their different densities. Ships over 400 gross tons are required to carry the equipment to separate oil from bilge water. Further, as enforced by MARPOL, all ships over 400 gross tons and all oil tankers over 150 gross tons are required to log all oil transfers in an oil record book (EPA, 2011). Cavitation Cavitation is the process of forming a vapor bubble in a liquid due to the vaporization of that liquid caused by an area of low pressure. This area of low pressure lowers the boiling point of the liquid, allowing it to vaporize into a gas. Cavitation can take place in pumps, which can cause damage to the impeller that moves the fluids through the system. Cavitation is also seen in propulsion.
Low pressure pockets form on the surface of the propeller blades as its revolutions per minute increase (IIMS, 2015). Cavitation on the propeller causes a small but violent implosion which could warp the propeller blade. To remedy the issue, more blades allow the same amount of propulsion force but at a lower rate of revolutions. This is crucial for submarines as the propeller needs to keep the vessel relatively quiet to stay hidden. With more propeller blades, the vessel is able to achieve the same amount of propulsion force at lower shaft revolutions. Applications The following categories provide a number of focus areas in which marine engineers direct their efforts. Arctic Engineering In designing systems that operate in the arctic (especially scientific equipment such as meteorological instrumentation and oceanographic buoys), marine engineers must overcome an array of design challenges. Equipment must be able to operate at extreme temperatures for prolonged periods of time, often with little to no maintenance. This creates the need for exceptionally temperature-resistant materials and durable precision electronic components. Coastal Design and Restoration Coastal engineering applies a mixture of civil engineering and other disciplines to create coastal solutions for areas along or near the ocean. In protecting coastlines from wave forces, erosion, and sea level rise, marine engineers must consider whether they will use a "gray" infrastructure solution - such as a breakwater, culvert, or sea wall made from rocks and concrete - or a "green" infrastructure solution that incorporates aquatic plants, mangroves, and/or marsh ecosystems. It has been found that gray infrastructure costs more to build and maintain, but it may provide better protection against ocean forces in high-energy wave environments. A green solution is generally less expensive and more well-integrated with local vegetation, but may be susceptible to erosion or damage if executed improperly. In many cases engineers will select a hybrid approach that combines elements of both gray and green solutions. Deep Sea Systems Life Support The design of underwater life-support systems such as underwater habitats presents a unique set of challenges requiring a detailed knowledge of pressure vessels, diving physiology, and thermodynamics. Unmanned Underwater Vehicles Marine engineers may design or make frequent use of unmanned underwater vehicles, which operate underwater without a human aboard. UUVs often perform work in locations which would be otherwise impossible or difficult to access by humans due to a number of environmental factors (e.g. depth, remoteness, and/or temperature). UUVs can be remotely operated by humans, like in the case of remotely operated vehicles, semi-autonomous, or autonomous. Sensors and instrumentation The development of oceanographic sciences, subsea engineering and the ability to detect, track and destroy submarines (anti-submarine warfare) required the parallel development of a host of marine scientific instrumentation and sensors. Visible light is not transferred far underwater, so the medium for transmission of data is primarily acoustic. High-frequency sound is used to measure the depth of the ocean, determine the nature of the seafloor, and detect submerged objects. The higher the frequency, the higher the definition of the data that is returned. 
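The depth-sounding principle mentioned above is simple: depth is half the round-trip travel time of the acoustic pulse multiplied by the speed of sound in seawater. The sketch below uses a nominal sound speed of about 1500 m/s as an assumption; in practice the value varies with temperature, salinity and pressure.

```python
# Basic echo-sounding sketch. The sound speed is a nominal assumed value,
# not a calibrated figure; real surveys correct for local water properties.
SOUND_SPEED_SEAWATER = 1500.0  # m/s, nominal

def depth_from_echo(two_way_travel_s: float,
                    sound_speed: float = SOUND_SPEED_SEAWATER) -> float:
    """Water depth in metres from the round-trip time of a sonar ping."""
    return sound_speed * two_way_travel_s / 2

print(depth_from_echo(4.0))  # a 4 s round trip corresponds to roughly 3000 m of water
```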
Sound Navigation and Ranging or SONAR was developed during the First World War to detect submarines, and has been greatly refined through to the present day. Submarines similarly use sonar equipment to detect and target other submarines and surface ships, and to detect submerged obstacles such as seamounts that pose a navigational obstacle. Simple echo-sounders point straight down and can give an accurate reading of ocean depth (or look up at the underside of sea-ice). More advanced echo sounders use a fan-shaped beam or sound, or multiple beams to derive highly detailed images of the ocean floor. High power systems can penetrate the soil and seabed rocks to give information about the geology of the seafloor, and are widely used in geophysics for the discovery of hydrocarbons, or for engineering survey. For close-range underwater communications, optical transmission is possible, mainly using blue lasers. These have a high bandwidth compared with acoustic systems, but the range is usually only a few tens of metres, and ideally at night. As well as acoustic communications and navigation, sensors have been developed to measure ocean parameters such as temperature, salinity, oxygen levels and other properties including nitrate levels, levels of trace chemicals and environmental DNA. The industry trend has been towards smaller, more accurate and more affordable systems so that they can be purchased and used by university departments and small companies as well as large corporations, research organisations and governments. The sensors and instruments are fitted to autonomous and remotely-operated systems as well as ships, and are enabling these systems to take on tasks that hitherto required an expensive human-crewed platform. Manufacture of marine sensors and instruments mainly takes place in Asia, Europe and North America. Products are advertised in specialist journals, and through Trade Shows such as Oceanology International and Ocean Business which help raise awareness of the products. Environmental Engineering In every coastal and offshore project, environmental sustainability is an important consideration for the preservation of ocean ecosystems and natural resources. Instances in which marine engineers benefit from knowledge of environmental engineering include creation of fisheries, clean-up of oil spills, and creation of coastal solutions. Offshore Systems A number of systems designed fully or in part by marine engineers are used offshore - far away from coastlines. Offshore oil platforms The design of offshore oil platforms involves a number of marine engineering challenges. Platforms must be able to withstand ocean currents, wave forces, and saltwater corrosion while remaining structurally integral and fully anchored into the seabed. Additionally, drilling components must be engineered to handle these same challenges with a high factor of safety to prevent oil leaks and spills from contaminating the ocean. Offshore wind farms Offshore wind farms encounter many similar marine engineering challenges to oil platforms. They provide a source of renewable energy with a higher yield than wind farms on land, while encountering less resistance from the general public (see NIMBY). Ocean wave energy Marine engineers continue to investigate the possibility of ocean wave energy as a viable source of power for distributed or grid applications. 
Many designs have been proposed and numerous prototypes have been built, but the problem of harnessing wave energy in a cost-effective manner remains largely unresolved. Port and Harbor Design A marine engineer may also deal with the planning, creation, expansion, and modification of port and harbor designs. Harbors can be natural or artificial and protect anchored ships from wind, waves, and currents. Ports can be defined as a city, town, or place where ships are moored, loaded, or unloaded. Ports typically reside within a harbor and are made up of one or more individual terminals that handle a particular cargo including passengers, bulk cargo, or containerized cargo. Marine engineers plan and design various types of marine terminals and structures found in ports, and they must understand the loads imposed on these structures over the course of their lifetime. Salvage and Recovery Marine salvage techniques are continuously modified and improved to recover shipwrecks. Marine engineers use their skills to assist at some stages of this process. Career Industry With a diverse engineering background, marine engineers work in a variety of industry jobs across every field of math, science, technology, and engineering. A few companies such as Oceaneering International and Van Oord specialize in marine engineering, while other companies consult marine engineers for specific projects. Such consulting commonly occurs in the oil industry, with companies such as ExxonMobil and BP hiring marine engineers to manage aspects of their offshore drilling projects. Military Marine engineering lends itself to a number of military applications – mostly related to the Navy. The United States Navy's Seabees, Civil Engineer Corps, and Engineering Duty Officers often perform work related to marine engineering. Military contractors (especially those in naval shipyards) and the Army Corps of Engineers play a role in certain marine engineering projects as well. Expected Growth In 2012, the average annual earnings for marine engineers in the U.S. were $96,140 with average hourly earnings of $46.22. As a field, marine engineering is predicted to grow approximately 12% from 2016 to 2026. Currently, there are about 8,200 naval architects and marine engineers employed, however, this number is expected to increase to 9,200 by 2026 (BLS, 2017). This is due at least in part to the critical role of the shipping industry on the global market supply chain; 80% of the world's trade by volume is done overseas by close to 50,000 ships, all of which require marine engineers aboard and shoreside (ICS, 2017). Additionally, offshore energy continues to grow, and a greater need exists for coastal solutions due to sea level rise. Education Maritime universities are dedicated to teaching and training students in maritime professions. Marine engineers generally have a bachelor's degree in marine engineering, marine engineering technology, or marine systems engineering. Practical training is valued by employers alongside the bachelor's degree. Professional institutions IMarEST World Maritime University Society for Underwater Technology IEEE Oceanic Engineering Society Marine Engineering and Research Institute Indian Maritime University Royal Institution of Naval Architects (RINA) Pakistan Marine Academy Society of Naval Architects and Marine Engineers (SNAME) is a worldwide society that is focused on the advancement of the maritime industry. SNAME was founded in 1893. 
American Society of Naval Engineers (ASNE) SIMAC Degrees in ocean engineering A number of institutions - including MIT, UC Berkeley, the U.S. Naval Academy, and Texas A&M University - offer a four-year Bachelor of Science degree specifically in ocean engineering. Accredited programs consist of basic undergraduate math and science subjects such as calculus, statistics, chemistry, and physics; fundamental engineering subjects such as statics, dynamics, electrical engineering, and thermodynamics; and more specialized subjects such as ocean structural analysis, hydromechanics, and coastal management. Graduate students in ocean engineering take classes on more advanced, in-depth subjects while conducting research to complete a graduate-level thesis. The Massachusetts Institute of Technology offers master's and PhD degrees specifically in ocean engineering. Additionally, MIT co-hosts a joint program with the Woods Hole Oceanographic Institution for students studying ocean engineering and other ocean-related topics at the graduate level. Journals and Conferences Journals about ocean engineering include Ocean Engineering, the IEEE Journal of Oceanic Engineering and the Journal of Waterway, Port, Coastal, and Ocean Engineering. Conferences in the field of marine engineering include the IEEE Oceanic Engineering Society's OCEANS Conference and Exposition and the European Wave and Tidal Energy Conference (EWTEC). Marine Engineering Achievements The Delta Works is a series of 13 projects designed to protect the Netherlands against flooding from the North Sea. The American Society of Civil Engineers named it one of the "Seven Wonders of the Modern World". As of April 2021, twenty-two people had descended to Challenger Deep, the lowest point in the Earth's ocean, located in the Mariana Trench. Partial recovery of the Soviet submarine K-129 by a joint team of U.S. Navy and CIA engineers aboard Glomar Explorer. Notable Marine Engineers In Industry Pieter van Oord, CEO of Royal van Oord In Academia Michael E. McCormick, Professor Emeritus of the Department of Naval Architecture and Ocean Engineering at the U.S. Naval Academy and pioneer of wave energy research In Media and Popular Culture Marine engineers played an important role in the clean-up of oil spills such as the Exxon Valdez and BP Deepwater Horizon spills. James Cameron's documentary Deepsea Challenge follows the story of the team that built a submersible in which Cameron made the first solo descent to Challenger Deep, the lowest point in the Earth's ocean.
Technology
Disciplines
null
3553872
https://en.wikipedia.org/wiki/Viperfish
Viperfish
A viperfish is any species of marine fish in the genus Chauliodus. Viperfishes are mostly found in the mesopelagic zone and are characterized by long, needle-like teeth and hinged lower jaws. A typical viperfish grows to lengths of . Viperfishes undergo diel vertical migration and are found all around the world in tropical and temperate oceans. Viperfishes are capable of bioluminescence and possess photophores along the ventral side of their body, likely used to camouflage them by blending in with the less than 1% of light that reaches below 200 meters depth. Although they may appear to be covered in scales, viperfishes do not possess scales. Rather, they are covered by a thick, transparent coating of an unknown substance. Extremely large, fang-like teeth give the fish a slightly protruding lower jaw. Habitat Viperfishes live in meso- and bathypelagic environments and have been found dominating submarine calderas such as the Kurose Hole, which is the site with the highest Chauliodus density known in the world. Viperfishes also engage in diel vertical migration, meaning they migrate up into more productive waters during the night to feed. However, it is likely that only part of the total population of viperfishes engages in diel vertical migration on any given night, which could be due to their slow metabolism, i.e. they likely do not have to feed every night. Temperature is another factor restricting the viperfish's vertical distribution in the ocean: the upper thermal limit of viperfish is 12 to 15 °C. In tropical waters, viperfish tend to stay in the deep layers and not migrate much, while in temperate waters viperfish migrate more actively and even interact with epipelagic predators. Body plan Chauliodus species are recognized by their large, fang-like teeth. They are so long that they would pierce the brain of the fish if misaligned. One species of viperfish, C. sloani, has a sampled standard length of 64.0 to 260.0 mm, with a mean SL of 120.3 mm. The same species has a mean weight of 5.66 grams. Representatives from Chauliodus pammelas and Chauliodus sloani display a size-based depth differential. Individuals of a lesser mass are found at shallower depths and individuals of larger mass are found at deeper depths, below 500 meters. However, at nighttime larger viperfish can be found at shallower depths. The eyes of Chauliodus sloani maintain a constant size and proportion throughout growth of the fish. In the retina, several rows of rod cell "banks" grow upon each other, increasing in number with the size of the fish. This contrasts with the typical vertebrate retina, which has only one layer of receptors. The first dorsal ray of Chauliodus is elongated, hinged, and connected via musculature, allowing it to swing forward. The tip of this ray has light organs. These fish lack scales and are instead covered with hexagonal pigment patterns coated in an opalescent, slimy substance. Bioluminescence Chauliodus species utilize their capability of bioluminescence for two distinct purposes: attracting prey and avoiding predators. They show distinct anatomical adaptations for the two functions. Chauliodus possesses a bioluminescent lure located at the tip of its first dorsal ray, which it uses to attract prey by swinging it forward in front of its mouth. This allows the fish to lure prey directly in front of its mouth for feeding. Chauliodus has photophores along the ventral side of its body that emit light through adrenergic nervous control.
The distribution of this light closely matches the distribution of light in mesopelagic and bathypelagic ocean zones, making it difficult for predators to see the fish. This allows the fish to swim undetected by predators, aiding survival. This type of camouflage is called counter-illumination. The presence of photo-microbes in the visceral organs of Chauliodus sloani indicates that bioluminescent microbes are likely responsible for Chauliodus's ability to luminesce. Feeding Viperfishes, depending on the species, prey on other pelagic fishes and crustaceans. Stomach contents of captured individuals have contained lanternfishes, bristlemouths, copepods and krill. Based on the diel vertical migration of its prey, viperfish are assumed to be epipelagic migrants that search surface waters for food. The prey of viperfish, specifically of the species C. sloani, is highly specific and abundant, but feeding events occur infrequently. Viperfish are able to maximize energy input by consuming few but large prey. To support this specialized feeding, the viperfish has multiple adaptations, such as a large-toothed mouth, skull modifications that allow the mouth to open widely, and elastic stomach and body skin to accommodate large prey. Migratory patterns Vertical movements of viperfish are influenced by temperature: the upper limit of their distribution is restricted by temperature (12–15 °C), which affects both vertical habitat and trophodynamics. In most tropical waters, it is likely that the viperfish exists full time below 400 meters. In temperate regions, viperfish trophically interact with epipelagic predators in surface waters. Many species in the family Stomiidae participate in diel vertical migration. In migrating toward the surface (to around 400 m depth) at night, they demonstrate an ability to withstand large daily temperature changes of up to 7 °C. They have been recorded in waters ranging from 4 to 14.5 °C, highlighting the wide range of temperatures viperfish are capable of surviving in. Viperfish have previously been recorded in the Italian waters off the western Mediterranean Basin, the Adriatic Sea, the Greek waters of the Aegean Sea, and in the Turkish waters of the Levant Sea. Viperfish have only rarely been recorded off the Algerian coast, by Dieuzeide. They have been reported to occur off the northern Tunisian coast. Reproduction Despite the abundance of viperfish in the meso- and bathypelagic, their reproductive ecology is largely unknown. This is due to research surveys rarely being able to catch mature adults, as well as the general lack of research on fish reproductive ecology in the deep sea. It is likely, however, that viperfish share a similar reproductive ecology to other dragonfishes, which have been studied more extensively (within the family Stomiidae). Viperfish are gonochoristic, meaning that they do not exhibit both testicular and ovarian tissue simultaneously in their gonads. They reproduce through spawning, with a study on dragonfishes indicating that males are able to spawn sperm continuously whereas females display asynchronous oocyte development and batch spawning. That same study showed a skewed 1:2 sex ratio favoring females in their collection of over seventy Chauliodus sloani viperfishes in the Gulf of Mexico.
Two Chauliodus macouni eggs were recovered in the Columbia River in Oregon (likely displaced by strong Pacific currents), indicating a potentially long incubation period for viperfish eggs. Species There are currently nine extant recognized species in this genus: Chauliodus barbatus Garman, 1899 Chauliodus danae Regan & Trewavas, 1929 (Dana viperfish) Chauliodus dentatus Garman, 1899 Chauliodus macouni T. H. Bean, 1890 (Pacific viperfish) Chauliodus minimus Parin & Novikova, 1974 Chauliodus pammelas Alcock, 1892 Chauliodus schmidti Ege, 1948 Chauliodus sloani Bloch & J. G. Schneider, 1801 (Sloane's viperfish) Chauliodus vasnetzovi Novikova, 1972 At least two more species are recognized from Late Miocene-aged fossils: Chauliodus eximus (Jordan, 1925), originally Eostomias eximus, from the Late Miocene of California Chauliodus testa Nazarkin, 2014, from the Late Miocene of Western Sakhalin Island
Biology and health sciences
Stomiiformes
Animals
3555585
https://en.wikipedia.org/wiki/Vertical%20farming
Vertical farming
Vertical farming is the practice of growing crops in vertically and horizontally stacked layers. It often incorporates controlled-environment agriculture, which aims to optimize plant growth, and soilless farming techniques such as hydroponics, aquaponics, and aeroponics. Some common choices of structures to house vertical farming systems include buildings, shipping containers, underground tunnels, and abandoned mine shafts. The modern concept of vertical farming was proposed in 1999 by Dickson Despommier, professor of Public and Environmental Health at Columbia University. Despommier and his students came up with a design of a skyscraper farm that could feed 50,000 people. Although the design has not yet been built, it successfully popularized the idea of vertical farming. Current applications of vertical farming, coupled with other state-of-the-art technologies such as specialized LED lights, have resulted in over 10 times the crop yield that would be obtained through traditional farming methods. Vertical farming systems have been implemented in several different ways in communities such as Paignton, Israel, Singapore, Chicago, Munich, London, Japan, and Lincolnshire. The main advantage of utilizing vertical farming technologies is the increased crop yield that comes with a smaller unit area of land requirement. Another sought-after advantage is the ability to cultivate a larger variety of crops at once, because crops do not share the same plots of land while growing. Additionally, crops are resistant to weather disruptions because of their placement indoors, meaning fewer crops are lost to extreme or unexpected weather occurrences. Lastly, because of its limited land usage, vertical farming is less disruptive to the native plants and animals, leading to further conservation of the local flora and fauna. Vertical farming technologies face economic challenges with large start-up costs compared to traditional farms. They cannot grow all types of crops but can be cost-effective for high-value products such as salad vegetables. Vertical farms also face large energy demands due to the use of supplementary light like LEDs. The buildings also need excellent control of temperature, humidity and water supplies. Moreover, if non-renewable energy is used to meet these energy demands, vertical farms could produce more pollution than traditional farms or greenhouses. Types The term "vertical farming" was coined by Gilbert Ellis Bailey in 1915 in his book Vertical Farming. His use of the term differs from the current meaning—he wrote about farming with a special interest in soil origin, its nutrient content and the view of plant life as "vertical" life forms, specifically relating to their underground root structures. Modern usage of the term "vertical farming" usually refers to growing plants in layers, whether in a multistory skyscraper, used warehouse, or shipping container. Mixed-use skyscrapers Mixed-use skyscrapers were proposed and built by architect Ken Yeang. Yeang proposes that instead of hermetically sealed mass-produced agriculture, plant life should be cultivated within open air, mixed-use skyscrapers for climate control and consumption. This version of vertical farming is based upon personal or community use rather than the wholesale production and distribution that aspires to feed an entire city. Despommier's skyscrapers Ecologist Dickson Despommier argues that vertical farming is legitimate for environmental reasons.
He claims that the cultivation of plant life within skyscrapers will require less embodied energy and produce less pollution than some methods of producing plant life on natural landscapes. By shifting to vertical farms, Despommier believes that farmland will return to its natural state (i.e. forests), which would help reverse the impacts of climate change. He moreover claims that natural landscapes are too toxic for natural agricultural production. Vertical farming would remove some of the parasitic risks associated with farming. Despommier's concept of the vertical farm emerged in 1999 at Columbia University. It promotes the mass cultivation of plant life for commercial purposes in skyscrapers. Stackable shipping containers Several companies have developed stacking recycled shipping containers in urban settings. The shipping containers serve as standardized, modular environmental chambers for growing. By stacking the shipping containers, higher density in terms of produce yield/square foot is possible. However, stacked containers pose the challenge of how to access the upper levels effectively and affordably. Brighterside Consulting created a complete off-grid container system. Freight Farms produces the "Greenery", a complete farm-to-table system outfitted with vertical hydroponics, LED lighting and intuitive climate controls built within a 12m × 2.4m shipping container. Podponics built a vertical farm in Atlanta consisting of over 100 stacked "growpods", but reportedly went bankrupt in May 2016. A similar farm is under construction in Oman. TerraFarms offers a system of 40-foot shipping containers that includes computer vision integrated with an artificial neural network to monitor the plants; the units are remotely monitored from California. It is claimed that the TerraFarm system "has achieved cost parity with traditional, outdoor farming" with each unit producing the equivalent of "three to five acres of farmland", using 97% less water through water recapture and harvesting the evaporated water through the air conditioning. The TerraFarm system was in commercial operation. In abandoned mine shafts Vertical farming in abandoned mine shafts is termed "deep farming", and is proposed to take advantage of consistent underground temperatures and locations near or in urban areas. It would also be able to use nearby groundwater, thereby reducing the cost of providing water to the farm. Technology Lighting can be natural or via LEDs. As of 2018 commercial LEDs were about 28% efficient, which kept the cost of produce high and prevented vertical farms from competing in regions where cheap vegetables are abundant. Energy costs can be reduced because full-spectrum white light is not required. Instead, red and blue or purple light can be generated with less electricity. History One of the earliest drawings of a tall building that cultivates food was published in Life Magazine in 1909. The reproduced drawings feature vertically stacked homesteads set amidst a farming landscape. This proposal can be seen in Rem Koolhaas's Delirious New York. Koolhaas wrote that this theorem is 'The Skyscraper as Utopian device for the production of unlimited numbers of virgin sites on a metropolitan location'. Hydroponicum Early architectural proposals that contribute to VF include Le Corbusier's Immeubles-Villas (1922) and SITE's Highrise of Homes (1972). SITE's Highrise of Homes is a near revival of the 1909 Life Magazine Theorem.
Built examples of tower hydroponicums are documented in The Glass House by John Hix. Images of the vertical farms at the School of Gardeners in Langenlois, Austria, and the glass tower at the Vienna International Horticulture Exhibition (1964) show that vertical farms existed. The technological precedents that make vertical farming possible can be traced back to horticultural history through the development of greenhouse and hydroponic technology. Early hydroponicums integrated hydroponic technology into building systems. These horticultural building systems evolved from greenhouse technology. The British Interplanetary Society developed a hydroponicum for lunar conditions, while other building prototypes were developed during the early days of space exploration. The first Tower Hydroponic Units were developed in Armenia. The Armenian tower hydroponicums are the first built examples of a vertical farm, and are documented in Sholto Douglas' Hydroponics: The Bengal System, first published in 1951 with data from the then-East Pakistan, today's Bangladesh, and the Indian state of West Bengal. Later precursors that have been published, or built, are Ken Yeang's Bioclimatic Skyscraper (Menara Mesiniaga, built 1992); MVRDV's PigCity, 2000; MVRDV's Meta City/ Datatown (1998–2000); Pich-Aguilera's Garden Towers (2001). Ken Yeang is perhaps the most widely known architect who has promoted the idea of the 'mixed-use' Bioclimatic Skyscraper which combines living units and food production. Vertical farm Dickson Despommier is a professor of environmental health sciences and microbiology. He reopened the topic of VF in 1999 with graduate students in a medical ecology class. He speculated that a 30-floor farm on one city block could provide food for 50,000 people including vegetables, fruit, eggs and meat, explaining that hydroponic crops could be grown on upper floors, while the lower floors would be suited for chickens and fish that eat plant waste. Although many of Despommier's suggestions have been challenged from an environmental science and engineering point of view, Despommier successfully popularized his assertion that food production can be transformed. Critics claimed that the additional energy needed for artificial lighting, heating and other operations would outweigh the benefit of the building's close proximity to the areas of consumption. Despommier originally challenged his class to feed the entire population of Manhattan (about 2,000,000 people) using only of rooftop gardens. The class calculated that rooftop gardening methods could feed only two percent of the population. Unsatisfied with the results, Despommier made an off-the-cuff suggestion of growing plants indoors, vertically. By 2001 the first outline of a vertical farm was introduced. In an interview, Despommier described how vertical farms would function. Architectural designs were independently produced by designers Chris Jacobs, Andrew Kranis and Gordon Graff. Mass media attention began with an article written in New York magazine, followed by others, as well as radio and television features. In 2011, the Plant in Chicago was building an anaerobic digester into the building. This would allow the farm to operate off the energy grid. Moreover, the anaerobic digester would recycle waste from nearby businesses that would otherwise go into landfills. In 2013, the Association for Vertical Farming was founded in Munich, Germany.
As of 2014, Vertical Fresh Farms was operating in Buffalo, New York, specializing in salad greens, herbs and sprouts. In March the world's then largest vertical farm opened in Scranton, Pennsylvania, built by Green Spirit Farms (GSF). The firm is housed in a single-story building covering 3.25 hectares, with racks stacked six high to house 17 million plants. The farm was to grow 14 lettuce crops per year, as well as spinach, kale, tomatoes, peppers, basil and strawberries. Water is scavenged from the farm's atmosphere with a dehumidifier. Kyoto-based Nuvege (pronounced "new veggie") operates a windowless farm. Its LED lighting is tuned to service two types of chlorophyll, one preferring red light and the other blue. Nuvege produces 6 million lettuce heads a year. The US Defense Advanced Research Projects Agency (DARPA) operates an 18-story project that produces genetically modified plants that make proteins useful in vaccines. Plenty has designed a new AI-controlled modular grow system for multiple crops; they are opening a farm in Chesterfield, Virginia that will grow more than 4 million pounds of strawberries each year. The farm uses 97% less land and 97% less water than traditional farming. Advantages Many of VF's potential benefits are obtained from scaling up hydroponic or aeroponic growing methods. A 2018 study estimated that the value of four ecosystem services provided by existing vegetation in urban areas was on the order of $33 billion annually. The study's quantitative framework projected annual food production of 100–180 million tonnes, energy savings ranging from 14 to 15 billion kilowatt hours, nitrogen sequestration between 100,000 and 170,000 tonnes and stormwater runoff reductions between 45 and 57 billion cubic meters annually. Food production, nitrogen fixation, energy savings, pollination, climate regulation, soil formation and biological pest control could be worth as much as $80–160 billion annually. Reduced need for farmland It is estimated that by the year 2050, the world's population will increase by 3 billion people and close to 80% will live in urban areas. Vertical farms have the potential to reduce or eliminate the need to create additional farmland. Increased crop production Unlike traditional farming in non-tropical areas, indoor farming can produce crops year-round. All-season farming multiplies the productivity of the farmed surface by a factor of 4 to 6 depending on the crop. With crops such as strawberries, the factor may be as high as 30. Furthermore, as the crops would be consumed where they are grown, long-distance transport, with its accompanying time delays, is avoided, which should reduce spoilage, infestation and energy needs. Globally some 30% of harvested crops are wasted due to spoilage and infestation, though this number is much lower in developed nations. Despommier suggests that once dwarf versions of crops (e.g. dwarf wheat, which is smaller in size but richer in nutrients), year-round crops and "stacker" plant holders are accounted for, a 30-story building with a base of a building block would yield a yearly crop analogous to that of of traditional farming. Weather disruption Crops grown in traditional outdoor farming depend on supportive weather, and suffer from undesirable temperatures, rain, monsoons, hailstorms, tornadoes, flooding, wildfires and drought. "Three recent floods (in 1993, 2007 and 2008) cost the United States billions of dollars in lost crops, with even more devastating losses in topsoil.
Changes in rain patterns and temperature could diminish India's agricultural output by 30 percent by the end of the century." VF productivity is mostly independent of weather, although earthquakes and tornadoes still pose threats. The issue of adverse weather conditions is especially relevant for arctic and sub-arctic areas like Alaska and northern Canada, where traditional farming is largely impossible. Food insecurity has been a long-standing problem in remote northern communities, where fresh produce has to be shipped large distances, resulting in high costs and poor nutrition. Container-based farms can provide fresh produce year-round at a lower cost than shipping in supplies from more southerly locations, with a number of farms operating in locations such as Churchill, Manitoba and Unalaska, Alaska. As with disruption to crop growing, local container-based farms are also less susceptible to disruption than the long supply chains necessary to deliver traditionally grown produce to remote communities. Food prices in Churchill spiked substantially after floods in May and June 2017 forced the closure of the rail line that forms the only permanent overland connection between Churchill and the rest of Canada. Conservation Up to 20 units of outdoor farmland per unit of VF could return to their natural state, due to VF's increased productivity (a rough numerical illustration of how such a multiple can arise follows below). Vertical farming would thus reduce the amount of farmland needed, saving many natural resources. Deforestation and desertification caused by agricultural encroachment on natural biomes could be avoided. Producing food indoors reduces or eliminates conventional plowing, planting, and harvesting by farm machinery, protecting soil and reducing emissions. Resource scarcity The scarcity of fertilizer components like phosphorus poses a threat to industrial agriculture. The closed-cycle design of vertical farm systems minimizes the loss of nutrients, while traditional field agriculture loses nutrients to runoff and leaching. Mass extinction Withdrawing human activity from large areas of the Earth's land surface may be necessary to address anthropogenic mass extinctions. Traditional agriculture disrupts wild populations and may be unethical given a viable alternative. One study showed that wood mouse populations dropped from 25 per hectare to 5 per hectare after harvest, estimating 10 animals killed per hectare each year with conventional farming. In comparison, vertical farming would cause only nominal harm to wildlife. Human health Traditional farming is a hazardous occupation that often affects the health of farmers. Such risks include exposure to infectious agents such as malaria and schistosomes, as well as soil-borne microbes; exposure to toxic pesticides and fungicides; confrontations with wildlife such as venomous snakes; and injuries that can occur when using large industrial farming equipment. VF reduces some of these risks. The modern industrial food system makes unhealthy food cheap while fresh produce is more expensive, encouraging poor eating habits. These habits lead to health problems such as obesity, heart disease and diabetes. Poverty and culture Food insecurity is one of the primary factors leading to absolute poverty. Constructing vertical farms would allow continued cultivation of culturally significant food items without sacrificing sustainability or basic needs, which can be significant to the recovery of a society from poverty.
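The "up to 20 units of outdoor farmland per unit of VF" figure mentioned under Conservation combines the all-season productivity factor quoted earlier (4 to 6 harvests per year, crop-dependent) with vertical stacking. The following is a minimal arithmetic sketch of one way such a multiple can arise; the harvest counts and the number of stacked layers are illustrative assumptions, not figures from the text:

```python
# Illustrative only: how a "units of field replaced per unit of footprint" multiple can arise.
crops_per_year_indoor = 5   # all-season indoor harvests (text gives 4-6, crop-dependent)
crops_per_year_field = 1    # single outdoor harvest per year (assumed)
stacked_layers = 4          # growing levels per unit of building footprint (assumed)

season_factor = crops_per_year_indoor / crops_per_year_field
land_replaced_per_footprint = season_factor * stacked_layers

print(f"Each unit of footprint could stand in for ~{land_replaced_per_footprint:.0f} units of field")
# With these assumptions: 5 * 4 = 20, the same order of magnitude as the "up to 20 units" claim.
```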
Urban growth Vertical farming, used in conjunction with other technologies and socioeconomic practices, could allow cities to expand while remaining substantially self-sufficient in food. This would allow large urban centers to grow without food constraints. Energy sustainability Vertical farms could exploit methane digesters to generate energy. Methane digesters could be built on site to transform the organic waste generated at the farm into biogas that is generally composed of 65% methane along with other gases. This biogas could then be burned to generate electricity for the greenhouse. Problems Economics Vertical farms require substantial start-up funding and some start-up companies have not been able to achieve a profit before going bankrupt. Opponents question the potential profitability of vertical farming. Its economic and environmental benefits rest partly on the concept of minimizing food miles, the distance that food travels from farm to consumer. However, a recent analysis suggests that transportation is only a minor contributor to the economic and environmental costs of supplying food to urban populations. The analysis concluded that "food miles are, at best, a marketing fad". Thus the facility would have to lower costs or charge higher prices to justify remaining in a city. Similarly, if power needs are met by fossil fuels, the environmental effect may be a net loss; even building low-carbon capacity to power the farms may not make as much sense as simply leaving traditional farms in place, while burning less coal. The initial building costs would exceed $100 million, for a 60 hectare vertical farm. Office occupancy costs can be high in major cities, with office space in cities such as Tokyo, Moscow, Mumbai, Dubai, Milan, Zurich, and São Paulo ranging from $1850 to $880 per square meter. The developers of the TerraFarm system produced from second hand, 40 foot shipping containers claimed that their system "has achieved cost parity with traditional, outdoor farming". Energy use During the growing season, the sun shines on a vertical surface at an extreme angle such that much less light is available to crops than when they are planted on flat land. Therefore, supplemental light would be required. Bruce Bugbee claimed that the power demands of vertical farming would be uncompetitive with traditional farms using only natural light. Environmental writer George Monbiot calculated that the cost of providing enough supplementary light to grow the grain for a single loaf would be about $15. An article in the Economist argued that "even though crops growing in a glass skyscraper will get some natural sunlight during the day, it won't be enough" and "the cost of powering artificial lights will make indoor farming prohibitively expensive". As "The Vertical Farm" proposes a controlled environment, heating and cooling costs will resemble those of any other tower. Plumbing and elevator systems are necessary to distribute nutrients and water. In the northern continental United States, fossil fuel heating cost can be over $200,000 per hectare. Jones Food Company in Gloucestershire, England opened a farm in 2024 with of growing space, powered only by renewable electricity. Pollution Depending on the method of electricity generation used, greenhouse produce can create more greenhouse gases than field produce, largely due to higher energy use per kilogram. Vertical farms require much greater energy per kilogram versus regular greenhouses, mainly through increased lighting. 
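The energy criticisms above turn on how much electricity supplemental lighting consumes per kilogram of produce. A rough back-of-the-envelope sketch follows; every parameter (light integral, crop cycle, yield, LED efficacy) is an illustrative assumption rather than a figure from the text:

```python
# Rough estimate of lighting electricity per kg of lettuce grown entirely under LEDs.
# All parameter values are illustrative assumptions.
daily_light_integral = 15   # mol photons / m^2 / day delivered to the canopy (assumed)
days_per_crop = 35          # days from transplant to harvest (assumed)
yield_per_crop = 3.0        # kg fresh lettuce per m^2 per crop cycle (assumed)
led_efficacy = 2.7          # micromol photons per joule for horticultural LEDs (assumed)

photons_per_crop = daily_light_integral * days_per_crop        # mol / m^2
joules_per_m2 = photons_per_crop * 1e6 / led_efficacy           # J / m^2
kwh_per_kg = joules_per_m2 / 3.6e6 / yield_per_crop             # kWh per kg of lettuce

print(f"~{kwh_per_kg:.0f} kWh of lighting electricity per kg of lettuce")
# With these assumptions the result is on the order of tens of kWh per kilogram, which is
# why electricity price and generation mix dominate the economics and emissions debate.
```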
The amount of pollution produced is dependent on how the energy is generated. Greenhouses commonly supplement CO2 levels to three–four times the atmospheric rate. This increase in CO2 increases photosynthesis rates by 50%, contributing to higher yields. Some greenhouses burn fossil fuels purely for this purpose, as other CO2 sources, such as those from furnaces, contain pollutants such as sulphur dioxide and ethylene which significantly damage plants. This means a vertical farm requires a CO2 source, most likely from combustion. Also, necessary ventilation may allow CO2 to leak into the atmosphere. Greenhouse growers commonly exploit photoperiodism in plants to control whether the plants are in a vegetative or reproductive stage. As part of this control, the lights stay on past sunset and before sunrise or periodically throughout the night. Single story greenhouses have attracted criticism over light pollution. Hydroponic greenhouses regularly change the water, producing water containing fertilizers and pesticides that must be disposed of. The most common method of spreading the effluent over neighbouring farmland or wetlands would be more difficult for an urban vertical farm. Technologies and devices Vertical farming relies on the use of various physical methods to become effective. Combining these technologies and devices in an integrated whole is necessary to make Vertical Farming a reality. Various methods are proposed and under research. The most common technologies suggested are: Greenhouses The Folkewall and other vertical growing architectures Aeroponics Agricultural robot Aquaponics Composting Controlled-environment agriculture Flower pots Grow lights Hydroponics Phytoremediation Precision agriculture Skyscrapers TerraFarm Plans and Realization Developers and local governments in multiple cities have expressed interest in establishing a vertical farm: Incheon (South Korea), Abu Dhabi (United Arab Emirates), Dongtan (China), New York City, Portland, Oregon, Los Angeles, Las Vegas, Seattle, Surrey, B.C., Toronto, Paris, Bangalore, Dubai, Shanghai and Beijing. In 2009, the world's first pilot production system was installed at Paignton Zoo Environmental Park in the United Kingdom. The project showcased vertical farming and provided a physical base to research sustainable urban food production. The produce is used to feed the zoo's animals while the project enables evaluation of the systems and provides an educational resource to advocate for change in unsustainable land use practices that impact upon global biodiversity and ecosystem services, In 2010 the Green Zionist Alliance proposed a resolution at the 36th World Zionist Congress calling on Keren Kayemet L'Yisrael (Jewish National Fund in Israel) to develop vertical farms in Israel. In 2012, the world's first commercial vertical farm was opened in Singapore. Sky Greens Farms developed it, and it is three stories high. They currently have over 100 nine meter-tall towers. In 2013, the Association for Vertical Farming (AVF) was founded in Munich (Germany). By May 2015, the AVF had expanded with regional chapters across Europe, Asia, the USA, Canada, and the United Kingdom. This organization unites growers and inventors to improve food security and sustainable development. AVF focuses on advancing vertical farming technologies, designs, and businesses by hosting international info days, workshops, and summits. The world's largest vertical farm opened in Dubai in 2022. 
It produces more than one million kilograms of leafy greens per year, using 95 percent less water than traditional cultivation and saving 250 million liters of water per year.
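Those two reported figures can be cross-checked against each other. A small sketch, treating the "more than one million kilograms" as exactly one million purely for illustration:

```python
# Cross-check of the reported Dubai figures: 95% water saving and 250 million litres
# saved per year, for roughly one million kg of leafy greens (illustrative rounding).
water_saved_l = 250e6        # litres saved per year (from the text)
saving_fraction = 0.95       # fraction of conventional use avoided (from the text)
annual_output_kg = 1e6       # kg of greens per year (text says "more than" this)

conventional_use_l = water_saved_l / saving_fraction
farm_use_l = conventional_use_l - water_saved_l

print(f"Implied conventional use: ~{conventional_use_l / annual_output_kg:.0f} L per kg")
print(f"Implied vertical-farm use: ~{farm_use_l / annual_output_kg:.0f} L per kg")
# ~263 L/kg conventionally vs ~13 L/kg in the farm, consistent with the quoted claims.
```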
Technology
Agriculture_2
null
18616290
https://en.wikipedia.org/wiki/Gamma%20ray
Gamma ray
A gamma ray, also known as gamma radiation (symbol ), is a penetrating form of electromagnetic radiation arising from the radioactive decay of atomic nuclei. It consists of the shortest wavelength electromagnetic waves, typically shorter than those of X-rays. With frequencies above 30 exahertz () and wavelengths less than 10 picometers (), gamma ray photons have the highest photon energy of any form of electromagnetic radiation. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900 while studying radiation emitted by radium. In 1903, Ernest Rutherford named this radiation gamma rays based on their relatively strong penetration of matter; in 1900, he had already named two less penetrating types of decay radiation (discovered by Henri Becquerel) alpha rays and beta rays in ascending order of penetrating power. Gamma rays from radioactive decay are in the energy range from a few kiloelectronvolts (keV) to approximately 8 megaelectronvolts (MeV), corresponding to the typical energy levels in nuclei with reasonably long lifetimes. The energy spectrum of gamma rays can be used to identify the decaying radionuclides using gamma spectroscopy. Very-high-energy gamma rays in the 100–1000 teraelectronvolt (TeV) range have been observed from astronomical sources such as the Cygnus X-3 microquasar. Natural sources of gamma rays originating on Earth are mostly a result of radioactive decay and secondary radiation from atmospheric interactions with cosmic ray particles. However, there are other rare natural sources, such as terrestrial gamma-ray flashes, which produce gamma rays from electron action upon the nucleus. Notable artificial sources of gamma rays include fission, such as that which occurs in nuclear reactors, and high energy physics experiments, such as neutral pion decay and nuclear fusion. The energy ranges of gamma rays and X-rays overlap in the electromagnetic spectrum, so the terminology for these electromagnetic waves varies between scientific disciplines. In some fields of physics, they are distinguished by their origin: gamma rays are created by nuclear decay while X-rays originate outside the nucleus. In astrophysics, gamma rays are conventionally defined as having photon energies above 100 keV and are the subject of gamma-ray astronomy, while radiation below 100 keV is classified as X-rays and is the subject of X-ray astronomy. Gamma rays are ionizing radiation and are thus hazardous to life. They can cause DNA mutations, cancer and tumors, and at high doses burns and radiation sickness. Due to their high penetration power, they can damage bone marrow and internal organs. Unlike alpha and beta rays, they easily pass through the body and thus pose a formidable radiation protection challenge, requiring shielding made from dense materials such as lead or concrete. On Earth, the magnetosphere protects life from most types of lethal cosmic radiation other than gamma rays. History of discovery The first gamma ray source to be discovered was the radioactive decay process called gamma decay. In this type of decay, an excited nucleus emits a gamma ray almost immediately upon formation. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900, while studying radiation emitted from radium. 
Villard knew that his described radiation was more powerful than previously described types of rays from radium, which included beta rays, first noted as "radioactivity" by Henri Becquerel in 1896, and alpha rays, discovered as a less penetrating form of radiation by Rutherford, in 1899. However, Villard did not consider naming them as a different fundamental type. Later, in 1903, Villard's radiation was recognized as being of a type fundamentally different from previously named rays by Ernest Rutherford, who named Villard's rays "gamma rays" by analogy with the beta and alpha rays that Rutherford had differentiated in 1899. The "rays" emitted by radioactive elements were named in order of their power to penetrate various materials, using the first three letters of the Greek alphabet: alpha rays as the least penetrating, followed by beta rays, followed by gamma rays as the most penetrating. Rutherford also noted that gamma rays were not deflected (or at least, not deflected) by a magnetic field, another property making them unlike alpha and beta rays. Gamma rays were first thought to be particles with mass, like alpha and beta rays. Rutherford initially believed that they might be extremely fast beta particles, but their failure to be deflected by a magnetic field indicated that they had no charge. In 1914, gamma rays were observed to be reflected from crystal surfaces, proving that they were electromagnetic radiation. Rutherford and his co-worker Edward Andrade measured the wavelengths of gamma rays from radium, and found they were similar to X-rays, but with shorter wavelengths and thus, higher frequency. This was eventually recognized as giving them more energy per photon, as soon as the latter term became generally accepted. A gamma decay was then understood to usually emit a gamma photon. Sources Natural sources of gamma rays on Earth include gamma decay from naturally occurring radioisotopes such as potassium-40, and also as a secondary radiation from various atmospheric interactions with cosmic ray particles. Natural terrestrial sources that produce gamma rays include lightning strikes and terrestrial gamma-ray flashes, which produce high energy emissions from natural high-energy voltages. Gamma rays are produced by a number of astronomical processes in which very high-energy electrons are produced. Such electrons produce secondary gamma rays by the mechanisms of bremsstrahlung, inverse Compton scattering and synchrotron radiation. A large fraction of such astronomical gamma rays are screened by Earth's atmosphere. Notable artificial sources of gamma rays include fission, such as occurs in nuclear reactors, as well as high energy physics experiments, such as neutral pion decay and nuclear fusion. A sample of gamma ray-emitting material that is used for irradiating or imaging is known as a gamma source. It is also called a radioactive source, isotope source, or radiation source, though these more general terms also apply to alpha and beta-emitting devices. Gamma sources are usually sealed to prevent radioactive contamination, and transported in heavy shielding. Radioactive decay (gamma decay) Gamma rays are produced during gamma decay, which normally occurs after other forms of decay occur, such as alpha or beta decay. A radioactive nucleus can decay by the emission of an or particle. The daughter nucleus that results is usually left in an excited state. It can then decay to a lower energy state by emitting a gamma ray photon, in a process called gamma decay. 
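The decay energies mentioned above (a few keV up to about 8 MeV) map directly onto the wavelength and frequency figures given for gamma rays in the opening paragraph. A short sketch of the conversion using the Planck relation and CODATA constant values; the example energies are taken from elsewhere in this article:

```python
# Convert photon energy to wavelength and frequency via E = h*nu = h*c/lambda.
h = 6.62607015e-34     # Planck constant, J*s
c = 2.99792458e8       # speed of light, m/s
eV = 1.602176634e-19   # joules per electronvolt

def wavelength_pm(energy_ev):
    return h * c / (energy_ev * eV) * 1e12   # picometres

def frequency_ehz(energy_ev):
    return energy_ev * eV / h / 1e18         # exahertz

for label, e in [("100 keV (astronomy's X-ray/gamma boundary)", 1e5),
                 ("511 keV (electron-positron annihilation)", 511e3),
                 ("1.17 MeV (one of the cobalt-60 lines)", 1.17e6),
                 ("8 MeV (upper end of decay gammas)", 8e6)]:
    print(f"{label}: {wavelength_pm(e):6.2f} pm, {frequency_ehz(e):7.1f} EHz")
# A 124 keV photon has a wavelength of ~10 pm and a frequency of ~30 EHz,
# matching the limits quoted in the opening paragraph.
```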
The emission of a gamma ray from an excited nucleus typically requires only 10⁻¹² seconds. Gamma decay may also follow nuclear reactions such as neutron capture, nuclear fission, or nuclear fusion. Gamma decay is also a mode of relaxation of many excited states of atomic nuclei following other types of radioactive decay, such as beta decay, so long as these states possess the necessary component of nuclear spin. When high-energy gamma rays, electrons, or protons bombard materials, the excited atoms emit characteristic "secondary" gamma rays, which are products of the creation of excited nuclear states in the bombarded atoms. Such transitions, a form of nuclear gamma fluorescence, form a topic in nuclear physics called gamma spectroscopy. Formation of fluorescent gamma rays is a rapid subtype of radioactive gamma decay. In certain cases, the excited nuclear state that follows the emission of a beta particle or other type of excitation may be more stable than average, and is termed a metastable excited state if its decay takes (at least) 100 to 1000 times longer than the average 10⁻¹² seconds. Such relatively long-lived excited nuclei are termed nuclear isomers, and their decays are termed isomeric transitions. Such nuclei have half-lives that are more easily measurable, and rare nuclear isomers are able to stay in their excited state for minutes, hours, days, or occasionally far longer, before emitting a gamma ray. The process of isomeric transition is therefore similar to any gamma emission, but differs in that it involves the intermediate metastable excited state(s) of the nuclei. Metastable states are often characterized by high nuclear spin, requiring a change in spin of several units or more with gamma decay, instead of a single-unit transition that occurs in only 10⁻¹² seconds. The rate of gamma decay is also slowed when the energy of excitation of the nucleus is small. An emitted gamma ray from any type of excited state may transfer its energy directly to any of the atom's electrons, but most probably to one of the K shell electrons, causing it to be ejected from that atom, in a process generally termed the photoelectric effect (external gamma rays and ultraviolet rays may also cause this effect). The photoelectric effect should not be confused with the internal conversion process, in which a gamma ray photon is not produced as an intermediate particle (rather, a "virtual gamma ray" may be thought to mediate the process). Decay schemes One example of gamma ray production due to radionuclide decay is the decay scheme for cobalt-60. First, cobalt-60 decays to an excited state of nickel-60 by beta decay, with the emission of an electron and an antineutrino. Then the excited nickel-60 decays to the ground state (see nuclear shell model) by emitting two gamma rays in succession, of 1.17 MeV followed by 1.33 MeV. This path is followed 99.88% of the time: ⁶⁰Co → ⁶⁰Ni* + e⁻ + ν̄ₑ, followed by ⁶⁰Ni* → ⁶⁰Ni + γ (1.17 MeV) + γ (1.33 MeV). Another example is the alpha decay of to form , which is followed by gamma emission. In some cases, the gamma emission spectrum of the daughter nucleus is quite simple (e.g. /), while in other cases, such as with (/ and /), the gamma emission spectrum is complex, revealing that a series of nuclear energy levels exist. Particle physics Gamma rays are produced in many processes of particle physics.
Typically, gamma rays are the products of neutral systems which decay through electromagnetic interactions (rather than a weak or strong interaction). For example, in an electron–positron annihilation, the usual products are two gamma ray photons. If the annihilating electron and positron are at rest, each of the resulting gamma rays has an energy of ~ 511 keV and frequency of ~ . Similarly, a neutral pion most often decays into two photons. Many other hadrons and massive bosons also decay electromagnetically. High energy physics experiments, such as the Large Hadron Collider, accordingly employ substantial radiation shielding. Because subatomic particles mostly have far shorter wavelengths than atomic nuclei, particle physics gamma rays are generally several orders of magnitude more energetic than nuclear decay gamma rays. Since gamma rays are at the top of the electromagnetic spectrum in terms of energy, all extremely high-energy photons are gamma rays; for example, a photon having the Planck energy would be a gamma ray. Other sources A few gamma rays in astronomy are known to arise from gamma decay (see discussion of SN1987A), but most do not. Photons from astrophysical sources that carry energy in the gamma radiation range are often explicitly called gamma-radiation. In addition to nuclear emissions, they are often produced by sub-atomic particle and particle-photon interactions. Those include electron-positron annihilation, neutral pion decay, bremsstrahlung, inverse Compton scattering, and synchrotron radiation. Laboratory sources In October 2017, scientists from various European universities proposed a means for sources of GeV photons using lasers as exciters through a controlled interplay between the cascade and anomalous radiative trapping. Terrestrial thunderstorms Thunderstorms can produce a brief pulse of gamma radiation called a terrestrial gamma-ray flash. These gamma rays are thought to be produced by high intensity static electric fields accelerating electrons, which then produce gamma rays by bremsstrahlung as they collide with and are slowed by atoms in the atmosphere. Gamma rays up to 100 MeV can be emitted by terrestrial thunderstorms, and were discovered by space-borne observatories. This raises the possibility of health risks to passengers and crew on aircraft flying in or near thunderclouds. Solar flares The most effusive solar flares emit across the entire EM spectrum, including γ-rays. The first confident observation occurred in 1972. Cosmic rays Extraterrestrial, high energy gamma rays include the gamma ray background produced when cosmic rays (either high speed electrons or protons) collide with ordinary matter, producing pair-production gamma rays at 511 keV. Alternatively, bremsstrahlung are produced at energies of tens of MeV or more when cosmic ray electrons interact with nuclei of sufficiently high atomic number (see gamma ray image of the Moon near the end of this article, for illustration). Pulsars and magnetars The gamma ray sky (see illustration at right) is dominated by the more common and longer-term production of gamma rays that emanate from pulsars within the Milky Way. Sources from the rest of the sky are mostly quasars. Pulsars are thought to be neutron stars with magnetic fields that produce focused beams of radiation, and are far less energetic, more common, and much nearer sources (typically seen only in our own galaxy) than are quasars or the rarer gamma-ray burst sources of gamma rays. 
Pulsars have relatively long-lived magnetic fields that produce focused beams of relativistic speed charged particles, which emit gamma rays (bremsstrahlung) when those strike gas or dust in their nearby medium, and are decelerated. This is a similar mechanism to the production of high-energy photons in megavoltage radiation therapy machines (see bremsstrahlung). Inverse Compton scattering, in which charged particles (usually electrons) impart energy to low-energy photons boosting them to higher energy photons. Such impacts of photons on relativistic charged particle beams is another possible mechanism of gamma ray production. Neutron stars with a very high magnetic field (magnetars), thought to produce astronomical soft gamma repeaters, are another relatively long-lived star-powered source of gamma radiation. Quasars and active galaxies More powerful gamma rays from very distant quasars and closer active galaxies are thought to have a gamma ray production source similar to a particle accelerator. High energy electrons produced by the quasar, and subjected to inverse Compton scattering, synchrotron radiation, or bremsstrahlung, are the likely source of the gamma rays from those objects. It is thought that a supermassive black hole at the center of such galaxies provides the power source that intermittently destroys stars and focuses the resulting charged particles into beams that emerge from their rotational poles. When those beams interact with gas, dust, and lower energy photons they produce X-rays and gamma rays. These sources are known to fluctuate with durations of a few weeks, suggesting their relatively small size (less than a few light-weeks across). Such sources of gamma and X-rays are the most commonly visible high intensity sources outside the Milky Way galaxy. They shine not in bursts (see illustration), but relatively continuously when viewed with gamma ray telescopes. The power of a typical quasar is about 1040 watts, a small fraction of which is gamma radiation. Much of the rest is emitted as electromagnetic waves of all frequencies, including radio waves. Gamma-ray bursts The most intense sources of gamma rays are also the most intense sources of any type of electromagnetic radiation presently known. They are the "long duration burst" sources of gamma rays in astronomy ("long" in this context, meaning a few tens of seconds), and they are rare compared with the sources discussed above. By contrast, "short" gamma-ray bursts of two seconds or less, which are not associated with supernovae, are thought to produce gamma rays during the collision of pairs of neutron stars, or a neutron star and a black hole. The so-called long-duration gamma-ray bursts produce a total energy output of about 1044 joules (as much energy as the Sun will produce in its entire life-time) but in a period of only 20 to 40 seconds. Gamma rays are approximately 50% of the total energy output. The leading hypotheses for the mechanism of production of these highest-known intensity beams of radiation, are inverse Compton scattering and synchrotron radiation from high-energy charged particles. These processes occur as relativistic charged particles leave the region of the event horizon of a newly formed black hole created during supernova explosion. The beam of particles moving at relativistic speeds are focused for a few tens of seconds by the magnetic field of the exploding hypernova. The fusion explosion of the hypernova drives the energetics of the process. 
If the narrowly directed beam happens to be pointed toward the Earth, it shines at gamma ray frequencies with such intensity that it can be detected even at distances of up to 10 billion light years, which is close to the edge of the visible universe. Properties Penetration of matter Due to their penetrating nature, gamma rays require large amounts of shielding mass to reduce them to levels which are not harmful to living cells, in contrast to alpha particles, which can be stopped by paper or skin, and beta particles, which can be shielded by thin aluminium. Gamma rays are best absorbed by materials with high atomic numbers (Z) and high density, which contribute to the total stopping power. Because of this, a lead (high Z) shield is 20–30% better as a gamma shield than an equal mass of another low-Z shielding material, such as aluminium, concrete, water, or soil; lead's major advantage is not lower weight, but rather its compactness due to its higher density. Protective clothing, goggles and respirators can protect from internal contact with or ingestion of alpha- or beta-emitting particles, but provide no protection from gamma radiation from external sources. The higher the energy of the gamma rays, the thicker the shielding of a given material must be. Materials for shielding gamma rays are typically characterized by the thickness required to reduce the intensity of the gamma rays by one half (the half-value layer or HVL). For example, gamma rays that require 1 cm (0.4 inch) of lead to reduce their intensity by 50% will also have their intensity reduced in half by of granite rock, 6 cm (2.5 inches) of concrete, or 9 cm (3.5 inches) of packed soil. However, the mass of this much concrete or soil is only 20–30% greater than that of lead with the same absorption capability. Depleted uranium is sometimes used for shielding in portable gamma ray sources, due to its smaller half-value layer compared to lead (around 0.6 times the thickness for common gamma ray sources, i.e. iridium-192 and cobalt-60) and its lower cost compared to tungsten. In a nuclear power plant, shielding can be provided by steel and concrete in the pressure and particle containment vessel, while water provides radiation shielding of fuel rods during storage or transport into the reactor core. The loss of water or removal of a "hot" fuel assembly into the air would result in much higher radiation levels than when it is kept under water. Matter interaction When a gamma ray passes through matter, the probability for absorption is proportional to the thickness of the layer, the density of the material, and the absorption cross section of the material. The total absorption shows an exponential decrease of intensity with distance from the incident surface: I(x) = I₀·e^(−μx), where x is the thickness of the material from the incident surface, I₀ is the incident intensity, μ = nσ is the absorption coefficient, measured in cm⁻¹, n the number of atoms per cm³ of the material (atomic density) and σ the absorption cross section in cm². As it passes through matter, gamma radiation ionizes via three processes: The photoelectric effect: This describes the case in which a gamma photon interacts with and transfers its energy to an atomic electron, causing the ejection of that electron from the atom. The kinetic energy of the resulting photoelectron is equal to the energy of the incident gamma photon minus the energy that originally bound the electron to the atom (binding energy).
The photoelectric effect is the dominant energy transfer mechanism for X-ray and gamma ray photons with energies below 50 keV (thousand electronvolts), but it is much less important at higher energies. Compton scattering: This is an interaction in which an incident gamma photon loses enough energy to an atomic electron to cause its ejection, with the remainder of the original photon's energy emitted as a new, lower energy gamma photon whose emission direction is different from that of the incident gamma photon, hence the term "scattering". The probability of Compton scattering decreases with increasing photon energy. It is thought to be the principal absorption mechanism for gamma rays in the intermediate energy range 100 keV to 10 MeV. It is relatively independent of the atomic number of the absorbing material, which is why very dense materials like lead are only modestly better shields, on a per weight basis, than are less dense materials. Pair production: This becomes possible with gamma energies exceeding 1.02 MeV, and becomes important as an absorption mechanism at energies over 5 MeV (see illustration at right, for lead). By interaction with the electric field of a nucleus, the energy of the incident photon is converted into the mass of an electron-positron pair. Any gamma energy in excess of the equivalent rest mass of the two particles (totaling at least 1.02 MeV) appears as the kinetic energy of the pair and in the recoil of the emitting nucleus. At the end of the positron's range, it combines with a free electron, and the two annihilate, and the entire mass of these two is then converted into two gamma photons of at least 0.51 MeV energy each (or higher according to the kinetic energy of the annihilated particles). The secondary electrons (and/or positrons) produced in any of these three processes frequently have enough energy to produce much ionization themselves. Additionally, gamma rays, particularly high energy ones, can interact with atomic nuclei resulting in ejection of particles in photodisintegration, or in some cases, even nuclear fission (photofission). Light interaction High-energy (from 80 GeV to ~10 TeV) gamma rays arriving from far-distant quasars are used to estimate the extragalactic background light in the universe: The highest-energy rays interact more readily with the background light photons and thus the density of the background light may be estimated by analyzing the incoming gamma ray spectra. Gamma spectroscopy Gamma spectroscopy is the study of the energetic transitions in atomic nuclei, which are generally associated with the absorption or emission of gamma rays. As in optical spectroscopy (see Franck–Condon effect) the absorption of gamma rays by a nucleus is especially likely (i.e., peaks in a "resonance") when the energy of the gamma ray is the same as that of an energy transition in the nucleus. In the case of gamma rays, such a resonance is seen in the technique of Mössbauer spectroscopy. In the Mössbauer effect the narrow resonance absorption for nuclear gamma absorption can be successfully attained by physically immobilizing atomic nuclei in a crystal. The immobilization of nuclei at both ends of a gamma resonance interaction is required so that no gamma energy is lost to the kinetic energy of recoiling nuclei at either the emitting or absorbing end of a gamma transition. Such loss of energy causes gamma ray resonance absorption to fail. 
However, when emitted gamma rays carry essentially all of the energy of the atomic nuclear de-excitation that produces them, this energy is also sufficient to excite the same energy state in a second immobilized nucleus of the same type. Applications Gamma rays provide information about some of the most energetic phenomena in the universe; however, they are largely absorbed by the Earth's atmosphere. Instruments aboard high-altitude balloons and satellites missions, such as the Fermi Gamma-ray Space Telescope, provide our only view of the universe in gamma rays. Gamma-induced molecular changes can also be used to alter the properties of semi-precious stones, and is often used to change white topaz into blue topaz. Non-contact industrial sensors commonly use sources of gamma radiation in refining, mining, chemicals, food, soaps and detergents, and pulp and paper industries, for the measurement of levels, density, and thicknesses. Gamma-ray sensors are also used for measuring the fluid levels in water and oil industries. Typically, these use Co-60 or Cs-137 isotopes as the radiation source. In the US, gamma ray detectors are beginning to be used as part of the Container Security Initiative (CSI). These machines are advertised to be able to scan 30 containers per hour. Gamma radiation is often used to kill living organisms, in a process called irradiation. Applications of this include the sterilization of medical equipment (as an alternative to autoclaves or chemical means), the removal of decay-causing bacteria from many foods and the prevention of the sprouting of fruit and vegetables to maintain freshness and flavor. Despite their cancer-causing properties, gamma rays are also used to treat some types of cancer, since the rays also kill cancer cells. In the procedure called gamma-knife surgery, multiple concentrated beams of gamma rays are directed to the growth in order to kill the cancerous cells. The beams are aimed from different angles to concentrate the radiation on the growth while minimizing damage to surrounding tissues. Gamma rays are also used for diagnostic purposes in nuclear medicine in imaging techniques. A number of different gamma-emitting radioisotopes are used. For example, in a PET scan a radiolabeled sugar called fluorodeoxyglucose emits positrons that are annihilated by electrons, producing pairs of gamma rays that highlight cancer as the cancer often has a higher metabolic rate than the surrounding tissues. The most common gamma emitter used in medical applications is the nuclear isomer technetium-99m which emits gamma rays in the same energy range as diagnostic X-rays. When this radionuclide tracer is administered to a patient, a gamma camera can be used to form an image of the radioisotope's distribution by detecting the gamma radiation emitted (see also SPECT). Depending on which molecule has been labeled with the tracer, such techniques can be employed to diagnose a wide range of conditions (for example, the spread of cancer to the bones via bone scan). Health effects Gamma rays cause damage at a cellular level and are penetrating, causing diffuse damage throughout the body. However, they are less ionising than alpha or beta particles, which are less penetrating. Low levels of gamma rays cause a stochastic health risk, which for radiation dose assessment is defined as the probability of cancer induction and genetic damage. 
The International Commission on Radiological Protection says "In the low dose range, below about 100 mSv, it is scientifically plausible to assume that the incidence of cancer or heritable effects will rise in direct proportion to an increase in the equivalent dose in the relevant organs and tissues" High doses produce deterministic effects, which is the severity of acute tissue damage that is certain to happen. These effects are compared to the physical quantity absorbed dose measured by the unit gray (Gy). Effects and body response When gamma radiation breaks DNA molecules, a cell may be able to repair the damaged genetic material, within limits. However, a study of Rothkamm and Lobrich has shown that this repair process works well after high-dose exposure but is much slower in the case of a low-dose exposure. Studies have shown low-dose gamma radiation may be enough to cause cancer. In a study of mice, they were given human-relevant low-dose gamma radiation, with genotoxic effects 45 days after continuous low-dose gamma radiation, with significant increases of chromosomal damage, DNA lesions and phenotypic mutations in blood cells of irradiated animals, covering the three types of genotoxic activity. Another study studied the effects of acute ionizing gamma radiation in rats, up to 10 Gy, and who ended up showing acute oxidative protein damage, DNA damage, cardiac troponin T carbonylation, and long-term cardiomyopathy. Risk assessment The natural outdoor exposure in the United Kingdom ranges from 0.1 to 0.5 μSv/h with significant increase around known nuclear and contaminated sites. Natural exposure to gamma rays is about 1 to 2 mSv per year, and the average total amount of radiation received in one year per inhabitant in the USA is 3.6 mSv. There is a small increase in the dose, due to naturally occurring gamma radiation, around small particles of high atomic number materials in the human body caused by the photoelectric effect. By comparison, the radiation dose from chest radiography (about 0.06 mSv) is a fraction of the annual naturally occurring background radiation dose. A chest CT delivers 5 to 8 mSv. A whole-body PET/CT scan can deliver 14 to 32 mSv depending on the protocol. The dose from fluoroscopy of the stomach is much higher, approximately 50 mSv (14 times the annual background). An acute full-body equivalent single exposure dose of 1 Sv (1000 mSv), or 1 Gy, will cause mild symptoms of acute radiation sickness, such as nausea and vomiting; and a dose of 2.0–3.5 Sv (2.0–3.5 Gy) causes more severe symptoms (i.e. nausea, diarrhea, hair loss, hemorrhaging, and inability to fight infections), and will cause death in a sizable number of cases—about 10% to 35% without medical treatment. A dose of 3–5 Sv (3–5 Gy) is considered approximately the LD50 (or the lethal dose for 50% of exposed population) for an acute exposure to radiation even with standard medical treatment. A dose higher than 5 Sv (5 Gy) brings an increasing chance of death above 50%. Above 7.5–10 Sv (7.5–10 Gy) to the entire body, even extraordinary treatment, such as bone-marrow transplants, will not prevent the death of the individual exposed (see radiation poisoning). (Doses much larger than this may, however, be delivered to selected parts of the body in the course of radiation therapy.) For low-dose exposure, for example among nuclear workers, who receive an average yearly radiation dose of 19 mSv, the risk of dying from cancer (excluding leukemia) increases by 2 percent. 
For a dose of 100 mSv, the risk increase is 10 percent. By comparison, risk of dying from cancer was increased by 32 percent for the survivors of the atomic bombing of Hiroshima and Nagasaki. Units of measurement and exposure The following table shows radiation quantities in SI and non-SI units: The measure of the ionizing effect of gamma and X-rays in dry air is called the exposure, for which a legacy unit, the röntgen, was used from 1928. This has been replaced by kerma, now mainly used for instrument calibration purposes but not for received dose effect. The effect of gamma and other ionizing radiation on living tissue is more closely related to the amount of energy deposited in tissue rather than the ionisation of air, and replacement radiometric units and quantities for radiation protection have been defined and developed from 1953 onwards. These are: The gray (Gy), is the SI unit of absorbed dose, which is the amount of radiation energy deposited in the irradiated material. For gamma radiation this is numerically equivalent to equivalent dose measured by the sievert, which indicates the stochastic biological effect of low levels of radiation on human tissue. The radiation weighting conversion factor from absorbed dose to equivalent dose is 1 for gamma, whereas alpha particles have a factor of 20, reflecting their greater ionising effect on tissue. The rad is the deprecated CGS unit for absorbed dose and the rem is the deprecated CGS unit of equivalent dose, used mainly in the USA. Distinction from X-rays The conventional distinction between X-rays and gamma rays has changed over time. Originally, the electromagnetic radiation emitted by X-ray tubes almost invariably had a longer wavelength than the radiation (gamma rays) emitted by radioactive nuclei. Older literature distinguished between X- and gamma radiation on the basis of wavelength, with radiation shorter than some arbitrary wavelength, such as 10−11 m, defined as gamma rays. Since the energy of photons is proportional to their frequency and inversely proportional to wavelength, this past distinction between X-rays and gamma rays can also be thought of in terms of its energy, with gamma rays considered to be higher energy electromagnetic radiation than are X-rays. However, since current artificial sources are now able to duplicate any electromagnetic radiation that originates in the nucleus, as well as far higher energies, the wavelengths characteristic of radioactive gamma ray sources vs. other types now completely overlap. Thus, gamma rays are now usually distinguished by their origin: X-rays are emitted by definition by electrons outside the nucleus, while gamma rays are emitted by the nucleus. Exceptions to this convention occur in astronomy, where gamma decay is seen in the afterglow of certain supernovas, but radiation from high energy processes known to involve other radiation sources than radioactive decay is still classed as gamma radiation. For example, modern high-energy X-rays produced by linear accelerators for megavoltage treatment in cancer often have higher energy (4 to 25 MeV) than do most classical gamma rays produced by nuclear gamma decay. One of the most common gamma ray emitting isotopes used in diagnostic nuclear medicine, technetium-99m, produces gamma radiation of the same energy (140 keV) as that produced by diagnostic X-ray machines, but of significantly lower energy than therapeutic photons from linear particle accelerators. 
In the medical community today, the convention that radiation produced by nuclear decay is the only type referred to as "gamma" radiation is still respected. Due to this broad overlap in energy ranges, in physics the two types of electromagnetic radiation are now often defined by their origin: X-rays are emitted by electrons (either in orbitals outside of the nucleus, or while being accelerated to produce bremsstrahlung-type radiation), while gamma rays are emitted by the nucleus or by means of other particle decays or annihilation events. There is no lower limit to the energy of photons produced by nuclear reactions, and thus ultraviolet or lower energy photons produced by these processes would also be defined as "gamma rays" (indeed, this happens for the isomeric transition of the extremely low-energy isomer 229mTh). The only naming-convention that is still universally respected is the rule that electromagnetic radiation that is known to be of atomic nuclear origin is always referred to as "gamma rays", and never as X-rays. However, in physics and astronomy, the converse convention (that all gamma rays are considered to be of nuclear origin) is frequently violated. In astronomy, higher energy gamma and X-rays are defined by energy, since the processes that produce them may be uncertain and photon energy, not origin, determines the required astronomical detectors needed. High-energy photons occur in nature that are known to be produced by processes other than nuclear decay but are still referred to as gamma radiation. An example is "gamma rays" from lightning discharges at 10 to 20 MeV, and known to be produced by the bremsstrahlung mechanism. Another example is gamma-ray bursts, now known to be produced from processes too powerful to involve simple collections of atoms undergoing radioactive decay. This is part and parcel of the general realization that many gamma rays produced in astronomical processes result not from radioactive decay or particle annihilation, but rather in non-radioactive processes similar to X-rays. Although the gamma rays of astronomy often come from non-radioactive events, a few gamma rays in astronomy are specifically known to originate from gamma decay of nuclei (as demonstrated by their spectra and emission half life). A classic example is that of supernova SN 1987A, which emits an "afterglow" of gamma-ray photons from the decay of newly made radioactive nickel-56 and cobalt-56. Most gamma rays in astronomy, however, arise by other mechanisms.
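As a numerical illustration of the exponential attenuation and half-value-layer behaviour described under "Penetration of matter" above, the following is a minimal sketch. The 1 cm lead HVL is the example value used in that section and applies only at one particular photon energy; real attenuation coefficients are energy- and material-dependent:

```python
import math

# Exponential attenuation I(x) = I0 * exp(-mu * x), expressed via the half-value layer:
# HVL = ln(2) / mu, so n half-value layers transmit (1/2)**n of the incident intensity.
hvl_lead_cm = 1.0                 # lead HVL used as the example in the text (energy-dependent)
mu = math.log(2) / hvl_lead_cm    # linear attenuation coefficient, cm^-1

for thickness_cm in [1, 2, 5, 10]:
    transmitted = math.exp(-mu * thickness_cm)
    print(f"{thickness_cm:2d} cm of lead -> {transmitted:.1%} transmitted "
          f"({thickness_cm / hvl_lead_cm:.0f} half-value layers)")
# 1 cm -> 50%, 2 cm -> 25%, 10 cm -> ~0.1%: attenuation is never total, which is why
# shielding is specified as a reduction factor rather than a complete stop.
```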
Physical sciences
Nuclear physics
null
18617142
https://en.wikipedia.org/wiki/Mercury%20%28element%29
Mercury (element)
Mercury is a chemical element; it has symbol Hg and atomic number 80. It is also known as quicksilver and was formerly named hydrargyrum ( ) from the Greek words and , from which its chemical symbol is derived. A heavy, silvery d-block element, mercury is the only metallic element that is known to be liquid at standard temperature and pressure; the only other element that is liquid under these conditions is the halogen bromine, though metals such as caesium, gallium, and rubidium melt just above room temperature. Mercury occurs in deposits throughout the world mostly as cinnabar (mercuric sulfide). The red pigment vermilion is obtained by grinding natural cinnabar or synthetic mercuric sulfide. Exposure to mercury and mercury-containing organic compounds is toxic to the nervous system, immune system and kidneys of humans and other animals; mercury poisoning can result from exposure to water-soluble forms of mercury (such as mercuric chloride or methylmercury) either directly or through mechanisms of biomagnification. Mercury is used in thermometers, barometers, manometers, sphygmomanometers, float valves, mercury switches, mercury relays, fluorescent lamps and other devices, although concerns about the element's toxicity have led to the phasing out of such mercury-containing instruments. It remains in use in scientific research applications and in amalgam for dental restoration in some locales. It is also used in fluorescent lighting. Electricity passed through mercury vapor in a fluorescent lamp produces short-wave ultraviolet light, which then causes the phosphor in the tube to fluoresce, making visible light. Properties Physical properties Mercury is a heavy, silvery-white metal that is liquid at room temperature. Compared to other metals, it is a poor conductor of heat, but a fair conductor of electricity. It has a melting point of −38.83 °C and a boiling point of 356.73 °C, both the lowest of any stable metal, although preliminary experiments on copernicium and flerovium have indicated that they have even lower boiling points. This effect is due to lanthanide contraction and relativistic contraction reducing the orbit radius of the outermost electrons, and thus weakening the metallic bonding in mercury. Upon freezing, the volume of mercury decreases by 3.59% and its density changes from 13.69 g/cm3 when liquid to 14.184 g/cm3 when solid. The coefficient of volume expansion is 181.59 × 10−6 at 0 °C, 181.71 × 10−6 at 20 °C and 182.50 × 10−6 at 100 °C (per °C). Solid mercury is malleable and ductile, and can be cut with a knife. Table of thermal and physical properties of liquid mercury: Chemical properties Mercury does not react with most acids, such as dilute sulfuric acid, although oxidizing acids such as concentrated sulfuric acid and nitric acid or aqua regia dissolve it to give sulfate, nitrate, and chloride. Like silver, mercury reacts with atmospheric hydrogen sulfide. Mercury reacts with solid sulfur flakes, which are used in mercury spill kits to absorb mercury (spill kits also use activated carbon and powdered zinc). Amalgams Mercury dissolves many metals such as gold and silver to form amalgams. Iron is an exception, and iron flasks have traditionally been used to transport the material. Several other first row transition metals with the exception of manganese, copper and zinc are also resistant in forming amalgams. Other elements that do not readily form amalgams with mercury include platinum. 
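The coefficient of volume expansion quoted above is what makes mercury useful in the liquid-in-glass thermometers mentioned at the start of this article. The following is a minimal sketch of the relationship; the bulb volume and capillary bore are illustrative assumptions, and the expansion of the glass itself is ignored:

```python
# How a small fractional volume change becomes a readable thermometer movement.
beta = 181.71e-6         # volumetric expansion coefficient of mercury per deg C (from the text, at 20 C)
bulb_volume_mm3 = 200.0  # mercury volume in the bulb, mm^3 (assumed)
bore_area_mm2 = 0.01     # capillary cross-section, mm^2 (assumed, ~0.11 mm diameter)

delta_t = 1.0                                 # temperature rise, deg C
delta_v = beta * bulb_volume_mm3 * delta_t    # extra volume, mm^3
column_rise_mm = delta_v / bore_area_mm2

print(f"Column rise per deg C: ~{column_rise_mm:.1f} mm")
# ~3.6 mm per degree with these numbers: the narrow capillary amplifies a volume change
# of less than 0.02% into an easily readable displacement. (Glass expansion slightly
# reduces the real-world figure; that correction is ignored here.)
```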
Sodium amalgam is a common reducing agent in organic synthesis, and is also used in high-pressure sodium lamps. Mercury readily combines with aluminium to form a mercury-aluminium amalgam when the two pure metals come into contact. Since the amalgam destroys the aluminium oxide layer which protects metallic aluminium from oxidizing in-depth (as in iron rusting), even small amounts of mercury can seriously corrode aluminium. For this reason, mercury is not allowed aboard an aircraft under most circumstances because of the risk of it forming an amalgam with exposed aluminium parts in the aircraft. Mercury embrittlement is the most common type of liquid metal embrittlement, as mercury is a natural component of some hydrocarbon reservoirs and will come into contact with petroleum processing equipment under normal conditions. Isotopes There are seven stable isotopes of mercury, with being the most abundant (29.86%). The longest-lived radioisotopes are with a half-life of 444 years, and with a half-life of 46.612 days. Most of the remaining radioisotopes have half-lives that are less than a day. occurs naturally in tiny traces as an intermediate decay product of . and are the most often studied NMR-active nuclei, having spins of and respectively. Etymology Hg is the modern chemical symbol for mercury. It is an abbreviation of , a romanized form of the ancient Greek name for mercury, (). is a Greek compound word meaning , from - (-), the root of () , and () . Like the English name quicksilver (), this name was due to mercury's liquid and shiny properties. The modern English name mercury comes from the planet Mercury. In medieval alchemy, the seven known metals—quicksilver, gold, silver, copper, iron, lead, and tin—were associated with the seven planets. Quicksilver was associated with the fastest planet, which had been named after the Roman god Mercury, who was associated with speed and mobility. The astrological symbol for the planet became one of the alchemical symbols for the metal, and Mercury became an alternative name for the metal. Mercury is the only metal for which the alchemical planetary name survives, as it was decided it was preferable to quicksilver as a chemical name. History Mercury was found in Egyptian tombs that date from 1500 BC; cinnabar, the most common natural source of mercury, has been in use since the Neolithic Age. In China and Tibet, mercury use was thought to prolong life, heal fractures, and maintain generally good health, although it is now known that exposure to mercury vapor leads to serious adverse health effects. The first emperor of a unified China, Qín Shǐ Huáng Dì—allegedly buried in a tomb that contained rivers of flowing mercury on a model of the land he ruled, representative of the rivers of China—was reportedly killed by drinking a mercury and powdered jade mixture formulated by Qin alchemists intended as an elixir of immortality. Khumarawayh ibn Ahmad ibn Tulun, the second Tulunid ruler of Egypt (r. 884–896), known for his extravagance and profligacy, reportedly built a basin filled with mercury, on which he would lie on top of air-filled cushions and be rocked to sleep. In November 2014 "large quantities" of mercury were discovered in a chamber 60 feet below the 1800-year-old pyramid known as the Temple of the Feathered Serpent, the third-largest pyramid of Teotihuacan, Mexico, along with "jade statues, jaguar remains, a box filled with carved shells and rubber balls". 
In Lamanai, once a major city of the Maya civilization, a pool of mercury was found under a marker in a Mesoamerican ballcourt. Aristotle recounts that Daedalus made a wooden statue of Aphrodite move by pouring quicksilver in its interior. In Greek mythology Daedalus gave the appearance of voice in his statues using quicksilver. The ancient Greeks used cinnabar (mercury sulfide) in ointments; the ancient Egyptians and the Romans used it in cosmetics. By 500 BC mercury was used to make amalgams (Medieval Latin amalgama, "alloy of mercury") with other metals. Alchemists thought of mercury as the First Matter from which all metals were formed. They believed that different metals could be produced by varying the quality and quantity of sulfur contained within the mercury. The purest of these was gold, and mercury was called for in attempts at the transmutation of base (or impure) metals into gold, which was the goal of many alchemists. The mines in Almadén (Spain), Monte Amiata (Italy), and Idrija (now Slovenia) dominated mercury production from the opening of the mine in Almadén 2500 years ago, until new deposits were found at the end of the 19th century. Occurrence Mercury is an extremely rare element in Earth's crust; it has an average crustal abundance by mass of only 0.08 parts per million (ppm) and is the 66th most abundant element in the Earth's crust. Because it does not blend geochemically with those elements that constitute the majority of the crustal mass, mercury ores can be extraordinarily concentrated considering the element's abundance in ordinary rock. The richest mercury ores contain up to 2.5% mercury by mass, and even the leanest concentrated deposits are at least 0.1% mercury (12,000 times average crustal abundance). It is found either as a native metal (rare) or in cinnabar, metacinnabar, sphalerite, corderoite, livingstonite and other minerals, with cinnabar (HgS) being the most common ore. Mercury ores often occur in hot springs or other volcanic regions. Beginning in 1558, with the invention of the patio process to extract silver from ore using mercury, mercury became an essential resource in the economy of Spain and its American colonies. Mercury was used to extract silver from the lucrative mines in New Spain and Peru. Initially, the Spanish Crown's mines in Almadén in Southern Spain supplied all the mercury for the colonies. Mercury deposits were discovered in the New World, and more than 100,000 tons of mercury were mined from the region of Huancavelica, Peru, over the course of three centuries following the discovery of deposits there in 1563. The patio process and later pan amalgamation process continued to create great demand for mercury to treat silver ores until the late 19th century. Former mines in Italy, the United States and Mexico, which once produced a large proportion of the world supply, have now been completely mined out or, in the case of Slovenia (Idrija) and Spain (Almadén), shut down due to the fall of the price of mercury. Nevada's McDermitt Mine, the last mercury mine in the United States, closed in 1992. The price of mercury has been highly volatile over the years and in 2006 was $650 per 76-pound (34.46 kg) flask. Mercury is extracted by heating cinnabar in a current of air and condensing the vapor. The equation for this extraction is: HgS + O2 → Hg + SO2 In 2020, China was the top producer of mercury, providing 88% of the world output (2200 out of 2500 tonnes), followed by Tajikistan (178 t), Russia (50 t) and Mexico (32 t). 
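The roasting reaction given above fixes the mass relationships involved in extraction. A small stoichiometric sketch follows; the molar masses are standard values, and the ore grade is the "richest ore" figure quoted earlier, used purely for illustration:

```python
# Stoichiometry of cinnabar roasting: HgS + O2 -> Hg + SO2
M_Hg, M_S, M_O = 200.59, 32.06, 16.00   # g/mol
M_HgS = M_Hg + M_S
M_SO2 = M_S + 2 * M_O

hg_fraction_in_hgs = M_Hg / M_HgS       # mass fraction of mercury in pure cinnabar
so2_per_kg_hg = M_SO2 / M_Hg            # kg of SO2 released per kg of mercury produced

ore_grade = 0.025                       # 2.5% Hg by mass, the richest-ore figure above
hg_per_tonne_ore_kg = 1000 * ore_grade

print(f"Mercury content of pure HgS: {hg_fraction_in_hgs:.1%}")
print(f"SO2 released per kg Hg: {so2_per_kg_hg:.2f} kg")
print(f"Mercury per tonne of 2.5% ore: {hg_per_tonne_ore_kg:.0f} kg")
# Pure cinnabar is ~86% mercury by mass, and every kilogram of mercury driven off is
# accompanied by ~0.32 kg of sulfur dioxide, which is why roaster off-gas handling matters.
```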
Because of the high toxicity of mercury, both the mining of cinnabar and refining for mercury are hazardous and historic causes of mercury poisoning. In China, prison labor was used by a private mining company as recently as the 1950s to develop new cinnabar mines. Thousands of prisoners were used by the Luo Xi mining company to establish new tunnels. Worker health in functioning mines is at high risk. A newspaper claimed that an unidentified European Union directive calling for energy-efficient lightbulbs to be made mandatory by 2012 encouraged China to re-open cinnabar mines to obtain the mercury required for CFL bulb manufacture. Environmental dangers have been a concern, particularly in the southern cities of Foshan and Guangzhou, and in Guizhou province in the southwest. Abandoned mercury mine processing sites often contain very hazardous waste piles of roasted cinnabar calcines. Water run-off from such sites is a recognized source of ecological damage. Former mercury mines may be suited for constructive re-use; for example, in 1976 Santa Clara County, California purchased the historic Almaden Quicksilver Mine and created a county park on the site, after conducting extensive safety and environmental analysis of the property. Chemistry All known mercury compounds exhibit one of two positive oxidation states: I and II. Experiments have failed to unequivocally demonstrate any higher oxidation states: both the claimed 1976 electrosynthesis of an unstable Hg(III) species and the 2007 cryogenic isolation of HgF4 have disputed interpretations and remain difficult (if not impossible) to reproduce. Compounds of mercury(I) Unlike its lighter neighbors, cadmium and zinc, mercury usually forms simple stable compounds with metal-metal bonds. Most mercury(I) compounds are diamagnetic and feature the dimeric cation [Hg2]2+. Stable derivatives include the chloride and nitrate. In aqueous solution of a mercury(I) salt, slight disproportionation of [Hg2]2+ into Hg2+ and elemental mercury results in >0.5% of the dissolved mercury existing as Hg2+. In these solutions, complexation of the Hg2+ ion by added ligands such as cyanide causes the disproportionation to go to completion, with the mercury(I) converting to elemental mercury and the corresponding mercury(II) compound (e.g. mercury(II) cyanide if cyanide is used as the ligand). Mercury(I) chloride, a colorless solid also known as calomel, is really the compound with the formula Hg2Cl2, with the connectivity Cl-Hg-Hg-Cl. It reacts with chlorine to give mercury(II) chloride, which resists further oxidation. Mercury(I) hydride, a colorless gas, has the formula HgH, containing no Hg-Hg bond; however, the gas has only ever been observed as isolated molecules. Indicative of its tendency to bond to itself, mercury forms mercury polycations, which consist of linear chains of mercury centers, capped with a positive charge. One example is Hg3(AsF6)2, which contains the linear [Hg3]2+ cation. Compounds of mercury(II) Mercury(II) is the most common oxidation state and is the main one in nature as well. All four mercuric halides are known and have been demonstrated to form linear coordination geometry, despite mercury's tendency to form tetrahedral molecular geometry with other ligands. This behavior is similar to that of the Ag+ ion. The best known mercury halide is mercury(II) chloride, an easily sublimating white solid. Mercury(II) oxide, the main oxide of mercury, arises when the metal is exposed to air for long periods at elevated temperatures. 
It reverts to the elements upon heating near 400 °C, as was demonstrated by Joseph Priestley in an early synthesis of pure oxygen. Hydroxides of mercury are poorly characterized, as attempted isolation studies of mercury(II) hydroxide have yielded mercury oxide instead. Being a soft metal, mercury forms very stable derivatives with the heavier chalcogens. Preeminent is mercury(II) sulfide, HgS, which occurs in nature as the ore cinnabar and is the brilliant pigment vermilion. Like ZnS, HgS crystallizes in two forms: the red trigonal form (cinnabar) and the black cubic zinc blende form. The latter sometimes occurs naturally as metacinnabar. Mercury(II) selenide (HgSe) and mercury(II) telluride (HgTe) are also known; these, as well as various derivatives such as mercury cadmium telluride and mercury zinc telluride, are semiconductors useful as infrared detector materials. Mercury(II) salts form a variety of complex derivatives with ammonia. These include Millon's base (containing the Hg2N+ cation), a related one-dimensional polymeric cation, and "fusible white precipitate", [Hg(NH3)2]Cl2. Known as Nessler's reagent, potassium tetraiodomercurate(II) (K2[HgI4]) is still occasionally used to test for ammonia owing to its tendency to form the deeply colored iodide salt of Millon's base. Mercury fulminate is a detonator widely used in explosives. Organomercury compounds Organic mercury compounds are historically important but are of little industrial value in the western world. Mercury(II) salts are a rare example of simple metal complexes that react directly with aromatic rings. Organomercury compounds are always divalent and usually two-coordinate, with linear geometry. Unlike organocadmium and organozinc compounds, organomercury compounds do not react with water. They usually have the formula HgR2 (often volatile) or HgRX (often solids), where R is aryl or alkyl and X is usually halide or acetate. Methylmercury, a generic term for compounds with the formula CH3HgX, refers to a dangerous family of compounds that are often found in polluted water. They arise by a process known as biomethylation. Applications Mercury is used primarily for the manufacture of industrial chemicals or for electrical and electronic applications. It is used in some liquid-in-glass thermometers, especially those used to measure high temperatures. A still increasing amount is used as gaseous mercury in fluorescent lamps, while most of the other applications are slowly being phased out due to health and safety regulations. In some applications, mercury is replaced with the less toxic but considerably more expensive Galinstan alloy. Medicine Historical and folk Mercury and its compounds have been used in medicine, although they are much less common today than they once were, now that the toxic effects of mercury and its compounds are more widely understood. An example of the early therapeutic application of mercury was published in 1787 by James Lind. The first edition of The Merck Manuals (1899) featured many then-medically relevant mercuric compounds, such as mercury-ammonium chloride, yellow mercury proto-iodide, calomel, and mercuric chloride, among others. Mercury in the form of one of its common ores, cinnabar, is used in various traditional medicines, especially in traditional Chinese medicine. Review of its safety has found that cinnabar can lead to significant mercury intoxication when heated, consumed in overdose, or taken long term, and can have adverse effects at therapeutic doses, though effects from therapeutic doses are typically reversible. 
Although this form of mercury appears to be less toxic than other forms, its use in traditional Chinese medicine has not yet been justified, as the therapeutic basis for the use of cinnabar is not clear. Mercury(I) chloride (also known as calomel or mercurous chloride) has been used in traditional medicine as a diuretic, topical disinfectant, and laxative. Mercury(II) chloride (also known as mercuric chloride or corrosive sublimate) was once used to treat syphilis (along with other mercury compounds), although it is so toxic that sometimes the symptoms of its toxicity were confused with those of the syphilis it was believed to treat. It is also used as a disinfectant. Blue mass, a pill or syrup in which mercury is the main ingredient, was prescribed throughout the 19th century for numerous conditions including constipation, depression, child-bearing and toothaches. In the early 20th century, mercury was administered to children yearly as a laxative and dewormer, and it was used in teething powders for infants. The mercury-containing organohalide merbromin (sometimes sold as Mercurochrome) is still widely used but has been banned in some countries, such as the U.S. Contemporary Mercury is an ingredient in dental amalgams. Thiomersal (called Thimerosal in the United States) is an organic compound used as a preservative in vaccines, although this use is in decline. Although it was widely speculated that this mercury-based preservative could cause or trigger autism in children, no evidence supports any such link. Nevertheless, thiomersal has been removed from, or reduced to trace amounts in, all U.S. vaccines recommended for children 6 years of age and under, with the exception of the inactivated influenza vaccine. Merbromin (Mercurochrome), another mercury compound, is a topical antiseptic used for minor cuts and scrapes in some countries. Today, the use of mercury in medicine has greatly declined in all respects, especially in developed countries. Mercury is still used in some diuretics, although substitutes such as thiazides now exist for most therapeutic uses. In 2003, mercury compounds were found in some over-the-counter drugs, including topical antiseptics, stimulant laxatives, diaper-rash ointment, eye drops, and nasal sprays. The FDA has "inadequate data to establish general recognition of the safety and effectiveness" of the mercury ingredients in these products. Production of chlorine and caustic soda Chlorine is produced from sodium chloride (common salt, NaCl) using electrolysis to separate metallic sodium from chlorine gas. Usually salt is dissolved in water to produce a brine. By-products of any such chloralkali process are hydrogen (H2) and sodium hydroxide (NaOH), which is commonly called caustic soda or lye. By far the largest use of mercury in the late 20th century was in the mercury cell process (also called the Castner-Kellner process) where metallic sodium is formed as an amalgam at a cathode made from mercury; this sodium is then reacted with water to produce sodium hydroxide. Many of the industrial mercury releases of the 20th century came from this process, although modern plants claim to be safe in this regard. From the 1960s onward, the majority of industrial plants moved away from mercury cell processes towards diaphragm cell technologies to produce chlorine, though 11% of the chlorine made in the United States was still produced with the mercury cell method as of 2005. 
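Overall, the mercury cell route described above carries out the same net chemistry as any chloralkali electrolysis, 2 NaCl + 2 H2O → Cl2 + 2 NaOH + H2; the mercury cathode only changes the intermediate steps, with sodium carried away as an amalgam and decomposed with water in a separate vessel. A rough mass balance under that overall equation is sketched below, using standard molar masses; it is an illustration, not plant data.

```python
# Overall chloralkali stoichiometry: 2 NaCl + 2 H2O -> Cl2 + 2 NaOH + H2.
# The mercury cell only changes the intermediate steps (Na is carried as an
# amalgam to a separate decomposer), not this overall mass balance.
M_NaCl, M_Cl2, M_NaOH, M_H2 = 58.44, 70.90, 40.00, 2.016

def products_per_tonne_salt(tonnes_nacl: float = 1.0):
    """Return tonnes of Cl2, NaOH and H2 per tonne of NaCl electrolysed."""
    mol_nacl = tonnes_nacl / M_NaCl          # "tonne-moles" of NaCl
    return {
        "Cl2":  (mol_nacl / 2) * M_Cl2,      # 2 NaCl give 1 Cl2
        "NaOH": mol_nacl * M_NaOH,           # 2 NaCl give 2 NaOH
        "H2":   (mol_nacl / 2) * M_H2,       # 2 NaCl give 1 H2
    }

print(products_per_tonne_salt())
# ~0.61 t Cl2, ~0.68 t NaOH and ~0.017 t H2 per tonne of salt
```

Per tonne of salt this gives roughly 0.6 t of chlorine and 0.7 t of caustic soda, whichever cell technology is used.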
Laboratory uses Thermometers Thermometers containing mercury were invented in the early 18th century by Daniel Gabriel Fahrenheit, though earlier attempts at making temperature-measuring instruments filled with quicksilver had been described in the 1650s. Fahrenheit's mercury thermometer was based on an earlier design that used alcohol rather than mercury; the mercury thermometer was significantly more accurate than those using alcohol. From the early 21st century onwards, the use of mercury thermometers has been declining, and mercury-containing instruments have been banned in many jurisdictions following the 1998 Protocol on Heavy Metals. Modern alternatives to mercury thermometers include resistance thermometers, thermocouples, and thermistor sensors that output to a digital display. Mirrors Some transit telescopes use a basin of mercury to form a flat and absolutely horizontal mirror, useful in determining an absolute vertical or perpendicular reference. Concave horizontal parabolic mirrors may be formed by rotating liquid mercury on a disk, the parabolic form of the liquid thus formed reflecting and focusing incident light. Such liquid-mirror telescopes are cheaper than conventional large mirror telescopes by up to a factor of 100, but the mirror cannot be tilted and always points straight up. Electrochemistry Liquid mercury is part of a popular secondary reference electrode (called the calomel electrode) in electrochemistry as an alternative to the standard hydrogen electrode. The calomel electrode is used to work out the electrode potential of half cells. The triple point of mercury, −38.8344 °C, is a fixed point used as a temperature standard for the International Temperature Scale (ITS-90). Polarography and crystallography In polarography, both the dropping mercury electrode and the hanging mercury drop electrode use elemental mercury. This use allows a new uncontaminated electrode to be available for each measurement or each new experiment. Mercury-containing compounds are also of use in the field of structural biology. Mercuric compounds such as mercury(II) chloride or potassium tetraiodomercurate(II) can be added to protein crystals in an effort to create heavy atom derivatives that can be used to solve the phase problem in X-ray crystallography via isomorphous replacement or anomalous scattering methods. Niche uses Gaseous mercury is used in mercury-vapor lamps and some "neon sign" type advertising signs and fluorescent lamps. Those low-pressure lamps emit very spectrally narrow lines, which are traditionally used in optical spectroscopy for calibration of spectral position. Commercial calibration lamps are sold for this purpose; reflecting a fluorescent ceiling light into a spectrometer is a common calibration practice. Gaseous mercury is also found in some electron tubes, including ignitrons, thyratrons, and mercury arc rectifiers. It is also used in specialist medical care lamps for skin tanning and disinfection. Gaseous mercury is added to cold cathode argon-filled lamps to increase the ionization and electrical conductivity. An argon-filled lamp without mercury will have dull spots and will fail to light correctly. Lighting containing mercury can be bombarded/oven pumped only once. When added to neon filled tubes, inconsistent red and blue spots are produced in the light emissions until the initial burning-in process is completed; eventually it will light a consistent dull off-blue color. 
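Because the low-pressure mercury lamps mentioned under Niche uses emit at sharply defined wavelengths, they make convenient calibration references. The sketch below lists a few commonly quoted mercury emission lines (approximate literature values, not figures taken from this article) and converts them to photon energies with E = hc/λ.

```python
# A few commonly quoted low-pressure mercury calibration lines (approximate
# wavelengths in nm; illustrative values, not taken from this article).
HG_LINES_NM = [253.65, 365.02, 404.66, 435.83, 546.07, 576.96, 579.07]

H = 6.626_070_15e-34    # Planck constant, J*s
C = 2.997_924_58e8      # speed of light, m/s
EV = 1.602_176_634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy E = h*c/lambda, returned in electronvolts."""
    return H * C / (wavelength_nm * 1e-9) / EV

for nm in HG_LINES_NM:
    print(f"{nm:7.2f} nm  ->  {photon_energy_ev(nm):.3f} eV")
# The 253.65 nm resonance line (~4.89 eV) is the ultraviolet emission that
# excites the phosphor coating in fluorescent lamps.
```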
The Deep Space Atomic Clock (DSAC) under development by the Jet Propulsion Laboratory utilises mercury in a linear ion-trap-based clock. The novel use of mercury permits the creation of compact atomic clocks with low energy requirements, ideal for space probes and Mars missions. Skin whitening Mercury is effective as an active ingredient in skin whitening compounds used to depigment skin. The Minamata Convention on Mercury limits the concentration of mercury in such whiteners to 1 part per million. However, as of 2022, many commercially sold whitener products continue to exceed that limit, and are considered toxic. Firearms Mercury(II) fulminate is a primary explosive, which was mainly used as a primer in firearm cartridges throughout the 19th and 20th centuries. Mining Mercury is used in illegal gold mining to help separate gold particles from a mixture of sand or gravel and water. Small gold particles may form mercury-gold amalgam and therefore increase the gold recovery rates. The use of mercury causes a severe pollution problem in places such as Ghana. Historic uses Many historic applications made use of the peculiar physical properties of mercury, especially as a dense liquid and a liquid metal: Quantities of liquid mercury have been recovered from elite Maya tombs (100–700 AD) or ritual caches at six sites. This mercury may have been used in bowls as mirrors for divinatory purposes. Five of these date to the Classic Period of Maya civilization (c. 250–900) but one example predated this. In Islamic Spain, it was used for filling decorative pools. Later, the American artist Alexander Calder built a mercury fountain for the Spanish Pavilion at the 1937 World Exhibition in Paris. The fountain is now on display at the Fundació Joan Miró in Barcelona. The Fresnel lenses of old lighthouses used to float and rotate in a bath of mercury which acted like a bearing. Mercury sphygmomanometers, barometers, diffusion pumps, coulometers, and many other laboratory instruments took advantage of mercury's properties as a very dense, opaque liquid with a nearly linear thermal expansion. As an electrically conductive liquid, it was used in mercury switches (including home mercury light switches installed prior to 1970), tilt switches used in old fire detectors and in some home thermostats. Owing to its acoustic properties, mercury was used as the propagation medium in delay-line memory devices used in early digital computers of the mid-20th century, such as the SEAC computer. In 1911, Heike Kamerlingh Onnes discovered superconductivity through the cooling of mercury below 4 kelvin shortly after the discovery and production of liquid helium. Its superconductive properties were later determined to be unusual compared to other later-discovered superconductors, such as the more popular niobium alloys. Experimental mercury vapor turbines were installed to increase the efficiency of fossil-fuel electrical power plants. The South Meadow power plant in Hartford, CT employed mercury as its working fluid, in a binary configuration with a secondary water circuit, for a number of years starting in the late 1920s in a drive to improve plant efficiency. Several other plants were built, including the Schiller Station in Portsmouth, NH, which went online in 1950. The idea did not catch on industry-wide due to the weight and toxicity of mercury, as well as the advent of supercritical steam plants in later years. 
Similarly, liquid mercury was used as a coolant for some nuclear reactors; however, sodium is proposed for reactors cooled with liquid metal, because the high density of mercury requires much more energy to circulate as coolant. Mercury was a propellant for early ion engines in electric space propulsion systems. Advantages were mercury's high molecular weight, low ionization energy, low dual-ionization energy, high liquid density and liquid storability at room temperature. Disadvantages were concerns regarding environmental impact associated with ground testing and concerns about eventual cooling and condensation of some of the propellant on the spacecraft in long-duration operations. The first spaceflight to use electric propulsion was a mercury-fueled ion thruster developed at NASA Glenn Research Center and flown on the Space Electric Rocket Test "SERT-1" spacecraft launched by NASA at its Wallops Flight Facility in 1964. The SERT-1 flight was followed up by the SERT-2 flight in 1970. Mercury and caesium were preferred propellants for ion engines until Hughes Research Laboratory performed studies finding xenon gas to be a suitable replacement. Xenon is now the preferred propellant for ion engines, as it has a high molecular weight, little or no reactivity due to its noble gas nature, and high liquid density under mild cryogenic storage. Other applications made use of the chemical properties of mercury: The mercury battery is a non-rechargeable electrochemical battery, a primary cell, that was common in the middle of the 20th century. It was used in a wide variety of applications and was available in various sizes, particularly button sizes. Its constant voltage output and long shelf life gave it a niche use for camera light meters and hearing aids. The mercury cell was effectively banned in most countries in the 1990s due to concerns about the mercury contaminating landfills. Mercury was used for preserving wood, developing daguerreotypes, silvering mirrors, anti-fouling paints, herbicides, interior latex paint, handheld maze games, cleaning, and road leveling devices in cars. Mercury compounds have been used in antiseptics, laxatives, antidepressants, and in antisyphilitics. Mercury has been replaced with safer compounds in most, if not all, of these applications. It was allegedly used by allied spies to sabotage Luftwaffe planes: a mercury paste was applied to bare aluminium, causing the metal to rapidly corrode; this would cause structural failures. Mercury was once used as a gun barrel bore cleaner. From the mid-18th to the mid-19th centuries, a process called "carroting" was used in the making of felt hats. Animal skins were rinsed in an orange solution (the term "carroting" arose from this color) of the mercury compound mercuric nitrate, Hg(NO3)2. This process separated the fur from the pelt and matted it together. This solution and the vapors it produced were highly toxic. The United States Public Health Service banned the use of mercury in the felt industry in December 1941. The psychological symptoms associated with mercury poisoning inspired the phrase "mad as a hatter". Lewis Carroll's "Mad Hatter" in his book Alice's Adventures in Wonderland was a play on words based on the older phrase, but the character himself does not exhibit symptoms of mercury poisoning. Historically, mercury was used extensively in hydraulic gold mining (see the Mining section above). Large-scale use of mercury in this way stopped in the 1960s. However, mercury is still used in small-scale, often clandestine, gold prospecting. 
It is estimated that 45,000 metric tons of mercury used in California for placer mining have not been recovered. Mercury was also used in silver mining to extract the metal from ore through the patio process. Toxicity and safety Due to its physical properties and relative chemical inertness, liquid mercury is absorbed very poorly through intact skin and the gastrointestinal tract. Mercury vapor is the primary hazard of elemental mercury. As a result, containers of mercury are securely sealed to avoid spills and evaporation. Heating of mercury, or of compounds of mercury that may decompose when heated, should be carried out with adequate ventilation in order to minimize exposure to mercury vapor. The most toxic forms of mercury are its organic compounds, such as dimethylmercury and methylmercury. Mercury can cause both chronic and acute poisoning. Releases in the environment Preindustrial deposition rates of mercury from the atmosphere may be about 4 ng per 1 L of ice deposited. Volcanic eruptions and related natural sources are responsible for approximately half of atmospheric mercury emissions. Atmospheric mercury contamination in outdoor urban air at the start of the 21st century was measured at 0.01–0.02 μg/m3. A 2001 study measured mercury levels in 12 indoor sites chosen to represent a cross-section of building types, locations and ages in the New York area. This study found mercury concentrations significantly elevated over outdoor concentrations, at a range of 0.0065 – 0.523 μg/m3. The average was 0.069 μg/m3. Half of mercury emissions are attributed to mankind. The sources can be divided into the following estimated percentages: 65% from stationary combustion, of which coal-fired power plants are the largest aggregate source (40% of U.S. mercury emissions in 1999). This includes power plants fueled with gas where the mercury has not been removed. Emissions from coal combustion are between one and two orders of magnitude higher than emissions from oil combustion, depending on the country. 11% from gold production. The three largest point sources for mercury emissions in the U.S. are the three largest gold mines. Hydrogeochemical release of mercury from gold-mine tailings has been accounted as a significant source of atmospheric mercury in eastern Canada. 6.8% from non-ferrous metal production, typically smelters. 6.4% from cement production. 3.0% from waste disposal, including municipal and hazardous waste, crematoria, and sewage sludge incineration. 3.0% from caustic soda production. 1.4% from pig iron and steel production. 1.1% from mercury production, mainly for batteries. 2.0% from other sources. The above percentages are estimates of the global human-caused mercury emissions in 2000, excluding biomass burning, an important source in some regions. A serious industrial disaster was the dumping of waste mercury compounds into Minamata Bay, Japan, between 1932 and 1968. It is estimated that over 3,000 people suffered various deformities, severe mercury poisoning symptoms or death from what became known as Minamata disease. China is estimated to produce 50% of mercury emissions, most of which result from production of vinyl chloride. Mercury also enters into the environment through the improper disposal of mercury-containing products. Due to health concerns, toxics use reduction efforts are cutting back or eliminating mercury in such products. For example, the amount of mercury sold in thermostats in the United States decreased from 14.5 tons in 2004 to 3.9 tons in 2007. 
The tobacco plant readily absorbs and accumulates heavy metals such as mercury from the surrounding soil into its leaves. These are subsequently inhaled during tobacco smoking. While mercury is a constituent of tobacco smoke, studies have largely failed to discover a significant correlation between smoking and mercury uptake by humans compared to sources such as occupational exposure, fish consumption, and amalgam tooth fillings. A less well-known source of mercury is the burning of joss paper, which is a common tradition practiced in Asia, including China, Vietnam, Hong Kong, Thailand, Taiwan and Malaysia. Spill cleanup Mercury spills pose an immediate threat to people handling the material, in addition to being an environmental hazard if the material is not contained properly. This is of particular concern for visible mercury, or mercury in liquid state, as its unusual appearance and behavior for a metal makes it an attractive nuisance to the uninformed. Procedures have been developed to contain mercury spills, as well as recommendations on appropriate responses based on the conditions of a spill. Tracking liquid mercury away from the site of a spill is a major concern in liquid mercury spills; regulations emphasize containment of the visible mercury as the first course of action, followed by monitoring of mercury vapors and vapor cleanup. Several products are sold as mercury spill adsorbents, ranging from metal salts to polymers and zeolites. Sediment contamination Sediments within large urban-industrial estuaries act as an important sink for point source and diffuse mercury pollution within catchments. A 2015 study of foreshore sediments from the Thames estuary measured total mercury at 0.01 to 12.07 mg/kg with mean of 2.10 mg/kg and median of 0.85 mg/kg (n = 351). The highest mercury concentrations were shown to occur in and around the city of London in association with fine grain muds and high total organic carbon content. The strong affinity of mercury for carbon rich sediments has also been observed in salt marsh sediments of the River Mersey, with a mean concentration of 2 mg/kg, up to 5 mg/kg. These concentrations are far higher than those in the salt marsh river creek sediments of New Jersey and mangroves of Southern China, which exhibit low mercury concentrations of about 0.2 mg/kg. Occupational exposure Due to the health effects of mercury exposure, industrial and commercial uses are regulated in many countries. The World Health Organization, OSHA, and NIOSH all treat mercury as an occupational hazard; both OSHA and NIOSH, among other regulatory agencies, have established specific occupational exposure limits on the element and its derivative compounds in liquid and vapor form. Environmental releases and disposal of mercury are regulated in the U.S. primarily by the United States Environmental Protection Agency. Fish Fish and shellfish have a natural tendency to concentrate mercury in their bodies, often in the form of methylmercury, a highly toxic organic compound of mercury. Species of fish that are high on the food chain, such as shark, swordfish, king mackerel, bluefin tuna, albacore tuna, and tilefish contain higher concentrations of mercury than others. Because mercury and methylmercury are fat soluble, they primarily accumulate in the viscera, although they are also found throughout the muscle tissue. Mercury presence in fish muscles can be studied using non-lethal muscle biopsies. Mercury present in prey fish accumulates in the predator that consumes them. 
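This step-by-step accumulation, which the next paragraph quantifies as biomagnification, can be illustrated with a deliberately simplified model: a constant uptake from food working against a much slower elimination (depuration) rate. Every number below is an illustrative assumption, not a measured parameter.

```python
# Toy first-order model of methylmercury in fish tissue:
#   dC/dt = uptake_rate - elimination_rate * C
# Because uptake outpaces depuration, tissue concentration keeps climbing
# toward the steady state uptake/elimination. All parameters are assumed
# purely for illustration.
def tissue_concentration(days: int,
                         uptake_per_day: float = 0.002,      # mg/kg gained per day (assumed)
                         elimination_per_day: float = 0.001,  # slow depuration rate (assumed)
                         c0: float = 0.0) -> float:
    c = c0
    for _ in range(days):
        c += uptake_per_day - elimination_per_day * c
    return c

for years in (1, 2, 5, 10):
    print(f"after {years:2d} y: {tissue_concentration(365 * years):.2f} mg/kg")

# Biomagnification up a food chain: the next paragraph notes roughly a 10x
# step per trophic level, so a predator several levels up can carry orders
# of magnitude more mercury than organisms at the base of the chain.
base = 0.01  # mg/kg in a low-trophic-level organism (assumed)
for level in range(4):
    print(f"trophic level {level}: ~{base * 10**level:.2f} mg/kg")
```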
Since fish are less efficient at depurating than accumulating methylmercury, methylmercury concentrations in the fish tissue increase over time. Thus species that are high on the food chain amass body burdens of mercury that can be ten times higher than the species they consume. This process is called biomagnification. Mercury poisoning happened this way in Minamata, Japan, now called Minamata disease. Cosmetics Some facial creams contain dangerous levels of mercury. Most contain comparatively non-toxic inorganic mercury, but products containing highly toxic organic mercury have been encountered. New York City residents have been found to be exposed to significant levels of inorganic mercury compounds through the use of skin care products. Effects and symptoms of mercury poisoning Toxic effects include damage to the brain, kidneys and lungs. Mercury poisoning can result in several diseases, including acrodynia (pink disease), Hunter-Russell syndrome, and Minamata disease. Symptoms typically include sensory impairment (vision, hearing, speech), disturbed sensation and a lack of coordination. The type and degree of symptoms exhibited depend upon the individual toxin, the dose, and the method and duration of exposure. Case–control studies have shown effects such as tremors, impaired cognitive skills, and sleep disturbance in workers with chronic exposure to mercury vapor even at low concentrations in the range 0.7–42 μg/m3. A study has shown that acute exposure (4–8 hours) to calculated elemental mercury levels of 1.1 to 44 mg/m3 resulted in chest pain, dyspnea, cough, hemoptysis, impairment of pulmonary function, and evidence of interstitial pneumonitis. Acute exposure to mercury vapor has been shown to result in profound central nervous system effects, including psychotic reactions characterized by delirium, hallucinations, and suicidal tendency. Occupational exposure has resulted in broad-ranging functional disturbance, including erethism, irritability, excitability, excessive shyness, and insomnia. With continuing exposure, a fine tremor develops and may escalate to violent muscular spasms. Tremor initially involves the hands and later spreads to the eyelids, lips, and tongue. Long-term, low-level exposure has been associated with more subtle symptoms of erethism, including fatigue, irritability, loss of memory, vivid dreams and depression. Treatment Research on the treatment of mercury poisoning is limited. Currently available drugs for acute mercurial poisoning include chelators N-acetyl-D,L-penicillamine (NAP), British Anti-Lewisite (BAL), 2,3-dimercapto-1-propanesulfonic acid (DMPS), and dimercaptosuccinic acid (DMSA). In one small study including 11 construction workers exposed to elemental mercury, patients were treated with DMSA and NAP. Chelation therapy with both drugs resulted in the mobilization of a small fraction of the total estimated body mercury. DMSA was able to increase the excretion of mercury to a greater extent than NAP. Regulations International 140 countries agreed in the Minamata Convention on Mercury by the United Nations Environment Programme (UNEP) to prevent mercury vapor emissions. The convention was signed on 10 October 2013. United States In the United States, the Environmental Protection Agency is charged with regulating and managing mercury contamination. Several laws give the EPA this authority, including the Clean Air Act, the Clean Water Act, the Resource Conservation and Recovery Act, and the Safe Drinking Water Act. 
Additionally, the Mercury-Containing and Rechargeable Battery Management Act, passed in 1996, phases out the use of mercury in batteries, and provides for the efficient and cost-effective disposal of many types of used batteries. North America contributed approximately 11% of the total global anthropogenic mercury emissions in 1995. The United States Clean Air Act, passed in 1990, put mercury on a list of toxic pollutants that need to be controlled to the greatest possible extent. Thus, industries that release high concentrations of mercury into the environment agreed to install maximum achievable control technologies (MACT). In March 2005, the EPA promulgated a regulation that added power plants to the list of sources that should be controlled and instituted a national cap and trade system. States were given until November 2006 to impose stricter controls, but after a legal challenge from several states, the regulations were struck down by a federal appeals court on 8 February 2008. The rule was deemed not sufficient to protect the health of persons living near coal-fired power plants, given the negative effects documented in the EPA Study Report to Congress of 1998. However newer data published in 2015 showed that after introduction of the stricter controls mercury declined sharply, indicating that the Clean Air Act had its intended impact. The EPA announced new rules for coal-fired power plants on 22 December 2011. Cement kilns that burn hazardous waste are held to a looser standard than are standard hazardous waste incinerators in the United States, and as a result are a disproportionate source of mercury pollution. European Union In the European Union, the directive on the Restriction of the Use of Certain Hazardous Substances in Electrical and Electronic Equipment (see RoHS) bans mercury from certain electrical and electronic products, and limits the amount of mercury in other products to less than 1000 ppm. There are restrictions for mercury concentration in packaging (the limit is 100 ppm for sum of mercury, lead, hexavalent chromium and cadmium) and batteries (the limit is 5 ppm). In July 2007, the European Union also banned mercury in non-electrical measuring devices, such as thermometers and barometers. The ban applies to new devices only, and contains exemptions for the health care sector and a two-year grace period for manufacturers of barometers. Scandinavia Norway enacted a total ban on the use of mercury in the manufacturing and import/export of mercury products, effective 1 January 2008. In 2002, several lakes in Norway were found to have a poor state of mercury pollution, with an excess of 1 μg/g of mercury in their sediment. In 2008, Norway's Minister of Environment Development Erik Solheim said: "Mercury is among the most dangerous environmental toxins. Satisfactory alternatives to Hg in products are available, and it is therefore fitting to induce a ban." Products containing mercury were banned in Sweden in 2009, while elemental mercury has been banned from manufacture and use in all but a few applications (such as certain energy-saving light sources and amalgam dental fillings) in Denmark since 2008.
Physical sciences
Chemical elements_2
null
23676652
https://en.wikipedia.org/wiki/Fallopian%20tube
Fallopian tube
The fallopian tubes, also known as uterine tubes, oviducts or salpinges (singular: salpinx), are paired tubular sex organs in the human female body that stretch from the ovaries to the uterus. The fallopian tubes are part of the female reproductive system. In other vertebrates, they are simply called oviducts. Each tube is a hollow muscular organ that is on average about 10–14 cm in length, with an external diameter of about 1 cm. It has four described parts: the intramural part, isthmus, ampulla, and infundibulum with associated fimbriae. Each tube has two openings: a proximal opening nearest to the uterus, and a distal opening nearest to the ovary. The fallopian tubes are held in place by the mesosalpinx, a part of the broad ligament mesentery that wraps around the tubes. Another part of the broad ligament, the mesovarium, suspends the ovaries in place. An egg cell is transported from an ovary to a fallopian tube, where it may be fertilized in the ampulla of the tube. The fallopian tubes are lined with simple columnar epithelium with hairlike extensions called cilia, which, together with peristaltic contractions from the muscular layer, move the fertilized egg (zygote) along the tube. On its journey to the uterus, the zygote undergoes cell divisions that change it into a blastocyst, an early embryo, in readiness for implantation. Almost a third of cases of infertility are caused by fallopian tube pathologies. These include inflammation and tubal obstruction. A number of tubal pathologies cause damage to the cilia of the tube, which can impede movement of the sperm or egg. The name comes from the Italian Catholic priest and anatomist Gabriele Falloppio, for whom other anatomical structures are also named. Structure Each fallopian tube leaves the uterus at an opening at the uterine horns known as the proximal tubal opening or proximal ostium. The tubes' average length includes the intramural part of the tube. The tubes extend to near the ovaries, where they open into the abdomen at the distal tubal openings. In other mammals, the fallopian tube is called the oviduct, a term that may also be used in reference to the fallopian tube in the human. The fallopian tubes are held in place by the mesosalpinx, a part of the broad ligament mesentery that wraps around the tubes. Another part of the broad ligament, the mesovarium, suspends the ovaries in place. Parts Each tube is composed of four parts. Starting from the proximal tubal opening, the intramural (or interstitial) part links to the narrow isthmus; the isthmus connects to the larger ampulla; and the ampulla connects with the infundibulum and its associated fimbriae, which opens into the peritoneal cavity at the distal tubal opening. Intramural part The intramural part or interstitial part of the fallopian tube lies in the myometrium, the muscular wall of the uterus. This is the narrowest part of the tube and crosses the uterine wall to connect with the isthmus. The intramural part is 0.7 mm wide and 1 cm long. Isthmus The narrow isthmus links the tube to the uterus and connects to the ampulla. The isthmus is a rounded, firm, muscular part of the tube. The isthmus is 1–5 mm wide and 3 cm long. The isthmus contains a large number of secretory cells. Ampulla The ampulla is the major part of the fallopian tube. The ampulla is the widest part of the tube, with a maximal luminal diameter of 1 cm and a length of 5 cm. It curves over the ovary and is the primary site of fertilization. The ampulla contains a large number of ciliated epithelial cells. 
It is thin walled with a much folded luminal surface, and opens into the infundibulum. Infundibulum The infundibulum opens into the abdomen at the distal tubal opening and rests above the ovary. Most cells here are ciliated epithelial cells. The opening is surrounded by fimbriae, which help in the collection of the oocyte after ovulation. The fimbriae (singular: fimbria) are a fringe of densely ciliated tissue projections, approximately 1 mm in width, around the distal tubal opening, oriented towards the ovary. They are attached to the ends of the infundibulum, extending from its inner circumference and muscular wall. The cilia beat towards the fallopian tube. Of all the fimbriae, one fimbria, known as the ovarian fimbria, is long enough to reach and make contact with the near part of the ovary during ovulation. The fimbriae have a higher density of blood vessels than the other parts of the tube, and the ovarian fimbria has an even higher density. An ovary is not directly connected to its adjacent fallopian tube. When ovulation is about to occur, the sex hormones activate the fimbriae, causing them to swell with blood, extend, and hit the ovary in a gentle, sweeping motion. An oocyte is released from the ovary into the peritoneal cavity, and the cilia of the fimbriae sweep it into the fallopian tube. Microanatomy When viewed under the microscope, the fallopian tube has three layers. From outer to inner, these are the serosa, the muscular layer (muscularis), and the mucosa. The outermost covering layer of serous membrane is known as the serosa. The serosa is derived from the visceral peritoneum. The muscular layer consists of an outer ring of smooth muscle arranged longitudinally and a thick inner circular ring of smooth muscle. This layer is responsible for the rhythmic peristaltic contractions of the fallopian tubes, which, together with the cilia, move the egg cell towards the uterus. The innermost mucosa is made up of a layer of luminal epithelium and an underlying thin layer of loose connective tissue, the lamina propria. There are three different cell types in the epithelium. Around 25% of the cells are ciliated columnar cells; around 60% are secretory cells, and the rest are peg cells, thought to be a secretory cell variant. The ciliated cells are most numerous in the infundibulum and the ampulla. Estrogen increases the formation of cilia on these cells. Peg cells are shorter, have surface microvilli, and are located between the other epithelial cells. The presence of immune cells in the mucosa has also been reported, with the main type being CD8+ T-cells. Other cells found are B lymphocytes, macrophages, NK cells, and dendritic cells. The histological features of the tube vary along its length. The mucosa of the ampulla contains an extensive array of complex folds, whereas the relatively narrow isthmus has a thick muscular coat and simple mucosal folds. Development Embryos develop a genital ridge that forms at their tail end and eventually forms the basis for the urinary system and reproductive tracts. On either side of and to the front of this, around the sixth week, a duct develops called the paramesonephric duct, also called the Müllerian duct. A second duct, the mesonephric duct, develops adjacent to this. Both ducts become longer over the next two weeks, and around the eighth week the paramesonephric ducts cross to meet in the midline and fuse. One set of ducts then regresses; which set depends on whether the embryo is genetically female or male. 
In females, the paramesonephric duct remains, and eventually forms the female reproductive tract. The portions of the paramesonephric duct, which are more cranial—that is, further from the tail-end, end up forming the fallopian tubes. In males, because of the presence of the Y sex chromosome, anti-Müllerian hormone is produced. This leads to the degeneration of the paramesonephric duct. As the uterus develops, the part of the fallopian tubes closer to the uterus, the ampulla, becomes larger. Extensions from the fallopian tubes, the fimbriae, develop over time. Cell markers have been identified in the fimbriae, which suggests that their embryonic origin is different from that of the other tube segments. Apart from the presence of sex chromosomes, specific genes associated with the development of the fallopian tubes include the Wnt and Hox groups of genes, Lim1, Pax2, and Emx2. Embryos have two pairs of ducts that will let gametes out of the body when they are adults; the Müllerian ducts develop in females into the fallopian tubes, uterus, and vagina. Function Fertilization The fallopian tube allows the passage of an egg from the ovary to the uterus. When an oocyte is developing in an ovary, it is surrounded by a spherical collection of cells known as an ovarian follicle. Just before ovulation, the primary oocyte completes meiosis I to form the first polar body and a secondary oocyte, which is arrested in metaphase of meiosis II. At the time of ovulation in the menstrual cycle, the secondary oocyte is released from the ovary. The follicle and the ovary's wall rupture, allowing the secondary oocyte to escape. The secondary oocyte is caught by the fimbriated end of the fallopian tube and travels to the ampulla. Here, the egg is able to become fertilized with sperm. The ampulla is typically where the sperm are met and fertilization occurs; meiosis II is promptly completed. After fertilization, the ovum is now called a zygote and travels toward the uterus with the aid of the hairlike cilia and the activity of the muscle of the fallopian tube. The early embryo requires critical development in the fallopian tube. After about five days, the new embryo enters the uterine cavity and, on about the sixth day, begins to implant on the wall of the uterus. The release of an oocyte does not alternate between the two ovaries and seems to be random. After removal of an ovary, the remaining one produces an egg every month. Clinical significance Almost a third of cases of infertility are caused due to fallopian tube pathologies. These include inflammation, and tubal obstructions. A number of tubal pathologies cause damage to the cilia of the tube, which can impede movement of the sperm or egg. A number of sexually transmitted infections can lead to infertility. Inflammation Salpingitis is inflammation of the fallopian tubes and may be found alone, or with other pelvic inflammatory diseases (PIDs). A thickening of the fallopian tube at its narrow isthmus portion, due to inflammation, is known as salpingitis isthmica nodosa. Like another PID endometriosis, it may lead to fallopian tube obstruction. Fallopian tube obstruction may be a cause of infertility or ectopic pregnancy. Blockage or narrowing If a blocked fallopian tube has affected fertility, its repair where possible may increase the chances of becoming pregnant. Tubal obstruction can be proximal, distal or mid-segmental. Tubal obstruction is a major cause of infertility but full testing of tubal functions is not possible. 
However, the testing of patency – whether or not the tubes are open – can be carried out using hysterosalpingography, laparoscopy and dye, or hysterosalpingo-contrast sonography (HyCoSy). During surgery, the condition of the tubes may be inspected and a dye such as methylene blue can be injected into the uterus and shown to pass through the tubes when the cervix is occluded. As tubal disease is often related to Chlamydia infection, testing for Chlamydia antibodies has become a cost-effective screening device for tubal pathology. Ectopic pregnancy Occasionally the embryo implants outside of the uterus, creating an ectopic pregnancy. Most ectopic pregnancies occur in the fallopian tube, and are commonly known as tubal pregnancies. Surgery The surgical removal of a fallopian tube is called a salpingectomy. To remove both tubes is a bilateral salpingectomy. An operation that combines the removal of a fallopian tube with the removal of at least one ovary is a salpingo-oophorectomy. An operation to remove a fallopian tube obstruction is called a tuboplasty. A surgical procedure to permanently prevent conception is tubal ligation. Cancer Fallopian tube cancer, which typically arises from the epithelial lining of the fallopian tube, has historically been considered to be a very rare malignancy. Evidence suggests it probably represents a significant portion of what has previously been classified as ovarian cancer, as much as 80 per cent. These are classed as serous carcinomas, and are usually located in the fimbriated distal tube. Other In rare cases, a fallopian tube may prolapse into the vaginal canal following a hysterectomy. The swollen fimbriae can have the appearance of an adenocarcinoma. History The Greek doctor Herophilus, in his treatise on midwifery, points out the existence of the two ducts that he supposed transported "female semen". Later, Galen, writing in the 2nd century AD, described the paired ducts indicated by Herophilus as being connected to the uterus. In 1561, the Renaissance doctor Gabriele Falloppio published his book Observationes Anatomicae. Its contribution was a detailed description of the "tubes" of the uterus and their different portions, with the farthest (distal) end open towards the abdomen, and the other (proximal) end connected to the uterus. Though the name Fallopian tube is eponymous, it is often spelt with a lower case f from the assumption that the adjective fallopian has been absorbed into modern English as the name for the structure. The Merriam-Webster dictionary, for example, lists the term as fallopian tube, noting that it is often spelt Fallopian tube.
Biology and health sciences
Reproductive system
Biology
23681458
https://en.wikipedia.org/wiki/Recrystallization%20%28chemistry%29
Recrystallization (chemistry)
Recrystallization is a method used to purify chemicals by dissolving a mixture of a compound and its impurities in an appropriate solvent, usually with heating. As the solution of the crude product is allowed to cool passively, the desired compound crystallizes while the impurities remain dissolved, so the two can be separated. The newly formed crystals can then be subjected to X-ray analysis for purity assessment. Methods Single-solvent recrystallization The solvent utilized in single-solvent recrystallization must dissolve the crude reaction mixture only when it is heated to reflux. The heated solution is then passively cooled, yielding a crystallized product free of impurities. The solid crystals are then collected using a filtration apparatus and the filtrate is discarded. Product purity can then be assessed via NMR spectroscopy. Multi-solvent recrystallization Multi-solvent recrystallization relies on the crude product being soluble in one solvent, when it is heated to reflux, while being insoluble in a secondary solvent, regardless of the solvent's temperature. The volume ratio between the first and second solvent is critical. Too high a ratio of first to second solvent will keep the desired product permanently dissolved, while too low a ratio will lead to minimal recovery of pure crystals. The terms first and second refer to the solvents in which the crude product is soluble and insoluble, respectively. Typically, the second solvent, following the dissolution of the impure solid in the first solvent, is added slowly until the desired product begins to crystallize from solution. The solution is then cooled to further induce recrystallization. Hot filtration recrystallization Hot filtration recrystallization can be used to separate a pure compound from both impurities and some insoluble matter, which may be anything from a third-party impurity to fragments of broken glass. The technique makes use of the single-solvent system outlined above, dissolving the crude reaction mixture in a minimum amount of hot solvent before gravity-filtering the saturated solution to remove insoluble matter. The saturated solution is then allowed to cool passively, yielding pure crystals. X-ray analysis Recrystallized products are often subject to X-ray crystallography for purity assessment. The technique requires the crystallized product to be in the form of single crystals, free of clumps. Several approaches to growing such crystals are listed below. Slow evaporation of a single solvent - typically the compound is dissolved in a suitable solvent and the solvent is allowed to slowly evaporate. Once the solution is saturated, crystals can form. Slow evaporation of a multi-solvent system - the same as above; however, the solvent composition changes as the more volatile solvent evaporates. Because the compound is more soluble in the volatile solvent, it becomes increasingly insoluble in solution and crystallizes. Slow diffusion - similar to the above. However, a second solvent is allowed to evaporate from one container into a container holding the compound solution (gas diffusion). As the solvent composition changes due to an increase in the solvent that has gas diffused into the solution, the compound becomes increasingly insoluble in the solution and crystallizes. Interface/slow mixing (often performed in an NMR tube). Similar to the above, but instead of one solvent gas-diffusing into another, the two solvents mix (diffuse) by liquid-liquid diffusion. 
Typically a second solvent is "layered" carefully on top of the solution containing the compound. Over time the two solution mix. As the solvent composition changes due to diffusion, the compound becomes increasingly insoluble in solution and crystallizes, usually at the interface. Additionally, it is better to use a denser solvent as the lower layer, and/or a hotter solvent as the upper layer because this results in the slower mixing of the solvents. Specialized equipment can be used in the shape of an "H" to perform the above, where one of the vertical lines of the "H" is a tube containing a solution of the compound, and the other vertical line of the "H" is a tube containing a solvent which the compound is not soluble in, and the horizontal line of the "H" is a tube which joins the two vertical tubes, which also has a fine glass sinter that restricts the mixing of the two solvents. Once single perfect crystals have been obtained, it is recommended that the crystals are kept in a sealed vessel with some of the liquid of crystallization to prevent the crystal from 'drying out'. Single perfect crystals may contain solvent of crystallization in the crystal lattice. Loss of this internal solvent from the crystals can result in the crystal lattice breaking down, and the crystals turning to powder.
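Whether crystals are grown by cooling, as in the single-solvent method described earlier, or by slowly changing the solvent composition, the attainable recovery is set by the difference in solubility between the starting and final conditions. Below is a minimal sketch of that calculation for the simple cooling case, using made-up solubility figures rather than data for any real compound and solvent pair.

```python
# Estimated recovery from a single-solvent recrystallization.
# Solubilities are hypothetical illustration values (g of compound per
# 100 mL of solvent), not data for any real compound/solvent pair.
def recrystallization_recovery(mass_g: float,
                               solvent_ml: float,
                               solubility_hot: float = 20.0,   # g/100 mL at reflux (assumed)
                               solubility_cold: float = 2.0):  # g/100 mL in an ice bath (assumed)
    dissolved_hot = min(mass_g, solubility_hot * solvent_ml / 100)
    stays_dissolved_cold = solubility_cold * solvent_ml / 100
    crystals = max(0.0, dissolved_hot - stays_dissolved_cold)
    return crystals, crystals / mass_g

# 10 g of crude dissolved in the minimum amount of hot solvent (50 mL here):
crystals, fraction = recrystallization_recovery(10.0, 50.0)
print(f"recovered {crystals:.1f} g ({fraction:.0%}) after cooling")
# Using more solvent than necessary lowers the recovery:
print(recrystallization_recovery(10.0, 200.0))
```

The second call illustrates why the crude is dissolved in the minimum amount of hot solvent: extra solvent keeps more of the product dissolved at the cold temperature and lowers the recovery.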
Physical sciences
Other separations
Chemistry
23682291
https://en.wikipedia.org/wiki/Cam%20follower
Cam follower
A cam follower, also known as a track follower, is a specialized type of roller or needle bearing designed to follow cam lobe profiles. Cam followers come in a vast array of different configurations, however the most defining characteristic is how the cam follower mounts to its mating part; stud style cam followers use a stud while the yoke style has a hole through the middle. Construction The modern stud type follower was invented and patented in 1937 by Thomas L. Robinson of the McGill Manufacturing Company. It replaced using a standard bearing and bolt. The new cam followers were easier to use because the stud was already included and they could also handle higher loads. While roller cam followers are similar to roller bearings, there are quite a few differences. Standard ball and roller bearings are designed to be pressed into a rigid housing, which provides circumferential support. This keeps the outer race from deforming, so the race cross-section is relatively thin. In the case of cam followers the outer race is loaded at a single point, so the outer race needs a thicker cross-section to reduce deformation. However, in order to facilitate this the roller diameter must be decreased, which also decreases the dynamic bearing capacity. End plates are used to contain the needles or bearing axially. On stud style followers one of the end plates is integrated into the inner race/stud; the other is pressed onto the stud up to a shoulder on the inner race. The inner race is induction hardened so that the stud remains soft if modifications need to be made. On yoke style followers the end plates are peened or pressed onto the inner race or liquid metal injected onto the inner race. The inner race is either induction hardened or through hardened. Another difference is that a lubrication hole is provided to relubricate the follower periodically. A hole is provided at both ends of the stud for lubrication. They also usually have a black oxide finish to help reduce corrosion. Types There are many different types of cam followers available. Anti-friction element The most common anti-friction element employed is a full complement of needle rollers. This design can withstand high radial loads but no thrust loads. A similar design is the caged needle roller design, which also uses needle rollers, but uses a cage to keep them separated. This design allows for higher speeds but decreases the load capacity. The cage also increases internal space so it can hold more lubrication, which increases the time between relubrications. Depending on the exact design sometimes two rollers are put in each pocket of the cage, using a cage design originated by RBC Bearings in 1971. For heavy-duty applications a roller design can be used. This employs two rows of rollers of larger diameter than used in needle roller cam followers to increase the dynamic load capacity and provide some thrust capabilities. This design can support higher speeds than the full complement design. For light-duty applications a bushing type follower can be used. Instead of using a type of a roller a plastic bushing is used to reduce friction, which provides a maintenance free follower. The disadvantage is that it can only support light loads, slow speeds, no thrust loads, and the temperature limit is . A bushing type stud follower can only support approximately 25% of the load of a roller type stud follower, while the heavy and yoke followers can handle 50%. Also all-metallic heavy-duty bushing type followers exist. 
Shape The outer diameter (OD) of the cam follower (stud or yoke) can be the standard cylindrical shape or be crowned. Crowned cam followers are used to keep the load evenly distributed if it deflects or if there is any misalignment between the follower and the followed surface. They are also used in turntable type applications to reduce skidding. Crowned followers can compensate for up to 0.5° of misalignment, while a cylindrical OD can only tolerate 0.06°. The only disadvantage is that they cannot bear as much load because of higher stresses. Stud Stud style cam followers usually have a standard sized stud, but a heavy stud is available for increased static load capacity. Drives The standard driving system for a stud type cam follower is a slot, for use with a flat head screwdriver. However, hex sockets are available for higher torquing ability, which is especially useful for eccentric cam followers and those used in blind holes. Hex socket cam followers from most manufacturers eliminate the relubrication capability on that end of the cam follower. RBC Bearings' Hexlube cam followers feature a relubrication fitting at the bottom of the hex socket. Eccentricity Stud type cam followers are available with an eccentric stud. The stud has a bushing pushed onto it that has an eccentric outer diameter. This allows for adjustability during installation to eliminate any backlash. The adjustable range for an eccentric bearing is twice that of the eccentricity. Yoke YOKE type cam followers are usually used in applications where minimal deflection is required, as they can be supported on both sides. They can support the same static load as a heavy stud follower. Track followers All cam followers can be track followers, but not all track followers are cam followers. Some track followers have specially shaped outer diameters (OD) to follow tracks. For example, track followers are available with a V-groove for following a V-track, or the OD can have a flange to follow the lip of the track. Specialized track followers are also designed to withstand thrust loads so the anti-friction elements are usually bearing balls or of a tapered roller bearing construction.
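Two of the rules of thumb above can be expressed directly: an eccentric stud gives a total adjustment range of twice its eccentricity, since the outer diameter's centre can be rotated anywhere from -e to +e relative to the mounting hole, and a bushing-type stud follower is rated at roughly a quarter of the load of a comparable roller type (heavy-stud and yoke bushings at roughly half). A small sketch with hypothetical example values:

```python
# Rules of thumb from the text, with hypothetical example values.
def adjustment_range(eccentricity_mm: float) -> float:
    """Total adjustment range of an eccentric cam follower: the OD centre can
    sit anywhere from -e to +e relative to the stud axis, i.e. 2 * e."""
    return 2 * eccentricity_mm

def derated_capacity(roller_capacity_n: float, style: str) -> float:
    """Approximate load capacity relative to a roller-type stud follower
    (bushing stud ~25%, bushing heavy-stud/yoke ~50%, per the text)."""
    factors = {"roller": 1.0, "bushing_stud": 0.25, "bushing_heavy_or_yoke": 0.50}
    return roller_capacity_n * factors[style]

print(adjustment_range(0.5))                     # 1.0 mm of take-up for 0.5 mm eccentricity
print(derated_capacity(10_000, "bushing_stud"))  # ~2500 N for a hypothetical 10 kN roller rating
```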
Technology
Mechanisms
null
10630377
https://en.wikipedia.org/wiki/Stratigraphic%20section
Stratigraphic section
A stratigraphic section is a sequence of layers of rock in the order in which they were deposited. It is based on the principle of original horizontality, which states that layers of sediment are originally deposited horizontally under the action of gravity. Biostratigraphers estimate the age of stratigraphic sections by using the faunal assemblages contained within rock samples from outcrop and drill cores. Geochronologists precisely date rocks within the stratigraphic section to provide better absolute bounds on the timing and rates of deposition. Magnetic stratigraphers look for signs of magnetic reversals in igneous rock units within the drill cores. Other scientists perform stable-isotope studies on the rocks to gain information about past climate. Stratigraphic sections can also be used to locate areas for water, coal, and hydrocarbon extraction, particularly petroleum and natural gas. A Global Boundary Stratotype Section and Point (GSSP) is an internationally agreed-upon reference point on a stratigraphic section which defines the lower boundaries of stages on the geologic time scale. (Recently, GSSPs have also been used to define the base of a system.)
Physical sciences
Stratigraphy
Earth science
10638682
https://en.wikipedia.org/wiki/Infrared%20telescope
Infrared telescope
An infrared telescope is a telescope that uses infrared light to detect celestial bodies. Infrared light is one of several types of radiation present in the electromagnetic spectrum. All celestial objects with a temperature above absolute zero emit some form of electromagnetic radiation. In order to study the universe, scientists use several different types of telescopes to detect these different types of emitted radiation in the electromagnetic spectrum. Some of these are gamma ray, x-ray, ultra-violet, regular visible light (optical), as well as infrared telescopes. Leading discoveries There were several key developments that led to the invention of the infrared telescope: In 1800, William Herschel discovered infrared radiation. In 1878, Samuel Pierpoint Langley created the first bolometer. This was a very sensitive instrument that could electrically detect incredibly small changes in temperature in the infrared spectrum. Thomas Edison used an alternative technology, his tasimeter, to measure heat in the Sun's corona during the solar eclipse of July 29, 1878. In the 1950s, scientists used lead-sulfide detectors to detect the infrared radiation from space. These detectors were cooled with liquid nitrogen. Between 1959 and 1961, Harold Johnson created near-infrared photometers which allowed scientists to measure thousands of stars. In 1961, Frank Low invented the first germanium bolometer. This invention, cooled by liquid helium, led the way for current infrared telescope development. Infrared telescopes may be ground-based, air-borne, or space telescopes. They contain an infrared camera with a special solid-state infrared detector which must be cooled to cryogenic temperatures. Ground-based telescopes were the first to be used to observe outer space in infrared. Their popularity increased in the mid-1960s. Ground-based telescopes have limitations because water vapor in the Earth's atmosphere absorbs infrared radiation. Ground-based infrared telescopes tend to be placed on high mountains and in very dry climates to improve visibility. In the 1960s, scientists used balloons to lift infrared telescopes to higher altitudes. With balloons, they were able to reach about up. In 1967, infrared telescopes were placed on rockets. These were the first air-borne infrared telescopes. Since then, aircraft like the Kuiper Airborne Observatory (KAO) have been adapted to carry infrared telescopes. A more recent air-borne infrared telescope to reach the stratosphere was NASA's Stratospheric Observatory for Infrared Astronomy (SOFIA) in May 2010. Together, United States scientists and the German Aerospace Center scientists placed a 17-ton infrared telescope on a Boeing 747 jet airplane. Placing infrared telescopes in space eliminates the interference from the Earth's atmosphere. One of the most significant infrared telescope projects was the Infrared Astronomical Satellite (IRAS) that launched in 1983. It revealed information about other galaxies, as well as information about the center of our galaxy the Milky Way. NASA presently has solar-powered spacecraft in space with an infrared telescope called the Wide-field Infrared Survey Explorer (WISE). It was launched on December 14, 2009. Selective comparison The wavelength of visible light is about 0.4 μm to 0.7 μm, and 0.75 μm to 1000 μm (1 mm) is a typical range for infrared astronomy, far-infrared astronomy, to submillimetre astronomy. 
Infrared telescopes
Ground-based:
- Infrared Telescope Facility, Hawaii, 1979–
- Gornergrat Infrared Telescope, 1979–2005
- Infrared Optical Telescope Array, 1988–2006
- United Kingdom Infrared Telescope, 1979–
- Wyoming Infrared Observatory, 1977–
Airborne:
- Kuiper Airborne Observatory (KAO), 1974–1995
- Stratospheric Observatory for Infrared Astronomy (SOFIA), 2010–2022
Space-based:
- Infrared Astronomical Satellite, 1983
- Spitzer Space Telescope, 2003–2020
- Herschel Space Observatory, 2009–2013
- Wide-field Infrared Survey Explorer (WISE), 2009–
- Nancy Grace Roman Space Telescope (formerly WFIRST)
- James Webb Space Telescope (JWST), 2021–
- Euclid, 2023–
Technology
Telescope
null
23859945
https://en.wikipedia.org/wiki/N-body%20problem
N-body problem
In physics, the -body problem is the problem of predicting the individual motions of a group of celestial objects interacting with each other gravitationally. Solving this problem has been motivated by the desire to understand the motions of the Sun, Moon, planets, and visible stars. In the 20th century, understanding the dynamics of globular cluster star systems became an important -body problem. The -body problem in general relativity is considerably more difficult to solve due to additional factors like time and space distortions. The classical physical problem can be informally stated as the following: The two-body problem has been completely solved and is discussed below, as well as the famous restricted three-body problem. History Knowing three orbital positions of a planet's orbit – positions obtained by Sir Isaac Newton from astronomer John Flamsteed – Newton was able to produce an equation by straightforward analytical geometry, to predict a planet's motion; i.e., to give its orbital properties: position, orbital diameter, period and orbital velocity. Having done so, he and others soon discovered over the course of a few years, those equations of motion did not predict some orbits correctly or even very well. Newton realized that this was because gravitational interactive forces amongst all the planets were affecting all their orbits. The aforementioned revelation strikes directly at the core of what the n-body issue physically is: as Newton understood, it is not enough to just provide the beginning location and velocity, or even three orbital positions, in order to establish a planet's actual orbit; one must also be aware of the gravitational interaction forces. Thus came the awareness and rise of the -body "problem" in the early 17th century. These gravitational attractive forces do conform to Newton's laws of motion and to his law of universal gravitation, but the many multiple (-body) interactions have historically made any exact solution intractable. Ironically, this conformity led to the wrong approach. After Newton's time the -body problem historically was not stated correctly because it did not include a reference to those gravitational interactive forces. Newton does not say it directly but implies in his Principia the -body problem is unsolvable because of those gravitational interactive forces. Newton said in his Principia, paragraph 21: Newton concluded via his third law of motion that "according to this Law all bodies must attract each other." This last statement, which implies the existence of gravitational interactive forces, is key. As shown below, the problem also conforms to Jean Le Rond D'Alembert's non-Newtonian first and second Principles and to the nonlinear -body problem algorithm, the latter allowing for a closed form solution for calculating those interactive forces. The problem of finding the general solution of the -body problem was considered very important and challenging. Indeed, in the late 19th century King Oscar II of Sweden, advised by Gösta Mittag-Leffler, established a prize for anyone who could find the solution to the problem. The announcement was quite specific: In case the problem could not be solved, any other important contribution to classical mechanics would then be considered to be prizeworthy. The prize was awarded to Poincaré, even though he did not solve the original problem. (The first version of his contribution even contained a serious error.) 
The version finally printed contained many important ideas which led to the development of chaos theory. The problem as stated originally was finally solved by Karl Fritiof Sundman for n = 3 and generalized to n > 3 by L. K. Babadzanjanz and Qiudong Wang. General formulation The n-body problem considers n point masses $m_i$, $i = 1, 2, \dots, n$, in an inertial reference frame in three-dimensional space moving under the influence of mutual gravitational attraction. Each mass has a position vector $\mathbf{q}_i$. Newton's second law says that mass times acceleration, $m_i\,\mathrm{d}^2\mathbf{q}_i/\mathrm{d}t^2$, is equal to the sum of the forces on the mass. Newton's law of gravity says that the gravitational force felt on mass $m_i$ by a single mass $m_j$ is given by $$\mathbf{F}_{ij} = \frac{G m_i m_j\,(\mathbf{q}_j - \mathbf{q}_i)}{\lVert \mathbf{q}_j - \mathbf{q}_i \rVert^{3}},$$ where $G$ is the gravitational constant and $\lVert \mathbf{q}_j - \mathbf{q}_i \rVert$ is the magnitude of the distance between $\mathbf{q}_i$ and $\mathbf{q}_j$ (metric induced by the $L^2$ norm). Summing over all masses yields the n-body equations of motion: $$m_i \frac{\mathrm{d}^2\mathbf{q}_i}{\mathrm{d}t^2} = \sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{G m_i m_j\,(\mathbf{q}_j - \mathbf{q}_i)}{\lVert \mathbf{q}_j - \mathbf{q}_i \rVert^{3}} = -\frac{\partial U}{\partial \mathbf{q}_i},$$ where $U$ is the self-potential energy $$U = -\sum_{1 \le i < j \le n} \frac{G m_i m_j}{\lVert \mathbf{q}_j - \mathbf{q}_i \rVert}.$$ Defining the momentum to be $\mathbf{p}_i = m_i\,\mathrm{d}\mathbf{q}_i/\mathrm{d}t$, Hamilton's equations of motion for the n-body problem become $$\frac{\mathrm{d}\mathbf{q}_i}{\mathrm{d}t} = \frac{\partial H}{\partial \mathbf{p}_i}, \qquad \frac{\mathrm{d}\mathbf{p}_i}{\mathrm{d}t} = -\frac{\partial H}{\partial \mathbf{q}_i},$$ where the Hamiltonian function is $H = T + U$ and $T$ is the kinetic energy $$T = \sum_{i=1}^{n} \frac{\lVert \mathbf{p}_i \rVert^{2}}{2 m_i}.$$ Hamilton's equations show that the n-body problem is a system of $6n$ first-order differential equations, with $6n$ initial conditions: $3n$ initial position coordinates and $3n$ initial momentum values. Symmetries in the n-body problem yield global integrals of motion that simplify the problem. Translational symmetry of the problem results in the center of mass $$\mathbf{C} = \frac{\sum_{i=1}^{n} m_i \mathbf{q}_i}{\sum_{i=1}^{n} m_i}$$ moving with constant velocity, so that $\mathbf{C}(t) = \mathbf{L}_0 t + \mathbf{C}_0$, where $\mathbf{L}_0$ is the linear velocity and $\mathbf{C}_0$ is the initial position. The constants of motion $\mathbf{L}_0$ and $\mathbf{C}_0$ represent six integrals of the motion. Rotational symmetry results in the total angular momentum $$\mathbf{A} = \sum_{i=1}^{n} \mathbf{q}_i \times \mathbf{p}_i$$ being constant, where × is the cross product. The three components of the total angular momentum yield three more constants of the motion. The last general constant of the motion is given by the conservation of energy $H$. Hence, every n-body problem has ten integrals of motion. Because $T$ and $U$ are homogeneous functions of degree 2 and −1, respectively, the equations of motion have a scaling invariance: if $\mathbf{q}_i(t)$ is a solution, then so is $\lambda^{-2/3}\,\mathbf{q}_i(\lambda t)$ for any $\lambda > 0$. The moment of inertia of an n-body system is given by $$I = \sum_{i=1}^{n} m_i\,\lVert \mathbf{q}_i \rVert^{2}$$ and the virial is given by $Q = \tfrac{1}{2}\,\mathrm{d}I/\mathrm{d}t$. Then the Lagrange–Jacobi formula states that $$\frac{\mathrm{d}^{2}I}{\mathrm{d}t^{2}} = 4T + 2U.$$ For systems in dynamic equilibrium, the long-term time average of $\mathrm{d}^{2}I/\mathrm{d}t^{2}$ is zero. Then on average the total kinetic energy is half the total potential energy, $\langle T \rangle = -\tfrac{1}{2}\langle U \rangle$, which is an example of the virial theorem for gravitational systems. If $M$ is the total mass and $R$ a characteristic size of the system (for example, the radius containing half the mass of the system), then the critical time for a system to settle down to a dynamic equilibrium is $$t_{\mathrm{cr}} = \sqrt{\frac{R^{3}}{G M}}.$$ Special cases Two-body problem Any discussion of planetary interactive forces has always started historically with the two-body problem. The purpose of this section is to relate the real complexity in calculating any planetary forces. Note that several subjects treated here and in the following section (the three-body problem), such as gravity, the barycenter, and Kepler's laws, are discussed on other Wikipedia pages; here they are considered from the perspective of the n-body problem. The two-body problem (n = 2) was completely solved by Johann Bernoulli (1667–1748) by classical theory (and not by Newton) by assuming the main point-mass was fixed; this is outlined here.
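As a concrete illustration of the equations of motion above, the following minimal Python sketch integrates an n-body system by direct pairwise summation with a symplectic leapfrog step. It is illustrative only and not part of the original article; the function names, the SI units, and the Sun–Earth example values are assumptions chosen here.

```python
import numpy as np

G = 6.674e-11  # gravitational constant in SI units (m^3 kg^-1 s^-2)

def accelerations(q, m):
    """Direct summation: a_i = sum_{j != i} G m_j (q_j - q_i) / |q_j - q_i|^3."""
    n = len(m)
    a = np.zeros_like(q)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = q[j] - q[i]
                a[i] += G * m[j] * r / np.linalg.norm(r) ** 3
    return a

def leapfrog(q, v, m, dt, steps):
    """Kick-drift-kick leapfrog; symplectic, so it approximately conserves H = T + U."""
    a = accelerations(q, m)
    for _ in range(steps):
        v = v + 0.5 * dt * a   # half kick
        q = q + dt * v         # drift
        a = accelerations(q, m)
        v = v + 0.5 * dt * a   # half kick
    return q, v

# Example: a Sun-Earth two-body system integrated for one year in daily steps.
m = np.array([1.989e30, 5.972e24])                      # masses in kg
q = np.array([[0.0, 0.0, 0.0], [1.496e11, 0.0, 0.0]])   # positions in m
v = np.array([[0.0, 0.0, 0.0], [0.0, 2.978e4, 0.0]])    # velocities in m/s
q, v = leapfrog(q, v, m, dt=86400.0, steps=365)
```

Because the force on each body is summed over all other bodies, the cost per step grows as O(n²), which is one reason large simulations such as globular clusters rely on tree or mesh approximations rather than direct summation.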
Consider then the motion of two bodies, say the Sun and the Earth, with the Sun fixed, then: The equation describing the motion of mass relative to mass is readily obtained from the differences between these two equations and after canceling common terms gives: Where is the vector position of relative to ; is the Eulerian acceleration ; . The equation is the fundamental differential equation for the two-body problem Bernoulli solved in 1734. Notice for this approach forces have to be determined first, then the equation of motion resolved. This differential equation has elliptic, or parabolic or hyperbolic solutions. It is incorrect to think of (the Sun) as fixed in space when applying Newton's law of universal gravitation, and to do so leads to erroneous results. The fixed point for two isolated gravitationally interacting bodies is their mutual barycenter, and this two-body problem can be solved exactly, such as using Jacobi coordinates relative to the barycenter. Dr. Clarence Cleminshaw calculated the approximate position of the Solar System's barycenter, a result achieved mainly by combining only the masses of Jupiter and the Sun. Science Program stated in reference to his work: The Sun wobbles as it rotates around the Galactic Center, dragging the Solar System and Earth along with it. What mathematician Kepler did in arriving at his three famous equations was curve-fit the apparent motions of the planets using Tycho Brahe's data, and not curve-fitting their true circular motions about the Sun (see Figure). Both Robert Hooke and Newton were well aware that Newton's Law of Universal Gravitation did not hold for the forces associated with elliptical orbits. In fact, Newton's Universal Law does not account for the orbit of Mercury, the asteroid belt's gravitational behavior, or Saturn's rings. Newton stated (in section 11 of the Principia) that the main reason, however, for failing to predict the forces for elliptical orbits was that his math model was for a body confined to a situation that hardly existed in the real world, namely, the motions of bodies attracted toward an unmoving center. Some present physics and astronomy textbooks do not emphasize the negative significance of Newton's assumption and end up teaching that his mathematical model is in effect reality. It is to be understood that the classical two-body problem solution above is a mathematical idealization.
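In standard textbook notation (the symbols here are chosen for illustration and are not taken from the article: $\mathbf{r}$ is the position of $m_2$ relative to $m_1$ and $\mu = G(m_1 + m_2)$), the fundamental differential equation of the two-body problem described above reads:

```latex
% Standard relative-motion form of the two-body problem (notation chosen here):
% r = q_2 - q_1, mu = G(m_1 + m_2), r = ||r||.
\ddot{\mathbf{r}} \;=\; -\,\frac{G\,(m_1 + m_2)}{\lVert \mathbf{r} \rVert^{3}}\,\mathbf{r}
\;\equiv\; -\,\frac{\mu}{r^{3}}\,\mathbf{r}
```

Its solutions are conic sections, namely ellipses, parabolas, or hyperbolas, according to whether the total orbital energy is negative, zero, or positive, matching the elliptic, parabolic, and hyperbolic cases mentioned above.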
Physical sciences
Classical mechanics
Physics
125024
https://en.wikipedia.org/wiki/Tevatron
Tevatron
The Tevatron was a circular particle accelerator (active until 2011) in the United States, at the Fermi National Accelerator Laboratory (called Fermilab), east of Batavia, Illinois, and was the highest energy particle collider until the Large Hadron Collider (LHC) of the European Organization for Nuclear Research (CERN) was built near Geneva, Switzerland. The Tevatron was a synchrotron that accelerated protons and antiprotons in a circumference ring to energies of up to 1 TeV, hence its name. The Tevatron was completed in 1983 at a cost of $120 million and significant upgrade investments were made during its active years of 1983–2011. The main achievement of the Tevatron was the discovery in 1995 of the top quark—the last fundamental fermion predicted by the Standard Model of particle physics. On July 2, 2012, scientists of the CDF and DØ collider experiment teams at Fermilab announced the findings from the analysis of around 500 trillion collisions produced from the Tevatron collider since 2001, and found that the existence of the suspected Higgs boson was highly likely with a confidence of 99.8%, later improved to over 99.9%. The Tevatron ceased operations on 30 September 2011, due to budget cuts and because of the completion of the LHC, which began operations in early 2010 and is far more powerful (planned energies were two 7 TeV beams at the LHC compared to 1 TeV at the Tevatron). The main ring of the Tevatron will probably be reused in future experiments, and its components may be transferred to other particle accelerators. History December 1, 1968, saw the breaking of ground for the linear accelerator (linac). The construction of the Main Accelerator Enclosure began on October 3, 1969, when the first shovel of earth was turned by Robert R. Wilson, NAL's director. This would become the 6.3 km circumference Fermilab's Main Ring. The linac first 200 MeV beam started on December 1, 1970. The booster first 8 GeV beam was produced on May 20, 1971. On June 30, 1971, a proton beam was guided for the first time through the entire National Accelerator Laboratory accelerator system including the Main Ring. The beam was accelerated to only 7 GeV. Back then, the Booster Accelerator took 200 MeV protons from the Linac and "boosted" their energy to 8 billion electron volts. They were then injected into the Main Accelerator. On the same year before the completion of the Main Ring, Wilson testified to the Joint Committee on Atomic Energy on March 9, 1971, that it was feasible to achieve a higher energy by using superconducting magnets. He also suggested that it could be done by using the same tunnel as the main ring and the new magnets would be installed in the same locations to be operated in parallel to the existing magnets of the Main Ring. That was the starting point of the Tevatron project. The Tevatron was in research and development phase between 1973 and 1979 while the acceleration at the Main Ring continued to be enhanced. A series of milestones saw acceleration rise to 20 GeV on January 22, 1972, to 53 GeV on February 4 and to 100 GeV on February 11. On March 1, 1972, the then NAL accelerator system accelerated for the first time a beam of protons to its design energy of 200 GeV. By the end of 1973, NAL's accelerator system operated routinely at 300 GeV. On 14 May 1976 Fermilab took its protons all the way to 500 GeV. This achievement provided the opportunity to introduce a new energy scale, the teraelectronvolt (TeV), equal to 1000 GeV. 
On 17 June of that year, the European Super Proton Synchrotron accelerator (SPS) had achieved an initial circulating proton beam (with no accelerating radio-frequency power) of only 400 GeV. The conventional-magnet Main Ring was shut down in 1981 for installation of superconducting magnets underneath it. The Main Ring continued to serve as an injector for the Tevatron until the Main Injector was completed west of the Main Ring in 2000. The 'Energy Doubler', as it was known then, produced its first accelerated beam—512 GeV—on July 3, 1983. Its initial energy of 800 GeV was achieved on February 16, 1984. On October 21, 1986, acceleration at the Tevatron was pushed to 900 GeV, providing a first proton–antiproton collision at 1.8 TeV on November 30, 1986. The Main Injector, which replaced the Main Ring, was the most substantial addition, built over six years from 1993 at a cost of $290 million. Tevatron collider Run II began on March 1, 2001, after successful completion of that facility upgrade. From then on, the beam was capable of delivering an energy of 980 GeV. On July 16, 2004, the Tevatron achieved a new peak luminosity, breaking the record previously held by the old European Intersecting Storage Rings (ISR) at CERN. That Fermilab record was doubled on September 9, 2006, then more than tripled on March 17, 2008, and ultimately multiplied by a factor of 4 over the previous 2004 record on April 16, 2010 (up to 4×10³² cm⁻² s⁻¹). The Tevatron ceased operations on 30 September 2011. By the end of 2011, the Large Hadron Collider (LHC) at CERN had achieved a luminosity almost ten times higher than the Tevatron's (3.65×10³³ cm⁻² s⁻¹) and a beam energy of 3.5 TeV per beam (attained since March 18, 2010), already ~3.6 times the capability of the Tevatron (0.98 TeV). Mechanics The acceleration occurred in a number of stages. The first stage was the 750 keV Cockcroft–Walton pre-accelerator, which ionized hydrogen gas and accelerated the resulting negative ions using a positive voltage. The ions then passed into the 150-meter-long linear accelerator (linac), which used oscillating electrical fields to accelerate them to 400 MeV. The ions then passed through a carbon foil to remove the electrons, and the resulting protons moved into the Booster. The Booster was a small circular synchrotron, around which the protons passed up to 20,000 times to attain an energy of around 8 GeV. From the Booster the particles were fed into the Main Injector, which had been completed in 1999 to perform a number of tasks. It could accelerate protons up to 150 GeV; produce 120 GeV protons for antiproton creation; increase antiproton energy to 150 GeV; and inject protons or antiprotons into the Tevatron. The antiprotons were created by the Antiproton Source: 120 GeV protons were collided with a nickel target, producing a range of particles including antiprotons, which could be collected and stored in the accumulator ring. The ring could then pass the antiprotons to the Main Injector. The Tevatron could accelerate the particles from the Main Injector up to 980 GeV. The protons and antiprotons were accelerated in opposite directions, crossing paths in the CDF and DØ detectors to collide at 1.96 TeV. To hold the particles on track, the Tevatron used 774 niobium–titanium superconducting dipole magnets cooled in liquid helium, producing a field strength of 4.2 tesla. The field ramped over about 20 seconds as the particles accelerated. Another 240 NbTi quadrupole magnets were used to focus the beam.
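As a rough back-of-the-envelope illustration (not from the article) of how the quoted beam energy and dipole field fit together, the standard approximation p [GeV/c] ≈ 0.3 · B [T] · ρ [m] for a highly relativistic particle gives the magnetic bending radius. The figures below are taken from the numbers quoted above and are only a consistency check.

```python
# Rough consistency check: bending radius implied by 980 GeV protons in 4.2 T dipoles,
# using the standard approximation p[GeV/c] ~= 0.3 * B[T] * rho[m].
import math

p_gev = 980.0        # beam momentum, ~ energy for an ultrarelativistic proton
b_tesla = 4.2        # quoted dipole field strength
rho = p_gev / (0.3 * b_tesla)                     # magnetic bending radius, ~780 m

circumference_m = 6.28e3                          # ~6.28 km ring (History section)
geometric_radius = circumference_m / (2 * math.pi)  # ~1000 m

print(f"bending radius ~ {rho:.0f} m, geometric radius ~ {geometric_radius:.0f} m")
# The bending radius is smaller than the geometric radius because dipoles occupy
# only part of the circumference; the rest holds quadrupoles, RF cavities, etc.
```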
The initial design luminosity of the Tevatron was 10³⁰ cm⁻² s⁻¹; however, following upgrades, the accelerator was able to deliver luminosities up to 4×10³² cm⁻² s⁻¹. On September 27, 1993, the cryogenic cooling system of the Tevatron Accelerator was named an International Historic Landmark by the American Society of Mechanical Engineers. The system, which provided cryogenic liquid helium to the Tevatron's superconducting magnets, was the largest low-temperature system in existence upon its completion in 1978. It kept the coils of the magnets, which bent and focused the particle beam, in a superconducting state, so that they consumed only ⅓ of the power they would have required at normal temperatures. Discoveries The Tevatron confirmed the existence of several subatomic particles that were predicted by theoretical particle physics, or gave hints of their existence. In 1995, the CDF and DØ experiment collaborations announced the discovery of the top quark, and by 2007 they had measured its mass (172 GeV) to a precision of nearly 1%. In 2006, the CDF collaboration reported the first measurement of Bs oscillations, and observation of two types of sigma baryons. In 2007, the DØ and CDF collaborations reported direct observation of the "Cascade B" () Xi baryon. In September 2008, the DØ collaboration reported detection of the , a "double strange" Omega baryon with a measured mass significantly higher than the quark model prediction. In May 2009, the CDF collaboration made public its results from a search for , based on analysis of a data sample roughly four times larger than the one used by the DØ experiment. The mass measurements from the CDF experiment were and in excellent agreement with Standard Model predictions, and no signal was observed at the value previously reported by the DØ experiment. The two inconsistent results from DØ and CDF differ by or by 6.2 standard deviations. Due to the excellent agreement between the mass measured by CDF and the theoretical expectation, it is a strong indication that the particle discovered by CDF is indeed the . It is anticipated that new data from LHC experiments will clarify the situation in the near future. On July 2, 2012, two days before a scheduled announcement at the Large Hadron Collider (LHC), scientists at the Tevatron collider from the CDF and DØ collaborations announced their findings from the analysis of around 500 trillion collisions produced since 2001: they found that the existence of the Higgs boson was likely, with a mass in the region of 115 to 135 GeV. The statistical significance of the observed signal was 2.9 sigma, which meant that there was only a 1-in-550 chance that a signal of that magnitude would have occurred if no particle in fact existed with those properties. The final analysis of data from the Tevatron did not, however, settle the question of whether the Higgs particle exists. Only when scientists from the Large Hadron Collider announced the more precise LHC results on July 4, 2012, with a mass of 125.3 ± 0.4 GeV (CMS) or 126 ± 0.4 GeV (ATLAS) respectively, was there strong evidence, through consistent measurements by the LHC and the Tevatron, for the existence of a Higgs particle in that mass range.
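The "1-in-550" figure quoted above follows from converting the 2.9-sigma significance into a one-sided Gaussian tail probability. A minimal sketch of that conversion (illustrative only, using only the standard library) is:

```python
# Convert a significance quoted in sigma into a one-sided Gaussian tail probability,
# using the identity 1 - Phi(x) = 0.5 * erfc(x / sqrt(2)).
import math

sigma = 2.9
p_value = 0.5 * math.erfc(sigma / math.sqrt(2))   # ~0.0019
print(f"p ~ {p_value:.4f}, i.e. roughly 1 in {1 / p_value:.0f}")
# Gives roughly 1 in 540, consistent with the "1-in-550" chance quoted in the text
# (the exact figure depends on how the significance was rounded).
```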
Therefore, tiltmeters were installed on Tevatron's magnets to monitor minute movements and to help identify the cause of problems quickly. The first known earthquake to disrupt the beam was the 2002 Denali earthquake, with another collider shutdown caused by a moderate local quake on June 28, 2004. Since then, the minute seismic vibrations emanating from over 20 earthquakes were detected at the Tevatron without a shutdown including the 2004 Indian Ocean earthquake, the 2005 Nias–Simeulue earthquake, New Zealand's 2007 Gisborne earthquake, the 2010 Haiti earthquake and the 2010 Chile earthquake.
Physical sciences
Particle physics: General
null
125274
https://en.wikipedia.org/wiki/SkyTrain%20%28Vancouver%29
SkyTrain (Vancouver)
SkyTrain is the medium-capacity rapid transit system serving the Metro Vancouver region in British Columbia, Canada. SkyTrain has of track and uses fully automated trains on grade-separated tracks running on underground and elevated guideways, allowing SkyTrain to hold consistently high on-time reliability. In , the system had a ridership of , or about per weekday as of . The name "SkyTrain" was coined for the system during Expo 86 because the first line (Expo) principally runs on elevated guideway outside of Downtown Vancouver, providing panoramic views of the metropolitan area. SkyTrain uses the world's third-longest cable-supported transit-only bridge, known as SkyBridge, to cross the Fraser River. With the opening of the Evergreen Extension on December 2, 2016, SkyTrain became the longest rapid transit system in Canada and the longest fully automated driverless system in the world. The total lengths of the automated lines of the Shanghai Metro, Singapore MRT, Kuala Lumpur Rapid KL and Dubai Metro have since surpassed those of SkyTrain. SkyTrain has 54 stations served by three lines: the Expo Line, the Millennium Line, and the Canada Line. The Expo and Millennium Lines are operated by British Columbia Rapid Transit Company under contract from TransLink (originally BC Transit), a regional government transportation agency. The Canada Line is operated on the same principles by the private concessionaire ProTrans BC under contract to TransLink and is an integrated part of the regional transport system. SkyTrain uses a fare system shared with other local transit services and is policed by the Metro Vancouver Transit Police. SkyTrain attendants (STAs) provide first aid, emergency response, directions and customer service, inspect fares, monitor train faults, and operate the trains manually if necessary. Network Expo Line The Expo Line connects Waterfront station in Vancouver to King George station in Surrey, principally along a route established by the Westminster and Vancouver Tramway Company as an interurban line in 1890. The Expo Line (originally referred to as simply "SkyTrain" until the opening of the Millennium Line) was built in 1985 in time for Expo 86. It now has 24 stations. The Expo Line ran only as far as New Westminster station initially. In 1989, it was extended to Columbia station and in 1990, once the Skybridge was finished, it continued across the Fraser River to Scott Road station in Surrey. In 1994, the terminus of the Expo Line became King George station in central Surrey. It was built on a budget of $854million (1986 dollars). Effective October 22, 2016, Expo Line trains began operating on a new branch to Production Way–University station, taking over the previous Millennium Line service between Waterfront and that station. During peak periods, trains between Waterfront and Columbia arrive every 2 to 3 minutes. Between Waterfront and King George, trains arrive every 2 to 5 minutes during peak hours, while trains between Waterfront and Production Way arrive every 6 to 7 minutes in the peak hours. Millennium Line Prior to October 22, 2016, the Millennium Line shared tracks with the Expo Line from Waterfront station to Columbia station in New Westminster, then continued along its own elevated route through North Burnaby and East Vancouver, ending at VCC–Clark station, near Vancouver Community College's Broadway campus. 
It was built on a $1.2-billion budget and the final extension from Commercial Drive station (now Commercial–Broadway station) to VCC–Clark station was opened on January 6, 2006. From October 22, 2016, to December 1, 2016, the Millennium Line operated from VCC–Clark to Lougheed Town Centre station. As of December 2, 2016, the Millennium Line operates between VCC–Clark station in Vancouver and Lafarge Lake–Douglas station in Coquitlam. The Millennium Line has 17 stations, three of which are transfer stations with the Expo Line (Commercial–Broadway, Production Way–University, and Lougheed Town Centre) and two which connect with the West Coast Express commuter train (Moody Centre and Coquitlam Central). The original Millennium Line's stations were designed by British Columbia's top architects and are very different from those on the Expo Line. In 2004, Busby and Associates Architects, designers of the Brentwood Town Centre station in Burnaby, were honoured for their work with a Governor General's Medal in Architecture. Construction on the Millennium Line's Evergreen Extension, from Lougheed Town Centre in Burnaby to Lafarge Lake–Douglas in Coquitlam, was completed in 2016 and it was opened for revenue service on December 2, 2016. This extension adds and 6 new stations to the Millennium Line. Canada Line The Canada Line begins at the Waterfront station hub, then continues south through Vancouver into the City of Richmond and Sea Island. From Bridgeport station, the Canada Line splits into two branches, one heading west to the YVR–Airport station at Vancouver International Airport and the other continuing south to the Richmond–Brighouse station in Richmond's city centre. Opened on August 17, 2009, the Canada Line added 15 stations and to the SkyTrain network. Waterfront station is the only station where the Canada Line directly connects with the Expo Line; however, Vancouver City Centre station is within a three-minute walk from Granville station via the Pacific Centre mall, making an unofficial transfer to the Expo Line. The Canada Line cost $1.9billion, financed by the Governments of Canada and British Columbia, TransLink, and InTransitBC. The Canada Line's trains, built by Hyundai Rotem, are fully automated, but are of a different design from the Expo and Millennium Lines' Bombardier-built fleet. They use conventional electric motors rather than linear induction motor technology. Canada Line tracks do not interconnect with the rest of the SkyTrain network, and there is a separate fleet maintenance depot. Operations Frequency SkyTrain provides high-frequency service, with trains arriving every 2 to 6 minutes at all stations during peak hours. Trains operate between 4:48 a.m. and approximately 1:30 a.m. on weekdays, with reduced hours on weekends on the Expo and Millennium lines. SkyTrain has longer hours of service during special events, such as New Year's Eve, the Vancouver 2010 Olympics, and marathons. Fares TransLink's SkyTrain service area is divided into three zones, with fares varying depending on how many zone boundaries are crossed during one trip (two- and three-zone passengers are charged the one zone rate after 6:30 pm rush hour, and on weekends and statutory holidays). Customers may purchase fares using cash, debit cards, or credit cards from self-serve ticket vending machines at the mezzanine level of each station. 
A variety of transit passes are available, such as the pre-paid FareSaver ticket, daily DayPass, monthly FareCard, annual EmployerPass, post-secondary student U-Pass, and other specialized passes. Canadian National Institute for the Blind identification cards are accepted without the need to be read by the fare box. One-time fares are valid for 90 minutes on any mode of transportation with any number of transfers, including all SkyTrain lines and bus and SeaBus routes. Concession fares are available for secondary school students with a valid Go-Card and the elderly. Children under 12 have been able to ride the system for free since September 2021. Until April 2016, SkyTrain's fare system was a proof-of-payment system; there were no turnstiles at the entrances to train platforms. Instead, fares were typically enforced by random ticket inspections – usually by police or transit security but occasionally by SkyTrain attendants – through trains and stations. This was supplemented by controlled access – with the payment of a fare or proof of payment required to pass through a staffed gate – at special events where extremely high ridership was expected, such as immediately after BC Lions or Vancouver Canucks games. Fare gates Installing faregates to prevent fare evasion was considered as early as at the time of the system's opening, but was rejected multiple times because the expense of implementing, maintaining, and enforcing them would exceed the losses prevented. In 2005, TransLink estimated it was losing $4million (5 percent of revenue attributed to SkyTrain) annually to fare evasion on SkyTrain. While the Canada Line stations, along with those on the Millennium Line, were designed to allow for future fare gates, the Canada Line opened in 2009 without them, despite stated intentions to include them. Expo Line stations have since been redesigned and retrofitted to accommodate the new fare gate system. The 2008 Provincial Transit Plan outlined several SkyTrain system upgrades, including replacement of the proof-of-payment system with a gated-ticket system. According to Minister of Transportation Kevin Falcon, the gated-ticket system was to be implemented by a private company by 2010. In April 2009, it was announced that the provincial and federal governments would spend $100million to put the gates in place by the end of 2010. However, in August 2009, a TransLink spokesman said the gates would not be installed before 2012, and that a smart card system would be implemented at the same time. It was announced on August 14, 2013, that bus-issued transfers (magnetic strip paper cards) would continue to be issued for cash fares paid on buses, but that these transfers would not work at SkyTrain or SeaBus station fare gates, which require a Compass Card or a 90-minute paper Compass ticket to operate. This means that a bus rider paying cash is required to pay a second fare to transfer to SkyTrain or SeaBus. Those transit users paying cash but beginning their trips at a SkyTrain or SeaBus station are not subject to this second fare because they are issued Compass tickets which are accepted as valid transfers on TransLink buses. Construction of SkyTrain fare gates was completed in May 2014, but they remained open until April 2016 owing to multiple system problems. 
While open for the nearly two-year period, holders of paper-based monthly passes, bus-issued transfers, and FareSaver tickets continued to pass through the gates into the stations' fare-paid zones unimpeded, although they were subject to having their fare inspected by transit security or transit police once inside the fare-paid zone. Starting in April 2016, they were initially fully closed only during peak hours, with one gate remaining open during off-peak times for people with accessibility issues who could not reach their Compass Cards to the fare gates to tap in or out. Full implementation of the fare gates was also delayed by problems with Compass Cards when riders were tapping out as they exited buses. The tapping-out process on buses was too slow and did not always record the tap which—because the system initially deducted a three-zone fare until a tap-out was recorded and a refund was issued to those having only travelled one or two zones—often resulted in customers being charged for travelling through three zones when in fact they had only travelled through one or two. This was a serious setback for TransLink as the entire system was supposed to be operational by 2013. A solution was finally implemented where the requirement to tap out of buses was removed and all bus travel was considered as within a single zone, creating significant savings for those travelling multiple zones using buses only and in some cases changing transit usage patterns. The last fare gates left open for users with accessibility issues were closed on July 25, 2016, and the system has been in full operation since. Airport surcharge Travel on the Canada Line is free between the three Sea Island stations near the Vancouver International Airport: Templeton, Sea Island Centre, and YVR–Airport. Single-use Compass tickets purchased with cash at Compass vending machines in stations on Sea Island include a surcharge, the "YVR AddFare", of $5.00 on top of the normal fare. This charge is also added to trips initiated at Sea Island stations for travel east to Bridgeport station and beyond using Compass Card stored value or DayPasses. It is not applied to trips using monthly passes, nor to trips travelling to the airport using DayPasses or single-use Compass tickets which were purchased and activated off Sea Island. The YVR AddFare came into effect on January 18, 2010. The revenue collected from the AddFare goes back to TransLink. Ridership Passengers on SkyTrain made an average of 526,400 trips on weekdays . Overall in 2017, the network carried a total of 151million passengers. This compares to 117.4million passengers in 2010: 38,447,725 on the Canada Line and 78,965,214 on the interlined Expo and Millennium Lines. The Canada Line carried an average of 110,000 passengers per weekday in early 2011, and is three years ahead of ridership forecasts. SkyTrain's highest ridership came during the 2010 Winter Olympics when each event ticket included unlimited day-of transit usage. During the 17-day event, Canada Line ridership rose 110 per cent to an average of 228,000 per day, with a single-day record of 287,400 on February 19, 2010. Expo and Millennium Line ridership rose 64 per cent to an average of 394,000 per day, with a single-day record of 567,000 on February 20, 2010. At times, every available train was in service on all three lines. After the Olympics ended, overall transit usage remained 7.8 percent above the previous year. 
Funding The cost of operating SkyTrain in 2008, with an estimated 73.5million boardings, was $83million. To cover this, TransLink draws mostly from transit fares, advertising ($360million in 2008) and tax ($262million from fuel taxes and $298million from property taxes in 2008), funds which are also shared with bus services, roads and bridge maintenance, and other infrastructure and services. The capital costs of building the system are shared with other government agencies. Capital expenses were $216million in 2008. For example, the cost of building the Canada Line was shared between TransLink ($335million or 22 percent), the federal government (29 percent), the provincial government (28 percent), the airport authority (19 percent), and the City of Vancouver (2 percent). While TransLink has run surpluses for operating costs since 2001, it incurs debt to cover these capital costs. As a whole, TransLink had $1.1billion in long-term debt in 2006, of which $508million was transferred from the province in 1999 when responsibility for SkyTrain was given to TransLink. The province retained ownership of the causeway, bridge, certain services, and a portion of SkyTrain's debt. Security Law enforcement services are provided by the Metro Vancouver Transit Police (MVTP). They replaced the old TransLink Special Provincial Constables, who had limited authority. On December 4, 2005, MVTP officers became the first and only transit police force in Canada to have full police powers and carry firearms. There was public concern in March 2005 when it was announced that transit police would carry firearms. Solicitor General of British Columbia John Les defended the move at the time, saying that it was necessary to enhance SkyTrain security. Transit officers receive the same training as officers in municipal and RCMP forces. They may arrest people for outstanding warrants, enforce drug laws, enforce the criminal code beyond TransLink property, and deal with offences that begin off TransLink property and make their way onto it. They issue tickets for fare evasion and other infractions on SkyTrain, transit buses, SeaBus, and West Coast Express. Transit police officers and Transit Security officers inspect fares at Skytrain stations as part of TransLink's fare audit. Transit Security officers mostly focus their efforts on the bus system, bus loops, and SeaBus. SkyTrain attendants provide customer service and first aid, troubleshoot train and station operations, and perform fare checks alongside the transit police force. SkyTrain attendants can be identified by their uniforms which say "SkyTrain" on them. Over the years, violence and other criminal activities have been concerns at time, but TransLink maintains that the system is safe. In 2009, Inspector Kash Heed of the Vancouver Police Department said that little crime takes place in the stations themselves; however, criminal activity becomes more visible outside them. Each station is monitored with an average of 23 closed-circuit television cameras, allowing SkyTrain operators to monitor passenger and station activity. Designated waiting areas have enhanced lighting, benches, and emergency telephones. Trains have yellow strips above each window which, when pressed, silently alert operators of a security hazard. On-board speaker phones provide two-way communication between passengers and control operators. 
In 2007, it was reported that the entire surveillance system was upgraded from analogue two-hour tape recording to digital technology, which was to allow police to retrieve previous footage for up to seven days. However, incidents since the upgrade have still limited police to a two-hour loop, resulting in loss of potential evidence. By November 2008, at least 54 deaths had occurred on the platforms and tracks of the Expo and Millennium Lines. 44 of those deaths were suicides, while the remaining ten were accidental. History Planning Vancouver had plans as early as the 1950s to build a monorail system, with modernist architect Wells Coates to design it; that project was abandoned. The lack of a rapid transit system was said to be the cause of traffic problems in the 1970s, and the municipal government could not fund the construction of such a system. During the same period, Urban Transportation Development Corporation, then an Ontario crown corporation, was developing a new rapid transit technology known as an "Intermediate Capacity Transit System". In 1980, the "Advanced Light Rapid Transit" system was selected by the British Columbia provincial government for use on one of two planned corridors, connecting Vancouver to New Westminster in time for Expo 86. Expo Line SkyTrain was conceived as a legacy project of Expo 86 and the first line was finished in time to showcase the fair's theme: "Transportation and Communication: World in Motion – World in Touch". Construction was funded by the provincial and federal governments and began in March 1982. It was built through the Dunsmuir Tunnel under downtown, which had originally been built for the Canadian Pacific Railway, to save costs. The first of the system, from Waterfront to New Westminster station, opened for limited and fare-free service on December 11, 1985. Revenue service began on January 3, 1986, and within its first year the line had carried over 30 million passengers—including visitors to Expo 86. The following year, construction began on an extension including the SkyBridge, Columbia station, and Scott Road station, extending service by to Surrey; it opened on March 16, 1990. The line was expanded again in 1994 with the opening of Gateway, Surrey Central, and King George stations. SkyTrain is part of the 1996 Greater Vancouver Regional District's (GVRD) Livable Region Strategic Plan, which discusses strategies to deal with the anticipated increase of population in the region. These strategies include increasing transportation choices and transit use. Millennium Line The first section of the Millennium Line opened in 2002, with Braid and Sapperton stations. Most of the remaining portion began operating later that year, serving North Burnaby and East Vancouver. Phase I of the Millennium Line was completed $50million under budget. Critics of the project dubbed it the "SkyTrain to Nowhere", claiming that the route of the new line was based on political concerns, not the needs of commuters. One illustration of the legitimacy of this complaint is that the end of the Millennium Line is located in a vacant field, chosen because it was supposed to be the location for a new high-tech development and is close to the head office of QLT Inc., but additional development was slow to get off the ground. 
That station, VCC–Clark near Clark Drive and Broadway, did not open until 2006 due to the struggles of negotiating the right-of-way with BNSF, the owner of the freight tracks beside the station, but it is still five kilometres short of the original proposed Phase II terminus at Granville Street and 10th Avenue. At the time VCC–Clark station opened, it was revealed that the additional westward extension and its three stations was out of favour and "not a high priority anymore". Evergreen Extension The Evergreen Extension, known as the Evergreen Line during construction, is the second phase of the Millennium Line, extending from Lougheed Mall in Burnaby to the Douglas College campus in Coquitlam. Originally referred to as the Port Moody-Coquitlam (PMC) Line, it provides a "one-seat ride" from Coquitlam to Vancouver. Switches to the PMC Line were installed to the east of Lougheed Town Centre station during its initial construction and a third platform at the station was roughed-in in anticipation of the extension. Phase II was postponed following a change in provincial government and a shuffling of priorities that led to prioritizing building the Canada Line due to Vancouver's hosting of the 2010 Olympics. Preliminary construction of the Evergreen Extension began in July 2012 and major construction started in June 2013 with the construction of support columns for the line. The extension began revenue service on December 2, 2016. Canada Line The Canada Line was built as a public–private partnership, with the winning consortium (now known as ProTransBC), led by SNC-Lavalin, contributing funds toward its construction and operating it for 35 years. A minimum ridership was guaranteed to ProTransBC by TransLink. The Richmond–Vancouver corridor had been considered for a rapid transit line as early as 1979 but such a project was not funded until the early 2000s with the approval of the Canada Line. The line opened on August 17, 2009, 15 weeks ahead of schedule and on budget. Ridership rose three years ahead of forecasts, hitting 100,000 passengers per weekday in May 2010 and 136,000 passengers per weekday in June 2011. The Canada Line is operationally independent from the other SkyTrain lines, using different rolling stock (shorter overall train and station length, but wider cars) that is incompatible with the Expo and Millennium Lines. Impact SkyTrain has had a significant impact on the development of areas near stations, and has helped to shape urban density in Metro Vancouver. Between 1991 and 2001, the population living within of SkyTrain increased by 37 percent, compared to the regional average of 24 percent. Since SkyTrain opened, the total population of the service area rose from 400,000 to 1.3million people. According to BC Transit's document SkyTrain: A catalyst for development, more than $5billion of private money had been invested within a 10–15 minute walking distance of the SkyTrain and SeaBus. The report claimed that the two modes of transportation were the driving force of the investment, though it did not disaggregate the general growth in that area. Design Routes There are three main routes: the Expo Line, Millennium Line and Canada Line. The Expo Line travels between Waterfront station in Downtown Vancouver and Columbia station in New Westminster, serving the cities of Vancouver, Burnaby, and New Westminster. From Columbia, the Expo Line splits into two branches. 
One branch travels through Surrey to King George station, while the other travels through New Westminster and Burnaby, terminating at Production Way–University station. Millennium Line trains travel between VCC–Clark station and Lafarge Lake–Douglas station in the city of Coquitlam. Near the western end of the line is a major transfer point with the Expo Line at Commercial–Broadway station. Further east, Lougheed Town Centre station and Production Way–University station serve as two more transfer points with the Expo Line. The Canada Line travels southward from Waterfront station in Downtown Vancouver to Richmond, where the track splits at Bridgeport station; trains alternate between a southern branch ending at Richmond–Brighouse station and a western branch ending at Vancouver International Airport. Although most of the system is elevated, SkyTrain runs at or below grade through Downtown Vancouver, for the Vancouver portion of the Canada Line until just before it reaches Richmond at Marine Drive station, through the tunnel used by the Millennium Line between Coquitlam and Port Moody, through the tunnel between Columbia and Sapperton stations in New Westminster, and for short stretches in Burnaby and New Westminster. SkyTrain's Expo Line uses the world's second longest bridge dedicated to transit services, the SkyBridge, which crosses the Fraser River between New Westminster and Surrey. It is a cable-stayed bridge, with towers. Two additional transit-only bridges, the North Arm Bridge and the Middle Arm Bridge, were built for the Canada Line. The North Arm Bridge is an extradosed bridge with a total length of , with shorter towers necessitated by its proximity to the Vancouver International Airport, and also has a pedestrian/bicycle deck connecting the bicycle networks of Vancouver and Richmond. The Middle Arm Bridge is a shorter box girder bridge. Technology The signalling technology used on all three SkyTrain lines to run trains automatically was originally developed by Alcatel and loaded from a 3.5" diskette. There were initially four systems called the vehicle control computer (VCC) with three divided over the mainline and one for the storage yard. VCC1 controls trains from Waterfront to Royal Oak; VCC2 controls trains from Royal Oak to King George (it now also controls a portion of the Millennium Line); and VCC3 controls trains in the yard. Additional VCCs were added as Skytrain expanded. Each VCC is a cluster of three IBM Type 7588 rack-mount single-board computers with Intel-IA32 Pentium processors and proprietary hardware in a fault-tolerant configuration. For example, VCC3 is composed of CPU1, CPU2, and CPU3. For every command that is sent to a train, at least two of the CPUs must agree with the action, otherwise an error is generated and the command is ignored. The VCC communicates with the train's vehicle on board computer (VOBC), whose data is transmitted through coax cables laid along the tracks. There are up to two VOBCs per married-pair trains, i.e. 4-car train would have two VOBCs. If the VCCs fail or communication between the VCC and the VOBC is lost, the train will "time-out" and emergency-brake (EB) through a Quester Tangent brake assurance monitor (BAM) that controls propulsion and braking systems. The VCCs have a command-line-console, but normally the trains are controlled through a system known as the SMC, which also provides scheduling. All commands from the SMC are verified to be safe by the VCC before execution. 
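The two-out-of-three agreement rule described above is a classic triple-modular-redundancy vote. A minimal sketch of the idea follows; it is purely illustrative, is not the VCC's actual software, and all names are invented here.

```python
from collections import Counter

def vote_command(cpu_outputs):
    """Return a command only if at least two of the three redundant CPUs propose it.

    cpu_outputs: list of three command values, one per CPU in the cluster.
    Raises ValueError when no majority exists, mirroring the behaviour described
    above where a disagreement generates an error and the command is ignored.
    """
    command, count = Counter(cpu_outputs).most_common(1)[0]
    if count >= 2:
        return command
    raise ValueError("no 2-of-3 agreement; command ignored")

# Example: two CPUs agree and one disagrees, so the agreed command is issued.
print(vote_command(["brake", "brake", "accelerate"]))   # -> "brake"
```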
However if the SMC fails, the system can still be operated through the VCC. This is known as "degraded mode". The SkyTrain health monitoring unit (HMU) developed by Quester Tangent provides monitoring and diagnostic functionality for vehicle maintenance by connecting to CAN vehicle network and providing a maintenance display in the Hostler Panel. SkyTrain's signalling system later provided the basis of SelTrac, which is currently maintained and sold by Thales and has equipped many lines around the world. Largely as a result of this, the Expo and Millennium Lines have a combined punctuality record of over 96 percent; the principal cause of train delays is passenger interference with train doors. There have been two derailments during revenue service in the system's history. Accessibility The SkyTrain network is fully mobility-needs accessible, including vehicles and stations. Mark I train cars have one designated wheelchair position, Mark II, Mark III and Hyundai Rotem cars have two, and all stations have elevators. TransLink upgraded all Expo Line platform station edges to match those on the Millennium Line shortly after it was completed. The new, wider edges are brighter and are tiled to provide a safer environment for the visually impaired. The Canada Line also uses this safety feature in its stations. Since the opening of the Millennium Line, aside from platform tile upgrading, many Expo line stations have also been refitted with new signage and ticket vending machines. Accessibility is provided for deaf individuals through real-time English signage and displays at stations and on newer trains, although a reliance on verbal communication for service disruptions has been identified as a transportation barrier. The distinctive three-tone chime used in the SkyTrain system was recorded in 1984–85 at Little Mountain Sound Studios in Vancouver. The automated train announcements have been voiced by Laureen Regan since the opening of the Millennium Line in 2002, and by Karen Kelm between 1985 and 2001. Rolling stock Expo and Millennium Lines The Expo Line and Millennium Line use Bombardier's Advanced Rapid Transit (ART) system, a system of automated trains driven by linear induction motors, formerly known as Intermediate Capacity Transit System (ICTS). These trains reach speeds of ; including wait times at stops, the end-to-end average speed is , three times faster than a bus and almost twice as fast as a B-Line express bus. During cold weather, TransLink crews use hockey sticks to clear snow and ice from train doors, which would otherwise prevent some doors from being able to open. The trains are also slowed and staffed by TransLink attendants, who can manually override the automatic controls in the event of an obstruction caused by snow or ice. UTDC ICTS Mark I fleet The initial fleet consisted of lightweight Mark I ICTS cars from Urban Transportation Development Corporation, similar to those used by Toronto's Line 3 Scarborough and the Detroit People Mover. Mark I vehicles are composed of mated pairs and normally run as six-car trains and only on the Expo Line, but can be run in two-, four-, or six-car configurations. The maximum based on current station platform lengths is a six-car configuration, totalling . The SkyTrain fleet includes 150 Mark I cars. These trains have a mix of forward-, reverse- and side-facing seats; red, white, and blue interiors; and four doors per car, two per side. 
Bombardier ART Mark II fleet When the Millennium Line was built, TransLink ordered new-generation Mark II ART trains from Bombardier Transportation, some of which were assembled in a Burnaby factory. Similar trains are used in Kuala Lumpur's Kelana Jaya Line, New York's JFK AirTrain, and the Beijing Airport Express. These trains are run in four-car configurations on the Expo Line, and two-car configurations on the Millennium Line. Each pair of cars is semi-permanently joined together in a twin unit or "married pair", with a length of . Mark II trains have a streamlined front and rear, an articulated joint allowing passengers to walk the length of a married pair, white/grey/blue interior, and six doors per car, three per side. TransLink also ordered 48 Mark II ART (2009/2010 model) in 2009 to further supplement supply and integrate new features like CCTV and visual maps with LED lights. Bombardier Innovia Metro 300 (ART Mark III) fleet The Bombardier ART model has undergone several redesigns from the original UTDC ICTS model, and the Mark II design has been updated by Bombardier, with this newest offering being the Innovia Metro 300. Dimensions are similar to the Mark II, with capacity improvements offered over the outgoing model through redesigned car layout. TransLink ordered 28 Mark III cars, which began delivery in 2015, and went into service beginning in August 2016. The vehicles appear sleeker, with larger windows on the sides of the train, and redesigned windows and headlights on the ends of the cars. The interior is largely similar to the second generation of Mark II cars, with the some seats removed to better accommodate bicycles and strollers. TransLink has claimed that the interior of the Mark III offers better sound and heat insulation. TransLink ordered the cars for the Evergreen Extension in a 4-car articulated configuration, with two centre cars, to allow full-length train movements by passengers. However, due to a shortage of trains, the Mark IIIs are being used on the Expo Line, while 2-car Innovia 200 (Mk2) serve the Millennium Line. On December 16, 2016, TransLink ordered 28 more Mark III cars, bringing the total of Mark III cars to 56 by the end of 2019. On February 22, 2018, TransLink announced a further order of 28 Mark III cars, which will bring the total number of Mark III cars to 84 once all trains are in service by the end of 2020. Canada Line The Canada Line uses Hyundai Rotem EMU vehicles, with cars powered by conventional electric motors instead of the linear induction motor (LIM) technology used by the Expo and Millennium Line vehicles; as a result, the Canada Line vehicles cannot be used on the Expo and Millennium Lines. There are 20 trains, which operate as two-carriage articulated units and can reach a speed of . They are maintained at a yard next to Bridgeport station in Richmond. On February 22, 2018, TransLink announced an additional order of 24 Canada Line cars to be brought into service by 2020, bringing the total to 32 trains operating as two-car units. Future expansion Several possible expansions to the SkyTrain network have been announced. In 2005, TransLink released a ten-year outlook outlining a potential line to the University of British Columbia (UBC) and further expansion of the Expo Line into Surrey. In 2011, two separate rapid transit studies have given further examination and consultation into rapid transit options for expansion for the UBC–Broadway corridor, and Surrey and the South of Fraser region. 
Expo Line capacity upgrades are also being planned to meet future demand. A pair of expansions—the Broadway corridor extension and the Expo Line to Langley—began construction in the early 2020s alongside the addition of 235 new cars and upgrades to SkyTrain facilities. Broadway corridor extension Early proposals planned to extend SkyTrain west along the Broadway corridor, but stopped well short of UBC because of the cost, estimated at $700 million in 1999. However, the Provincial Transit Plan, released in February 2008, included funding for the entire Broadway corridor to UBC. The line would replace the region's busiest bus routes, where over 100,000 trips are made daily. The line would also include an interchange with the Canada Line at Cambie Street. In 2008, the new line was estimated to cost $2.8 billion, with an expected completion date of 2020. Government statements suggested that the UBC line would be an extension of the SkyTrain network from VCC–Clark station via elevated platforms or a tunnel along Broadway, ending at the University of British Columbia's Vancouver campus. This would mean that riders travelling from Coquitlam to UBC would not need to change trains, as Millennium Line trains would continue to UBC from Lafarge Lake–Douglas station. Riders from the Evergreen Extension east of Commercial–Broadway station would also have a secondary route to downtown with the option of transferring to the Canada Line instead of the Expo Line. However, light rail and higher-capacity bus rapid transit were also proposed. In 2011, with the UBC Line Rapid Transit Study, SkyTrain was evaluated as a possible technology for rapid transit expansion along the Broadway corridor to UBC, along with light rail transit and bus rapid transit. The June 2014 plan proposed a first phase that would extend the Millennium Line from VCC–Clark station to Arbutus Street using SkyTrain technology, with an interchange with the Canada Line at Broadway–City Hall station; a second phase would see the line extended from Arbutus to UBC. A plebiscite to raise 25 percent of the funds required for the Broadway extension to Arbutus, among other transit expansion plans, was defeated in 2015. On March 16, 2018, the provincial government approved the construction of an extension of the Millennium Line underneath Broadway, which will extend the line underground west to Arbutus Street and add six new stations. Early work was slated to begin in 2019 with a completion date set for 2025. On April 19, 2018, the UBC Board of Governors indicated it would consider contributing funds towards accelerating the extension of the Millennium Line from its new planned terminus at Arbutus to the university. On January 30, 2019, Vancouver City Council endorsed building the line underground all the way to UBC. On July 17, 2020, the British Columbia government announced that the Acciona–Ghella Joint Venture Company had been selected to receive the design–build contract for the Broadway extension. Premier John Horgan confirmed on September 4, 2020, that construction would proceed in the fourth quarter of 2020 despite the ongoing COVID-19 pandemic in British Columbia. Horgan also confirmed that the extension is expected to be in service by 2025. Transportation Minister Claire Trevena also stated that there were no immediate plans to extend the line towards the UBC campus. 
The provincial government announced on November 24, 2022, that the opening of the extension would be pushed back to early 2026 owing to a labour dispute affecting concrete workers that took place that June. On May 24, 2024, the provincial government announced that the extension opening would be further delayed to late 2027, due to various delays that occurred during the tunnel-boring process. Expo Line extension The 2008 Provincial Transit Plan included an extension of the Expo Line from King George station in Surrey east to Guildford, then along 152 Street to Fraser Highway and southeast to 168 Street; a further extension to Willowbrook Shopping Centre in Langley City was also included in the plan. After a period in which SkyTrain, light rail transit, and bus rapid transit were considered for service expansion, federal funding was secured in 2021 to build a SkyTrain extension to Langley City at a total cost (shared between the federal government, provincial government, and TransLink) of between $3.8 billion and $3.95 billion. In July 2022, the extension received approval from the provincial government to be built in a single phase, opening in 2028 with eight stations. Procurement for private contractors began in October 2022 and was scheduled to end with the selection of winning bids by December 2023. The project was divided into three general contracts—the guideway, stations, and electrical systems. Construction began in November 2024 and is expected to be completed in 2029. Expo Line capacity expansion Several options have been considered, planned, or implemented to improve capacity on the Expo Line, including operating longer trains, reducing operating headways, and extending station platforms beyond . In late 2020, TransLink ordered 41 Alstom Mark V trainsets in five-car configurations that will eventually replace older, lower-capacity trains. Six more Alstom Mark V trainsets were ordered in May 2024, bringing the total to 47 Mark V trainsets. Coquitlam maintenance facility In March 2021, it was announced that a new yard would be constructed to provide the storage space and maintenance capacity needed for the upcoming extensions of the Expo and Millennium lines. This new facility is to be located near the New Westminster–Coquitlam border along North Road. The land was purchased for $82.5 million, while the cost for the structure and additional tracks was estimated at an additional $300 million. The new yard was expected to provide additional maintenance and storage space in time for the opening of the Millennium Line's Broadway extension in early 2026. As of February 2024, the opening of the Coquitlam maintenance facility, which is expected to have a storage capacity of 145 cars, is scheduled to take place in 2027. University of British Columbia extension On January 14, 2008, the British Columbia provincial government announced a commitment to the expansion of the Millennium Line to the University of British Columbia (UBC) by 2020 as part of a $14-billion transit spending package to address climate change. It was not clear what route the new line would take, but it was hinted that there would be less use of cut-and-cover tunnelling to minimize disruption to businesses along Broadway and avoid the same problems seen during the Canada Line construction along Cambie Street. This expansion failed to materialize. On February 15, 2019, the TransLink Mayors' Council again approved an extension of the line to the UBC campus, although funding for this continuation past Arbutus Street had not yet been secured. 
In April 2022, TransLink assessed possible route options in the UBC area, including the provision of additional pocket storage tracks near the UBC terminus owing to the distance between the university and the nearest storage yard. As a result, the terminus at UBC would hypothetically be larger in order to accommodate the additional storage space and operational flexibility. In March 2023, it was announced that a contractor would be hired to put together a business case for the extension, which was to be presented in December 2024. Port Coquitlam extension When the Evergreen Extension was built, the first few metres of track and a track switch for an eventual eastward extension to Port Coquitlam were built at Coquitlam Central station. Such an extension would create two branches, with trains alternating between serving Lafarge Lake–Douglas station and Port Coquitlam. A feasibility study was conducted, starting in early 2020 and running for about six months. Port Coquitlam mayor Brad West, Port Coquitlam's city council, and Coquitlam's city council have all stated support for the extension. However, as of 2022, no funding had been secured, nor had a formal plan been created. North Shore connection In 2019, the BC Ministry of Transportation and Infrastructure announced its intention to study a rapid transit link from Vancouver's city centre to the North Shore, possibly in the form of SkyTrain. By March 2020, the provincial government confirmed it had selected six possible routes for a "high-capacity, fixed-link, rapid transit crossing across Burrard Inlet between Vancouver and the North Shore". In 2022, TransLink suggested that a North Shore link would likely be created using bus rapid transit first while a concurrent feasibility study of a longer-term light rail transit or SkyTrain connection is conducted.
Technology
Canada
null
125276
https://en.wikipedia.org/wiki/Matrix%20addition
Matrix addition
In mathematics, matrix addition is the operation of adding two matrices by adding the corresponding entries together. For a vector v, adding two matrices would have the geometric effect of applying each matrix transformation separately onto v, then adding the transformed vectors. However, there are other operations that could also be considered addition for matrices, such as the direct sum and the Kronecker sum. Entrywise sum Two matrices must have an equal number of rows and columns to be added. In that case, the sum of two matrices A and B will be a matrix which has the same number of rows and columns as A and B. The sum of A and B, denoted A + B, is computed by adding corresponding elements of A and B: Or more concisely (assuming that ): For example: Similarly, it is also possible to subtract one matrix from another, as long as they have the same dimensions. The difference of A and B, denoted A − B, is computed by subtracting elements of B from corresponding elements of A, and has the same dimensions as A and B. For example: Direct sum Another operation, which is used less often, is the direct sum (denoted by ⊕). The Kronecker sum is also denoted ⊕; the context should make the usage clear. The direct sum of any pair of matrices A of size m × n and B of size p × q is a matrix of size (m + p) × (n + q) defined as: For instance, The direct sum of matrices is a special type of block matrix. In particular, the direct sum of square matrices is a block diagonal matrix. The adjacency matrix of the union of disjoint graphs (or multigraphs) is the direct sum of their adjacency matrices. Any element in the direct sum of two vector spaces of matrices can be represented as a direct sum of two matrices. In general, the direct sum of n matrices is: where the zeros are actually blocks of zeros (i.e., zero matrices). Kronecker sum The Kronecker sum is different from the direct sum, but is also denoted by ⊕. It is defined using the Kronecker product ⊗ and normal matrix addition. If A is n-by-n, B is m-by-m, and I_k denotes the k-by-k identity matrix, then the Kronecker sum is defined by A ⊕ B = A ⊗ I_m + I_n ⊗ B.
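As a concrete illustration of these three operations, here is a minimal sketch in Python with NumPy (an assumed dependency; the matrices A and B are arbitrary 2 × 2 examples, not taken from the article). The direct sum is assembled as a block matrix with zero off-diagonal blocks, and the Kronecker sum follows the definition A ⊕ B = A ⊗ I_m + I_n ⊗ B given above.

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Entrywise sum and difference: defined only for matrices of equal shape.
S = A + B                     # [[ 6,  8], [10, 12]]
D = A - B                     # [[-4, -4], [-4, -4]]

# Direct sum: A and B on the diagonal of a larger matrix, zeros elsewhere.
zeros_top = np.zeros((A.shape[0], B.shape[1]), dtype=A.dtype)
zeros_bottom = np.zeros((B.shape[0], A.shape[1]), dtype=A.dtype)
direct_sum = np.block([[A, zeros_top],
                       [zeros_bottom, B]])

# Kronecker sum of an n-by-n A and an m-by-m B, built from the Kronecker product.
n, m = A.shape[0], B.shape[0]
kronecker_sum = np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)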
Mathematics
Linear algebra
null
125280
https://en.wikipedia.org/wiki/Matrix%20multiplication
Matrix multiplication
In mathematics, specifically in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices and is denoted as . Matrix multiplication was first described by the French mathematician Jacques Philippe Marie Binet in 1812, to represent the composition of linear maps that are represented by matrices. Matrix multiplication is thus a basic tool of linear algebra, and as such has numerous applications in many areas of mathematics, as well as in applied mathematics, statistics, physics, economics, and engineering. Computing matrix products is a central operation in all computational applications of linear algebra. Notation This article will use the following notational conventions: matrices are represented by capital letters in bold, e.g. ; vectors in lowercase bold, e.g. ; and entries of vectors and matrices are italic (they are numbers from a field), e.g. and . Index notation is often the clearest way to express definitions, and is used as standard in the literature. The entry in row , column of matrix is indicated by , or . In contrast, a single subscript, e.g. , is used to select a matrix (not a matrix entry) from a collection of matrices. Definitions Matrix times matrix If is an matrix and is an matrix, the matrix product (denoted without multiplication signs or dots) is defined to be the matrix such that for and . That is, the entry of the product is obtained by multiplying term-by-term the entries of the th row of and the th column of , and summing these products. In other words, is the dot product of the th row of and the th column of . Therefore, can also be written as Thus the product is defined if and only if the number of columns in equals the number of rows in , in this case . In most scenarios, the entries are numbers, but they may be any kind of mathematical objects for which an addition and a multiplication are defined, that are associative, and such that the addition is commutative, and the multiplication is distributive with respect to the addition. In particular, the entries may be matrices themselves (see block matrix). Matrix times vector A vector of length can be viewed as a column vector, corresponding to an matrix whose entries are given by If is an matrix, the matrix-times-vector product denoted by is then the vector that, viewed as a column vector, is equal to the matrix In index notation, this amounts to: One way of looking at this is that the changes from "plain" vector to column vector and back are assumed and left implicit. Vector times matrix Similarly, a vector of length can be viewed as a row vector, corresponding to a matrix. To make it clear that a row vector is meant, it is customary in this context to represent it as the transpose of a column vector; thus, one will see notations such as The identity holds. In index notation, if is an matrix, amounts to: Vector times vector The dot product of two vectors and of equal length is equal to the single entry of the matrix resulting from multiplying these vectors as a row and a column vector, thus: (or which results in the same matrix). 
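The four cases of the definition (matrix times matrix, matrix times vector, vector times matrix, and vector times vector) can be made concrete with a short sketch in Python with NumPy; the array values below are arbitrary illustrations assumed for this example, not figures from the article.

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # a 2 x 3 matrix
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])           # a 3 x 2 matrix

# Matrix times matrix: entry (i, j) of AB is the dot product of row i of A
# with column j of B, so a (2 x 3)(3 x 2) product is a 2 x 2 matrix.
AB = A @ B                         # [[ 58,  64], [139, 154]]

# Matrix times vector: a length-3 vector treated as a column vector.
x = np.array([1, 0, 2])
Ax = A @ x                         # [ 7, 16]

# Vector times matrix: a length-2 vector treated as a row vector.
y = np.array([1, 1])
yA = y @ A                         # [5, 7, 9]

# Vector times vector: the dot product, the single entry of a 1 x 1 product.
u = np.array([1, 2, 3])
v = np.array([4, 5, 6])
dot = u @ v                        # 32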
Illustration The figure to the right illustrates diagrammatically the product of two matrices and , showing how each intersection in the product matrix corresponds to a row of and a column of . The values at the intersections, marked with circles in the figure to the right, are: Fundamental applications Historically, matrix multiplication was introduced to facilitate and clarify computations in linear algebra. This strong relationship between matrix multiplication and linear algebra remains fundamental in all mathematics, as well as in physics, chemistry, engineering and computer science. Linear maps If a vector space has a finite basis, its vectors are each uniquely represented by a finite sequence of scalars, called a coordinate vector, whose elements are the coordinates of the vector on the basis. These coordinate vectors form another vector space, which is isomorphic to the original vector space. A coordinate vector is commonly organized as a column matrix (also called a column vector), which is a matrix with only one column. So, a column vector represents both a coordinate vector, and a vector of the original vector space. A linear map from a vector space of dimension into a vector space of dimension maps a column vector onto the column vector The linear map is thus defined by the matrix and maps the column vector to the matrix product If is another linear map from the preceding vector space of dimension , into a vector space of dimension , it is represented by a matrix A straightforward computation shows that the matrix of the composite map is the matrix product The general formula (g ∘ f)(x) = g(f(x)) that defines function composition is instanced here as a specific case of associativity of matrix product (see below): Geometric rotations Using a Cartesian coordinate system in a Euclidean plane, the rotation by an angle around the origin is a linear map. More precisely, where the source point and its image are written as column vectors. The composition of the rotation by and that by then corresponds to the matrix product where appropriate trigonometric identities are employed for the second equality. That is, the composition corresponds to the rotation by angle , as expected. Resource allocation in economics As an example, a fictitious factory uses 4 kinds of basic commodities, to produce 3 kinds of intermediate goods, , which in turn are used to produce 3 kinds of final products, . The matrices and provide the amount of basic commodities needed for a given amount of intermediate goods, and the amount of intermediate goods needed for a given amount of final products, respectively. For example, to produce one unit of intermediate good , one unit of basic commodity , two units of , no units of , and one unit of are needed, corresponding to the first column of . The product of these two matrices, computed by matrix multiplication, directly provides the amounts of basic commodities needed for given amounts of final goods. For example, the bottom left entry of is computed as , reflecting that units of are needed to produce one unit of . Indeed, one unit is needed for , one for each of two , and for each of the four units that go into the unit, see picture. In order to produce e.g. 100 units of the final product , 80 units of , and 60 units of , the necessary amounts of basic goods can be computed as that is, units of , units of , units of , units of are needed. Similarly, the product matrix can be used to compute the needed amounts of basic goods for other final-good amount data. 
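For the geometric rotations just described, the essential point is that composing two plane rotations corresponds to multiplying their matrices, and the product equals the single rotation by the sum of the angles. A minimal Python/NumPy check follows; the angles and the test point are arbitrary values assumed only for illustration.

import numpy as np

def rotation(theta):
    # 2-D rotation matrix for an angle theta (in radians) about the origin.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

alpha, beta = 0.3, 0.5

# Composing the rotations multiplies their matrices, and the product
# equals the single rotation by alpha + beta.
composed = rotation(beta) @ rotation(alpha)
assert np.allclose(composed, rotation(alpha + beta))

# Applying the composed map to a point is a matrix-vector product.
p = np.array([1.0, 0.0])
print(composed @ p)    # the point (1, 0) rotated by 0.8 radians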
System of linear equations The general form of a system of linear equations is Using the same notation as above, such a system is equivalent to the single matrix equation Dot product, bilinear form and sesquilinear form The dot product of two column vectors is the unique entry of the matrix product where is the row vector obtained by transposing . (As usual, a 1×1 matrix is identified with its unique entry.) More generally, any bilinear form over a vector space of finite dimension may be expressed as a matrix product and any sesquilinear form may be expressed as where denotes the conjugate transpose of (conjugate of the transpose, or equivalently transpose of the conjugate). General properties Matrix multiplication shares some properties with usual multiplication. However, matrix multiplication is not defined if the number of columns of the first factor differs from the number of rows of the second factor, and it is non-commutative, even when the product remains defined after changing the order of the factors. Non-commutativity An operation is commutative if, given two elements and such that the product is defined, then is also defined, and If and are matrices of respective sizes and , then is defined if , and is defined if . Therefore, if one of the products is defined, the other one need not be defined. If , the two products are defined, but have different sizes; thus they cannot be equal. Only if , that is, if and are square matrices of the same size, are both products defined and of the same size. Even in this case, one has in general For example but This example may be expanded to show that, if is a matrix with entries in a field , then for every matrix with entries in , if and only if where , and is the identity matrix. If, instead of a field, the entries are supposed to belong to a ring, then one must add the condition that belongs to the center of the ring. One special case where commutativity does occur is when and are two (square) diagonal matrices (of the same size); then . Again, if the matrices are over a general ring rather than a field, the corresponding entries in each must also commute with each other for this to hold. Distributivity The matrix product is distributive with respect to matrix addition. That is, if are matrices of respective sizes , , , and , one has (left distributivity) and (right distributivity) This results from the distributivity for coefficients by Product with a scalar If is a matrix and a scalar, then the matrices and are obtained by left or right multiplying all entries of by . If the scalars have the commutative property, then If the product is defined (that is, the number of columns of equals the number of rows of ), then and If the scalars have the commutative property, then all four matrices are equal. More generally, all four are equal if belongs to the center of a ring containing the entries of the matrices, because in this case, for all matrices . These properties result from the bilinearity of the product of scalars: Transpose If the scalars have the commutative property, the transpose of a product of matrices is the product, in the reverse order, of the transposes of the factors. That is where T denotes the transpose, that is the interchange of rows and columns. This identity does not hold for noncommutative entries, since the order of the entries of and is reversed when one expands the definition of the matrix product. Complex conjugate If and have complex entries, then where denotes the entry-wise complex conjugate of a matrix. 
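The non-commutativity, distributivity, and transpose rules described above are easy to check numerically. The sketch below uses Python with NumPy and small arbitrary matrices assumed only for illustration; they are not the article's (elided) examples.

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
C = np.array([[5, 6],
              [7, 8]])

# Non-commutativity: both products are defined and square here, yet differ.
print(A @ B)    # [[2, 1], [4, 3]]
print(B @ A)    # [[3, 4], [1, 2]]

# Left distributivity over matrix addition.
assert np.array_equal(A @ (B + C), A @ B + A @ C)

# Transpose of a product: the factors appear transposed in reverse order.
assert np.array_equal((A @ B).T, B.T @ A.T)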
This results from applying to the definition of matrix product the fact that the conjugate of a sum is the sum of the conjugates of the summands and the conjugate of a product is the product of the conjugates of the factors. Transposition acts on the indices of the entries, while conjugation acts independently on the entries themselves. It follows that, if and have complex entries, one has where denotes the conjugate transpose (conjugate of the transpose, or equivalently transpose of the conjugate). Associativity Given three matrices and , the products and are defined if and only if the number of columns of equals the number of rows of , and the number of columns of equals the number of rows of (in particular, if one of the products is defined, then the other is also defined). In this case, one has the associative property As for any associative operation, this allows omitting parentheses and writing the above products as This extends naturally to the product of any number of matrices provided that the dimensions match. That is, if are matrices such that the number of columns of equals the number of rows of for , then the product is defined and does not depend on the order of the multiplications, if the order of the matrices is kept fixed. These properties may be proved by straightforward but complicated summation manipulations. This result also follows from the fact that matrices represent linear maps. Therefore, the associative property of matrices is simply a specific case of the associative property of function composition. Computational complexity depends on parenthesization Although the result of a sequence of matrix products does not depend on the order of operation (provided that the order of the matrices is not changed), the computational complexity may depend dramatically on this order. For example, if and are matrices of respective sizes , computing needs multiplications, while computing needs multiplications. Algorithms have been designed for choosing the best order of products; see Matrix chain multiplication. When the number of matrices increases, it has been shown that the choice of the best order has a complexity of Application to similarity Any invertible matrix defines a similarity transformation (on square matrices of the same size as ) Similarity transformations map products to products, that is In fact, one has Square matrices Let us denote the set of square matrices with entries in a ring , which, in practice, is often a field. In , the product is defined for every pair of matrices. This makes a ring, which has the identity matrix as identity element (the matrix whose diagonal entries are equal to 1 and all other entries are 0). This ring is also an associative -algebra. If , many matrices do not have a multiplicative inverse. For example, a matrix such that all entries of a row (or a column) are 0 does not have an inverse. If it exists, the inverse of a matrix is denoted , and thus verifies A matrix that has an inverse is an invertible matrix. Otherwise, it is a singular matrix. A product of matrices is invertible if and only if each factor is invertible. In this case, one has When is commutative, and, in particular, when it is a field, the determinant of a product is the product of the determinants. As determinants are scalars, and scalars commute, one thus has The other matrix invariants do not behave as well with products. 
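To make the parenthesization point concrete, the small Python sketch below counts scalar multiplications using the standard cost of p·q·r for a (p × q)(q × r) product; the dimensions are hypothetical values chosen only to show how large the gap between orders can be.

def chain_costs(p, q, r, s):
    # Scalar multiplications for computing a product of A (p x q),
    # B (q x r) and C (r x s) in the two possible orders.
    left_first = p * q * r + p * r * s    # (AB) first, then (AB)C
    right_first = q * r * s + p * q * s   # (BC) first, then A(BC)
    return left_first, right_first

# Hypothetical sizes: A is 10 x 100, B is 100 x 5, C is 5 x 50.
print(chain_costs(10, 100, 5, 50))        # (7500, 75000)

With these dimensions, multiplying left to right needs 7,500 scalar multiplications while the other order needs 75,000, a tenfold difference for the same result.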
Nevertheless, if is commutative, and have the same trace, the same characteristic polynomial, and the same eigenvalues with the same multiplicities. However, the eigenvectors are generally different if . Powers of a matrix One may raise a square matrix to any nonnegative integer power by multiplying it by itself repeatedly in the same way as for ordinary numbers. That is, Computing the th power of a matrix needs times the time of a single matrix multiplication, if it is done with the trivial algorithm (repeated multiplication). As this may be very time-consuming, one generally prefers using exponentiation by squaring, which requires less than matrix multiplications, and is therefore much more efficient. An easy case for exponentiation is that of a diagonal matrix. Since the product of diagonal matrices amounts to simply multiplying corresponding diagonal elements together, the th power of a diagonal matrix is obtained by raising the entries to the power : Abstract algebra The definition of matrix product requires that the entries belong to a semiring, and does not require multiplication of elements of the semiring to be commutative. In many applications, the matrix elements belong to a field, although the tropical semiring is also a common choice for graph shortest path problems. Even in the case of matrices over fields, the product is not commutative in general, although it is associative and is distributive over matrix addition. The identity matrices (which are the square matrices whose entries are zero outside of the main diagonal and 1 on the main diagonal) are identity elements of the matrix product. It follows that the matrices over a ring form a ring, which is noncommutative except if and the ground ring is commutative. A square matrix may have a multiplicative inverse, called an inverse matrix. In the common case where the entries belong to a commutative ring , a matrix has an inverse if and only if its determinant has a multiplicative inverse in . The determinant of a product of square matrices is the product of the determinants of the factors. The matrices that have an inverse form a group under matrix multiplication, the subgroups of which are called matrix groups. Many classical groups (including all finite groups) are isomorphic to matrix groups; this is the starting point of the theory of group representations. Matrices are the morphisms of a category, the category of matrices. The objects are the natural numbers that measure the size of matrices, and the composition of morphisms is matrix multiplication. The source of a morphism is the number of columns of the corresponding matrix, and the target is the number of rows. Computational complexity The matrix multiplication algorithm that results from the definition requires, in the worst case, multiplications and additions of scalars to compute the product of two square matrices. Its computational complexity is therefore , in a model of computation for which the scalar operations take constant time. Rather surprisingly, this complexity is not optimal, as shown in 1969 by Volker Strassen, who provided an algorithm, now called Strassen's algorithm, with a complexity of Strassen's algorithm can be parallelized to further improve the performance. The best peer-reviewed matrix multiplication algorithm is by Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou and has complexity . It is not known whether matrix multiplication can be performed in time. 
This would be optimal, since one must read the elements of a matrix in order to multiply it with another matrix. Since matrix multiplication forms the basis for many algorithms, and many operations on matrices even have the same complexity as matrix multiplication (up to a multiplicative constant), the computational complexity of matrix multiplication appears throughout numerical linear algebra and theoretical computer science. Generalizations Other types of products of matrices include: Block matrix operations Cracovian product, defined as Frobenius inner product, the dot product of matrices considered as vectors, or, equivalently the sum of the entries of the Hadamard product Hadamard product of two matrices of the same size, resulting in a matrix of the same size, which is the product entry-by-entry Kronecker product or tensor product, the generalization to any size of the preceding Khatri-Rao product and Face-splitting product Outer product, also called dyadic product or tensor product of two column matrices, which is Scalar multiplication
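As a brief sketch of several of these generalized products (assuming Python with NumPy; the matrices are arbitrary small examples), the Hadamard, Kronecker, outer, and Frobenius products each take a single line.

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

hadamard = A * B                      # entry-by-entry product, same shape as A and B
kronecker = np.kron(A, B)             # 4 x 4 block-structured Kronecker product
outer = np.outer([1, 2], [3, 4, 5])   # dyadic (outer) product of two vectors
frobenius = np.sum(A * B)             # Frobenius inner product: sum of the Hadamard entries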
Mathematics
Linear algebra
null
125293
https://en.wikipedia.org/wiki/Copper
Copper
Copper is a chemical element. It has the symbol Cu (), and the atomic number 29. It is a soft, malleable, and ductile metal with very high thermal and electrical conductivity. A freshly exposed surface of pure copper has a pinkish-orange color. Copper is used as a conductor of heat and electricity, as a building material, and as a constituent of various metal alloys, such as sterling silver used in jewelry, cupronickel used to make marine hardware and coins, and constantan used in strain gauges and thermocouples for temperature measurement. Copper is one of the few metals that can occur in nature in a directly usable metallic form. This means that copper is a native metal. This led to very early human use in several regions, from . Thousands of years later, it was the first metal to be smelted from sulfide ores, ; the first metal to be cast into a shape in a mold, ; and the first metal to be purposely alloyed with another metal, tin, to create bronze, . Commonly encountered compounds are copper(II) salts, which often impart blue or green colors to such minerals as azurite, malachite, and turquoise, and have been used widely and historically as pigments. Copper used in buildings, usually for roofing, oxidizes to form a green patina of compounds called verdigris. Copper is sometimes used in decorative art, both in its elemental metal form and in compounds as pigments. Copper compounds are used as bacteriostatic agents, fungicides, and wood preservatives. Copper is essential to all living organisms as a trace dietary mineral because it is a key constituent of the respiratory enzyme complex cytochrome c oxidase. In molluscs and crustaceans, copper is a constituent of the blood pigment hemocyanin, replaced by the iron-complexed hemoglobin in fish and other vertebrates. In humans, copper is found mainly in the liver, muscle, and bone. The adult body contains between 1.4 and 2.1 mg of copper per kilogram of body weight. Etymology In the Roman era, copper was mined principally on Cyprus, the origin of the name of the metal, from aes cyprium (metal of Cyprus), later corrupted to (Latin). (Old English) and copper were derived from this, the later spelling first used around 1530. Characteristics Physical Copper, silver, and gold are in group 11 of the periodic table; these three metals have one s-orbital electron on top of a filled d-electron shell and are characterized by high ductility, and electrical and thermal conductivity. The filled d-shells in these elements contribute little to interatomic interactions, which are dominated by the s-electrons through metallic bonds. Unlike metals with incomplete d-shells, metallic bonds in copper are lacking a covalent character and are relatively weak. This observation explains the low hardness and high ductility of single crystals of copper. At the macroscopic scale, introduction of extended defects to the crystal lattice, such as grain boundaries, hinders flow of the material under applied stress, thereby increasing its hardness. For this reason, copper is usually supplied in a fine-grained polycrystalline form, which has greater strength than monocrystalline forms. The softness of copper partly explains its high electrical conductivity () and high thermal conductivity, second highest (second only to silver) among pure metals at room temperature. 
This is because the resistivity to electron transport in metals at room temperature originates primarily from scattering of electrons on thermal vibrations of the lattice, which are relatively weak in a soft metal. The maximum possible current density of copper in open air is approximately , above which it begins to heat excessively. Copper is one of a few metallic elements with a natural color other than gray or silver. Pure copper is orange-red and acquires a reddish tarnish when exposed to air. This is due to the low plasma frequency of the metal, which lies in the red part of the visible spectrum, causing it to absorb the higher-frequency green and blue colors. As with other metals, if copper is put in contact with another metal in the presence of an electrolyte, galvanic corrosion will occur. Chemical Copper does not react with water, but it does slowly react with atmospheric oxygen to form a layer of brown-black copper oxide which, unlike the rust that forms on iron in moist air, protects the underlying metal from further corrosion (passivation). A green layer of verdigris (copper carbonate) can often be seen on old copper structures, such as the roofing of many older buildings and the Statue of Liberty. Copper tarnishes when exposed to some sulfur compounds, with which it reacts to form various copper sulfides. Isotopes There are 29 isotopes of copper. and are stable, with comprising approximately 69% of naturally occurring copper; both have a spin of . The other isotopes are radioactive, with the most stable being with a half-life of 61.83 hours. Seven metastable isomers have been characterized; is the longest-lived with a half-life of 3.8 minutes. Isotopes with a mass number above 64 decay by β−, whereas those with a mass number below 64 decay by β+. , which has a half-life of 12.7 hours, decays both ways. and have significant applications. is used in Cu-PTSM as a radioactive tracer for positron emission tomography. Occurrence Copper is produced in massive stars and is present in the Earth's crust in a proportion of about 50 parts per million (ppm). In nature, copper occurs in a variety of minerals, including native copper, copper sulfides such as chalcopyrite, bornite, digenite, covellite, and chalcocite, copper sulfosalts such as tetrahedrite-tennantite and enargite, copper carbonates such as azurite and malachite, and as copper(I) or copper(II) oxides such as cuprite and tenorite, respectively. The largest mass of elemental copper discovered weighed 420 tonnes and was found in 1857 on the Keweenaw Peninsula in Michigan, US. Native copper is a polycrystal, with the largest single crystal ever described measuring . Copper is the 26th most abundant element in Earth's crust, representing 50 ppm compared with 75 ppm for zinc, and 14 ppm for lead. Typical background concentrations of copper do not exceed in the atmosphere; in soil; in vegetation; 2 μg/L in freshwater; and in seawater. Production Most copper is mined or extracted as copper sulfides from large open pit mines in porphyry copper deposits that contain 0.4 to 1.0% copper. Sites include Chuquicamata, in Chile, Bingham Canyon Mine, in Utah, United States, and El Chino Mine, in New Mexico, United States. According to the British Geological Survey, in 2005, Chile was the top producer of copper with at least one-third of the world share, followed by the United States, Indonesia and Peru. Copper can also be recovered through the in-situ leach process. 
Several sites in the state of Arizona are considered prime candidates for this method. The amount of copper in use is increasing and the quantity available is barely sufficient to allow all countries to reach developed world levels of usage. An alternative source of copper currently being researched is polymetallic nodules, which are located on the floor of the Pacific Ocean at depths of approximately 3000–6500 meters below sea level. These nodules contain other valuable metals such as cobalt and nickel. Reserves and prices Copper has been in use for at least 10,000 years, but more than 95% of all copper ever mined and smelted has been extracted since 1900. As with many natural resources, the total amount of copper on Earth is vast, with around 10^14 tons in the top kilometer of Earth's crust, which is about 5 million years' worth at the current rate of extraction. However, only a tiny fraction of these reserves is economically viable with present-day prices and technologies. Estimates of copper reserves available for mining vary from 25 to 60 years, depending on core assumptions such as the growth rate. Recycling is a major source of copper in the modern world. The price of copper is volatile. After a peak in 2022 the price unexpectedly fell. The global market for copper is one of the most commodified and financialized of the commodity markets, and has been so for decades. Extraction The great majority of copper ores are sulfides. Common ores are the sulfides chalcopyrite (CuFeS2), bornite (Cu5FeS4) and, to a lesser extent, covellite (CuS) and chalcocite (Cu2S). These ores occur at the level of <1% Cu. Concentration of the ore is required, which begins with comminution followed by froth flotation. The remaining concentrate is smelted, which can be described with two simplified equations: Cuprous sulfide is oxidized to cuprous oxide: 2 Cu2S + 3 O2 → 2 Cu2O + 2 SO2 Cuprous oxide reacts with cuprous sulfide to convert to blister copper upon heating: 2 Cu2O + Cu2S → 6 Cu + SO2 This roasting gives matte copper, roughly 50% Cu by weight, which is purified by electrolysis. Depending on the ore, other metals such as platinum and gold are sometimes obtained during the electrolysis. Aside from sulfides, another family of ores is the oxides. Approximately 15% of the world's copper supply derives from these oxides. The beneficiation process for oxides involves extraction with sulfuric acid solutions followed by electrolysis. In parallel with the above method for "concentrated" sulfide and oxide ores, copper is recovered from mine tailings and heaps. A variety of methods are used, including leaching with sulfuric acid, ammonia, or ferric chloride. Biological methods are also used. A potential source of copper is polymetallic nodules, which have an estimated copper concentration of 1.3%. Recycling According to the International Resource Panel's Metal Stocks in Society report, the global per capita stock of copper in use in society is 35–55 kg. Much of this is in more-developed countries (140–300 kg per capita) rather than less-developed countries (30–40 kg per capita). In 2001, a typical automobile contained 20–30 kg of copper. Like aluminium, copper is recyclable without any loss of quality, both from its raw state and from manufactured products. An estimated 80% of all copper ever mined is still in use today. In volume, copper is the third most recycled metal after iron and aluminium. Recycled copper supplies about one-third of global demand. 
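Adding the two simplified smelting equations given under Extraction above and cancelling the cuprous oxide gives the overall conversion Cu2S + O2 → 2 Cu + SO2. The short Python sketch below turns that overall reaction into a rough mass balance using rounded atomic masses; the numbers are approximate and serve only as an illustration of the stoichiometry, not as process data.

# Rounded atomic masses in g/mol (approximate values, for illustration only).
CU, S, O = 63.55, 32.06, 16.00

m_cu2s = 2 * CU + S        # ~159.2 g/mol of cuprous sulfide (Cu2S)
m_cu = 2 * CU              # copper produced per mole of Cu2S
m_so2 = S + 2 * O          # sulfur dioxide released per mole of Cu2S

copper_per_tonne = 1000 * m_cu / m_cu2s    # ~799 kg of copper per tonne of Cu2S
so2_per_tonne = 1000 * m_so2 / m_cu2s      # ~403 kg of SO2 per tonne of Cu2S
print(copper_per_tonne, so2_per_tonne)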
The process of recycling copper is roughly the same as that used to extract copper but requires fewer steps. High-purity scrap copper is melted in a furnace and then reduced and cast into billets and ingots. Lower-purity scrap is melted to form black copper (70–90% pure, containing impurities such as iron, zinc, tin, and nickel), followed by oxidation of impurities in a converter to form blister copper (96–98% pure), which is then refined as before. Environmental impacts The environmental cost of copper mining was estimated at 3.7 kg CO2-eq per kg of copper in 2019. Codelco, a major producer in Chile, reported that in 2020 the company emitted 2.8 t CO2-eq per ton (2.8 kg CO2-eq per kg) of fine copper. Greenhouse gas emissions primarily arise from electricity consumed by the company, especially when sourced from fossil fuels, and from engines required for copper extraction and refinement. Companies that mine land often mismanage waste, rendering the area sterile for life. Additionally, nearby rivers and forests are also negatively impacted. The Philippines is an example of a region where land is overexploited by mining companies. Copper mining waste in Valea Şesei, Romania, has significantly altered nearby water properties. The water in the affected areas is highly acidic, with a pH range of 2.1–4.9, and shows elevated electrical conductivity levels between 280 and 1561 mS/cm. These changes in water chemistry make the environment inhospitable for fish, essentially rendering the water uninhabitable for aquatic life. Alloys Numerous copper alloys have been formulated, many with important uses. Brass is an alloy of copper and zinc. Bronze usually refers to copper-tin alloys, but can refer to any alloy of copper such as aluminium bronze. Copper is one of the most important constituents of silver and karat gold solders used in the jewelry industry, modifying the color, hardness and melting point of the resulting alloys. Some lead-free solders consist of tin alloyed with a small proportion of copper and other metals. The alloy of copper and nickel, called cupronickel, is used in low-denomination coins, often for the outer cladding. The US five-cent coin (currently called a nickel) consists of 75% copper and 25% nickel in homogeneous composition. Prior to the introduction of cupronickel, which was widely adopted by countries in the latter half of the 20th century, alloys of copper and silver were also used, with the United States using an alloy of 90% silver and 10% copper until 1965, when circulating silver was removed from all coins with the exception of the half dollar—these were debased to an alloy of 40% silver and 60% copper between 1965 and 1970. The alloy of 90% copper and 10% nickel, remarkable for its resistance to corrosion, is used for various objects exposed to seawater, though it is vulnerable to the sulfides sometimes found in polluted harbors and estuaries. Alloys of copper with aluminium (about 7%) have a golden color and are used in decorations. Shakudō is a Japanese decorative alloy of copper containing a low percentage of gold, typically 4–10%, that can be patinated to a dark blue or black color. Compounds Copper forms a rich variety of compounds, usually with oxidation states +1 and +2, which are often called cuprous and cupric, respectively. Copper compounds promote or catalyse numerous chemical and biological processes. Binary compounds As with other elements, the simplest compounds of copper are binary compounds, i.e. 
those containing only two elements, the principal examples being oxides, sulfides, and halides. Both cuprous and cupric oxides are known. Among the numerous copper sulfides, important examples include copper(I) sulfide (Cu2S) and copper monosulfide (CuS). Cuprous halides with fluorine, chlorine, bromine, and iodine are known, as are cupric halides with fluorine, chlorine, and bromine. Attempts to prepare copper(II) iodide yield only copper(I) iodide and iodine. 2 Cu2+ + 4 I− → 2 CuI + I2 Coordination chemistry Copper forms coordination complexes with ligands. In aqueous solution, copper(II) exists as . This complex exhibits the fastest water exchange rate (speed of water ligands attaching and detaching) for any transition metal aquo complex. Adding aqueous sodium hydroxide causes the precipitation of light blue solid copper(II) hydroxide. A simplified equation is: Cu2+ + 2 OH− → Cu(OH)2 Aqueous ammonia results in the same precipitate. Upon adding excess ammonia, the precipitate dissolves, forming tetraamminecopper(II): + 4 NH3 → + 2 H2O + 2 OH− Many other oxyanions form complexes; these include copper(II) acetate, copper(II) nitrate, and copper(II) carbonate. Copper(II) sulfate forms a blue crystalline pentahydrate, the most familiar copper compound in the laboratory. It is used in a fungicide called the Bordeaux mixture. Polyols, compounds containing more than one alcohol functional group, generally interact with cupric salts. For example, copper salts are used to test for reducing sugars. Specifically, using Benedict's reagent and Fehling's solution, the presence of the sugar is signaled by a color change from blue Cu(II) to reddish copper(I) oxide. Schweizer's reagent and related complexes with ethylenediamine and other amines dissolve cellulose. Amino acids such as cystine form very stable chelate complexes with copper(II), including in the form of metal-organic biohybrids (MOBs). Many wet-chemical tests for copper ions exist, one involving potassium ferrocyanide, which gives a red-brown precipitate with copper(II) salts. Organocopper chemistry Compounds that contain a carbon-copper bond are known as organocopper compounds. They are very reactive towards oxygen to form copper(I) oxide and have many uses in chemistry. They are synthesized by treating copper(I) compounds with Grignard reagents, terminal alkynes, or organolithium reagents; in particular, the last reaction described produces a Gilman reagent. These can undergo substitution with alkyl halides to form coupling products; as such, they are important in the field of organic synthesis. Copper(I) acetylide is highly shock-sensitive but is an intermediate in reactions such as the Cadiot–Chodkiewicz coupling and the Sonogashira coupling. Conjugate addition to enones and carbocupration of alkynes can also be achieved with organocopper compounds. Copper(I) forms a variety of weak complexes with alkenes and carbon monoxide, especially in the presence of amine ligands. Copper(III) and copper(IV) Copper(III) is most often found in oxides. A simple example is potassium cuprate, KCuO2, a blue-black solid. The most extensively studied copper(III) compounds are the cuprate superconductors. Yttrium barium copper oxide (YBa2Cu3O7) consists of both Cu(II) and Cu(III) centres. Like oxide, fluoride is a highly basic anion and is known to stabilize metal ions in high oxidation states. Both copper(III) and even copper(IV) fluorides are known, K3CuF6 and Cs2CuF6, respectively. 
Some copper proteins form oxo complexes, which, in extensively studied synthetic analog systems, feature copper(III). With tetrapeptides, purple-colored copper(III) complexes are stabilized by the deprotonated amide ligands. Complexes of copper(III) are also found as intermediates in reactions of organocopper compounds, for example in the Kharasch–Sosnovsky reaction. History A timeline of copper illustrates how this metal has advanced human civilization for the past 11,000 years. Prehistoric Copper Age Copper occurs naturally as native metallic copper and was known to some of the oldest civilizations on record. The history of copper use dates to 9000 BC in the Middle East; a copper pendant was found in northern Iraq that dates to 8700 BC. Evidence suggests that gold and meteoric iron (but not smelted iron) were the only metals used by humans before copper. The history of copper metallurgy is thought to follow this sequence: first, cold working of native copper, then annealing, smelting, and, finally, lost-wax casting. In southeastern Anatolia, all four of these techniques appear more or less simultaneously at the beginning of the Neolithic. Copper smelting was independently invented in different places. The earliest evidence of lost-wax casting of copper comes from an amulet found in Mehrgarh, Pakistan, and is dated to 4000 BC. Investment casting was invented in 4500–4000 BC in Southeast Asia. Smelting was probably discovered in China before 2800 BC, in Central America around 600 AD, and in West Africa about the 9th or 10th century AD. Carbon dating has established mining at Alderley Edge in Cheshire, UK, at 2280 to 1890 BC. Ötzi the Iceman, a male dated from 3300 to 3200 BC, was found with an axe with a 99.7% pure copper head; high levels of arsenic in his hair suggest an involvement in copper smelting. Experience with copper has assisted the development of other metals; in particular, copper smelting likely led to the discovery of iron smelting. Production in the Old Copper Complex in Michigan and Wisconsin is dated between 6500 and 3000 BC. A copper spearpoint found in Wisconsin has been dated to 6500 BC. Copper usage by the indigenous peoples of the Old Copper Complex from the Great Lakes region of North America has been radiometrically dated to as far back as 7500 BC. Indigenous peoples of North America around the Great Lakes may have also been mining copper during this time, making it one of the oldest known examples of copper extraction in the world. There is evidence from prehistoric lead pollution from lakes in Michigan that people in the region began mining copper. Evidence suggests that utilitarian copper objects fell increasingly out of use in the Old Copper Complex of North America during the Bronze Age and a shift towards an increased production of ornamental copper objects occurred. Bronze Age Natural bronze, a type of copper made from ores rich in silicon, arsenic, and (rarely) tin, came into general use in the Balkans around 5500 BC. Alloying copper with tin to make bronze was first practiced about 4000 years after the discovery of copper smelting, and about 2000 years after "natural bronze" had come into general use. Bronze artifacts from the Vinča culture date to 4500 BC. Sumerian and Egyptian artifacts of copper and bronze alloys date to 3000 BC. Egyptian Blue, or cuprorivaite (calcium copper silicate), is a synthetic pigment that contains copper and started being used in ancient Egypt around 3250 BC. 
The manufacturing process of Egyptian blue was known to the Romans, but by the fourth century AD the pigment fell out of use and the secret to its manufacturing process became lost. The Romans said the blue pigment was made from copper, silica, lime and natron and was known to them as caeruleum. The Bronze Age began in Southeastern Europe around 3700–3300 BC, in Northwestern Europe about 2500 BC. It ended with the beginning of the Iron Age, 2000–1000 BC in the Near East, and 600 BC in Northern Europe. The transition between the Neolithic period and the Bronze Age was formerly termed the Chalcolithic period (copper-stone), when copper tools were used with stone tools. The term has gradually fallen out of favor because in some parts of the world, the Chalcolithic and Neolithic are coterminous at both ends. Brass, an alloy of copper and zinc, is of much more recent origin. It was known to the Greeks, but became a significant supplement to bronze during the Roman Empire. Ancient and post-classical In Greece, copper was known by the name chalkos (χαλκός). It was an important resource for the Romans, Greeks and other ancient peoples. In Roman times, it was known as aes Cyprium, aes being the generic Latin term for copper alloys and Cyprium from Cyprus, where much copper was mined. The phrase was simplified to cuprum, hence the English copper. Aphrodite (Venus in Rome) represented copper in mythology and alchemy because of its lustrous beauty and its ancient use in producing mirrors; Cyprus, the source of copper, was sacred to the goddess. The seven heavenly bodies known to the ancients were associated with the seven metals known in antiquity, and Venus was assigned to copper, both because of the connection to the goddess and because Venus was the brightest heavenly body after the Sun and Moon and so corresponded to the most lustrous and desirable metal after gold and silver. Copper was first mined in ancient Britain as early as 2100 BC. Mining at the largest of these mines, the Great Orme, continued into the late Bronze Age. Mining seems to have been largely restricted to supergene ores, which were easier to smelt. The rich copper deposits of Cornwall seem to have been largely untouched, in spite of extensive tin mining in the region, for reasons likely social and political rather than technological. In North America, native copper is known to have been extracted from sites on Isle Royale with primitive stone tools between 800 and 1600 AD. Copper annealing was being performed in the North American city of Cahokia around 1000–1300 AD. Several exquisite copper plates, known as the Mississippian copper plates, have been found in North America in the area around Cahokia, dating from this time period (1000–1300 AD). The copper plates were thought to have been manufactured at Cahokia before ending up elsewhere in the Midwest and southeastern United States, such as the Wulfing cache and Etowah plates. In South America, a copper mask dated to 1000 BC, found in the Argentinian Andes, is the oldest known copper artifact discovered in the Andes. Peru has been considered the origin of early copper metallurgy in pre-Columbian America, but the copper mask from Argentina suggests that the Cajón del Maipo of the southern Andes was another important center for early copper workings in South America. Copper metallurgy was flourishing in South America, particularly in Peru around 1000 AD. 
Copper burial ornamentals from the 15th century have been uncovered, but the metal's commercial production did not start until the early 20th century. The cultural role of copper has been important, particularly in currency. Romans in the 6th through 3rd centuries BC used copper lumps as money. At first, the copper itself was valued, but gradually the shape and look of the copper became more important. Julius Caesar had his own coins made from brass, while Octavianus Augustus Caesar's coins were made from Cu-Pb-Sn alloys. With an estimated annual output of around 15,000 t, Roman copper mining and smelting activities reached a scale unsurpassed until the time of the Industrial Revolution; the provinces most intensely mined were those of Hispania, Cyprus and in Central Europe. The gates of the Temple of Jerusalem used Corinthian bronze treated with depletion gilding. The process was most prevalent in Alexandria, where alchemy is thought to have begun. In ancient India, copper was used in the holistic medical science Ayurveda for surgical instruments and other medical equipment. Ancient Egyptians (~2400 BC) used copper for sterilizing wounds and drinking water, and later to treat headaches, burns, and itching. Modern The Great Copper Mountain was a mine in Falun, Sweden, that operated from the 10th century to 1992. It satisfied two-thirds of Europe's copper consumption in the 17th century and helped fund many of Sweden's wars during that time. It was referred to as the nation's treasury; Sweden had a copper backed currency. Copper is used in roofing, currency, and for photographic technology known as the daguerreotype. Copper was used in Renaissance sculpture, and was used to construct the Statue of Liberty; copper continues to be used in construction of various types. Copper plating and copper sheathing were widely used to protect the under-water hulls of ships, a technique pioneered by the British Admiralty in the 18th century. The Norddeutsche Affinerie in Hamburg was the first modern electroplating plant, starting its production in 1876. The German scientist Gottfried Osann invented powder metallurgy in 1830 while determining the metal's atomic mass; around then it was discovered that the amount and type of alloying element (e.g., tin) to copper would affect bell tones. During the rise in demand for copper for the Age of Electricity, from the 1880s until the Great Depression of the 1930s, the United States produced one third to half the world's newly mined copper. Major districts included the Keweenaw district of northern Michigan, primarily native copper deposits, which was eclipsed by the vast sulphide deposits of Butte, Montana, in the late 1880s, which itself was eclipsed by porphyry deposits of the Southwest United States, especially at Bingham Canyon, Utah, and Morenci, Arizona. Introduction of open pit steam shovel mining and innovations in smelting, refining, flotation concentration and other processing steps led to mass production. Early in the twentieth century, Arizona ranked first, followed by Montana, then Utah and Michigan. Flash smelting was developed by Outokumpu in Finland and first applied at Harjavalta in 1949; the energy-efficient process accounts for 50% of the world's primary copper production. 
The Intergovernmental Council of Copper Exporting Countries, formed in 1967 by Chile, Peru, Zaire and Zambia, operated in the copper market as OPEC does in oil, though it never achieved the same influence, particularly because the second-largest producer, the United States, was never a member; it was dissolved in 1988. In 2008, China became the world's largest importer of copper and has continued to be as of at least 2023. Applications The major applications of copper are electrical wire (60%), roofing and plumbing (20%), and industrial machinery (15%). Copper is used mostly as a pure metal, but when greater hardness is required, it is put into such alloys as brass and bronze (5% of total use). For more than two centuries, copper paint has been used on boat hulls to control the growth of plants and shellfish. A small part of the copper supply is used for nutritional supplements and fungicides in agriculture. Machining of copper is possible, although alloys are preferred for good machinability in creating intricate parts. Wire and cable Despite competition from other materials, copper remains the preferred electrical conductor in nearly all categories of electrical wiring except overhead electric power transmission where aluminium is often preferred. Copper wire is used in power generation, power transmission, power distribution, telecommunications, electronics circuitry, and countless types of electrical equipment. Electrical wiring is the most important market for the copper industry. This includes structural power wiring, power distribution cable, appliance wire, communications cable, automotive wire and cable, and magnet wire. Roughly half of all copper mined is used for electrical wire and cable conductors. Many electrical devices rely on copper wiring because of its multitude of inherent beneficial properties, such as its high electrical conductivity, tensile strength, ductility, creep (deformation) resistance, corrosion resistance, low thermal expansion, high thermal conductivity, ease of soldering, malleability, and ease of installation. For a short period from the late 1960s to the late 1970s, copper wiring was replaced by aluminium wiring in many housing construction projects in America. The new wiring was implicated in a number of house fires and the industry returned to copper. Electronics and related devices Integrated circuits and printed circuit boards increasingly feature copper in place of aluminium because of its superior electrical conductivity; heat sinks and heat exchangers use copper because of its superior heat dissipation properties. Electromagnets, vacuum tubes, cathode-ray tubes, and magnetrons in microwave ovens use copper, as do waveguides for microwave radiation. Electric motors Copper's superior conductivity enhances the efficiency of electrical motors. This is important because motors and motor-driven systems account for 43–46% of all global electricity consumption and 69% of all electricity used by industry. Increasing the mass and cross section of copper in a coil increases the efficiency of the motor. Copper motor rotors, a new technology designed for motor applications where energy savings are prime design objectives, are enabling general-purpose induction motors to meet and exceed National Electrical Manufacturers Association (NEMA) premium efficiency standards. Renewable energy production Architecture Copper has been used since ancient times as a durable, corrosion resistant, and weatherproof architectural material. 
Roofs, flashings, rain gutters, downspouts, domes, spires, vaults, and doors have been made from copper for hundreds or thousands of years. Copper's architectural use has been expanded in modern times to include interior and exterior wall cladding, building expansion joints, radio frequency shielding, and antimicrobial and decorative indoor products such as attractive handrails, bathroom fixtures, and counter tops. Some of copper's other important benefits as an architectural material include low thermal movement, light weight, lightning protection, and recyclability. The metal's distinctive natural green patina has long been coveted by architects and designers. The final patina is a particularly durable layer that is highly resistant to atmospheric corrosion, thereby protecting the underlying metal against further weathering. It can be a mixture of carbonate and sulfate compounds in various amounts, depending upon environmental conditions such as sulfur-containing acid rain. Architectural copper and its alloys can also be 'finished' to take on a particular look, feel, or color. Finishes include mechanical surface treatments, chemical coloring, and coatings. Copper has excellent brazing and soldering properties and can be welded; the best results are obtained with gas metal arc welding. Antibiofouling Copper is biostatic, meaning bacteria and many other forms of life will not grow on it. For this reason it has long been used to line parts of ships to protect against barnacles and mussels. It was originally used pure, but has since been superseded by Muntz metal and copper-based paint. Similarly, as discussed in copper alloys in aquaculture, copper alloys have become important netting materials in the aquaculture industry because they are antimicrobial and prevent biofouling, even in extreme conditions and have strong structural and corrosion-resistant properties in marine environments. Antimicrobial Copper-alloy touch surfaces have natural properties that destroy a wide range of microorganisms (e.g., E. coli O157:H7, methicillin-resistant Staphylococcus aureus (MRSA), Staphylococcus, Clostridium difficile, influenza A virus, adenovirus, SARS-CoV-2, and fungi). Indians have been using copper vessels since ancient times for storing water, even before modern science realized its antimicrobial properties. Some copper alloys were proven to kill more than 99.9% of disease-causing bacteria within just two hours when cleaned regularly. The United States Environmental Protection Agency (EPA) has approved the registrations of these copper alloys as "antimicrobial materials with public health benefits"; that approval allows manufacturers to make legal claims to the public health benefits of products made of registered alloys. In addition, the EPA has approved a long list of antimicrobial copper products made from these alloys, such as bedrails, handrails, over-bed tables, sinks, faucets, door knobs, toilet hardware, computer keyboards, health club equipment, and shopping cart handles. Copper doorknobs are used by hospitals to reduce the transfer of disease, and Legionnaires' disease is suppressed by copper tubing in plumbing systems. Antimicrobial copper alloy products are now being installed in healthcare facilities in the U.K., Ireland, Japan, Korea, France, Denmark, and Brazil, as well as being called for in the US, and in the subway transit system in Santiago, Chile, where copper–zinc alloy handrails were installed in some 30 stations between 2011 and 2014. 
Textile fibers can be blended with copper to create antimicrobial protective fabrics. Copper demand Total world production in 2023 is expected to be almost 23 million metric tons. Copper demand is increasing due to the ongoing energy transition to electricity. China accounts for over half the demand. For some purposes, other metals can substitute, aluminium wire was substituted in many applications, but improper design resulted in fire hazards. The safety issues have since been solved by use of larger sizes of aluminium wire (#8AWG and up), and properly designed aluminium wiring is still being installed in place of copper. For example, the Airbus A380 uses aluminum wire in place of copper wire for electrical power transmission. Speculative investing Copper may be used as a speculative investment due to the predicted increase in use from worldwide infrastructure growth, and the important role it has in producing wind turbines, solar panels, and other renewable energy sources. Another reason predicted demand increases is the fact that electric cars contain an average of 3.6 times as much copper as conventional cars, although the effect of electric cars on copper demand is debated. Some people invest in copper through copper mining stocks, ETFs, and futures. Others store physical copper in the form of copper bars or rounds although these tend to carry a higher premium in comparison to precious metals. Those who want to avoid the premiums of copper bullion alternatively store old copper wire, copper tubing or American pennies made before 1982. Folk medicine Copper is commonly used in jewelry, and according to some folklore, copper bracelets relieve arthritis symptoms. In one trial for osteoarthritis and one trial for rheumatoid arthritis, no differences were found between copper bracelet and control (non-copper) bracelet. No evidence shows that copper can be absorbed through the skin. If it were, it might lead to copper poisoning. Degradation Chromobacterium violaceum and Pseudomonas fluorescens can both mobilize solid copper as a cyanide compound. The ericoid mycorrhizal fungi associated with Calluna, Erica and Vaccinium can grow in metalliferous soils containing copper. The ectomycorrhizal fungus Suillus luteus protects young pine trees from copper toxicity. A sample of the fungus Aspergillus niger was found growing from gold mining solution and was found to contain cyano complexes of such metals as gold, silver, copper, iron, and zinc. The fungus also plays a role in the solubilization of heavy metal sulfides. Biological role Biochemistry Copper proteins have diverse roles in biological electron transport and oxygen transportation, processes that exploit the easy interconversion of Cu(I) and Cu(II). Copper is essential in the aerobic respiration of all eukaryotes. In mitochondria, it is found in cytochrome c oxidase, which is the last protein in oxidative phosphorylation. Cytochrome c oxidase is the protein that binds the O2 between a copper and an iron; the protein transfers 4 electrons to the O2 molecule to reduce it to two molecules of water. 
Copper is also found in many superoxide dismutases, proteins that catalyze the decomposition of superoxides by converting it (by disproportionation) to oxygen and hydrogen peroxide: Cu2+-SOD + O2− → Cu+-SOD + O2 (reduction of copper; oxidation of superoxide) Cu+-SOD + O2− + 2H+ → Cu2+-SOD + H2O2 (oxidation of copper; reduction of superoxide) The protein hemocyanin is the oxygen carrier in most mollusks and some arthropods such as the horseshoe crab (Limulus polyphemus). Because hemocyanin is blue, these organisms have blue blood rather than the red blood of iron-based hemoglobin. Structurally related to hemocyanin are the laccases and tyrosinases. Instead of reversibly binding oxygen, these proteins hydroxylate substrates, illustrated by their role in the formation of lacquers. The biological role for copper commenced with the appearance of oxygen in Earth's atmosphere. Several copper proteins, such as the "blue copper proteins", do not interact directly with substrates; hence they are not enzymes. These proteins relay electrons by the process called electron transfer. A unique tetranuclear copper center has been found in nitrous-oxide reductase. Chemical compounds which were developed for treatment of Wilson's disease have been investigated for use in cancer therapy. Nutrition Copper is an essential trace element in plants and animals, but not all microorganisms. The human body contains copper at a level of about 1.4 to 2.1 mg per kg of body mass. Absorption Copper is absorbed in the gut, then transported to the liver bound to albumin. After processing in the liver, copper is distributed to other tissues in a second phase, which involves the protein ceruloplasmin, carrying the majority of copper in blood. Ceruloplasmin also carries the copper that is excreted in milk, and is particularly well-absorbed as a copper source. Copper in the body normally undergoes enterohepatic circulation (about 5 mg a day, vs. about 1 mg per day absorbed in the diet and excreted from the body), and the body is able to excrete some excess copper, if needed, via bile, which carries some copper out of the liver that is not then reabsorbed by the intestine. Dietary recommendations The U.S. Institute of Medicine (IOM) updated the estimated average requirements (EARs) and recommended dietary allowances (RDAs) for copper in 2001. If there is not sufficient information to establish EARs and RDAs, an estimate designated Adequate Intake (AI) is used instead. The AIs for copper are: 200 μg of copper for 0–6-month-old males and females, and 220 μg of copper for 7–12-month-old males and females. For both sexes, the RDAs for copper are: 340 μg of copper for 1–3 years old, 440 μg of copper for 4–8 years old, 700 μg of copper for 9–13 years old, 890 μg of copper for 14–18 years old and 900 μg of copper for ages 19 years and older. For pregnancy, 1,000 μg. For lactation, 1,300 μg. As for safety, the IOM also sets tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of copper, the UL is set at 10 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women and men ages 18 and older, the AIs are set at 1.3 and 1.6 mg/day, respectively. 
The AIs for pregnancy and lactation are 1.5 mg/day. For children ages 1–17 years, the AIs increase with age from 0.7 to 1.3 mg/day. These AIs are higher than the U.S. RDAs. The European Food Safety Authority reviewed the same safety question and set its UL at 5 mg/day, which is half the U.S. value. For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For copper labeling purposes, 100% of the Daily Value was 2.0 mg, but it was later revised to 0.9 mg to bring it into agreement with the RDA. A table of the old and new adult daily values is provided at Reference Daily Intake.
Deficiency
Because of its role in facilitating iron uptake, copper deficiency can produce anemia-like symptoms, neutropenia, bone abnormalities, hypopigmentation, impaired growth, increased incidence of infections, osteoporosis, hyperthyroidism, and abnormalities in glucose and cholesterol metabolism. Conversely, Wilson's disease causes an accumulation of copper in body tissues. Severe deficiency can be found by testing for low plasma or serum copper levels, low ceruloplasmin, and low red blood cell superoxide dismutase levels; these are not sensitive to marginal copper status. The "cytochrome c oxidase activity of leucocytes and platelets" has been stated as another indicator of deficiency, but the results have not been confirmed by replication.
Toxicity
Gram quantities of various copper salts have been taken in suicide attempts and produced acute copper toxicity in humans, possibly due to redox cycling and the generation of reactive oxygen species that damage DNA. Corresponding amounts of copper salts (30 mg/kg) are toxic in animals. A minimum dietary value for healthy growth in rabbits has been reported to be at least 3 ppm in the diet. However, higher concentrations of copper (100 ppm, 200 ppm, or 500 ppm) in the diet of rabbits may favorably influence feed conversion efficiency, growth rates, and carcass dressing percentages. Chronic copper toxicity does not normally occur in humans because of transport systems that regulate absorption and excretion. Autosomal recessive mutations in copper transport proteins can disable these systems, leading to Wilson's disease with copper accumulation and cirrhosis of the liver in persons who have inherited two defective genes. Elevated copper levels have also been linked to worsening symptoms of Alzheimer's disease.
Human exposure
In the US, the Occupational Safety and Health Administration (OSHA) has designated a permissible exposure limit (PEL) for copper dust and fumes in the workplace as a time-weighted average (TWA) of 1 mg/m3. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 1 mg/m3, time-weighted average. The IDLH (immediately dangerous to life and health) value is 100 mg/m3. Copper is a constituent of tobacco smoke. The tobacco plant readily absorbs and accumulates heavy metals, such as copper, from the surrounding soil into its leaves. These are readily absorbed into the user's body following smoke inhalation. The health implications are not clear.
Physical sciences
Chemistry
null
125296
https://en.wikipedia.org/wiki/Delusion
Delusion
A delusion is a false fixed belief that is not amenable to change in light of conflicting evidence. As a pathology, it is distinct from a belief based on false or incomplete information, confabulation, dogma, illusion, hallucination, or some other misleading effects of perception, as individuals with those beliefs are able to change or readjust their beliefs upon reviewing the evidence. However: "The distinction between a delusion and a strongly held idea is sometimes difficult to make and depends in part on the degree of conviction with which the belief is held despite clear or reasonable contradictory evidence regarding its veracity." Delusions have been found to occur in the context of many pathological states (both general physical and mental) and are of particular diagnostic importance in psychotic disorders including schizophrenia, paraphrenia, manic episodes of bipolar disorder, and psychotic depression. Types Delusions are categorized into four different groups: Bizarre delusion: Delusions are deemed bizarre if they are clearly implausible and not understandable to same-culture peers and do not derive from ordinary life experiences. An example named by the DSM-5 is a belief that someone replaced all of one's internal organs with someone else's without leaving a scar, depending on the organ in question. Non-bizarre delusion: A delusion that, though false, reflects real–life situations and is at least technically possible; it may include feelings of being followed, poisoned, infected etc. e.g., the affected person mistakenly believes that they are under constant police surveillance. Mood-congruent delusion: Any delusion with content consistent with either a depressive or manic state, e.g., a depressed person believes that news anchors on television highly disapprove of them, or a person in a manic state might believe they are a powerful deity. Mood-neutral delusion: A delusion that does not relate to the patient's emotional state; for example, a belief that an extra limb is growing out of the back of one's head is neutral to either depression or mania. French psychiatry (which is influenced by psychoanalysis), however, also establishes a difference between "paranoid" (paranoïde) and "paranoiac" (paranoïaque) delusion. The paranoid delusion, observed in schizophrenia, is non-systematized and is characterized by a disorganized structure and confused speech and thoughts. The paranoiac delusion, observed in paraphrenia, is highly systematized (which means it is very organized and clear) and is focused on a single theme. Themes In addition to these categories, delusions often manifest according to a consistent theme. Although delusions can have any theme, certain themes are more common. Some of the more common delusion themes are: Delusion of control: False belief that another person, group of people, or external force controls one's general thoughts, feelings, impulses, or behaviors. Delusional jealousy: False belief that a spouse or lover is having an affair, with no proof to back up the claim. Delusion of guilt or sin (or delusion of self-accusation): Ungrounded feeling of remorse or guilt of delusional intensity. Thought broadcasting: False belief that other people can know one's thoughts. Delusion of thought insertion: Belief that another thinks through the mind of the person. Persecutory delusions: False belief that one is being persecuted. Delusion of reference: False belief that insignificant remarks, events, or objects in one's environment have personal meaning or significance. 
"Usually the meaning assigned to these events is negative, but the 'messages' can also have a grandiose quality." Erotomania: False belief that another person is in love with them. Religious delusion: Belief that the affected person is a god or chosen to act as a god. Somatic delusion: Delusion whose content pertains to bodily functioning, bodily sensations or physical appearance. Usually the false belief is that the body is somehow diseased, abnormal or changed. A specific example of this delusion is delusional parasitosis: Delusion in which one feels infested with insects, bacteria, mites, spiders, lice, fleas, worms, or other organisms. Delusion of poverty: Person strongly believes they are financially incapacitated. Although this type of delusion is less common now, it was particularly widespread in the days preceding state support. Grandiose delusions Grandiose delusions or delusions of grandeur are principally a subtype of delusional disorder but could possibly feature as a symptom of schizophrenia and manic episodes of bipolar disorder. Grandiose delusions are characterized by fantastical beliefs that one is famous, omnipotent or otherwise very powerful. The delusions are generally fantastic, often with a supernatural, science-fictional, or religious bent. In colloquial usage, one who overestimates one's own abilities, talents, stature or situation is sometimes said to have "delusions of grandeur". This is generally due to excessive pride, rather than any actual delusions. Grandiose delusions or delusions of grandeur can also be associated with megalomania. Persecutory delusions Persecutory delusions are the most common type of delusions and involve the theme of being followed, harassed, cheated, poisoned or drugged, conspired against, spied on, attacked, or otherwise obstructed in the pursuit of goals. Persecutory delusions are a condition in which the affected person wrongly believes that they are being persecuted. Specifically, they have been defined as containing two central elements: The individual thinks that: harm is occurring, or is going to occur the persecutors have the intention to cause harm According to the DSM-IV-TR, persecutory delusions are the most common form of delusions in schizophrenia, where the person believes they are "being tormented, followed, sabotaged, tricked, spied on, or ridiculed". In the DSM-IV-TR, persecutory delusions are the main feature of the persecutory type of delusional disorder. When the focus is to remedy some injustice by legal action, they are sometimes called "querulous paranoia". Causes Explaining the causes of delusions continues to be challenging and several theories have been developed. One is the genetic or biological theory, which states that close relatives of people with delusional disorder are at increased risk of delusional traits. Another theory is the dysfunctional cognitive processing, which states that delusions may arise from distorted ways people have of explaining life to themselves. A third theory is called motivated or defensive delusions. This one states that some of those persons who are predisposed might experience the onset of delusional disorder in those moments when coping with life and maintaining high self-esteem becomes a significant challenge. In this case, the person views others as the cause of their personal difficulties in order to preserve a positive self-view. This condition is more common among people who have poor hearing or sight. 
Also, ongoing stressors have been associated with a higher possibility of developing delusions. Examples of such stressors are immigration, low socioeconomic status, and even possibly the accumulation of smaller daily struggles. Specific delusions The top two factors mainly concerned in the germination of delusions are disorder of brain functioning and background influences of temperament and personality. Higher levels of dopamine qualify as a sign of disorders of brain function. That they are needed to sustain certain delusions was examined by a preliminary study on delusional disorder (a psychotic syndrome) instigated to clarify if schizophrenia had a dopamine psychosis. There were positive results - delusions of jealousy and persecution had different levels of dopamine metabolite HVA and homovanillyl alcohol (which may have been genetic). These can be only regarded as tentative results; the study called for future research with a larger population. It is simplistic to say that a certain measure of dopamine will bring about a specific delusion. Studies show age and gender to be influential and it is most likely that HVA levels change during the life course of some syndromes. On the influence of personality, it has been said: "Jaspers considered there is a subtle change in personality due to the illness itself; and this creates the condition for the development of the delusional atmosphere in which the delusional intuition arises." Cultural factors have "a decisive influence in shaping delusions". For example, delusions of guilt and punishment are frequent in a Western, Christian country like Austria, but not in Pakistan, where it is more likely persecution. Similarly, in a series of case studies, delusions of guilt and punishment were found in Austrian patients with Parkinson's being treated with l-dopa, a dopamine agonist. Pathophysiology The two-factor model of delusions posits that dysfunction in both belief formation systems and belief evaluation systems are necessary for delusions. Dysfunction in evaluations systems localized to the right lateral prefrontal cortex, regardless of delusion content, is supported by neuroimaging studies and is congruent with its role in conflict monitoring in healthy persons. Abnormal activation and reduced volume is seen in people with delusions, as well as in disorders associated with delusions such as frontotemporal dementia, psychosis and Lewy body dementia. Furthermore, lesions to this region are associated with "jumping to conclusions", damage to this region is associated with post-stroke delusions, and hypometabolism this region associated with caudate strokes presenting with delusions. The aberrant salience model suggests that delusions are a result of people assigning excessive importance to irrelevant stimuli. In support of this hypothesis, regions normally associated with the salience network demonstrate reduced grey matter in people with delusions, and the neurotransmitter dopamine, which is widely implicated in salience processing, is also widely implicated in psychotic disorders. Specific regions have been associated with specific types of delusions. The volume of the hippocampus and parahippocampus is related to paranoid delusions in Alzheimer's disease, and has been reported to be abnormal post mortem in one person with delusions. Capgras delusions have been associated with occipito-temporal damage and may be related to failure to elicit normal emotions or memories in response to faces. 
Diagnosis The modern definition and Jaspers' original criteria have been criticised, as counter-examples can be shown for every defining feature. Studies on psychiatric patients show that delusions vary in intensity and conviction over time, which suggests that certainty and incorrigibility are not necessary components of a delusional belief. Delusions do not necessarily have to be false or 'incorrect inferences about external reality'. Some religious or spiritual beliefs by their nature may not be falsifiable, and hence cannot be described as false or incorrect, no matter whether the person holding these beliefs was diagnosed as delusional or not. In other situations the delusion may turn out to be true belief. For example, in delusional jealousy, where a person believes that their partner is being unfaithful (and may even follow them into the bathroom believing them to be seeing their lover even during the briefest of partings), it may actually be true that the partner is having sexual relations with another person. In this case, the delusion does not cease to be a delusion because the content later turns out to be verified as true or the partner actually chose to engage in the behavior of which they were being accused. In other cases, the belief may be mistakenly assumed to be false by a doctor or psychiatrist assessing it, just because it seems to be unlikely, bizarre or held with excessive conviction. Psychiatrists rarely have the time or resources to check the validity of a person's claims leading to some true beliefs to be erroneously classified as delusional. This is known as the Martha Mitchell effect, after the wife of the attorney general who alleged that illegal activity was taking place in the White House. At the time, her claims were thought to be signs of mental illness, and only after the Watergate scandal broke was she proved right (and hence sane). Similar factors have led to criticisms of Jaspers' definition of true delusions as being ultimately 'un-understandable'. Critics (such as R. D. Laing) have argued that this leads to the diagnosis of delusions being based on the subjective understanding of a particular psychiatrist, who may not have access to all the information that might make a belief otherwise interpretable. R. D. Laing's hypothesis has been applied to some forms of projective therapy to "fix" a delusional system so that it cannot be altered by the patient. Psychiatric researchers at Yale University, Ohio State University and the Community Mental Health Center of Middle Georgia have used novels and motion picture films as the focus. Texts, plots and cinematography are discussed and the delusions approached tangentially. This use of fiction to decrease the malleability of a delusion was employed in a joint project by science-fiction author Philip Jose Farmer and Yale psychiatrist A. James Giannini. They wrote the novel Red Orc's Rage, which, recursively, deals with delusional adolescents who are treated with a form of projective therapy. In this novel's fictional setting other novels written by Farmer are discussed and the characters are symbolically integrated into the delusions of fictional patients. This particular novel was then applied to real-life clinical settings. Another difficulty with the diagnosis of delusions is that almost all of these features can be found in "normal" beliefs. Many religious beliefs hold exactly the same features, yet are not universally considered delusional. 
For instance, if a person was holding a true belief then they will of course persist with it. This can cause the disorder to be misdiagnosed by psychiatrists. These factors have led the psychiatrist Anthony David to note that "there is no acceptable (rather than accepted) definition of a delusion." In practice, psychiatrists tend to diagnose a belief as delusional if it is either patently bizarre, causing significant distress, or excessively pre-occupying the patient, especially if the person is subsequently unswayed in belief by counter-evidence or reasonable arguments. Joseph Pierre, M.D. states that one factor that helps differentiate delusions from other kinds of beliefs is that anomalous subjective experiences are often used to justify delusional beliefs. While idiosyncratic and self-referential content often make delusions impossible to share with others, Pierre suggests that it may be more helpful to emphasize the level of conviction, preoccupation, and extension of a belief rather than the content of the belief when considering whether a belief is delusional. It is important to distinguish true delusions from other symptoms such as anxiety, fear, or paranoia. To diagnose delusions a mental state examination may be used. This test includes appearance, mood, affect, behavior, rate and continuity of speech, evidence of hallucinations or abnormal beliefs, thought content, orientation to time, place and person, attention and concentration, insight and judgment, as well as short-term memory. Johnson-Laird suggests that delusions may be viewed as the natural consequence of failure to distinguish conceptual relevance. That is, irrelevant information would be framed as disconnected experiences, then it is taken to be relevant in a manner that suggests false causal connections. Furthermore, relevant information would be ignored as counterexamples. Definition Although non-specific concepts of madness have been around for several thousand years, the psychiatrist and philosopher Karl Jaspers was the first to define the four main criteria for a belief to be considered delusional in his 1913 book General Psychopathology. These criteria are: certainty (held with absolute conviction) incorrigibility (not changeable by compelling counterargument or proof to the contrary) impossibility or falsity of content (implausible, bizarre, or patently untrue) not amenable to understanding (i.e., belief cannot be explained psychologically) Furthermore, when beliefs involve value judgments, only those which cannot be proven true are considered delusions. For example: a man claiming that he flew into the Sun and flew back home. This would be considered a delusion, unless he were speaking figuratively, or if the belief had a cultural or religious source. Only the first three criteria remain cornerstones of the current definition of a delusion in the DSM-5. Robert Trivers writes that delusion is a discrepancy in relation to objective reality, but with a firm conviction in reality of delusional ideas, which is manifested in the "affective basis of delusion". Treatment Delusions and other positive symptoms of psychosis are often treated with antipsychotic medication, which exert a medium effect size according to meta-analytic evidence. Cognitive behavioral therapy (CBT) improves delusions relative to control conditions according to a meta-analysis. A meta-analysis of 43 studies reported that metacognitive training (MCT) reduces delusions at a medium to large effect size relative to control conditions. 
Criticism Some psychiatrists criticize the practice of defining one and the same belief as normal in one culture and pathological in another culture for cultural essentialism. They argue that it is not justified to assume that culture can be simplified to a few traceable, distinguishable and statistically quantifiable factors and that everything outside those factors must be biological since cultural influences are mixed, including not only parents and teachers but also peers, friends, and media, and the same cultural influence can have different effects depending on earlier cultural influences. Other critical psychiatrists argue that just because a person's belief is unshaken by one influence does not prove that it would remain unshaken by another. For example, a person whose beliefs are not changed by verbal correction from a psychiatrist, which is how delusion is usually diagnosed, may still change his or her mind when observing empirical evidence, only that psychiatrists rarely, if ever, present patients with such situations. Anthropologist David Graeber has criticized psychiatry's assumption that an absurd belief goes from being delusional to "being there for a reason" merely because it is shared by many people by arguing that just as genetic pathogens like viruses can take advantage of an organism without benefitting said organism, memetic phenomena can spread while being harmful to societies, implying that entire societies can become ill. David Graeber argued that if somatic medicine did not have higher scientific standards than psychiatry's way of defining delusion, pandemics like the plague would have been considered to transubstantiate from an illness to "a phenomenon that benefits the people" as soon as it had spread to a sufficiently large portion of the population. It was argued by Graeber that since deinstitutionalisation made sales of psychiatric medication profitable by no longer needing to spend money on keeping the patients in mental hospitals, corrupt incentives for psychiatry to allege "needs" for treatments have increased (in particular with regard to medicines that are said to be needed in daily doses, not so much regarding devices that can be kept for longer periods of time) which may itself be a harmful memetic pandemic in society that leads to diagnosing and medication of criticisms of widespread beliefs that are actually absurd and harmful, making the absurd belief that is not labelled as an illness profitable anyway by attracting criticisms that are labelled as illnesses.
Biology and health sciences
Miscellaneous
null
125297
https://en.wikipedia.org/wiki/Dynamic%20programming
Dynamic programming
Dynamic programming is both a mathematical optimization method and an algorithmic paradigm. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics. In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have optimal substructure. If sub-problems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a relation between the value of the larger problem and the values of the sub-problems. In the optimization literature this relationship is called the Bellman equation. Overview Mathematical optimization In terms of mathematical optimization, dynamic programming usually refers to simplifying a decision by breaking it down into a sequence of decision steps over time. This is done by defining a sequence of value functions V1, V2, ..., Vn taking y as an argument representing the state of the system at times i from 1 to n. The definition of Vn(y) is the value obtained in state y at the last time n. The values Vi at earlier times i = n −1, n − 2, ..., 2, 1 can be found by working backwards, using a recursive relationship called the Bellman equation. For i = 2, ..., n, Vi−1 at any state y is calculated from Vi by maximizing a simple function (usually the sum) of the gain from a decision at time i − 1 and the function Vi at the new state of the system if this decision is made. Since Vi has already been calculated for the needed states, the above operation yields Vi−1 for those states. Finally, V1 at the initial state of the system is the value of the optimal solution. The optimal values of the decision variables can be recovered, one by one, by tracking back the calculations already performed. Control theory In control theory, a typical problem is to find an admissible control which causes the system to follow an admissible trajectory on a continuous time interval that minimizes a cost function The solution to this problem is an optimal control law or policy , which produces an optimal trajectory and a cost-to-go function . The latter obeys the fundamental equation of dynamic programming: a partial differential equation known as the Hamilton–Jacobi–Bellman equation, in which and . One finds that minimizing in terms of , , and the unknown function and then substitutes the result into the Hamilton–Jacobi–Bellman equation to get the partial differential equation to be solved with boundary condition . In practice, this generally requires numerical techniques for some discrete approximation to the exact optimization relationship. Alternatively, the continuous process can be approximated by a discrete system, which leads to a following recurrence relation analog to the Hamilton–Jacobi–Bellman equation: at the -th stage of equally spaced discrete time intervals, and where and denote discrete approximations to and . This functional equation is known as the Bellman equation, which can be solved for an exact solution of the discrete approximation of the optimization equation. 
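For reference, the continuous-time problem sketched above can be written in a standard textbook form; the symbols used here (a running cost f, system dynamics g, terminal cost Ψ and cost-to-go V) are generic choices for illustration rather than notation fixed by this article:
\min_{u(\cdot)}\; J \;=\; \Psi\bigl(\mathbf{x}(t_1)\bigr) \;+\; \int_{t_0}^{t_1} f\bigl(\mathbf{x}(t),\mathbf{u}(t),t\bigr)\,dt
\qquad \text{subject to} \qquad \dot{\mathbf{x}}(t) = g\bigl(\mathbf{x}(t),\mathbf{u}(t),t\bigr).
The corresponding Hamilton–Jacobi–Bellman equation for the cost-to-go function V(\mathbf{x},t) then reads
-\,\frac{\partial V}{\partial t}(\mathbf{x},t) \;=\; \min_{\mathbf{u}} \Bigl\{ f(\mathbf{x},\mathbf{u},t) + \nabla_{\mathbf{x}} V(\mathbf{x},t)\cdot g(\mathbf{x},\mathbf{u},t) \Bigr\},
\qquad V(\mathbf{x},t_1) = \Psi(\mathbf{x}),
and discretizing the time interval turns it into the stage-by-stage Bellman recursion described above.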
Example from economics: Ramsey's problem of optimal saving In economics, the objective is generally to maximize (rather than minimize) some dynamic social welfare function. In Ramsey's problem, this function relates amounts of consumption to levels of utility. Loosely speaking, the planner faces the trade-off between contemporaneous consumption and future consumption (via investment in capital stock that is used in production), known as intertemporal choice. Future consumption is discounted at a constant rate . A discrete approximation to the transition equation of capital is given by where is consumption, is capital, and is a production function satisfying the Inada conditions. An initial capital stock is assumed. Let be consumption in period , and assume consumption yields utility as long as the consumer lives. Assume the consumer is impatient, so that he discounts future utility by a factor each period, where . Let be capital in period . Assume initial capital is a given amount , and suppose that this period's capital and consumption determine next period's capital as , where is a positive constant and . Assume capital cannot be negative. Then the consumer's decision problem can be written as follows: subject to for all Written this way, the problem looks complicated, because it involves solving for all the choice variables . (The capital is not a choice variable—the consumer's initial capital is taken as given.) The dynamic programming approach to solve this problem involves breaking it apart into a sequence of smaller decisions. To do so, we define a sequence of value functions , for which represent the value of having any amount of capital at each time . There is (by assumption) no utility from having capital after death, . The value of any quantity of capital at any previous time can be calculated by backward induction using the Bellman equation. In this problem, for each , the Bellman equation is subject to This problem is much simpler than the one we wrote down before, because it involves only two decision variables, and . Intuitively, instead of choosing his whole lifetime plan at birth, the consumer can take things one step at a time. At time , his current capital is given, and he only needs to choose current consumption and saving . To actually solve this problem, we work backwards. For simplicity, the current level of capital is denoted as . is already known, so using the Bellman equation once we can calculate , and so on until we get to , which is the value of the initial decision problem for the whole lifetime. In other words, once we know , we can calculate , which is the maximum of , where is the choice variable and . Working backwards, it can be shown that the value function at time is where each is a constant, and the optimal amount to consume at time is which can be simplified to We see that it is optimal to consume a larger fraction of current wealth as one gets older, finally consuming all remaining wealth in period , the last period of life. Computer science There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping sub-problems. If a problem can be solved by combining optimal solutions to non-overlapping sub-problems, the strategy is called "divide and conquer" instead. This is why merge sort and quick sort are not classified as dynamic programming problems. 
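Returning briefly to the consumption–saving example above, the backward induction it describes is easy to carry out numerically. The sketch below assumes logarithmic utility, a capital law of motion k' = A·k^a − c, and arbitrary illustrative parameter values; none of these specifics are taken from the text, they simply make the recursion concrete:

import math

# Illustrative assumptions: log utility u(c) = ln(c), capital law of motion
# k' = A * k**a - c, discount factor b, horizon T. Values are for demonstration only.
A, a, b, T = 5.0, 0.34, 0.95, 10
grid = [0.1 * i for i in range(1, 201)]            # candidate capital levels

V = [[0.0] * len(grid) for _ in range(T + 1)]      # V[T] = 0: no utility after the last period
policy = [[0.0] * len(grid) for _ in range(T)]     # optimal consumption at each state

for t in range(T - 1, -1, -1):                     # work backwards from the final period
    for i, k in enumerate(grid):
        best_value, best_c = -math.inf, 0.0
        for j, k_next in enumerate(grid):          # choose next period's capital
            c = A * k ** a - k_next                # consumption implied by that choice
            if c <= 0:
                continue                           # infeasible: capital cannot be negative
            value = math.log(c) + b * V[t + 1][j]  # Bellman equation for this problem
            if value > best_value:
                best_value, best_c = value, c
        V[t][i] = best_value
        policy[t][i] = best_c

# Consumption rises as the horizon approaches: compare period 0 with the last period.
print(policy[0][99], policy[T - 1][99])

The printed figures illustrate the qualitative result quoted above: the optimal plan consumes a larger fraction of available resources as the final period approaches.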
Optimal substructure means that the solution to a given optimization problem can be obtained by the combination of optimal solutions to its sub-problems. Such optimal substructures are usually described by means of recursion. For example, given a graph G=(V,E), the shortest path p from a vertex u to a vertex v exhibits optimal substructure: take any intermediate vertex w on this shortest path p. If p is truly the shortest path, then it can be split into sub-paths p1 from u to w and p2 from w to v such that these, in turn, are indeed the shortest paths between the corresponding vertices (by the simple cut-and-paste argument described in Introduction to Algorithms). Hence, one can easily formulate the solution for finding shortest paths in a recursive manner, which is what the Bellman–Ford algorithm or the Floyd–Warshall algorithm does. Overlapping sub-problems means that the space of sub-problems must be small, that is, any recursive algorithm solving the problem should solve the same sub-problems over and over, rather than generating new sub-problems. For example, consider the recursive formulation for generating the Fibonacci sequence: Fi = Fi−1 + Fi−2, with base case F1 = F2 = 1. Then F43 = F42 + F41, and F42 = F41 + F40. Now F41 is being solved in the recursive sub-trees of both F43 as well as F42. Even though the total number of sub-problems is actually small (only 43 of them), we end up solving the same problems over and over if we adopt a naive recursive solution such as this. Dynamic programming takes account of this fact and solves each sub-problem only once. This can be achieved in either of two ways: Top-down approach: This is the direct fall-out of the recursive formulation of any problem. If the solution to any problem can be formulated recursively using the solution to its sub-problems, and if its sub-problems are overlapping, then one can easily memoize or store the solutions to the sub-problems in a table (often an array or hashtable in practice). Whenever we attempt to solve a new sub-problem, we first check the table to see if it is already solved. If a solution has been recorded, we can use it directly, otherwise we solve the sub-problem and add its solution to the table. Bottom-up approach: Once we formulate the solution to a problem recursively as in terms of its sub-problems, we can try reformulating the problem in a bottom-up fashion: try solving the sub-problems first and use their solutions to build-on and arrive at solutions to bigger sub-problems. This is also usually done in a tabular form by iteratively generating solutions to bigger and bigger sub-problems by using the solutions to small sub-problems. For example, if we already know the values of F41 and F40, we can directly calculate the value of F42. Some programming languages can automatically memoize the result of a function call with a particular set of arguments, in order to speed up call-by-name evaluation (this mechanism is referred to as call-by-need). Some languages make it possible portably (e.g. Scheme, Common Lisp, Perl or D). Some languages have automatic memoization built in, such as tabled Prolog and J, which supports memoization with the M. adverb. In any case, this is only possible for a referentially transparent function. Memoization is also encountered as an easily accessible design pattern within term-rewrite based languages such as Wolfram Language. 
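As a concrete illustration of the automatic memoization just mentioned, Python's standard-library decorator functools.lru_cache attaches a result table to a pure (referentially transparent) function without altering its recursive formulation; a minimal sketch:

from functools import lru_cache

@lru_cache(maxsize=None)           # remember every distinct argument already computed
def fib(n: int) -> int:
    """Top-down (memoized) Fibonacci: each sub-problem is solved only once."""
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(43))   # 433494437; only 44 distinct sub-problems are evaluated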
Bioinformatics
Dynamic programming is widely used in bioinformatics for tasks such as sequence alignment, protein folding, RNA structure prediction and protein-DNA binding. The first dynamic programming algorithms for protein-DNA binding were developed in the 1970s independently by Charles DeLisi in the US and by Georgii Gurskii and Alexander Zasedatelev in the Soviet Union. Recently these algorithms have become very popular in bioinformatics and computational biology, particularly in the studies of nucleosome positioning and transcription factor binding.
Examples: computer algorithms
Dijkstra's algorithm for the shortest path problem
From a dynamic programming point of view, Dijkstra's algorithm for the shortest path problem is a successive approximation scheme that solves the dynamic programming functional equation for the shortest path problem by the Reaching method. In fact, Dijkstra's explanation of the logic behind the algorithm is a paraphrasing of Bellman's famous Principle of Optimality in the context of the shortest path problem.
Fibonacci sequence
Using dynamic programming in the calculation of the nth member of the Fibonacci sequence improves its performance greatly. Here is a naïve implementation, based directly on the mathematical definition:
function fib(n)
    if n <= 1 return n
    return fib(n − 1) + fib(n − 2)
Notice that if we call, say, fib(5), we produce a call tree that calls the function on the same value many different times:
fib(5)
fib(4) + fib(3)
(fib(3) + fib(2)) + (fib(2) + fib(1))
((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
(((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
In particular, fib(2) was calculated three times from scratch. In larger examples, many more values of fib, or subproblems, are recalculated, leading to an exponential time algorithm. Now, suppose we have a simple map object, m, which maps each value of fib that has already been calculated to its result, and we modify our function to use it and update it. The resulting function requires only O(n) time instead of exponential time (but requires O(n) space):
var m := map(0 → 0, 1 → 1)
function fib(n)
    if key n is not in map m
        m[n] := fib(n − 1) + fib(n − 2)
    return m[n]
This technique of saving values that have already been calculated is called memoization; this is the top-down approach, since we first break the problem into subproblems and then calculate and store values. In the bottom-up approach, we calculate the smaller values of fib first, then build larger values from them. This method also uses O(n) time since it contains a loop that repeats n − 1 times, but it only takes constant (O(1)) space, in contrast to the top-down approach which requires O(n) space to store the map.
function fib(n)
    if n = 0 return 0
    else
        var previousFib := 0, currentFib := 1
        repeat n − 1 times // loop is skipped if n = 1
            var newFib := previousFib + currentFib
            previousFib := currentFib
            currentFib := newFib
        return currentFib
In both examples, we only calculate fib(2) one time, and then use it to calculate both fib(4) and fib(3), instead of computing it every time either of them is evaluated.
A type of balanced 0–1 matrix
Consider the problem of assigning values, either zero or one, to the positions of an n × n matrix, with n even, so that each row and each column contains exactly n/2 zeros and n/2 ones. We ask how many different assignments there are for a given n.
For example, when , five possible solutions are There are at least three possible approaches: brute force, backtracking, and dynamic programming. Brute force consists of checking all assignments of zeros and ones and counting those that have balanced rows and columns ( zeros and ones). As there are possible assignments and sensible assignments, this strategy is not practical except maybe up to . Backtracking for this problem consists of choosing some order of the matrix elements and recursively placing ones or zeros, while checking that in every row and column the number of elements that have not been assigned plus the number of ones or zeros are both at least . While more sophisticated than brute force, this approach will visit every solution once, making it impractical for larger than six, since the number of solutions is already 116,963,796,250 for  = 8, as we shall see. Dynamic programming makes it possible to count the number of solutions without visiting them all. Imagine backtracking values for the first row – what information would we require about the remaining rows, in order to be able to accurately count the solutions obtained for each first row value? We consider boards, where , whose rows contain zeros and ones. The function f to which memoization is applied maps vectors of n pairs of integers to the number of admissible boards (solutions). There is one pair for each column, and its two components indicate respectively the number of zeros and ones that have yet to be placed in that column. We seek the value of ( arguments or one vector of elements). The process of subproblem creation involves iterating over every one of possible assignments for the top row of the board, and going through every column, subtracting one from the appropriate element of the pair for that column, depending on whether the assignment for the top row contained a zero or a one at that position. If any one of the results is negative, then the assignment is invalid and does not contribute to the set of solutions (recursion stops). Otherwise, we have an assignment for the top row of the board and recursively compute the number of solutions to the remaining board, adding the numbers of solutions for every admissible assignment of the top row and returning the sum, which is being memoized. The base case is the trivial subproblem, which occurs for a board. The number of solutions for this board is either zero or one, depending on whether the vector is a permutation of and pairs or not. For example, in the first two boards shown above the sequences of vectors would be ((2, 2) (2, 2) (2, 2) (2, 2)) ((2, 2) (2, 2) (2, 2) (2, 2)) k = 4 0 1 0 1 0 0 1 1 ((1, 2) (2, 1) (1, 2) (2, 1)) ((1, 2) (1, 2) (2, 1) (2, 1)) k = 3 1 0 1 0 0 0 1 1 ((1, 1) (1, 1) (1, 1) (1, 1)) ((0, 2) (0, 2) (2, 0) (2, 0)) k = 2 0 1 0 1 1 1 0 0 ((0, 1) (1, 0) (0, 1) (1, 0)) ((0, 1) (0, 1) (1, 0) (1, 0)) k = 1 1 0 1 0 1 1 0 0 ((0, 0) (0, 0) (0, 0) (0, 0)) ((0, 0) (0, 0), (0, 0) (0, 0)) The number of solutions is Links to the MAPLE implementation of the dynamic programming approach may be found among the external links. Checkerboard Consider a checkerboard with n × n squares and a cost function c(i, j) which returns a cost associated with square (i,j) (i being the row, j being the column). 
For instance, on a 5 × 5 checkerboard we might have c(1, 3) = 5. Let us say there was a checker that could start at any square on the first rank (i.e., row) and you wanted to know the shortest path (the sum of the minimum costs at each visited rank) to get to the last rank; assuming the checker could move only diagonally left forward, diagonally right forward, or straight forward. That is, a checker on (1,3) can move to (2,2), (2,3) or (2,4). This problem exhibits optimal substructure. That is, the solution to the entire problem relies on solutions to subproblems. Let us define a function q(i, j) as
q(i, j) = the minimum cost to reach square (i, j).
Starting at rank n and descending to rank 1, we compute the value of this function for all the squares at each successive rank. Picking the square that holds the minimum value at each rank gives us the shortest path between rank n and rank 1. The function q(i, j) is equal to the minimum cost to get to any of the three squares below it (since those are the only squares that can reach it) plus c(i, j). For instance, if A is a square on some rank and B, C and D are the three squares beneath it:
q(A) = min(q(B), q(C), q(D)) + c(A)
Now, let us define q(i, j) in somewhat more general terms:
q(i, j) = infinity                                               if j < 1 or j > n
q(i, j) = c(i, j)                                                if i = 1
q(i, j) = min(q(i−1, j−1), q(i−1, j), q(i−1, j+1)) + c(i, j)     otherwise
The first line of this equation deals with a board modeled as squares indexed on 1 at the lowest bound and n at the highest bound. The second line specifies what happens at the first rank; providing a base case. The third line, the recursion, is the important part. It represents the A,B,C,D terms in the example. From this definition we can derive straightforward recursive code for q(i, j). In the following pseudocode, n is the size of the board, c(i, j) is the cost function, and min() returns the minimum of a number of values:
function minCost(i, j)
    if j < 1 or j > n return infinity
    else if i = 1 return c(i, j)
    else return min( minCost(i-1, j-1), minCost(i-1, j), minCost(i-1, j+1) ) + c(i, j)
This function only computes the path cost, not the actual path. We discuss the actual path below. This, like the Fibonacci-numbers example, is horribly slow because it too exhibits the overlapping sub-problems attribute. That is, it recomputes the same path costs over and over. However, we can compute it much faster in a bottom-up fashion if we store path costs in a two-dimensional array q[i, j] rather than using a function. This avoids recomputation; all the values needed for array q[i, j] are computed ahead of time only once. Precomputed values for (i,j) are simply looked up whenever needed. We also need to know what the actual shortest path is. To do this, we use another array p[i, j]; a predecessor array. This array records the path to any square s. The predecessor of s is modeled as an offset relative to the index (in q[i, j]) of the precomputed path cost of s. To reconstruct the complete path, we lookup the predecessor of s, then the predecessor of that square, then the predecessor of that square, and so on recursively, until we reach the starting square. Consider the following pseudocode:
function computeShortestPathArrays()
    for x from 1 to n
        q[1, x] := c(1, x)
    for y from 1 to n
        q[y, 0] := infinity
        q[y, n + 1] := infinity
    for y from 2 to n
        for x from 1 to n
            m := min(q[y-1, x-1], q[y-1, x], q[y-1, x+1])
            q[y, x] := m + c(y, x)
            if m = q[y-1, x-1] p[y, x] := -1
            else if m = q[y-1, x] p[y, x] := 0
            else p[y, x] := 1
Now the rest is a simple matter of finding the minimum and printing it.
Sequence alignment In genetics, sequence alignment is an important application where dynamic programming is essential. Typically, the problem consists of transforming one sequence into another using edit operations that replace, insert, or remove an element. Each operation has an associated cost, and the goal is to find the sequence of edits with the lowest total cost. The problem can be stated naturally as a recursion: a sequence A is optimally edited into a sequence B by either: inserting the first character of B, and performing an optimal alignment of A and the tail of B; deleting the first character of A, and performing the optimal alignment of the tail of A and B; or replacing the first character of A with the first character of B, and performing optimal alignments of the tails of A and B. The partial alignments can be tabulated in a matrix, where cell (i,j) contains the cost of the optimal alignment of A[1..i] to B[1..j]. The cost in cell (i,j) can be calculated by adding the cost of the relevant operations to the cost of its neighboring cells, and selecting the optimum. Different variants exist, see Smith–Waterman algorithm and Needleman–Wunsch algorithm.
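A minimal Python sketch of the tabulation described above is given below. Unit edit costs are an assumption made for illustration; real sequence aligners use substitution matrices and gap penalties, as in the Needleman–Wunsch and Smith–Waterman algorithms.

def edit_distance(a, b, cost_insert=1, cost_delete=1, cost_replace=1):
    # d[i][j] = cheapest way to edit a[:i] into b[:j].
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * cost_delete
    for j in range(1, n + 1):
        d[0][j] = j * cost_insert
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            same = 0 if a[i - 1] == b[j - 1] else cost_replace
            d[i][j] = min(d[i - 1][j] + cost_delete,      # delete a[i-1]
                          d[i][j - 1] + cost_insert,      # insert b[j-1]
                          d[i - 1][j - 1] + same)         # replace or keep
    return d[m][n]

# With unit costs, edit_distance("GATTACA", "GCATGCU") returns 4.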
Tower of Hanoi puzzle The Tower of Hanoi or Towers of Hanoi is a mathematical game or puzzle. It consists of three rods, and a number of disks of different sizes which can slide onto any rod. The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top, thus making a conical shape. The objective of the puzzle is to move the entire stack to another rod, obeying the following rules: Only one disk may be moved at a time. Each move consists of taking the upper disk from one of the rods and sliding it onto another rod, on top of the other disks that may already be present on that rod. No disk may be placed on top of a smaller disk. The dynamic programming solution consists of solving the functional equation

S(n,h,t) = S(n-1,h, not(h,t)) ; S(1,h,t) ; S(n-1,not(h,t),t)

where n denotes the number of disks to be moved, h denotes the home rod, t denotes the target rod, not(h,t) denotes the third rod (neither h nor t), ";" denotes concatenation, and S(n, h, t) := solution to a problem consisting of n disks that are to be moved from rod h to rod t. For n=1 the problem is trivial, namely S(1,h,t) = "move a disk from rod h to rod t" (there is only one disk left). The number of moves required by this solution is 2^n − 1. If the objective is to maximize the number of moves (without cycling) then the dynamic programming functional equation is slightly more complicated and 3^n − 1 moves are required. Egg dropping puzzle The following is a description of the instance of this famous puzzle involving N=2 eggs and a building with H=36 floors: Suppose that we wish to know which stories in a 36-story building are safe to drop eggs from, and which will cause the eggs to break on landing (using U.S. English terminology, in which the first floor is at ground level). We make a few assumptions: An egg that survives a fall can be used again. A broken egg must be discarded. The effect of a fall is the same for all eggs. If an egg breaks when dropped, then it would break if dropped from a higher window. If an egg survives a fall, then it would survive a shorter fall. It is not ruled out that the first-floor windows break eggs, nor is it ruled out that eggs can survive the 36th-floor windows. If only one egg is available and we wish to be sure of obtaining the right result, the experiment can be carried out in only one way. Drop the egg from the first-floor window; if it survives, drop it from the second-floor window. Continue upward until it breaks. In the worst case, this method may require 36 droppings. Suppose 2 eggs are available. What is the lowest number of egg-droppings that is guaranteed to work in all cases? To derive a dynamic programming functional equation for this puzzle, let the state of the dynamic programming model be a pair s = (n,k), where n = number of test eggs available, n = 0, 1, 2, 3, ..., N − 1. k = number of (consecutive) floors yet to be tested, k = 0, 1, 2, ..., H − 1. For instance, s = (2,6) indicates that two test eggs are available and 6 (consecutive) floors are yet to be tested. The initial state of the process is s = (N,H) where N denotes the number of test eggs available at the commencement of the experiment. The process terminates either when there are no more test eggs (n = 0) or when k = 0, whichever occurs first. If termination occurs at state s = (0,k) and k > 0, then the test failed. Now, let W(n,k) = minimum number of trials required to identify the value of the critical floor under the worst-case scenario given that the process is in state s = (n,k). Then it can be shown that W(n,k) = 1 + min{max(W(n − 1, x − 1), W(n,k − x)): x = 1, 2, ..., k } with W(n,0) = 0 for all n > 0 and W(1,k) = k for all k. It is easy to solve this equation iteratively by systematically increasing the values of n and k. Faster DP solution using a different parametrization Notice that the above solution takes O(n·k^2) time with a DP solution. This can be improved to O(n·k·log k) time by binary searching on the optimal x in the above recurrence, since W(n − 1, x − 1) is increasing in x while W(n, k − x) is decreasing in x, thus a local minimum of max(W(n − 1, x − 1), W(n, k − x)) is a global minimum. Also, by storing the optimal x for each cell in the DP table and referring to its value for the previous cell, the optimal x for each cell can be found in constant time, improving it to O(n·k) time. However, there is an even faster solution that involves a different parametrization of the problem: Let k be the total number of floors, such that the eggs break when dropped from the k-th floor (the 36-story example above is then equivalent to adding one extra, virtual top floor from which the eggs certainly break). Let m be the minimum floor from which the egg must be dropped to be broken. Let f(t, n) be the maximum number of values of m that are distinguishable using t tries and n eggs. Then f(t, 0) = f(0, n) = 1 for all t and n, since with no eggs or no tries only a single possibility can be distinguished. Let a be the floor from which the first egg is dropped in the optimal strategy. If the first egg broke, m is from 1 to a and distinguishable using at most t − 1 tries and n − 1 eggs. If the first egg did not break, m is from a + 1 to k and distinguishable using t − 1 tries and n eggs. Therefore, f(t, n) = f(t − 1, n − 1) + f(t − 1, n). Then the problem is equivalent to finding the minimum t such that f(t, n) ≥ k. To do so, we could compute the values f(t, n) in order of increasing t, which would take O(n·t) time. Thus, if we separately handle the case of n = 1, the algorithm would take O(n·sqrt(k)) time. But the recurrence relation can in fact be solved, giving f(t, n) = the sum of the binomial coefficients C(t, i) for i = 0, ..., n, which can be computed in O(n) time using the identity C(t, i + 1) = C(t, i)·(t − i)/(i + 1) for all i ≥ 0. Since f(t, n) is nondecreasing in t, we can binary search on t to find the minimum such t, giving an O(n·log k) algorithm.
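The functional equation W(n,k) above translates almost directly into memoized Python. The sketch below is illustrative only and uses the slower O(n·k^2) formulation rather than the faster parametrization.

from functools import lru_cache

@lru_cache(maxsize=None)
def W(n, k):
    # Minimum number of trials that guarantees finding the critical floor
    # with n eggs and k consecutive floors still to be tested.
    if k == 0:
        return 0          # nothing left to test
    if n == 1:
        return k          # must try floors one by one
    return 1 + min(max(W(n - 1, x - 1),   # the egg breaks at floor x
                       W(n, k - x))       # the egg survives floor x
                   for x in range(1, k + 1))

# W(2, 36) returns 8: with two eggs and 36 floors, 8 droppings always suffice.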
Matrix chain multiplication Matrix chain multiplication is a well-known example that demonstrates the utility of dynamic programming. For example, engineering applications often have to multiply a chain of matrices. It is not surprising to find matrices of large dimensions, for example 100×100. Therefore, our task is to multiply the matrices A1 × A2 × ... × An. Matrix multiplication is not commutative, but is associative; and we can multiply only two matrices at a time. So, we can multiply this chain of matrices in many different ways, for example: ((A1 × A2) × A3) × ... × An, or A1 × ((A2 × A3) × ... × An), or (A1 × A2) × (A3 × ... × An), and so on. There are numerous ways to multiply this chain of matrices. They will all produce the same final result; however, they will take more or less time to compute, based on which particular matrices are multiplied. If matrix A has dimensions m×n and matrix B has dimensions n×q, then matrix C=A×B will have dimensions m×q, and will require m*n*q scalar multiplications (using a simplistic matrix multiplication algorithm for purposes of illustration). For example, let us multiply matrices A, B and C. Let us assume that their dimensions are m×n, n×p, and p×s, respectively. Matrix A×B×C will be of size m×s and can be calculated in two ways shown below: A×(B×C) This order of matrix multiplication will require nps + mns scalar multiplications. (A×B)×C This order of matrix multiplication will require mnp + mps scalar calculations. Let us assume that m = 10, n = 100, p = 10 and s = 1000. So, the first way to multiply the chain will require 1,000,000 + 1,000,000 calculations. The second way will require only 10,000 + 100,000 calculations. Obviously, the second way is faster, and we should multiply the matrices using that arrangement of parentheses. Therefore, our conclusion is that the order of parentheses matters, and that our task is to find the optimal order of parentheses. At this point, we have several choices, one of which is to design a dynamic programming algorithm that will split the problem into overlapping problems and calculate the optimal arrangement of parentheses. The dynamic programming solution is presented below. Let's call m[i,j] the minimum number of scalar multiplications needed to multiply a chain of matrices from matrix i to matrix j (i.e. Ai × ... × Aj, where i <= j). We split the chain at some matrix k, such that i <= k < j, and try to find out which combination produces minimum m[i,j]. The formula is:

if i = j, m[i,j] = 0
if i < j, m[i,j] = min over all possible values of k (where k ranges from i to j − 1) of m[i,k] + m[k+1,j] + rows(i)*cols(k)*cols(j)

where rows(i) is the row dimension of matrix i, cols(k) is the column dimension of matrix k, and cols(j) is the column dimension of matrix j. This formula can be coded as shown below, where input parameter "chain" is the chain of matrices, i.e. A1, A2, ... An:

function OptimalMatrixChainParenthesis(chain)
    n = length(chain)
    for i = 1, n
        m[i,i] = 0    // Since it takes no calculations to multiply one matrix
    for len = 2, n
        for i = 1, n - len + 1
            j = i + len - 1
            m[i,j] = infinity    // So that the first calculation updates
            for k = i, j-1
                q = m[i, k] + m[k+1, j] + rows(i)*cols(k)*cols(j)
                if q < m[i, j]    // The new order of parentheses is better than what we had
                    m[i, j] = q    // Update
                    s[i, j] = k    // Record which k to split on, i.e. where to place the parenthesis

So far, we have calculated values for all possible m[i, j], the minimum number of calculations to multiply a chain from matrix i to matrix j, and we have recorded the corresponding "split point" s[i, j]. For example, if we are multiplying chain A1×A2×A3×A4, and it turns out that m[1, 3] = 100 and s[1, 3] = 2 (say), that means that the optimal placement of parentheses for matrices 1 to 3 splits the product after matrix 2, and to multiply those matrices will require 100 scalar calculations. This algorithm will produce "tables" m[, ] and s[, ] that will have entries for all possible values of i and j. The final solution for the entire chain is m[1, n], with corresponding split at s[1, n].
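For readers who prefer a runnable version, the table-filling step above might be sketched in Python as follows. Names are illustrative; the chain is described by its dimension list, so matrix i has dimensions dims[i-1] × dims[i].

import math

def matrix_chain_order(dims):
    # Returns the tables m and s described in the text.
    n = len(dims) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: minimal scalar multiplications
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: best split point k
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

def parenthesization(s, i, j):
    # Returns the optimal grouping as a string (the pseudocode below prints it instead).
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + parenthesization(s, i, k) + parenthesization(s, k + 1, j) + ")"

# With the dimensions used in the text (10x100, 100x10, 10x1000):
# m, s = matrix_chain_order([10, 100, 10, 1000]) gives m[1][3] == 110000 and
# s[1][3] == 2, matching the (A×B)×C grouping discussed above.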
Unraveling the solution will be recursive, starting from the top and continuing until we reach the base case, i.e. multiplication of single matrices. Therefore, the next step is to actually split the chain, i.e. to place the parentheses where they (optimally) belong. For this purpose we could use the following algorithm:

function PrintOptimalParenthesis(s, i, j)
    if i = j
        print "A"i
    else
        print "("
        PrintOptimalParenthesis(s, i, s[i, j])
        PrintOptimalParenthesis(s, s[i, j] + 1, j)
        print ")"

Of course, this algorithm is not useful for actual multiplication. This algorithm is just a user-friendly way to see what the result looks like. To actually multiply the matrices using the proper splits, we need the following algorithm:

function MatrixChainMultiply(chain from 1 to n)    // returns the final matrix, i.e. A1×A2×... ×An
    OptimalMatrixChainParenthesis(chain from 1 to n)    // this will produce s[ . ] and m[ . ] "tables"
    OptimalMatrixMultiplication(s, chain from 1 to n)    // actually multiply

function OptimalMatrixMultiplication(s, i, j)    // returns the result of multiplying a chain of matrices from Ai to Aj in optimal way
    if i < j
        // keep on splitting the chain and multiplying the matrices in left and right sides
        LeftSide = OptimalMatrixMultiplication(s, i, s[i, j])
        RightSide = OptimalMatrixMultiplication(s, s[i, j] + 1, j)
        return MatrixMultiply(LeftSide, RightSide)
    else if i = j
        return Ai    // matrix at position i
    else
        print "error, i <= j must hold"

function MatrixMultiply(A, B)    // function that multiplies two matrices
    if columns(A) = rows(B)
        for i = 1, rows(A)
            for j = 1, columns(B)
                C[i, j] = 0
                for k = 1, columns(A)
                    C[i, j] = C[i, j] + A[i, k]*B[k, j]
        return C
    else
        print "error, incompatible dimensions."

History of the name The term dynamic programming was originally used in the 1940s by Richard Bellman to describe the process of solving problems where one needs to find the best decisions one after another. By 1953, he refined this to the modern meaning, referring specifically to nesting smaller decision problems inside larger decisions, and the field was thereafter recognized by the IEEE as a systems analysis and engineering topic. Bellman's contribution is remembered in the name of the Bellman equation, a central result of dynamic programming which restates an optimization problem in recursive form. Bellman explains the reasoning behind the term dynamic programming in his autobiography, Eye of the Hurricane: An Autobiography, where he recounts choosing the name in part to shield his mathematical work at RAND from Charles Erwin Wilson, then Secretary of Defense, whom he described as having a strong aversion to the word "research". The word dynamic was chosen by Bellman to capture the time-varying aspect of the problems, and because it sounded impressive. The word programming referred to the use of the method to find an optimal program, in the sense of a military schedule for training or logistics. This usage is the same as that in the phrases linear programming and mathematical programming, a synonym for mathematical optimization. The above explanation of the origin of the term may be inaccurate: According to Russell and Norvig, the above story "cannot be strictly true, because his first paper using the term (Bellman, 1952) appeared before Wilson became Secretary of Defense in 1953." Also, Harold J. Kushner stated in a speech that, "On the other hand, when I asked [Bellman] the same question, he replied that he was trying to upstage Dantzig's linear programming by adding dynamic. Perhaps both motivations were true."
Mathematics
Optimization
null
23690287
https://en.wikipedia.org/wiki/Hexagonal%20crystal%20family
Hexagonal crystal family
In crystallography, the hexagonal crystal family is one of the six crystal families, which includes two crystal systems (hexagonal and trigonal) and two lattice systems (hexagonal and rhombohedral). While commonly confused, the trigonal crystal system and the rhombohedral lattice system are not equivalent (see section crystal systems below). In particular, there are crystals that have trigonal symmetry but belong to the hexagonal lattice (such as α-quartz). The hexagonal crystal family consists of the 12 point groups such that at least one of their space groups has the hexagonal lattice as underlying lattice, and is the union of the hexagonal crystal system and the trigonal crystal system. There are 52 space groups associated with it, which are exactly those whose Bravais lattice is either hexagonal or rhombohedral. Lattice systems The hexagonal crystal family consists of two lattice systems: hexagonal and rhombohedral. Each lattice system consists of one Bravais lattice. In the hexagonal family, the crystal is conventionally described by a right rhombic prism unit cell with two equal axes (a by a), an included angle of 120° (γ) and a height (c, which can be different from a) perpendicular to the two base axes. The hexagonal unit cell for the rhombohedral Bravais lattice is the R-centered cell, consisting of two additional lattice points which occupy one body diagonal of the unit cell. There are two ways to do this, which can be thought of as two notations which represent the same structure. In the usual so-called obverse setting, the additional lattice points are at coordinates (2/3, 1/3, 1/3) and (1/3, 2/3, 2/3), whereas in the alternative reverse setting they are at the coordinates (1/3, 2/3, 1/3) and (2/3, 1/3, 2/3). In either case, there are 3 lattice points per unit cell in total and the lattice is non-primitive. The Bravais lattices in the hexagonal crystal family can also be described by rhombohedral axes. The unit cell is a rhombohedron (which gives the name for the rhombohedral lattice). This is a unit cell with parameters a = b = c; α = β = γ ≠ 90°. In practice, the hexagonal description is more commonly used because it is easier to deal with a coordinate system with two 90° angles. However, the rhombohedral axes are often shown (for the rhombohedral lattice) in textbooks because this cell reveals the 3m symmetry of the crystal lattice. The rhombohedral unit cell for the hexagonal Bravais lattice is the D-centered cell, consisting of two additional lattice points which occupy one body diagonal of the unit cell with coordinates (1/3, 1/3, 1/3) and (2/3, 2/3, 2/3). However, such a description is rarely used. Crystal systems The hexagonal crystal family consists of two crystal systems: trigonal and hexagonal. A crystal system is a set of point groups in which the point groups themselves and their corresponding space groups are assigned to a lattice system (see table in Crystal system#Crystal classes). The trigonal crystal system consists of the 5 point groups that have a single three-fold rotation axis, which includes space groups 143 to 167. These 5 point groups have 7 corresponding space groups (denoted by R) assigned to the rhombohedral lattice system and 18 corresponding space groups (denoted by P) assigned to the hexagonal lattice system. Hence, the trigonal crystal system is the only crystal system whose point groups have more than one lattice system associated with their space groups. The hexagonal crystal system consists of the 7 point groups that have a single six-fold rotation axis.
These 7 point groups have 27 space groups (168 to 194), all of which are assigned to the hexagonal lattice system. Trigonal crystal system The 5 point groups in this crystal system are listed below, with their international number and notation, their space groups in name and example crystals. Hexagonal crystal system The 7 point groups (crystal classes) in this crystal system are listed below, followed by their representations in Hermann–Mauguin or international notation and Schoenflies notation, and mineral examples, if they exist. The unit cell volume is given by a²c·sin(60°). Hexagonal close packed Hexagonal close packed (hcp) is one of the two simple types of atomic packing with the highest density, the other being the face-centered cubic (fcc). However, unlike the fcc, it is not a Bravais lattice, as there are two nonequivalent sets of lattice points. Instead, it can be constructed from the hexagonal Bravais lattice by using a two-atom motif (the additional atom at about (1/3, 2/3, 1/2)) associated with each lattice point. Multi-element structures Compounds that consist of more than one element (e.g. binary compounds) often have crystal structures based on the hexagonal crystal family. Some of the more common ones are listed here. These structures can be viewed as two or more interpenetrating sublattices where each sublattice occupies the interstitial sites of the others. Wurtzite structure The wurtzite crystal structure is referred to by the Strukturbericht designation B4 and the Pearson symbol hP4. The corresponding space group is No. 186 (in International Union of Crystallography classification) or P63mc (in Hermann–Mauguin notation). The Hermann–Mauguin symbols in P63mc can be read as follows: 63.. : a six-fold screw rotation around the c-axis; .m. : a mirror plane with normal {100}; ..c : a glide plane in the c-direction with normal {120}. Among the compounds that can take the wurtzite structure are wurtzite itself (ZnS with up to 8% iron instead of zinc), silver iodide (AgI), zinc oxide (ZnO), cadmium sulfide (CdS), cadmium selenide (CdSe), silicon carbide (α-SiC), gallium nitride (GaN), aluminium nitride (AlN), boron nitride (w-BN) and other semiconductors. In most of these compounds, wurtzite is not the favored form of the bulk crystal, but the structure can be favored in some nanocrystal forms of the material. In materials with more than one crystal structure, the prefix "w-" is sometimes added to the empirical formula to denote the wurtzite crystal structure, as in w-BN. Each of the two individual atom types forms a sublattice which is hexagonal close-packed (HCP-type). When viewed all together, the atomic positions are the same as in lonsdaleite (hexagonal diamond). Each atom is tetrahedrally coordinated. The structure can also be described as an HCP lattice of zinc with sulfur atoms occupying half of the tetrahedral voids or vice versa. The wurtzite structure is non-centrosymmetric (i.e., lacks inversion symmetry). Due to this, wurtzite crystals can (and generally do) have properties such as piezoelectricity and pyroelectricity, which centrosymmetric crystals lack. Nickel arsenide structure The nickel arsenide structure consists of two interpenetrating sublattices: a primitive hexagonal nickel sublattice and a hexagonal close-packed arsenic sublattice. Each nickel atom is octahedrally coordinated to six arsenic atoms, while each arsenic atom is trigonal prismatically coordinated to six nickel atoms.
The structure can also be described as an HCP lattice of arsenic with nickel occupying each octahedral void. Compounds adopting the NiAs structure are generally the chalcogenides, arsenides, antimonides and bismuthides of transition metals. The following are the members of the nickeline group: Achavalite, Breithauptite, Freboldite, Kotulskite, Langistite, Nickeline, Sobolevskite, Sudburyite. In two dimensions There is only one hexagonal Bravais lattice in two dimensions: the hexagonal lattice.
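As a small numerical illustration of the unit-cell volume formula quoted above for the hexagonal crystal system, the following Python snippet can be used; the lattice parameters here are round illustrative numbers, not values taken from any particular mineral.

import math

def hexagonal_cell_volume(a, c):
    # Volume of a hexagonal unit cell: V = a^2 * c * sin(60 degrees)
    return a * a * c * math.sin(math.radians(60))

print(hexagonal_cell_volume(3.0, 5.0))   # about 38.97, in cubic angstroms if a and c are in angstroms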
Physical sciences
Crystallography
Physics
8251651
https://en.wikipedia.org/wiki/European%20conger
European conger
The European conger (Conger conger) is a species of conger of the family Congridae. It is the heaviest eel in the world and native to the northeast Atlantic, including the Mediterranean Sea. Description and behavior European congers have an average adult length of , a maximum known length of around (possibly up to for the largest specimens), and maximum weight of roughly , making them the largest eels in the world by weight. They can be rivaled or marginally exceeded in length by the largest species of moray eel but these tend to be slenderer and thus weigh less than the larger congers. Average specimens caught will weigh only . Females, with an average length at sexual maturity of , are much larger than males, with an average length at sexual maturity of . The body is very long, anguilliform, without scales. The colour is usually grey, but can also be blackish. The belly is white. A row of small white spots is aligned along the lateral line. The head is almost conical, and slightly depressed. The snout is rounded and prominent, with lateral olfactory holes. The large gill openings are in the lateral position. The conical teeth are arranged in rows on the jaws. The dorsal and anal fins are confluent with the caudal fin. Pectoral fins are present, while ventral fins are absent. Conger eels have habits similar to moray eels. They usually live amongst rocks in holes, or "eel pits", sometimes in one hole together with moray eels. They come out from their holes at night to hunt. These nocturnal predators mainly feed on fish, cephalopods, and crustaceans, although they are thought to scavenge on dead and rotting fish, as well as actively hunt live fish. Congers can be aggressive to humans, and large specimens can pose a danger to divers. Distribution This species can be found in the eastern Atlantic from Norway and Iceland to Senegal, and also in the Mediterranean and Black Sea at 0–500 m of depth, although they may reach depths of 3600 m during their migrations. It is sometimes seen in very shallow water by the shore, but can also go down to . It is usually present on rough, rocky, broken ground, close to the coast when young, moving to deeper waters when adult. Migration and reproduction When conger eels are between 5 and 15 years old, their bodies undergo a transformation, with the reproductive organs of both males and females increasing in size and the skeleton reducing in mass and the teeth falling out. Females appear to increase in weight and size more than the males. Conger eels then make migrations to spawning areas in the Mediterranean and the Atlantic, "although the existence of one or multiple spawning grounds for the species remains uncertain". The female conger eels produce several million eggs, and both the females and males die after spawning. Once hatched, the larval conger eels begin to swim back to shallower waters, where they live until they reach maturity. They then migrate to repeat the cycle. Gallery
Biology and health sciences
Anguilliformes
Animals
4786318
https://en.wikipedia.org/wiki/Linear%20dynamical%20system
Linear dynamical system
Linear dynamical systems are dynamical systems whose evolution functions are linear. While dynamical systems, in general, do not have closed-form solutions, linear dynamical systems can be solved exactly, and they have a rich set of mathematical properties. Linear systems can also be used to understand the qualitative behavior of general dynamical systems, by calculating the equilibrium points of the system and approximating it as a linear system around each such point. Introduction In a linear dynamical system, the variation of a state vector (an n-dimensional vector denoted x) equals a constant matrix (denoted A) multiplied by x. This variation can take two forms: either as a flow, in which x varies continuously with time t, dx(t)/dt = A·x(t), or as a mapping, in which x varies in discrete steps, x_(m+1) = A·x_m. These equations are linear in the following sense: if x(t) and y(t) are two valid solutions, then so is any linear combination of the two solutions, e.g., z(t) = α·x(t) + β·y(t), where α and β are any two scalars. The matrix A need not be symmetric. Linear dynamical systems can be solved exactly, in contrast to most nonlinear ones. Occasionally, a nonlinear system can be solved exactly by a change of variables to a linear system. Moreover, the solutions of (almost) any nonlinear system can be well-approximated by an equivalent linear system near its fixed points. Hence, understanding linear systems and their solutions is a crucial first step to understanding the more complex nonlinear systems. Solution of linear dynamical systems If the initial vector x(0) is aligned with a right eigenvector r_k of the matrix A, the dynamics are simple: dx(t)/dt = λ_k·x(t), where λ_k is the corresponding eigenvalue; the solution of this equation is x(t) = x(0)·e^(λ_k·t), as may be confirmed by substitution. If A is diagonalizable, then any vector in an n-dimensional space can be represented by a linear combination of the right and left eigenvectors (denoted r_k and l_k, respectively) of the matrix A. Therefore, the general solution for x(t) is a linear combination of the individual solutions for the right eigenvectors: x(t) = the sum over k of (l_k · x(0))·r_k·e^(λ_k·t). Similar considerations apply to the discrete mappings. Classification in two dimensions The roots of the characteristic polynomial det(A − λI) are the eigenvalues of A. The sign and relation of these roots, λ_1 and λ_2, to each other may be used to determine the stability of the dynamical system. For a 2-dimensional system, the characteristic polynomial is of the form λ² − τλ + Δ = 0, where τ is the trace and Δ is the determinant of A. Thus the two roots are in the form: λ_1 = (τ + sqrt(τ² − 4Δ))/2 and λ_2 = (τ − sqrt(τ² − 4Δ))/2, and λ_1 + λ_2 = τ and λ_1·λ_2 = Δ. Thus if Δ < 0 then the eigenvalues are of opposite sign, and the fixed point is a saddle. If Δ > 0 then the eigenvalues are of the same sign. Therefore, if τ > 0 both are positive and the point is unstable, and if τ < 0 then both are negative and the point is stable. The discriminant τ² − 4Δ will tell you if the point is nodal or spiral (i.e. if the eigenvalues are real or complex).
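The two-dimensional classification just described is easy to turn into code. The following Python sketch (illustrative names, 2 × 2 real matrices only; boundary cases such as Δ = 0 are not treated) applies the trace, determinant and discriminant tests in the order given above.

import numpy as np

def classify_fixed_point(A):
    # Classify the origin for dx/dt = A x using the trace/determinant criteria.
    tau = np.trace(A)
    delta = np.linalg.det(A)
    disc = tau**2 - 4 * delta
    if delta < 0:
        return "saddle"
    kind = "node" if disc >= 0 else "spiral"   # real vs. complex eigenvalues
    if tau < 0:
        return f"stable {kind}"
    if tau > 0:
        return f"unstable {kind}"
    return "center (purely imaginary eigenvalues)"

print(classify_fixed_point(np.array([[0.0, 1.0], [-2.0, -3.0]])))  # stable node (eigenvalues -1 and -2)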
Mathematics
Dynamical systems
null
4788153
https://en.wikipedia.org/wiki/Black%20bullhead
Black bullhead
The black bullhead or black bullhead catfish (Ameiurus melas) is a species of bullhead catfish. Like other bullhead catfish, it has the ability to thrive in waters that are low in oxygen, brackish, turbid, and/or very warm. It also has barbels located near its mouth, a broad head, spiny fins, and no scales. It can be identified from other bullheads as the barbels are black, and it has a tan crescent around the tail. Its caudal fin is truncated (squared off at the corners). Like virtually all catfish, it is nocturnal, preferring to feed at night, although young feed during the day. It generally does not get as large as the channel or blue catfish, with average adult weights are in the range, and almost never as large as . It has a typical length of , with the largest specimen being , making it the largest of the bullheads. It is typically black or dark brown on the dorsal side of its body and yellow or white on the ventral side. Like most of the bullheads (and even flathead catfish), it has a squared tail fin, which is strikingly different from the forked tail of channel and blue catfish. It is a bottom-rover fish, meaning it is well-adapted for bottom living. It is typically dorsoventrally flattened, and has a slightly humped back. Its color depends on the area where it is taken, but it generally is darker than brown or yellow bullheads (A. nebulosus and A. natalis, respectively). It can be distinguished from a flathead catfish (Pylodictis olivaris) by the fact that the black bullhead's lower lip does not protrude past the upper lip. Distinguishing it from the brown bullhead is a bit more difficult, depending on the area where it is caught, but a distinguishing detail between the two includes a nearly smooth pectoral spine on the black bullhead, while the brown's corresponding spine is strongly barbed. The anal fin of the black bullhead also has a gray base, and its tail has a pale bar. Also, the brown bullhead generally has 21 to 24 soft rays through its anal fin as opposed to the black bullhead's 17 to 21. The brown bullhead is also typically mottled brown and green on top instead of the darker black. Both the black and brown bullheads can easily be distinguished from the yellow bullhead by the color of the barbels on their chin: the yellow bullhead has white barbels under its mouth. Habitat Black bullheads are found throughout the central United States, often in stagnant or slow-moving waters with soft bottoms. They have been known to congregate in confined spaces, such as lake outlets or under dams. They are very tolerant fish, and are able to live in muddy water, with warmer temperatures and in water with lower levels of oxygen, which reduce competition from other fish. Black bullheads also occur as an invasive species in large parts of Europe. The species has been eradicated from the United Kingdom by use of rotenone biocide. It was only found in one place, Lake Meadows, Billericay, Essex, and they grew to a maximum weight of . Diet Black bullheads are omnivorous, so they eat almost anything, from grains and other plant matter to insects, dead or living fish, and crustaceans. Midge larvae and other young insects are the primary diet for adult bullheads. Black bullheads have been known to eat small fish and fish eggs as well. They have short, pointed, conical teeth, formed in multiple rows called cardiform teeth. Black bullheads have no scales; instead, they have about 100,000 taste receptors placed all over their bodies. Many of these are located on the barbels near their mouths. 
The receptors help the fish to identify food in their dark habitats. During the winter, black bullheads decrease food intake, and may stop eating altogether. Instead, they bury themselves around the shore line of the lake in debris, with only their gills exposed. This "hibernation" allows them to survive conditions of low oxygen and low temperature. Reproduction Black bullheads start to spawn in April and continue through June. The females scoop out a small hole or depression in the lake floor and lay 2000 to 6000 eggs. The males fertilize the eggs, then care for them. When the eggs hatch a week later, both parents watch over the fry for a short while. Angling Considered rough fish, black bullheads are not as popular for sport fishing as their larger relatives, channel catfish, blue catfish, and flat head catfish. However, they have pale flesh and make excellent table fare when water quality is good despite their small size. As with channel catfish, the flesh around the bellies and gills of larger individuals can be strong tasting due to yellow fat, but these flavors can be avoided by removing the fatty portions of a large specimen when cleaning. They are the largest of the bullheads and are one of several catfish informally referred to as mud catfish. They have been introduced in many areas of the US because of their ability to survive (and even thrive) in less than ideal conditions, but they are seldom used in active stocking programs due to their relatively low desirability. Fisheries experts tend to not recommend them because they compete with bluegill and channel catfish for food and do not grow as fast or get as big as channel catfish. For that reason, finding them commercially for pond stocking is difficult. That said, in clean water, meat quality is very good, and unlike channel catfish, black bullheads reproduce and indefinitely maintain healthy populations without restocking in ponds populated with bass and crappie. In fact, as with bluegill, a pond with black bullhead in it needs a predator species such as bass to keep the bullhead population under control. Due to their ability to reproduce in a pond with bass, bullheads are the best catfish for mixed-species ponds that are not fished out and restocked regularly. Black bullheads can be caught using similar techniques as for channel or blue catfish, although their small size may require smaller bait and hooks. They respond well to earthworms and tend to feed higher up in the water column than channel catfish. Like most catfish, they are most active at night, and tend to be less active during the day, bedding under piers or in shady shore areas. In some areas of little to no fishing pressure, black bullheads have been found to be more aggressive and have been caught while casting and retrieving metal spoon lures. Defense At the base of their pectoral and dorsal fins are spines, which they can use as spurs to cut predators.
Biology and health sciences
Siluriformes
Animals
4790683
https://en.wikipedia.org/wiki/Engineering%20ethics
Engineering ethics
Engineering ethics is the field of system of moral principles that apply to the practice of engineering. The field examines and sets the obligations by engineers to society, to their clients, and to the profession. As a scholarly discipline, it is closely related to subjects such as the philosophy of science, the philosophy of engineering, and the ethics of technology. Background and origins Up to the 19th century and growing concerns As engineering rose as a distinct profession during the 19th century, engineers saw themselves as either independent professional practitioners or technical employees of large enterprises. There was considerable tension between the two sides as large industrial employers fought to maintain control of their employees. In the United States growing professionalism gave rise to the development of four founding engineering societies: The American Society of Civil Engineers (ASCE) (1851), the American Institute of Electrical Engineers (AIEE) (1884), the American Society of Mechanical Engineers (ASME) (1880), and the American Institute of Mining Engineers (AIME) (1871). ASCE and AIEE were more closely identified with the engineer as learned professional, where ASME, to an extent, and AIME almost entirely, identified with the view that the engineer is a technical employee. Even so, at that time ethics was viewed as a personal rather than a broad professional concern. Turn of the 20th century and turning point When the 19th century drew to a close and the 20th century began, there had been series of significant structural failures, including some spectacular bridge failures, notably the Ashtabula River Railroad Disaster (1876), Tay Bridge Disaster (1879), and the Quebec Bridge collapse (1907). These had a profound effect on engineers and forced the profession to confront shortcomings in technical and construction practice, as well as ethical standards. One response was the development of formal codes of ethics by three of the four founding engineering societies. AIEE adopted theirs in 1912. ASCE and ASME did so in 1914. AIME did not adopt a code of ethics in its history. Concerns for professional practice and protecting the public highlighted by these bridge failures, as well as the Boston molasses disaster (1919), provided impetus for another movement that had been underway for some time: to require formal credentials (Professional Engineering licensure in the US) as a requirement to practice. This involves meeting some combination of educational, experience, and testing requirements. In 1950, the Association of German Engineers developed an oath for all its members titled 'The Confession of the Engineers', directly hinting at the role of engineers in the atrocities committed during World War II. Over the following decades most American states and Canadian provinces either required engineers to be licensed, or passed special legislation reserving title rights to organization of professional engineers. The Canadian model is to require all persons working in fields of engineering that posed a risk to life, health, property, the public welfare and the environment to be licensed, and all provinces required licensing by the 1950s. 
The US model has generally been only to require the practicing engineers offering engineering services that impact the public welfare, safety, safeguarding of life, health, or property to be licensed, while engineers working in private industry without a direct offering of engineering services to the public or other businesses, education, and government need not be licensed. This has perpetuated the split between professional engineers and those in private industry. Professional societies have adopted generally uniform codes of ethics. Recent developments Efforts to promote ethical practice continue. In addition to the professional societies and chartering organizations efforts with their members, the Canadian Iron Ring and American Order of the Engineer trace their roots to the 1907 Quebec Bridge collapse. Both require members to swear an oath to uphold ethical practice and wear a symbolic ring as a reminder. In the United States, the National Society of Professional Engineers released in 1946 its Canons of Ethics for Engineers and Rules of Professional Conduct, which evolved to the current Code of Ethics, adopted in 1964. These requests ultimately led to the creation of the Board of Ethical Review in 1954. Ethics cases rarely have easy answers, but the BER's nearly 500 advisory opinions have helped bring clarity to the ethical issues engineers face daily. Currently, bribery and political corruption is being addressed very directly by several professional societies and business groups around the world. However, new issues have arisen, such as offshoring, sustainable development, and environmental protection, that the profession is having to consider and address. General principles Codes of engineering ethics identify a specific precedence with respect to the engineer's consideration for the public, clients, employers, and the profession. Many engineering professional societies have prepared codes of ethics. Some date to the early decades of the twentieth century. These have been incorporated to a greater or lesser degree into the regulatory laws of several jurisdictions. While these statements of general principles served as a guide, engineers still require sound judgment to interpret how the code would apply to specific circumstances. The general principles of the codes of ethics are largely similar across the various engineering societies and chartering authorities of the world, which further extend the code and publish specific guidance. The following is an example from the American Society of Civil Engineers: Engineers shall hold paramount the safety, health and welfare of the public and shall strive to comply with the principles of sustainable development in the performance of their professional duties. Engineers shall perform services only in areas of their competence. Engineers shall issue public statements only in an objective and truthful manner. Engineers shall act in professional matters for each employer or client as faithful agents or trustees, and shall avoid conflicts of interest. Engineers shall build their professional reputation on the merit of their services and shall not compete unfairly with others. Engineers shall act in such a manner as to uphold and enhance the honor, integrity, and dignity of the engineering profession and shall act with zero-tolerance for bribery, fraud, and corruption. 
Engineers shall continue their professional development throughout their careers, and shall provide opportunities for the professional development of those engineers under their supervision. Engineers shall, in all matters related to their profession, treat all persons fairly and encourage equitable participation without regard to gender or gender identity, race, national origin, ethnicity, religion, age, sexual orientation, disability, political affiliation, or family, marital, or economic status. In 1990, EPFL students elaborated the Archimedean Oath, which is an ethical code of practice for engineers and technicians, similar to the Hippocratic Oath used in the medical world. Obligation to society The paramount value recognized by engineers is the safety and welfare of the public. As demonstrated by the following selected excerpts, this is the case for professional engineering organizations in nearly every jurisdiction and engineering discipline: Institute of Electrical and Electronics Engineers: "We, the members of the IEEE, … do hereby commit ourselves to the highest ethical and professional conduct and agree: 1. to accept responsibility in making decisions consistent with the safety, health and welfare of the public, and to disclose promptly factors that might endanger the public or the environment;" Institution of Civil Engineers: "Members of the ICE should always be aware of their overriding responsibility to the public good. A member’s obligations to the client can never override this, and members of the ICE should not enter undertakings which compromise this responsibility. The ‘public good’ encompasses care and respect for the environment, and for humanity's cultural, historical and archaeological heritage, as well as the primary responsibility members have to protect the health and well-being of present and future generations." Professional Engineers Ontario: "A practitioner shall, regard the practitioner's duty to public welfare as paramount." National Society of Professional Engineers: "Engineers, in the fulfillment of their professional duties, shall: Hold paramount the safety, health, and welfare of the public." American Society of Mechanical Engineers: "Engineers shall hold paramount the safety, health and welfare of the public in the performance of their professional duties." Institute of Industrial Engineers: "Engineers uphold and advance the integrity, honor and dignity of the engineering profession by: 2. Being honest and impartial, and serving with fidelity the public, their employers and clients." American Institute of Chemical Engineers: "To achieve these goals, members shall hold paramount the safety, health and welfare of the public and protect the environment in performance of their professional duties." American Nuclear Society: "ANS members uphold and advance the integrity and honor of their professions by using their knowledge and skill for the enhancement of human welfare and the environment; being honest and impartial; serving with fidelity the public, their employers, and their clients; and striving to continuously improve the competence and prestige of their various professions." 
Society of Fire Protection Engineers: "In the practice of their profession, fire protection engineers must maintain and constantly improve their competence and perform under a standard of professional behavior which requires adherence to the highest principles of ethical conduct with balanced regard for the interests of the public, clients, employers, colleagues, and the profession." Responsibility of engineers The engineers recognize that the greatest merit is the work and exercise their profession committed to serving society, attending to the welfare and progress of the majority. By transforming nature for the benefit of mankind, engineers must increase their awareness of the world as the abode of humanity, their interest in the universe as a guarantee of overcoming their spirit, and knowledge of reality to make the world fairer and happier. The engineer should reject any paper that is intended to harm the general interest, thus avoiding a situation that might be hazardous or threatening to the environment, life, health, or other rights of human beings. It is an inescapable duty of the engineer to uphold the prestige of the profession, to ensure its proper discharge, and to maintain a professional demeanor rooted in ability, honesty, fortitude, temperance, magnanimity, modesty, honesty, and justice; with the consciousness of individual well-being subordinate to the social good. The engineers and their employers must ensure the continuous improvement of their knowledge, particularly of their profession, disseminate their knowledge, share their experience, provide opportunities for education and training of workers, provide recognition, moral and material support to the schools where they studied, thus returning the benefits and opportunities they and their employers have received. It is the responsibility of the engineers to carry out their work efficiently and to support the law. In particular, they must ensure compliance with the standards of worker protection as provided by the law. As professionals, the engineers are expected to commit themselves to high standards of conduct (NSPE). [1] 11/27/11 Duty to Report (Whistleblowing) A basic ethical dilemma is that an engineer has the duty to report to the appropriate authority a possible risk to others from a client or employer failing to follow the engineer's directions. According to first principles, this duty overrides the duty to a client and/or employer. An engineer may be disciplined, or have their license revoked, even if the failure to report such a danger does not result in the loss of life or health. If an engineer is overruled by a non-technical authority or a technical authority they must inform the authority, in writing, the reasons for their advice and the consequences of the deviation from the advice. In many cases, this duty can be discharged by advising the client of the consequences in a forthright matter, and ensuring the client takes the engineer's advice. In very rare cases, where even a governmental authority may not take appropriate action, the engineer can only discharge the duty by making the situation public. As a result, whistleblowing by professional engineers is not an unusual event, and courts have often sided with engineers in such cases, overruling duties to employers and confidentiality considerations that otherwise would have prevented the engineer from speaking out. Conduct There are several other ethical issues that engineers may face. 
Some have to do with technical practice, but many others have to do with broader considerations of business conduct. These include: Relationships with clients, consultants, competitors, and contractors Ensuring legal compliance by clients, client's contractors, and others Conflict of interest Bribery and kickbacks, which also may include: Gifts, meals, services, and entertainment Treatment of confidential or proprietary information Consideration of the employer's assets Outside employment/activities (Moonlighting) Some engineering societies are addressing environmental protection as a stand-alone question of ethics. The field of business ethics often overlaps and informs ethical decision making for engineers. Case studies and key individuals Petroski notes that most engineering failures are much more involved than simple technical miscalculations and involve the failure of the design process or management culture. However, not all engineering failures involve ethical issues. The infamous collapse of the first Tacoma Narrows Bridge, and the losses of the Mars Polar Lander and Mars Climate Orbiter, were technical and design process failures. Nor are all engineering ethics issues necessarily engineering failures per se - Northwestern University instructor Sheldon Epstein cited The Holocaust as an example of a breach in engineering ethics despite (and because of) the engineers' creations being successful at carrying out the Nazis' mission of genocide. There is also the ethical issue of whether engineers should consider vulnerability to hostile intent (for example, for governmental buildings or industrial sites) in the same way that weather is considered, regardless of the project specifications. Lysenkoism is a specific form of ethical failure, which occurs when engineers (or scientists) allow political agendas to take precedence over professional ethics. These episodes of engineering failure include ethical as well as technical issues. Titan submersible implosion (2023) General Motors ignition switch recalls (2014) Deepwater Horizon oil spill (2010) Space Shuttle Columbia disaster (2003) Space Shuttle Challenger disaster (1986) Therac-25 accidents (1985 to 1987) Chernobyl disaster (1986) Bhopal disaster (1984) Kansas City Hyatt Regency walkway collapse (1981) Love Canal (1980), Lois Gibbs Three Mile Island accident (1979) Citigroup Center (1978), Ford Pinto safety problems (1970s) Minamata disease (1908–1973) Aberfan disaster (1966) Chevrolet Corvair safety problems (1960s), Ralph Nader, and Unsafe at Any Speed Boston molasses disaster (1919) Quebec Bridge collapse (1907), Theodore Cooper Johnstown Flood (1889), South Fork Fishing and Hunting Club Tay Bridge Disaster (1879), Thomas Bouch, William Henry Barlow, and William Yolland Ashtabula River Railroad Disaster (1876), Amasa Stone
Technology
Basics
null
4791850
https://en.wikipedia.org/wiki/Chinchilla%20rabbit
Chinchilla rabbit
Chinchilla rabbits are a group of three rabbit breeds that have been bred for a coat that resembles that of chinchillas. Despite their name, they are not related to, and cannot interbreed with, chinchillas, a genus of rodent. Rabbits, in contrast, are lagomorphs. A mutation diluted the yellow pigment in the hairs to almost white, changing in this way the color of the fur of the wild type fur (agouti) into chinchilla. There are three breeds of Chinchilla recognized by the American Rabbit Breeders Association. Other breeds may have recognized Chinchilla varieties (such as Dutch), but the three Chinchilla breeds each have only one variety. Standard Chinchilla Weight: Standard Chinchilla is the original chinchilla version with the larger versions being developed from it. It has a compact body and rollback fur. American Chinchilla Weight: The American Chinchilla or "Heavyweight Chinchilla" is larger than the Standard Chinchilla, it has a commercial body type but the same roll back coat. Standard Chinchillas bred for large size produced this breed. Chinchilla Rabbits originated in France and were bred to standard by M. J. Dybowski. They were introduced to the United States in 1919. Bred to be a meat and fur rabbit, the American Chinchilla Rabbit can be shown/exhibited or kept as a stocky, hardy pet. American Chinchilla Rabbits do not require regular grooming. Adult American Chinchilla Rabbits weigh different for each sex. Males (Bucks)- 9-11#, and Females (Does) 10-12#. These stocky rabbits have a slight curve to their medium length bodies, beginning at the nape of their necks and following through to the rump. They carry their ears straight erect. The quality of the pelt is first and more important when breeding for the "Standard Of Perfection". American Chinchilla Rabbits are a six-class breed in show. (Any rabbit that matures over 9 pounds is a 6-class breed, maturation weights under 9# are 4-class breeds.) The American Chinchilla Rabbit was bred from large Standard Chinchilla Rabbits in order to produce a meatier rabbit. They were originally called Heavyweight Chinchilla Rabbits. Junior and intermediate American Chinchilla Rabbits may be shown in age classifications higher than their own if they are overweight. Bucks and does under six months and nine pounds are considered juniors. Intermediate American Chinchilla Rabbits are bucks and does six to eight months of age. The American Chinchilla Rabbit is listed on "The Livestock Conservancy as being the only "critically endangered" rabbit at this time. American Chinchilla Rabbits are good breeders, with an average litter of 6-9 kits. Giant Chinchilla Weight: The Giant Chinchilla is a result of crosses between Chinchilla and Flemish Giant breeds; it originates in the United States. This breed is used primarily as a commercial meat rabbit.
Biology and health sciences
Rabbits
Animals
12891610
https://en.wikipedia.org/wiki/Papeda%20%28citrus%29
Papeda (citrus)
Papeda or papaeda is the common name for a group of Citrus species and varieties native to tropical Asia that are hardy and slow-growing, and produce unpalatable fruit. Walter Tennyson Swingle segregated these species into a separate subgenus, Papeda, that included the Ichang lemon, yuzu, kaffir lime, kabosu, sudachi, and a number of wild and uncultivated species and hybrids. Recent genetic analysis shows the papedas to be distributed among distinct branches of the Citrus phylogenetic tree, and hence Swingle's proposed subgenus is polyphyletic and not a valid taxonomic grouping, but the term persists as a common name. Because of generally slow growth and bitter, less palatable fruits than in other citruses, papeda species have only limited commercial cultivation. Some species, like ichang papeda, are used in landscaping, while others are important for rootstocking and as genome source for breeding disease-resistant and frost-hardy citrus hybrids. In some cases the skin or leaves are used as a flavoring in Asian cuisine. It is believed, based on molecular studies, that the citron, pomelo, mandarin and papedas were the ancestors of most hybrid citrus species and their varieties, which resulted from breeding or natural hybridization among the parental species. For example, the Key lime, a hybrid between a papeda, the micrantha, and a citron, has in turn given rise to many commercial types of limes. Classification There are four species of Papeda currently recognised by Kew and the Missouri Botanical Garden. These are: Citrus cavaleriei - the Ichang papeda Citrus halimii - mountain citron Citrus latipes - khasi papeda Citrus hystrix - The kaffir lime or Mauritius papeda There are many naturally occurring varieties that are now classified as subspecies: Citrus hystrix var. micrantha - small-flowered papeda (locally known as the biasong) Citrus hystrix var. microcarpa - small-fruited papeda (locally known as the samuyao) Citrus hystrix var. celebica - Celebes papeda Citrus hystrix var. macroptera - Melanesian papeda Citrus x aurantiifolia var. macrophylla - alemow Citrus x aurantiifolia var. webberi - kalpi Citrus longispina - winged lime (unresolved as to whether or not it is a hybrid, variety or species) A number of hybrids between this subgenus and the subgenus Citrus also exist: Ichandarins Yuzu (ichang papeda × mandarin) Sudachi (ichang papeda × mandarin) Ichang lemon (ichang papeda × pomelo) Hyuganatsu (yuzu × pomelo, or yuzu sport) Kabosu (ichang papeda × bitter orange)
Biology and health sciences
Citrus fruits
Plants
474589
https://en.wikipedia.org/wiki/Ultra-high-energy%20cosmic%20ray
Ultra-high-energy cosmic ray
In astroparticle physics, an ultra-high-energy cosmic ray (UHECR) is a cosmic ray with an energy greater than 1 EeV (10^18 electronvolts, approximately 0.16 joules), far beyond both the rest mass and energies typical of other cosmic ray particles. The origin of these highest energy cosmic rays is not known. These particles are extremely rare; between 2004 and 2007, the initial runs of the Pierre Auger Observatory (PAO) detected 27 events with estimated arrival energies above 5.7×10^19 eV, that is, about one such event every four weeks in the area surveyed by the observatory. Observational history The first observation of a cosmic ray particle with an energy exceeding 1.0×10^20 eV (16 J) was made by John Linsley and Livio Scarsi at the Volcano Ranch experiment in New Mexico in 1962. Cosmic ray particles with even higher energies have since been observed. Among them was the Oh-My-God particle observed by the University of Utah's Fly's Eye experiment on the evening of 15 October 1991 over Dugway Proving Ground, Utah. Its observation was shocking to astrophysicists, who estimated its energy at approximately 3.2×10^20 eV (50 J)—essentially an atomic nucleus with kinetic energy equal to that of a baseball traveling at about 94 km/h (58 mph). The energy of this particle is some 40 million times that of the highest energy protons that have been produced in any terrestrial particle accelerator. However, only a small fraction of this energy would be available for an interaction with a proton or neutron on Earth, with most of the energy remaining in the form of kinetic energy of the products of the interaction. The effective energy available for such a collision is the square root of double the product of the particle's energy and the mass energy of the proton, which for this particle gives about 7.5×10^14 eV (750 TeV), roughly 50 times the collision energy of the Large Hadron Collider. Since the first observation, by the University of Utah's Fly's Eye Cosmic Ray Detector, at least fifteen similar events have been recorded, confirming the phenomenon. These very high energy cosmic ray particles are very rare; the energy of most cosmic ray particles is between 10 MeV and 10 GeV. Ultra-high-energy cosmic ray observatories AGASA – Akeno Giant Air Shower Array in Japan Antarctic Impulse Transient Antenna (ANITA) detects ultra-high-energy cosmic neutrinos believed to be caused by ultra-high-energy cosmic ray particles Extreme Universe Space Observatory GRAPES-3 (Gamma Ray Astronomy PeV EnergieS 3rd establishment) is a project for cosmic ray study with air shower detector array and large area muon detectors at Ooty in southern India. High Resolution Fly's Eye Cosmic Ray Detector (HiRes) MARIACHI – Mixed Apparatus for Radar Investigation of Cosmic-rays of High Ionization located on Long Island, USA. Pierre Auger Observatory Telescope Array Project Yakutsk Extensive Air Shower Array Tunka experiment The COSMICi project at Florida A&M University is developing technology for a distributed network of low-cost detectors for UHECR showers in collaboration with MARIACHI. Cosmic-Ray Extremely Distributed Observatory (CREDO) Pierre Auger Observatory Pierre Auger Observatory is an international cosmic ray observatory designed to detect ultra-high-energy cosmic ray particles (with energies beyond 10^20 eV). These high-energy particles have an estimated arrival rate of just 1 per square kilometer per century, therefore, in order to record a large number of these events, the Auger Observatory has created a detection area of 3,000 km² (the size of Rhode Island) in Mendoza Province, western Argentina.
The Pierre Auger Observatory, in addition to obtaining directional information from the cluster of water-Cherenkov tanks used to observe the cosmic-ray-shower components, also has four telescopes trained on the night sky to observe fluorescence of the nitrogen molecules as the shower particles traverse the sky, giving further directional information on the original cosmic ray particle. In September 2017, data from 12 years of observations from PAO supported an extragalactic source (outside of Earth's galaxy) for the origin of extremely high energy cosmic rays. Suggested origins The origin of these rare highest energy cosmic rays is not known. Since observations find no correlation with the Galactic plane and Galactic magnetic fields are not strong enough to accelerate particles to these energies, these cosmic rays are believed to have an extra-galactic origin. Neutron stars One suggested source of UHECR particles is their origination from neutron stars. In young neutron stars with spin periods of <10 ms, the magnetohydrodynamic (MHD) forces from the quasi-neutral fluid of superconducting protons and electrons existing in a neutron superfluid accelerate iron nuclei to UHECR velocities. The neutron superfluid in rapidly rotating stars creates a magnetic field of 10^8 to 10^11 teslas, at which point the neutron star is classified as a magnetar. This magnetic field is the strongest stable field in the observed universe and creates the relativistic MHD wind believed to accelerate iron nuclei remaining from the supernova to the necessary energy. Another hypothesized source of UHECRs from neutron stars is neutron star to strange star combustion. This hypothesis relies on the assumption that strange matter is the ground state of matter, an assumption which has no experimental or observational data to support it. Due to the immense gravitational pressures within the neutron star, it is believed that small pockets of matter consisting of up, down, and strange quarks in equilibrium form, acting as a single hadron (as opposed to a number of baryons). This combustion then spreads through the entire star, converting it to strange matter, at which point the neutron star becomes a strange star and its magnetic field breaks down; the breakdown occurs because the protons and neutrons in the quasi-neutral fluid have become strangelets. This magnetic field breakdown releases large amplitude electromagnetic waves (LAEMWs). The LAEMWs accelerate light ion remnants from the supernova to UHECR energies. "Ultra-high-energy cosmic ray electrons" (defined as electrons with energies of ≥10^14 eV) might be explained by the centrifugal mechanism of acceleration in the magnetospheres of Crab-like pulsars. The feasibility of electron acceleration to this energy scale in the Crab pulsar magnetosphere is supported by the 2019 observation of ultra-high-energy gamma rays coming from the Crab Nebula, which hosts a young pulsar with a spin period of 33 ms. Active galactic cores Interactions with blue-shifted cosmic microwave background radiation limit the distance that these particles can travel before losing energy; this is known as the Greisen–Zatsepin–Kuzmin limit or GZK limit. The source of such high energy particles has been a mystery for many years. Recent results from the Pierre Auger Observatory show that ultra-high-energy cosmic ray arrival directions appear to be correlated with extragalactic supermassive black holes at the centers of nearby galaxies called active galactic nuclei (AGN). 
However, since the angular correlation scale used is fairly large (3.1°), these results do not unambiguously identify the origins of such cosmic ray particles. The AGN could merely be closely associated with the actual sources, for example in galaxies or other astrophysical objects that are clumped with matter on large scales within 100 megaparsecs. Some of the supermassive black holes in AGN are known to be rotating, as in the Seyfert galaxy MCG 6-30-15, which shows time-variability in its inner accretion disk. Black hole spin is a potentially effective agent to drive UHECR production, provided ions are suitably launched to circumvent limiting factors deep within the galactic nucleus, notably curvature radiation and inelastic scattering with radiation from the inner disk. Low-luminosity, intermittent Seyfert galaxies may meet the requirements with the formation of a linear accelerator several light years away from the nucleus, yet within their extended ion tori whose UV radiation ensures a supply of ionic contaminants. The corresponding electric fields are small, on the order of 10 V/cm, whereby the observed UHECRs are indicative of the astronomical size of the source. Improved statistics by the Pierre Auger Observatory will be instrumental in identifying the presently tentative association of UHECRs (from the Local Universe) with Seyferts and LINERs. Other possible sources of the particles In addition to neutron stars and active galactic nuclei, the best candidate sources of UHECRs are: supernova remnants, intergalactic shocks created during the epoch of galaxy formation, gamma-ray bursts, and relativistic supernovae. Relation with dark matter It is hypothesized that active galactic nuclei are capable of converting dark matter into high energy protons. Yuri Pavlov and Andrey Grib at the Alexander Friedmann Laboratory for Theoretical Physics in Saint Petersburg hypothesize that dark matter particles are about 15 times heavier than protons, and that they can decay into pairs of heavier virtual particles of a type that interacts with ordinary matter. Near an active galactic nucleus, one of these particles can fall into the black hole, while the other escapes, as described by the Penrose process. Some of those particles will collide with incoming particles; these are very high energy collisions which, according to Pavlov, can form ordinary visible protons with very high energy. Pavlov then claims that ultra-high-energy cosmic ray particles are evidence of such processes. Propagation Ultra-high-energy particles can interact with the photons in the cosmic microwave background while traveling over cosmic distances. This leads to a predicted high energy cutoff for those cosmic rays known as the Greisen–Zatsepin–Kuzmin limit (GZK limit) which matches observed cosmic ray spectra. The propagation of particles can also be affected by cosmic magnetic fields. While there are some studies of galactic magnetic fields, the origin and scale of extragalactic magnetic fields are poorly understood.
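To give a feel for where the GZK cutoff mentioned in the Propagation section comes from, the sketch below makes a crude, head-on, single-photon estimate of the threshold proton energy for photopion production on the cosmic microwave background. It is only an order-of-magnitude illustration under simplifying assumptions (a representative CMB photon energy of roughly 6 × 10^-4 eV and an exactly head-on collision); the commonly quoted limit of about 5 × 10^19 eV, a standard figure not taken from this article, follows from integrating over the full CMB spectrum and collision geometry.

# Order-of-magnitude estimate of the GZK threshold for p + gamma_CMB -> p + pion.
# Assumed values (not from the text): proton and pion rest energies and a
# representative CMB photon energy for a 2.7 K blackbody.
m_p  = 0.938272e9    # proton rest energy, eV
m_pi = 0.134977e9    # neutral pion rest energy, eV
eps  = 6.0e-4        # typical CMB photon energy, eV

# Head-on threshold condition s = (m_p + m_pi)^2 gives
#   E_threshold ~= m_pi * (2 * m_p + m_pi) / (4 * eps)
E_threshold = m_pi * (2.0 * m_p + m_pi) / (4.0 * eps)

print(f"GZK threshold (crude estimate) ~ {E_threshold:.1e} eV")
# ~1e20 eV, the same order of magnitude as the accepted GZK limit of ~5e19 eV.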
Physical sciences
Basics_2
Astronomy
474843
https://en.wikipedia.org/wiki/Sea%20pen
Sea pen
Sea pens are marine cnidarians belonging to the order Pennatulacea, which are colony-forming benthic filter feeders within the class Octocorallia (subphylum Anthozoa). There are 14 families within the order and 35 extant genera; it is estimated that of 450 described species, around 200 are valid. Sea pens have a cosmopolitan distribution, being found in tropical and temperate waters worldwide, from intertidal shallow waters to deep seas of more than . Sea pens are grouped with the octocorals together with sea whips (gorgonians), but there has been only one molecular study focusing on the phylogenetic relationships within the order Pennatulacea, which mainly treated deep-sea species, and thus information on shallow water species is still lacking. Although the group is named for its supposed resemblance to antique quill pens, only sea pen species belonging to the suborder Subselliflorae live up to the comparison. Those belonging to the much larger suborder Sessiliflorae lack feathery structures and grow in club-like or radiating forms. The latter suborder includes what are commonly known as sea pansies. The earliest accepted sea pen fossils are known from the Cambrian-aged Burgess Shale (Thaumaptilon). Similar fossils from the Ediacaran may show the dawn of sea pens. Precisely what these early fossils are, however, is not decided. Taxonomy The order Pennatulacea consists of the following families: Chunellidae, Echinoptilidae, Renillidae, Scleroptilidae, Stachyptilidae; Suborder Sessiliflorae: Anthoptilidae, Funiculinidae, Kophobelemnidae, Protoptilidae, Pseudumbellulidae, Umbellulidae, Veretillidae; Suborder Subselliflorae: Halipteridae, Pennatulidae, Virgulariidae. Biology Due to their wide geographic distribution and long evolutionary history, genetic variation within the different species of sea pen is quite large. Throughout the evolution of pennatulaceans, most sea pens have kept their original mitochondrial gene order, but ancestral state reconstruction shows that a certain clade of sea pens has undergone unique rearrangements. Many populations of sea pens are found mainly in Indian waters. Their polyps show genetic differences between populations that have dispersed among different waters and islands, as well as differences in how the polyps (tentacles) are used to protect themselves and other species. As octocorals, sea pens are colonial animals with multiple polyps (which look somewhat like miniature sea anemones), each with eight tentacles. Unlike other octocorals, however, a sea pen's polyps are specialized to specific functions: a single polyp develops into a rigid, erect stalk (the rachis) and loses its tentacles, forming a bulbous "root" or peduncle at its base. The other polyps branch out from this central stalk, forming water intake structures (siphonozooids), feeding structures (autozooids) with nematocysts, and reproductive structures. The entire colony is fortified by calcium carbonate in the form of spicules and a central axial rod. Using their root-like peduncles to anchor themselves in sandy or muddy substrate, the exposed portion of sea pens may rise up to in some species, such as the tall sea pen (Funiculina quadrangularis). Sea pens are sometimes brightly coloured; the orange sea pen (Ptilosarcus gurneyi) is a notable example. Rarely found above depths of , sea pens prefer deeper waters where turbulence is less likely to uproot them. Some species may inhabit depths of or more. While generally sessile animals, sea pens are able to relocate and re-anchor themselves if need be. 
They position themselves favourably in the path of currents, ensuring a steady flow of plankton, the sea pens' chief source of food. Their primary predators are nudibranchs and sea stars, some of which feed exclusively on sea pens. The sea pens' tendency to occur in clumps at spatially unpredictable locations hinders the sea stars' ability to prey on them. When touched, some sea pens emit a bright greenish light; this is known as bioluminescence. They may also force water out of their bodies for defence, rapidly deflating and retreating into their peduncle. Like other anthozoans, sea pens reproduce by coordinating a release of sperm and eggs into the water column; this may occur seasonally or throughout the year. Fertilized eggs develop into larvae called planulae, which drift freely for about a week before settling on the substrate. Mature sea pens provide shelter for other animals, such as juvenile fish. Analysis of rachis growth rings indicates sea pens may live for 100 years or more, if the rings are indeed annual in nature. Some sea pens exhibit glide reflection symmetry, rare among extant animals. Aquarium trade Sea pens are sometimes sold in the aquarium trade. However, they are generally hard to care for because they need a very deep substrate and have special food requirements.
Biology and health sciences
Cnidarians
Animals
474873
https://en.wikipedia.org/wiki/Mamba
Mamba
Mambas are fast-moving, highly venomous snakes of the genus Dendroaspis (which literally means "tree asp") in the family Elapidae. Four extant species are recognised currently; three of those four species are essentially arboreal and green in colour, whereas the black mamba, Dendroaspis polylepis, is largely terrestrial and generally brown or grey in colour. All are native to various regions in sub-Saharan Africa and all are feared throughout their ranges, especially the black mamba. In Africa there are many legends and stories about mambas. Behaviour The three green species of mambas are arboreal, whereas the black mamba is largely terrestrial. All four species are active diurnal hunters, preying on birds, lizards, and small mammals. At nightfall some species, especially the terrestrial black mamba, shelter in a lair. A mamba may retain the same lair for years. Resembling a cobra, the threat display of a mamba includes rearing, opening the mouth and hissing. The black mamba's mouth is black within, which renders the threat more conspicuous. A rearing mamba has a narrower yet longer hood and tends to lean well forward, instead of standing erect as a cobra does. Stories of black mambas that chase and attack humans are common, but in fact the snakes generally avoid contact with humans. The black mamba (Dendroaspis polylepis) is a highly venomous snake species native to various parts of sub-Saharan Africa. Black mambas are fast-moving, nervous snakes that will strike when threatened. According to findings by Branch (2016), their venom comprises neurotoxins and cardiotoxins that can rapidly induce symptoms, including dizziness, extreme fatigue, vision problems, foaming at the mouth, paralysis, convulsions, and eventual death from respiratory or cardiac failure if untreated. Although black mamba venom is highly toxic, antivenom is available and can treat envenomation promptly. Most apparent cases of pursuit probably are examples of where witnesses have mistaken the snake's attempt to retreat to its lair when a human happens to be in the way. The black mamba usually uses its speed to escape from threats, and humans actually are their main predators, rather than prey. Venom All mambas have medically significant venom, with dendrotoxins, short chain alpha-neurotoxins, cardiotoxins and fasciculins. All mambas are classified as snakes of medical importance by the World Health Organization. There are multiple components in dendrotoxins with different targets: Dendrotoxin 1, which inhibits the K+ channels at the pre and post-synaptic level in the intestinal smooth muscle. It also inhibits Ca2+-sensitive K+ channels from rat skeletal muscle‚ incorporated into planar bilayers (Kd = 90 nM in 50 mM KCl).) Dendrotoxin 3, which inhibits acetylcholine M4 receptors. Dendrotoxin 7, commonly referred to as muscarinic toxin 7 (MT7) inhibits acetylcholine M1 receptors. Dendrotoxin K, structurally homologous to Kunitz-type proteinase inhibitors with activity as a selective blocker of voltage-gated potassium channels Toxicity alone does not determine severity of envenomation; other factors include the snake's temperament, venom yields, proximity of wounds to the CNS and depth of punctures. 
Bites by all members of this genus are capable of causing rapid onsets of symptoms, but it is the black mamba whose bite has the worst prognosis, possibly as a result of its more terrestrial nature (having more potential for human contact), high defensiveness (having a higher possibility to deliver fatal bites instead of dry bites), large size (giving it a higher strike position proximal to the victim's brain), and higher average venom yields and potential toxicity (based on experimental results). A lethality rate of near 100% for untreated black mamba bites has been circulating between various sources, which is probably based on a single medical record made in a single district between 1957 and 1963 when specific antivenom had yet to be introduced. Seven out of seven victims of this species who received non-specific polyvalent antivenom, that had no effect on the species' toxins, succumbed to its bites. However, another snakebite survey in South Africa reported a death rate of approximately 43% among those who received ineffective treatments (15 fatal cases out of 35 patients). A mamba-specific antivenom was introduced in 1962, followed by a fully polyvalent antivenom in 1971; over this period, 5 out of 38 people in South Africa bitten by black mambas who received the antivenom died, according to the same report. Since then, the number has significantly dropped with the widespread use of specific antivenom. Despite their fearsome reputation and often exaggerated notoriety, mamba envenomation occurs far less frequently than some other snakes', for instance the puff adder. Besides proximity to residences, behaviour of a given species is also a critical aspect when it comes to snakebite morbidities. Mambas are agile, usually fleeing from any confrontation with unambiguous threat display which allows early recognition of the serpent, avoiding escalation in tension. Taxonomy Dendroaspis, is derived from Ancient Greek déndron (δένδρον), meaning "tree", and aspis (ασπίς), which is understood to mean "shield", but also denotes "cobra" or simply "snake", in particular "snake with hood (shield)". Via Latin aspis, it is the source of the English word "asp". In ancient texts, aspis or asp often referred to the Egyptian cobra (Naja haje), in reference to its shield-like hood. The genus was first described by the German naturalist Hermann Schlegel in 1848, with Elaps jamesonii as the type species. It was misspelt as Dendraspis by Dumeril in 1856, and generally uncorrected by subsequent authors. In 1936, Dutch herpetologist Leo Brongersma pointed out that the correct spelling was Dendroaspis but added that the name was invalid as Fitzinger had coined Dendraspis in 1843 for the king cobra and hence had priority. However, in 1962 German herpetologist Robert Mertens proposed that the 1843 description of Dendraspis by Fitzinger be suppressed due to its similarity to Dendroaspis, and the confusion it would cause by its use. Range and characteristics Black mambas live in the savannas and rocky hills of southern and eastern Africa. They are Africa's longest venomous snake, reaching up to 14 feet in length, although 8.2 feet is more the average. They are also among the fastest snakes in the world, slithering at speeds of up to 12.5 miles per hour. * Including the nominate subspecies. T Type species. Phylogeny A 2018 analysis of the venom of the mambas, as well as a 2016 genetic analysis, found the following cladogram representative of the relationship between the species.
Biology and health sciences
Reptiles
null
474905
https://en.wikipedia.org/wiki/Tincture%20of%20iodine
Tincture of iodine
Tincture of iodine, iodine tincture, or weak iodine solution is an antiseptic. It is usually 2 to 3% elemental iodine, along with potassium iodide or sodium iodide, dissolved in a mixture of ethanol and water. Tincture solutions are characterized by the presence of alcohol. It was used from 1908 in pre-operative skin preparation by Italian surgeon Antonio Grossich. In the United Kingdom, the development of an iodine solution for skin sterilisation was pioneered by Lionel Stretton. The British Medical Journal published the detail of his work at Kidderminster Infirmary in 1909. Stretton used a much weaker solution than that used by Grossich. He claimed in 1915 that Grossich had been using a liquid akin to Liquor Iodi Fortis, and that it was he, Stretton, who had introduced the method using Tincture of Iodine BP which came to be used across the world. USP formulas USP Tincture of Iodine is defined in the U.S. National Formulary (NF) as containing in each 100 mL, 1.8 to 2.2 grams of elemental iodine, and 2.1 to 2.6 grams of sodium iodide. Alcohol is 50 mL, and the balance is purified water. This "2% free iodine" solution has 0.08 mol/L of I2, which provides about 1 mg of free iodine per 0.05 mL drop. The "2% free iodine" description is based on the quantity of elemental iodine, not sodium/potassium iodide. USP Strong Iodine Tincture is defined in the NF as containing in each 100 mL, 6.8 to 7.5 grams of iodine, and 4.7 to 5.5 grams of potassium iodide. Purified water is 50 mL and the balance is alcohol. This 7% tincture solution is about 3.5 times more concentrated than USP 2% tincture. As in the case of Lugol's iodine, the role of iodide in the solution is to increase the solubility of the elemental iodine, by turning it into the soluble triiodide anion I3−. However, since iodine has moderate solubility in ethanol, it is also assisted by this solvent directly. Lugol's iodine, by contrast, has no alcohol, and has twice the mass of potassium iodide as of elemental iodine. Alcohol content in the tincture of iodine can be determined by the methods of Alcock, Roscoe – Schorlemmer and Thurston – Thurston. Usage As both USP solutions contain elemental iodine, which is moderately toxic when ingested in amounts larger than those required to disinfect water, tincture of iodine is sold labelled “for external use only,” and used primarily as a disinfectant. Tincture of iodine is often found in emergency survival kits, used both to disinfect wounds and to sanitize surface water for drinking. When an alcohol solution is not desirable for this purpose, the alcohol-free Lugol's iodine, an aqueous solution of iodine in potassium iodide solution, or else povidone-iodine (brand names Wokadine, Betadine), a PVPI solution, can be used. Small amounts may be added to suspect drinking water as a disinfectant (typically 5 mg free iodine per liter, or 5 drops of 2% tincture). Though this treatment is effective against bacteria and viruses, it does not protect against protozoan parasites such as Cryptosporidium and Giardia. Iodine solution is used to sanitize the surface of fruit and vegetables from bacteria and viruses. The common concentration for sanitization is 25 ppm iodophor for 1 minute. However, the effectiveness depends on whether the solution penetrates into rifts, and whether dirt is effectively removed first. The oocysts of protozoan parasites will not be killed, and it is also doubtful that bacterial spores will be killed. 
Iodine solutions should not be considered able to sanitize or disinfect salad, fruit or vegetables that are contaminated with feces. Iodine tincture is not recommended purely as a source of nutritional iodine. Nutritional iodine is better supplied in the form of the less toxic iodide (see SSKI) or iodate salts, which the body can easily convert to thyroid hormone. Nevertheless, the iodide in tincture of iodine used as a water disinfectant does supply more than adequate nutritional iodine, perhaps 30 or more times the recommended daily allowance per liter or quart. Application of tincture or Lugol's to the skin also results in absorption and bioavailability of some moderate fraction of the iodine. This method can be used to saturate the thyroid with iodine to help prevent the excessive uptake of radioactive iodine-131 in a nuclear accident.
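As a numerical cross-check of the figures given in the USP formulas section above, the short sketch below verifies that roughly 2 g of elemental iodine per 100 mL corresponds to about 0.08 mol/L and about 1 mg of free iodine per 0.05 mL drop, and that five such drops give roughly the 5 mg per litre dose suggested for water disinfection. The molar mass of I2 (about 253.8 g/mol) is a standard value assumed here, not stated in the article.

# Cross-check of the "2% free iodine" tincture figures from the USP formulas section.
M_I2 = 253.81                     # molar mass of I2 in g/mol (standard value, assumed)

grams_per_litre = 2.0 * 10.0      # 2 g per 100 mL corresponds to 20 g/L
molarity = grams_per_litre / M_I2 # mol of I2 per litre
mg_per_ml = grams_per_litre       # 20 g/L is the same as 20 mg/mL
mg_per_drop = mg_per_ml * 0.05    # amount in a 0.05 mL drop

print(f"I2 concentration  ~ {molarity:.3f} mol/L")    # ~0.079, matching the ~0.08 mol/L quoted
print(f"free iodine/drop  ~ {mg_per_drop:.2f} mg")    # ~1 mg per 0.05 mL drop
print(f"5 drops per litre ~ {5 * mg_per_drop:.1f} mg of iodine")  # ~5 mg/L for water treatment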
Physical sciences
Halide salts
Chemistry
475008
https://en.wikipedia.org/wiki/Stiffness
Stiffness
Stiffness is the extent to which an object resists deformation in response to an applied force. The complementary concept is flexibility or pliability: the more flexible an object is, the less stiff it is. Calculations The stiffness, k, of a body is a measure of the resistance offered by an elastic body to deformation. For an elastic body with a single degree of freedom (DOF) (for example, stretching or compression of a rod), the stiffness is defined as k = F/δ, where F is the force on the body and δ is the displacement produced by the force along the same degree of freedom (for instance, the change in length of a stretched spring). Stiffness is usually defined under quasi-static conditions, but sometimes under dynamic loading. In the International System of Units, stiffness is typically measured in newtons per meter (N/m). In Imperial units, stiffness is typically measured in pounds (lbs) per inch. Generally speaking, deflections (or motions) of an infinitesimal element (which is viewed as a point) in an elastic body can occur along multiple DOF (maximum of six DOF at a point). For example, a point on a horizontal beam can undergo both a vertical displacement and a rotation relative to its undeformed axis. When there are M degrees of freedom, an M × M matrix must be used to describe the stiffness at the point. The diagonal terms in the matrix are the direct-related stiffnesses (or simply stiffnesses) along the same degree of freedom and the off-diagonal terms are the coupling stiffnesses between two different degrees of freedom (either at the same or different points) or the same degree of freedom at two different points. In industry, the term influence coefficient is sometimes used to refer to the coupling stiffness. It is noted that for a body with multiple DOF, the equation above generally does not apply since the applied force generates not only the deflection along its direction (or degree of freedom) but also those along other directions. For a body with multiple DOF, to calculate a particular direct-related stiffness (the diagonal terms), the corresponding DOF is left free while the remaining ones should be constrained. Under such a condition, the above equation gives the direct-related stiffness for the unconstrained degree of freedom. The ratios between the reaction forces (or moments) and the produced deflection are the coupling stiffnesses. The elasticity tensor is a generalization that describes all possible stretch and shear parameters. A single spring may intentionally be designed to have variable (non-linear) stiffness throughout its displacement. Compliance The inverse of stiffness is flexibility or compliance, typically measured in units of metres per newton. In rheology, it may be defined as the ratio of strain to stress, and so takes the units of reciprocal stress, for example, 1/Pa. Rotational stiffness A body may also have a rotational stiffness, k, given by k = M/θ, where M is the applied moment and θ is the rotation angle. In the SI system, rotational stiffness is typically measured in newton-metres per radian. In the SAE system, rotational stiffness is typically measured in inch-pounds per degree. Further measures of stiffness are derived on a similar basis, including: shear stiffness, the ratio of applied shear force to shear deformation; and torsional stiffness, the ratio of applied torsion moment to the angle of twist. Relationship to elasticity The elastic modulus of a material is not the same as the stiffness of a component made from that material. 
Elastic modulus is a property of the constituent material; stiffness is a property of a structure or component of a structure, and hence it is dependent upon various physical dimensions that describe that component. That is, the modulus is an intensive property of the material; stiffness, on the other hand, is an extensive property of the solid body that is dependent on the material, its shape, and its boundary conditions. For example, for an element in tension or compression, the axial stiffness is k = AE/L, where E is the (tensile) elastic modulus (or Young's modulus), A is the cross-sectional area, and L is the length of the element. Similarly, the torsional stiffness of a straight section is k = GJ/L, where G is the rigidity modulus of the material and J is the torsion constant for the section. Note that the torsional stiffness has dimensions [force] * [length] / [angle], so that its SI units are N*m/rad. For the special case of unconstrained uniaxial tension or compression, Young's modulus can be thought of as a measure of the stiffness of a structure. Applications The stiffness of a structure is of principal importance in many engineering applications, so the modulus of elasticity is often one of the primary properties considered when selecting a material. A high modulus of elasticity is sought when deflection is undesirable, while a low modulus of elasticity is required when flexibility is needed. In biology, the stiffness of the extracellular matrix is important for guiding the migration of cells in a phenomenon called durotaxis. Another application of stiffness is found in skin biology. The skin maintains its structure due to its intrinsic tension, contributed to by collagen, an extracellular protein that accounts for approximately 75% of its dry weight. The pliability of skin is a parameter of interest that represents its firmness and extensibility, encompassing characteristics such as elasticity, stiffness, and adherence. These factors are of functional significance to patients, particularly those with traumatic injuries to the skin, in whom pliability can be reduced by the replacement of healthy skin tissue with pathological scar tissue. This can be evaluated subjectively, or objectively using a device such as the Cutometer. The Cutometer applies a vacuum to the skin and measures the extent to which it can be vertically distended. These measurements are able to distinguish between healthy skin, normal scarring, and pathological scarring, and the method has been applied within clinical and industrial settings to monitor both pathophysiological sequelae, and the effects of treatments on skin.
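To make the modulus-versus-stiffness distinction above concrete, the sketch below computes the axial stiffness k = AE/L and the torsional stiffness k = GJ/L for a solid circular steel rod. The material constants (E ≈ 200 GPa, G ≈ 80 GPa for steel) and the rod dimensions are illustrative assumptions, not values from the article.

import math

# Illustrative solid circular steel rod (assumed values, not from the article)
E = 200e9      # Young's modulus of steel, Pa
G = 80e9       # shear (rigidity) modulus of steel, Pa
d = 0.02       # diameter, m
L = 1.0        # length, m

A = math.pi * d**2 / 4    # cross-sectional area, m^2
J = math.pi * d**4 / 32   # torsion constant for a solid circular section, m^4

k_axial = A * E / L       # N/m : force per unit elongation
k_torsional = G * J / L   # N*m/rad : moment per unit angle of twist

print(f"axial stiffness     k = {k_axial:.3e} N/m")
print(f"torsional stiffness k = {k_torsional:.3e} N*m/rad")

# Same material (same E), but a rod twice as long is only half as stiff:
print(f"axial stiffness at 2L = {A * E / (2 * L):.3e} N/m")

Doubling the length halves both stiffnesses even though the modulus is unchanged, which is exactly the intensive-versus-extensive distinction drawn above.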
Physical sciences
Solid mechanics
null
475153
https://en.wikipedia.org/wiki/Patch%20%28computing%29
Patch (computing)
A patch is data that is intended to be used to modify an existing software resource such as a program or a file, often to fix bugs and security vulnerabilities. A patch may be created to improve functionality, usability, or performance. A patch is typically provided by a vendor for updating the software that they provide. A patch may be created manually, but commonly it is created via a tool that compares two versions of the resource and generates data that can be used to transform one to the other. Typically, a patch needs to be applied to the specific version of the resource it is intended to modify, although there are exceptions. Some patching tools can detect the version of the existing resource and apply the appropriate patch, even if it supports multiple versions. As more patches are released, their cumulative size can grow significantly, sometimes exceeding the size of the resource itself. To manage this, the number of supported versions may be limited, or a complete copy of the resource might be provided instead. Patching allows for modifying a compiled (machine language) program when the source code is unavailable. This demands a thorough understanding of the inner workings of the compiled code, which is challenging without access to the source code. Patching also allows for making changes to a program without rebuilding it from source. For small changes, it can be more economical to distribute a patch than to distribute the complete resource. Although often intended to fix problems, a poorly designed patch can introduce new problems (see software regressions). In some cases updates may knowingly break the functionality or disable a device, for instance, by removing components for which the update provider is no longer licensed. Patch management is a part of lifecycle management, and is the process of using a strategy and plan of what patches should be applied to which systems at a specified time. Typically, a patch is applied via programmed control to computer storage so that it is permanent. In some cases a patch is applied by a programmer via a tool such as a debugger to computer memory in which case the change is lost when the resource is reloaded from storage. Types Binary patches Patches for proprietary software are typically distributed as executable files instead of source code. When executed these files load a program into memory which manages the installation of the patch code into the target program(s) on disk. Patches for other software are typically distributed as data files containing the patch code. These are read by a patch utility program which performs the installation. This utility modifies the target program's executable file—the program's machine code—typically by overwriting its bytes with bytes representing the new patch code. If the new code will fit in the space (number of bytes) occupied by the old code, it may be put in place by overwriting directly over the old code. This is called an inline patch. If the new code is bigger than the old code, the patch utility will append load record(s) containing the new code to the object file of the target program being patched. When the patched program is run, execution is directed to the new code with branch instructions (jumps or calls) patched over the place in the old code where the new code is needed. 
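As a toy illustration of the in-place (inline) binary patching just described, the following sketch overwrites a fixed number of bytes at a known offset in a file. The file name, offset and byte values are hypothetical; real patch utilities also verify that the target is the expected version (typically via checksums) before writing, which this sketch approximates by checking the original bytes.

# Toy inline binary patch: overwrite bytes at a known offset, but only if the
# bytes currently there match what the patch expects. All values are hypothetical.

def apply_inline_patch(path, offset, expected, replacement):
    if len(expected) != len(replacement):
        raise ValueError("an inline patch must not change the length of the patched region")
    with open(path, "r+b") as f:
        f.seek(offset)
        current = f.read(len(expected))
        if current != expected:
            raise ValueError("target does not match the version this patch was built for")
        f.seek(offset)
        f.write(replacement)

# Hypothetical usage: replace a two-byte conditional jump with two no-op instructions.
# apply_inline_patch("program.bin", 0x1A2B, b"\x74\x05", b"\x90\x90")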
On early 8-bit microcomputers, for example the Radio Shack TRS-80, the operating system includes a PATCH/CMD utility which accepts patch data from a text file and applies the fixes to the target program's executable binary file(s). The patch code must have place(s) in memory to be executed at runtime. Inline patches are no difficulty, but when additional memory space is needed the programmer must improvise. Naturally if the patch programmer is the one who first created the code to be patched, this is easier. Savvy programmers plan in advance for this need by reserving memory for later expansion, left unused when producing their final iteration. Other programmers not involved with the original implementation, seeking to incorporate changes at a later time, must find or make space for any additional bytes needed. The most fortunate possible circumstance for this is when the routine to be patched is a distinct module. In this case the patch programmer need merely adjust the pointers or length indicators that signal to other system components the space occupied by the module; he is then free to populate this memory space with his expanded patch code. If the routine to be patched does not exist as a distinct memory module, the programmer must find ways to shrink the routine to make enough room for the expanded patch code. Typical tactics include shortening code by finding more efficient sequences of instructions (or by redesigning with more efficient algorithms), compacting message strings and other data areas, externalizing program functions to mass storage (such as disk overlays), or removal of program features deemed less important than the changes to be installed with the patch. Small in-memory machine code patches can be manually applied with the system debug utility, such as CP/M's DDT or MS-DOS's DEBUG debuggers. Programmers working in interpreted BASIC often used the POKE command to alter the functionality of a system service routine or the interpreter itself. Source code patches Patches can also circulate in the form of source code modifications. In this case, the patches usually consist of textual differences between two source code files, called "diffs". These types of patches commonly come out of open-source software projects. In these cases, developers expect users to compile the new or changed files themselves. Large patches Because the word "patch" carries the connotation of a small fix, large fixes may use different nomenclature. Bulky patches or patches that significantly change a program may circulate as "service packs" or as "software updates". Microsoft Windows NT and its successors (including Windows 2000, Windows XP, Windows Vista and Windows 7) use the "service pack" terminology. Historically, IBM used the terms "FixPaks" and "Corrective Service Diskette" to refer to these updates. History Historically, software suppliers distributed patches on paper tape or on punched cards, expecting the recipient to cut out the indicated part of the original tape (or deck), and patch in (hence the name) the replacement segment. Later patch distributions used magnetic tape. Then, after the invention of removable disk drives, patches came from the software developer via a disk or, later, CD-ROM via mail. With widely available Internet access, downloading patches from the developer's web site or through automated software updates became often available to the end-users. 
Starting with Apple's Mac OS 9 and Microsoft's Windows ME, PC operating systems gained the ability to get automatic software updates via the Internet. Computer programs can often coordinate patches to update a target program. Automation simplifies the end-user's task they need only to execute an update program, whereupon that program makes sure that updating the target takes place completely and correctly. Service packs for Microsoft Windows NT and its successors and for many commercial software products adopt such automated strategies. Some programs can update themselves via the Internet with very little or no intervention on the part of users. The maintenance of server software and of operating systems often takes place in this manner. In situations where system administrators control a number of computers, this sort of automation helps to maintain consistency. The application of security patches commonly occurs in this manner. With the advent of larger storage media and higher Internet bandwidth, it became common to replace entire files (or even all of a program's files) rather than modifying existing files, especially for smaller programs. Application The size of patches may vary from a few bytes to hundreds of megabytes; thus, more significant changes imply a larger size, though this also depends on whether the patch includes entire files or only the changed portion(s) of files. In particular, patches can become quite large when the changes add or replace non-program data, such as graphics and sounds files. Such situations commonly occur in the patching of computer games. Compared with the initial installation of software, patches usually do not take long to apply. In the case of operating systems and computer server software, patches have the particularly important role of fixing security holes. Some critical patches involve issues with drivers. Patches may require prior application of other patches, or may require prior or concurrent updates of several independent software components. To facilitate updates, operating systems often provide automatic or semi-automatic updating facilities. Completely automatic updates have not succeeded in gaining widespread popularity in corporate computing environments, partly because of the aforementioned glitches, but also because administrators fear that software companies may gain unlimited control over their computers. Package management systems can offer various degrees of patch automation. Usage of completely automatic updates has become far more widespread in the consumer market, due largely to the fact that Microsoft Windows added support for them, and Service Pack 2 of Windows XP (available in 2004) enabled them by default. Cautious users, particularly system administrators, tend to put off applying patches until they can verify the stability of the fixes. Microsoft (W)SUS supports this. In the cases of large patches or of significant changes, distributors often limit availability of patches to qualified developers as a beta test. Applying patches to firmware poses special challenges, as it often involves the provisioning of totally new firmware images, rather than applying only the differences from the previous version. The patch usually consists of a firmware image in form of binary data, together with a supplier-provided special program that replaces the previous version with the new version; a motherboard BIOS update is an example of a common firmware patch. 
Any unexpected error or interruption during the update, such as a power outage, may render the motherboard unusable. It is possible for motherboard manufacturers to put safeguards in place to prevent serious damage; for example, the update procedure could make and keep a backup of the firmware to use in case it determines that the primary copy is corrupt (usually through the use of a checksum, such as a CRC). Video games Video games receive patches to fix compatibility problems after their initial release just like any other software, but they can also be applied to change game rules or algorithms. These patches may be prompted by the discovery of exploits in the multiplayer game experience that can be used to gain unfair advantages over other players. Extra features and gameplay tweaks can often be added. These kinds of patches are common in first-person shooters with multiplayer capability, and in MMORPGs, which are typically very complex with large amounts of content, almost always rely heavily on patches following the initial release, where patches sometimes add new content and abilities available to players. Because the balance and fairness for all players of an MMORPG can be severely corrupted within a short amount of time by an exploit, servers of an MMORPG are sometimes taken down with short notice in order to apply a critical patch with a fix. Companies sometimes release games knowing that they have bugs. Computer Gaming Worlds Scorpia in 1994 denounced "companies—too numerous to mention—who release shoddy product knowing they can get by with patches and upgrades, and who make pay-testers of their customers". In software development Patches sometimes become mandatory to fix problems with libraries or with portions of source code for programs in frequent use or in maintenance. This commonly occurs on very large-scale software projects, but rarely in small-scale development. In open-source projects, the authors commonly receive patches or many people publish patches that fix particular problems or add certain functionality, like support for local languages outside the project's locale. In an example from the early development of the Linux kernel (noted for publishing its complete source code), Linus Torvalds, the original author, received hundreds of thousands of patches from many programmers to apply against his original version. The Apache HTTP Server originally evolved as a number of patches that Brian Behlendorf collated to improve NCSA HTTPd, hence a name that implies that it is a collection of patches ("a patchy server"). The FAQ on the project's official site states that the name 'Apache' was chosen from respect for the Native American Indian tribe of Apache. However, the 'a patchy server' explanation was initially given on the project's website. Variants Hotfix A hotfix or Quick Fix Engineering update (QFE update) is a single, cumulative package that includes information (often in the form of one or more files) that is used to address a problem in a software product (i.e., a software bug). Typically, hotfixes are made to address a specific customer situation. Microsoft once used this term but has stopped in favor of new terminology: General Distribution Release (GDR) and Limited Distribution Release (LDR). Blizzard Entertainment, however, defines a hotfix as "a change made to the game deemed critical enough that it cannot be held off until a regular content patch". 
Point release A point release is a minor release of a software project, especially one intended to fix bugs or do small cleanups rather than add significant features. Often, there are too many bugs to be fixed in a single major or minor release, creating a need for a point release. Program temporary fix Program temporary fix or Product temporary fix (PTF), depending on date, is the standard IBM terminology for a single bug fix, or group of fixes, distributed in a form ready to install for customers. A PTF was sometimes referred to as a “ZAP”. Customers sometime explain the acronym in a tongue-in-cheek manner as permanent temporary fix or more practically probably this fixes, because they have the option to make the PTF a permanent part of the operating system if the patch fixes the problem. Security patches A security patch is a change applied to an asset to correct the weakness described by a vulnerability. This corrective action will prevent successful exploitation and remove or mitigate a threat's capability to exploit a specific vulnerability in an asset. Patch management is a part of vulnerability management the cyclical practice of identifying, classifying, remediating, and mitigating vulnerabilities. Security patches are the primary method of fixing security vulnerabilities in software. Currently Microsoft releases its security patches once a month ("patch Tuesday"), and other operating systems and software projects have security teams dedicated to releasing the most reliable software patches as soon after a vulnerability announcement as possible. Security patches are closely tied to responsible disclosure. These security patches are critical to ensure that business process does not get affected. In 2017, companies were struck by a ransomware called WannaCry which encrypts files in certain versions of Microsoft Windows and demands a ransom via BitCoin. In response to this, Microsoft released a patch which stops the ransomware from running. Service pack A service pack or SP or a feature pack (FP) comprises a collection of updates, fixes, or enhancements to a software program delivered in the form of a single installable package. Companies often release a service pack when the number of individual patches to a given program reaches a certain (arbitrary) limit, or the software release has shown to be stabilized with a limited number of remaining issues based on users' feedback and bug tracking such as Bugzilla. In large software applications such as office suites, operating systems, database software, or network management, it is not uncommon to have a service pack issued within the first year or two of a product's release. Installing a service pack is easier and less error-prone than installing many individual patches, even more so when updating multiple computers over a network, where service packs are common. Unofficial patches An unofficial patch is a patch for a program written by a third party instead of the original developer. Similar to an ordinary patch, it alleviates bugs or shortcomings. Examples are security fixes by security specialists when an official patch by the software producers itself takes too long. Other examples are unofficial patches created by the game community of a video game which became unsupported. Monkey patches Monkey patching means extending or modifying a program locally (affecting only the running instance of the program). 
Hot patching Hot patching, also known as live patching or dynamic software updating, is the application of patches without shutting down and restarting the system or the program concerned. This addresses problems related to unavailability of service provided by the system or the program. This method can be used to update the Linux kernel without stopping the system. A patch that can be applied in this way is called a hot patch or a live patch. This is becoming a common practice in the mobile app space. Companies like Rollout.io use method swizzling to deliver hot patches to the iOS ecosystem. Another method for hot-patching iOS apps is JSPatch. Cloud providers often use hot patching to avoid downtime for customers when updating underlying infrastructure. Slipstreaming In computing, slipstreaming is the act of integrating patches (including service packs) into the installation files of their original app, so that the result allows a direct installation of the updated app. The nature of slipstreaming means that it involves an initial outlay of time and work, but can save a lot of time (and, by extension, money) in the long term. This is especially significant for administrators who are tasked with managing a large number of computers, where typical practice for installing an operating system on each computer would be to use the original media and then update each computer after the installation was complete. This would take a lot more time than starting with a more up-to-date (slipstreamed) source, and needing to download and install the few updates not included in the slipstreamed source. However, not all patches can be applied in this fashion and one disadvantage is that if it is discovered that a certain patch is responsible for later problems, said patch cannot be removed without using an original, non-slipstreamed installation source. Software update systems Software update systems allow for updates to be managed by users and software developers. In the 2017 Petya cyberpandemic, the financial software "MeDoc"'s update system is said to have been compromised to spread malware via its updates. On the Tor Blog, cybersecurity expert Mike Perry states that deterministic, distributed builds are likely the only way to defend against malware that attacks the software development and build processes to infect millions of machines in a single, officially signed, instantaneous update. Update managers also allow for security updates to be applied quickly and widely. Update managers for Linux such as Synaptic allow users to update all software installed on their machine. Applications like Synaptic use cryptographic checksums to verify source/local files before they are applied to ensure fidelity against malware. Malicious updates Hackers may compromise a legitimate software update channel and inject malicious code.
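A minimal illustration of the cryptographic-checksum verification mentioned above for update managers such as Synaptic: before an update or patch file is applied, its hash is compared against a value published by the distributor. The file name and expected digest below are placeholders, and SHA-256 is used simply as a common choice; this is not a claim about which algorithm any particular package manager uses.

import hashlib

def verify_update(path, expected_sha256):
    """Return True if the file's SHA-256 digest matches the published value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()

# Hypothetical usage: refuse to install an update whose digest does not match.
# if not verify_update("security-update.deb", "3a7bd3e2360a3d29eea436fcfb7e44c7..."):
#     raise SystemExit("checksum mismatch - update rejected")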
Technology
Software development: General
null
475161
https://en.wikipedia.org/wiki/Stoneware
Stoneware
Stoneware is a broad term for pottery fired at a relatively high temperature. A modern definition is a vitreous or semi-vitreous ceramic made primarily from stoneware clay or non-refractory fire clay. End applications include tableware, decorative ware such as vases. Stoneware is fired at between about to . Historically, reaching such temperatures was a long-lasting challenge, and temperatures somewhat below these were used for a long time. It was developed independently in different locations around the world, after earthenware and before porcelain. Stoneware is not recognised as a category in traditional East Asian terminology, and much Asian stoneware, such as Chinese Ding ware for example, is counted as porcelain by local definitions. Terms such as "porcellaneous" or "near-porcelain" may be used in such cases. Traditional East Asian thinking classifies pottery only into "low-fired" and "high-fired" wares, equating to earthenware and porcelain, without the intermediate European class of stoneware, and the many local types of stoneware were mostly classed as porcelain, though often not white and translucent. One definition of stoneware is from the Combined Nomenclature of the European Communities, a European industry standard. It states: Stoneware, which, though dense, impermeable and hard enough to resist scratching by a steel point, differs from porcelain because it is more opaque, and normally only partially vitrified. It may be vitreous or semi-vitreous. It is usually coloured grey or brownish because of impurities in the clay used for its manufacture, and is normally glazed.Though "normally glazed" is not true for many historical and modern examples. Types Five basic categories of stoneware have been suggested: Traditional stoneware: a dense and inexpensive body. It is opaque, can be of any colour and breaks with a conchoidal or stony fracture. Traditionally made of fine-grained secondary, plastic clays which can be used to shape very large pieces. Fine stoneware: made from more carefully selected, prepared, and blended raw materials. It is used to produce tableware and art ware. Chemical stoneware: used in the chemical industry, and when resistance to chemical attack is needed. Purer raw materials are used than for other stoneware bodies. Has largely been replaced by chemical porcelain. Thermal shock resistant stoneware: has additions of certain materials to enhance the thermal shock resistance of the fired body. Electrical stoneware: historically used for electrical insulators, although it has been replaced by electrical porcelain. Another type, Flintless Stoneware, has also been identified. It is defined in the UK Pottery (Health and Welfare) Special Regulations of 1950 as: "Stoneware, the body of which consists of natural clay to which no flint or quartz or other form of free silica has been added." Production Materials The compositions of stoneware bodies vary considerably, and include both prepared and 'as dug'; the former being by far the dominant type for studio and industry. Nevertheless, the vast majority will conform to: plastic fire clays, 0 to 100%; ball clays, 0 to 15%; quartz, 0%; feldspar and chamotte, 0 to 15%. The key raw material is either naturally occurring stoneware clay or non-refractory fire clay. The mineral kaolinite is present but disordered, and although mica and quartz are present their particle size is very small. Stoneware clay is often accompanied by impurities such as iron or carbon, giving it a "dirty" look, and its plasticity can vary widely. 
Non-refractory fire clay may be another key raw material. Fire clays are generally considered refractory, because they withstand very high temperatures before melting or crumbling. Refractory fire clays have a high concentration of kaolinite, with lesser amounts of mica and quartz. Non-refractory fire clays, however, have larger amounts of mica and feldspar. Shaping Firing Stoneware can be once-fired or twice-fired. Maximum firing temperatures can vary significantly, from 1100 °C to 1300 °C depending on the flux content. Most commonly an oxidising kiln atmosphere is used. Typically, temperatures will be between 1180 °C and 1280 °C. To produce a better quality fired glaze finish, twice-firing can be used. This can be especially important for formulations composed of highly carbonaceous clays. For these, biscuit firing is around 900 °C, and glost firing (the firing used to form the glaze over the ware) 1180–1280 °C. After firing the Water absorption should be less than 1 per cent. History Asia The Indus Valley civilization produced stoneware, with an industry of a nearly industrial-scale mass-production of stoneware bangles throughout the civilization's Mature Period (2600–1900 BC). Early examples of stoneware have been found in China, naturally as an extension of higher temperatures achieved from early development of reduction firing, with large quantities produced from the Han dynasty onwards. In both medieval China and Japan, stoneware was very common, and several types became admired for their simple forms and subtle glaze effects. Japan did not make porcelain until about 1600, and north China (in contrast to the south) lacks the appropriate kaolin-rich clays for porcelain on a strict Western definition. Jian ware in the Song dynasty was mostly used for tea wares, and appealed to Buddhist monks. Most Longquan celadon, a very important ware in medieval China, was stoneware. Ding ware comes very close to porcelain, and even modern Western sources are notably divided as to how to describe it, although it is not translucent and the body often grey rather than white. In China, fine pottery largely consisted of porcelain by the Ming dynasty, and stoneware was mostly restricted to utilitarian wares and those for the poor. Exceptions to this include the unglazed Yixing clay teapot, made from a clay believed to suit tea especially well, and Shiwan ware, used for popular figures and architectural sculpture. In Japan many traditional types of stoneware, for example Oribe ware and Shino ware, were preferred for chawan cups for the Japanese tea ceremony, and have been valued up to the present for this and other uses. From a combination of philosophical and nationalist reasons, the primitive or folk art aesthetic qualities of many Japanese village traditions, originally mostly made by farmers in slack periods in the agricultural calendar, have retained considerable prestige. Influential tea masters praised the rough, spontaneous, wabi-sabi, appearance of Japanese rural wares, mostly stoneware, over the perfection of Chinese-inspired porcelain made by highly skilled specialists. Stoneware was also produced in Korean pottery, from at least the 5th century, and much of the finest Korean pottery might be so classified; like elsewhere the border with porcelain is imprecise. Celadons and much underglaze blue and white pottery can be called stoneware. Historical stoneware production sites in Thailand are Si Satchanalai and Sukhothai. The firing technology seems to have come from China. 
Europe In contrast to Asia, stoneware could be produced in Europe only from the late Middle Ages, as European kilns were less efficient, and the right sorts of clay less common. Some ancient Roman pottery had approached being stoneware, but not as a consistent type of ware. Medieval stoneware remained a much-exported speciality of Germany, especially along the Rhine, until the Renaissance or later, typically used for large jugs, jars and beer-mugs. "Proto-stoneware", such as Pingsdorf ware, and then "near-stoneware" was developed there by 1250, and fully vitrified wares were being produced on a large scale by 1325. The salt-glazed style that became typical was not perfected until the late 15th century. England became the most inventive and important European maker of fancy stoneware in the 18th and 19th centuries, but there is no clear evidence for native English stoneware production before the mid-17th century. German imports were common from the early 16th century at least, and known as "Cologne ware", after the centre of shipping it rather than of making it. Some German potters were probably making stoneware in London in the 1640s, and a father and son Wooltus (or Woolters) were doing so in Southampton in the 1660s. In the second half of the 18th century Wedgwood developed a number of ceramic bodies. One of these, Jasperware, is sometimes classified as stoneware although its raw materials differ considerably from all other stonewares; it remains in production. Other manufacturers produced their own types, including various ironstone china types, which some classified as earthenware. Significant amounts of modern, commercial tableware and kitchenware use stoneware, and it is common in craft and studio pottery. The popular Japanese-inspired raku ware is normally stoneware. Historical examples Bartmann jug: A decorated stoneware form that was manufactured in Europe throughout the 16th and 17th centuries, especially in the Cologne region of Germany. Redware: Unglazed stoneware with a terracotta red, initially imitating Chinese Yixing ware teapots. Mostly c. 1680–1750. The Dutch-German Elers brothers brought it to Staffordshire in the 1690s. Böttger Ware: A dark red stoneware developed by Johann Friedrich Böttger by 1710, a superior form of redware. It is a very significant stage in the development of porcelain in Europe. Cane Ware: An eighteenth-century English stoneware of a light brownish-yellow colour (like bamboo), developed by Josiah Wedgwood in the 1770s. During the 19th and the earlier part of the 20th century, cane ware continued to be made in South Derbyshire and the Burton-on-Trent area as kitchen-ware and sanitary-ware. It had a fine-textured cane-coloured body with a white engobe on the inner surface often referred to as cane and white. Crouch Ware, now often just called Staffordshire: salt-glazed stoneware. Light-coloured, developed in 1696 in Burslem. It is one of the earliest types of stoneware made in England. The origin of the name has been disputed: on one theory, the ingredients included a clay from Crich, Derbyshire, the word "crouch" being a corruption. On another, it comes from Creussen near Bayreuth in Bavaria, whose type of tall cruche jugs were called "crouch" when imported to England. Jasperware: Another Wedgwood development, using tinted clay bodies in contrasting colours, unglazed. Rosso Antico: A red, unglazed stoneware made in England during the 18th century by Josiah Wedgwood. 
It was a refinement of the redware previously made in North Staffordshire by the Elers brothers.
Coade stone: A type of artificial stone moulded into sculptures and architectural details, imitating marble. Developed in England around 1770.
Ironstone china: Patented in 1813, often classed as earthenware, but very strong and vitreous, and popular for wares with heavy usage.
Stone china: Made in Staffordshire, mainly in the first half of the 19th century. Very hard, opaque, giving "a clear ring when lightly tapped". Typically brightly decorated by transfer printing, often with outlines finished in overglaze enamels by hand.
American stoneware: The predominant houseware of 19th-century North America, where the alternatives were less developed.
Technology
Materials
null
475199
https://en.wikipedia.org/wiki/Low-pressure%20area
Low-pressure area
In meteorology, a low-pressure area, low area or low is a region where the atmospheric pressure is lower than that of surrounding locations. It is the opposite of a high-pressure area. Low-pressure areas are commonly associated with inclement weather (cloudy, windy conditions with possible rain or storms), while high-pressure areas are associated with lighter winds and clear skies. Winds circle anti-clockwise around lows in the northern hemisphere and clockwise in the southern hemisphere, because the Coriolis force acts in opposite directions in the two hemispheres.

Low-pressure systems form under areas of wind divergence that occur in the upper levels of the atmosphere (aloft). The formation process of a low-pressure area is known as cyclogenesis. In meteorology, atmospheric divergence aloft occurs in two kinds of places. The first is on the east side of upper troughs, which form half of a Rossby wave within the Westerlies (a trough with large wavelength that extends through the troposphere). The second is ahead of embedded shortwave troughs, which are of smaller wavelength. Diverging winds aloft, ahead of these troughs, cause atmospheric lift within the troposphere below as air flows upwards away from the surface; this lowers surface pressures, as the upward motion partially counteracts the force of gravity packing the air close to the ground.

Thermal lows form due to localized heating caused by greater solar incidence over deserts and other land masses. Since localized areas of warm air are less dense than their surroundings, this warmer air rises, which lowers atmospheric pressure near that portion of the Earth's surface. Large-scale thermal lows over continents help drive monsoon circulations. Low-pressure areas can also form due to organized thunderstorm activity over warm water. When this occurs over the tropics in concert with the Intertropical Convergence Zone, it is known as a monsoon trough. Monsoon troughs reach their most northerly extent in August and their most southerly extent in February. When a convective low acquires a well-defined circulation in the tropics, it is termed a tropical cyclone. Tropical cyclones can form during any month of the year globally, and can occur in either the northern or southern hemisphere during December.

Atmospheric lift also generally produces cloud cover through adiabatic cooling once the rising air cools to its dew point, and the cloudy skies typical of low-pressure areas act to dampen diurnal temperature extremes. Since clouds reflect sunlight, incoming shortwave solar radiation decreases, which causes lower temperatures during the day. At night the absorptive effect of clouds on outgoing longwave radiation, such as heat energy from the surface, allows for warmer night-time minimums in all seasons. The stronger the area of low pressure, the stronger the winds experienced in its vicinity. Globally, low-pressure systems are most frequently located over the Tibetan Plateau and in the lee of the Rocky Mountains. In Europe (particularly in the British Isles and the Netherlands), recurring low-pressure weather systems are typically known as depressions.

Formation
Cyclogenesis is the development and strengthening of cyclonic circulations, or low-pressure areas, within the atmosphere. Cyclogenesis is the opposite of cyclolysis, and has an anticyclonic (high-pressure) equivalent dealing with the formation of high-pressure areas: anticyclogenesis.
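The link between divergence aloft and falling surface pressure described above can be made explicit with two standard relations; these are textbook hydrostatic and surface-pressure-tendency expressions in conventional notation, given here only as an illustrative sketch rather than formulas quoted by the article.

```latex
% Hydrostatic balance: surface pressure equals the weight of the overlying air column.
p_s = g \int_0^{\infty} \rho \, dz

% Integrating the continuity equation through the column (with vertical motion
% vanishing at the ground) gives the surface pressure tendency: net horizontal
% mass divergence aloft removes mass from the column and so lowers p_s, while
% net convergence raises it.
\frac{\partial p_s}{\partial t} = - g \int_0^{\infty} \nabla_h \cdot \left( \rho \, \mathbf{v}_h \right) dz
```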
Cyclogenesis is an umbrella term for several different processes, all of which result in the development of some sort of cyclone. Meteorologists use the term "cyclone" where circular pressure systems flow in the direction of the Earth's rotation, which normally coincides with areas of low pressure. The largest low-pressure systems are cold-core polar cyclones and extratropical cyclones, which lie on the synoptic scale. Warm-core cyclones such as tropical cyclones, mesocyclones, and polar lows lie within the smaller mesoscale. Subtropical cyclones are of intermediate size. Cyclogenesis can occur at various scales, from the microscale to the synoptic scale. Larger-scale troughs, also called Rossby waves, are synoptic in scale. Shortwave troughs embedded within the flow around larger-scale troughs are smaller in scale, or mesoscale in nature. Both Rossby waves and the shortwaves embedded within the flow around them migrate equatorward of the polar cyclones located in the Northern and Southern hemispheres. All share one important aspect: upward vertical motion within the troposphere. Such upward motion decreases the mass of local atmospheric columns of air, which lowers surface pressure.

Extratropical cyclones form as waves along weather fronts, due to a passing shortwave aloft or an upper-level jet streak, before occluding later in their life cycle as cold-core cyclones.

Polar lows are small-scale, short-lived atmospheric low-pressure systems that occur over ocean areas poleward of the main polar front in both the Northern and Southern Hemispheres. They are part of the larger class of mesoscale weather systems. Polar lows can be difficult to detect using conventional weather reports and are a hazard to high-latitude operations, such as shipping and offshore platforms. They are vigorous systems with near-surface winds of at least .

Tropical cyclones form due to latent heat driven by significant thunderstorm activity, and are warm-core with well-defined circulations. Certain criteria need to be met for their formation. In most situations, water temperatures of at least are needed down to a depth of at least ; waters of this temperature cause the overlying atmosphere to be unstable enough to sustain convection and thunderstorms. Another factor is rapid cooling with height, which allows the release of the heat of condensation that powers a tropical cyclone. High humidity is needed, especially in the lower-to-mid troposphere; when there is a great deal of moisture in the atmosphere, conditions are more favorable for disturbances to develop. Low amounts of wind shear are needed, as high shear is disruptive to the storm's circulation. Lastly, a formative tropical cyclone needs a pre-existing system of disturbed weather, although without a circulation no cyclonic development will take place.

Mesocyclones form as warm-core cyclones over land, and can lead to tornado formation. Waterspouts can also form from mesocyclones, but more often develop from environments of high instability and low vertical wind shear. In deserts, a lack of ground and plant moisture that would normally provide evaporative cooling can lead to intense, rapid solar heating of the lower layers of air. The hot air is less dense than the surrounding cooler air. This, combined with the rising of the hot air, results in a low-pressure area called a thermal low.
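As a rough illustration of how the formation criteria listed above combine, the sketch below encodes them as a simple checklist. The numerical thresholds are commonly cited rules of thumb supplied only for illustration (the article's own figures are not reproduced above), and every function and parameter name here is hypothetical.

```python
# Toy checklist for the tropical-cyclogenesis ingredients described above.
# Thresholds are illustrative, commonly cited values, not figures from this article.

def tropical_genesis_favorable(sst_c: float,
                               mid_level_rh_pct: float,
                               wind_shear_ms: float,
                               has_preexisting_disturbance: bool,
                               latitude_deg: float) -> bool:
    """Return True if every toy ingredient check passes."""
    warm_enough_ocean = sst_c >= 26.5        # assumed warm-water threshold (deg C)
    moist_enough = mid_level_rh_pct >= 50.0  # assumed lower-to-mid tropospheric humidity
    low_shear = wind_shear_ms <= 10.0        # assumed vertical wind shear limit (m/s)
    off_equator = abs(latitude_deg) >= 5.0   # Coriolis deflection needed for spin-up
    return all([warm_enough_ocean, moist_enough, low_shear,
                off_equator, has_preexisting_disturbance])

# Example: a warm, moist, low-shear environment at 12 degrees N with a seed disturbance.
print(tropical_genesis_favorable(28.0, 65.0, 6.0, True, 12.0))  # True
```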
Monsoon circulations are caused by thermal lows which form over large areas of land; their strength is driven by the fact that land heats more quickly than the nearby ocean. This generates a steady wind blowing toward the land, bringing moist near-surface air from over the ocean with it, and the lifting of this moist air produces rainfall. Similar rainfall is caused by the moist ocean air being lifted upwards by mountains, surface heating, convergence at the surface, divergence aloft, or storm-produced outflows at the surface. However the lifting occurs, the air cools due to expansion at lower pressure, which in turn produces condensation. In winter, the land cools off quickly, but the ocean keeps its heat longer due to its higher specific heat. The warmer air over the ocean rises, creating a low-pressure area and a breeze from land to ocean, while a large area of drying high pressure forms over the land, intensified by wintertime cooling. Monsoons resemble sea and land breezes, terms usually referring to the localized, diurnal (daily) cycle of circulation near coastlines, but they are much larger in scale, stronger, and seasonal.

Climatology

Mid-latitudes and subtropics
Large polar cyclones help determine the steering of systems moving through the mid-latitudes, south of the Arctic and north of the Antarctic. The Arctic oscillation provides an index used to gauge the magnitude of this effect in the Northern Hemisphere. Extratropical cyclones tend to form east of climatological trough positions aloft, near the east coasts of continents or the west sides of oceans. A study of extratropical cyclones in the Southern Hemisphere shows that between the 30th and 70th parallels there are an average of 37 cyclones in existence during any 6-hour period. A separate study in the Northern Hemisphere suggests that approximately 234 significant extratropical cyclones form each winter. In Europe, particularly in the United Kingdom and the Netherlands, recurring extratropical low-pressure weather systems are typically known as depressions. These tend to bring wet weather throughout the year. Thermal lows also occur during the summer over continental areas across the subtropics, such as the Sonoran Desert, the Mexican Plateau, the Sahara, South America, and Southeast Asia. The lows are most commonly located over the Tibetan Plateau and in the lee of the Rocky Mountains.

Monsoon trough
Elongated areas of low pressure form at the monsoon trough or Intertropical Convergence Zone as part of the Hadley cell circulation. Monsoon troughing in the western Pacific reaches its zenith in latitude during the late summer, when the wintertime surface ridge in the opposite hemisphere is strongest. It can reach as far as the 40th parallel in East Asia during August and the 20th parallel in Australia during February. Its poleward progression is accelerated by the onset of the summer monsoon, which is characterized by the development of lower air pressure over the warmest part of the various continents. The large-scale thermal lows over continents help create the pressure gradients which drive monsoon circulations. In the southern hemisphere, the monsoon trough associated with the Australian monsoon reaches its most southerly latitude in February, oriented along a west-northwest/east-southeast axis. Many of the world's rainforests are associated with these climatological low-pressure systems.
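The land-sea heating contrast invoked above follows from the very different areal heat capacities of land and ocean surfaces; the relation below is a standard back-of-the-envelope estimate, not a formula taken from the article.

```latex
% Temperature response of a surface layer of density \rho, specific heat c and
% effective depth d to a seasonal heat input Q per unit area:
\Delta T \approx \frac{Q}{\rho \, c \, d}

% The areal heat capacity \rho c d is far larger for an ocean mixed layer tens of
% metres deep than for the uppermost metre or so of soil, so for the same heat input
% the land warms, and later cools, much more than the sea surface, setting up the
% seasonal pressure contrast that drives the monsoon.
```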
Tropical cyclone
Tropical cyclones generally need to form poleward of the 5th parallels north and south, far enough from the equator for the Coriolis effect to deflect winds blowing towards the low-pressure center and create a circulation. Worldwide, tropical cyclone activity peaks in late summer, when the difference between temperatures aloft and sea surface temperatures is the greatest. However, each particular basin has its own seasonal patterns. On a worldwide scale, May is the least active month while September is the most active month. Nearly one-third of the world's tropical cyclones form within the western Pacific Ocean, making it the most active tropical cyclone basin on Earth.

Associated weather
Wind is initially accelerated from areas of high pressure to areas of low pressure. This is due to density (or temperature and moisture) differences between two air masses. Since stronger high-pressure systems contain cooler or drier air, the air mass is denser and flows towards areas that are warm or moist, which lie in the vicinity of low-pressure areas in advance of their associated cold fronts. The stronger the pressure difference, or pressure gradient, between a high-pressure system and a low-pressure system, the stronger the wind. Thus, stronger areas of low pressure are associated with stronger winds.

The Coriolis force caused by the Earth's rotation is what gives winds around low-pressure areas (such as in hurricanes, cyclones, and typhoons) their counter-clockwise (anticlockwise) circulation in the northern hemisphere (as the wind moves inward from higher pressure and is deflected to the right) and clockwise circulation in the southern hemisphere (as the wind moves inward and is deflected to the left). A tropical cyclone differs from a hurricane or typhoon only in geographic location, but a tropical cyclone is fundamentally different from a mid-latitude cyclone. A hurricane is a storm that occurs in the Atlantic Ocean and northeastern Pacific Ocean, a typhoon occurs in the northwestern Pacific Ocean, and a tropical cyclone occurs in the south Pacific or Indian Ocean. Friction with land slows down the wind flowing into low-pressure systems and causes the wind to flow more directly inward (more ageostrophically) toward their centers. Tornadoes are often too small, and of too short a duration, to be influenced by the Coriolis force, but may be influenced by it when they arise from a low-pressure system.
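The role the Coriolis effect plays in both the sense of rotation around lows and the near-equatorial limit on tropical cyclone formation can be illustrated with the Coriolis parameter; the short sketch below uses the standard formula, with example latitudes chosen purely for illustration.

```python
# Illustrative sketch: the Coriolis parameter f = 2 * Omega * sin(latitude) is positive
# in the northern hemisphere (anticlockwise flow around lows), negative in the southern
# hemisphere (clockwise flow), and near zero close to the equator, which is why tropical
# cyclones rarely form there.
import math

OMEGA = 7.2921e-5  # Earth's rotation rate (rad/s)

def coriolis_parameter(latitude_deg: float) -> float:
    """Coriolis parameter f in 1/s for a given latitude in degrees."""
    return 2.0 * OMEGA * math.sin(math.radians(latitude_deg))

for lat in (45.0, 5.0, 0.0, -5.0, -45.0):
    f = coriolis_parameter(lat)
    sense = "anticlockwise" if f > 0 else ("clockwise" if f < 0 else "undefined")
    print(f"lat {lat:+6.1f} deg: f = {f:+.2e} 1/s -> circulation around a low: {sense}")
```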
Physical sciences
Meteorology: General
null