Dataset columns: text (string, 26–3.6k characters), page_title (string, 1–71 characters), source (string, 1 class), token_count (int64, 10–512), id (string, 2–8 characters), url (string, 31–117 characters), topic (string, 4 classes), section (string, 4–49 characters), sublist (string, 9 classes).
In the clothes free movement, the term "smoothie" refers to an individual who has removed their body hair. In the past, such practices were frowned upon and in some cases forbidden: violators could face exclusion from the club. Enthusiasts grouped together and formed societies of their own that catered to that fashion, and smoothies came to make up a significant share of attendees at some nudist venues. The first Smoothie club (TSC) was founded by a British couple in 1991. A Dutch branch was founded in 1993 to give the idea of a hairless body greater publicity in the Netherlands. Being a Smoothie is described by its supporters as exceptionally comfortable and liberating. The Smoothy-Club is also a branch of the World of the Nudest Nudist (WNN) and organizes nudist ship cruises and regular nudist events. Other reasons Religion Head-shaving (tonsure) is a part of some Buddhist, Christian, Muslim, Jain and Hindu traditions. Buddhist and Christian monks generally undergo some form of tonsure during their induction into monastic life. Within Amish society, tradition ordains that men stop shaving part of their facial hair upon marriage and grow a Shenandoah-style beard, which carries the same significance as a wedding ring; moustaches are rejected because they are regarded as martial (traditionally associated with the military). In Judaism (see Shaving in Judaism), women are under no obligation to remove body hair or facial hair unless they wish to do so. However, in preparation for a woman's immersion in a ritual bath after concluding her days of purification (following her menstrual cycle), the custom of Jewish women is to shave off their pubic hair. During a mourning ritual, Jewish men are restricted by the Torah and Halakha to using scissors and are prohibited from using a razor blade to shave their beards or sideburns, and, by custom, neither men nor women may cut or shave their hair during the shiva period.
Hair removal
Wikipedia
410
155140
https://en.wikipedia.org/wiki/Hair%20removal
Biology and health sciences
Hygiene and grooming: General
Health
The Baháʼí Faith recommends against complete and long-term head-shaving outside of medical purposes. It is not currently practiced as a law, contingent upon a future decision by the Universal House of Justice, its highest governing body. Sikhs take an even stronger stance, opposing all forms of hair removal. One of the "Five Ks" of Sikhism is Kesh, meaning "hair". Baptized Sikhs are specifically instructed to have unshorn Kesh (the hair on their head and beards for men) as a major tenet of the Sikh faith. To Sikhs, the maintenance and management of long hair is a manifestation of one's piety. The majority of Muslims believe that adult removal of pubic and axillary hair, as a hygienic measure, is religiously beneficial. Under Muslim law (Sharia), it is recommended to keep the beard. A Muslim may trim or cut hair on the head. In the 9th century, the use of chemical depilatories for women was introduced by Ziryab in Al-Andalus. Medical The body hair of surgical patients is often removed beforehand on the skin surrounding surgical sites. Shaving was the primary form of hair removal until reports in 1983 showed that it may lead to an increased risk of infection. Clippers are now the recommended pre-surgical hair removal method. A 2021 systematic review brought together evidence on different techniques for hair removal before surgery. This involved 25 studies with a total of 8919 participants. Using a razor probably increases the chance of developing a surgical site infection compared to using clippers or hair removal cream or not removing hair before surgery. Removing hair on the day of surgery rather than the day before may also slightly reduce the number of infections. Some people with trichiasis find it medically necessary to remove ingrown eyelashes. The shaving of hair has sometimes been used in attempts to eradicate lice or to minimize body odor due to the accumulation of odor-causing micro-organisms in hair. In extreme situations, people may need to remove all body hair to prevent or combat infestation by lice, fleas and other parasites. Such a practice was used, for example, in Ancient Egypt. It has been suggested that an increasing percentage of humans removing their pubic hair has led to reduced crab louse populations in some parts of the world.
Hair removal
Wikipedia
479
155140
https://en.wikipedia.org/wiki/Hair%20removal
Biology and health sciences
Hygiene and grooming: General
Health
In the military A buzz cut or completely shaven haircut is common in military organizations where, among other reasons, it is considered to promote uniformity and neatness. Most militaries have occupational safety and health policies that govern the hair length and hairstyles permitted; in the field and living in close-quarter environments where bathing and sanitation can be difficult, soldiers can be susceptible to parasite infestation such as head lice, that are more easily propagated with long and unkempt hair. It also requires less maintenance in the field and in adverse weather it dries more quickly. Short hair is also less likely to cause severe burns from flash flame exposure (as a result of flash fires from explosions) which can easily set hair alight. Short hair can also minimize interference with safety equipment and fittings attached to the head, such as combat helmets and NBC suits. Militaries may also require men to maintain clean-shaven faces as facial hair can prevent an air-tight seal between the face and military gas masks or other respiratory equipment, such as a pilot's oxygen mask, or full-face diving mask. The process of testing whether a mask adequately fits the face is known as a "respirator fit test". In many militaries, head-shaving (known as the induction cut) is mandatory for men when beginning their recruit training. However, even after the initial recruitment phase, when head-shaving is no longer required, many soldiers maintain a completely or partially shaven hairstyle (such as a "high and tight", "flattop" or "buzz cut") for personal convenience or neatness. Head-shaving is not required and is often not permitted for women in military service, although they must have their hair cut or tied to regulation length. For example, the shortest hair a female soldier can have in the U.S. Army is 1/4 inch from the scalp.
Hair removal
Wikipedia
395
155140
https://en.wikipedia.org/wiki/Hair%20removal
Biology and health sciences
Hygiene and grooming: General
Health
In sport It is a common practice for professional footballers (soccer players) and road cyclists to remove leg hair for a number of reasons. In the case of a crash or tackle, the absence of the leg hair means the injuries (usually road rash or scarring) can be cleaned up more efficiently, and treatment is not impeded. Professional cyclists, as well as professional footballers, also receive regular leg massages, and the absence of hair reduces the friction and increases their comfort and effectiveness. Football players are also required to wear shin guards, and in case of a skin rash, the affected area can be treated more efficiently. It is also common for competitive swimmers to shave the hair off their legs, arms, and torsos (and even their whole bodies from the neckline down), to reduce drag and provide a heightened "feel" for the water by removing the exterior layer of skin along with the body hair. As punishment In some situations, people's hair is shaved as a punishment or a form of humiliation. After World War II, head-shaving was a common punishment in France, the Netherlands, and Norway for women who had collaborated with the Nazis during the occupation, and, in particular, for women who had sexual relations with an occupying soldier. In the United States, during the Vietnam War, conservative students would sometimes attack student radicals or "hippies" by shaving beards or cutting long hair. One notorious incident occurred at Stanford University, when unruly fraternity members grabbed Resistance founder (and student-body president) David Harris, cut off his long hair, and shaved his beard. During European witch-hunts of the Medieval and Early Modern periods, alleged witches were stripped naked and their entire body shaved to discover the so-called witches' marks. The discovery of witches' marks was then used as evidence in trials. Inmates have their heads shaved upon entry at certain prisons. Forms of hair removal and methods Depilation is the removal of the part of the hair above the surface of the skin. The most common form of depilation is shaving or trimming. Another option is the use of chemical depilatories, which work by breaking the disulfide bonds that link the protein chains that give hair its strength. Epilation is the removal of the entire hair, including the part below the skin. Methods include waxing, sugaring, epilators, lasers, threading, intense pulsed light or electrology. Hair is also sometimes removed by plucking with tweezers.
Hair removal
Wikipedia
509
155140
https://en.wikipedia.org/wiki/Hair%20removal
Biology and health sciences
Hygiene and grooming: General
Health
Depilation methods "Depilation", or temporary removal of hair to the level of the skin, lasts several hours to several days and can be achieved by Shaving or trimming (manually or with electric shavers which can be used on pubic hair or body hair) Depilatories (creams or "shaving powders" which chemically dissolve hair) Friction (rough surfaces used to buff away hair)
Hair removal
Wikipedia
87
155140
https://en.wikipedia.org/wiki/Hair%20removal
Biology and health sciences
Hygiene and grooming: General
Health
Epilation methods "Epilation", or removal of the entire hair from the root, lasts several days to several weeks and may be achieved by Tweezing (hairs are tweezed, or pulled out, with tweezers or with fingers) Waxing (a hot or cold layer is applied and then removed with porous strips) Sugaring (hair is removed by applying a sticky paste to the skin in the direction of hair growth and then peeling off with a porous strip) Threading (also called fatlah or khite in Arabic, or band in Persian) in which a twisted thread catches hairs as it is rolled across the skin Epilators (mechanical devices that rapidly grasp hairs and pull them out). Drugs that directly attack hair growth or inhibit the development of new hair cells. Hair growth will become less and less until it finally stops; normal depilation/epilation will be performed until that time. Hair growth will return to normal if use of product is discontinued. Products include the following: The pharmaceutical drug eflornithine hydrochloride (with the trade names Vaniqa and Follinil) inhibits the enzyme ornithine decarboxylase, preventing new hair cells from producing putrescine for stabilizing their DNA. Antiandrogens, including spironolactone, cyproterone acetate, flutamide, bicalutamide, and finasteride, can be used to reduce or eliminate unwanted body hair, such as in the treatment of hirsutism. Although effective for reducing body hair, antiandrogens have little effect on facial hair. However, slight effectiveness may be observed, such as some reduction in density/coverage and slower growth. Antiandrogens will also prevent further development of facial hair, despite only minimally affecting that which is already there. With the exception of 5α-reductase inhibitors such as finasteride and dutasteride, antiandrogens are contraindicated in men due to the risk of feminizing side effects such as gynecomastia as well as other adverse reactions (e.g., infertility), and are generally only used in women for cosmetic/hair-reduction purposes.
Hair removal
Wikipedia
462
155140
https://en.wikipedia.org/wiki/Hair%20removal
Biology and health sciences
Hygiene and grooming: General
Health
Permanent hair removal Electrology has been practiced in the United States since 1875. It is approved by the FDA. This technique permanently destroys germ cells responsible for hair growth by way of the insertion of a fine probe into the hair follicle and the application of a current adjusted to each hair type and treatment area. Electrology is the only permanent hair removal method recognized by the FDA. Permanent hair reduction Laser hair removal (lasers and laser diodes): Laser hair removal technology became widespread in the US and many other countries from the 1990s onwards. It has been approved in the United States by the FDA since 1997. With this technology, light is directed at the hair and is absorbed by dark pigment, resulting in the destruction of the hair follicle. This hair removal method sometimes becomes permanent after several sessions. The number of sessions needed depends upon the amount and type of hair being removed. Intense pulsed light (IPL) This technology is becoming more common for at-home devices, many of which are advertised as "laser hair removal" but actually use IPL technology. Diode epilation (high energy LEDs but not laser diodes) Clinical comparisons of effectiveness A 2006 review article in the journal "Lasers in Medical Science" compared intense pulsed light (IPL) and both alexandrite and diode lasers. The review found no statistical difference in effectiveness, but a higher incidence of side effects with diode laser-based treatment. Hair reduction after 6 months was reported as 68.75% for alexandrite lasers, 71.71% for diode lasers, and 66.96% for IPL. Side effects were reported as 9.5% for alexandrite lasers, 28.9% for diode lasers, and 15.3% for IPL. All side effects were found to be temporary and even pigmentation changes returned to normal within 6 months. A 2006 meta-analysis of randomized controlled trials found that alexandrite and diode lasers caused 50% hair reduction for up to 6 months, while there was no evidence of hair reduction from intense pulsed light, neodymium-YAG or ruby lasers. Experimental or banned methods Photodynamic therapy for hair removal (experimental) X-ray hair removal is an efficient, and usually permanent, hair removal method, but also causes severe health problems, occasional disfigurement, and even death. It is illegal in the United States.
Hair removal
Wikipedia
489
155140
https://en.wikipedia.org/wiki/Hair%20removal
Biology and health sciences
Hygiene and grooming: General
Health
Doubtful methods Many methods have been proposed or sold over the years without published clinical proof they can work as claimed. Electric tweezers Transdermal electrolysis Transcutaneous hair removal Microwave hair removal Foods and dietary supplements Non-prescription topical preparations (also called "hair inhibitors", "hair retardants", or "hair growth inhibitors") Advantages and disadvantages There are several disadvantages to many of these hair removal methods. Hair removal can cause issues: skin inflammation, minor burns, lesions, scarring, ingrown hairs, bumps, and infected hair follicles (folliculitis). Some removal methods are not permanent, can cause medical problems and permanent damage, or have very high costs. Some of these methods are still in the testing phase and have not been clinically proven. One issue that can be considered an advantage or a disadvantage depending upon an individual's viewpoint, is that removing hair has the effect of removing information about the individual's hair growth patterns due to genetic predisposition, illness, androgen levels (such as from pubertal hormonal imbalances or drug side effects), and/or gender status. In the hair follicle, stem cells reside in a discrete microenvironment called the bulge, located at the base of the part of the follicle that is established during morphogenesis but does not degenerate during the hair cycle. The bulge contains multipotent stem cells that can be recruited during wound healing to help repair the epidermis.
Hair removal
Wikipedia
315
155140
https://en.wikipedia.org/wiki/Hair%20removal
Biology and health sciences
Hygiene and grooming: General
Health
Shaving is the removal of hair, by using a razor or any other kind of bladed implement, to slice it down—to the level of the skin or otherwise. Shaving is most commonly practiced by men to remove their facial hair and by women to remove their leg and underarm hair. A man is called clean-shaven if he has had his beard entirely removed. Both men and women sometimes shave their chest hair, abdominal hair, leg hair, underarm hair, pubic hair, or any other body hair. Head shaving is much more common among men. It is often associated with religious practice, the armed forces, and some competitive sports such as swimming, bodybuilding, and extreme sports. Historically, head shaving has also been used to humiliate, punish, for purification or to show submission to an authority. In more recent history, head shaving has been used in fund-raising efforts, particularly for cancer research organizations and charitable organizations which serve cancer patients. The shaving of head hair is also sometimes done by cancer patients when their treatment may result in hair loss, and by people experiencing male pattern baldness. History Before the advent of razors, hair was sometimes removed using two shells to pull the hair out or using water and a sharp tool. Around 3000 BC when copper tools were developed, copper razors were invented. The idea of an aesthetic approach to personal hygiene may have begun at this time, though Egyptian priests may have practiced something similar to this earlier. Alexander the Great strongly promoted shaving the beard for Macedonian soldiers before battle because he feared the enemy would grab them. In some Native American tribes, at the time of contact with British colonists, it was customary for men and women to remove all body hair. Straight razors have been manufactured in Sheffield, England since the 18th century. In the United States, getting a straight razor shave in a barbershop and self-shaving with a straight razor were still common in the early 1900s. The popularisation of self-shaving changed this. According to an estimate by New York City barber Charles de Zemler, barbers' shaving revenue dropped from about 50 percent around the time of the Spanish–American War to 10 percent in 1939 due to the invention of the safety razor and electric razor.
Shaving
Wikipedia
461
155154
https://en.wikipedia.org/wiki/Shaving
Biology and health sciences
Health and fitness
null
Safety razors have existed since at least 1876, when the single-edge Star safety razor was patented by brothers Frederick and Otto Kampfe. The razor was essentially a small piece of a straight razor attached to a handle using a clamp mechanism. Before each shave the blade had to be attached to a special holder, stropped with a leather belt, and placed back into the razor. After a time, the blade needed to be honed by a cutler. In 1895, King Camp Gillette invented the double-edged safety razor, with cheap disposable blades sharpened from two sides. It took him until 1901 to build a working, patentable model, and commercial production began in 1903. The razor gained popularity during World War I when the U.S. military started issuing Gillette shaving kits to its servicemen: in 1918, the Gillette Safety Razor Company sold 3.5 million razors and 32 million blades. After the First World War, the company changed the pricing of its razor from a premium $5 to a more affordable $1, leading to another big surge in popularity. The Second World War led to a similar increase in users when Gillette was ordered to dedicate its entire razor production and most blade production to the U.S. military. During the war, 12.5 million razors and 1.5 billion blades were provided to servicemen. In 1970, Wilkinson Sword introduced the 'bonded blade' razor, which consisted of a single blade housed in a plastic cartridge. Gillette followed in 1971 with its Trac II cartridge razor that utilised two blades. Gillette built on this twin-blade design for a time, introducing new razors with added features such as a pivoting head, lubricating strip, and spring-mounted blades, until their 1998 launch of the triple-bladed Mach3 razor. Schick later launched a four-blade Quattro razor, and in 2006 Gillette launched the five-blade Fusion. Since then, razors with six and seven blades have been introduced. Wholly disposable razors gained popularity in the 1970s after Bic brought the first disposable razor to market in 1974. Other manufacturers, Gillette included, soon introduced their own disposable razors, and by 1980 disposables made up more than 27 percent of worldwide unit sales for razors.
Shaving
Wikipedia
491
155154
https://en.wikipedia.org/wiki/Shaving
Biology and health sciences
Health and fitness
null
Shaving methods Shaving can be done with a straight razor or safety razor (called 'manual shaving' or 'wet shaving') or an electric razor (called 'dry shaving') or beard trimmer. The removal of a full beard often requires the use of scissors or an electric (or beard) trimmer to reduce the mass of hair, simplifying the process. Wet shaving There are two types of manual razors: straight razor and safety razors. Safety razors are further subdivided into double-edged razors, single edge, injector razors, cartridge razors and disposable razors. Double-edge razors are named so because the blade that they use has two sharp edges on opposite sides of the blade. Current multi-bladed cartridge manufacturers attempt to differentiate themselves by having more or fewer blades than their competitors, each arguing that their product gives a greater shave quality at a more affordable price. Before wet shaving, the area to be shaved is usually doused in warm to hot water by showering or bathing or covered for several minutes with a hot wet towel to soften the skin and hair. Dry hair is difficult to cut, and the required cutting force is reduced significantly once the hair is hydrated. Fully hydrated hair requires about 65% less force to cut, and hair is almost fully hydrated after two minutes of contact with room temperature water. The time required for hydration is reduced when using higher temperature water. A lathering or lubricating agent such as cream, shaving soap, gel, foam or oil is normally applied after this. Lubricating and moisturizing the skin to be shaved helps prevent irritation and damage known as razor burn. Many razor cartridges include a lubricating strip, made of polyethylene glycol, to function instead of or in supplement to extrinsic agents. It also lifts and softens the hairs, causing them to swell. This enhances the cutting action and sometimes permits cutting the hairs slightly below the surface of the skin. Additionally, during shaving, the lather indicates areas that have not been addressed. When soap is used, it is generally applied with a shaving brush, which has long, soft bristles. It is worked up into a usable lather by the brush, either against the face, in a shaving mug, bowl, scuttle, or palm of the hand.
Shaving
Wikipedia
495
155154
https://en.wikipedia.org/wiki/Shaving
Biology and health sciences
Health and fitness
null
Since cuts are more likely when using safety razors and straight razors, wet shaving is generally done in more than one pass with the blade. The goal is to reduce the amount of hair with each pass, instead of trying to eliminate all of it in a single pass. This also reduces the risks of cuts, soreness, and ingrown hairs. Alum blocks and styptic pencils are used to close cuts resulting from the shave. Aftershave An aftershave lotion or balm is sometimes used after finishing shaving. It may contain an antiseptic agent such as isopropyl alcohol, both to prevent infection from cuts and to act as an astringent to reduce skin irritation, a perfume, and a moisturizer to soften the facial skin. Electric shaving The electric shaver (electric razor) consists of a set of oscillating or rotating blades, which are held behind a perforated metal screen which prevents them from coming into contact with the skin and behaves much like the second blade in a pair of scissors. When the razor is held against the skin, the whiskers poke through the holes in the screen and are sliced by the moving blades. In some designs the blades are a rotating cylinder. In others they are one or more rotating disks or a set of oscillating blades. Each design has an optimum motion over the skin for the best shave and manufacturers provide guidance on this. Generally, circular or cylindrical blades (rotary-type shaver) move in a circular motion and oscillating blades (foil-type shaver) move left and right. Hitachi has produced foil-type shavers with a rotary blade that operates similarly to the blade assembly of a reel-type lawn mower. The first electric razor was built by Jacob Schick in 1928.
Shaving
Wikipedia
378
155154
https://en.wikipedia.org/wiki/Shaving
Biology and health sciences
Health and fitness
null
The main disadvantages of electric shaving are that it may not cut the whiskers as closely as razor shaving does and it requires a source of electricity, usually a rechargeable battery. The advantages include fewer cuts to the skin, quicker shaving, and no need for water and lather sources (a wet shave). The initial cost of electric shaving is higher, due to the cost of the shaver itself, but the long-term cost can be significantly lower, since the cutting parts do not need replacement for several months and a lathering product is not required. Some people also find they do not experience ingrown hairs (pseudofolliculitis barbae, also called razor bumps), when using an electric shaver. In contrast to wet shaving, electric shave lotions are intended to stiffen the whiskers. Stiffening is achieved by dehydrating the follicles using solutions of alcohols and a degreaser such as isopropyl myristate. Lotions are also sold to reduce skin irritation, but electric shaving does not usually require the application of any lubrication. This is called Dry Shaving. Mechanical shavers powered by a spring motor have been manufactured, although in the late 20th century they became rare. Such shavers can operate for up to two minutes each time the spring is wound and do not require an electrical outlet or batteries. Such type of shaver, the "Monaco" brand, was used on American space flights in the 1960s and 1970s, during the Apollo missions. Trimmer A trimmer has two adjacent blades, each with teeth on its cutting edge. One blade oscillates alongside a stationary blade so that the teeth cut any hair that falls between them. The main advantage of a trimmer, unlike shaving tools, is that longer beards can be trimmed to a short length efficiently and effectively, including as preparation for shaving. Effects of shaving
Shaving
Wikipedia
402
155154
https://en.wikipedia.org/wiki/Shaving
Biology and health sciences
Health and fitness
null
Aberrations Shaving can have numerous side effects, including cuts, abrasions, and irritation. Many side effects can be minimized by using a fresh blade, applying plenty of lubrication, shaving in the direction of hair growth, and avoiding pressing the razor into the skin. A shaving brush can also help to lift the hair and spread the lubrication. The cosmetic market in some consumer economies offers many products to reduce these effects; they commonly dry the affected area, and some also help to lift out the trapped hair(s). Some people who shave choose to use only single-blade or wire-wrapped blades that shave farther away from the skin. Others have skin that cannot tolerate razor shaving at all; they use depilatory shaving powders to dissolve hair above the skin's surface, or grow a beard. Some anatomical parts, such as the scrotum, require extra care and more advanced equipment due to the uneven surface of the skin when the testicles shrivel during coldness, or its imbalance when the testicles hang low due to being warmer. Cuts Cuts from shaving can bleed for about fifteen minutes. Shaving cuts can be caused by blade movement perpendicular to the blade's cutting axis or by regular / orthogonal shaving over prominent bumps on the skin (which the blade incises). As such, the presence of acne can make shaving cuts more likely, and extra care must be exercised. The use of a fresh, sharp blade as well as proper cleaning and lubrication of skin can help prevent cuts. Some razor blade manufacturers include disposal containers or receptacles to avoid injuries to anyone handling the garbage. Razor burn
Shaving
Wikipedia
345
155154
https://en.wikipedia.org/wiki/Shaving
Biology and health sciences
Health and fitness
null
Razor burn is an irritation of the skin caused by using a blunt blade or not using proper technique. It appears as a mild rash 2–4 minutes after shaving (once hair starts to grow through sealed skin) and usually disappears after a few hours to a few days, depending on severity. In severe cases, razor burn can also be accompanied by razor bumps, where the area around shaved hairs gets raised red welts or infected pustules. A rash at the time of shaving is usually a sign of lack of lubrication. Razor burn is a common problem, especially among those who shave coarse hairs on areas with sensitive skin like the bikini line, pubic area, underarms, chest, and beard. The condition can be caused by shaving too closely, shaving with a blunt blade, dry shaving, applying too much pressure when shaving, shaving too quickly or roughly, or shaving against the grain. Ways to prevent razor burn include keeping the skin moist, using a shaving brush and lather, using a moisturizing shaving gel, shaving in the direction of the hair growth, resisting the urge to shave too closely, applying minimal pressure, avoiding scratching or irritation after shaving, avoiding irritating products on the shaved area (colognes, perfumes, etc.), and using an aftershave cream with aloe vera or other emollients. Putting a warm, wet cloth on the skin helps as well, by softening hairs. This can also be done by using pre-shave oil before the application of shaving cream. Essential oils such as coconut oil, tea-tree oil, peppermint oil, and lavender oil help to soothe skin after shaving; they have anti-inflammatory, antiseptic, and antibacterial properties. In some cases multi-bladed razors can cause skin irritation by shaving too close to the skin. Switching to a single- or double-bladed razor and not stretching the skin while shaving can mitigate this. One other technique involves exfoliating the skin before and after shaving, using various exfoliating products, including but not limited to brushes, mitts, and loofahs. This process removes dead skin cells, reducing the potential for ingrown hairs and allowing the razor to glide across the skin smoothly, decreasing the risk of the razor snagging or grabbing and causing razor burn. Razor bumps
Shaving
Wikipedia
492
155154
https://en.wikipedia.org/wiki/Shaving
Biology and health sciences
Health and fitness
null
Pseudofolliculitis barbae is the medical term for persistent inflammation caused by shaving. It is also known by the initials PFB or colloquial terms such as razor bumps. Myths Shaving does not cause terminal hair to grow back thicker, coarser or darker. This belief arose because hair that has never been cut has a naturally tapered end as it emerges from the skin's hair follicle, whereas, after cutting, there is no taper. The cut hair may thus appear to be thicker, and feel coarser as a result of the sharp edges on each cut strand. The fact that shorter hairs are "harder" (less flexible) than longer hairs also contributes to this effect. Hair can also appear darker after it grows back because hair that has never been cut is often lighter from sun exposure. In addition, as humans grow older, hair tends to grow coarser and in more places on the face and body. For example, teenagers may start shaving their face or legs at around 16, but as they age, hair will start to grow more abundantly and thicker, leading some to believe this was due to the shaving, when in reality it is just part of the maturation process. Shaving in religion Hinduism, Buddhism, Jainism and Christianity Hindu, Jain and Buddhist (usually only monks or nuns) temples have ceremonies of shaving the hair from the scalp of priests, nuns, and certain followers, as a symbol of their renunciation of worldly fashion and esteem. Amish men and some other plain peoples shave their beards until they are married, after which they allow the beard to grow but continue to shave their mustaches. Tonsure is the practice of some Christian churches. In Hinduism, in certain communities, a child's birth hair is shaved off as part of a set of religious rites (samskaras). Islam Sunni The leading classical Islāmic jurist and theologian Abdullāh b. Abī Zayd says in his 'Risalah': "and the Prophet ordered that the beard be left alone and allowed to grow abundantly and that it not be trimmed. Malik said: 'And there is no objection in trimming from its length when it becomes very long.' And what Malik said, more than one of the Companions and the Successors also said." Muslim jurists have unanimously agreed that shaving the entire head during pilgrimage is preferable and, to a lesser degree, so is cutting the hair. It is proven that Muhammad shaved his entire head, and he prayed for those who shaved their heads or cut their hair.
Shaving
Wikipedia
510
155154
https://en.wikipedia.org/wiki/Shaving
Biology and health sciences
Health and fitness
null
Islām also teaches followers to shave/pluck body hair such as pubic and armpit hair on a regular basis (40 days). Shī'a and Sunnī narrations from the Prophet state that: "God's Prophet (May God bless him) said: 'Anyone who believes in God and the Hereafter should not postpone shaving the pubic hair for more than forty days.'" Shia According to the Shia scholars, the length of beard should not exceed the width of a fist. Trimming of facial hair is allowed, however, shaving it is Haram (forbidden). Judaism Observant Jewish men are subject to restrictions on the shaving of their beards, as Leviticus 19:27 forbids the shaving of the corners of the head and prohibits the marring of the corners of the beard. The Hebrew word used in this verse refers specifically to shaving with a blade against the skin; rabbis at different times and places have interpreted it in many ways. Tools like scissors and electric razors, which cut the hair between two blades instead of between blade and skin, are permitted. Sikhism Observant Sikhs also follow the practice of keeping their hair uncut.
Shaving
Wikipedia
245
155154
https://en.wikipedia.org/wiki/Shaving
Biology and health sciences
Health and fitness
null
Shrews (family Soricidae) are small mole-like mammals classified in the order Eulipotyphla. True shrews are not to be confused with treeshrews, otter shrews, elephant shrews, West Indies shrews, or marsupial shrews, which belong to different families or orders. Although its external appearance is generally that of a long-nosed mouse, a shrew is not a rodent, as mice are. It is, in fact, a much closer relative of hedgehogs and moles; shrews are related to rodents only in that both belong to the Boreoeutheria magnorder. Shrews have sharp, spike-like teeth, whereas rodents have gnawing front incisor teeth. Shrews are distributed almost worldwide. Among the major tropical and temperate land masses, only New Guinea, Australia, New Zealand, and South America have no native shrews. However, as a result of the Great American Interchange, South America does have a relatively recently naturalised population, present only in the northern Andes. The shrew family has 385 known species, making it the fourth-most species-diverse mammal family. The only mammal families with more species are the muroid rodent families (Muridae and Cricetidae) and the bat family Vespertilionidae. Characteristics All shrews are tiny, most no larger than a mouse. The largest species is the Asian house shrew (Suncus murinus) of tropical Asia, which is about long and weighs around The Etruscan shrew (Suncus etruscus), at about and , is the smallest known living terrestrial mammal. In general, shrews are terrestrial creatures that forage for seeds, insects, nuts, worms, and a variety of other foods in leaf litter and dense vegetation e.g. grass, but some specialise in climbing trees, living underground, living under snow, or even hunting in water. They have small eyes and generally poor vision, but have excellent senses of hearing and smell. They are very active animals, with voracious appetites. Shrews have unusually high metabolic rates, above that expected in comparable small mammals. For this reason, they need to eat almost constantly like moles. Shrews in captivity can eat to 2 times their own body weight in food daily.
Shrew
Wikipedia
489
155176
https://en.wikipedia.org/wiki/Shrew
Biology and health sciences
Soricomorpha
null
They do not hibernate, but some species are capable of entering torpor. In winter, many species undergo morphological changes that drastically reduce their body weight. Shrews can lose between 30% and 50% of their body weight, shrinking the size of bones, skull, and internal organs. Whereas rodents have gnawing incisors that grow throughout life, the teeth of shrews wear down throughout life, a problem made more extreme because they lose their milk teeth before birth, so have only one set of teeth throughout their lifetimes. In some species, exposed areas of the teeth are dark red due to the presence of iron in the tooth enamel. The iron reinforces the surfaces that are exposed to the most stress, which helps prolong the life of the teeth. This adaptation is not found in species with lower metabolism, which do not have to eat as much and therefore do not wear down the enamel to the same degree. The only other mammals' teeth with pigmented enamel are the incisors of rodents. Apart from the first pair of incisors, which are long and sharp, and the chewing molars at the back of the mouth, the teeth of shrews are small and peg-like, and may be reduced in number. The dental formula of shrews is: Shrews are fiercely territorial, driving off rivals, and coming together only to mate. Many species dig burrows for catching food and hiding from predators, although this is not universal. Female shrews can have up to 10 litters a year; in the tropics, they breed all year round; in temperate zones, they cease breeding only in the winter. Shrews have gestation periods of 17–32 days. The female often becomes pregnant within a day or so of giving birth, and lactates during her pregnancy, weaning one litter as the next is born. Shrews live 12 to 30 months. A characteristic behaviour observed in many species of shrew is known as "caravanning". This is when a litter of young shrews form a line behind the mother, each gripping the shrew in front by the fur at the base of the tail.
Shrew
Wikipedia
446
155176
https://en.wikipedia.org/wiki/Shrew
Biology and health sciences
Soricomorpha
null
Shrews are unusual among mammals in a number of respects. Unlike most mammals, some species of shrews are venomous. Shrew venom is not conducted into the wound by fangs, but by grooves in the teeth. The venom contains various compounds, and the contents of the venom glands of the American short-tailed shrew are sufficient to kill 200 mice by intravenous injection. One chemical extracted from shrew venom may be potentially useful in the treatment of high blood pressure, while another compound may be useful in the treatment of some neuromuscular diseases and migraines. The saliva of the northern short-tailed shrew (Blarina brevicauda) contains soricidin, a peptide which has been studied for use in treating ovarian cancer. Also, along with the bats and toothed whales, some species of shrews use echolocation. Unlike most other mammals, shrews lack zygomatic bones (also called the jugals), so have incomplete zygomatic arches. Echolocation The only terrestrial mammals known to echolocate are two genera (Sorex and Blarina) of shrews, the tenrecs of Madagascar, bats, and the solenodons. These include the Eurasian or common shrew (Sorex araneus) and the American vagrant shrew (Sorex vagrans) and northern short-tailed shrew (Blarina brevicauda). These shrews emit series of ultrasonic squeaks. By nature the shrew sounds, unlike those of bats, are low-amplitude, broadband, multiharmonic, and frequency modulated. They contain no "echolocation clicks" with reverberations and would seem to be used for simple, close-range spatial orientation. In contrast to bats, shrews use echolocation only to investigate their habitats rather than additionally to pinpoint food. Except for large and thus strongly reflecting objects, such as a big stone or tree trunk, they probably are not able to disentangle echo scenes, but rather derive information on habitat type from the overall call reverberations. This might be comparable to human hearing whether one calls into a beech forest or into a reverberant wine cellar. Classification
Shrew
Wikipedia
469
155176
https://en.wikipedia.org/wiki/Shrew
Biology and health sciences
Soricomorpha
null
The 385 shrew species are placed in 26 genera, which are grouped into three living subfamilies: Crocidurinae (white-toothed shrews), Myosoricinae (African shrews), and Soricinae (red-toothed shrews). In addition, the family contains the extinct subfamilies Limnoecinae, Crocidosoricinae, Allosoricinae, and Heterosoricinae (although Heterosoricinae is also commonly considered a separate family).
Family Soricidae
  Subfamily Crocidurinae: Crocidura, Diplomesodon, Feroculus, Palawanosorex, Paracrocidura, Ruwenzorisorex, Scutisorex, Solisorex, Suncus, Sylvisorex
  Subfamily Myosoricinae: Congosorex, Myosorex, Surdisorex
  Subfamily Soricinae
    Tribe Anourosoricini: Anourosorex
    Tribe Blarinellini: Blarinella
    Tribe Blarinini: Blarina, Cryptotis
    Tribe Nectogalini: Chimarrogale, Chodsigoa, Episoriculus, Nectogale, Neomys, †Asoriculus, †Nesiotites, Soriculus
    Tribe Notiosoricini: Megasorex, Notiosorex
    Tribe Soricini: Sorex
Shrew
Wikipedia
268
155176
https://en.wikipedia.org/wiki/Shrew
Biology and health sciences
Soricomorpha
null
Female ejaculation is characterized as an expulsion of fluid from the Skene's gland at the lower end of the urethra during or before an orgasm. It is also known colloquially as squirting or gushing, although research indicates that female ejaculation and squirting are different phenomena, squirting being attributed to a sudden expulsion of liquid that partly comes from the bladder and contains urine. Female ejaculation is physiologically distinct from coital incontinence, with which it is sometimes confused. There have been few studies on female ejaculation. A failure to adopt common definitions and research methodology by the scientific community has been the primary contributor to this lack of experimental data. Research has suffered from highly selected participants, narrow case studies, or very small sample sizes, and consequently has yet to produce significant results. Much of the research into the composition of the fluid focuses on determining whether it is, or contains, urine. It is common for any secretion that exits the vagina, and for fluid that exits the urethra, during sexual activity to be referred to as female ejaculate, which has led to significant confusion in the literature. Whether the fluid is secreted by the Skene's gland through and around the urethra has also been a topic of discussion; while the exact source and nature of the fluid remains controversial among medical professionals, and are related to doubts over the existence of the G-spot, there is substantial evidence that the Skene's gland is the source of female ejaculation. The function of female ejaculation, however, remains unclear. Reports In questionnaire surveys, 35–50% of women report that they have at some time experienced the gushing of fluid during orgasm. Other studies find anywhere from 10 to 69%, depending on the definitions and methods used. For instance Kratochvíl (1994) surveyed 200 women and found that 6% reported ejaculating, an additional 13% had some experience and about 60% reported release of fluid without actual gushing. Reports on the volume of fluid expelled vary considerably, starting from amounts that would be imperceptible to a woman, to mean values of 1–5 ml.
Female ejaculation
Wikipedia
457
155202
https://en.wikipedia.org/wiki/Female%20ejaculation
Biology and health sciences
Human anatomy
Health
The suggestion that women can expel fluid from their genital area as part of sexual arousal has been described by women's health writer Rebecca Chalker as "one of the most hotly debated questions in modern sexology". Female ejaculation has been discussed in anatomical, medical, and biological literature throughout recorded history. The reasons for the interest in female ejaculation have been questioned by feminist writers. Western literature 16th to 18th century In the 16th century, the Dutch physician Laevinius Lemnius, referred to how a woman "draws forth the man's seed and casts her own with it". In the 17th century, François Mauriceau described glands at the female urethral meatus that "pour out great quantities of saline liquor during coition, which increases the heat and enjoyment of women". This century saw an increasing understanding of female sexual anatomy and function, in particular the work of the Bartholin family in Denmark. De Graaf In the 17th century, the Dutch anatomist Reinier de Graaf wrote an influential treatise on the reproductive organs Concerning the Generative Organs of Women which is much cited in the literature on this topic. De Graaf discussed the original controversy but supported the Aristotelian view. He identified the source as the glandular structures and ducts surrounding the urethra.
Female ejaculation
Wikipedia
275
155202
https://en.wikipedia.org/wiki/Female%20ejaculation
Biology and health sciences
Human anatomy
Health
He identified [XIII:212] the various controversies regarding the ejaculate and its origin, but stated he believed that this fluid "which rushes out with such impetus during venereal combat or libidinous imagining" was derived from a number of sources, including the vagina, urinary tract, cervix and uterus. He appears to identify Skene's ducts, when he writes [XIII: 213] "those [ducts] which are visible around the orifice of the neck of the vagina and the outlet of the urinary passage receive their fluid from the female 'parastatae', or rather the thick membranous body around the urinary passage." However he appears not to distinguish between the lubrication of the perineum during arousal and an orgasmic ejaculate when he refers to liquid "which in libidinous women often rushes out at the mere sight of a handsome man." Further on [XIII:214] he refers to "liquid as usually comes from the pudenda in one gush." However, his prime purpose was to distinguish between generative fluid and pleasurable fluid, in his stand on the Aristotelian semen controversy. 19th century Krafft-Ebing's study of sexual perversion, Psychopathia Sexualis (1886), describes female ejaculation under the heading "Congenital Sexual Inversion in Women" as a perversion related to neurasthenia and homosexuality. It is also described by Freud in pathological terms in his study of Dora (1905), where he relates it to hysteria.
Female ejaculation
Wikipedia
339
155202
https://en.wikipedia.org/wiki/Female%20ejaculation
Biology and health sciences
Human anatomy
Health
However, women's writing of that time portrayed this in more positive terms. Thus we find Almeda Sperry writing to Emma Goldman in 1918 about the "rhythmic spurt of your love juices" (Falk, Love, Anarchy and Emma Goldman, Holt Rinehart 1984, p. 175; cited in Nestle, A Restricted Country, Cleis 2003, p. 163). Anatomical knowledge was also advanced by Alexander Skene's description of para-urethral or periurethral glands (glands around the urethra) in 1880, which have been variously claimed to be one source of the fluids in the ejaculate, and are now commonly referred to as the Skene's glands. 20th century Early 20th-century understanding Female ejaculation is mentioned as normal in early 20th century 'marriage manuals', such as TH Van de Velde's Ideal Marriage: Its Physiology and Technique (1926). Certainly van de Velde was well aware of the varied experiences of women. In 1948, Huffman, an American gynaecologist, published his studies of the prostatic tissue in women together with a historical account and detailed drawings. These clearly showed the difference between the original glands identified by Skene at the urinary meatus, and the more proximal collections of glandular tissue emptying directly into the urethra. Most of the interest had focused on the substance and structure rather than function of the glands. A more definitive contemporary account of ejaculation appeared shortly after, in 1950, with the publication of an essay by Gräfenberg based on his observations of women during orgasm. However, this paper made little impact, and was dismissed in the major sexological writings of that time, such as Kinsey (1953) and Masters and Johnson (1966), which equated this "erroneous belief" with urinary stress incontinence. Although clearly Kinsey was familiar with the phenomenon, commenting that (p. 612);
Female ejaculation
Wikipedia
500
155202
https://en.wikipedia.org/wiki/Female%20ejaculation
Biology and health sciences
Human anatomy
Health
as were Masters and Johnson ten years later, who observed (pp 79–80): (emphasis in original) yet dismissed it (p. 135) – "female ejaculation is an erroneous but widespread concept", and even twenty years later in 1982, they repeated the statement that it was erroneous (p. 69–70) and the result of "urinary stress incontinence". Late 20th-century awareness The topic did not receive serious attention again until a review by Josephine Lowndes Sevely and JW Bennett appeared in 1978. This latter paper, which traces the history of the controversies to that point, and a series of three papers in 1981 by Beverly Whipple and colleagues in the Journal of Sex Research, became the focal point of the current debate. Whipple became aware of the phenomenon when studying urinary incontinence, with which it is often confused. As Sevely and Bennett point out, this is "not new knowledge, but a rediscovery of lost awareness that should contribute towards reshaping our view of female sexuality". Nevertheless, the theory advanced by these authors was immediately dismissed by many other authors, such as physiologist Joseph Bohlen, for not being based on rigorous scientific procedures, and psychiatrist Helen Singer Kaplan (1983) stated: Some radical feminist writers, such as Sheila Jeffreys (1985) were also dismissive, claiming it as a figment of male fantasy: It required the detailed anatomical work of Helen O'Connell from 1998 onwards to more properly elucidate the relationships between the different anatomical structures involved. As she observes, the female perineal urethra is embedded in the anterior vaginal wall and is surrounded by erectile tissue in all directions except posteriorly where it relates to the vaginal wall. "The distal vagina, clitoris, and urethra form an integrated entity covered superficially by the vulval skin and its epithelial features. These parts have a shared vasculature and nerve supply and during sexual stimulation respond as a unit".
Female ejaculation
Wikipedia
429
155202
https://en.wikipedia.org/wiki/Female%20ejaculation
Biology and health sciences
Human anatomy
Health
Anthropological accounts Female ejaculation appears in 20th-century anthropological works, such as Malinowski's Melanesian study, The Sexual Life of Savages (1929), and Gladwin and Sarason's "Truk: Man in Paradise" (1956). Malinowski states that in the language of the Trobriand Island people, a single word is used to describe ejaculation in both male and female. In describing sexual relations amongst the Chuukese Micronesians, Gladwin and Sarason state that "Female orgasm is commonly signalled by urination" (p. 205). Further examples have been reported from other cultures, including the Ugandan Batoro, Mohave Indians, Mangaians, and Ponapese.
Female ejaculation
Wikipedia
159
155202
https://en.wikipedia.org/wiki/Female%20ejaculation
Biology and health sciences
Human anatomy
Health
In colorimetry, the Munsell color system is a color space that specifies colors based on three properties of color: hue (basic color), value (lightness), and chroma (color intensity). It was created by Albert H. Munsell in the first decade of the 20th century and adopted by the United States Department of Agriculture (USDA) as the official color system for soil research in the 1930s. Several earlier color order systems had placed colors into a three-dimensional color solid of one form or another, but Munsell was the first to separate hue, value, and chroma into perceptually uniform and independent dimensions, and he was the first to illustrate the colors systematically in three-dimensional space. Munsell's system, particularly the later renotations, is based on rigorous measurements of human subjects' visual responses to color, putting it on a firm experimental scientific basis. Because of this basis in human visual perception, Munsell's system has outlasted its contemporary color models, and though it has been superseded for some uses by models such as CIELAB (L*a*b*) and CIECAM02, it is still in wide use today. Explanation The system consists of three independent properties of color which can be represented cylindrically in three dimensions as an irregular color solid: hue, measured by degrees around horizontal circles chroma, measured radially outward from the neutral (gray) vertical axis value, measured vertically on the core cylinder from 0 (black) to 10 (white) Munsell determined the spacing of colors along these dimensions by taking measurements of human visual responses. In each dimension, Munsell colors are as close to perceptually uniform as he could make them, which makes the resulting shape quite irregular. As Munsell explains: Hue Each horizontal circle Munsell divided into five principal hues: Red, Yellow, Green, Blue, and Purple, along with 5 intermediate hues (e.g., YR) halfway between adjacent principal hues. Each of these 10 steps, with the named hue given number 5, is then broken into 10 sub-steps, so that 100 hues are given integer values. In practice, color charts conventionally specify 40 hues, in increments of 2.5, progressing as for example 10R to 2.5YR.
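Because the hue circle is a regular subdivision, the 40 hue labels conventionally used on color charts can be enumerated mechanically. The short Python sketch below does this under the assumptions stated in its comments; the helper name and step parameter are illustrative and not part of the Munsell notation itself.

```python
# Minimal sketch: enumerate the 40 chart hues of the Munsell hue circle.
# Assumption: each of the 10 hue families (5 principal + 5 intermediate,
# in the order R, YR, Y, GY, G, BG, B, PB, P, RP) is sampled at steps of 2.5,
# giving labels such as 2.5R, 5R, 7.5R, 10R, 2.5YR, ... 10RP.

HUE_FAMILIES = ["R", "YR", "Y", "GY", "G", "BG", "B", "PB", "P", "RP"]

def chart_hues(step=2.5):
    """Return hue labels such as '2.5R', '5R', ..., '10RP' (40 for step=2.5)."""
    labels = []
    for family in HUE_FAMILIES:
        value = step
        while value <= 10:
            # Drop a trailing .0 so 5.0 prints as '5R', matching chart labels.
            num = int(value) if float(value).is_integer() else value
            labels.append(f"{num}{family}")
            value += step
    return labels

print(len(chart_hues()))   # 40
print(chart_hues()[:5])    # ['2.5R', '5R', '7.5R', '10R', '2.5YR']
```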
Munsell color system
Wikipedia
490
155350
https://en.wikipedia.org/wiki/Munsell%20color%20system
Physical sciences
Basics
Physics
Two colors of equal value and chroma, on opposite sides of a hue circle, are complementary colors, and mix additively to the neutral gray of the same value. A diagram of the 40 evenly spaced Munsell hues can be drawn with complements vertically aligned. Value Value, or lightness, varies vertically along the color solid, from black (value 0) at the bottom, to white (value 10) at the top. Neutral grays lie along the vertical axis between black and white. Several color solids before Munsell's plotted luminosity from black on the bottom to white on the top, with a gray gradient between them, but these systems neglected to keep perceptual lightness constant across horizontal slices. Instead, they plotted fully saturated yellow (light), and fully saturated blue and purple (dark) along the equator. Chroma Chroma, measured radially from the center of each slice, represents the "purity" of a color (related to saturation), with lower chroma being less pure (more washed out, as in pastels). Note that there is no intrinsic upper limit to chroma. Different areas of the color space have different maximal chroma coordinates. For instance, light yellow colors have considerably more potential chroma than light purples, due to the nature of the eye and the physics of color stimuli. This led to a wide range of possible chroma levels: up to the high 30s for some hue–value combinations (though it is difficult or impossible to make physical objects in colors of such high chromas, and they cannot be reproduced on current computer displays). Vivid solid colors are in the range of approximately chroma 8. Specifying a color A color is fully specified by listing the three numbers for hue, value, and chroma in that order. For instance, a purple of medium lightness and fairly saturated would be 5P 5/10, with 5P meaning the color in the middle of the purple hue band, 5/ meaning medium value (lightness), and a chroma of 10. An achromatic (neutral) color is specified with the syntax "N V/", where V is the value; for example, a medium grey is specified by "N 5/".
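To make the notation concrete, here is a minimal Python sketch of a parser for the "hue value/chroma" form described above (e.g. "5P 5/10") and the neutral "N V/" form. The class and function names are invented for illustration and are not part of any standard library.

```python
import re
from typing import NamedTuple, Optional

class MunsellColor(NamedTuple):
    hue: Optional[str]   # e.g. '5P'; None for achromatic (neutral) colors
    value: float         # lightness, 0 (black) to 10 (white)
    chroma: float        # 0 for neutrals; no fixed upper bound

def parse_munsell(spec: str) -> MunsellColor:
    """Parse notations such as '5P 5/10' (chromatic) or 'N 5/' (achromatic)."""
    spec = spec.strip()
    neutral = re.fullmatch(r"N\s*([\d.]+)/?", spec)
    if neutral:
        return MunsellColor(hue=None, value=float(neutral.group(1)), chroma=0.0)
    chromatic = re.fullmatch(r"([\d.]+)\s*([A-Z]{1,2})\s+([\d.]+)\s*/\s*([\d.]+)", spec)
    if not chromatic:
        raise ValueError(f"not a Munsell notation: {spec!r}")
    step, family, value, chroma = chromatic.groups()
    return MunsellColor(hue=f"{step}{family}", value=float(value), chroma=float(chroma))

print(parse_munsell("5P 5/10"))  # MunsellColor(hue='5P', value=5.0, chroma=10.0)
print(parse_munsell("N 5/"))     # MunsellColor(hue=None, value=5.0, chroma=0.0)
```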
Munsell color system
Wikipedia
460
155350
https://en.wikipedia.org/wiki/Munsell%20color%20system
Physical sciences
Basics
Physics
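The notation in the preceding passage is easy to handle mechanically. Below is a minimal Python sketch (not an official Munsell library; the function name parse_munsell and the tuple layout are illustrative assumptions) that splits a specification such as "5P 5/10" or "N 5/" into its hue, value, and chroma parts.

```python
import re

HUE_BANDS = ("YR", "GY", "BG", "PB", "RP", "R", "Y", "G", "B", "P")  # two-letter codes listed first

def parse_munsell(spec: str):
    """Parse '5P 5/10' -> ('P', 5.0, 5.0, 10.0) and 'N 5/' -> ('N', None, 5.0, 0.0)."""
    spec = spec.strip()
    neutral = re.fullmatch(r"N\s*([\d.]+)\s*/?", spec)
    if neutral:
        return ("N", None, float(neutral.group(1)), 0.0)
    m = re.fullmatch(r"([\d.]+)\s*(%s)\s+([\d.]+)/([\d.]+)" % "|".join(HUE_BANDS), spec)
    if not m:
        raise ValueError(f"not a Munsell specification: {spec!r}")
    return (m.group(2), float(m.group(1)), float(m.group(3)), float(m.group(4)))

print(parse_munsell("5P 5/10"))    # ('P', 5.0, 5.0, 10.0): hue band, hue step, value, chroma
print(parse_munsell("N 5/"))       # ('N', None, 5.0, 0.0): a medium grey
print(parse_munsell("2.5YR 6/4"))  # ('YR', 2.5, 6.0, 4.0)
```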
In computer processing, the Munsell colors are converted to a set of "HVC" numbers. The V and C are the same as the normal value and chroma. The H (hue) number is converted by mapping the hue rings into numbers between 0 and 100, where both 0 and 100 correspond to 10RP. As the Munsell books, including the 1943 renotation, only contain colors for some points in the Munsell space, it is non-trivial to specify an arbitrary color in Munsell space. Interpolation must be used to assign meanings to non-book colors such as "2.8Y 6.95/2.3", followed by an inversion of the fitted Munsell-to-xyY transform. The ASTM defined a method in 2008, but the method of Centore (2012) is reported to work better. History and influence The idea of using a three-dimensional color solid to represent all colors was developed during the 18th and 19th centuries. Several different shapes for such a solid were proposed, including: a double triangular pyramid by Tobias Mayer in 1758, a single triangular pyramid by Johann Heinrich Lambert in 1772, a sphere by Philipp Otto Runge in 1810, a hemisphere by Michel Eugène Chevreul in 1839, a cone by Hermann von Helmholtz in 1860, a tilted cube by William Benson in 1868, and a slanted double cone by August Kirschmann in 1895. These systems became progressively more sophisticated, with Kirschmann’s even recognizing the difference in value between bright colors of different hues. But all of them remained either purely theoretical or encountered practical problems in accommodating all colors. Furthermore, none was based on any rigorous scientific measurement of human vision; before Munsell, the relationship between hue, value, and chroma was not understood. Albert Munsell, an artist and professor of art at the Massachusetts Normal Art School (now Massachusetts College of Art and Design, or MassArt), wanted to create a "rational way to describe color" that would use decimal notation instead of color names (which he felt were "foolish" and "misleading"), which he could use to teach his students about color. He first started work on the system in 1898 and published it in full form in A Color Notation in 1905.
Munsell color system
Wikipedia
472
155350
https://en.wikipedia.org/wiki/Munsell%20color%20system
Physical sciences
Basics
Physics
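The hue-to-number mapping described above can be sketched in a few lines. The band order R, YR, Y, GY, G, BG, B, PB, P, RP and the placement of 10RP at 0/100 follow the description in the passage; the helper name hue_number is an illustrative assumption, and the exact convention should be checked against ASTM D1535 before relying on it.

```python
BANDS = ["R", "YR", "Y", "GY", "G", "BG", "B", "PB", "P", "RP"]

def hue_number(step: float, band: str) -> float:
    """Map a hue such as 2.5YR onto the 0-100 scale, e.g. hue_number(2.5, 'YR') -> 12.5."""
    if not 0 < step <= 10:
        raise ValueError("the hue step must lie in (0, 10]")
    # 10RP maps to 100, which is identified with 0 on the circular scale
    return (BANDS.index(band) * 10 + step) % 100

print(hue_number(2.5, "YR"))  # 12.5
print(hue_number(5, "P"))     # 85.0 (the 5P of the example '5P 5/10')
print(hue_number(10, "RP"))   # 0.0, i.e. the point shared by 0 and 100
```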
The original embodiment of the system (the 1905 Atlas) had some deficiencies as a physical representation of the theoretical system. These were improved significantly in the 1929 Munsell Book of Color and through an extensive series of experiments carried out by the Optical Society of America in the 1940s resulting in the notations (sample definitions) for the modern Munsell Book of Color. Though several replacements for the Munsell system have been invented, building on Munsell's foundational ideas—including the Optical Society of America's Uniform Color Scales, and the International Commission on Illumination’s CIELAB (L*a*b*) and CIECAM02 color models—the Munsell system is still widely used, by, among others, ANSI to define skin color and hair color for forensic pathology, the USGS for matching soil color, in prosthodontics during the selection of tooth color for dental restorations, and breweries for matching beer color. The original Munsell color chart remains useful for comparing computer models of human color vision.
Munsell color system
Wikipedia
219
155350
https://en.wikipedia.org/wiki/Munsell%20color%20system
Physical sciences
Basics
Physics
Liriodendron is a genus of two species of characteristically large trees, deciduous over most of their populations, in the magnolia family (Magnoliaceae). These trees are widely known by the common name tulip tree or tuliptree for their large flowers superficially resembling tulips. It is sometimes referred to as tulip poplar or yellow poplar, and the wood simply as "poplar", although it is not closely related to the true poplars. Other common names include canoewood, saddle-leaf tree, and white wood. The two extant species are Liriodendron tulipifera, native to eastern North America, and Liriodendron chinense, native to China and Vietnam. Both species often grow to great size; the North American species may reach heights approaching 60 m (about 190 ft). The North American species is commonly used horticulturally, the Chinese species is increasing in cultivation, and hybrids have been produced between these two allopatrically distributed species. Various extinct species of Liriodendron have been described from the fossil record. Description Liriodendron trees are easily recognized by their leaves, which are distinctive, having four lobes in most cases and a cross-cut notched or straight apex. Leaf size varies from 8–22 cm long and 6–25 cm wide. They are deciduous in the vast majority of cases for both species; however, each species has a semi-deciduous variety at the southern limit of its range, in Florida and Yunnan respectively. The tulip tree is often a large tree, 18–60 m high and 60–120 cm in diameter. The stoutest well-authenticated tulip tree was the Liberty Tree in Maryland, which died in 1999. The tree reaches its greatest heights in groves where individuals compete for sunlight, and somewhat less if growing in an open field. Its trunk is usually columnar, with a long, branch-free bole forming a compact, rather than open, conical crown of slender branches. It has deep roots that spread widely.
Liriodendron
Wikipedia
418
155383
https://en.wikipedia.org/wiki/Liriodendron
Biology and health sciences
Magnoliales
Plants
Leaves are slightly larger in L. chinense, compared to L. tulipifera, but with considerable overlap between the species; the petiole is 4–18 cm long. Leaves on young trees tend to be more deeply lobed and larger in size than those on mature trees. In autumn, the leaves turn yellow, or brown and yellow. Both species grow rapidly in rich, moist soils of temperate climates. They hybridize easily, producing L. x sinoamericanum cultivars. Flowers are 3–10 cm in diameter and have nine tepals — three green outer sepals and six inner petals which are yellow-green, with an orange flare at the base in L. tulipifera and L. x sinoamericanum. They start forming after around 15 years and are superficially similar to a tulip in shape, hence the tree's name. Flowers of L. tulipifera have a faint cucumber odor. The stamens and pistils are arranged spirally around a central spike or gynaecium; the stamens fall off, and the pistils become the samaras. The fruit is a cone-like aggregate of samaras 4–9 cm long, each of which has a roughly tetrahedral seed with one edge attached to the central conical spike and the other edge attached to the wing. Distribution Liriodendron trees are also easily recognized by their general shape, with the higher branches sweeping together in one direction, and they are also recognizable by their height, as the taller ones usually protrude above the canopy of oaks, maples, and other trees—more markedly with the American species. Appalachian cove forests often contain several tulip trees of height and girth not seen in other species of eastern hardwoods.
Liriodendron
Wikipedia
369
155383
https://en.wikipedia.org/wiki/Liriodendron
Biology and health sciences
Magnoliales
Plants
In the Appalachian cove forests, trees 150 to 165 ft in height are common, and trees from 166 to nearly 180 ft are also found. More Liriodendron over 170 ft in height have been measured by the Eastern Native Tree Society than for any other eastern species. The current tallest tulip tree on record has reached 191.9 ft, the tallest native angiosperm tree known in North America. The tulip tree is rivaled in eastern forests only by white pine, loblolly pine, and eastern hemlock. Reports of tulip trees over 200 ft have been made, but none of the measurements has been confirmed by the Eastern Native Tree Society. Most reflect measurement errors attributable to not accurately locating the highest crown point relative to the base of the tree—a common error made by users employing only clinometers/hypsometers when measuring height. Maximum circumferences for the species are between 24 and 30 ft at breast height, although a few historical specimens may have been slightly larger. The Great Smoky Mountains National Park has the greatest population of tulip trees 20 ft and over in circumference. The largest-volume tulip tree known anywhere is the Sag Branch Giant. Paleo history Liriodendrons have been reported as fossils from the Late Cretaceous and early Tertiary of North America and central Asia. They are known widely as Tertiary-age fossils in Europe and well outside their present range in Asia and North America, showing a once-circumpolar northern distribution. Like many "Arcto-Tertiary" genera, Liriodendron apparently became extinct in Europe because the east-west orientation of Europe's mountains blocked southward migration during the large-scale glaciation and aridity of climate during glacial phases. The genus name should not be confused with Lepidodendron, an extinct genus known only from fossils: an important group of long-extinct pteridophytes in the phylum Lycopodiophyta that are well known as Paleozoic coal-age fossils. Cultivation and use Liriodendron trees prefer a temperate climate, sun or part shade, and deep, fertile, well-drained and slightly acidic soil. Propagation is by seed or grafting. Plants grown from seed may take more than eight years to flower. Grafted plants flower depending on the age of the scion plant.
Liriodendron
Wikipedia
501
155383
https://en.wikipedia.org/wiki/Liriodendron
Biology and health sciences
Magnoliales
Plants
The wood of the North American species (called poplar or tulipwood) is fine grained and stable. It is easy to work and commonly used for cabinet and furniture framing, i.e. internal structural members and subsurfaces for veneering. Additionally, much inexpensive furniture, described for sales purposes simply as "hardwood", is in fact primarily stained poplar. In the literature of American furniture manufacturers from the first half of the 20th century, it is often referred to as "gum wood". The wood is only moderately rot-resistant and is not commonly used in shipbuilding, but has found some recent use in light-craft construction. The wood is readily available and comparatively light in weight when air dried. The name canoewood probably refers to the tree's use for construction of dugout canoes by eastern Native Americans, for which its fine grain and large trunk size are eminently suited. Tulip tree leaves are eaten by the caterpillars of some Lepidoptera, for example the eastern tiger swallowtail (Papilio glaucus). Species and cultivars The two species are Liriodendron chinense and Liriodendron tulipifera. Cultivars include: 'Ardis', a small-leaf, compact cultivar; 'Aureomarginatum', variegated with yellow-margined leaves; 'Fastigiatum', which grows with an erect or columnar (fastigiate) habit; the 'Florida' strain, a fast-growing early bloomer whose leaves have round lobes; 'Glen Gold', which bears yellow-gold colored leaves; and 'Mediopictum', a variegated cultivar with gold-centered leaves. 'Chapel Hill' and 'Doc Deforce's Delight' are hybrids of the above two species.
Liriodendron
Wikipedia
356
155383
https://en.wikipedia.org/wiki/Liriodendron
Biology and health sciences
Magnoliales
Plants
Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, computability theory overlaps with proof theory and effective descriptive set theory. Basic questions addressed by computability theory include: What does it mean for a function on the natural numbers to be computable? How can noncomputable functions be classified into a hierarchy based on their level of noncomputability? Although there is considerable overlap in terms of knowledge and methods, mathematical computability theorists study the theory of relative computability, reducibility notions, and degree structures; those in the computer science field focus on the theory of subrecursive hierarchies, formal methods, and formal languages. The study of which mathematical constructions can be effectively performed is sometimes called recursive mathematics. Introduction Computability theory originated in the 1930s, with the work of Kurt Gödel, Alonzo Church, Rózsa Péter, Alan Turing, Stephen Kleene, and Emil Post. The fundamental results the researchers obtained established Turing computability as the correct formalization of the informal idea of effective calculation. In 1952, these results led Kleene to coin the two names "Church's thesis" and "Turing's thesis". Nowadays these are often considered as a single hypothesis, the Church–Turing thesis, which states that any function that is computable by an algorithm is a computable function. Although initially skeptical, by 1946 Gödel argued in favor of this thesis. With a definition of effective calculation came the first proofs that there are problems in mathematics that cannot be effectively decided. In 1936, Church and Turing were inspired by techniques used by Gödel to prove his incompleteness theorems in 1931, and independently demonstrated that the Entscheidungsproblem is not effectively decidable. This result showed that there is no algorithmic procedure that can correctly decide whether arbitrary mathematical propositions are true or false.
Computability theory
Wikipedia
440
155414
https://en.wikipedia.org/wiki/Computability%20theory
Mathematics
Discrete mathematics
null
Many problems in mathematics have been shown to be undecidable after these initial examples were established. In 1947, Markov and Post published independent papers showing that the word problem for semigroups cannot be effectively decided. Extending this result, Pyotr Novikov and William Boone showed independently in the 1950s that the word problem for groups is not effectively solvable: there is no effective procedure that, given a word in a finitely presented group, will decide whether the element represented by the word is the identity element of the group. In 1970, Yuri Matiyasevich proved (using results of Julia Robinson) Matiyasevich's theorem, which implies that Hilbert's tenth problem has no effective solution; this problem asked whether there is an effective procedure to decide whether a Diophantine equation over the integers has a solution in the integers. Turing computability The main form of computability studied in the field was introduced by Turing in 1936. A set of natural numbers is said to be a computable set (also called a decidable, recursive, or Turing computable set) if there is a Turing machine that, given a number n, halts with output 1 if n is in the set and halts with output 0 if n is not in the set. A function f from natural numbers to natural numbers is a (Turing) computable, or recursive function if there is a Turing machine that, on input n, halts and returns output f(n). The use of Turing machines here is not necessary; there are many other models of computation that have the same computing power as Turing machines; for example the μ-recursive functions obtained from primitive recursion and the μ operator.
Computability theory
Wikipedia
357
155414
https://en.wikipedia.org/wiki/Computability%20theory
Mathematics
Discrete mathematics
null
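As a small illustration of the μ operator mentioned in the passage above, the sketch below (plain Python, with illustrative names) performs the unbounded search "least n with f(n, ...) = 0"; like a Turing machine computing a partial function, it simply fails to return when no such n exists.

```python
def mu(f):
    """Return the function that searches for the least n with f(n, *args) == 0."""
    def search(*args):
        n = 0
        while True:              # unbounded search: may run forever, i.e. a partial function
            if f(n, *args) == 0:
                return n
            n += 1
    return search

# Integer square root defined in mu-recursive style:
# isqrt(x) = least n such that (n + 1)^2 > x.
isqrt = mu(lambda n, x: 0 if (n + 1) * (n + 1) > x else 1)

print(isqrt(10))   # 3
print(isqrt(49))   # 7
```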
The terminology for computable functions and sets is not completely standardized. The definition in terms of μ-recursive functions as well as a different definition of functions by Gödel led to the traditional name recursive for sets and functions computable by a Turing machine. The word decidable stems from the German word Entscheidungsproblem, which was used in the original papers of Turing and others. In contemporary use, the term "computable function" has various definitions: according to Nigel J. Cutland, it is a partial recursive function (which can be undefined for some inputs), while according to Robert I. Soare it is a total recursive (equivalently, general recursive) function. This article follows the second of these conventions. In 1996, Soare gave additional comments about the terminology. Not every set of natural numbers is computable. The halting problem, which is the set of (descriptions of) Turing machines that halt on input 0, is a well-known example of a noncomputable set. The existence of many noncomputable sets follows from the facts that there are only countably many Turing machines, and thus only countably many computable sets, but according to Cantor's theorem, there are uncountably many sets of natural numbers. Although the halting problem is not computable, it is possible to simulate program execution and produce an infinite list of the programs that do halt. Thus the halting problem is an example of a computably enumerable (c.e.) set, which is a set that can be enumerated by a Turing machine (other terms for computably enumerable include recursively enumerable and semidecidable). Equivalently, a set is c.e. if and only if it is the range of some computable function. The c.e. sets, although not decidable in general, have been studied in detail in computability theory. Areas of research Beginning with the theory of computable sets and functions described above, the field of computability theory has grown to include the study of many closely related topics. These are not independent areas of research: each of these areas draws ideas and results from the others, and most computability theorists are familiar with the majority of them. Relative computability and the Turing degrees
Computability theory
Wikipedia
491
155414
https://en.wikipedia.org/wiki/Computability%20theory
Mathematics
Discrete mathematics
null
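The remark that one can "simulate program execution and produce an infinite list of the programs that do halt" is usually made precise by dovetailing. The sketch below is an informal Python model: a "program" is a generator that yields once per step, standing in for a Turing machine, and the function names are illustrative assumptions. It enumerates halting programs without ever deciding halting.

```python
from itertools import count

def dovetail(programs):
    """Yield the indices of halting programs; `programs` is an infinite iterable of
    zero-argument functions, each returning a fresh step generator."""
    running = []                                 # (index, generator) pairs started so far
    progs = iter(programs)
    for stage in count():
        running.append((stage, next(progs)()))   # start one more program at each stage
        still_running = []
        for idx, gen in running:                 # give every started program one more step
            try:
                next(gen)
                still_running.append((idx, gen))
            except StopIteration:                # the program halted
                yield idx
        running = still_running

def make_program(i):
    def program():                 # halts after i steps if i is even, otherwise runs forever
        steps = count() if i % 2 else range(i)
        for _ in steps:
            yield
    return program

halting_indices = dovetail(make_program(i) for i in count())
print([next(halting_indices) for _ in range(5)])   # [0, 2, 4, 6, 8], in order of discovery
```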
Computability theory in mathematical logic has traditionally focused on relative computability, a generalization of Turing computability defined using oracle Turing machines, introduced by Turing in 1939. An oracle Turing machine is a hypothetical device which, in addition to performing the actions of a regular Turing machine, is able to ask questions of an oracle, which is a particular set of natural numbers. The oracle machine may only ask questions of the form "Is n in the oracle set?". Each question will be immediately answered correctly, even if the oracle set is not computable. Thus an oracle machine with a noncomputable oracle will be able to compute sets that a Turing machine without an oracle cannot. Informally, a set of natural numbers A is Turing reducible to a set B if there is an oracle machine that correctly tells whether numbers are in A when run with B as the oracle set (in this case, the set A is also said to be (relatively) computable from B and recursive in B). If a set A is Turing reducible to a set B and B is Turing reducible to A then the sets are said to have the same Turing degree (also called degree of unsolvability). The Turing degree of a set gives a precise measure of how uncomputable the set is. The natural examples of sets that are not computable, including many different sets that encode variants of the halting problem, have two properties in common: they are computably enumerable, and each can be translated into any other via a many-one reduction. That is, given such sets A and B, there is a total computable function f such that A = {x : f(x) ∈ B}. These sets are said to be many-one equivalent (or m-equivalent).
Computability theory
Wikipedia
377
155414
https://en.wikipedia.org/wiki/Computability%20theory
Mathematics
Discrete mathematics
null
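A many-one reduction in the sense just defined is simply a total computable function f with A = {x : f(x) ∈ B}. The toy sketch below uses two computable sets so that everything can be checked directly; for noncomputable sets the definition is the same, with membership in B answered by an oracle. The function names are illustrative only.

```python
def in_B(y: int) -> bool:          # B = the multiples of 6
    return y % 6 == 0

def f(x: int) -> int:              # the reduction: a total computable function
    return 3 * x

def in_A_via_reduction(x: int) -> bool:
    return in_B(f(x))              # x is in A  iff  f(x) is in B

# A = the even numbers; f(x) = 3x witnesses that A is many-one reducible to B
assert all(in_A_via_reduction(x) == (x % 2 == 0) for x in range(1000))
print("f(x) = 3x many-one reduces the even numbers to the multiples of 6")
```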
Many-one reductions are "stronger" than Turing reductions: if a set A is many-one reducible to a set B, then A is Turing reducible to B, but the converse does not always hold. Although the natural examples of noncomputable sets are all many-one equivalent, it is possible to construct computably enumerable sets A and B such that A is Turing reducible to B but not many-one reducible to B. It can be shown that every computably enumerable set is many-one reducible to the halting problem, and thus the halting problem is the most complicated computably enumerable set with respect to many-one reducibility and with respect to Turing reducibility. In 1944, Post asked whether every computably enumerable set is either computable or Turing equivalent to the halting problem, that is, whether there is no computably enumerable set with a Turing degree intermediate between those two. As intermediate results, Post defined natural types of computably enumerable sets like the simple, hypersimple and hyperhypersimple sets. Post showed that these sets are strictly between the computable sets and the halting problem with respect to many-one reducibility. Post also showed that some of them are strictly intermediate under other reducibility notions stronger than Turing reducibility. But Post left open the main problem of the existence of computably enumerable sets of intermediate Turing degree; this problem became known as Post's problem. After ten years, Kleene and Post showed in 1954 that there are intermediate Turing degrees between those of the computable sets and the halting problem, but they failed to show that any of these degrees contains a computably enumerable set. Very soon after this, Friedberg and Muchnik independently solved Post's problem by establishing the existence of computably enumerable sets of intermediate degree. This groundbreaking result opened a wide study of the Turing degrees of the computably enumerable sets which turned out to possess a very complicated and non-trivial structure.
Computability theory
Wikipedia
439
155414
https://en.wikipedia.org/wiki/Computability%20theory
Mathematics
Discrete mathematics
null
There are uncountably many sets that are not computably enumerable, and the investigation of the Turing degrees of all sets is as central in computability theory as the investigation of the computably enumerable Turing degrees. Many degrees with special properties were constructed: hyperimmune-free degrees where every function computable relative to that degree is majorized by a (unrelativized) computable function; high degrees relative to which one can compute a function f which dominates every computable function g in the sense that there is a constant c depending on g such that g(x) < f(x) for all x > c; random degrees containing algorithmically random sets; 1-generic degrees of 1-generic sets; and the degrees below the halting problem of limit-computable sets. The study of arbitrary (not necessarily computably enumerable) Turing degrees involves the study of the Turing jump. Given a set A, the Turing jump of A is a set of natural numbers encoding a solution to the halting problem for oracle Turing machines running with oracle A. The Turing jump of any set is always of higher Turing degree than the original set, and a theorem of Friedberg shows that any set that computes the halting problem can be obtained as the Turing jump of another set. Post's theorem establishes a close relationship between the Turing jump operation and the arithmetical hierarchy, which is a classification of certain subsets of the natural numbers based on their definability in arithmetic. Much recent research on Turing degrees has focused on the overall structure of the set of Turing degrees and the set of Turing degrees containing computably enumerable sets. A deep theorem of Shore and Slaman states that the function mapping a degree x to the degree of its Turing jump is definable in the partial order of the Turing degrees. A survey by Ambos-Spies and Fejer gives an overview of this research and its historical progression. Other reducibilities An ongoing area of research in computability theory studies reducibility relations other than Turing reducibility. Post introduced several strong reducibilities, so named because they imply truth-table reducibility. A Turing machine implementing a strong reducibility will compute a total function regardless of which oracle it is presented with. Weak reducibilities are those where a reduction process may not terminate for all oracles; Turing reducibility is one example.
Computability theory
Wikipedia
507
155414
https://en.wikipedia.org/wiki/Computability%20theory
Mathematics
Discrete mathematics
null
The strong reducibilities include: One-one reducibility: A is one-one reducible (or 1-reducible) to B if there is a total computable injective function f such that each n is in A if and only if f(n) is in B. Many-one reducibility: This is essentially one-one reducibility without the constraint that f be injective. A is many-one reducible (or m-reducible) to B if there is a total computable function f such that each n is in A if and only if f(n) is in B. Truth-table reducibility: A is truth-table reducible to B if A is Turing reducible to B via an oracle Turing machine that computes a total function regardless of the oracle it is given. Because of compactness of Cantor space, this is equivalent to saying that the reduction presents a single list of questions (depending only on the input) to the oracle simultaneously, and then having seen their answers is able to produce an output without asking additional questions regardless of the oracle's answers to the initial queries. Many variants of truth-table reducibility have also been studied. Further reducibilities (positive, disjunctive, conjunctive, linear and their weak and bounded versions) are discussed in the article Reduction (computability theory). The major research on strong reducibilities has been to compare their theories, both for the class of all computably enumerable sets as well as for the class of all subsets of the natural numbers. Furthermore, the relations between the reducibilities have been studied. For example, it is known that every Turing degree is either a truth-table degree or is the union of infinitely many truth-table degrees. Reducibilities weaker than Turing reducibility (that is, reducibilities that are implied by Turing reducibility) have also been studied. The most well known are arithmetical reducibility and hyperarithmetical reducibility. These reducibilities are closely connected to definability over the standard model of arithmetic.
Computability theory
Wikipedia
457
155414
https://en.wikipedia.org/wiki/Computability%20theory
Mathematics
Discrete mathematics
null
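The following sketch models a truth-table reduction as described above: on input n the reduction fixes its oracle queries in advance and then applies a total Boolean function to the answers. The example reduces the complement of a set B to B itself, something a many-one reduction cannot do in general; representing a reduction as a (queries, evaluator) pair is an illustrative choice, not standard notation.

```python
from typing import Callable, List, Tuple

def tt_complement(n: int) -> Tuple[List[int], Callable[[List[bool]], bool]]:
    """A truth-table reduction of the complement of B to B: one query, one negation."""
    queries = [n]                                  # fixed before the oracle is consulted
    def evaluate(answers: List[bool]) -> bool:     # total, whatever the answers are
        return not answers[0]
    return queries, evaluate

def run_tt_reduction(reduction, oracle: Callable[[int], bool], n: int) -> bool:
    queries, evaluate = reduction(n)
    return evaluate([oracle(q) for q in queries])

B = lambda y: y % 3 == 0                           # any oracle set will do for the demo
print([run_tt_reduction(tt_complement, B, n) for n in range(6)])
# [False, True, True, False, True, True] -- membership in the complement of B
```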
Rice's theorem and the arithmetical hierarchy Rice showed that for every nontrivial class C (which contains some but not all c.e. sets) the index set E = {e: the eth c.e. set We is in C} has the property that either the halting problem or its complement is many-one reducible to E, that is, can be mapped using a many-one reduction to E (see Rice's theorem for more detail). But many of these index sets are even more complicated than the halting problem. These types of sets can be classified using the arithmetical hierarchy. For example, the index set FIN of the class of all finite sets is on the level Σ2, the index set REC of the class of all recursive sets is on the level Σ3, the index set COFIN of all cofinite sets is also on the level Σ3, and the index set COMP of the class of all Turing-complete sets is on the level Σ4. These hierarchy levels are defined inductively: Σn+1 contains just all sets which are computably enumerable relative to Σn; Σ1 contains the computably enumerable sets. The index sets given here are even complete for their levels, that is, all the sets in these levels can be many-one reduced to the given index sets. Reverse mathematics The program of reverse mathematics asks which set-existence axioms are necessary to prove particular theorems of mathematics in subsystems of second-order arithmetic. This study was initiated by Harvey Friedman and was studied in detail by Stephen Simpson and others; in 1999, Simpson gave a detailed discussion of the program. The set-existence axioms in question correspond informally to axioms saying that the powerset of the natural numbers is closed under various reducibility notions. The weakest such axiom studied in reverse mathematics is recursive comprehension, which states that the powerset of the naturals is closed under Turing reducibility.
Computability theory
Wikipedia
408
155414
https://en.wikipedia.org/wiki/Computability%20theory
Mathematics
Discrete mathematics
null
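The reduction behind Rice's theorem can be sketched informally with closures. Given a program p and an input x, one builds a program q whose computed function is the identity if p halts on x and the everywhere-undefined function otherwise; for any nontrivial property containing the identity function but not the everywhere-undefined one, deciding the index set would therefore decide halting. Programs are modelled here as Python callables, which is only a stand-in for indices of Turing machines.

```python
def build_q(p, x):
    """Return a program q that computes the identity function if p halts on x,
    and the everywhere-undefined function otherwise."""
    def q(n):
        p(x)          # diverges here forever if p does not halt on x ...
        return n      # ... otherwise q behaves like the identity function
    return q

def halts_quickly(_):     # stands in for a program that halts on its input
    return 0

def loops_forever(_):     # stands in for a program that never halts
    while True:
        pass

q1 = build_q(halts_quickly, 7)   # computes the identity function
q2 = build_q(loops_forever, 7)   # computes the everywhere-undefined function
print(q1(42))                    # 42; calling q2(42) would never return
```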
Numberings A numbering is an enumeration of functions; it has two parameters, e and x, and outputs the value of the e-th function in the numbering on the input x. A numbering can be partial-computable although some of its members are total computable functions. Admissible numberings are those into which all others can be translated. A Friedberg numbering (named after its discoverer) is a one-one numbering of all partial-computable functions; it is necessarily not an admissible numbering. Later research dealt also with numberings of other classes like classes of computably enumerable sets. Goncharov discovered for example a class of computably enumerable sets for which the numberings fall into exactly two classes with respect to computable isomorphisms. The priority method Post's problem was solved with a method called the priority method; a proof using this method is called a priority argument. This method is primarily used to construct computably enumerable sets with particular properties. To use this method, the desired properties of the set to be constructed are broken up into an infinite list of goals, known as requirements, so that satisfying all the requirements will cause the set constructed to have the desired properties. Each requirement is assigned to a natural number representing the priority of the requirement; so 0 is assigned to the most important priority, 1 to the second most important, and so on. The set is then constructed in stages, each stage attempting to satisfy one or more of the requirements by either adding numbers to the set or banning numbers from the set so that the final set will satisfy the requirement. It may happen that satisfying one requirement will cause another to become unsatisfied; the priority order is used to decide what to do in such an event. Priority arguments have been employed to solve many problems in computability theory, and have been classified into a hierarchy based on their complexity. Because complex priority arguments can be technical and difficult to follow, it has traditionally been considered desirable to prove results without priority arguments, or to see if results proved with priority arguments can also be proved without them. For example, Kummer published a paper on a proof for the existence of Friedberg numberings without using the priority method.
Computability theory
Wikipedia
460
155414
https://en.wikipedia.org/wiki/Computability%20theory
Mathematics
Discrete mathematics
null
The lattice of computably enumerable sets When Post defined the notion of a simple set as a c.e. set with an infinite complement not containing any infinite c.e. set, he started to study the structure of the computably enumerable sets under inclusion. This lattice became a well-studied structure. Computable sets can be defined in this structure by the basic result that a set is computable if and only if the set and its complement are both computably enumerable. Infinite c.e. sets always have infinite computable subsets; on the other hand, simple sets exist but never have a coinfinite computable superset. Post had already introduced hypersimple and hyperhypersimple sets; later maximal sets were constructed, which are c.e. sets such that every c.e. superset is either a finite variant of the given maximal set or is co-finite. Post's original motivation in the study of this lattice was to find a structural notion such that every set which satisfies this property is neither in the Turing degree of the computable sets nor in the Turing degree of the halting problem. Post did not find such a property and the solution to his problem applied priority methods instead; in 1991, Harrington and Soare eventually found such a property. Automorphism problems Another important question is the existence of automorphisms in computability-theoretic structures. One of these structures is that of the computably enumerable sets under inclusion modulo finite difference; in this structure, A is below B if and only if the set difference A − B is finite. Maximal sets (as defined in the previous paragraph) have the property that they cannot be automorphic to non-maximal sets, that is, if there is an automorphism of the computably enumerable sets under the structure just mentioned, then every maximal set is mapped to another maximal set. In 1974, Soare showed that the converse also holds, that is, every two maximal sets are automorphic. So the maximal sets form an orbit, that is, every automorphism preserves maximality and any two maximal sets are transformed into each other by some automorphism. Harrington gave a further example of an automorphic property: that of the creative sets, the sets which are many-one equivalent to the halting problem.
Computability theory
Wikipedia
489
155414
https://en.wikipedia.org/wiki/Computability%20theory
Mathematics
Discrete mathematics
null
Besides the lattice of computably enumerable sets, automorphisms are also studied for the structure of the Turing degrees of all sets as well as for the structure of the Turing degrees of c.e. sets. In both cases, Cooper claims to have constructed nontrivial automorphisms which map some degrees to other degrees; this construction has, however, not been verified and some colleagues believe that the construction contains errors and that the question of whether there is a nontrivial automorphism of the Turing degrees is still one of the main unsolved questions in this area. Kolmogorov complexity The field of Kolmogorov complexity and algorithmic randomness was developed during the 1960s and 1970s by Chaitin, Kolmogorov, Levin, Martin-Löf and Solomonoff (the names are given here in alphabetical order; much of the research was independent, and the unity of the concept of randomness was not understood at the time). The main idea is to consider a universal Turing machine U and to measure the complexity of a number (or string) x as the length of the shortest input p such that U(p) outputs x. This approach revolutionized earlier ways to determine when an infinite sequence (equivalently, characteristic function of a subset of the natural numbers) is random or not by invoking a notion of randomness for finite objects. Kolmogorov complexity became not only a subject of independent study but is also applied to other subjects as a tool for obtaining proofs. There are still many open problems in this area.
Computability theory
Wikipedia
318
155414
https://en.wikipedia.org/wiki/Computability%20theory
Mathematics
Discrete mathematics
null
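Kolmogorov complexity itself is uncomputable, but any compressor yields an upper bound on it, up to the constant cost of the decompressor. The sketch below uses Python's zlib to contrast a highly regular string with pseudorandom bytes, in the spirit of the ideas above; it estimates upper bounds only, not the complexity itself, and the function name is illustrative.

```python
import os
import zlib

def description_length_upper_bound(data: bytes) -> int:
    """Length of a zlib description of `data`: an upper bound on its complexity,
    up to the constant size of the decompressor."""
    return len(zlib.compress(data, 9))

regular = b"ab" * 5000            # 10,000 bytes with an obvious short description
random_ish = os.urandom(10_000)   # 10,000 bytes that are almost surely incompressible

print(description_length_upper_bound(regular))      # a few dozen bytes
print(description_length_upper_bound(random_ish))   # close to 10,000 bytes
```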
Frequency computation This branch of computability theory analyzed the following question: For fixed m and n with 0 < m < n, for which sets A is it possible to compute, for any n different inputs x1, x2, ..., xn, a tuple of n numbers y1, y2, ..., yn such that at least m of the equations A(xk) = yk are true? Such sets are known as (m, n)-recursive sets. The first major result in this branch of computability theory is Trakhtenbrot's result that a set is computable if it is (m, n)-recursive for some m, n with 2m > n. On the other hand, Jockusch's semirecursive sets (which were already known informally before Jockusch introduced them in 1968) are examples of sets that are (m, n)-recursive if and only if 2m < n + 1. There are uncountably many of these sets and also some computably enumerable but noncomputable sets of this type. Later, Degtev established a hierarchy of computably enumerable sets that are (1, n + 1)-recursive but not (1, n)-recursive. After a long phase of research by Russian scientists, this subject became repopularized in the west by Beigel's thesis on bounded queries, which linked frequency computation to the above-mentioned bounded reducibilities and other related notions. One of the major results was Kummer's cardinality theorem, which states that a set A is computable if and only if there is an n such that some algorithm enumerates for each tuple of n different numbers up to n many possible choices of the cardinality of this set of n numbers intersected with A; these choices must contain the true cardinality but leave out at least one false one.
Computability theory
Wikipedia
418
155414
https://en.wikipedia.org/wiki/Computability%20theory
Mathematics
Discrete mathematics
null
Inductive inference This is the computability-theoretic branch of learning theory. It is based on E. Mark Gold's model of learning in the limit from 1967 and has since developed more and more models of learning. The general scenario is the following: Given a class S of computable functions, is there a learner (that is, a computable functional) which outputs, for any input of the form (f(0), f(1), ..., f(n)), a hypothesis? A learner M learns a function f if almost all hypotheses are the same index e of f with respect to a previously agreed on acceptable numbering of all computable functions; M learns S if M learns every f in S. Basic results are that all computably enumerable classes of functions are learnable while the class REC of all computable functions is not learnable. Many related models have been considered and also the learning of classes of computably enumerable sets from positive data is a topic studied from Gold's pioneering paper in 1967 onwards. Generalizations of Turing computability Computability theory includes the study of generalized notions of this field such as arithmetic reducibility, hyperarithmetical reducibility and α-recursion theory, as described by Sacks in 1990. These generalized notions include reducibilities that cannot be executed by Turing machines but are nevertheless natural generalizations of Turing reducibility. These studies include approaches to investigate the analytical hierarchy which differs from the arithmetical hierarchy by permitting quantification over sets of natural numbers in addition to quantification over individual numbers. These areas are linked to the theories of well-orderings and trees; for example, the set of all indices of computable (nonbinary) trees without infinite branches is complete for the level Π11 of the analytical hierarchy. Both Turing reducibility and hyperarithmetical reducibility are important in the field of effective descriptive set theory. The even more general notion of degrees of constructibility is studied in set theory.
Computability theory
Wikipedia
428
155414
https://en.wikipedia.org/wiki/Computability%20theory
Mathematics
Discrete mathematics
null
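Gold's "learning in the limit" can be illustrated for one very small class of functions. In the sketch below the class S consists of the functions f(x) = a*x + b with natural-number coefficients, hypotheses are (a, b) pairs rather than indices in an acceptable numbering, and the learner is identification by enumeration: it outputs the first candidate consistent with the data seen so far, so its conjectures eventually stabilise on the correct one. All names are illustrative.

```python
from itertools import count

def candidates():
    """Enumerate all pairs (a, b) of natural numbers."""
    for s in count():
        for a in range(s + 1):
            yield a, s - a

def learner(data):
    """Given data = [f(0), ..., f(n)], output the first (a, b) consistent with it."""
    for a, b in candidates():
        if all(a * x + b == y for x, y in enumerate(data)):
            return a, b

target = lambda x: 3 * x + 2      # the function to be learned, drawn from the class
data = []
for n in range(5):
    data.append(target(n))
    print(n, learner(data))       # the hypotheses converge to (3, 2) and never change again
```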
Continuous computability theory Computability theory for digital computation is well developed. Computability theory is less well developed for analog computation that occurs in analog computers, analog signal processing, analog electronics, artificial neural networks and continuous-time control theory, modelled by differential equations and continuous dynamical systems. For example, models of computation such as the Blum–Shub–Smale machine model have formalized computation on the reals. Relationships between definability, proof and computability There are close relationships between the Turing degree of a set of natural numbers and the difficulty (in terms of the arithmetical hierarchy) of defining that set using a first-order formula. One such relationship is made precise by Post's theorem. A weaker relationship was demonstrated by Kurt Gödel in the proofs of his completeness theorem and incompleteness theorems. Gödel's proofs show that the set of logical consequences of an effective first-order theory is a computably enumerable set, and that if the theory is strong enough this set will be uncomputable. Similarly, Tarski's indefinability theorem can be interpreted both in terms of definability and in terms of computability. Computability theory is also linked to second-order arithmetic, a formal theory of natural numbers and sets of natural numbers. The fact that certain sets are computable or relatively computable often implies that these sets can be defined in weak subsystems of second-order arithmetic. The program of reverse mathematics uses these subsystems to measure the non-computability inherent in well known mathematical theorems. In 1999, Simpson discussed many aspects of second-order arithmetic and reverse mathematics. The field of proof theory includes the study of second-order arithmetic and Peano arithmetic, as well as formal theories of the natural numbers weaker than Peano arithmetic. One method of classifying the strength of these weak systems is by characterizing which computable functions the system can prove to be total. For example, in primitive recursive arithmetic any computable function that is provably total is actually primitive recursive, while Peano arithmetic proves that functions like the Ackermann function, which are not primitive recursive, are total. Not every total computable function is provably total in Peano arithmetic, however; an example of such a function is provided by Goodstein's theorem.
Computability theory
Wikipedia
493
155414
https://en.wikipedia.org/wiki/Computability%20theory
Mathematics
Discrete mathematics
null
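The Ackermann function mentioned above is a standard example of a total computable function that is not primitive recursive, and whose totality outruns weak systems such as primitive recursive arithmetic. A direct recursive sketch follows; it grows so fast that only tiny arguments are practical.

```python
def ackermann(m: int, n: int) -> int:
    """Total and computable, but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))   # 9
print(ackermann(3, 3))   # 61
# ackermann(4, 2) already has nearly 20,000 decimal digits and is far out of reach here
```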
Name The field of mathematical logic dealing with computability and its generalizations has been called "recursion theory" since its early days. Robert I. Soare, a prominent researcher in the field, has proposed that the field should be called "computability theory" instead. He argues that Turing's terminology using the word "computable" is more natural and more widely understood than the terminology using the word "recursive" introduced by Kleene. Many contemporary researchers have begun to use this alternate terminology. These researchers also use terminology such as partial computable function and computably enumerable (c.e.) set instead of partial recursive function and recursively enumerable (r.e.) set. Not all researchers have been convinced, however, as explained by Fortnow and Simpson. Some commentators argue that both the names recursion theory and computability theory fail to convey the fact that most of the objects studied in computability theory are not computable. In 1967, Rogers suggested that a key property of computability theory is that its results and structures should be invariant under computable bijections on the natural numbers (this suggestion draws on the ideas of the Erlangen program in geometry). The idea is that a computable bijection merely renames numbers in a set, rather than indicating any structure in the set, much as a rotation of the Euclidean plane does not change any geometric aspect of lines drawn on it. Since any two infinite computable sets are linked by a computable bijection, this proposal identifies all the infinite computable sets (the finite computable sets are viewed as trivial). According to Rogers, the sets of interest in computability theory are the noncomputable sets, partitioned into equivalence classes by computable bijections of the natural numbers. Professional organizations The main professional organization for computability theory is the Association for Symbolic Logic, which holds several research conferences each year. The interdisciplinary research Association Computability in Europe (CiE) also organizes a series of annual conferences.
Computability theory
Wikipedia
439
155414
https://en.wikipedia.org/wiki/Computability%20theory
Mathematics
Discrete mathematics
null
Corrosion is a natural process that converts a refined metal into a more chemically stable oxide. It is the gradual deterioration of materials (usually a metal) by chemical or electrochemical reaction with their environment. Corrosion engineering is the field dedicated to controlling and preventing corrosion. In the most common use of the word, this means electrochemical oxidation of metal in reaction with an oxidant such as oxygen, hydrogen, or hydroxide. Rusting, the formation of red-orange iron oxides, is a well-known example of electrochemical corrosion. This type of corrosion typically produces oxides or salts of the original metal and results in a distinctive coloration. Corrosion can also occur in materials other than metals, such as ceramics or polymers, although in this context, the term "degradation" is more common. Corrosion degrades the useful properties of materials and structures including mechanical strength, appearance, and permeability to liquids and gases. Corrosion is distinguished from erosion: the former implies chemical or electrochemical degradation, the latter mechanical wear. Many structural alloys corrode merely from exposure to moisture in air, but the process can be strongly affected by exposure to certain substances. Corrosion can be concentrated locally to form a pit or crack, or it can extend across a wide area, more or less uniformly corroding the surface. Because corrosion is a diffusion-controlled process, it occurs on exposed surfaces. As a result, methods to reduce the activity of the exposed surface, such as passivation and chromate conversion, can increase a material's corrosion resistance. However, some corrosion mechanisms are less visible and less predictable. The chemistry of corrosion is complex; it can be considered an electrochemical phenomenon. During corrosion at a particular spot on the surface of an object made of iron, oxidation takes place and that spot behaves as an anode. The electrons released at this anodic spot move through the metal to another spot on the object, and reduce oxygen at that spot in the presence of H+ (which is believed to be available from carbonic acid (H2CO3) formed by the dissolution of carbon dioxide from the air into water under moist atmospheric conditions. Hydrogen ions in water may also be made available by the dissolution of other acidic oxides from the atmosphere). This spot behaves as a cathode. Galvanic corrosion
Corrosion
Wikipedia
466
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
Galvanic corrosion occurs when two different metals have physical or electrical contact with each other and are immersed in a common electrolyte, or when the same metal is exposed to electrolyte with different concentrations. In a galvanic couple, the more active metal (the anode) corrodes at an accelerated rate and the more noble metal (the cathode) corrodes at a slower rate. When immersed separately, each metal corrodes at its own rate. What type of metal(s) to use is readily determined by following the galvanic series. For example, zinc is often used as a sacrificial anode for steel structures. Galvanic corrosion is of major interest to the marine industry and also anywhere water (containing salts) contacts pipes or metal structures. Factors such as relative size of anode, types of metal, and operating conditions (temperature, humidity, salinity, etc.) affect galvanic corrosion. The surface area ratio of the anode and cathode directly affects the corrosion rates of the materials. Galvanic corrosion is often prevented by the use of sacrificial anodes. Galvanic series In any given environment (one standard medium is aerated, room-temperature seawater), one metal will be either more noble or more active than others, based on how strongly its ions are bound to the surface. Two metals in electrical contact share the same electrons, so that the "tug-of-war" at each surface is analogous to competition for free electrons between the two materials. Using the electrolyte as a host for the flow of ions in the same direction, the noble metal will take electrons from the active one. The resulting mass flow or electric current can be measured to establish a hierarchy of materials in the medium of interest. This hierarchy is called a galvanic series and is useful in predicting and understanding corrosion. Corrosion removal Often, it is possible to chemically remove the products of corrosion. For example, phosphoric acid in the form of naval jelly is often applied to ferrous tools or surfaces to remove rust. Corrosion removal should not be confused with electropolishing, which removes some layers of the underlying metal to make a smooth surface. For example, phosphoric acid may also be used to electropolish copper but it does this by removing copper, not the products of copper corrosion.
Corrosion
Wikipedia
489
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
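How a galvanic series is used can be sketched numerically. A real galvanic series is measured in a specific medium such as aerated seawater; as a stand-in, the sketch below ranks a few common metals by textbook standard reduction potentials, which gives the same qualitative answer to "which metal becomes the anode" for these pairs. The potential values are approximate and the function name is illustrative.

```python
STANDARD_POTENTIAL_V = {      # approximate standard reduction potentials, volts vs. SHE
    "magnesium": -2.37,
    "zinc": -0.76,
    "iron": -0.44,
    "nickel": -0.26,
    "copper": 0.34,
    "silver": 0.80,
}

def predict_anode(metal_a: str, metal_b: str) -> str:
    """Return the metal expected to corrode preferentially (the more active one)."""
    return min((metal_a, metal_b), key=STANDARD_POTENTIAL_V.__getitem__)

print(predict_anode("zinc", "iron"))     # zinc -- why zinc sacrificial anodes protect steel
print(predict_anode("iron", "copper"))   # iron
```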
Resistance to corrosion Some metals are more intrinsically resistant to corrosion than others (for some examples, see galvanic series). There are various ways of protecting metals from corrosion (oxidation) including painting, hot-dip galvanization, cathodic protection, and combinations of these. Intrinsic chemistry The materials most resistant to corrosion are those for which corrosion is thermodynamically unfavorable. Any corrosion products of gold or platinum tend to decompose spontaneously into pure metal, which is why these elements can be found in metallic form on Earth and have long been valued. More common "base" metals can only be protected by more temporary means. Some metals have naturally slow reaction kinetics, even though their corrosion is thermodynamically favorable. These include such metals as zinc, magnesium, and cadmium. While corrosion of these metals is continuous and ongoing, it happens at an acceptably slow rate. An extreme example is graphite, which releases large amounts of energy upon oxidation, but has such slow kinetics that it is effectively immune to electrochemical corrosion under normal conditions. Passivation Passivation refers to the spontaneous formation of an ultrathin film of corrosion products, known as a passive film, on the metal's surface that acts as a barrier to further oxidation. The chemical composition and microstructure of a passive film are different from the underlying metal. Typical passive film thickness on aluminium, stainless steels, and alloys is within 10 nanometers. The passive film is different from oxide layers that are formed upon heating and are in the micrometer thickness range – the passive film recovers if removed or damaged whereas the oxide layer does not. Passivation in natural environments such as air, water and soil at moderate pH is seen in such materials as aluminium, stainless steel, titanium, and silicon.
Corrosion
Wikipedia
371
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
Passivation is primarily determined by metallurgical and environmental factors. The effect of pH is summarized using Pourbaix diagrams, but many other factors are influential. Some conditions that inhibit passivation include high pH for aluminium and zinc, low pH or the presence of chloride ions for stainless steel, high temperature for titanium (in which case the oxide dissolves into the metal, rather than the electrolyte) and fluoride ions for silicon. On the other hand, unusual conditions may result in passivation of materials that are normally unprotected, as the alkaline environment of concrete does for steel rebar. Exposure to a liquid metal such as mercury or hot solder can often circumvent passivation mechanisms. It has been shown using electrochemical scanning tunneling microscopy that during iron passivation, an n-type semiconductor Fe(III) oxide grows at the interface with the metal, leading to the buildup of an electronic barrier opposing electron flow and an electronic depletion region that prevents further oxidation reactions. These results indicate a mechanism of "electronic passivation". The electronic properties of this semiconducting oxide film also provide a mechanistic explanation of corrosion mediated by chloride, which creates surface states at the oxide surface that lead to electronic breakthrough, restoration of anodic currents, and disruption of the electronic passivation mechanism. Corrosion in passivated materials Passivation is extremely useful in mitigating corrosion damage; however, even a high-quality alloy will corrode if its ability to form a passivating film is hindered. Proper selection of the right grade of material for the specific environment is important for the long-lasting performance of this group of materials. If breakdown occurs in the passive film due to chemical or mechanical factors, the resulting major modes of corrosion may include pitting corrosion, crevice corrosion, and stress corrosion cracking. Pitting corrosion
Corrosion
Wikipedia
380
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
Certain conditions, such as low concentrations of oxygen or high concentrations of species such as chloride, which compete as anions, can interfere with a given alloy's ability to re-form a passivating film. In the worst case, almost all of the surface will remain protected, but tiny local fluctuations will degrade the oxide film at a few critical points. Corrosion at these points will be greatly amplified, and can cause corrosion pits of several types, depending upon conditions. While the corrosion pits only nucleate under fairly extreme circumstances, they can continue to grow even when conditions return to normal, since the interior of a pit is naturally deprived of oxygen and locally the pH decreases to very low values and the corrosion rate increases due to an autocatalytic process. In extreme cases, the sharp tips of extremely long and narrow corrosion pits can cause stress concentration to the point that otherwise tough alloys can shatter; a thin film pierced by an invisibly small hole can hide a thumb-sized pit from view. These problems are especially dangerous because they are difficult to detect before a part or structure fails. Pitting remains among the most common and damaging forms of corrosion in passivated alloys, but it can be prevented by control of the alloy's environment. Pitting results when a small hole, or cavity, forms in the metal, usually as a result of de-passivation of a small area. This area becomes anodic, while part of the remaining metal becomes cathodic, producing a localized galvanic reaction. The deterioration of this small area penetrates the metal and can lead to failure. This form of corrosion is often difficult to detect because it is usually relatively small and may be covered and hidden by corrosion-produced compounds. Weld decay and knifeline attack Stainless steel can pose special corrosion challenges, since its passivating behavior relies on the presence of a major alloying component (chromium, at least 11.5%). Because of the elevated temperatures of welding and heat treatment, chromium carbides can form in the grain boundaries of stainless alloys. This chemical reaction robs the material of chromium in the zone near the grain boundary, making those areas much less resistant to corrosion. This creates a galvanic couple with the well-protected alloy nearby, which leads to "weld decay" (corrosion of the grain boundaries in the heat affected zones) in highly corrosive environments. This process can seriously reduce the mechanical strength of welded joints over time.
Corrosion
Wikipedia
506
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
A stainless steel is said to be "sensitized" if chromium carbides are formed in the microstructure. A typical microstructure of a normalized type 304 stainless steel shows no signs of sensitization, while a heavily sensitized steel shows the presence of grain boundary precipitates. The dark lines in the sensitized microstructure are networks of chromium carbides formed along the grain boundaries. Special alloys, either with low carbon content or with added carbon "getters" such as titanium and niobium (in types 321 and 347, respectively), can prevent this effect, but the latter require special heat treatment after welding to prevent the similar phenomenon of "knifeline attack". As its name implies, corrosion is limited to a very narrow zone adjacent to the weld, often only a few micrometers across, making it even less noticeable. Crevice corrosion Crevice corrosion is a localized form of corrosion occurring in confined spaces (crevices), to which the access of the working fluid from the environment is limited. Formation of a differential aeration cell leads to corrosion inside the crevices. Examples of crevices are gaps and contact areas between parts, under gaskets or seals, inside cracks and seams, spaces filled with deposits, and under sludge piles. Crevice corrosion is influenced by the crevice type (metal-metal, metal-non-metal), crevice geometry (size, surface finish), and metallurgical and environmental factors. The susceptibility to crevice corrosion can be evaluated with ASTM standard procedures. A critical crevice corrosion temperature is commonly used to rank a material's resistance to crevice corrosion.
Corrosion
Wikipedia
366
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
Hydrogen grooving In the chemical industry, hydrogen grooving is the corrosion of piping at grooves created by the interaction of a corrosive agent, corroded pipe constituents, and hydrogen gas bubbles. For example, when sulfuric acid (H2SO4) flows through steel pipes, the iron in the steel reacts with the acid to form a passivation coating of iron sulfate (FeSO4) and hydrogen gas (H2). The iron sulfate coating will protect the steel from further reaction; however, if hydrogen bubbles contact this coating, it will be removed. Thus, a groove can be formed by a travelling bubble, exposing more steel to the acid and causing a vicious cycle. The grooving is exacerbated by the tendency of subsequent bubbles to follow the same path. High-temperature corrosion High-temperature corrosion is chemical deterioration of a material (typically a metal) as a result of heating. This non-galvanic form of corrosion can occur when a metal is subjected to a hot atmosphere containing oxygen, sulfur ("sulfidation"), or other compounds capable of oxidizing (or assisting the oxidation of) the material concerned. For example, materials used in aerospace, power generation, and even in car engines must resist sustained periods at high temperature, during which they may be exposed to an atmosphere containing the potentially highly corrosive products of combustion. Some products of high-temperature corrosion can be turned to the advantage of the engineer. The formation of oxides on stainless steels, for example, can provide a protective layer preventing further atmospheric attack, allowing a material to be used for sustained periods at both room and high temperatures in hostile conditions. Such high-temperature corrosion products, in the form of compacted oxide layer glazes, prevent or reduce wear during high-temperature sliding contact of metallic (or metallic and ceramic) surfaces. Thermal oxidation is also commonly used to produce controlled oxide nanostructures, including nanowires and thin films. Microbial corrosion
Corrosion
Wikipedia
402
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
Microbial corrosion, or commonly known as microbiologically influenced corrosion (MIC), is a corrosion caused or promoted by microorganisms, usually chemoautotrophs. It can apply to both metallic and non-metallic materials, in the presence or absence of oxygen. Sulfate-reducing bacteria are active in the absence of oxygen (anaerobic); they produce hydrogen sulfide, causing sulfide stress cracking. In the presence of oxygen (aerobic), some bacteria may directly oxidize iron to iron oxides and hydroxides, other bacteria oxidize sulfur and produce sulfuric acid causing biogenic sulfide corrosion. Concentration cells can form in the deposits of corrosion products, leading to localized corrosion. Accelerated low-water corrosion (ALWC) is a particularly aggressive form of MIC that affects steel piles in seawater near the low water tide mark. It is characterized by an orange sludge, which smells of hydrogen sulfide when treated with acid. Corrosion rates can be very high and design corrosion allowances can soon be exceeded leading to premature failure of the steel pile. Piles that have been coated and have cathodic protection installed at the time of construction are not susceptible to ALWC. For unprotected piles, sacrificial anodes can be installed locally to the affected areas to inhibit the corrosion or a complete retrofitted sacrificial anode system can be installed. Affected areas can also be treated using cathodic protection, using either sacrificial anodes or applying current to an inert anode to produce a calcareous deposit, which will help shield the metal from further attack. Metal dusting Metal dusting is a catastrophic form of corrosion that occurs when susceptible materials are exposed to environments with high carbon activities, such as synthesis gas and other high-CO environments. The corrosion manifests itself as a break-up of bulk metal to metal powder. The suspected mechanism is firstly the deposition of a graphite layer on the surface of the metal, usually from carbon monoxide (CO) in the vapor phase. This graphite layer is then thought to form metastable M3C species (where M is the metal), which migrate away from the metal surface. However, in some regimes, no M3C species is observed indicating a direct transfer of metal atoms into the graphite layer. Protection from corrosion
Corrosion
Wikipedia
480
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
Various treatments are used to slow corrosion damage to metallic objects which are exposed to the weather, salt water, acids, or other hostile environments. Some unprotected metallic alloys are extremely vulnerable to corrosion, such as those used in neodymium magnets, which can spall or crumble into powder even in dry, temperature-stable indoor environments unless properly treated. Surface treatments When surface treatments are used to reduce corrosion, great care must be taken to ensure complete coverage, without gaps, cracks, or pinhole defects. Small defects can act as an "Achilles' heel", allowing corrosion to penetrate the interior and causing extensive damage even while the outer protective layer remains apparently intact for a period of time. Applied coatings Plating, painting, and the application of enamel are the most common anti-corrosion treatments. They work by providing a barrier of corrosion-resistant material between the damaging environment and the structural material. Aside from cosmetic and manufacturing issues, there may be tradeoffs in mechanical flexibility versus resistance to abrasion and high temperature. Platings usually fail only in small sections, but if the plating is more noble than the substrate (for example, chromium on steel), a galvanic couple will cause any exposed area to corrode much more rapidly than an unplated surface would. For this reason, it is often wise to plate with active metal such as zinc or cadmium. If the zinc coating is not thick enough the surface soon becomes unsightly with rusting obvious. The design life is directly related to the metal coating thickness. Painting either by roller or brush is more desirable for tight spaces; spray would be better for larger coating areas such as steel decks and waterfront applications. Flexible polyurethane coatings, like Durabak-M26 for example, can provide an anti-corrosive seal with a highly durable slip resistant membrane. Painted coatings are relatively easy to apply and have fast drying times although temperature and humidity may cause dry times to vary.
Corrosion
Wikipedia
407
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
Reactive coatings If the environment is controlled (especially in recirculating systems), corrosion inhibitors can often be added to it. These chemicals form an electrically insulating or chemically impermeable coating on exposed metal surfaces to suppress electrochemical reactions. Such methods make the system less sensitive to scratches or defects in the coating, since extra inhibitors can be made available wherever metal becomes exposed. Chemicals that inhibit corrosion include some of the salts in hard water (Roman water systems are known for their mineral deposits), chromates, phosphates, polyaniline, other conducting polymers, and a wide range of specially designed chemicals that resemble surfactants (i.e., long-chain organic molecules with ionic end groups). Anodization Aluminium alloys often undergo anodization, an electrolytic surface treatment in which the part is made the anode of an electrochemical cell so that its natural oxide film thickens. Electrochemical conditions in the bath are carefully adjusted so that uniform pores, several nanometers wide, appear in the metal's oxide film. These pores allow the oxide to grow much thicker than passivating conditions would allow. At the end of the treatment, the pores are allowed to seal, forming a harder-than-usual surface layer. If this coating is scratched, normal passivation processes take over to protect the damaged area. Anodizing is very resilient to weathering and corrosion, so it is commonly used for building facades and other areas where the surface will come into regular contact with the elements. Although resilient, an anodized surface must be cleaned frequently; if left without cleaning, panel edge staining will naturally occur. Biofilm coatings A new form of protection has been developed by applying certain species of bacterial films to the surface of metals in highly corrosive environments. This process increases the corrosion resistance substantially. Alternatively, antimicrobial-producing biofilms can be used to inhibit mild steel corrosion from sulfate-reducing bacteria. Controlled permeability formwork Controlled permeability formwork (CPF) is a method of preventing the corrosion of reinforcement by naturally enhancing the durability of the cover during concrete placement. CPF has been used in environments to combat the effects of carbonation, chlorides, frost, and abrasion. Cathodic protection
Corrosion
Wikipedia
471
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
Cathodic protection (CP) is a technique to control the corrosion of a metal surface by making it the cathode of an electrochemical cell. Cathodic protection systems are most commonly used to protect steel pipelines and tanks; steel pier piles, ships, and offshore oil platforms. Sacrificial anode protection For effective CP, the potential of the steel surface is polarized (pushed) more negative until the metal surface has a uniform potential. With a uniform potential, the driving force for the corrosion reaction is halted. For galvanic CP systems, the anode material corrodes under the influence of the steel, and eventually it must be replaced. The polarization is caused by the current flow from the anode to the cathode, driven by the difference in electrode potential between the anode and the cathode. The most common sacrificial anode materials are aluminum, zinc, magnesium and related alloys. Aluminum has the highest capacity, and magnesium has the highest driving voltage and is thus used where resistance is higher. Zinc is general purpose and the basis for galvanizing. A number of problems are associated with sacrificial anodes. Among these, from an environmental perspective, is the release of zinc, magnesium, aluminum and heavy metals such as cadmium into the environment including seawater. From a working perspective, sacrificial anodes systems are considered to be less precise than modern cathodic protection systems such as Impressed Current Cathodic Protection (ICCP) systems. Their ability to provide requisite protection has to be checked regularly by means of underwater inspection by divers. Furthermore, as they have a finite lifespan, sacrificial anodes need to be replaced regularly over time. Impressed current cathodic protection For larger structures, galvanic anodes cannot economically deliver enough current to provide complete protection. Impressed current cathodic protection (ICCP) systems use anodes connected to a DC power source (such as a cathodic protection rectifier). Anodes for ICCP systems are tubular and solid rod shapes of various specialized materials. These include high silicon cast iron, graphite, mixed metal oxide or platinum coated titanium or niobium coated rod and wires. Anodic protection
Corrosion
Wikipedia
459
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
Anodic protection impresses anodic current on the structure to be protected (opposite to cathodic protection). It is appropriate for metals that exhibit passivity (e.g. stainless steel) and a suitably small passive current over a wide range of potentials. It is used in aggressive environments, such as solutions of sulfuric acid. Anodic protection is an electrochemical method of corrosion protection that works by keeping the metal in its passive state. Rate of corrosion The formation of an oxide layer is described by the Deal–Grove model, which is used to predict and control oxide layer formation in diverse situations. A simple test for measuring corrosion is the weight loss method. The method involves exposing a clean weighed piece of the metal or alloy to the corrosive environment for a specified time, followed by cleaning to remove corrosion products and weighing the piece to determine the loss of weight. The rate of corrosion (R) is calculated as R = kW / (ρAt), where k is a constant, W is the weight loss of the metal in time t, A is the surface area of the metal exposed, and ρ is the density of the metal (in g/cm3). Other common expressions for the corrosion rate are penetration depth and change in mechanical properties. Economic impact In 2002, the US Federal Highway Administration released a study titled "Corrosion Costs and Preventive Strategies in the United States" on the direct costs associated with metallic corrosion in US industry. In 1998, the total annual direct cost of corrosion in the US was roughly $276 billion (or 3.2% of the US gross domestic product at the time). Broken down into five specific industries, the economic losses are $22.6 billion in infrastructure, $17.6 billion in production and manufacturing, $29.7 billion in transportation, $20.1 billion in government, and $47.9 billion in utilities.
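As an illustration of the weight-loss calculation above, the short Python sketch below evaluates R = kW/(ρAt). The unit constant (87,600 for a result in mm/year when W is in grams, A in cm2, t in hours and ρ in g/cm3) and the coupon numbers are assumptions chosen for the example, not values taken from this article.

# Corrosion rate from the weight-loss method, R = k*W / (rho * A * t).
# k = 87,600 is assumed here so that the result comes out in mm/year when
# W is in grams, A in cm^2, t in hours and rho in g/cm^3.
def corrosion_rate_mm_per_year(weight_loss_g, area_cm2, time_hours, density_g_cm3, k=87600.0):
    """Return the corrosion rate in mm/year for a weight-loss coupon test."""
    return k * weight_loss_g / (density_g_cm3 * area_cm2 * time_hours)

# Hypothetical mild-steel coupon: 0.25 g lost over 30 days (720 h),
# exposed area 20 cm^2, density 7.87 g/cm^3.
print(round(corrosion_rate_mm_per_year(0.25, 20.0, 720.0, 7.87), 3))  # ~0.193 mm/year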
Corrosion
Wikipedia
366
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
Rust is one of the most common causes of bridge accidents. As rust displaces a much higher volume than the originating mass of iron, its build-up can also cause failure by forcing apart adjacent components. It was the cause of the collapse of the Mianus River Bridge in 1983, when support bearings rusted internally and pushed one corner of the road slab off its support. Three drivers on the roadway at the time died as the slab fell into the river below. The subsequent NTSB investigation showed that a drain in the road had been blocked for road re-surfacing and had not been unblocked; as a result, runoff water penetrated the support hangers. Rust was also an important factor in the Silver Bridge disaster of 1967 in West Virginia, when a steel suspension bridge collapsed within a minute, killing 46 drivers and passengers who were on the bridge at the time. Similarly, corrosion of concrete-covered steel and iron can cause the concrete to spall, creating severe structural problems. It is one of the most common failure modes of reinforced concrete bridges. Measuring instruments based on the half-cell potential can detect potential corrosion spots before total failure of the concrete structure is reached. Until 20–30 years ago, galvanized steel pipe was used extensively in the potable water systems of single- and multi-family residences as well as commercial and public construction. Today, these systems have long since consumed their protective zinc and are corroding internally, resulting in poor water quality and pipe failures. The economic impact on homeowners, condo dwellers, and the public infrastructure is estimated at $22 billion as the insurance industry braces for a wave of claims due to pipe failures. Corrosion in nonmetals Most ceramic materials are almost entirely immune to corrosion. The strong chemical bonds that hold them together leave very little free chemical energy in the structure; they can be thought of as already corroded. When corrosion does occur, it is almost always a simple dissolution of the material or a chemical reaction, rather than an electrochemical process. A common example of corrosion protection in ceramics is the lime added to soda–lime glass to reduce its solubility in water; though it is not nearly as soluble as pure sodium silicate, normal glass does form sub-microscopic flaws when exposed to moisture. Because of glass's brittleness, such flaws cause a dramatic reduction in the strength of a glass object during its first few hours at room temperature. Corrosion of polymers
Corrosion
Wikipedia
494
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
Polymer degradation involves several complex and often poorly understood physicochemical processes. These are strikingly different from the other processes discussed here, and so the term "corrosion" is only applied to them in a loose sense of the word. Because of their large molecular weight, very little entropy can be gained by mixing a given mass of polymer with another substance, making polymers generally quite difficult to dissolve. While dissolution is a problem in some polymer applications, it is relatively simple to design against. A more common and related problem is "swelling", where small molecules infiltrate the structure, reducing strength and stiffness and causing a volume change. Conversely, many polymers (notably flexible vinyl) are intentionally swelled with plasticizers, which can be leached out of the structure, causing brittleness or other undesirable changes. The most common form of degradation, however, is a decrease in polymer chain length. Mechanisms which break polymer chains are familiar to biologists because of their effect on DNA: ionizing radiation (most commonly ultraviolet light), free radicals, and oxidizers such as oxygen, ozone, and chlorine. Ozone cracking is a well-known problem affecting natural rubber, for example. Plastic additives can slow these processes very effectively, and can be as simple as a UV-absorbing pigment (e.g., titanium dioxide or carbon black). Plastic shopping bags often do not include these additives so that they break down more easily into ultrafine particles of litter. Corrosion of glass Glass is characterized by a high degree of corrosion resistance. Because of its high water resistance, it is often used as a primary packaging material in the pharmaceutical industry, since most medicines are preserved in aqueous solution. Besides its water resistance, glass is also robust when exposed to certain chemically aggressive liquids or gases. Glass disease is the corrosion of silicate glasses in aqueous solutions. It is governed by two mechanisms: diffusion-controlled leaching (ion exchange) and hydrolytic dissolution of the glass network. Both mechanisms strongly depend on the pH of the contacting solution: the rate of ion exchange decreases with pH as 10^(−0.5·pH), whereas the rate of hydrolytic dissolution increases with pH as 10^(0.5·pH).
Corrosion
Wikipedia
449
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
Mathematically, corrosion rates of glasses are characterized by normalized corrosion rates of elements NRi (g/(cm2·d)), determined as the ratio of the total amount of a released species mi (g) to the water-contacting surface area S (cm2), the time of contact t (days), and the weight fraction content of the element in the glass fi: NRi = mi / (S·t·fi). The overall corrosion rate is a sum of contributions from both mechanisms (leaching + dissolution): NRi = NRxi + NRhi. Diffusion-controlled leaching (ion exchange) is characteristic of the initial phase of corrosion and involves replacement of alkali ions in the glass by a hydronium (H3O+) ion from the solution. It causes an ion-selective depletion of near-surface layers of glasses and gives an inverse-square-root dependence of corrosion rate on exposure time. The diffusion-controlled normalized leaching rate of cations from glasses (g/(cm2·d)) is given by NRxi = 2ρ·(Di/(π·t))^(1/2), where t is time, Di is the i-th cation effective diffusion coefficient (cm2/d), which depends on the pH of the contacting water as Di ∝ 10^(−pH) (consistent with the 10^(−0.5·pH) dependence of the leaching rate noted above), and ρ is the density of the glass (g/cm3). Glass network dissolution is characteristic of the later phases of corrosion and causes a congruent release of ions into the water solution at a time-independent rate in dilute solutions (g/(cm2·d)): NRhi = ρ·rh, where rh is the stationary hydrolysis (dissolution) rate of the glass (cm/d). In closed systems, the consumption of protons from the aqueous phase increases the pH and causes a fast transition to hydrolysis. However, a further saturation of the solution with silica impedes hydrolysis and causes the glass to return to an ion-exchange (i.e., diffusion-controlled) regime of corrosion. In typical natural conditions, normalized corrosion rates of silicate glasses are very low and are of the order of 10^(−7) to 10^(−5) g/(cm2·d). The very high durability of silicate glasses in water makes them suitable for hazardous and nuclear waste immobilisation. Glass corrosion tests There exist numerous standardized procedures for measuring the corrosion (also called chemical durability) of glasses in neutral, basic, and acidic environments, under simulated environmental conditions, in simulated body fluid, at high temperature and pressure, and under other conditions.
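To make the two contributions concrete, the following Python sketch evaluates the leaching and dissolution terms as reconstructed above. The density, diffusion coefficient, and hydrolysis rate are placeholder values chosen only for illustration, not data from this article.

import math

# Sketch of the two glass-corrosion contributions, assuming the reconstructed
# forms NRx_i = 2*rho*sqrt(D_i/(pi*t)) for ion-exchange leaching and
# NRh_i = rho*r_h for network dissolution. All numbers below are illustrative
# placeholders.
def leaching_rate(rho_g_cm3, D_cm2_per_day, t_days):
    """Diffusion-controlled normalized leaching rate, g/(cm^2*day)."""
    return 2.0 * rho_g_cm3 * math.sqrt(D_cm2_per_day / (math.pi * t_days))

def dissolution_rate(rho_g_cm3, r_h_cm_per_day):
    """Hydrolytic (network dissolution) normalized rate, g/(cm^2*day)."""
    return rho_g_cm3 * r_h_cm_per_day

rho = 2.5    # glass density, g/cm^3
D = 1e-16    # effective cation diffusion coefficient, cm^2/day (assumed)
r_h = 1e-8   # stationary hydrolysis rate, cm/day (assumed)

for t in (1, 10, 100, 1000):   # days
    total = leaching_rate(rho, D, t) + dissolution_rate(rho, r_h)
    print(f"t = {t:5d} d  NR = {total:.3e} g/(cm^2*d)")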
Corrosion
Wikipedia
483
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
The standard procedure ISO 719 describes a test of the extraction of water-soluble basic compounds under neutral conditions: 2 g of glass, particle size 300–500 μm, is kept for 60 min in 50 mL de-ionized water of grade 2 at 98 °C; 25 mL of the obtained solution is titrated against 0.01 mol/L HCl solution. The volume of HCl required for neutralization is classified according to the table below. The standardized test ISO 719 is not suitable for glasses with few or no extractable alkaline components which are nonetheless attacked by water, e.g., quartz glass, B2O3 glass or P2O5 glass. Usual glasses are differentiated into the following classes: Hydrolytic class 1 (Type I): This class, which is also called neutral glass, includes borosilicate glasses (e.g., Duran, Pyrex, Fiolax). Glass of this class contains essential quantities of boron oxides, aluminium oxides and alkaline earth oxides. Through its composition, neutral glass has a high resistance against temperature shocks and the highest hydrolytic resistance. Against acid and neutral solutions it shows high chemical resistance because of its poor alkali content; against alkaline solutions its resistance is lower. Hydrolytic class 2 (Type II): This class usually contains sodium silicate glasses with a high hydrolytic resistance achieved through surface finishing. Sodium silicate glass is a silicate glass which contains alkali and alkaline earth oxides, primarily sodium oxide and calcium oxide. Hydrolytic class 3 (Type III): Glass of the 3rd hydrolytic class usually contains sodium silicate glasses and has a mean hydrolytic resistance, which is two times poorer than that of type 1 glasses. The acid class DIN 12116 and the alkali class DIN 52322 (ISO 695) are to be distinguished from the hydrolytic class DIN 12111 (ISO 719).
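The titration result is conventionally converted into an amount of extracted alkali expressed as Na2O per gram of glass. The Python snippet below performs that conversion under stated assumptions: the stoichiometry (one mole of Na2O neutralizes two moles of HCl) is ordinary acid-base chemistry, and the 25 mL aliquot of a 2 g / 50 mL extract is taken to correspond to 1 g of glass; the thresholds that map the result onto hydrolytic classes come from the standard's table and are not reproduced here.

# Convert the ISO 719 titration volume into extracted alkali, expressed as
# micrograms of Na2O per gram of glass (assumptions as stated above).
M_NA2O = 61.98          # g/mol
HCL_MOLARITY = 0.01     # mol/L, as specified in the test
ALIQUOT_GLASS_G = 1.0   # 25 mL aliquot of a 2 g / 50 mL extract ~ 1 g glass

def na2o_equivalent_ug_per_g(hcl_volume_ml):
    """Extracted alkali as ug Na2O per gram of glass for a given HCl consumption."""
    mol_hcl = HCL_MOLARITY * hcl_volume_ml / 1000.0
    mol_na2o = mol_hcl / 2.0          # Na2O + 2 HCl -> 2 NaCl + H2O
    return mol_na2o * M_NA2O * 1e6 / ALIQUOT_GLASS_G

print(round(na2o_equivalent_ug_per_g(0.10), 1))   # 0.10 mL HCl -> ~31 ug Na2O per g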
Corrosion
Wikipedia
407
155443
https://en.wikipedia.org/wiki/Corrosion
Physical sciences
Chemical reactions
null
Heritability is a statistic used in the fields of breeding and genetics that estimates the degree of variation in a phenotypic trait in a population that is due to genetic variation between individuals in that population. The concept of heritability can be expressed in the form of the following question: "What is the proportion of the variation in a given trait within a population that is not explained by the environment or random chance?" Other causes of measured variation in a trait are characterized as environmental factors, including observational error. In human studies of heritability these are often apportioned into factors from "shared environment" and "non-shared environment" based on whether they tend to result in persons brought up in the same household being more or less similar to persons who were not. Heritability is estimated by comparing individual phenotypic variation among related individuals in a population, by examining the association between individual phenotype and genotype data, or even by modeling summary-level data from genome-wide association studies (GWAS). Heritability is an important concept in quantitative genetics, particularly in selective breeding and behavior genetics (for instance, twin studies). It is the source of much confusion due to the fact that its technical definition is different from its commonly-understood folk definition. Therefore, its use conveys the incorrect impression that behavioral traits are "inherited" or specifically passed down through the genes. Behavioral geneticists also conduct heritability analyses based on the assumption that genes and environments contribute in a separate, additive manner to behavioral traits. Overview
Heritability
Wikipedia
314
155624
https://en.wikipedia.org/wiki/Heritability
Biology and health sciences
Genetics
Biology
Heritability measures the fraction of phenotype variability that can be attributed to genetic variation. This is not the same as saying that this fraction of an individual phenotype is caused by genetics. For example, it is incorrect to say that since the heritability of personality traits is about 0.6, that means that 60% of your personality is inherited from your parents and 40% comes from the environment. In addition, heritability can change without any genetic change occurring, such as when the environment starts contributing to more variation. As a case in point, consider that both genes and environment have the potential to influence intelligence. Heritability could increase if genetic variation increases, causing individuals to show more phenotypic variation, like showing different levels of intelligence. On the other hand, heritability might also increase if the environmental variation decreases, causing individuals to show less phenotypic variation, like showing more similar levels of intelligence. Heritability increases when genetics are contributing more variation or because non-genetic factors are contributing less variation; what matters is the relative contribution. Heritability is specific to a particular population in a particular environment. High heritability of a trait, consequently, does not necessarily mean that the trait is not very susceptible to environmental influences. Heritability can also change as a result of changes in the environment, migration, inbreeding, or how heritability itself is measured in the population under study. The heritability of a trait should not be interpreted as a measure of the extent to which said trait is genetically determined in an individual. The extent of dependence of phenotype on environment can also be a function of the genes involved. Matters of heritability are complicated because genes may canalize a phenotype, making its expression almost inevitable in all occurring environments. Individuals with the same genotype can also exhibit different phenotypes through a mechanism called phenotypic plasticity, which makes heritability difficult to measure in some cases. Recent insights in molecular biology have identified changes in transcriptional activity of individual genes associated with environmental changes. However, there are a large number of genes whose transcription is not affected by the environment.
Heritability
Wikipedia
441
155624
https://en.wikipedia.org/wiki/Heritability
Biology and health sciences
Genetics
Biology
Estimates of heritability use statistical analyses to help to identify the causes of differences between individuals. Since heritability is concerned with variance, it is necessarily an account of the differences between individuals in a population. Heritability can be univariate – examining a single trait – or multivariate – examining the genetic and environmental associations between multiple traits at once. This allows a test of the genetic overlap between different phenotypes: for instance hair color and eye color. Environment and genetics may also interact, and heritability analyses can test for and examine these interactions (GxE models). A prerequisite for heritability analyses is that there is some population variation to account for. This last point highlights the fact that heritability cannot take into account the effect of factors which are invariant in the population. Factors may be invariant if they are absent and do not exist in the population, such as when no one has access to a particular antibiotic, or because they are omnipresent, as when everyone is drinking coffee. In practice, all human behavioral traits vary and almost all traits show some heritability. Definition Any particular phenotype can be modeled as the sum of genetic and environmental effects: Phenotype (P) = Genotype (G) + Environment (E). Likewise the phenotypic variance in the trait – Var (P) – is the sum of effects as follows: Var(P) = Var(G) + Var(E) + 2 Cov(G,E). In a planned experiment Cov(G,E) can be controlled and held at 0. In this case, heritability, H2, is defined as H2 = Var(G) / Var(P). H2 is the broad-sense heritability. This reflects all the genetic contributions to a population's phenotypic variance including additive, dominant, and epistatic (multi-genic interactions), as well as maternal and paternal effects, where individuals are directly affected by their parents' phenotype, such as with milk production in mammals.
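As a numerical illustration of the variance-ratio definition, the short Python simulation below draws genetic and environmental effects with Cov(G,E) = 0 and recovers H2 = Var(G)/Var(P). The variance values are arbitrary illustrations, not estimates for any real trait.

import random
import statistics

# Toy illustration of broad-sense heritability H2 = Var(G) / Var(P) under the
# additive model P = G + E with Cov(G, E) = 0.
random.seed(0)
n = 100_000
genetic = [random.gauss(0.0, 1.0) for _ in range(n)]        # Var(G) = 1.0
environment = [random.gauss(0.0, 1.5) for _ in range(n)]    # Var(E) = 2.25
phenotype = [g + e for g, e in zip(genetic, environment)]

var_g = statistics.pvariance(genetic)
var_p = statistics.pvariance(phenotype)
print(f"H2 = {var_g / var_p:.2f}  (expected 1.0 / 3.25 = 0.31)")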
Heritability
Wikipedia
414
155624
https://en.wikipedia.org/wiki/Heritability
Biology and health sciences
Genetics
Biology
A particularly important component of the genetic variance is the additive variance, Var(A), which is the variance due to the average effects (additive effects) of the alleles. Since each parent passes a single allele per locus to each offspring, parent-offspring resemblance depends upon the average effect of single alleles. Additive variance represents, therefore, the genetic component of variance responsible for parent-offspring resemblance. The additive genetic portion of the phenotypic variance is known as narrow-sense heritability and is defined as h2 = Var(A) / Var(P). An upper case H2 is used to denote broad sense, and lower case h2 for narrow sense. For traits which are not continuous but dichotomous, such as an additional toe or certain diseases, the contribution of the various alleles can be considered to be a sum which, past a threshold, manifests itself as the trait, giving the liability threshold model in which heritability can be estimated and selection modeled. Additive variance is important for selection. If a selective pressure such as improving livestock is exerted, the response of the trait is directly related to narrow-sense heritability. The mean of the trait will increase in the next generation as a function of how much the mean of the selected parents differs from the mean of the population from which the selected parents were chosen. The observed response to selection leads to an estimate of the narrow-sense heritability (called realized heritability). This is the principle underlying artificial selection or breeding. Example The simplest genetic model involves a single locus with two alleles (b and B) affecting one quantitative phenotype. The number of B alleles can be 0, 1, or 2. For any genotype, (Bi,Bj), where Bi and Bj are either 0 or 1, the expected phenotype can then be written as the sum of the overall mean, a linear effect, and a dominance deviation (one can think of the dominance term as an interaction between Bi and Bj): P(Bi,Bj) = μ + αi + αj + dij. The additive genetic variance at this locus is the weighted average of the squares of the additive effects: Var(A) = Σij fij (αi + αj)2, where fij is the frequency of genotype (Bi,Bj) and the additive effects are expressed as deviations from the overall mean. There is a similar relationship for the variance of dominance deviations: Var(D) = Σij fij dij2, where the dominance deviations are likewise expressed as deviations from the mean. The linear regression of phenotype on genotype is shown in Figure 1.
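For the single-locus case, the standard textbook (Falconer) parameterization gives closed-form expressions for the additive and dominance variances. The Python sketch below uses that parameterization (genotypic values −a, d, +a for bb, Bb, BB and allele frequency p for B); it is offered as an illustration under those assumptions rather than as the exact notation of this article.

# Additive and dominance variance for one biallelic locus, using the standard
# quantitative-genetics parameterization (genotypic values -a, d, +a and
# allele frequency p for B).
def single_locus_variances(p, a, d):
    """Return (V_A, V_D) for allele frequency p of B and genotypic values -a, d, +a."""
    q = 1.0 - p
    v_a = 2.0 * p * q * (a + d * (q - p)) ** 2   # additive variance
    v_d = (2.0 * p * q * d) ** 2                 # dominance variance
    return v_a, v_d

# Example: p = 0.3, additive value a = 1.0, partial dominance d = 0.5.
v_a, v_d = single_locus_variances(0.3, 1.0, 0.5)
print(f"V_A = {v_a:.3f}, V_D = {v_d:.3f}, total genetic variance = {v_a + v_d:.3f}")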
Heritability
Wikipedia
454
155624
https://en.wikipedia.org/wiki/Heritability
Biology and health sciences
Genetics
Biology
Assumptions Estimates of the total heritability of human traits assume the absence of epistasis, which has been called the "assumption of additivity". Although some researchers have cited such estimates in support of the existence of "missing heritability" unaccounted for by known genetic loci, the assumption of additivity may render these estimates invalid. There is also some empirical evidence that the additivity assumption is frequently violated in behavior genetic studies of adolescent intelligence and academic achievement. Estimating heritability Since only P can be observed or measured directly, heritability must be estimated from the similarities observed in subjects varying in their level of genetic or environmental similarity. The statistical analyses required to estimate the genetic and environmental components of variance depend on the sample characteristics. Briefly, better estimates are obtained using data from individuals with widely varying levels of genetic relationship - such as twins, siblings, parents and offspring, rather than from more distantly related (and therefore less similar) subjects. The standard error for heritability estimates is improved with large sample sizes. In non-human populations it is often possible to collect information in a controlled way. For example, among farm animals it is easy to arrange for a bull to produce offspring from a large number of cows and to control environments. Such experimental control is generally not possible when gathering human data, relying on naturally occurring relationships and environments. In classical quantitative genetics, there were two schools of thought regarding estimation of heritability. One school of thought was developed by Sewall Wright at The University of Chicago, and further popularized by C. C. Li (University of Chicago) and J. L. Lush (Iowa State University). It is based on the analysis of correlations and, by extension, regression. Path Analysis was developed by Sewall Wright as a way of estimating heritability. The second was originally developed by R. A. Fisher and expanded at The University of Edinburgh, Iowa State University, and North Carolina State University, as well as other schools. It is based on the analysis of variance of breeding studies, using the intraclass correlation of relatives. Various methods of estimating components of variance (and, hence, heritability) from ANOVA are used in these analyses. Today, heritability can be estimated from general pedigrees using linear mixed models and from genomic relatedness estimated from genetic markers.
Heritability
Wikipedia
483
155624
https://en.wikipedia.org/wiki/Heritability
Biology and health sciences
Genetics
Biology
Studies of human heritability often utilize adoption study designs, often with identical twins who have been separated early in life and raised in different environments. Such individuals have identical genotypes and can be used to separate the effects of genotype and environment. A limit of this design is the common prenatal environment and the relatively low number of twins reared apart. A second and more common design is the twin study, in which the similarity of identical and fraternal twins is used to estimate heritability. These studies can be limited by the fact that identical twins are not completely genetically identical, potentially resulting in an underestimation of heritability. In observational studies, or because of evocative effects (where a genome evokes environments by its effect on them), G and E may covary: gene–environment correlation. Depending on the methods used to estimate heritability, correlations between genetic factors and shared or non-shared environments may or may not be confounded with heritability. Regression/correlation methods of estimation The first school of estimation uses regression and correlation to estimate heritability. Comparison of close relatives In the comparison of relatives, we find that in general, h2 = b / r = t / r, where r can be thought of as the coefficient of relatedness, b is the coefficient of regression and t is the coefficient of correlation. Parent-offspring regression Heritability may be estimated by comparing parent and offspring traits (as in Fig. 2). The slope of the line (0.57) approximates the heritability of the trait when offspring values are regressed against the average trait in the parents. If only one parent's value is used then heritability is twice the slope. (This is the source of the term "regression," since the offspring values always tend to regress to the mean value for the population, i.e., the slope is always less than one). This regression effect also underlies the DeFries–Fulker method for analyzing twins selected for one member being affected.
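The midparent-offspring regression can be illustrated with a small simulation. The Python sketch below generates data under a purely additive model (true h2 = 0.6, chosen arbitrarily) and recovers h2 as the slope of offspring phenotype on the midparent phenotype; it is an illustration of the method, not an analysis of real data.

# Midparent-offspring regression sketch: under an additive model the slope of
# offspring phenotype on the midparent phenotype estimates h2 directly.
import random
random.seed(1)

h2, n = 0.6, 50_000
mid, off = [], []
for _ in range(n):
    a_m = random.gauss(0, h2 ** 0.5)                    # mother's breeding value
    a_f = random.gauss(0, h2 ** 0.5)                    # father's breeding value
    p_m = a_m + random.gauss(0, (1 - h2) ** 0.5)        # parental phenotypes
    p_f = a_f + random.gauss(0, (1 - h2) ** 0.5)
    a_o = 0.5 * (a_m + a_f) + random.gauss(0, (0.5 * h2) ** 0.5)  # Mendelian sampling
    p_o = a_o + random.gauss(0, (1 - h2) ** 0.5)        # offspring phenotype
    mid.append(0.5 * (p_m + p_f))
    off.append(p_o)

mx = sum(mid) / n
oy = sum(off) / n
cov = sum((m - mx) * (o - oy) for m, o in zip(mid, off)) / n
var = sum((m - mx) ** 2 for m in mid) / n
print(f"slope = {cov / var:.2f} (true h2 = 0.6)")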
Heritability
Wikipedia
410
155624
https://en.wikipedia.org/wiki/Heritability
Biology and health sciences
Genetics
Biology
Sibling comparison A basic approach to heritability can be taken using full-sib designs: comparing similarity between siblings who share both a biological mother and a father. When there is only additive gene action, this sibling phenotypic correlation is an index of familiality – the sum of half the additive genetic variance plus the full effect of the common environment. It thus places an upper limit on additive heritability of twice the full-sib phenotypic correlation. Half-sib designs compare phenotypic traits of siblings that share one parent with other sibling groups. Twin studies Heritability for traits in humans is most frequently estimated by comparing resemblances between twins. "The advantage of twin studies is that the total variance can be split up into genetic, shared or common environmental, and unique environmental components, enabling an accurate estimation of heritability". Fraternal or dizygotic (DZ) twins on average share half their genes (assuming there is no assortative mating for the trait), and so identical or monozygotic (MZ) twins on average are twice as genetically similar as DZ twins. A crude estimate of heritability, then, is approximately twice the difference in correlation between MZ and DZ twins, i.e. Falconer's formula h2 = 2(r(MZ) − r(DZ)). The effect of shared environment, c2, contributes to similarity between siblings due to the commonality of the environment they are raised in. Shared environment is approximated by the DZ correlation minus half heritability, which is the degree to which DZ twins share the same genes: c2 = r(DZ) − 1/2 h2. Unique environmental variance, e2, reflects the degree to which identical twins raised together are dissimilar: e2 = 1 − r(MZ). Analysis of variance methods of estimation The second set of methods of estimation of heritability involves ANOVA and estimation of variance components. Basic model We use the basic discussion of Kempthorne. Considering only the most basic of genetic models, we can look at the quantitative contribution of a single locus with genotype Gi as yi = μ + gi + e, where gi is the effect of genotype Gi and e is the environmental effect. Consider an experiment with a group of sires and their progeny from random dams. Since the progeny get half of their genes from the father and half from their (random) mother, the progeny equation is zi = μ + 1/2 gi + e, where the residual term now also absorbs the genetic contribution of the random dam.
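The twin-correlation arithmetic above is simple enough to script directly. The Python sketch below applies Falconer's formula and the accompanying shared- and unique-environment approximations; the correlations used are made-up illustrative values, not results for any real trait.

# Crude ACE decomposition from twin correlations:
# h2 = 2*(rMZ - rDZ), c2 = rDZ - h2/2, e2 = 1 - rMZ.
def falconer_ace(r_mz, r_dz):
    """Return (h2, c2, e2) from MZ and DZ twin correlations."""
    h2 = 2.0 * (r_mz - r_dz)
    c2 = r_dz - 0.5 * h2          # shared-environment component
    e2 = 1.0 - r_mz               # unique-environment component (plus error)
    return h2, c2, e2

h2, c2, e2 = falconer_ace(r_mz=0.75, r_dz=0.45)
print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")   # 0.60, 0.15, 0.25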
Heritability
Wikipedia
502
155624
https://en.wikipedia.org/wiki/Heritability
Biology and health sciences
Genetics
Biology
Intraclass correlations Consider the experiment above. We have two groups of progeny we can compare. The first is comparing the various progeny for an individual sire (called the within-sire group). The variance will include terms for genetic variance (since they did not all get the same genotype) and environmental variance. This is thought of as an error term. The second group of progeny are comparisons of means of half sibs with each other (called the among-sire group). In addition to the error term as in the within-sire groups, we have an additional term due to the differences among the means of different half-sib families. The intraclass correlation is t = (1/4)Vg / (Vg + Ve), since environmental effects are independent of each other. The ANOVA In an experiment with n sires and r progeny per sire, we can calculate the following ANOVA, using Vg as the genetic variance and Ve as the environmental variance: the among-sire mean square (n − 1 degrees of freedom) has expectation (3/4)Vg + Ve + r·(1/4)Vg, and the within-sire mean square (n(r − 1) degrees of freedom) has expectation (3/4)Vg + Ve. The term (1/4)Vg / (Vg + Ve) is the intraclass correlation between half sibs. We can easily calculate Vg = 4 × (among-sire variance component) and Ve = (within-sire variance component) − 3 × (among-sire variance component). The expected mean square is calculated from the relationship of the individuals (progeny within a sire are all half-sibs, for example) and an understanding of intraclass correlations. The use of ANOVA to calculate heritability often fails to account for the presence of gene–environment interactions, because ANOVA has a much lower statistical power for testing for interaction effects than for direct effects. Model with additive and dominance terms For a model with additive and dominance terms, but not others, the equation for a single locus is yij = μ + αi + αj + dij + e, where αi is the additive effect of the ith allele, αj is the additive effect of the jth allele, dij is the dominance deviation for the ijth genotype, and e is the environment. Experiments can be run with a similar setup to the one given in Table 1. Using different relationship groups, we can evaluate different intraclass correlations. Using Va as the additive genetic variance and Vd as the dominance deviation variance, intraclass correlations become linear functions of these parameters. In general, the intraclass correlation is t = (r·Va + θ·Vd) / (Va + Vd + Ve), where r and θ are found as r = P[alleles drawn at random from the relationship pair are identical by descent] and θ = P[genotypes drawn at random from the relationship pair are identical by descent]. Some common relationships and their coefficients are given in Table 2.
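The half-sib ANOVA estimator can be demonstrated with simulated data. The Python sketch below generates paternal half-sib families under an additive model (true h2 = 0.4, chosen arbitrarily), computes the between- and within-sire mean squares, and recovers h2 as four times the intraclass correlation; it is an illustration of the estimator, not an analysis of real data.

# Half-sib ANOVA sketch: estimate the sire variance component from the
# between- and within-sire mean squares, then h2 = 4 * t, where t is the
# intraclass correlation among half sibs.
import random
random.seed(2)

n_sires, n_prog = 200, 20
h2_true, v_p = 0.4, 1.0
v_a = h2_true * v_p

groups = []
for _ in range(n_sires):
    sire_bv = random.gauss(0, v_a ** 0.5)          # sire breeding value
    resid_sd = (v_p - 0.25 * v_a) ** 0.5           # within-family deviation (dam + segregation + environment)
    groups.append([0.5 * sire_bv + random.gauss(0, resid_sd) for _ in range(n_prog)])

grand_mean = sum(sum(g) for g in groups) / (n_sires * n_prog)
ms_between = n_prog * sum((sum(g) / n_prog - grand_mean) ** 2 for g in groups) / (n_sires - 1)
ms_within = sum(sum((x - sum(g) / n_prog) ** 2 for x in g) for g in groups) / (n_sires * (n_prog - 1))

sigma2_sire = (ms_between - ms_within) / n_prog    # between-sire variance component
t = sigma2_sire / (sigma2_sire + ms_within)        # intraclass correlation
print(f"h2 = {4 * t:.2f} (true {h2_true})")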
Heritability
Wikipedia
452
155624
https://en.wikipedia.org/wiki/Heritability
Biology and health sciences
Genetics
Biology
Linear mixed models A wide variety of approaches using linear mixed models have been reported in literature. Via these methods, phenotypic variance is partitioned into genetic, environmental and experimental design variances to estimate heritability. Environmental variance can be explicitly modeled by studying individuals across a broad range of environments, although inference of genetic variance from phenotypic and environmental variance may lead to underestimation of heritability due to the challenge of capturing the full range of environmental influence affecting a trait. Other methods for calculating heritability use data from genome-wide association studies to estimate the influence on a trait by genetic factors, which is reflected by the rate and influence of putatively associated genetic loci (usually single-nucleotide polymorphisms) on the trait. This can lead to underestimation of heritability, however. This discrepancy is referred to as "missing heritability" and reflects the challenge of accurately modeling both genetic and environmental variance in heritability models. When a large, complex pedigree or another aforementioned type of data is available, heritability and other quantitative genetic parameters can be estimated by restricted maximum likelihood (REML) or Bayesian methods. The raw data will usually have three or more data points for each individual: a code for the sire, a code for the dam and one or several trait values. Different trait values may be for different traits or for different time points of measurement. The currently popular methodology relies on high degrees of certainty over the identities of the sire and dam; it is not common to treat the sire identity probabilistically. This is not usually a problem, since the methodology is rarely applied to wild populations (although it has been used for several wild ungulate and bird populations), and sires are invariably known with a very high degree of certainty in breeding programmes. There are also algorithms that account for uncertain paternity. The pedigrees can be viewed using programs such as Pedigree Viewer , and analyzed with programs such as ASReml, VCE , WOMBAT , MCMCglmm within the R environment or the BLUPF90 family of programs . Pedigree models are helpful for untangling confounds such as reverse causality, maternal effects such as the prenatal environment, and confounding of genetic dominance, shared environment, and maternal gene effects. Genomic heritability
Heritability
Wikipedia
490
155624
https://en.wikipedia.org/wiki/Heritability
Biology and health sciences
Genetics
Biology
When genome-wide genotype data and phenotypes from large population samples are available, one can estimate the relationships between individuals based on their genotypes and use a linear mixed model to estimate the variance explained by the genetic markers. This gives a genomic heritability estimate based on the variance captured by common genetic variants. There are multiple methods that make different adjustments for allele frequency and linkage disequilibrium. In particular, the method called High-Definition Likelihood (HDL) can estimate genomic heritability using only GWAS summary statistics, making it easier to incorporate the large sample sizes available in various GWAS meta-analyses. Response to selection In selective breeding of plants and animals, the expected response to selection of a trait with known narrow-sense heritability can be estimated using the breeder's equation: R = h2S. In this equation, the response to selection (R) is defined as the realized average difference between the parent generation and the next generation, and the selection differential (S) is defined as the average difference between the parent generation and the selected parents. For example, imagine that a plant breeder is involved in a selective breeding project with the aim of increasing the number of kernels per ear of corn. For the sake of argument, let us assume that the average ear of corn in the parent generation has 100 kernels. Let us also assume that the selected parents produce corn with an average of 120 kernels per ear. If h2 equals 0.5, then the next generation will produce corn with an average of 0.5(120 − 100) = 10 additional kernels per ear. Therefore, the total number of kernels per ear of corn will equal, on average, 110. Observing the response to selection in an artificial selection experiment will allow calculation of realized heritability as in Fig. 4. Heritability in the above equation is equal to the ratio Var(A)/Var(P) only if the genotype and the environmental noise follow Gaussian distributions. Controversies
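The corn example above can be written out directly as a small Python function applying the breeder's equation; the numbers are the ones used in the text.

# Breeder's equation sketch, R = h2 * S, using the corn example from the text:
# parental mean 100 kernels, selected-parent mean 120, h2 = 0.5.
def response_to_selection(h2, parent_mean, selected_parent_mean):
    """Expected offspring mean under the breeder's equation."""
    s = selected_parent_mean - parent_mean      # selection differential
    r = h2 * s                                  # response to selection
    return parent_mean + r

print(response_to_selection(0.5, 100.0, 120.0))   # -> 110.0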
Heritability
Wikipedia
401
155624
https://en.wikipedia.org/wiki/Heritability
Biology and health sciences
Genetics
Biology
Prominent critics of heritability estimates, such as Steven Rose, Jay Joseph, and Richard Bentall, focus largely on heritability estimates in the behavioral sciences and social sciences. Bentall has claimed that such heritability scores are typically calculated counterintuitively to derive numerically high scores, that heritability is misinterpreted as genetic determination, and that this alleged bias distracts from other factors that researchers have found more causally important, such as childhood abuse causing later psychosis. Heritability estimates are also inherently limited because they do not convey any information regarding whether genes or environment play a larger role in the development of the trait under study. For this reason, David Moore and David Shenk describe the term "heritability" in the context of behavior genetics as "...one of the most misleading in the history of science" and argue that it has no value except in very rare cases. When studying complex human traits, it is impossible to use heritability analysis to determine the relative contributions of genes and environment, as such traits result from multiple causes interacting. In particular, Feldman and Lewontin emphasize that heritability is itself a function of environmental variation. However, some researchers argue that it is possible to disentangle the two. The controversy over heritability estimates stems largely from their basis in twin studies. The limited success of molecular-genetic studies in corroborating the conclusions of such population-genetic studies is known as the missing heritability problem. Eric Turkheimer has argued that newer molecular methods have vindicated the conventional interpretation of twin studies, although it remains mostly unclear how to explain the relations between genes and behaviors. According to Turkheimer, both genes and environment are heritable, genetic contribution varies by environment, and a focus on heritability distracts from other important factors. Overall, however, heritability remains a widely applicable concept.
Heritability
Wikipedia
383
155624
https://en.wikipedia.org/wiki/Heritability
Biology and health sciences
Genetics
Biology
Penetrance in genetics is the proportion of individuals carrying a particular variant (or allele) of a gene (genotype) that also expresses an associated trait (phenotype). In medical genetics, the penetrance of a disease-causing mutation is the proportion of individuals with the mutation that exhibit clinical symptoms among all individuals with such mutation. For example: If a mutation in the gene responsible for a particular autosomal dominant disorder has 95% penetrance, then 95% of those with the mutation will go on to develop the disease, showing its phenotype, whereas 5% will not.   Penetrance only refers to whether an individual with a specific genotype exhibits any phenotypic signs or symptoms, and is not to be confused with variable expressivity which is to what extent or degree the symptoms for said disease are shown (the expression of the phenotypic trait). Meaning that, even if the same disease-causing mutation affects separate individuals, the expressivity will vary. Degrees of penetrance Complete penetrance If 100% of individuals carrying a particular genotype express the associated trait, the genotype is said to show complete penetrance. Neurofibromatosis type 1 (NF1), is an autosomal dominant condition which shows complete penetrance, consequently everyone who inherits the disease-causing variant of this gene will develop some degree of symptoms for NF1. Reduced penetrance The penetrance is said to be reduced if less than 100% of individuals carrying a particular genotype express associated traits, and is likely to be caused by a combination of genetic, environmental and lifestyle factors. BRCA1 is an example of a genotype with reduced penetrance. By age 70, the mutation is estimated to have a breast cancer penetrance of around 65% in women. Meaning that about 65% of women carrying the gene will develop breast cancer by the time they turn 70. Non-penetrance: Within the category of reduced penetrance, individuals carrying the mutation without displaying any signs or symptoms, are said to have a genotype that is non-penetrant. For the BRCA1 example above, the remaining 35% which never develop breast cancer, are therefore carrying the mutation, but it is non-penetrant. This can lead to healthy, unaffected parents carrying the mutation on to future generations that might be affected.
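As a minimal illustration of the proportion being described, the Python snippet below computes penetrance from carrier counts; the counts are hypothetical and simply mirror the 95% example above.

# Penetrance is the proportion of carriers of a genotype who express the
# associated phenotype. The counts below are hypothetical.
def penetrance(n_affected_carriers, n_carriers):
    """Fraction of mutation carriers who show the phenotype."""
    return n_affected_carriers / n_carriers

print(f"{penetrance(190, 200):.0%}")   # 190 of 200 carriers affected -> 95%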
Penetrance
Wikipedia
491
155625
https://en.wikipedia.org/wiki/Penetrance
Biology and health sciences
Genetics
Biology
Factors affecting penetrance Many factors such as age, sex, environment, epigenetic modifiers, and modifier genes are linked to penetrance. These factors can help explain why certain individuals with a specific genotype exhibit symptoms or signs of disease, whilst others do not. Age-dependent penetrance If clinical signs associated with a specific genotype appear more frequently with increasing age, the penetrance is said to be age dependent. Some diseases are non-penetrant up until a certain age and then the penetrance starts to increase drastically, whilst others exhibit low penetrance at an early age and continue to increase with time. For this reason, many diseases have a different estimated penetrance dependent on the age. A specific hexanucleotide repeat expansion within the C9orf72 gene said to be a major cause for developing amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD) is an example of a genotype with age dependent penetrance. The genotype is said to be non-penetrant until the age of 35, 50% penetrant by the age of 60, and almost completely penetrant by age 80. Gender-related penetrance For some mutations, the phenotype is more frequently present in one sex and in rare cases mutations appear completely non-penetrant in a particular gender. This is called gender-related penetrance or sex-dependent penetrance and may be the result of allelic variation, disorders in which the expression of the disease is limited to organs only found in one sex such as testis or ovaries, or sex steroid-responsive genes. Breast cancer caused by the BRCA2 mutation is an example of a disease with gender-related penetrance. The penetrance is determined to be much higher in women than men. By age 70, around 86% of females in contrast to 6% of males with the same mutation is estimated to develop breast cancer.
Penetrance
Wikipedia
414
155625
https://en.wikipedia.org/wiki/Penetrance
Biology and health sciences
Genetics
Biology
In cases where clinical symptoms or the phenotype related to a genetic mutation are present only in one sex, the disorder is said to be sex-limited. Familial male-limited precocious puberty (FMPP) caused by a mutation in the LHCGR gene, is an example of a genotype only penetrant in males. Meaning that males with this particular genotype exhibit symptoms of the disease whilst the same genotype is nonpenetrant in females. Genetic modifiers Genetic modifiers are genetic variants or mutations able to modify a primary disease-causing variant's phenotypic outcome without being disease causing themselves. For instance, in single gene disorders there is one gene primarily responsible for development of the disease, but modifier genes inherited separately can affect the phenotype. Meaning that the presence of a mutation located on a loci different from the one with the disease-causing mutation, may either hinder manifestation of the phenotype or alter the mutations effects, and thereby influencing the penetrance. Environmental modifiers Exposure to environmental and lifestyle factors such as chemicals, diet, alcohol intake, drugs and stress are some of the factors that might influence disease penetrance. For example, several studies of BRCA1 and BRCA2 mutations, associated with an elevated risk of breast and ovarian cancer in women, have examined associations with environmental and behavioral modifiers such as pregnancies, history of breast feeding, smoking, diet, and so forth. Epigenetic regulation Sometimes, genetic alterations which can cause genetic disease and phenotypic traits, are not from changes related directly to the DNA sequence, but from epigenetic alterations such as DNA methylation or histone modifications. Epigenetic differences may therefore be one of the factors contributing to reduced penetrance. A study done on a pair of genetically identical monozygotic twins, where one twin got diagnosed with leukemia and later on thyroid carcinoma whilst the other had no registered illnesses, showed that the affected twin had increased methylation levels of the BRCA 1 gene. The research concluded that the family had no known DNA-repair syndrome or any other hereditary diseases in the last four generations, and no genetic differences between the studied pair of monozygotic twins were detected in the BRCA1 regulatory region. This indicates that epigenetic changes caused by environmental or behavioral factors had a key role in the cause of promotor hypermethylation of the BRCA1 gene in the affected twin, which caused the cancer.
Penetrance
Wikipedia
511
155625
https://en.wikipedia.org/wiki/Penetrance
Biology and health sciences
Genetics
Biology
Determining penetrance It can be challenging to estimate the penetrance of a specific genotype due to all the influencing factors. In addition to the factors mentioned above there are several other considerations that must be taken into account when penetrance is determined: Ascertainment bias Penetrance estimates can be affected by ascertainment bias if the sampling is not systematic. Traditionally a phenotype-driven approach focusing on individuals with a given condition and their family members has been used to determine penetrance. However, it may be difficult to transfer these estimates over to the general population because family members may share other genetic and/or environmental factors that could influence manifestation of said disease, leading to ascertainment bias and an overestimation of the penetrance. Large-scale population-based studies, which use both genetic sequencing and phenotype data from large groups of people, is a different method for determining penetrance. This method offers less upward bias compared to family-based studies and is more accurate the larger the sample population is. These studies may contain a healthy-participant-bias which can lead to lower penetrance estimates. Phenocopies A genotype with complete penetrance will always display the clinical phenotypic traits related to its mutation (taking into consideration the expressivity), but the signs or symptoms displayed by a specific affected individual can often be similar to other unrelated phenotypical traits. Taking into consideration the effect that environmental or behavioral modifiers have, and how they can impact the cause of a mutation or epigenetic alteration, we now have the cause as to how different paths lead to the same phenotypic display. When similar phenotypes can be observed but by different causes, it is called phenocopies. Phenocopies is when environmental and/or behavioral modifiers causes an illness which mimics the phenotype of a genetic inherited disease. Because of phenocopies, determining the degree of penetrance for a genetic disease requires full knowledge of the individuals attending the studies, and the factors that may or may not have caused their illness.
Penetrance
Wikipedia
432
155625
https://en.wikipedia.org/wiki/Penetrance
Biology and health sciences
Genetics
Biology
For example, recent research on hypertrophic cardiomyopathy (HCM) based on a technique called cardiac magnetic resonance (CMR) imaging describes how various genetic illnesses that present the same phenotypic traits as HCM are actually phenocopies. Previously these phenocopies were all diagnosed and treated as though they arose from the same cause, but newer diagnostic methods allow them to be distinguished and treated more effectively. Subjects not yet covered: allelic heterogeneity, polygenic inheritance, locus heterogeneity
Penetrance
Wikipedia
112
155625
https://en.wikipedia.org/wiki/Penetrance
Biology and health sciences
Genetics
Biology
Ibuprofen is a nonsteroidal anti-inflammatory drug (NSAID) that is used to relieve pain, fever, and inflammation. This includes painful menstrual periods, migraines, and rheumatoid arthritis. It may also be used to close a patent ductus arteriosus in a premature baby. It can be taken orally (by mouth) or intravenously. It typically begins working within an hour. Common side effects include heartburn, nausea, indigestion, and abdominal pain. As with other NSAIDs, potential side effects include gastrointestinal bleeding. Long-term use has been associated with kidney failure, and rarely liver failure, and it can exacerbate the condition of patients with heart failure. At low doses, it does not appear to increase the risk of heart attack; however, at higher doses it may. Ibuprofen can also worsen asthma. While its safety in early pregnancy is unclear, it appears to be harmful in later pregnancy, so it is not recommended during that period. Like other NSAIDs, it works by inhibiting the production of prostaglandins by decreasing the activity of the enzyme cyclooxygenase (COX). Ibuprofen is a weaker anti-inflammatory agent than other NSAIDs. Ibuprofen was discovered in 1961 by Stewart Adams and John Nicholson while working at Boots UK Limited and initially marketed as Brufen. It is available under a number of brand names including Advil, Motrin, and Nurofen. Ibuprofen was first marketed in 1969 in the United Kingdom and in 1974 in the United States. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2022, it was the 33rd most commonly prescribed medication in the United States, with more than 17 million prescriptions. Medical uses
Ibuprofen
Wikipedia
390
155627
https://en.wikipedia.org/wiki/Ibuprofen
Biology and health sciences
Pain treatments
Health
Ibuprofen is used primarily to treat fever (including post-vaccination fever), mild to moderate pain (including pain relief after surgery), painful menstruation, osteoarthritis, dental pain, headaches, and pain from kidney stones. About 60% of people respond to any NSAID; those who do not respond well to a particular one may respond to another. A Cochrane review of 51 trials of NSAIDs for the treatment of lower back pain found that "NSAIDs are effective for short-term symptomatic relief in patients with acute low back pain". It is used for inflammatory diseases such as juvenile idiopathic arthritis and rheumatoid arthritis. It is also used for pericarditis and patent ductus arteriosus. Ibuprofen lysine In some countries, ibuprofen lysine (the lysine salt of ibuprofen, sometimes called "ibuprofen lysinate") is licensed for treatment of the same conditions as ibuprofen; the lysine salt is used because it is more water-soluble. In 2006, ibuprofen lysine was approved in the United States by the Food and Drug Administration (FDA) for closure of patent ductus arteriosus in premature infants (within a specified weight range) who are no more than 32 weeks gestational age, when usual medical management (such as fluid restriction, diuretics, and respiratory support) is not effective. Adverse effects Adverse effects include nausea, heartburn, indigestion, diarrhea, constipation, gastrointestinal ulceration, headache, dizziness, rash, salt and fluid retention, and high blood pressure. Infrequent adverse effects include esophageal ulceration, heart failure, high blood levels of potassium, kidney impairment, confusion, and bronchospasm. Ibuprofen can exacerbate asthma, sometimes fatally. Allergic reactions, including anaphylaxis, may occur. Ibuprofen may be quantified in blood, plasma, or serum to demonstrate the presence of the drug in a person who has experienced an anaphylactic reaction, to confirm a diagnosis of poisoning in people who are hospitalized, or to assist in a medicolegal death investigation. A monograph relating ibuprofen plasma concentration, time since ingestion, and risk of developing renal toxicity in people who have overdosed has been published.
Ibuprofen
Wikipedia
508
155627
https://en.wikipedia.org/wiki/Ibuprofen
Biology and health sciences
Pain treatments
Health
In October 2020, the U.S. FDA required the drug label to be updated for all NSAID medications to describe the risk of kidney problems in unborn babies that result in low amniotic fluid. Cardiovascular risk Chronic use of ibuprofen, along with several other NSAIDs, is correlated with the risk of progression to hypertension in women (though less so than paracetamol/acetaminophen) and with myocardial infarction (heart attack), particularly among those chronically using higher doses. On 9 July 2015, the U.S. FDA toughened warnings of increased heart attack and stroke risk associated with ibuprofen and related NSAIDs; the NSAID aspirin is not included in this warning. The European Medicines Agency (EMA) issued similar warnings in 2015. Skin Along with other NSAIDs, ibuprofen has been associated with the onset of bullous pemphigoid or pemphigoid-like blistering. As with other NSAIDs, ibuprofen has been reported to be a photosensitizing agent, but it is considered a weak photosensitizing agent compared to other members of the 2-arylpropionic acid class. Like other NSAIDs, ibuprofen is an extremely rare cause of the autoimmune diseases Stevens–Johnson syndrome (SJS) and toxic epidermal necrolysis. Interactions Alcohol Drinking alcohol when taking ibuprofen may increase the risk of stomach bleeding. Aspirin According to the FDA, "ibuprofen can interfere with the antiplatelet effect of low-dose aspirin, potentially rendering aspirin less effective when used for cardioprotection and stroke prevention". Allowing sufficient time between doses of ibuprofen and immediate-release (IR) aspirin can avoid this problem. The recommended elapsed time between a dose of ibuprofen and a dose of aspirin depends on which is taken first: 30 minutes or more for ibuprofen taken after IR aspirin, and 8 hours or more for ibuprofen taken before IR aspirin. However, this timing cannot be recommended for enteric-coated aspirin. If ibuprofen is taken only occasionally without the recommended timing, though, the reduction of the cardioprotection and stroke prevention of a daily aspirin regimen is minimal.
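The aspirin–ibuprofen timing rule quoted above can be written as a small check. This is a toy sketch of the stated guidance for immediate-release aspirin only; the function name and structure are mine, and it is an illustration of the quoted numbers, not medical advice.

```python
# Toy encoding of the quoted FDA timing guidance for immediate-release (IR) aspirin.
# Function name and structure are illustrative; this is not dosing or medical advice.

def gap_meets_guidance(first_drug: str, hours_between: float) -> bool:
    """Check whether the gap between ibuprofen and IR aspirin meets the quoted rule."""
    if first_drug == "aspirin":      # IR aspirin taken first
        return hours_between >= 0.5  # ibuprofen 30 minutes or more afterwards
    if first_drug == "ibuprofen":    # ibuprofen taken first
        return hours_between >= 8.0  # IR aspirin 8 hours or more afterwards
    raise ValueError("first_drug must be 'aspirin' or 'ibuprofen'")

print(gap_meets_guidance("aspirin", 0.75))  # True: 45 minutes after IR aspirin
print(gap_meets_guidance("ibuprofen", 4))   # False: only 4 hours before IR aspirin
```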
Ibuprofen
Wikipedia
501
155627
https://en.wikipedia.org/wiki/Ibuprofen
Biology and health sciences
Pain treatments
Health
Paracetamol (acetaminophen) Ibuprofen combined with paracetamol is considered generally safe in children for short-term usage. Overdose Ibuprofen overdose has become common since it was licensed for over-the-counter (OTC) use. Many overdose experiences are reported in the medical literature, although the frequency of life-threatening complications from ibuprofen overdose is low. Human responses in cases of overdose range from an absence of symptoms to a fatal outcome despite intensive-care treatment. Most symptoms are an excess of the pharmacological action of ibuprofen and include abdominal pain, nausea, vomiting, drowsiness, dizziness, headache, ear ringing, and nystagmus. Rarely, more severe symptoms such as gastrointestinal bleeding, seizures, metabolic acidosis, hyperkalemia, low blood pressure, slow heart rate, fast heart rate, atrial fibrillation, coma, liver dysfunction, acute kidney failure, cyanosis, respiratory depression, and cardiac arrest have been reported. The severity of symptoms varies with the ingested dose and the time elapsed; however, individual sensitivity also plays an important role. Generally, the symptoms observed with an overdose of ibuprofen are similar to the symptoms caused by overdoses of other NSAIDs. The correlation between the severity of symptoms and measured ibuprofen plasma levels is weak. Toxic effects are unlikely at doses below 100 mg/kg, but can be severe above 400 mg/kg (around 150 tablets of 200 mg for an average adult male); however, large doses do not indicate the clinical course is likely to be lethal. A precise lethal dose is difficult to determine, as it may vary with age, weight, and concomitant conditions of the person.
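A quick arithmetic check of the per-kilogram thresholds quoted above, assuming a 75 kg adult (the text's "average adult male" weight is not specified, so 75 kg is an illustrative assumption) and 200 mg tablets:

```python
# Arithmetic check of the mg/kg thresholds above; 75 kg body weight is an assumption.

TABLET_MG = 200  # strength of a standard OTC tablet, as in the text

def total_dose_mg(mg_per_kg: float, body_weight_kg: float) -> float:
    return mg_per_kg * body_weight_kg

for label, mg_per_kg in [("toxicity unlikely below", 100), ("can be severe above", 400)]:
    dose = total_dose_mg(mg_per_kg, body_weight_kg=75)
    tablets = dose / TABLET_MG
    print(f"{label} {mg_per_kg} mg/kg -> {dose:.0f} mg (~{tablets:.0f} tablets of {TABLET_MG} mg)")

# 400 mg/kg * 75 kg = 30,000 mg, i.e. about 150 tablets of 200 mg, matching the text.
```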
Ibuprofen
Wikipedia
367
155627
https://en.wikipedia.org/wiki/Ibuprofen
Biology and health sciences
Pain treatments
Health
Treatment to address an ibuprofen overdose is based on how the symptoms present. In cases presenting early, decontamination of the stomach is recommended. This is achieved using activated charcoal; charcoal adsorbs the drug before it can enter the bloodstream. Gastric lavage is now rarely used, but can be considered if the amount ingested is potentially life-threatening and it can be performed within 60 minutes of ingestion. Purposeful vomiting is not recommended. Most ibuprofen ingestions produce only mild effects, and the management of overdose is straightforward. Standard measures to maintain normal urine output should be instituted and kidney function monitored. Since ibuprofen has acidic properties and is also excreted in the urine, forced alkaline diuresis is theoretically beneficial. However, because ibuprofen is highly protein-bound in the blood, the kidneys' excretion of the unchanged drug is minimal. Forced alkaline diuresis is, therefore, of limited benefit. Miscarriage A Canadian study of pregnant women suggests that those taking any type or amount of NSAIDs (including ibuprofen, diclofenac, and naproxen) were 2.4 times more likely to miscarry than those not taking the medications. However, an Israeli study found no increased risk of miscarriage in the group of mothers using NSAIDs. Pharmacology Ibuprofen works by inhibiting cyclooxygenase (COX) enzymes, which convert arachidonic acid to prostaglandin H2 (PGH2). PGH2, in turn, is converted by other enzymes into various prostaglandins (which mediate pain, inflammation, and fever) and thromboxane A2 (which stimulates platelet aggregation and promotes blood clot formation).
Ibuprofen
Wikipedia
377
155627
https://en.wikipedia.org/wiki/Ibuprofen
Biology and health sciences
Pain treatments
Health
Like aspirin and indomethacin, ibuprofen is a nonselective COX inhibitor, in that it inhibits two isoforms of cyclooxygenase, COX-1 and COX-2. The analgesic, antipyretic, and anti-inflammatory activity of NSAIDs appears to operate mainly through inhibition of COX-2, which decreases the synthesis of prostaglandins involved in mediating inflammation, pain, fever, and swelling. Antipyretic effects may be due to action on the hypothalamus, resulting in an increased peripheral blood flow, vasodilation, and subsequent heat dissipation. Inhibition of COX-1 instead would be responsible for unwanted effects on the gastrointestinal tract. However, the role of the individual COX isoforms in the analgesic, anti-inflammatory, and gastric damage effects of NSAIDs is uncertain, and different compounds cause different degrees of analgesia and gastric damage. Ibuprofen is administered as a racemic mixture. The R-enantiomer undergoes extensive interconversion to the S-enantiomer in vivo. The S-enantiomer is believed to be the more pharmacologically active enantiomer. The R-enantiomer is converted through a series of three main enzymes. These enzymes include acyl-CoA-synthetase, which converts the R-enantiomer to (−)-R-ibuprofen I-CoA; 2-arylpropionyl-CoA epimerase, which converts (−)-R-ibuprofen I-CoA to (+)-S-ibuprofen I-CoA; and hydrolase, which converts (+)-S-ibuprofen I-CoA to the S-enantiomer. In addition to the conversion of ibuprofen to the S-enantiomer, the body can metabolize ibuprofen to several other compounds, including numerous hydroxyl, carboxyl and glucuronyl metabolites. Virtually all of these have no pharmacological effects. Unlike most other NSAIDs, ibuprofen also acts as an inhibitor of Rho kinase and may be useful in recovery from spinal cord injury. Another unusual activity is inhibition of the sweet taste receptor.
Ibuprofen
Wikipedia
493
155627
https://en.wikipedia.org/wiki/Ibuprofen
Biology and health sciences
Pain treatments
Health
Pharmacokinetics After oral administration, peak serum concentration is reached after 1–2 hours, and up to 99% of the drug is bound to plasma proteins. The majority of ibuprofen is metabolized and eliminated within 24 hours in the urine; however, 1% of the unchanged drug is removed through biliary excretion. Chemistry Ibuprofen is practically insoluble in water, but very soluble in most organic solvents such as ethanol (66.18 g/100 mL at 40 °C for 90% EtOH), methanol, acetone and dichloromethane. The original synthesis of ibuprofen by the Boots Group started with the compound isobutylbenzene. The synthesis took six steps. A modern, greener technique for the synthesis, with fewer waste byproducts, involves only three steps and was developed in the 1980s by the Celanese Chemical Company. The synthesis is initiated with the acylation of isobutylbenzene using the recyclable Lewis acid catalyst hydrogen fluoride. The subsequent catalytic hydrogenation of isobutylacetophenone is performed with either Raney nickel or palladium on carbon, leading into the key step, the carbonylation of 1-(4-isobutylphenyl)ethanol. This is achieved by a PdCl2(PPh3)2 catalyst, at around 50 bar of CO pressure, in the presence of HCl (10%). The reaction presumably proceeds through the intermediacy of the styrene derivative (acidic elimination of the alcohol) and the (1-chloroethyl)benzene derivative (Markovnikov addition of HCl to the double bond). Stereochemistry Ibuprofen, like other 2-arylpropionate derivatives such as ketoprofen, flurbiprofen and naproxen, contains a stereocenter in the α-position of the propionate moiety. The product sold in pharmacies is a racemic mixture of the S and R-isomers. The S (dextrorotatory) isomer is the more biologically active; this isomer has been isolated and used medically (see dexibuprofen for details). The isomerase enzyme alpha-methylacyl-CoA racemase converts (R)-ibuprofen into the (S)-enantiomer.
Ibuprofen
Wikipedia
511
155627
https://en.wikipedia.org/wiki/Ibuprofen
Biology and health sciences
Pain treatments
Health
(S)-ibuprofen, the eutomer, harbors the desired therapeutic activity. The inactive (R)-enantiomer, the distomer, undergoes a unidirectional chiral inversion to give the active (S)-enantiomer. That is, when ibuprofen is administered as a racemate, the distomer is converted in vivo into the eutomer while the latter is unaffected. History Ibuprofen was derived from propionic acid by the research arm of Boots Group during the 1960s. The name is derived from three fragments of its structure: isobutyl (ibu), propionic acid (pro), and phenyl (fen). Its discovery was the result of research during the 1950s and 1960s to find a safer alternative to aspirin. The molecule was discovered and synthesized by a team led by Stewart Adams, with a patent application filed in 1961. Adams initially tested the drug as a treatment for his hangover. In 1985, Boots' worldwide patent for ibuprofen expired and generic products were launched. The medication was launched as a treatment for rheumatoid arthritis in the United Kingdom in 1969, and in the United States in 1974. Later, in 1983 and 1984, it became the first NSAID (other than aspirin) to be available over the counter (OTC) in these two countries. Boots was awarded the Queen's Award for Technical Achievement in 1985 for the development of the drug. In November 2013, work on ibuprofen was recognized by the erection of a Royal Society of Chemistry blue plaque at Boots' Beeston Factory site in Nottingham, and another at BioCity Nottingham, the site of the original laboratory. Availability and administration Ibuprofen was made available by prescription in the United Kingdom in 1969 and in the United States in 1974. Ibuprofen is the International Nonproprietary Name (INN), British Approved Name (BAN), Australian Approved Name (AAN) and United States Adopted Name (USAN). In the United States, it has been sold under the brand names Motrin and Advil since 1974 and 1984, respectively. In the United States, ibuprofen is commonly available over the counter up to the FDA's 1984 OTC dose limit; higher doses are rarely used and require a prescription.
Ibuprofen
Wikipedia
486
155627
https://en.wikipedia.org/wiki/Ibuprofen
Biology and health sciences
Pain treatments
Health
In 2009, the first injectable formulation of ibuprofen was approved in the United States, under the brand name Caldolor. Ibuprofen can be taken orally (by mouth, as a tablet, capsule, or suspension) and intravenously. Research Ibuprofen is sometimes used for the treatment of acne because of its anti-inflammatory properties, and has been sold in Japan in topical form for adult acne. As with other NSAIDs, ibuprofen may be useful in the treatment of severe orthostatic hypotension (low blood pressure when standing up). NSAIDs are of unclear utility in the prevention and treatment of Alzheimer's disease. Ibuprofen has been associated with a lower risk of Parkinson's disease and may delay or prevent it. Aspirin, other NSAIDs, and paracetamol (acetaminophen) had no effect on the risk for Parkinson's. In March 2011, researchers at Harvard Medical School announced in Neurology that ibuprofen had a neuroprotective effect against the risk of developing Parkinson's disease. People regularly consuming ibuprofen were reported to have a 38% lower risk of developing Parkinson's disease, but no such effect was found for other pain relievers, such as aspirin and paracetamol. Use of ibuprofen to lower the risk of Parkinson's disease in the general population would not be problem-free, given the possibility of adverse effects on the urinary and digestive systems. Some dietary supplements might be dangerous to take along with ibuprofen and other NSAIDs, but more research needs to be conducted to be certain. These supplements include those that can prevent platelet aggregation, including ginkgo, garlic, ginger, bilberry, dong quai, feverfew, ginseng, turmeric, meadowsweet (Filipendula ulmaria), and willow (Salix spp.); those that contain coumarin, including chamomile, horse chestnut, fenugreek and red clover; and those that increase the risk of bleeding, like tamarind.
Ibuprofen
Wikipedia
458
155627
https://en.wikipedia.org/wiki/Ibuprofen
Biology and health sciences
Pain treatments
Health
Ibuprofen lysine is sold for rapid pain relief; given in the form of its lysine salt, absorption is much quicker (35 minutes for the salt compared to 90–120 minutes for ibuprofen). However, a clinical trial with 351 participants in 2020, funded by Sanofi, found no significant difference between ibuprofen and ibuprofen lysine with respect to onset of action or analgesic efficacy.
Ibuprofen
Wikipedia
94
155627
https://en.wikipedia.org/wiki/Ibuprofen
Biology and health sciences
Pain treatments
Health
Fluoride is an inorganic, monatomic anion of fluorine, with the chemical formula F−, whose salts are typically white or colorless. Fluoride salts typically have distinctive bitter tastes, and are odorless. Its salts and minerals are important chemical reagents and industrial chemicals, mainly used in the production of hydrogen fluoride for fluorocarbons. Fluoride is classified as a weak base since it only partially associates in solution, but concentrated fluoride is corrosive and can attack the skin. Fluoride is the simplest fluorine anion. In terms of charge and size, the fluoride ion resembles the hydroxide ion. Fluoride ions occur on Earth in several minerals, particularly fluorite, but are present only in trace quantities in bodies of water in nature. Nomenclature Fluorides include compounds that contain ionic fluoride and those in which fluoride does not dissociate. The nomenclature does not distinguish these situations. For example, sulfur hexafluoride and carbon tetrafluoride are not sources of fluoride ions under ordinary conditions. The systematic name fluoride, the valid IUPAC name, is determined according to the additive nomenclature. However, the name fluoride is also used in compositional IUPAC nomenclature, which does not take the nature of bonding involved into account. Fluoride is also used non-systematically to describe compounds which release fluoride upon dissolving. Hydrogen fluoride is itself an example of a non-systematic name of this nature. However, it is also a trivial name, and the preferred IUPAC name for fluorane. Occurrence Fluorine is estimated to be the 13th-most abundant element in Earth's crust and is widely dispersed in nature, entirely in the form of fluorides. The vast majority is held in mineral deposits, the most commercially important of which is fluorite (CaF2). Natural weathering of some kinds of rocks, as well as human activities, releases fluorides into the biosphere through what is sometimes called the fluorine cycle. In water
Fluoride
Wikipedia
444
155650
https://en.wikipedia.org/wiki/Fluoride
Physical sciences
Halide salts
Chemistry
Fluoride is naturally present in groundwater, fresh and saltwater sources, as well as in rainwater, particularly in urban areas. Seawater fluoride levels are usually in the range of 0.86 to 1.4 mg/L, and average 1.1 mg/L (milligrams per litre). For comparison, chloride concentration in seawater is about 19 g/L. The low concentration of fluoride reflects the insolubility of the alkaline earth fluorides, e.g., CaF2. Concentrations in fresh water vary more significantly. Surface water such as rivers or lakes generally contains between 0.01 and 0.3 mg/L. Groundwater (well water) concentrations vary even more, depending on the presence of local fluoride-containing minerals. For example, natural levels of under 0.05 mg/L have been detected in parts of Canada but up to 8 mg/L in parts of China; in general, levels rarely exceed 10 mg/L. In parts of Asia the groundwater can contain dangerously high levels of fluoride, leading to serious health problems. Worldwide, 50 million people receive water from water supplies that naturally have close to the "optimal level". In other locations the level of fluoride is very low, sometimes leading to fluoridation of public water supplies to bring the level to around 0.7–1.2 ppm. Mining can increase local fluoride levels. Fluoride can be present in rain, with its concentration increasing significantly upon exposure to volcanic activity or atmospheric pollution derived from burning fossil fuels or other sorts of industry, particularly aluminium smelters. In plants All vegetation contains some fluoride, which is absorbed from soil and water. Some plants concentrate fluoride from their environment more than others. All tea leaves contain fluoride; however, mature leaves contain as much as 10 to 20 times the fluoride levels of young leaves from the same plant. Chemical properties Basicity Fluoride can act as a base. It can combine with a proton (H+): F− + H+ → HF. This neutralization reaction forms hydrogen fluoride (HF), the conjugate acid of fluoride. In aqueous solution, fluoride has a pKb value of 10.8. It is therefore a weak base, and tends to remain as the fluoride ion rather than generating a substantial amount of hydrogen fluoride. That is, the following equilibrium favours the left-hand side in water: F− + H2O ⇌ HF + OH−
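The quoted pKb can be connected to the acidity of HF. The short derivation below is an illustrative sketch assuming standard conditions (pKw ≈ 14 at 25 °C), which is not a value stated in the text:

```latex
% Relating the quoted pK_b of fluoride to the acidity of HF (assumes pK_w ~ 14 at 25 C).
\[
\mathrm{F^- + H_2O \rightleftharpoons HF + OH^-},
\qquad
K_b = \frac{[\mathrm{HF}][\mathrm{OH^-}]}{[\mathrm{F^-}]},
\qquad \mathrm{p}K_b = 10.8
\]
\[
\mathrm{p}K_a(\mathrm{HF}) = \mathrm{p}K_w - \mathrm{p}K_b \approx 14 - 10.8 = 3.2
\]
% HF is thus a weak acid and F^- a correspondingly weak base, so the equilibrium
% above lies to the left in water, as the text states.
```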
Fluoride
Wikipedia
510
155650
https://en.wikipedia.org/wiki/Fluoride
Physical sciences
Halide salts
Chemistry
However, upon prolonged contact with moisture, soluble fluoride salts will decompose to their respective hydroxides or oxides, as the hydrogen fluoride escapes. Fluoride is distinct in this regard among the halides. The identity of the solvent can have a dramatic effect on the equilibrium, shifting it to the right-hand side and greatly increasing the rate of decomposition. Structure of fluoride salts Salts containing fluoride are numerous and adopt myriad structures. The fluoride anion is typically surrounded by four or six cations, as is the case for other halides. Sodium fluoride and sodium chloride adopt the same structure. For compounds containing more than one fluoride per cation, the structures often deviate from those of the chlorides, as illustrated by the main fluoride mineral fluorite (CaF2), where the Ca2+ ions are surrounded by eight F− centers. In CaCl2, each Ca2+ ion is surrounded by six Cl− centers. The difluorides of the transition metals often adopt the rutile structure, whereas the dichlorides have cadmium chloride structures. Inorganic chemistry Upon treatment with a standard acid, fluoride salts convert to hydrogen fluoride and metal salts. With strong acids, it can be doubly protonated to give H2F+. Oxidation of fluoride gives fluorine. Solutions of inorganic fluorides in water contain F− and bifluoride (HF2−). Few inorganic fluorides are soluble in water without undergoing significant hydrolysis. In terms of its reactivity, fluoride differs significantly from chloride and other halides, and is more strongly solvated in protic solvents due to its smaller radius/charge ratio. Its closest chemical relative is hydroxide, since both have similar geometries.
Fluoride
Wikipedia
368
155650
https://en.wikipedia.org/wiki/Fluoride
Physical sciences
Halide salts
Chemistry
Naked fluoride Most fluoride salts dissolve to give the bifluoride (HF2−) anion. Sources of true F− anions are rare because the highly basic fluoride anion abstracts protons from many, even adventitious, sources. Relatively unsolvated fluoride, which does exist in aprotic solvents, is called "naked". Naked fluoride is a strong Lewis base and a powerful nucleophile. Some quaternary ammonium salts of naked fluoride include tetramethylammonium fluoride and tetrabutylammonium fluoride. Cobaltocenium fluoride is another example. However, they all lack structural characterization in aprotic solvents. Because of their high basicity, many so-called naked fluoride sources are in fact bifluoride salts. In late 2016, an imidazolium fluoride was synthesized that is the closest approximation to a thermodynamically stable and structurally characterized example of a "naked" fluoride source in an aprotic solvent (acetonitrile). The sterically demanding imidazolium cation stabilizes the discrete anions and protects them from polymerization.
Fluoride
Wikipedia
254
155650
https://en.wikipedia.org/wiki/Fluoride
Physical sciences
Halide salts
Chemistry
Biochemistry At physiological pH, hydrogen fluoride is usually fully ionised to fluoride. In biochemistry, fluoride and hydrogen fluoride are equivalent. Fluorine, in the form of fluoride, is considered to be a micronutrient for human health, necessary to prevent dental cavities and to promote healthy bone growth. The tea plant (Camellia sinensis L.) is a known accumulator of fluorine compounds, which are released upon forming infusions such as the common beverage. The fluorine compounds decompose into products including fluoride ions. Fluoride is the most bioavailable form of fluorine, and as such, tea is potentially a vehicle for fluoride dosing. Approximately 50% of absorbed fluoride is excreted renally within a twenty-four-hour period. The remainder can be retained in the oral cavity and lower digestive tract. Fasting dramatically increases the rate of fluoride absorption to near 100%, from 60–80% when taken with food. A 2013 study found that consumption of one litre of tea a day can potentially supply the recommended daily intake of 4 mg per day. Some lower-quality brands can supply up to 120% of this amount. Fasting can increase this to 150%. The study indicates that tea-drinking communities are at an increased risk of dental and skeletal fluorosis where water fluoridation is in effect. Fluoride ion in low doses in the mouth reduces tooth decay. For this reason, it is used in toothpaste and water fluoridation. At much higher doses and frequent exposure, fluoride causes health complications and can be toxic. Applications Fluoride salts and hydrofluoric acid are the main fluorides of industrial value. Organofluorine chemistry Organofluorine compounds are pervasive. Many drugs, many polymers, refrigerants, and many inorganic compounds are made from fluoride-containing reagents. Often fluorides are converted to hydrogen fluoride, which is a major reagent and precursor to reagents. Hydrofluoric acid and its anhydrous form, hydrogen fluoride, are particularly important.
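A rough arithmetic sketch of the tea figures quoted above, expressed as fractions of the 4 mg/day recommended intake; the scenario labels are simply shorthand for the cases mentioned in the text, not measured data:

```python
# Percentages quoted in the text, applied to the 4 mg/day recommended intake.
RECOMMENDED_MG_PER_DAY = 4.0

scenarios = {
    "typical tea, 1 L/day":        1.00,  # ~100% of the recommendation
    "lower-quality tea, 1 L/day":  1.20,  # up to ~120%
    "lower-quality tea, fasting":  1.50,  # fasting raises absorption, up to ~150%
}

for label, fraction in scenarios.items():
    print(f"{label}: ~{fraction * RECOMMENDED_MG_PER_DAY:.1f} mg/day")
```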
Fluoride
Wikipedia
475
155650
https://en.wikipedia.org/wiki/Fluoride
Physical sciences
Halide salts
Chemistry
Production of metals and their compounds The main use of fluoride, in terms of volume, is in the production of cryolite, Na3AlF6, which is used in aluminium smelting. Formerly, cryolite was mined, but now it is derived from hydrogen fluoride. Mined fluorite (CaF2) is a commodity chemical used on a large scale to separate slag in steel-making. Uranium hexafluoride is employed in the purification of uranium isotopes. Cavity prevention Fluoride-containing compounds, such as sodium fluoride or sodium monofluorophosphate, are used in topical and systemic fluoride therapy for preventing tooth decay. They are used for water fluoridation and in many products associated with oral hygiene. Originally, sodium fluoride was used to fluoridate water; hexafluorosilicic acid (H2SiF6) and its salt sodium hexafluorosilicate (Na2SiF6) are more commonly used additives, especially in the United States. The fluoridation of water is known to prevent tooth decay and is considered by the U.S. Centers for Disease Control and Prevention to be "one of 10 great public health achievements of the 20th century". In some countries where large, centralized water systems are uncommon, fluoride is delivered to the populace by fluoridating table salt. For the mechanism of action in cavity prevention, see Fluoride therapy. Fluoridation of water has its critics. Fluoridated toothpaste is in common use. Meta-analyses show the efficacy of 500 ppm fluoride in toothpastes. However, no beneficial effect can be detected when more than one fluoride source is used for daily oral care. Laboratory reagent Fluoride salts are commonly used in biological assay processing to inhibit the activity of phosphatases, such as serine/threonine phosphatases. Fluoride mimics the nucleophilic hydroxide ion in these enzymes' active sites. Beryllium fluoride and aluminium fluoride are also used as phosphatase inhibitors, since these compounds are structural mimics of the phosphate group and can act as analogues of the transition state of the reaction.
Fluoride
Wikipedia
483
155650
https://en.wikipedia.org/wiki/Fluoride
Physical sciences
Halide salts
Chemistry
Dietary recommendations The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for some minerals in 1997. Where there was not sufficient information to establish EARs and RDAs, an estimate designated Adequate Intake (AI) was used instead. AIs are typically matched to actual average consumption, with the assumption that there appears to be a need, and that need is met by what people consume. The current AI for women 19 years and older is 3.0 mg/day (includes pregnancy and lactation). The AI for men is 4.0 mg/day. The AI for children ages 1–18 increases from 0.7 to 3.0 mg/day. The major known risk of fluoride deficiency appears to be an increased risk of bacteria-caused tooth cavities. As for safety, the IOM sets tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of fluoride the UL is 10 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women ages 18 and older the AI is set at 2.9 mg/day (including pregnancy and lactation). For men, the value is 3.4 mg/day. For children ages 1–17 years, the AIs increase with age from 0.6 to 3.2 mg/day. These AIs are comparable to the U.S. AIs. The EFSA reviewed safety evidence and set an adult UL at 7.0 mg/day (lower for children). For U.S. food and dietary supplement labeling purposes, the amount of a vitamin or mineral in a serving is expressed as a percent of Daily Value (%DV). Although there is information to set Adequate Intake, fluoride does not have a Daily Value and is not required to be shown on food labels.
Fluoride
Wikipedia
463
155650
https://en.wikipedia.org/wiki/Fluoride
Physical sciences
Halide salts
Chemistry
Estimated daily intake Daily intakes of fluoride can vary significantly according to the various sources of exposure. Values ranging from 0.46 to 3.6–5.4 mg/day have been reported in several studies (IPCS, 1984). In areas where water is fluoridated, this can be expected to be a significant source of fluoride; however, fluoride is also naturally present in virtually all foods and beverages at a wide range of concentrations. The maximum safe daily consumption of fluoride is 10 mg/day for an adult (U.S.) or 7 mg/day (European Union). The upper limit of fluoride intake from all sources (fluoridated water, food, beverages, fluoride dental products and dietary fluoride supplements) is set at 0.10 mg/kg/day for infants, toddlers, and children through to 8 years old. For older children and adults, who are no longer at risk for dental fluorosis, the upper limit of fluoride is set at 10 mg/day regardless of weight. Safety Ingestion According to the U.S. Department of Agriculture, the Dietary Reference Intakes, which specify the "highest level of daily nutrient intake that is likely to pose no risk of adverse health effects", set 10 mg/day for most people, corresponding to 10 L of fluoridated water with no risk. For young children the values are smaller, ranging from 0.7 mg/day to 2.2 mg/day for infants. Water and food sources of fluoride include community water fluoridation, seafood, tea, and gelatin. Soluble fluoride salts, of which sodium fluoride is the most common, are toxic, and have resulted in both accidental and self-inflicted deaths from acute poisoning. The lethal dose for most adult humans is estimated at 5 to 10 g (which is equivalent to 32 to 64 mg elemental fluoride per kg body weight). A case of a fatal poisoning of an adult with 4 grams of sodium fluoride is documented, and a dose of 120 g sodium fluoride has been survived. For sodium fluorosilicate (Na2SiF6), the median lethal dose (LD50) orally in rats is 125 mg/kg, corresponding to 12.5 g for a 100 kg adult.
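The 32–64 mg/kg equivalence quoted above can be checked from standard atomic masses; the sketch below assumes a 70 kg adult, which is an illustrative assumption rather than a figure from the text:

```python
# Check of the 5-10 g lethal-dose figures against the 32-64 mg/kg equivalence.
# Assumes a 70 kg adult; atomic masses are standard values.

M_NA, M_F = 22.99, 19.00                # g/mol
fluoride_fraction = M_F / (M_NA + M_F)  # ~0.45 of NaF mass is elemental fluoride
BODY_WEIGHT_KG = 70                     # assumed adult body weight

for naf_grams in (5, 10):
    fluoride_mg = naf_grams * fluoride_fraction * 1000
    per_kg = fluoride_mg / BODY_WEIGHT_KG
    print(f"{naf_grams} g NaF -> {fluoride_mg:.0f} mg fluoride (~{per_kg:.0f} mg/kg)")

# 5 g NaF  -> ~2262 mg fluoride (~32 mg/kg)
# 10 g NaF -> ~4525 mg fluoride (~65 mg/kg), consistent with the 32-64 mg/kg range.
```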
Fluoride
Wikipedia
484
155650
https://en.wikipedia.org/wiki/Fluoride
Physical sciences
Halide salts
Chemistry