Dataset columns: id (int64, 39 to 79M) · url (string, 32 to 168 chars) · text (string, 7 to 145k chars) · source (string, 2 to 105 chars) · categories (list, 1 to 6 items) · token_count (int64, 3 to 32.2k) · subcategories (list, 0 to 27 items)
315,139
https://en.wikipedia.org/wiki/Metronidazole
Metronidazole, sold under the brand name Flagyl among others, is an antibiotic and antiprotozoal medication. It is used either alone or with other antibiotics to treat pelvic inflammatory disease, endocarditis, and bacterial vaginosis. It is effective for dracunculiasis, giardiasis, trichomoniasis, and amebiasis. It is an option for a first episode of mild-to-moderate Clostridioides difficile colitis if vancomycin or fidaxomicin is unavailable. Metronidazole is available orally (by mouth), as a cream or gel, and by slow intravenous infusion (injection into a vein). Common side effects include nausea, a metallic taste, loss of appetite, and headaches. Occasionally seizures or allergies to the medication may occur. Some sources state that metronidazole should not be used in early pregnancy, while others state that the doses used for trichomoniasis are safe. Metronidazole is generally considered compatible with breastfeeding. Metronidazole began to be commercially used in 1960 in France. It is on the World Health Organization's List of Essential Medicines. It is available in most areas of the world. In 2022, it was the 133rd most commonly prescribed medication in the United States, with more than 4 million prescriptions. Medical uses Metronidazole has activity against some protozoans and most anaerobic bacteria (both Gram-negative and Gram-positive) but not aerobic bacteria. Metronidazole is primarily used to treat: bacterial vaginosis, pelvic inflammatory disease (along with other antibacterials like ceftriaxone), pseudomembranous colitis, aspiration pneumonia, rosacea (topical), fungating wounds (topical), intra-abdominal infections, lung abscess, periodontal disease, amoebiasis, oral infections, giardiasis, trichomoniasis, and infections caused by susceptible anaerobic organisms such as Bacteroides, Fusobacterium, Clostridium, Peptostreptococcus, and Prevotella species. It is also often used to eradicate Helicobacter pylori along with other drugs and to prevent infection in people recovering from surgery. Metronidazole is bitter, so the liquid suspension contains metronidazole benzoate instead. This may require hydrolysis in the gastrointestinal tract, and some sources speculate that it may be unsuitable for people with diarrhea or feeding tubes in the duodenum or jejunum. Bacterial vaginosis Drugs of choice for the treatment of bacterial vaginosis include metronidazole and clindamycin. An effective treatment option for mixed infectious vaginitis is a combination of clotrimazole and metronidazole. Trichomoniasis The 5-nitroimidazole drugs (metronidazole and tinidazole) are the mainstay of treatment for infection with Trichomonas vaginalis. Treatment of both the infected patient and the patient's sexual partner is recommended, even if the partner is asymptomatic. Therapy other than 5-nitroimidazole drugs is also an option, but cure rates are much lower. Giardiasis Oral metronidazole is a treatment option for giardiasis; however, the increasing incidence of nitroimidazole resistance is leading to greater use of other compound classes. Dracunculus In the case of Dracunculus medinensis (Guinea worm), metronidazole merely facilitates worm extraction rather than killing the worm. C. difficile colitis Initial antibiotic therapy for less-severe Clostridioides difficile colitis (pseudomembranous colitis) consists of metronidazole, vancomycin, or fidaxomicin by mouth. In 2017, the IDSA generally recommended vancomycin and fidaxomicin over metronidazole.
Vancomycin by mouth has been shown to be more effective in treating people with severe C. difficile colitis. E. histolytica Entamoeba histolytica invasive amebiasis is treated with metronidazole for eradication, in combination with diloxanide to prevent recurrence. Although it is generally a standard treatment, it is associated with some side effects. Preterm births Metronidazole has also been used in women to prevent preterm birth associated with bacterial vaginosis, amongst other risk factors including the presence of cervicovaginal fetal fibronectin (fFN). Metronidazole was ineffective in preventing preterm delivery in high-risk pregnant women (selected by history and a positive fFN test) and, conversely, the incidence of preterm delivery was found to be higher in women treated with metronidazole. Hypoxic radiosensitizer In addition to its antibiotic properties, attempts were also made to exploit a possible radiation-sensitizing effect of metronidazole in radiation therapy against hypoxic tumors. However, the neurotoxic side effects occurring at the required dosages have prevented the widespread use of metronidazole as an adjuvant agent in radiation therapy. Other nitroimidazoles derived from metronidazole, such as nimorazole, have reduced electron affinity, show less serious neuronal side effects, and have found their way into radio-oncological practice for head and neck tumors in some countries. Perioral dermatitis Canadian Family Physician has recommended topical metronidazole as a third-line treatment for perioral dermatitis, with or without oral tetracycline or oral erythromycin as the first- and second-line treatments respectively. Adverse effects Common adverse drug reactions (≥1% of those treated with the drug) associated with systemic metronidazole therapy include: nausea, diarrhea, weight loss, abdominal pain, vomiting, headache, dizziness, and metallic taste in the mouth. Intravenous administration is commonly associated with thrombophlebitis. Infrequent adverse effects include: hypersensitivity reactions (rash, itch, flushing, fever), headache, dizziness, vomiting, glossitis, stomatitis, dark urine, and paraesthesia. High doses and long-term systemic treatment with metronidazole are associated with the development of leucopenia, neutropenia, increased risk of peripheral neuropathy, and central nervous system toxicity. Common adverse drug reactions associated with topical metronidazole therapy include local redness, dryness, and skin irritation, as well as eye watering (if applied near the eyes). Metronidazole has been associated with cancer in animal studies. In rare cases, it can also cause temporary hearing loss that reverses after cessation of the treatment. Some evidence from studies in rats indicates the possibility that it may contribute to serotonin syndrome, although no case reports documenting this have been published to date. Mutagenesis and carcinogenesis In 2016 metronidazole was listed by the U.S. National Toxicology Program (NTP) as reasonably anticipated to be a human carcinogen. Although some of the testing methods have been questioned, oral exposure has been shown to cause cancer in experimental animals and has also demonstrated some mutagenic effects in bacterial cultures. The relationship between exposure to metronidazole and human cancer is unclear. One study found an excess of lung cancer among women (even after adjusting for smoking), while other studies found either no increased risk or a statistically insignificant risk.
Metronidazole is listed as a possible carcinogen by the World Health Organization's International Agency for Research on Cancer (IARC). A study in people with Crohn's disease also found chromosomal abnormalities in circulating lymphocytes in those treated with metronidazole. Stevens–Johnson syndrome Metronidazole alone rarely causes Stevens–Johnson syndrome, but the syndrome is reported to occur at high rates when metronidazole is combined with mebendazole. Neurotoxicity Several studies in humans and animal models have documented the neurotoxicity of metronidazole. One possible mechanism underlying this toxicity is that metronidazole may interfere with postsynaptic central monoaminergic neurotransmission and immunomodulation. Other research suggests that nitric oxide isoforms and inflammatory cytokines may also play a role. Drug interactions Alcohol Consuming alcohol while taking metronidazole has been suspected in case reports to cause a disulfiram-like reaction with effects that can include nausea, vomiting, flushing of the skin, tachycardia, and shortness of breath. People are often advised not to drink alcohol during systemic metronidazole therapy and for at least 48 hours after completion of treatment. However, some studies question this mechanism of the alcohol-metronidazole interaction and suggest a central toxic serotonin reaction as the cause of the alcohol intolerance. Metronidazole is also generally thought to inhibit the liver metabolism of propylene glycol (found in some foods, medicines, and many electronic cigarette e-liquids); thus propylene glycol may have similar interaction effects with metronidazole. Other drug interactions Metronidazole is a moderate inhibitor of the enzyme CYP2C9, which belongs to the cytochrome P450 family. As a result, metronidazole may interact with medications metabolized by this enzyme. Examples of such medications include lomitapide and warfarin. Pharmacology Mechanism of action Metronidazole is of the nitroimidazole class. It is a prodrug that inhibits nucleic acid synthesis by forming nitroso radicals, which disrupt the DNA of microbial cells. Metronidazole is activated by receiving an electron from reduced ferredoxin produced by pyruvate synthase (PFOR, the anaerobic counterpart of pyruvate dehydrogenase in aerobic organisms), turning it into a highly reactive radical anion. After the radical loses the electron to its target, it recycles back to the unactivated form of metronidazole, ready to be activated again. This function only occurs when metronidazole is partially reduced, and because oxygen competes with metronidazole for the electron, this reduction requires a local environment with a low oxygen concentration, which usually occurs only in anaerobic bacteria and protozoans. Therefore, it has relatively little effect upon human cells or aerobic bacteria. Elevated oxygen levels in the organism decrease the rate at which activated metronidazole is generated and increase the rate of recycling back to unactivated metronidazole. Pharmacokinetics Oral metronidazole is approximately 80% bioavailable via the gut, and peak blood plasma concentrations occur after one to two hours. Food may slow down absorption but does not diminish it. Of the circulating substance, about 20% is bound to plasma proteins. It penetrates well into tissues, the cerebrospinal fluid, the amniotic fluid and breast milk, as well as into abscess cavities.
About 60% of the metronidazole is metabolized by oxidation to the main metabolite hydroxymetronidazole and a carboxylic acid derivative, and by glucuronidation. The metabolites show antibiotic and antiprotozoal activity in vitro. Metronidazole and its metabolites are mainly excreted via the kidneys (77%) and to a lesser extent via the faeces (14%). The biological half-life of metronidazole in healthy adults is eight hours, in infants during the first two months of life about 23 hours, and in premature babies up to 100 hours. The biological activity of hydroxymetronidazole is 30% to 65% of that of the parent compound, and its elimination half-life is longer. The serum half-life of hydroxymetronidazole was 10 hours after a suppository, 19 hours after intravenous infusion, and 11 hours after a tablet. Resistance Resistance in parasites is found in T. vaginalis and G. lamblia, but not E. histolytica, and two major mechanisms are observed. The first involves an impaired oxygen-scavenging capability that increases the local concentration of oxygen, leading to decreased activation and increased recycling of metronidazole. The second mechanism is associated with lowered levels of pyruvate synthase and ferredoxin, the latter due to reduced transcription of the ferredoxin gene. Strains employing this second mechanism will still respond to a higher dosage of metronidazole. Resistance in bacteria is documented in Bacteroides spp. that are resistant to nitroimidazoles, including metronidazole. In the resistant strains, 5-nitroimidazole reductase has been identified as the culprit that actively reduces metronidazole to inactive forms. Eleven types are currently identified, encoded by nimA through nimK respectively. The gene may be located either on the chromosome or on an episome. Other mechanisms may include reduced drug activation, efflux pumps, altered redox potential and biofilm formation. In recent years resistance to metronidazole has become increasingly common, complicating its clinical use. History The drug was initially developed by Rhône-Poulenc in the 1950s and licensed to G.D. Searle. Searle was acquired by Pfizer in 2003. The original patent expired in 1982, but evergreening reformulation occurred thereafter. Brand name In India, it is sold under the brand names Metrogyl and Flagyl. In Bangladesh, it is available as Amodis, Amotrex, Dirozyl, Filmet, Flagyl, Flamyd, Metra, Metrodol, Metryl, etc. In Pakistan, it is sold under the brand names Flagyl and Metrozine. In the United States it is sold under the brand name Noritate. Synthesis 2-Methylimidazole (1) may be prepared via the Debus-Radziszewski imidazole synthesis, or from ethylenediamine and acetic acid, followed by treatment with lime, then Raney nickel. 2-Methylimidazole is nitrated to give 2-methyl-4(5)-nitroimidazole (2), which is in turn alkylated with ethylene oxide or 2-chloroethanol to give metronidazole (3). Research Metronidazole is researched for its anti-inflammatory and immunomodulatory properties. Studies have shown that metronidazole can decrease the production of reactive oxygen species (ROS) and nitric oxide by activated immune cells, such as macrophages and neutrophils. Metronidazole's immunomodulatory properties are thought to be related to its ability to decrease the activation of nuclear factor-kappa B (NF-κB), a transcription factor that regulates the expression of pro-inflammatory cytokines (including chemokines) and adhesion molecules.
Cytokines are small proteins that are secreted by immune cells and play a key role in the immune response. Chemokines are a type of cytokine that act as chemoattractants, meaning they attract and guide immune cells to specific sites in the body where they are needed. Cell adhesion molecules play an important role in the immune response by facilitating the interaction between immune cells and other cells in the body, such as endothelial cells, which form the lining of blood vessels. By inhibiting NF-κB activation, metronidazole can reduce the production of pro-inflammatory cytokines, such as TNF-alpha, IL-6, and IL-1β. Metronidazole has been studied in various immunological disorders, including inflammatory bowel disease, periodontitis, and rosacea. In these conditions, metronidazole is suspected to have anti-inflammatory and immunomodulatory effects that could be therapeutically beneficial. Despite the success of metronidazole in treating rosacea, its exact mechanism of action in that disease is not precisely known; it is unclear whether its antibacterial properties, its immunomodulatory properties, both, or some other mechanism is responsible. Increased ROS production in rosacea is thought to contribute to the inflammatory process and skin damage, so metronidazole's ability to decrease ROS may explain its mechanism of action in this disease, but this remains speculative. Metronidazole is also researched as a potential anti-inflammatory agent in periodontitis treatment. Veterinary use Metronidazole is used to treat infections of Giardia in dogs, cats, and other companion animals, but it does not reliably clear infection with this organism and is being supplanted by fenbendazole for this purpose in dogs and cats. It is also used for the management of chronic inflammatory bowel disease, gastrointestinal infections, periodontal disease, and systemic infections in cats and dogs. Another common usage is the treatment of systemic and/or gastrointestinal clostridial infections in horses. Metronidazole is used in the aquarium hobby to treat ornamental fish and as a broad-spectrum treatment for bacterial and protozoan infections in reptiles and amphibians. In general, the veterinary community may use metronidazole for any potentially susceptible anaerobic infection. The U.S. Food and Drug Administration (FDA) suggests it be used only when necessary, because it has been shown to be carcinogenic in mice and rats, and in order to prevent antimicrobial resistance. The appropriate dosage of metronidazole varies based on the animal species, the condition being treated and the specific formulation of the product. References External links Drugs developed by AbbVie Antiprotozoal agents Disulfiram-like drugs Fishkeeping French inventions IARC Group 2B carcinogens Nitroimidazole antibiotics Drugs developed by Pfizer Wikipedia medicine articles ready to translate Sanofi World Health Organization essential medicines
Metronidazole
[ "Biology" ]
3,971
[ "Antiprotozoal agents", "Biocides" ]
315,212
https://en.wikipedia.org/wiki/Therac-25
The Therac-25 is a computer-controlled radiation therapy machine produced by Atomic Energy of Canada Limited (AECL) in 1982 after the Therac-6 and Therac-20 units (the earlier units had been produced in partnership with CGR of France). The Therac-25 was involved in at least six accidents between 1985 and 1987, in which some patients were given massive overdoses of radiation. Because of concurrent programming errors (also known as race conditions), it sometimes gave its patients radiation doses that were hundreds of times greater than normal, resulting in death or serious injury. These accidents highlighted the dangers of software control of safety-critical systems. The Therac-25 has become a standard case study in health informatics, software engineering, and computer ethics. It highlights the dangers of engineering overconfidence: the engineers dismissed end-user reports, which led to severe consequences. History The French company CGR manufactured the Neptune and Sagittaire linear accelerators. In the early 1970s, CGR and the Canadian public company Atomic Energy of Canada Limited (AECL) collaborated on the construction of linear accelerators controlled by a DEC PDP-11 minicomputer: the Therac-6, which produced X-rays of up to 6 MeV, and the Therac-20, which could produce X-rays or electrons of up to 20 MeV. The computer added convenience, since these accelerators were also capable of operating without computer control. CGR developed the software for the Therac-6 and reused some subroutines for the Therac-20. In 1981, the two companies ended their collaboration agreement. AECL developed a new double-pass concept for electron acceleration in a more confined space, changing its energy source from klystron to magnetron. In certain techniques, the electrons produced are used directly, while in others they are made to collide with a tungsten anode to produce X-ray beams. This dual-mode concept was applied to the Therac-20 and Therac-25, with the latter being much more compact, versatile, and easy to use. It was also more economical for a hospital to have a dual machine that could apply treatments of electrons and X-rays, instead of two machines. The Therac-25 was designed as a machine controlled by a computer, with some safety mechanisms switched from hardware to software as a result. AECL decided not to duplicate some safety mechanisms, and reused modules and code routines from the Therac-20 for the Therac-25. The first prototype of the Therac-25 was built in 1976 and was put on the market in late 1982. The software for the Therac-25 was developed by one person over several years using PDP-11 assembly language. It was an evolution of the Therac-6 software. In 1986, the programmer left AECL. In a subsequent lawsuit, lawyers were unable to identify the programmer or learn about his qualifications and experience. Five machines were installed in the United States and six in Canada. After the accidents, AECL dissolved its AECL Medical division in 1988, and Theratronics International Ltd took over the maintenance of the installed Therac-25 machines. Design The machine had three modes of operation, with a turntable moving some apparatus into position for each of those modes: either a light, some scan magnets, or a tungsten target and flattener. A "field light" mode, which allowed the patient and collimator to be correctly positioned by illuminating the treatment area with visible light.
Direct electron-beam therapy, in which a narrow, low-current beam of high-energy electrons was scanned over the treatment area by magnets; Megavolt X-ray (or photon) therapy, which delivered a beam of 25 MeV X-ray photons. The X-ray photons are produced by colliding a high-current, narrow beam of electrons with a tungsten target. The X-rays are then passed through a flattening filter and measured using an X-ray ion chamber. The flattening filter resembles an inverted ice-cream cone, and it shapes and attenuates the X-rays. The electron beam current required to produce the X-rays is about 100 times greater than that used for electron therapy. The patient is placed on a fixed stretcher. Above them is a turntable to which the components that modify the electron beam are fixed. The turntable has a position for the X-ray mode (photons), another position for the electron mode, and a third position for making adjustments using visible light. In this third position no electron beam is expected, and a light reflected by a stainless steel mirror simulates the beam. In this position there is also no ion chamber acting as a radiation dosimeter, because no radiation beam is expected. The turntable has microswitches that report its position to the computer. When the turntable is in one of the three allowed fixed positions, a plunger locks it in place. In this type of machine, electromechanical locks were traditionally used to ensure that the turntable was in the correct position before starting treatment. In the Therac-25, these were replaced by software checks. Problem description The six documented accidents occurred when the high-current electron beam generated in X-ray mode was delivered directly to patients. Two software faults were to blame. One occurred when the operator incorrectly selected X-ray mode and then quickly changed to electron mode, which allowed the electron beam to remain set for X-ray mode without the X-ray target being in place. A second fault allowed the electron beam to activate during field-light mode, during which no beam scanner was active and no target was in place. Previous models had hardware interlocks to prevent such faults, but the Therac-25 had removed them, depending instead on software checks for safety. The high-current electron beam struck the patients with approximately 100 times the intended dose of radiation, and over a narrower area, delivering a potentially lethal dose of beta radiation. The feeling was described by patient Ray Cox as "an intense electric shock", causing him to scream and run out of the treatment room. Several days later, radiation burns appeared, and the patients showed the symptoms of radiation poisoning; in three cases, the injured patients later died as a result of the overdose. Radiation overexposure incidents Kennestone Regional Oncology Center, 1985 A Therac-25 had been in operation for six months at the Kennestone Regional Oncology Center in Marietta, Georgia, when, on June 3, 1985, radiation therapy following a lumpectomy was being given to 61-year-old Katie Yarbrough. She was set to receive a 10 MeV dose of electron therapy to her clavicle. When therapy began, she stated she experienced a "tremendous force of heat...this red-hot sensation." The technician entered the room, and Katie told her, "You burned me." The technician assured her this was not possible. She returned home where, in the following days, she experienced reddening of the treatment area.
Shortly after, her shoulder became locked in place and she experienced spasms. Within two weeks, the redness had spread from her chest through to her back, indicating that the source of the burn had passed through her body, as is the case with radiation burns. The staff at the treatment center did not believe it was possible for the Therac-25 to cause such an injury, and it was treated as a symptom of her cancer. Later, the hospital physicist consulted AECL about the incident. He calculated that the applied dose was between 15,000 and 20,000 rad (radiation absorbed dose) when she should have received 200 rad. A dose of 1000 rad can be fatal. In October 1985, Katie sued the hospital and the manufacturer of the machine. In November 1985, AECL was notified of the lawsuit. It was not until March 1986, after another incident involving the Therac-25, that AECL informed the FDA that it had received a complaint from the patient. Due to the radiation overdose, her breast had to be surgically removed, her arm and shoulder were left immobilized, and she was in constant pain. The treatment printout function was not activated at the time of treatment, so there was no record of the applied radiation data. An out-of-court settlement was reached to resolve the lawsuit. Ontario Cancer Foundation, 1985 The Therac-25 had been in operation in the clinic for six months when, on July 26, 1985, a 40-year-old patient was receiving her 24th treatment for cervical cancer. The operator activated the treatment, but after five seconds the machine stopped with the error message "H-tilt", a treatment-pause indication, and a dosimeter reading indicating that no radiation had been delivered. The operator pressed the "Proceed" key to continue. The machine stopped again. The operator repeated the process five times, until the machine suspended the treatment. A technician was called and found no problem. The machine was used to treat six other patients on the same day. The patient complained of burning and swelling in the area and was hospitalized on July 30. A radiation overdose was suspected, and the machine was taken out of service. On November 3, 1985, the patient died of cancer, although the autopsy noted that had she survived, she would have required a hip replacement due to damage from the radiation overdose. A technician estimated that she received between 13,000 and 17,000 rad. The incident was reported to the FDA and the Canadian Radiation Protection Bureau. AECL suspected that there might be an error in the three microswitches that reported the position of the turntable. AECL was unable to replicate a failure of the microswitches, and microswitch testing was inconclusive. It then changed the design to tolerate a single microswitch failure and modified the software to check whether the turntable was moving or in a treatment position. Afterward, AECL claimed that the modifications represented a five-order-of-magnitude increase in safety. Yakima Valley Memorial Hospital, 1985 In December 1985 a woman developed erythema with a parallel band pattern after receiving treatment from a Therac-25 unit. Hospital staff sent a letter on January 31, 1986, to AECL about the incident. AECL responded with two pages detailing the reasons why a radiation overdose was impossible on the Therac-25, stating that neither machine failure nor operator error was possible. Six months later, the patient developed chronic ulcers under the skin due to tissue necrosis. She had surgery and skin grafts were placed.
The patient continued to live with minor sequelae. East Texas Cancer Center, Tyler, March 1986 Over two years, this hospital treated more than 500 patients with the Therac-25 without incident. On March 21, 1986, a patient presented for his ninth treatment session for a tumor on his back. The treatment was to be 22 MeV of electrons at a dose of 180 rad over a 10 × 17 cm area, for an accumulated dose of 6,000 rad over six weeks. The experienced operator entered the session data and realized that she had written an "x" for 'x-ray' instead of an "e" for 'electron beam' as the type of treatment. With the cursor she moved up and changed the "x" to an "e", and since the rest of the parameters were correct she moved down to the command box. All parameters were marked "Verified" and the message "Beam ready" was displayed. She pressed the "Beam on" key. The machine stopped and displayed the message "Malfunction 54" (error 54). It also showed 'Treatment pause'. The manual said that the "Malfunction 54" message was a "dose input 2" error. A technician later testified that "dose input 2" meant that the radiation delivered was either too high or too low. The dose monitor showed 6 units delivered when 202 units had been requested. The operator pressed the "Proceed" key to continue. The machine stopped again with the message "Malfunction 54", and the dosimeter again indicated that it had delivered fewer units than required. The surveillance camera in the radiation room was offline and the intercom had been broken that day. With the first dose the patient felt an electric shock and heard a crackle from the machine. Since it was his ninth session, he realized that it was not normal. He started to get up from the table to ask for help. At that moment the operator pressed the "Proceed" key to continue the treatment. The patient felt a shock of electricity through his arm, as if his hand were being torn off. He reached the door and began to bang on it until the operator opened it. A physician was immediately called; he observed intense erythema in the area but suspected it had been nothing more than an electric shock, and sent the patient home. The hospital physicist checked the machine and, because it was within its calibration specification, it continued to treat patients for the rest of the day. The technicians were unaware that the patient had received a massive dose of radiation of between 16,500 and 25,000 rad in less than a second, over an area of about 1 cm². The crackling of the machine had been produced by saturation of the ionization chambers, which is why they indicated that the applied radiation dose had been very low. Over the following weeks the patient experienced paralysis of the left arm, nausea, and vomiting, and was eventually hospitalized for radiation-induced myelitis of the spinal cord. His legs, mid-diaphragm, and vocal cords became paralyzed. He also had recurrent herpes simplex skin infections. He died five months after the overdose. From the day after the accident, AECL technicians checked the machine and were unable to replicate error 54. They checked the grounding of the machine to rule out electric shock as the cause. The machine was back in operation on April 7, 1986. East Texas Cancer Center, Tyler, April 1986 On April 11, 1986, a patient was to receive electron treatment for skin cancer on the face. The prescription was 10 MeV over an area of 7 × 10 cm. The operator was the same as the one in the March incident, three weeks earlier.
After filling in all the treatment data she realized that she had to change the mode from X to E. She did so and moved down to the command box. When "Beam ready" was displayed, she pressed the "Proceed" key. The machine produced a loud noise, which was heard through the intercom. Error 54 was displayed. The operator entered the room and the patient described a burning sensation on his face. The patient died on May 1, 1986, just shy of three weeks later. The autopsy showed severe radiation damage to the right temporal lobe and brain stem. The hospital physicist stopped the machine treatments and notified AECL. After strenuous work, the physicist and the operator were able to reproduce the error 54 message. They determined that the speed of editing during data entry was the key factor in producing the error; after much practice, the physicist was able to reproduce error 54 at will. AECL stated that it could not reproduce the error, and only succeeded after following the physicist's instructions to make the data entry very rapid. Yakima Valley Memorial Hospital, 1987 On January 17, 1987, a patient was to receive a treatment with two film-verification exposures of 4 and 3 rad, plus a 79-rad photon treatment for a total exposure of 86 rad. Film was placed under the patient and 4 rad were administered through a 22 cm × 18 cm opening. The machine was stopped, the aperture was opened to 35 cm × 35 cm and a dose of 3 rad was administered. The machine stopped. The operator entered the room to remove the film plates and adjust the patient's position. He used the hand control inside the room to adjust the turntable. He left the room, forgetting the film plates. In the control room, after seeing the "Beam ready" message, he pressed the key to fire the beam. After 5 seconds the machine stopped and displayed a message that quickly disappeared. Since the machine was paused, the operator pressed the "Proceed" key to continue. The machine stopped again, showing "Flatness" as the reason. The operator heard the patient on the intercom, but could not understand him, and entered the room. The patient had felt a severe burning sensation in his chest. The screen showed that he had only been given 7 rad. A few hours later, the patient showed burns on the skin in the area. Four days later the reddening of the area had a banded pattern similar to the one produced in the incident the previous year, for which no cause had been found. AECL began an investigation but was unable to reproduce the event. The hospital physicist conducted tests with film plates to see if he could recreate the incident, using two exposures with X-ray parameters and the turntable in the field-light position. The film appeared to match the film that had been left by mistake under the patient during the accident. It was found that the patient had been exposed to between 8,000 and 10,000 rad instead of the prescribed 86 rad. The patient died in April 1987 from complications of the radiation overdose. The family filed a lawsuit that ended in an out-of-court settlement. Root causes A commission attributed the primary cause to generally poor software design and development practices, rather than singling out specific coding errors. In particular, the software was designed so that it was realistically impossible to test it in a rigorous, automated way. Researchers who investigated the accidents found several contributing causes.
These included the following institutional causes: AECL did not have the software code independently reviewed and chose to rely on in-house code, including the operating system. AECL did not consider the design of the software during its assessment of how the machine might produce the desired results and what failure modes existed, focusing purely on hardware and asserting that the software was free of bugs. Machine operators were reassured by AECL personnel that overdoses were impossible, leading them to dismiss the Therac-25 as the potential cause of many incidents. AECL had never tested the Therac-25 with the combination of software and hardware until it was assembled at the hospital. The researchers also found several engineering issues: Several error messages merely displayed the word "MALFUNCTION" followed by a number from 1 to 64. The user manual did not explain or even address the error codes, nor give any indication that these errors could pose a threat to patient safety. The system distinguished between errors that halted the machine, requiring a restart, and errors which merely paused the machine (which allowed operators to continue with the same settings using a keypress). However, some errors which endangered the patient merely paused the machine, and the frequent occurrence of minor errors caused operators to become accustomed to habitually unpausing the machine. One failure occurred when a particular sequence of keystrokes was entered on the VT100 terminal that controlled the PDP-11 computer: If the operator were to press "X" to (erroneously) select 25 MeV photon mode, then use "cursor up" to edit the input to "E" to (correctly) select 25 MeV electron mode, then "Enter", all within eight seconds of the first keypress and well within the capability of an experienced user of the machine, the edit would not be processed and an overdose could be administered. These edits went unnoticed because the machine took about eight seconds to complete its setup, during which changed values were not re-read, so it proceeded with the parameters that had already been entered. The design did not have any hardware interlocks to prevent the electron beam from operating in its high-energy mode without the target in place. The engineer had reused software from the Therac-6 and Therac-20, which used hardware interlocks that masked their software defects. Those hardware safeties had no way of reporting that they had been triggered, so preexisting errors were overlooked. The hardware provided no way for the software to verify that sensors were working correctly. The table-position system was the first implicated in Therac-25's failures; the manufacturer revised it with redundant switches to cross-check their operation. The software set a flag variable by incrementing it, rather than by setting it to a fixed non-zero value. Occasionally an arithmetic overflow occurred, causing the flag to return to zero and the software to bypass safety checks (a simplified sketch of this failure appears after this list). Leveson notes that a lesson to be drawn from the incident is to not assume that reused software is safe: "A naive assumption is often made that reusing software or using commercial off-the-shelf software will increase safety because the software will have been exercised extensively. Reusing software modules does not guarantee safety in the new system to which they are transferred ..." In response to incidents like those associated with Therac-25, the IEC 62304 standard was created, which introduces development life cycle standards for medical device software and specific guidance on using software of unknown pedigree.
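The incrementing-flag defect lends itself to a short illustration. The sketch below is not the original PDP-11 assembly; the names (check_flag, request_position_check) and the explicit 8-bit wrap-around are assumptions used to mirror the failure mode described above, in which every 256th increment rolls the flag back to zero and the position check is silently skipped.

```python
# Illustrative sketch of the Therac-25 "incrementing flag" defect, not the original code.
# Assumption: a one-byte flag is raised by incrementing it; zero means "no check needed".

class SetupLoop:
    def __init__(self):
        self.check_flag = 0  # one-byte flag; nonzero should mean "verify turntable position"

    def request_position_check(self):
        # BUG: the flag is incremented instead of being set to a fixed nonzero value,
        # so 8-bit arithmetic wraps it back to 0 on every 256th call.
        self.check_flag = (self.check_flag + 1) & 0xFF

    def run_pass(self, verify):
        self.request_position_check()
        if self.check_flag != 0:
            verify()          # turntable/collimator position is checked
            return True
        return False          # flag wrapped to 0 -> safety check silently skipped

loop = SetupLoop()
skipped = [n for n in range(1, 1001) if not loop.run_pass(verify=lambda: None)]
print(skipped)  # -> [256, 512, 768]: every 256th pass skips the check
```

In this sketch, assigning a fixed nonzero value (for example, check_flag = 1) instead of incrementing removes the wrap-around failure entirely.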
See also 1962 Mexico City radiation accident 1990 Clinic of Zaragoza radiotherapy accident Ciudad Juárez cobalt-60 contamination incident Computer ethics Goiânia accident High integrity software IEC 62304 Ionizing radiation List of civilian radiation accidents List of orphan source incidents Nuclear and radiation accidents and incidents Radiation protection Radioactive scrap metal Samut Prakan radiation accident List of software bugs References Further reading (short summary of the Therac-25 Accidents) Software bugs Health disasters Nuclear medicine Health disasters in Canada Engineering failures Radiation accidents and incidents
Therac-25
[ "Technology", "Engineering" ]
4,461
[ "Systems engineering", "Reliability engineering", "Technological failures", "Engineering failures", "Civil engineering" ]
316,061
https://en.wikipedia.org/wiki/Sluice
A sluice is a water channel containing a sluice gate, a type of lock to manage the water flow and water level. It can also be an open channel which processes material, such as a river sluice used in gold prospecting or fossicking. A mill race, leet, flume, penstock or lade is a sluice channeling water toward a water mill. The terms sluice, sluice gate, knife gate, and slide gate are used interchangeably in the water and wastewater control industry. Operation "Sluice gate" refers to a movable gate allowing water to flow under it. When a sluice is lowered, water may spill over the top, in which case the gate operates as a weir. Usually, a mechanism drives the sluice up or down. This may be simple and hand-operated (a chain to raise and lower the gate, a worm drive, or a rack-and-pinion drive), or it may be electrically or hydraulically powered. A flap sluice, however, operates automatically, without external intervention or inputs. Types of sluice gates Flap sluice gate A fully automatic type, controlled by the pressure head across it; operation is similar to that of a check valve. It is a gate hinged at the top. When pressure comes from one side, the gate is kept closed; pressure from the other side opens the sluice once a threshold pressure is exceeded. Vertical rising sluice gate A plate sliding in the vertical direction, which may be controlled by machinery. Radial sluice gate A structure in which a small part of a cylindrical surface serves as the gate, supported by radial arms running through the cylinder's radius. A counterweight is sometimes provided. Rising sector sluice gate Also a part of a cylindrical surface, which rests at the bottom of the channel and rises by rotating around its centre. Needle sluice A sluice formed by a number of thin needles held against a solid frame by water pressure, as in a needle dam. Fan gate This type of gate was invented by a Dutch hydraulic engineer in 1808, who was Inspector-General for Waterstaat (water resource management) of the Kingdom of Holland at the time. The fan gate has the special property that it can open toward the high-water side using water pressure alone. This gate type was primarily used to purposely inundate certain regions, for instance in the case of the Hollandic Water Line. Nowadays this type of gate can still be found in a few places, for example in Gouda. A fan gate has a separate chamber that can be filled with water and that is separated from the high-water side of the sluice by a large door. When a tube connecting this chamber with the high-water side of the sluice is opened, the water level, and with it the water pressure in the chamber, rises to the same level as on the high-water side. As there is then no height difference across the larger door, the water exerts no net force on it. However, the smaller door still has a higher level on its upstream side, which exerts a force that keeps the gate closed. When the tube to the low-water side is opened, the water level in the chamber falls. Due to the difference in the surface areas of the two doors, there is then a net force that opens the gate. Designing the sluice gate Sluice gates are one of the most common hydraulic structures used to control or measure the flow in open channels. Vertical rising sluice gates are the most common in open channels and can operate under two flow regimes: free flow and submerged flow; a rough numerical sketch of the free-flow case is given below.
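The sketch below uses the standard textbook discharge relation for a vertical gate in free flow, Q = Cd · b · a · sqrt(2 · g · y0). The formula, the coefficient value, and the example numbers are general hydraulics assumptions chosen for illustration, not figures taken from this article.

```python
import math

def free_flow_discharge(b, a, y0, cd=0.6, g=9.81):
    """Estimate discharge (m^3/s) under a vertical sluice gate in free flow.

    b  : gate width (m)
    a  : gate opening (m)
    y0 : upstream water depth (m)
    cd : discharge coefficient (a typical textbook value of about 0.6 is assumed)
    """
    return cd * b * a * math.sqrt(2 * g * y0)

# Example: a 2 m wide gate opened 0.3 m with 1.5 m of upstream depth
print(round(free_flow_discharge(b=2.0, a=0.3, y0=1.5), 2), "m^3/s")  # ~1.95 m^3/s
```

Submerged flow requires a lower effective coefficient that also depends on the downstream depth, which is why the depths listed next all matter in practice.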
The most important depths in the design of sluice gates are the upstream depth, the opening of the sluice gate, the minimum depth of flow just downstream of the gate, the initial and secondary depths of the hydraulic jump, and the downstream depth. Logging sluices In the mountains of the United States, sluices transported logs from steep hillsides to downslope sawmill ponds or yarding areas. Nineteenth-century logging was traditionally a winter activity for men who spent summers working on farms. Where there were freezing nights, water might be applied to logging sluices every night so a fresh coating of slippery ice would reduce friction of logs placed in the sluice the following morning. Placer mining applications Sluice boxes are often used in the recovery of black sands, gold, and other minerals from placer deposits during placer mining operations. They may be small-scale, as used in prospecting, or much larger, as in commercial operations, where the material is sometimes screened using a trommel, screening plant or sieve. Traditional sluices have transverse riffles over a carpet or rubber matting, which trap the heavy minerals, gemstones, and other valuable minerals. Since the early 2000s, more miners and prospectors have been relying on more modern and effective matting systems. The result is a concentrate which requires additional processing. Types of material Aluminium Most sluices are formed from aluminium, using a press brake to form a U shape. Wood Traditionally, wood was the material of choice for sluice gates. Cast iron Cast iron has been popular for constructing sluice gates for years; it provides the strength needed to withstand high water pressures. Stainless steel In most cases, stainless steel is lighter than the older cast iron. Fibre-reinforced plastic (FRP) In modern times, newer materials such as fibre-reinforced plastic are being used to build sluices. These modern technologies have many of the attributes of the older materials, while introducing advantages such as corrosion resistance and much lighter weight. Regional names for sluice gates In the Somerset Levels, sluice gates are known as clyse or clyce. Most of the inhabitants of Guyana refer to sluices as kokers. The Sinhala people in Sri Lanka, who had an ancient civilization based on harvested rain water, refer to sluices as Horovuwa. See also Control lock Floodgate Gatehouse (waterworks) – An (elaborate) structure to house a sluice gate Lock Rhyne Zijlstra – A Dutch name referring to one who lives near a sluice Van der Sluijs – A Dutch surname derived from the word for sluice Hydraulic engineering Canal List of canals by country References Further reading External links Soar Valley Sluice Gates Salt/Fresh water separating Sluice Complex (Part of DeltaWorks) Canals Hydraulic engineering Water transport infrastructure Dutch words and phrases
Sluice
[ "Physics", "Engineering", "Environmental_science" ]
1,354
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
316,524
https://en.wikipedia.org/wiki/Fructose%20bisphosphatase%20deficiency
In fructose bisphosphatase deficiency, there is not enough fructose bisphosphatase for gluconeogenesis to occur correctly. Glycolysis (the breakdown of glucose) will still work, as it does not use this enzyme. History The first known description of a patient with this condition was published in 1970 in The Lancet. Early research into the disorder was conducted by a team led by Anthony S. Pagliara and Barbara Illingworth Brown at Washington University Medical Center, based on the case of an infant girl from Oak Ridge, Missouri. Presentation Without effective gluconeogenesis (GNG), hypoglycaemia will set in after about 12 hours of fasting. This is the point at which liver glycogen stores have been exhausted and the body has to rely on GNG. When a dose of glucagon is given (which would normally increase blood glucose), nothing happens, as glycogen stores are depleted and GNG does not work. (In fact, the patient would already have high glucagon levels.) There is no problem with the metabolism of glucose or galactose, but fructose and glycerol cannot be used by the liver to maintain blood glucose levels. If fructose or glycerol is given, there will be a buildup of phosphorylated three-carbon sugars. This leads to phosphate depletion within the cells, and also in the blood. Without phosphate, ATP cannot be made, and many cell processes cannot occur. High levels of glucagon tend to release fatty acids from adipose tissue; these combine with the glycerol that the liver cannot use to form triacylglycerols, causing a fatty liver. As three-carbon molecules cannot be used to make glucose, they are instead converted into pyruvate and lactate. These acids cause a drop in the pH of the blood (a metabolic acidosis). Acetyl CoA (acetyl coenzyme A) will also build up, leading to the creation of ketone bodies. Diagnosis Diagnosis is made by measuring FDPase activity in cultured lymphocytes and confirmed by detecting a mutation of FBP1, the gene encoding FDPase. Treatment People with a deficiency of this enzyme must avoid situations in which gluconeogenesis is needed to make glucose. This can be accomplished by not fasting for long periods and by eating high-carbohydrate food. They should avoid fructose-containing foods (as well as sucrose, which breaks down into fructose). As with all single-gene metabolic disorders, there is always hope for genetic therapy, inserting a healthy copy of the gene into existing liver cells. See also Fructose Gluconeogenesis Metabolism References External links Inborn errors of carbohydrate metabolism
Fructose bisphosphatase deficiency
[ "Chemistry" ]
601
[ "Inborn errors of carbohydrate metabolism", "Carbohydrate metabolism" ]
316,993
https://en.wikipedia.org/wiki/Emulsion%20polymerization
In polymer chemistry, emulsion polymerization is a type of radical polymerization that usually starts with an emulsion incorporating water, monomers, and surfactants. The most common type of emulsion polymerization is an oil-in-water emulsion, in which droplets of monomer (the oil) are emulsified (with surfactants) in a continuous phase of water. Water-soluble polymers, such as certain polyvinyl alcohols or hydroxyethyl celluloses, can also be used to act as emulsifiers/stabilizers. The name "emulsion polymerization" is a misnomer that arises from a historical misconception. Rather than occurring in emulsion droplets, polymerization takes place in the latex/colloid particles that form spontaneously in the first few minutes of the process. These latex particles are typically 100 nm in size, and are made of many individual polymer chains. The particles are prevented from coagulating with each other because each particle is surrounded by the surfactant ('soap'); the charge on the surfactant repels other particles electrostatically. When water-soluble polymers are used as stabilizers instead of soap, the repulsion between particles arises because these water-soluble polymers form a 'hairy layer' around a particle that repels other particles, because pushing particles together would involve compressing these chains. Emulsion polymerization is used to make several commercially important polymers. Many of these polymers are used as solid materials and must be isolated from the aqueous dispersion after polymerization. In other cases the dispersion itself is the end product. A dispersion resulting from emulsion polymerization is often called a latex (especially if derived from a synthetic rubber) or an emulsion (even though "emulsion" strictly speaking refers to a dispersion of an immiscible liquid in water). These emulsions find applications in adhesives, paints, paper coating and textile coatings. They are often preferred over solvent-based products in these applications due to the absence of volatile organic compounds (VOCs) in them. Advantages of emulsion polymerization include: High molecular weight polymers can be made at fast polymerization rates. By contrast, in bulk and solution free-radical polymerization, there is a tradeoff between molecular weight and polymerization rate. The continuous water phase is an excellent conductor of heat, enabling fast polymerization rates without loss of temperature control. Since polymer molecules are contained within the particles, the viscosity of the reaction medium remains close to that of water and is not dependent on molecular weight. The final product can be used as is and does not generally need to be altered or processed. Disadvantages of emulsion polymerization include: Surfactants and other polymerization adjuvants remain in the polymer or are difficult to remove For dry (isolated) polymers, water removal is an energy-intensive process Emulsion polymerizations are usually designed to operate at high conversion of monomer to polymer. This can result in significant chain transfer to polymer. Can not be used for condensation, ionic, or Ziegler-Natta polymerization, although some exceptions are known. History The early history of emulsion polymerization is connected with the field of synthetic rubber. The idea of using an emulsified monomer in an aqueous suspension or emulsion was first conceived at Bayer, before World War I, in an attempt to prepare synthetic rubber. 
The impetus for this development was the observation that natural rubber is produced at room temperature in dispersed particles stabilized by colloidal polymers, so the industrial chemists tried to duplicate these conditions. The Bayer workers used naturally occurring polymers such as gelatin, ovalbumin, and starch to stabilize their dispersion. By today's definition these were not true emulsion polymerizations, but suspension polymerizations. The first "true" emulsion polymerizations, which used a surfactant and polymerization initiator, were conducted in the 1920s to polymerize isoprene. Over the next twenty years, through the end of World War II, efficient methods for production of several forms of synthetic rubber by emulsion polymerization were developed, but relatively few publications in the scientific literature appeared: most disclosures were confined to patents or were kept secret due to wartime needs. After World War II, emulsion polymerization was extended to production of plastics. Manufacture of dispersions to be used in latex paints and other products sold as liquid dispersions commenced. Ever more sophisticated processes were devised to prepare products that replaced solvent-based materials. Ironically, synthetic rubber manufacture turned more and more away from emulsion polymerization as new organometallic catalysts were developed that allowed much better control of polymer architecture. Theoretical overview The first successful theory to explain the distinct features of emulsion polymerization was developed by Smith and Ewart, and Harkins in the 1940s, based on their studies of polystyrene. Smith and Ewart arbitrarily divided the mechanism of emulsion polymerization into three stages or intervals. Subsequently, it has been recognized that not all monomers or systems undergo these particular three intervals. Nevertheless, the Smith-Ewart description is a useful starting point to analyze emulsion polymerizations. The Smith-Ewart-Harkins theory for the mechanism of free-radical emulsion polymerization is summarized by the following steps: A monomer is dispersed or emulsified in a solution of surfactant and water, forming relatively large droplets in water. Excess surfactant creates micelles in the water. Small amounts of monomer diffuse through the water to the micelle. A water-soluble initiator is introduced into the water phase where it reacts with monomer in the micelles. (This characteristic differs from suspension polymerization where an oil-soluble initiator dissolves in the monomer, followed by polymer formation in the monomer droplets themselves.) This is considered Smith-Ewart interval 1. The total surface area of the micelles is much greater than the total surface area of the fewer, larger monomer droplets; therefore the initiator typically reacts in the micelle and not the monomer droplet. Monomer in the micelle quickly polymerizes and the growing chain terminates. At this point the monomer-swollen micelle has turned into a polymer particle. When both monomer droplets and polymer particles are present in the system, this is considered Smith-Ewart interval 2. More monomer from the droplets diffuses to the growing particle, where more initiators will eventually react. Eventually the free monomer droplets disappear and all remaining monomer is located in the particles. This is considered Smith-Ewart interval 3. 
Depending on the particular product and monomer, additional monomer and initiator may be continuously and slowly added to maintain their levels in the system as the particles grow. The final product is a dispersion of polymer particles in water. It can also be known as a polymer colloid, a latex, or commonly and inaccurately as an 'emulsion'. Smith-Ewart theory does not predict the specific polymerization behavior when the monomer is somewhat water-soluble, like methyl methacrylate or vinyl acetate. In these cases homogeneous nucleation occurs: particles are formed without the presence or need for surfactant micelles. High molecular weights are developed in emulsion polymerization because the concentration of growing chains within each polymer particle is very low. In conventional radical polymerization, the concentration of growing chains is higher, which leads to termination by coupling and ultimately results in shorter polymer chains. The original Smith-Ewart-Harkins mechanism required each particle to contain either zero or one growing chain. Improved understanding of emulsion polymerization has relaxed that criterion to include more than one growing chain per particle; however, the number of growing chains per particle is still considered to be very low. Because of the complex chemistry that occurs during an emulsion polymerization, including polymerization kinetics and particle formation kinetics, quantitative understanding of the mechanism of emulsion polymerization has required extensive computer simulation. Robert Gilbert has summarized a recent theory. More detailed treatment of Smith-Ewart theory Interval 1 When radicals generated in the aqueous phase encounter the monomer within the micelle, they initiate polymerization. The conversion of monomer to polymer within the micelle lowers the monomer concentration and generates a monomer concentration gradient. Consequently, monomer from the monomer droplets and uninitiated micelles begins to diffuse to the growing, polymer-containing particles. Those micelles that did not encounter a radical during the earlier stage of conversion begin to disappear, losing their monomer and surfactant to the growing particles. The theory predicts that after the end of this interval, the number of growing polymer particles remains constant. Interval 2 This interval is also known as the steady-state reaction stage. Throughout this stage, monomer droplets act as reservoirs supplying monomer to the growing polymer particles by diffusion through the water. At steady state, the average number of free radicals per particle can be divided into three cases. When the number of free radicals per particle is less than 1/2, this is called Case 1. When the number of free radicals per particle equals 1/2, this is called Case 2. And when there is more than 1/2 radical per particle, this is called Case 3. Smith-Ewart theory predicts that Case 2 is the predominant scenario, for the following reasons. A monomer-swollen particle that has been struck by a radical contains one growing chain. Because only one radical (at the end of the growing polymer chain) is present, the chain cannot terminate, and it will continue to grow until a second initiator radical enters the particle. As the rate of termination is much greater than the rate of propagation, and because the polymer particles are extremely small, chain growth is terminated immediately after the entrance of the second initiator radical.
The particle then remains dormant until a third initiator radical enters, initiating the growth of a second chain. Consequently, the polymer particles in this case have either zero radicals (dormant state) or one radical (polymer-growing state), with a very short period of two radicals (terminating state) that can be ignored for the free-radicals-per-particle calculation. At any given time, a micelle contains either one growing chain or no growing chains (assumed to be equally probable). Thus, on average, there is around 1/2 radical per particle, leading to the Case 2 scenario. The polymerization rate in this stage can be expressed as $R_p = k_p [M]_p [P\cdot]$, where $k_p$ is the homogeneous propagation rate constant for polymerization within the particles and $[M]_p$ is the equilibrium monomer concentration within a particle. $[P\cdot]$ represents the overall concentration of polymerizing radicals in the reaction. For Case 2, where the average number of free radicals per micelle is 1/2, $[P\cdot]$ can be calculated from the following expression: $[P\cdot] = \frac{N}{2 N_A}$, where $N$ is the number concentration of micelles (number of micelles per unit volume), and $N_A$ is the Avogadro constant (about $6.022 \times 10^{23}\ \mathrm{mol}^{-1}$). Consequently, the rate of polymerization is then $R_p = \frac{k_p [M]_p N}{2 N_A}$ (a short numerical sketch of this expression is given at the end of this passage). Interval 3 Separate monomer droplets disappear as the reaction continues. Polymer particles in this stage may be large enough that they contain more than one radical per particle. Process considerations Emulsion polymerizations have been used in batch, semi-batch, and continuous processes. The choice depends on the properties desired in the final polymer or dispersion and on the economics of the product. Modern process control schemes have enabled the development of complex reaction processes, with ingredients such as initiator, monomer, and surfactant added at the beginning, during, or at the end of the reaction. Early styrene-butadiene rubber (SBR) recipes are examples of true batch processes: all ingredients added at the same time to the reactor. Semi-batch recipes usually include a programmed feed of monomer to the reactor. This enables a starve-fed reaction to ensure a good distribution of monomers into the polymer backbone chain. Continuous processes have been used to manufacture various grades of synthetic rubber. Some polymerizations are stopped before all the monomer has reacted. This minimizes chain transfer to polymer. In such cases the monomer must be removed or stripped from the dispersion. Colloidal stability is a factor in the design of an emulsion polymerization process. For dry or isolated products, the polymer dispersion must be isolated, or converted into solid form. This can be accomplished by simple heating of the dispersion until all water evaporates. More commonly, the dispersion is destabilized (sometimes called "broken") by addition of a multivalent cation. Alternatively, acidification will destabilize a dispersion with a carboxylic acid surfactant. These techniques may be employed in combination with application of shear to increase the rate of destabilization. After isolation of the polymer, it is usually washed, dried, and packaged. By contrast, products sold as a dispersion are designed with a high degree of colloidal stability. Colloidal properties such as particle size, particle size distribution, and viscosity are of critical importance to the performance of these dispersions. Living polymerization processes carried out via emulsion polymerization, such as iodine-transfer polymerization and RAFT, have been developed. Controlled coagulation techniques can enable better control of the particle size and distribution.
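Returning to the Interval 2 rate expression above, the following is a minimal Python sketch (not from the article) that simply evaluates $R_p = k_p [M]_p N / (2 N_A)$. The parameter values are hypothetical, order-of-magnitude placeholders for a styrene-like recipe, not measured data.

```python
# Minimal sketch: evaluating the Smith-Ewart Case 2 (Interval 2) rate expression
# R_p = k_p * [M]_p * N / (2 * N_A). All numerical inputs below are illustrative.

AVOGADRO = 6.022e23  # mol^-1

def interval2_rate(kp, monomer_conc_in_particle, particle_number_conc, n_bar=0.5):
    """Polymerization rate in mol L^-1 s^-1.

    kp                       -- propagation rate constant, L mol^-1 s^-1
    monomer_conc_in_particle -- [M]_p, equilibrium monomer concentration in a particle, mol L^-1
    particle_number_conc     -- N, number of particles per litre
    n_bar                    -- average number of radicals per particle (1/2 for Case 2)
    """
    radical_conc = n_bar * particle_number_conc / AVOGADRO  # overall radical concentration, mol L^-1
    return kp * monomer_conc_in_particle * radical_conc

# Hypothetical inputs: kp ~ 240 L mol^-1 s^-1, [M]_p ~ 5 mol L^-1, N ~ 1e17 per litre.
print(interval2_rate(kp=240.0, monomer_conc_in_particle=5.0, particle_number_conc=1e17))
```

The point of the sketch is only that, at fixed $[M]_p$, the rate scales linearly with the particle number concentration, which is why the particle nucleation occurring in Interval 1 largely sets the rate observed during Interval 2.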
Components Monomers Typical monomers are those that undergo radical polymerization, are liquid or gaseous at reaction conditions, and are poorly soluble in water. Solid monomers are difficult to disperse in water. If monomer solubility is too high, particle formation may not occur and the reaction kinetics reduce to that of solution polymerization. Ethene and other simple olefins must be polymerized at very high pressures (up to 800 bar). Comonomers Copolymerization is common in emulsion polymerization. The same rules and comonomer pairs that exist in radical polymerization operate in emulsion polymerization. However, copolymerization kinetics are greatly influenced by the aqueous solubility of the monomers. Monomers with greater aqueous solubility will tend to partition in the aqueous phase and not in the polymer particle. They will not get incorporated as readily in the polymer chain as monomers with lower aqueous solubility. This can be avoided by a programmed addition of monomer using a semi-batch process. Ethene and other alkenes are used as minor comonomers in emulsion polymerization, notably in vinyl acetate copolymers. Small amounts of acrylic acid or other ionizable monomers are sometimes used to confer colloidal stability to a dispersion. Initiators Both thermal and redox generation of free radicals have been used in emulsion polymerization. Persulfate salts are commonly used in both initiation modes. The persulfate ion readily breaks up into sulfate radical ions above about 50 °C, providing a thermal source of initiation. Redox initiation takes place when an oxidant such as a persulfate salt, a reducing agent such as glucose, Rongalite, or sulfite, and a redox catalyst such as an iron compound are all included in the polymerization recipe. Redox recipes are not limited by temperature and are used for polymerizations that take place below 50 °C. Although organic peroxides and hydroperoxides are used in emulsion polymerization, initiators are usually water soluble and partition into the water phase. This enables the particle generation behavior described in the theory section. In redox initiation, either the oxidant or the reducing agent (or both) must be water-soluble, but one component can be water-insoluble. Surfactants Selection of the correct surfactant is critical to the development of any emulsion polymerization process. The surfactant must enable a fast rate of polymerization, minimize coagulum or fouling in the reactor and other process equipment, prevent an unacceptably high viscosity during polymerization (which leads to poor heat transfer), and maintain or even improve properties in the final product such as tensile strength, gloss, and water absorption. Anionic, nonionic, and cationic surfactants have been used, although anionic surfactants are by far most prevalent. Surfactants with a low critical micelle concentration (CMC) are favored; the polymerization rate shows a dramatic increase when the surfactant level is above the CMC, and minimization of the surfactant is preferred for economic reasons and the (usually) adverse effect of surfactant on the physical properties of the resulting polymer. Mixtures of surfactants are often used, including mixtures of anionic with nonionic surfactants. Mixtures of cationic and anionic surfactants form insoluble salts and are not useful. Examples of surfactants commonly used in emulsion polymerization include fatty acids, sodium lauryl sulfate, and alpha-olefin sulfonate. 
Non-surfactant stabilizers Some grades of polyvinyl alcohol and other water-soluble polymers can promote emulsion polymerization even though they do not typically form micelles and do not act as surfactants (for example, they do not lower surface tension). It is believed that growing polymer chains graft onto these water-soluble polymers, which stabilize the resulting particles. Dispersions prepared with such stabilizers typically exhibit excellent colloidal stability (for example, dry powders may be mixed into the dispersion without causing coagulation). However, they often result in products that are very water sensitive due to the presence of the water-soluble polymer. Other ingredients Other ingredients found in emulsion polymerization include chain transfer agents, buffering agents, and inert salts. Preservatives are added to products sold as liquid dispersions to retard bacterial growth. These are usually added after polymerization, however. Applications Polymers produced by emulsion polymerization can roughly be divided into three categories. Synthetic rubber Some grades of styrene-butadiene (SBR) Some grades of Polybutadiene Polychloroprene (Neoprene) Nitrile rubber Acrylic rubber Fluoroelastomer (FKM) Plastics Some grades of PVC Some grades of polystyrene Some grades of PMMA Acrylonitrile-butadiene-styrene terpolymer (ABS) Polyvinylidene fluoride Polyvinyl fluoride PTFE Dispersions (i.e. polymers sold as aqueous dispersions) polyvinyl acetate polyvinyl acetate copolymers polyacrylates Styrene-butadiene VAE (vinyl acetate – ethylene copolymers) See also International Union of Pure and Applied Chemistry Radical polymerization RAFT (chemistry) Robert Gilbert Dispersion polymerization Ray P. Dinsmore References Chemical processes Polymerization reactions
Emulsion polymerization
[ "Chemistry", "Materials_science" ]
3,929
[ "Chemical processes", "nan", "Polymer chemistry", "Chemical process engineering", "Polymerization reactions" ]
317,205
https://en.wikipedia.org/wiki/Table%20saw
A table saw (also known as a sawbench or bench saw in England) is a woodworking tool, consisting of a circular saw blade, mounted on an arbor, that is driven by an electric motor (directly, by belt, by cable, or by gears). The drive mechanism is mounted below a table that provides support for the material, usually wood, being cut, with the blade protruding up through the table into the material. In most modern table saws, the table is fixed and the blade position can be adjusted. Moving the blade up or down affects the depth of the cut by controlling how much of the blade is protruding above the table surface. Many saws also have an adjustable angle, where the blade can be tilted relative to the table. Some earlier saws instead had a fixed blade and the table could be adjusted for height (exposure of blade) and angle relative to the blade. Types The general types of table saws are compact, benchtop, jobsite, contractor, hybrid, cabinet, and sliding table saws. Benchtop Benchtop table saws are lightweight and are designed to be placed on a table or other support for operation. This type of saw is most often used by homeowners and DIYers. They almost always have a direct-drive (blade driven directly by the motor) universal motor. Some early models used small induction motors, which weren't very powerful, made the saw heavy, and caused a lot of vibration. Most modern saws can be lifted and carried by one person. These saws often have parts made of steel, aluminum and plastic and are designed to be compact and light. Benchtop table saws are the least expensive (typically costing in the $100-$200 range) and least capable of the table saws; however, they can offer adequate ripping capacity and precision for most tasks. The universal motor is not as durable or as quiet as an induction motor, but it offers more power relative to its size and weight. The top of a benchtop table saw is narrower than those of the contractors and cabinet saws, so the width of stock that can be ripped is reduced. Another restriction results from the top being smaller from the front of the tabletop to the rear. This results in a shorter rip fence, which makes it harder to make a clean, straight cut when ripping. Also, there is less distance from the front edge of the tabletop to the blade, which makes cross cutting stock using a miter more difficult (the miter and/or stock may not be fully supported by the table in front of the blade). Benchtop saws are the smallest type of table saw and have the least mass, potentially resulting in increased vibration during a cut. Nowadays, these models are being phased out for more practical jobsite model saws. Jobsite Jobsite table saws are slightly larger than benchtop models, and usually are placed on a folding or stationary stand during operation. These saws are mostly used by carpenters, contractors, and tradesman on the jobsite (hence the name). Many of these saws are more expensive than benchtop saws (typically in the $300–$600 range). Most saws in this category have small but powerful 15-ampere universal motors. Many higher-end saws have gear-driven motors. Most of these saws are relatively light, and can be easily transported to a job location. Many of these saws are built more ruggedly and are generally more accurate than the entry-level benchtop models. The motors, gears, and cases are generally designed to better withstand the abuse of construction sites. 
When compared to benchtop saws, many jobsite models have miter slots, better fences, better overall alignment, sliding extension tables, larger rip capacities, and folding stands with wheels. Compact Compact table saws are much larger than portable saws, and sit on a stationary stand. The motor is still a universal-type motor; however, these are usually driven by small toothed belts. Some saws have cast iron tops, and are similar in appearance to larger contractor saws, although the tables are usually smaller and the build is of lighter construction. Some models even feature sliding-miter tables, with a built-in miter sled that can be tilted to many different angles. Contractor Contractor table saws (also sometimes referred to as open-stand saws) are heavier (200–400 lbs), larger saws that are attached to a stand or base, often with wheels. On these saws, the motor (usually an induction-type motor) hinges off the rear of the saw on a pivoting bracket (although direct-drive models have existed) and drives the blade with one, or rarely two, rubber V-belts. This is the type often used by hobbyists and homeowners because standard electrical circuits provide adequate power to run them, and because of their generally low cost when compared to larger saws. Because the motor hangs off the rear of the saw, dust collection is usually problematic or even ineffective. Contractor saws were originally designed to be somewhat portable, often having wheels, and were frequently brought to job sites before the invention of smaller bench-top models in the early 1980s. Contractor saws are heavier than bench-top saws, but are still lightweight when compared to cabinet saws. Their larger size and greater power allow them to be used for larger projects and to be more durable, accurate, and longer-lasting than bench-top saws. Cabinet Cabinet table saws are heavy (600–900 lbs), using large amounts of cast iron and steel, to minimize vibration and increase accuracy. A cabinet saw is characterized by having an enclosed base. Cabinet saws usually have induction motors in the range, single-phase, but motors in the range, three-phase, are common in commercial/industrial sites. For home use, this type of motor typically requires that a heavy-duty circuit be installed. The motor is enclosed within the cabinet and drives the blade with one or more parallel V-belts, often "A" belts, as "A" belts may be ganged without having to be specially selected (otherwise, specially selected sets of light-duty "4L" belts are used). Cabinet saws offer the following advantages over contractor saws: heavier construction for lower vibration and increased durability; a cabinet-mounted trunnion (the mechanism that incorporates the saw blade mount and allows for height and tilt adjustment); improved dust collection due to the totally enclosed cabinet and common incorporation of a dust collection port. Cabinet saws are designed for, and are capable of, very high duty cycles, such as are encountered in commercial/industrial applications. Where some of the advantages of a cabinet saw are desired in a home shop application, so-called "hybrid" saws have emerged to address this need. Cabinet saws have an easily replaceable insert around the blade in the table top allowing the use of zero-clearance inserts, which greatly reduce tear-out on the bottom of the workpiece. It is common for this type of saw to be equipped with a table extension that increases ripping capacity for sheet goods to .
These saws are characterized by a cast iron top on a full-length steel base, generally square in section, with radiused corners. Two miter slots ( wide on the largest saws) are located parallel to the blade, one to the left of the blade and one to the right. American-style cabinet saws generally follow the model of the Delta Unisaw, a design that has evolved since 1939. Saws of this general type are made in the US, Canada, Taiwan and China. The most common type of rip fence mounted to this type of saw is characterized by the standard model made by Biesemeyer (now a subsidiary of Delta). It has a sturdy, steel T-type fence mounted to a steel rail at the front of the saw and replaceable laminate faces. American cabinet saws are normally designed to accept a stacked dado blade in addition to a standard saw blade. The most common size of blade is in diameter with a blade arbor diameter of , but in diameter with a blade arbor diameter of are found in commercial/industrial sites. American saws normally include an anti-kickback device that incorporates a splitter or riving knife, toothed anti-kickback pawls, and a clear plastic blade cover. The saw blade can tilt to either the left side or right side of the saw, depending on the model of saw. The original Delta Unisaw and early cabinet saws based on it are all right-tilt units while newer Delta Unisaws and many competitive cabinet saws made after 2000 are left-tilt saws. The change to left-tilt was due to a lower perceived propensity for the cut piece to become trapped between the rip fence and blade and kick back when the blade tilts away from the rip fence (left-tilt saw) versus towards the rip fence (right-tilt saw.) While conceptually simple in design, these saws are highly evolved and are capable of efficient, high volume, precision work. Hybrid Hybrid table saws are designed to compete in the market with high-end contractor table saws. They offer some of the advantages of cabinet saws at a lower price than traditional cabinet saws. Hybrid saws on the market today offer an enclosed cabinet to help improve dust collection. The cabinet can either be similar to a cabinet saw with a full enclosure from the table top to the floor or a shorter cabinet on legs. Some hybrid saws have cabinet-mounted trunnions and some have table-mounted trunnions. In general, cabinet-mounted trunnions are easier to adjust than table-mounted trunnions. Hybrid saws tend to be heavier than contractor saws and lighter than cabinet saws. Some hybrid saws offer a sliding table as an option to improve cross cutting capability. Hybrid saw drive mechanisms vary more than contractor saws and cabinet saws. Drive mechanisms can be a single v-belt, a serpentine belt or multiple v-belts. Hybrid saws have a motor and thus the ability to run on a standard 15- or 20-ampere 120-volt North American household circuit, while a cabinet saw's or larger motor requires a 240-volt supply. Mini and micro Mini and micro table saws have a blade diameter of 4 inches (100 mm) and under. Mini table saws are typically 4 inch, while micro table saws are less than 4 inch, although the naming of the saws is not well defined. Mini and micro table saws are generally used by hobbyists and model builders, although the mini table saws (4 inch) have gained some popularity with building contractors that need only a small saw to cut small pieces (such as wood trim). Being a fraction of the size (and weight) of a normal table saw, they are much easier to carry and transport. 
Being much smaller than a normal table saw, they are substantially safer to use when cutting very small pieces. Using blades that have a smaller kerf (cutting width) than normal blades, there is less material lost and the possibility of kickback is reduced as well. Sliding A sliding table saw, also called a European cabinet saw or panel saw, is a variation on the traditional cabinet saw. These saws are generally used to cut large panels and sheet goods, such as plywood or MDF. Sliding table saws have a sliding table on the left side of the blade, usually attached to a folding arm mounted under the table, that is used for cross cutting and ripping larger materials. Sliding table saws are the largest type of table saw, and are mostly used by large production cabinet shops. Most saws use 3–5 hp, or even 7 hp, three-phase induction motors. Sliding table saws usually incorporate a riving knife to prevent kickback from occurring. Sliding saws sometimes offer a scoring blade, which is a second, smaller-diameter blade mounted in front of the regular saw blade. The scoring blade helps reduce splintering of the lower face in certain types of stock, especially laminated stock. European models are sometimes available in multi-purpose tool configurations (combination machines) that offer jointer, planer, shaper (spindle moulder in Europe) or boring features. The blade arbor typically has a diameter of 30 mm, around twice that of a US saw. Many American woodworkers are likely to use a dado stack or wobble dado to cut dados (square-sectioned grooves), while most European woodworkers would use a shaper or a router table for this task. In recent years, European-style sliding table saws have had a small following in North America. They are usually either imported from European manufacturers such as Felder (and its subsidiaries), Altendorf, and Robland, or Taiwanese companies such as Grizzly Industrial, or sold directly by U.S.-based companies such as Powermatic. History Table saws have been an integral part of woodworking for centuries, revolutionizing the way woodworkers manipulate wood to create intricate designs and structures. The table saw has had a profound impact on the field of woodworking by enabling woodworkers to achieve greater precision, efficiency, and versatility in their craft. With the ability to make a wide range of cuts, such as rip cuts, crosscuts, bevel cuts, and dado cuts, the table saw has become an indispensable tool in woodworking workshops worldwide. The history of the table saw dates back to the late 18th century, when the first known patent for a table saw was filed in 1777 by Samuel Miller, an English scientist. Miller's design featured a circular saw blade mounted on an arbor with a table to support the wood being cut. This invention laid the foundation for the development of modern-day table saws. Over the years, advancements in technology and design have led to the evolution of the table saw into various types, including benchtop, contractor, cabinet, and hybrid table saws. Each type offers different features and capabilities to meet the needs of woodworkers, from hobbyists to professionals. A key figure in the development of the table saw was Wilhelm Altendorf, a German carpenter. Altendorf revolutionized the design of table saws by introducing a sliding table that allowed woodworkers to make precise crosscuts and rip cuts with ease. This innovation set a new standard for accuracy and versatility in table saws.
Looking ahead, the future of the table saw will likely be influenced by new technology, like digital controls and sensors that can automate and improve cutting. Also, new blade designs and materials may make cutting even more precise and efficient. Safety Table saws are especially dangerous tools because the operator holds the material being cut, instead of the saw, making it easy to accidentally move hands into the spinning blade. When using other types of circular saws, the material remains stationary, as the operator guides the saw into the material. But a push stick, riving knife, and protective cover over the spinning saw can reduce the chances of accident. Kickback Kickback is the term for when a piece of wood is ripped, and either pinches the blade, or turns outward against the blade of the spinning saw and is propelled back towards the operator at a high speed. The two main injuries that occur from kickback are caused by wood striking the head, chest, or torso of the operator, or the wood moving so quickly that the operator's hands stay on the wood and get pulled across the saw blade. Dust extractor A dust extractor should be fitted if sawdust is prone to build up under the cutting blade. Through friction the spinning blade will quickly ignite the accumulated dust, and the smoke can be mistaken for an overheated blade. The extractor also reduces the risk of a dust explosion and facilitates a healthier working environment. Magnetic featherboard The magnetic feather board was developed in 1990. The patented Grip-Tite is held to a cast iron table top or steel sub fence by high strength permanent magnets. The advantage of a magnetic feather board is the fast setup time on any cast iron tool deck or steel faced fence. When used in conjunction with a steel faced rip fence, they are used to hold down ripped wood on any saw deck and prevent kickback. Feed wheels added to the Grip-Tite base pull ripped wood to the fence, allowing the operator to rip wood on any table saw with no hands near the blade. Miter slot featherboard When a table saw has a table top made of a material other than cast iron, like aluminum, then a miter slot featherboard should be utilized to keep pressure on the stock against the fence when otherwise your hand would be in dangerous proximity to the saw blade. Keep in mind that this style feather board does take more time to set up than the magnetic style when deciding on a tool purchase. If a safety device is more convenient then it may be more frequently utilized. Never place a feather board past the leading edge of the blade or else kickback will occur. Safety precautions Read the instruction manual: Always read and understand your table saw's manual before use. Wear proper clothing: Avoid loose clothing, long sleeves and jewelry, and tie back long hair. Wear close toed shoes. Use personal protective equipment (PPE): Wear safety glasses and earmuffs or earplugs. Keep your work area tidy: Clear your work area of clutter and be sure there are no tripping hazards like power cords. Minimize distractions: Avoid distractions such as TVs or phones that can divert your attention from operating the saw safely. Disconnect power before blade changes: Unplug your table saw before changing blades to prevent accidental start-ups. Avoid wearing gloves: Do not wear gloves while operating the table saw to maintain a secure grip and avoid hazards. Blade height There are two competing schools of thought when it comes to properly setting the height of the blade for sawing. 
The first is commonly expressed thus: "Only allow the blade to rise above the work by the amount of finger you wish to lose." That is, the blade should protrude above the piece as little as possible, to prevent the loss of a finger in case of a sawing accident. Another competing view is that the saw functions at its best when the angle of the blade teeth arc relative to the top surface of the workpiece is as extreme as possible. This facilitates chip ejection, shortens the overall distance through which the teeth act on the part, reduces power consumption and heat generation, substantially reduces the peak pushing force required, thus improving control, and causes the blade's force on the wood to act mostly downward rather than largely horizontally. Uses Although table saws are used mainly for cutting wood, both hardwood and softwood, they can also be used to cut many other materials, including metal and plastic. The table saw is commonly used for ripping wood (cutting it to width), crosscutting (cutting it to length), kerfing (making small cuts to bend the wood), and cutting rabbets and grooves for joints. It can also make angled bevel cuts and other wood joints. This makes the table saw a versatile and essential tool in any woodworker's workshop. Accessories Outfeed tables: Table saws are often used to rip long boards or sheets of plywood or other sheet materials. The use of an outfeed table makes this process safer and easier. Many of these are shop-built, while others are commercially available. Infeed tables: Used to assist in feeding long boards or sheets of plywood. In the past, roller stands were essentially the only option, but there are now commercially available infeed units that are more efficient and easier to use. Downdraft tables: Used to draw harmful dust particles away from the user without obstructing the user's movement or productivity. Rip fence: Table saws commonly have a fence (guide) running from the front of the table (the side nearest the operator) to the back, parallel to the cutting plane of the blade. The distance of the fence from the blade can be adjusted, which determines where on the workpiece the cut is made. The fence is commonly called a "rip fence", referring to its use in guiding the workpiece during the process of making a rip cut. Most table saws come standard with a rip fence, but some high-end saws are available without a fence so a fence of the user's choice can be purchased separately. Featherboard: Featherboards are used to keep wood against the rip fence. They can be a single spring or many springs, and are often shop-made from wood. They are held in place by high-strength magnets, clamps, or expansion bars in the miter slot. Hold down: The circular saw blade of a table saw will pick up a piece of wood if it is not held down. Hold downs can be a vertical version of featherboards, attached to a fence with magnets or clamps. Another type of hold down uses wheels on a spring-loaded mechanism to push down on a workpiece as it is guided past the blade. Sub fence: A piece of wood clamped to the rip fence allows a dado set to cut into the rip fence, allowing rabbet cuts with a dado blade. Miter gauge: The table has one or two slots (grooves) running from front to back, also parallel to the cutting plane of the blade. These miter slots (or miter grooves) are used to position and guide either a miter gauge (also known as a crosscut fence) or a crosscut sled.
The miter gauge is usually set to be at 90 degrees to the plane of the blade's cut, to cause the cut made in the workpiece to be made at a right angle. The miter gauge can also be adjusted to cause the cut to be made at a precisely controlled angle (a so-called miter cut). Crosscut sled: A crosscut sled is generally used to hold the workpiece at a fixed 90-degree angle to the blade, allowing precise repeatable cuts at the most commonly used angle. The sled is normally guided by a runner fastened under it that slides in a miter slot. This device is normally shop made, but can be purchased. Tenon jig: A tenon jig is a device that holds the workpiece vertically so cuts can be made across the end. This allows tenons to be formed. Often this is a purchased item, but it can be shop made. The tenon jig is guided by a miter slot or a fence. Stacked dado: Saws made for the US market are generally capable of using a stacked dado blade set. This is a kit with two outer blades and a number of inner "chip breakers" that can be used to cut dados (grooves in the workpiece) of any width up to the maximum (generally ). Stacked dado sets are available in diameters of . 8- and 10-inch stacked dado sets are not recommended for saws with or less. Although 10-inch stacked dado sets are available with a bore, these are recommended with a bore. Inserts: Table saws have a changeable insert in the table through which the blade projects. Purchasable inserts are usually made out of metal. Zero-clearance inserts can be made of a sawable material such as plastic or wood. When a zero-clearance insert is initially inserted, the blade is raised through the insert creating the slot. This creates a slot with no gaps around the blade. The zero clearance insert helps prevent tearout by providing support for wood fibers right next to the blade thus helping to make a very clean cut. Other inserts can be bought or created in the same manner, such as a dado insert. Splitter: A splitter or riving knife is a vertical projection located behind the saw blade. This can be a pin or a fin. It is slightly narrower in width than the blade and located directly in line with the blade. The splitter prevents the material being cut from being rotated thereby helping to prevent kickback. Splitters may incorporate pawls, a mechanism with teeth designed to bite into wood and preventing kickback. Splitters can take many forms, including being part of the blade guard that comes standard with the saw. Another type of splitter is simply a vertical pin or fin attached to an insert. Splitters are available commercially or can be made from wood, metal or plastic. Anti-kickback pawls: Most modern US table saws are fitted with kickback pawls, a set of small spring-loaded metal teeth on a free-swinging pawl (usually attached to the guard) which help to put a strong downward force on a board. This can help to immobilize the board in the event of a kickback. However, these have sometimes been found to be somewhat ineffective when compared to a splitter. Push stick: A handheld safety device used to safely maneuver a workpiece, keeping it flat against the machine table and fence while it is being cut. References Further reading Jim Tolpin (2004). Table Saw Magic. Popular Woodworking Books imprint of F&W Publications. Anthony, Paul. Taunton's complete illustrated guide to tablesaws. Newtown, CT: Taunton Press, 2009. Saws Woodworking machines Power tools
Table saw
[ "Physics", "Technology" ]
5,235
[ "Machines", "Power tools", "Physical quantities", "Physical systems", "Power (physics)", "Woodworking machines" ]
317,227
https://en.wikipedia.org/wiki/Micelle
A micelle or micella (plural micelles or micellae, respectively) is an aggregate (or supramolecular assembly) of surfactant amphipathic lipid molecules dispersed in a liquid, forming a colloidal suspension (also known as an associated colloidal system). A typical micelle in water forms an aggregate with the hydrophilic "head" regions in contact with surrounding solvent, sequestering the hydrophobic single-tail regions in the micelle centre. This phase is caused by the packing behavior of single-tail lipids in a bilayer. The difficulty in filling the volume of the interior of a bilayer, while accommodating the area per head group forced on the molecule by the hydration of the lipid head group, leads to the formation of the micelle. This type of micelle is known as a normal-phase micelle (or oil-in-water micelle). Inverse micelles have the head groups at the centre with the tails extending out (also called a water-in-oil micelle). Micelles are approximately spherical in shape. Other shapes, such as ellipsoids, cylinders, and bilayers, are also possible. The shape and size of a micelle are a function of the molecular geometry of its surfactant molecules and solution conditions such as surfactant concentration, temperature, pH, and ionic strength. The process of forming micelles is known as micellisation and forms part of the phase behaviour of many lipids according to their polymorphism. History The ability of a soapy solution to act as a detergent has been recognized for centuries. However, it was only at the beginning of the twentieth century that the constitution of such solutions was scientifically studied. Pioneering work in this area was carried out by James William McBain at the University of Bristol. As early as 1913, he postulated the existence of "colloidal ions" to explain the good electrolytic conductivity of sodium palmitate solutions. These highly mobile, spontaneously formed clusters came to be called micelles, a term borrowed from biology and popularized by G.S. Hartley in his classic book Paraffin Chain Salts: A Study in Micelle Formation. The term micelle was coined in nineteenth-century scientific literature as the -elle diminutive of the Latin word mica (particle), conveying a new word for "tiny particle". Solvation Individual surfactant molecules that are in the system but are not part of a micelle are called "monomers". Micelles represent a molecular assembly in which the individual components are thermodynamically in equilibrium with monomers of the same species in the surrounding medium. In water, the hydrophilic "heads" of surfactant molecules are always in contact with the solvent, regardless of whether the surfactants exist as monomers or as part of a micelle. However, the lipophilic "tails" of surfactant molecules have less contact with water when they are part of a micelle; this is the basis for the energetic drive for micelle formation. In a micelle, the hydrophobic tails of several surfactant molecules assemble into an oil-like core, the most stable form of which has no contact with water. By contrast, surfactant monomers are surrounded by water molecules that create a "cage" or solvation shell connected by hydrogen bonds. This water cage is similar to a clathrate, has an ice-like crystal structure, and can be characterized according to the hydrophobic effect. The extent of lipid solubility is determined by the unfavorable entropy contribution due to the ordering of the water structure according to the hydrophobic effect.
Micelles composed of ionic surfactants have an electrostatic attraction to the ions that surround them in solution, the latter known as counterions. Although the closest counterions partially mask a charged micelle (by up to 92%), the micelle charge affects the structure of the surrounding solvent at appreciable distances from the micelle. Ionic micelles influence many properties of the mixture, including its electrical conductivity. Adding salts to a colloid containing micelles can decrease the strength of electrostatic interactions and lead to the formation of larger ionic micelles. This is more accurately seen from the point of view of an effective charge in hydration of the system. Energy of formation Micelles form only when the concentration of surfactant is greater than the critical micelle concentration (CMC), and the temperature of the system is greater than the critical micelle temperature, or Krafft temperature. The formation of micelles can be understood using thermodynamics: micelles can form spontaneously because of a balance between entropy and enthalpy. In water, the hydrophobic effect is the driving force for micelle formation, despite the fact that assembling surfactant molecules is unfavorable in terms of both enthalpy and entropy of the system. At very low concentrations of the surfactant, only monomers are present in solution. As the concentration of the surfactant is increased, a point is reached at which the unfavorable entropy contribution, from clustering the hydrophobic tails of the molecules, is overcome by a gain in entropy due to release of the solvation shells around the surfactant tails. At this point, the lipid tails of some of the surfactants must be segregated from the water; hence, they start to form micelles. In broad terms, above the CMC, the loss of entropy due to assembly of the surfactant molecules is less than the gain in entropy by setting free the water molecules that were "trapped" in the solvation shells of the surfactant monomers. Also important are enthalpic considerations, such as the electrostatic interactions that occur between the charged parts of surfactants. Micelle packing parameter The micelle packing parameter equation is utilized to help "predict molecular self-assembly in surfactant solutions": $P = \frac{v_0}{a_e \, l_0}$, where $v_0$ is the surfactant tail volume, $l_0$ is the tail length, and $a_e$ is the equilibrium area per molecule at the aggregate surface. (A worked numerical example is given at the end of this passage.) Block copolymer micelles The concept of micelles was introduced to describe the core-corona aggregates of small surfactant molecules; however, it has also been extended to describe aggregates of amphiphilic block copolymers in selective solvents. It is important to know the difference between these two systems. The major difference between these two types of aggregates is in the size of their building blocks. Surfactant molecules have a molecular weight that is generally of a few hundred grams per mole, while block copolymers are generally one or two orders of magnitude larger. Moreover, thanks to the larger hydrophilic and hydrophobic parts, block copolymers can have a much more pronounced amphiphilic nature when compared to surfactant molecules. Because of these differences in the building blocks, some block copolymer micelles behave like surfactant ones, while others do not. It is therefore necessary to make a distinction between the two situations: the former are called dynamic micelles, while the latter are called kinetically frozen micelles.
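As a numerical illustration of the packing parameter introduced above, here is a hedged Python sketch. The Tanford-style estimates used for the tail volume and length, and the commonly quoted shape-regime boundaries (roughly: spheres below 1/3, cylinders between 1/3 and 1/2, bilayers or vesicles between 1/2 and 1, inverse structures above 1), are standard surfactant-science rules of thumb rather than statements made in this text, and the example head-group area is hypothetical.

```python
# Hedged sketch: critical packing parameter P = v0 / (a_e * l0) with Tanford-style
# estimates for an n-carbon hydrocarbon tail (the formulas and regime boundaries
# below are assumptions, not taken from this text).

def tail_volume_nm3(n_carbons):
    # Tanford estimate: v ~ (27.4 + 26.9 * n) cubic angstroms, converted to nm^3
    return (27.4 + 26.9 * n_carbons) * 1e-3

def tail_length_nm(n_carbons):
    # Tanford estimate of the fully extended tail length in nm
    return 0.154 + 0.1265 * n_carbons

def packing_parameter(n_carbons, head_area_nm2):
    return tail_volume_nm3(n_carbons) / (head_area_nm2 * tail_length_nm(n_carbons))

def predicted_aggregate(p):
    if p < 1/3:
        return "spherical micelles"
    if p < 1/2:
        return "cylindrical (worm-like) micelles"
    if p <= 1:
        return "flexible bilayers / vesicles"
    return "inverse (water-in-oil) structures"

# Example: a C12 single-tail surfactant with an assumed head-group area of 0.65 nm^2.
p = packing_parameter(12, 0.65)
print(round(p, 3), "->", predicted_aggregate(p))
```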
Dynamic micelles Certain amphiphilic block copolymer micelles display a similar behavior as surfactant micelles. These are generally called dynamic micelles and are characterized by the same relaxation processes assigned to surfactant exchange and micelle scission/recombination. Although the relaxation processes are the same between the two types of micelles, the kinetics of unimer exchange are very different. While in surfactant systems the unimers leave and join the micelles through a diffusion-controlled process, for copolymers the entry rate constant is slower than a diffusion controlled process. The rate of this process was found to be a decreasing power-law of the degree of polymerization of the hydrophobic block to the power 2/3. This difference is due to the coiling of the hydrophobic block of a copolymer exiting the core of a micelle. Block copolymers which form dynamic micelles are some of the tri-block poloxamers under the right conditions. Kinetically frozen micelles When block copolymer micelles do not display the characteristic relaxation processes of surfactant micelles, these are called kinetically frozen micelles. These can be achieved in two ways: when the unimers forming the micelles are not soluble in the solvent of the micelle solution, or if the core forming blocks are glassy at the temperature in which the micelles are found. Kinetically frozen micelles are formed when either of these conditions is met. A special example in which both of these conditions are valid is that of polystyrene-b-poly(ethylene oxide). This block copolymer is characterized by the high hydrophobicity of the core forming block, PS, which causes the unimers to be insoluble in water. Moreover, PS has a high glass transition temperature which is, depending on the molecular weight, higher than room temperature. Thanks to these two characteristics, a water solution of PS-PEO micelles of sufficiently high molecular weight can be considered kinetically frozen. This means that none of the relaxation processes, which would drive the micelle solution towards thermodynamic equilibrium, are possible. Pioneering work on these micelles was done by Adi Eisenberg. It was also shown how the lack of relaxation processes allowed great freedom in the possible morphologies formed. Moreover, the stability against dilution and vast range of morphologies of kinetically frozen micelles make them particularly interesting, for example, for the development of long circulating drug delivery nanoparticles. Inverse/reverse micelles In a non-polar solvent, it is the exposure of the hydrophilic head groups to the surrounding solvent that is energetically unfavourable, giving rise to a water-in-oil system. In this case, the hydrophilic groups are sequestered in the micelle core and the hydrophobic groups extend away from the center. These inverse micelles are proportionally less likely to form on increasing headgroup charge, since hydrophilic sequestration would create highly unfavorable electrostatic interactions. It is well established that for many surfactant/solvent systems a small fraction of the inverse micelles spontaneously acquire a net charge of +qe or -qe. This charging takes place through a disproportionation/comproportionation mechanism rather than a dissociation/association mechanism and the equilibrium constant for this reaction is on the order of 10−4 to 10−11, which means about every 1 in 100 to 1 in 100 000 micelles will be charged. 
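The quoted equilibrium constants and charged-micelle fractions can be connected by a back-of-the-envelope calculation, sketched below in Python. The assumption that the fraction of micelles carrying a given sign of charge is roughly the square root of the disproportionation constant is an illustrative simplification, not a statement from this text, and its agreement with the quoted "1 in 100 to 1 in 100 000" range is order-of-magnitude only.

```python
# Back-of-the-envelope check (an assumption, not from the text): for a
# disproportionation equilibrium 2 M <-> M+ + M- with K = [M+][M-]/[M]^2 and
# equal numbers of positive and negative micelles, the fraction of micelles
# carrying a given sign of charge is roughly sqrt(K).
import math

for K in (1e-4, 1e-8, 1e-11):
    fraction = math.sqrt(K)  # fraction of micelles with, say, charge +qe
    print(f"K = {K:g} -> roughly 1 charged micelle in {1 / fraction:,.0f}")
```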
Supermicelles A supermicelle is a hierarchical micelle structure (supramolecular assembly) in which the individual components are themselves micelles. Supermicelles are formed via bottom-up chemical approaches, such as self-assembly of long cylindrical micelles into radial cross-, star- or dandelion-like patterns in a specially selected solvent; solid nanoparticles may be added to the solution to act as nucleation centers and form the central core of the supermicelle. The stems of the primary cylindrical micelles are composed of various block copolymers connected by strong covalent bonds; within the supermicelle structure they are loosely held together by hydrogen bonds, electrostatic or solvophobic interactions. Uses When surfactants are present above the critical micelle concentration (CMC), they can act as emulsifiers that will allow a compound that is normally insoluble (in the solvent being used) to dissolve. This occurs because the insoluble species can be incorporated into the micelle core, which is itself solubilized in the bulk solvent by virtue of the head groups' favorable interactions with solvent species. The most common example of this phenomenon is detergents, which clean poorly soluble lipophilic material (such as oils and waxes) that cannot be removed by water alone. Detergents also clean by lowering the surface tension of water, making it easier to remove material from a surface. The emulsifying property of surfactants is also the basis for emulsion polymerization. Micelles may also have important roles in chemical reactions. Micellar chemistry uses the interior of micelles to harbor chemical reactions, which in some cases can make multi-step chemical synthesis more feasible. Doing so can increase reaction yield, create conditions more favorable to specific reaction products (e.g. hydrophobic molecules), and reduce required solvents, side products, and required conditions (e.g. extreme pH). Because of these benefits, micellar chemistry is considered a form of green chemistry. However, micelle formation may also inhibit chemical reactions, such as when reacting molecules form micelles that shield a molecular component vulnerable to oxidation. The use of cationic micelles of cetrimonium chloride, benzethonium chloride, and cetylpyridinium chloride can accelerate chemical reactions between negatively charged compounds (such as DNA or Coenzyme A) in an aqueous environment up to 5 million times. Unlike conventional micellar catalysis, the reactions occur solely on the charged micelles' surface. Micelle formation is essential for the absorption of fat-soluble vitamins and complicated lipids within the human body. Bile salts formed in the liver and secreted by the gall bladder allow micelles of fatty acids to form. This allows the absorption of complicated lipids (e.g., lecithin) and lipid-soluble vitamins (A, D, E, and K) within the micelle by the small intestine. During the process of milk-clotting, proteases act on the soluble portion of caseins, κ-casein, thus originating an unstable micellar state that results in clot formation. Micelles can also be used for targeted drug delivery as gold nanoparticles. See also Critical micelle concentration Micellar liquid chromatography Micellar solutions Micellar solubilization Lipid bilayer Liposome Vesicle (biology) References Supramolecular chemistry Colloidal chemistry Membrane biology
Micelle
[ "Chemistry", "Materials_science" ]
2,943
[ "Colloidal chemistry", "Membrane biology", "Surface science", "Colloids", "nan", "Molecular biology", "Nanotechnology", "Supramolecular chemistry" ]
317,311
https://en.wikipedia.org/wiki/El%20Ni%C3%B1o%E2%80%93Southern%20Oscillation
El Niño–Southern Oscillation (ENSO) is a global climate phenomenon that emerges from variations in winds and sea surface temperatures over the tropical Pacific Ocean. Those variations have an irregular pattern but do have some semblance of cycles. The occurrence of ENSO is not predictable. It affects the climate of much of the tropics and subtropics, and has links (teleconnections) to higher-latitude regions of the world. The warming phase of the sea surface temperature is known as "El Niño" and the cooling phase as "La Niña". The Southern Oscillation is the accompanying atmospheric oscillation, which is coupled with the sea temperature change. El Niño is associated with higher than normal air sea level pressure over Indonesia, Australia and across the Indian Ocean to the Atlantic. La Niña has roughly the reverse pattern: high pressure over the central and eastern Pacific and lower pressure through much of the rest of the tropics and subtropics. The two phenomena last a year or so each and typically occur every two to seven years with varying intensity, with neutral periods of lower intensity interspersed. El Niño events can be more intense but La Niña events may repeat and last longer. A key mechanism of ENSO is the Bjerknes feedback (named after Jacob Bjerknes in 1969) in which the atmospheric changes alter the sea temperatures that in turn alter the atmospheric winds in a positive feedback. Weaker easterly trade winds result in a surge of warm surface waters to the east and reduced ocean upwelling on the equator. In turn, this leads to warmer sea surface temperatures (called El Niño), a weaker Walker circulation (an east-west overturning circulation in the atmosphere) and even weaker trade winds. Ultimately the warm waters in the western tropical Pacific are depleted enough so that conditions return to normal. The exact mechanisms that cause the oscillation are unclear and are being studied. Each country that monitors the ENSO has a different threshold for what constitutes an El Niño or La Niña event, which is tailored to their specific interests. El Niño and La Niña affect the global climate and disrupt normal weather patterns, which as a result can lead to intense storms in some places and droughts in others. El Niño events cause short-term (approximately 1 year in length) spikes in global average surface temperature while La Niña events cause short term surface cooling. Therefore, the relative frequency of El Niño compared to La Niña events can affect global temperature trends on timescales of around ten years. The countries most affected by ENSO are developing countries that are bordering the Pacific Ocean and are dependent on agriculture and fishing. In climate change science, ENSO is known as one of the internal climate variability phenomena. Future trends in ENSO due to climate change are uncertain, although climate change exacerbates the effects of droughts and floods. The IPCC Sixth Assessment Report summarized the scientific knowledge in 2021 for the future of ENSO as follows: "In the long term, it is very likely that the precipitation variance related to El Niño–Southern Oscillation will increase". The scientific consensus is also that "it is very likely that rainfall variability related to changes in the strength and spatial extent of ENSO teleconnections will lead to significant changes at regional scale". Definition and terminology The El Niño–Southern Oscillation is a single climate phenomenon that periodically fluctuates between three phases: Neutral, La Niña or El Niño. 
La Niña and El Niño are opposite phases in the oscillation, which are deemed to occur when specific ocean and atmospheric conditions are reached or exceeded. An early recorded mention of the term "El Niño" ("The Boy" in Spanish) to refer to climate occurred in 1892, when Captain Camilo Carrillo told the geographical society congress in Lima that Peruvian sailors named the warm south-flowing current "El Niño" because it was most noticeable around Christmas. Although pre-Columbian societies were certainly aware of the phenomenon, the indigenous names for it have been lost to history. The capitalized term El Niño refers to the Christ Child, Jesus, because periodic warming in the Pacific near South America is usually noticed around Christmas. Originally, the term El Niño applied to an annual weak warm ocean current that ran southwards along the coast of Peru and Ecuador at about Christmas time. However, over time the term has evolved and now refers to the warm and negative phase of the El Niño–Southern Oscillation (ENSO). The original phrase, El Niño de Navidad, arose centuries ago, when Peruvian fishermen named the weather phenomenon after the newborn Christ. La Niña ("The Girl" in Spanish) is the colder counterpart of El Niño, as part of the broader ENSO climate pattern. In the past, it was also called an anti-El Niño and El Viejo, meaning "the old man." A negative phase exists during El Niño episodes, when atmospheric pressure over Indonesia and the west Pacific is abnormally high and pressure over the east Pacific is abnormally low. A positive phase exists during La Niña episodes, when the opposite occurs: pressure over Indonesia and the west Pacific is low and pressure over the east Pacific is high. Fundamentals On average, the temperature of the ocean surface in the tropical East Pacific is roughly cooler than in the tropical West Pacific. The sea surface temperature (SST) of the West Pacific northeast of Australia averages around . SSTs in the East Pacific off the western coast of South America are closer to . Strong trade winds near the equator push water away from the East Pacific and towards the West Pacific. This water is slowly warmed by the Sun as it moves west along the equator. The ocean surface near Indonesia is typically around higher than near Peru because of the buildup of water in the West Pacific. The thermocline, or the transitional zone between the warmer waters near the ocean surface and the cooler waters of the deep ocean, is pushed downwards in the West Pacific due to this water accumulation. The total weight of a column of ocean water is almost the same in the western and eastern Pacific. Because the warmer waters of the upper ocean are slightly less dense than the cooler deep ocean, the thicker layer of warmer water in the western Pacific means the thermocline there must be deeper. The difference in weight must be enough to drive any deep water return flow. Consequently, the thermocline is tilted across the tropical Pacific, rising from an average depth of about in the West Pacific to a depth of about in the East Pacific. Cooler deep ocean water takes the place of the outgoing surface waters in the East Pacific, rising to the ocean surface in a process called upwelling. Along the western coast of South America, water near the ocean surface is pushed westward due to the combination of the trade winds and the Coriolis effect. This process is known as Ekman transport. Colder water from deeper in the ocean rises along the continental margin to replace the near-surface water.
This process cools the East Pacific because the thermocline is closer to the ocean surface, leaving relatively little separation between the deeper cold water and the ocean surface. Additionally, the northward-flowing Humboldt Current carries colder water from the Southern Ocean to the tropics in the East Pacific. The combination of the Humboldt Current and upwelling maintains an area of cooler ocean waters off the coast of Peru. The West Pacific lacks a cold ocean current and has less upwelling as the trade winds are usually weaker than in the East Pacific, allowing the West Pacific to reach warmer temperatures. These warmer waters provide energy for the upward movement of air. As a result, the warm West Pacific has on average more cloudiness and rainfall than the cool East Pacific. ENSO describes a quasi-periodic change of both oceanic and atmospheric conditions over the tropical Pacific Ocean. These changes affect weather patterns across much of the Earth. The tropical Pacific is said to be in one of three states of ENSO (also called "phases") depending on the atmospheric and oceanic conditions. When the tropical Pacific roughly reflects the average conditions, the state of ENSO is said to be in the neutral phase. However, the tropical Pacific experiences occasional shifts away from these average conditions. If trade winds are weaker than average, the effect of upwelling in the East Pacific and the flow of warmer ocean surface waters towards the West Pacific lessen. This results in a cooler West Pacific and a warmer East Pacific, leading to a shift of cloudiness and rainfall towards the East Pacific. This situation is called El Niño. The opposite occurs if trade winds are stronger than average, leading to a warmer West Pacific and a cooler East Pacific. This situation is called La Niña and is associated with increased cloudiness and rainfall over the West Pacific. Bjerknes feedback The close relationship between ocean temperatures and the strength of the trade winds was first identified by Jacob Bjerknes in 1969. Bjerknes also hypothesized that ENSO was a positive feedback system where the associated changes in one component of the climate system (the ocean or atmosphere) tend to reinforce changes in the other. For example, during El Niño, the reduced contrast in ocean temperatures across the Pacific results in weaker trade winds, further reinforcing the El Niño state. This process is known as Bjerknes feedback. Although these associated changes in the ocean and atmosphere often occur together, the state of the atmosphere may resemble a different ENSO phase than the state of the ocean or vice versa. Because their states are closely linked, the variations of ENSO may arise from changes in both the ocean and atmosphere and not necessarily from an initial change of exclusively one or the other. Conceptual models explaining how ENSO operates generally accept the Bjerknes feedback hypothesis. However, ENSO would perpetually remain in one phase if Bjerknes feedback were the only process occurring. Several theories have been proposed to explain how ENSO can change from one state to the next, despite the positive feedback. These explanations broadly fall under two categories. In one view, the Bjerknes feedback naturally triggers negative feedbacks that end and reverse the abnormal state of the tropical Pacific. This perspective implies that the processes that lead to El Niño and La Niña also eventually bring about their end, making ENSO a self-sustaining process. 
Other theories, by contrast, view the state of ENSO as being changed by irregular and external phenomena such as the Madden–Julian oscillation, tropical instability waves, and westerly wind bursts. Walker circulation The three phases of ENSO relate to the Walker circulation, which was named after Gilbert Walker who discovered the Southern Oscillation during the early twentieth century. The Walker circulation is an east-west overturning circulation in the vicinity of the equator in the Pacific. Rising air is associated with high sea surface temperatures, convection and rainfall in the west, while the sinking branch occurs over the cooler sea surface temperatures in the east. During El Niño, as the sea surface temperatures change, so does the Walker circulation. Warming in the eastern tropical Pacific weakens or reverses the sinking branch there, while cooler conditions in the west lead to less rain and sinking air, so the Walker circulation first weakens and may reverse. Southern Oscillation The Southern Oscillation is the atmospheric component of ENSO. This component is an oscillation in surface air pressure between the tropical eastern and the western Pacific Ocean waters. The strength of the Southern Oscillation is measured by the Southern Oscillation Index (SOI). The SOI is computed from fluctuations in the surface air pressure difference between Tahiti (in the Pacific) and Darwin, Australia (on the Indian Ocean). El Niño episodes have a negative SOI, meaning there is lower pressure over Tahiti and higher pressure in Darwin. La Niña episodes, on the other hand, have a positive SOI, meaning there is higher pressure in Tahiti and lower in Darwin. Low atmospheric pressure tends to occur over warm water and high pressure occurs over cold water, in part because of deep convection over the warm water. El Niño episodes are defined as sustained warming of the central and eastern tropical Pacific Ocean, thus resulting in a decrease in the strength of the Pacific trade winds, and a reduction in rainfall over eastern and northern Australia. La Niña episodes are defined as sustained cooling of the central and eastern tropical Pacific Ocean, thus resulting in an increase in the strength of the Pacific trade winds, and the opposite effects in Australia when compared to El Niño. Although the Southern Oscillation Index has a long station record going back to the 1800s, its reliability is limited because both Darwin and Tahiti lie well south of the Equator, so that the surface air pressure at both locations is less directly related to ENSO. To overcome this effect, a new index was created, named the Equatorial Southern Oscillation Index (EQSOI). To generate this index, two new regions, centered on the Equator, were defined. The western region is located over Indonesia and the eastern one over the equatorial Pacific, close to the South American coast. However, data on the EQSOI go back only to 1949. Sea surface height (SSH) also changes up or down by several centimeters in the equatorial Pacific with the ENSO state: El Niño causes a positive SSH anomaly (raised sea level) because of thermal expansion, while La Niña causes a negative SSH anomaly (lowered sea level) via contraction. Three phases of sea surface temperature The El Niño–Southern Oscillation is a single climate phenomenon that quasi-periodically fluctuates between three phases: Neutral, La Niña or El Niño. La Niña and El Niño are opposite phases which require certain changes to take place in both the ocean and the atmosphere before an event is declared. 
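As a concrete illustration of the pressure-based indices just described, the sketch below standardises a Tahiti-minus-Darwin mean sea-level pressure difference against a climatological mean and standard deviation, so that anomalously low pressure at Tahiti relative to Darwin gives a negative value (El Niño-like) and the reverse gives a positive value (La Niña-like). The station pressures and climatology constants are hypothetical, and agencies differ in the exact standardisation used (the Troup SOI published by the Australian Bureau of Meteorology, for instance, is additionally scaled by a factor of ten).

import numpy as np

def soi_like_index(tahiti_slp, darwin_slp, clim_mean_diff, clim_std_diff):
    # tahiti_slp, darwin_slp: monthly mean sea-level pressures in hPa
    # clim_mean_diff, clim_std_diff: climatology of the Tahiti-minus-Darwin difference
    diff = np.asarray(tahiti_slp, dtype=float) - np.asarray(darwin_slp, dtype=float)
    return (diff - clim_mean_diff) / clim_std_diff

# Hypothetical month: Tahiti pressure unusually low relative to Darwin,
# giving a negative index value consistent with El Nino conditions.
print(soi_like_index([1011.1], [1012.6], clim_mean_diff=0.8, clim_std_diff=1.1))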
The cool phase of ENSO is La Niña, with SST in the eastern Pacific below average, and air pressure high in the eastern Pacific and low in the western Pacific. The ENSO cycle, including both El Niño and La Niña, causes global changes in temperature and rainfall. Neutral phase If the temperature variation from climatology is within 0.5 °C (0.9 °F), ENSO conditions are described as neutral. Neutral conditions are the transition between warm and cold phases of ENSO. By definition, sea surface temperatures, tropical precipitation, and wind patterns are near average conditions during this phase. Close to half of all years are within neutral periods. During the neutral ENSO phase, other climate anomalies/patterns such as the sign of the North Atlantic Oscillation or the Pacific–North American teleconnection pattern exert more influence. El Niño phase El Niño conditions are established when the Walker circulation weakens or reverses and the Hadley circulation strengthens, leading to the development of a band of warm ocean water in the central and east-central equatorial Pacific (approximately between the International Date Line and 120°W), including the area off the west coast of South America, as upwelling of cold water occurs less or not at all offshore. This warming causes a shift in the atmospheric circulation, leading to higher air pressure in the western Pacific and lower in the eastern Pacific, with rainfall decreasing over Indonesia, India and northern Australia, while rainfall and tropical cyclone formation increase over the tropical Pacific Ocean. The low-level surface trade winds, which normally blow from east to west along the equator, either weaken or start blowing from the other direction. El Niño phases are known to happen at irregular intervals of two to seven years and last nine months to two years; the average interval between events is about five years. When this warming occurs for seven to nine months, it is classified as El Niño "conditions"; when its duration is longer, it is classified as an El Niño "episode". It is thought that there have been at least 30 El Niño events between 1900 and 2024, with the 1982–83, 1997–98 and 2014–16 events among the strongest on record. Since 2000, El Niño events have been observed in 2002–03, 2004–05, 2006–07, 2009–10, 2014–16, 2018–19, and 2023–24. Major ENSO events were recorded in the years 1790–93, 1828, 1876–78, 1891, 1925–26, 1972–73, 1982–83, 1997–98, 2014–16, and 2023–24. During strong El Niño episodes, a secondary peak in sea surface temperature across the far eastern equatorial Pacific Ocean sometimes follows the initial peak. La Niña phase An especially strong Walker circulation causes La Niña, which is considered to be the cold oceanic and positive atmospheric phase of the broader El Niño–Southern Oscillation (ENSO) weather phenomenon, as well as the opposite of the El Niño weather pattern, in which sea surface temperature across the eastern equatorial part of the central Pacific Ocean will be lower than normal by 3–5 °C (5.4–9 °F). The phenomenon occurs as strong winds blow warm water at the ocean's surface away from South America, across the Pacific Ocean towards Indonesia. As this warm water moves west, cold water from the deep sea rises to the surface near South America. The movement of so much heat across a quarter of the planet, and particularly in the form of temperature at the ocean surface, can have a significant effect on weather across the entire planet. 
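The 0.5 °C departure used above to separate neutral conditions from the warm and cold phases can be written as a short classification routine. The sketch below labels each overlapping three-month mean of Niño 3.4 sea surface temperature anomalies in the style of threshold described in the monitoring criteria later in this article; the anomaly series is hypothetical, and operational definitions additionally require the threshold to be met over several consecutive overlapping seasons before an event is declared.

import numpy as np

def classify_enso_phases(nino34_anomalies, threshold=0.5):
    # Label overlapping 3-month means of monthly Nino 3.4 SST anomalies (degC).
    a = np.asarray(nino34_anomalies, dtype=float)
    labels = []
    for i in range(len(a) - 2):
        season_mean = a[i:i + 3].mean()   # running three-month ("seasonal") mean
        if season_mean >= threshold:
            labels.append("El Nino")
        elif season_mean <= -threshold:
            labels.append("La Nina")
        else:
            labels.append("Neutral")
    return labels

# Hypothetical anomaly series (degC) showing a warm episode developing and decaying.
print(classify_enso_phases([0.1, 0.3, 0.6, 0.9, 1.1, 0.8, 0.4, 0.0, -0.2]))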
Tropical instability waves visible on sea surface temperature maps, showing a tongue of colder water, are often present during neutral or La Niña conditions. La Niña is a complex weather pattern that occurs every few years, often persisting for longer than five months. El Niño and La Niña can be indicators of weather changes across the globe. Atlantic and Pacific hurricanes can have different characteristics due to lower or higher wind shear and cooler or warmer sea surface temperatures. Timelines of La Niña episodes between 1900 and 2023 have been compiled, although each forecast agency has different criteria for what constitutes a La Niña event, tailored to its specific interests. La Niña events have been observed for hundreds of years; they occurred on a regular basis during the early parts of both the 17th and 19th centuries and have continued to occur every few years since the start of the 20th century. Transitional phases Transitional phases at the onset or departure of El Niño or La Niña can also be important factors in global weather by affecting teleconnections. Significant episodes, known as Trans-Niño, are measured by the Trans-Niño index (TNI). Examples of affected short-term climate in North America include precipitation in the Northwest US and intense tornado activity in the contiguous US. Variations ENSO Modoki The first ENSO pattern to be recognised, called Eastern Pacific (EP) ENSO to distinguish it from others, involves temperature anomalies in the eastern Pacific. However, in the 1990s and 2000s, variations of ENSO conditions were observed, in which the usual place of the temperature anomaly (Niño 1 and 2) is not affected, but an anomaly arises in the central Pacific (Niño 3.4). The phenomenon is called Central Pacific (CP) ENSO, "dateline" ENSO (because the anomaly arises near the dateline), or ENSO "Modoki" (Modoki is Japanese for "similar, but different"). There are variations of ENSO additional to the EP and CP types, and some scientists argue that ENSO exists as a continuum, often with hybrid types. The effects of the CP ENSO are different from those of the EP ENSO. El Niño Modoki is associated with more frequent hurricane landfalls in the Atlantic. La Niña Modoki leads to a rainfall increase over northwestern Australia and the northern Murray–Darling basin, rather than over the eastern portion of the country as in a conventional EP La Niña. Also, La Niña Modoki increases the frequency of cyclonic storms over the Bay of Bengal, but decreases the occurrence of severe storms in the Indian Ocean overall. The first recorded El Niño that originated in the central Pacific and moved toward the east was in 1986. Recent Central Pacific El Niños happened in 1986–87, 1991–92, 1994–95, 2002–03, 2004–05 and 2009–10. Furthermore, there were "Modoki" events in 1957–59, 1963–64, 1965–66, 1968–70, 1977–78 and 1979–80. Some sources say that the El Niños of 2006–07 and 2014–16 were also Central Pacific El Niños. Recent years when La Niña Modoki events occurred include 1973–1974, 1975–1976, 1983–1984, 1988–1989, 1998–1999, 2000–2001, 2008–2009, 2010–2011, and 2016–2017. The recent discovery of ENSO Modoki has some scientists believing it to be linked to global warming. However, comprehensive satellite data go back only to 1979. More research must be done to find the correlation and study past El Niño episodes. More generally, there is no scientific consensus on how or if climate change might affect ENSO. 
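One way the Central Pacific ("Modoki") pattern is quantified in the literature is with an index that contrasts the central Pacific anomaly against the anomalies on either side of it, broadly of the form "central-box anomaly minus half the sum of the eastern- and western-box anomalies". The sketch below assumes the box-averaged SST anomalies have already been computed; the exact box boundaries, weighting, and attribution vary between studies and should be checked against the primary literature before use.

def modoki_like_index(central_anom, east_anom, west_anom):
    # Contrast a central Pacific SST anomaly with its eastern and western neighbours.
    # Positive values indicate warming concentrated in the central Pacific,
    # the signature of a CP ("Modoki") El Nino. Inputs are box-mean anomalies in degC.
    return central_anom - 0.5 * (east_anom + west_anom)

# Hypothetical CP event: strong central warming, little change to either side.
print(modoki_like_index(central_anom=1.2, east_anom=0.1, west_anom=0.0))   # about 1.15
# Hypothetical EP event: warming focused in the east gives a much smaller value.
print(modoki_like_index(central_anom=0.4, east_anom=1.5, west_anom=0.0))   # about -0.35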
There is also a scientific debate on the very existence of this "new" ENSO. A number of studies dispute the reality of this statistical distinction or its increasing occurrence, or both, either arguing that the reliable record is too short to detect such a distinction, finding no distinction or trend using other statistical approaches, or arguing that other types should be distinguished, such as standard and extreme ENSO. Likewise, reflecting the asymmetric nature of the warm and cold phases of ENSO, some studies could not identify similar variations for La Niña in either observations or climate models, but other sources have identified La Niña variations with cooler waters in the central Pacific and average or warmer water temperatures in both the eastern and western Pacific, with eastern Pacific Ocean currents running in the opposite direction compared to the currents in traditional La Niñas. ENSO Costero Coined by the Peruvian ENFEN committee, ENSO Costero, or ENSO Oriental, is the name given to the phenomenon in which the sea-surface temperature anomalies are mostly focused on the South American coastline, especially off Peru and Ecuador. Studies point to many factors that can lead to its occurrence; such an event sometimes accompanies, or is accompanied by, a larger EP ENSO occurrence, or even displays conditions opposite to those observed in the other Niño regions when it accompanies Modoki variations. ENSO Costero events usually present more localized effects, with warm phases leading to increased rainfall over the coast of Ecuador, northern Peru and the Amazon rainforest, and increased temperatures over the northern Chilean coast, and cold phases leading to droughts on the Peruvian coast and increased rainfall and decreased temperatures in its mountainous and jungle regions. Because they do not influence the global climate as much as the other types, these events show fewer and weaker correlations with other significant ENSO features, being neither always triggered by Kelvin waves nor always accompanied by proportional Southern Oscillation responses. According to the Coastal Niño Index (ICEN), strong El Niño Costero events include 1957, 1982–83, 1997–98 and 2015–16, and La Niña Costera ones include 1950, 1954–56, 1962, 1964, 1966, 1967–68, 1970–71, 1975–76 and 2013. Monitoring and declaration of conditions Currently, each country has a different threshold for what constitutes an El Niño event, which is tailored to its specific interests. For example: In the United States, an El Niño is declared when the Climate Prediction Center, which monitors the sea surface temperatures in the Niño 3.4 region and the tropical Pacific, forecasts that the sea surface temperature will remain above average for the next several seasons. The Niño 3.4 region stretches from the 120th to the 170th meridian west, astride the equator, extending five degrees of latitude on either side; it lies to the southeast of Hawaii. The most recent three-month average for the area is computed, and if the region is more than 0.5 °C (0.9 °F) above (or below) normal for that period, then an El Niño (or La Niña) is considered in progress. The Australian Bureau of Meteorology looks at the trade winds, Southern Oscillation Index, weather models and sea surface temperatures in the Niño 3 and 3.4 regions, before declaring an ENSO event. 
The Japan Meteorological Agency declares that an ENSO event has started when the five-month running average of the sea surface temperature deviation for the Niño 3 region exceeds its threshold for six consecutive months or longer. The Peruvian government declares that an ENSO Costero is under way if the sea surface temperature deviation in the Niño 1+2 region equals or exceeds its threshold for at least three months. The United Kingdom's Met Office also uses a several-month period to determine ENSO state. When this warming or cooling occurs for only seven to nine months, it is classified as El Niño/La Niña "conditions"; when it occurs for more than that period, it is classified as El Niño/La Niña "episodes". Effects of ENSO on global climate In climate change science, ENSO is known as one of the internal climate variability phenomena. The other two main ones are the Pacific decadal oscillation and the Atlantic multidecadal oscillation. La Niña impacts the global climate and disrupts normal weather patterns, which can lead to intense storms in some places and droughts in others. El Niño events cause short-term (approximately one year in length) spikes in global average surface temperature while La Niña events cause short-term cooling. Therefore, the relative frequency of El Niño compared to La Niña events can affect global temperature trends on decadal timescales. Climate change There is no sign that there are actual changes in the ENSO physical phenomenon due to climate change. Climate models do not simulate ENSO well enough to make reliable predictions. Future trends in ENSO are uncertain as different models make different predictions. It may be that the observed phenomenon of more frequent and stronger El Niño events occurs only in the initial phase of global warming, and that later (e.g., after the lower layers of the ocean warm as well) El Niño will become weaker. It may also be that the stabilizing and destabilizing forces influencing the phenomenon will eventually compensate for each other. The consequences of ENSO in terms of temperature anomalies, precipitation and weather extremes around the world are clearly increasing and associated with climate change. For example, recent scholarship (since about 2019) has found that climate change is increasing the frequency of extreme El Niño events. Previously there was no consensus on whether climate change will have any influence on the strength or duration of El Niño events, as research alternately supported El Niño events becoming stronger and weaker, longer and shorter. Over the last several decades, the number of El Niño events increased, and the number of La Niña events decreased, although observation of ENSO for much longer is needed to detect robust changes. Studies of historical data show that the recent El Niño variation is most likely linked to global warming. For example, even after subtracting any positive influence of decadal variation that may be present in the ENSO trend, the amplitude of ENSO variability in the observed data still increases, by as much as 60% in the last 50 years. A study published in 2023 by CSIRO researchers found that climate change may have doubled the likelihood of strong El Niño events and increased the likelihood of strong La Niña events ninefold. The study stated that it found a consensus between different models and experiments. 
The IPCC Sixth Assessment Report summarized the state of the art of research in 2021 into the future of ENSO as follows: "In the long term, it is very likely that the precipitation variance related to El Niño–Southern Oscillation will increase" and "It is very likely that rainfall variability related to changes in the strength and spatial extent of ENSO teleconnections will lead to significant changes at regional scale". and "There is medium confidence that both ENSO amplitude and the frequency of high-magnitude events since 1950 are higher than over the period from 1850 and possibly as far back as 1400". Investigations regarding tipping points The ENSO is considered to be a potential tipping element in Earth's climate. Global warming can strengthen the ENSO teleconnection and resulting extreme weather events. For example, an increase in the frequency and magnitude of El Niño events have triggered warmer than usual temperatures over the Indian Ocean, by modulating the Walker circulation. This has resulted in a rapid warming of the Indian Ocean, and consequently a weakening of the Asian Monsoon. Effects of ENSO on weather patterns El Niño affects the global climate and disrupts normal weather patterns, which can lead to intense storms in some places and droughts in others. Tropical cyclones Most tropical cyclones form on the side of the subtropical ridge closer to the equator, then move poleward past the ridge axis before recurving into the main belt of the Westerlies. Areas west of Japan and Korea tend to experience many fewer September–November tropical cyclone impacts during El Niño and neutral years. During El Niño years, the break in the subtropical ridge tends to lie near 130°E, which would favor the Japanese archipelago. Based on modeled and observed accumulated cyclone energy (ACE), El Niño years usually result in less active hurricane seasons in the Atlantic Ocean, but instead favor a shift to tropical cyclone activity in the Pacific Ocean, compared to La Niña years favoring above average hurricane development in the Atlantic and less so in the Pacific basin. Over the Atlantic Ocean, vertical wind shear is increased, which inhibits tropical cyclone genesis and intensification, by causing the westerly winds to be stronger. The atmosphere over the Atlantic Ocean can also be drier and more stable during El Niño events, which can inhibit tropical cyclone genesis and intensification. Within the Eastern Pacific basin: El Niño events contribute to decreased easterly vertical wind shear and favor above-normal hurricane activity. However, the impacts of the ENSO state in this region can vary and are strongly influenced by background climate patterns. The Western Pacific basin experiences a change in the location of where tropical cyclones form during El Niño events, with tropical cyclone formation shifting eastward, without a major change in how many develop each year. As a result of this change, Micronesia is more likely, and China less likely, to be affected by tropical cyclones. A change in the location of where tropical cyclones form also occurs within the Southern Pacific Ocean between 135°E and 120°W, with tropical cyclones more likely to occur within the Southern Pacific basin than the Australian region. As a result of this change tropical cyclones are 50% less likely to make landfall on Queensland, while the risk of a tropical cyclone is elevated for island nations like Niue, French Polynesia, Tonga, Tuvalu, and the Cook Islands. 
Remote influence on tropical Atlantic Ocean A study of climate records has shown that El Niño events in the equatorial Pacific are generally associated with a warm tropical North Atlantic in the following spring and summer. About half of El Niño events persist sufficiently into the spring months for the Western Hemisphere Warm Pool to become unusually large in summer. Occasionally, El Niño's effect on the Atlantic Walker circulation over South America strengthens the easterly trade winds in the western equatorial Atlantic region. As a result, an unusual cooling may occur in the eastern equatorial Atlantic in spring and summer following El Niño peaks in winter. Cases of El Niño-type events in both oceans simultaneously have been linked to severe famines related to the extended failure of monsoon rains. Impacts on humans and ecosystems Economic impacts When El Niño conditions last for many months, extensive ocean warming and the reduction in easterly trade winds limits upwelling of cold nutrient-rich deep water, and its economic effect on local fishing for an international market can be serious. Developing countries that depend on their own agriculture and fishing, particularly those bordering the Pacific Ocean, are usually most affected by El Niño conditions. In this phase of the Oscillation, the pool of warm water in the Pacific near South America is often at its warmest in late December. More generally, El Niño can affect commodity prices and the macroeconomy of different countries. It can constrain the supply of rain-driven agricultural commodities; reduce agricultural output, construction, and services activities; increase food prices; and may trigger social unrest in commodity-dependent poor countries that primarily rely on imported food. A University of Cambridge Working Paper shows that while Australia, Chile, Indonesia, India, Japan, New Zealand and South Africa face a short-lived fall in economic activity in response to an El Niño shock, other countries may actually benefit from an El Niño weather shock (either directly or indirectly through positive spillovers from major trading partners), for instance, Argentina, Canada, Mexico and the United States. Furthermore, most countries experience short-run inflationary pressures following an El Niño shock, while global energy and non-fuel commodity prices increase. The IMF estimates a significant El Niño can boost the GDP of the United States by about 0.5% (due largely to lower heating bills) and reduce the GDP of Indonesia by about 1.0%. Health and social impacts Extreme weather conditions related to the El Niño cycle correlate with changes in the incidence of epidemic diseases. For example, the El Niño cycle is associated with increased risks of some of the diseases transmitted by mosquitoes, such as malaria, dengue fever, and Rift Valley fever. Cycles of malaria in India, Venezuela, Brazil, and Colombia have now been linked to El Niño. Outbreaks of another mosquito-transmitted disease, Australian encephalitis (Murray Valley encephalitis—MVE), occur in temperate south-east Australia after heavy rainfall and flooding, which are associated with La Niña events. A severe outbreak of Rift Valley fever occurred after extreme rainfall in north-eastern Kenya and southern Somalia during the 1997–98 El Niño. ENSO conditions have also been related to Kawasaki disease incidence in Japan and the west coast of the United States, via the linkage to tropospheric winds across the north Pacific Ocean. ENSO may be linked to civil conflicts. 
Scientists at The Earth Institute of Columbia University, having analyzed data from 1950 to 2004, suggest ENSO may have had a role in 21% of all civil conflicts since 1950, with the risk of annual civil conflict doubling from 3% to 6% in countries affected by ENSO during El Niño years relative to La Niña years. Ecological consequences During the 1982–83, 1997–98 and 2015–16 ENSO events, large expanses of tropical forest experienced a prolonged dry period that resulted in widespread fires and drastic changes in forest structure and tree species composition in Amazonian and Bornean forests. These impacts are not restricted to vegetation: declines in insect populations were observed after the extreme drought and severe fires during the 2015–16 El Niño. Declines in habitat-specialist and disturbance-sensitive bird species and in large frugivorous mammals were also observed in Amazonian burned forests, while temporary extirpation of more than 100 lowland butterfly species occurred at a burned forest site in Borneo. In seasonally dry tropical forests, which are more drought tolerant, researchers found that El Niño-induced drought increased seedling mortality. In research published in October 2022, researchers who had studied seasonally dry tropical forests in a national park in Chiang Mai, Thailand, for seven years observed that El Niño increased seedling mortality even in these forests and may affect entire forests in the long run. Coral bleaching Following the 1997–98 El Niño event, the Pacific Marine Environmental Laboratory attributed the first large-scale coral bleaching event to the warming waters. Most critically, global mass bleaching events were recorded in 1997–98 and 2015–16, when losses of around 75–99% of live coral were registered across the world. Considerable attention was also given to the collapse of Peruvian and Chilean anchovy populations that led to a severe fishery crisis following the ENSO events in 1972–73, 1982–83, 1997–98 and, more recently, in 2015–16. In particular, increased surface seawater temperatures in 1982–83 also led to the probable extinction of two hydrocoral species in Panamá, and to a massive mortality of kelp beds along 600 km of coastline in Chile, from which kelps and associated biodiversity recovered only slowly in the most affected areas, even after 20 years. All these findings underscore the role of ENSO events as a strong climatic force driving ecological changes around the world, particularly in tropical forests and coral reefs. Impacts by region Observations of ENSO events since 1950 show that impacts associated with such events depend on the time of year. While certain events and impacts are expected to occur, it is not certain that they will happen. The impacts that generally do occur during most El Niño events include below-average rainfall over Indonesia and northern South America, and above-average rainfall in southeastern South America, eastern equatorial Africa, and the southern United States. Africa La Niña results in wetter-than-normal conditions in southern Africa from December to February, and drier-than-normal conditions over equatorial east Africa over the same period. The effects of El Niño on rainfall in southern Africa differ between the summer and winter rainfall areas. Winter rainfall areas tend to get higher rainfall than normal and summer rainfall areas tend to get less rain. The effect on the summer rainfall areas is stronger and has led to severe drought in strong El Niño events. 
Sea surface temperatures off the west and south coasts of South Africa are affected by ENSO via changes in surface wind strength. During El Niño the south-easterly winds driving upwelling are weaker which results in warmer coastal waters than normal, while during La Niña the same winds are stronger and cause colder coastal waters. These effects on the winds are part of large scale influences on the tropical Atlantic and the South Atlantic High-pressure system, and changes to the pattern of westerly winds further south. There are other influences not known to be related to ENSO of similar importance. Some ENSO events do not lead to the expected changes. Antarctica Many ENSO linkages exist in the high southern latitudes around Antarctica. Specifically, El Niño conditions result in high-pressure anomalies over the Amundsen and Bellingshausen Seas, causing reduced sea ice and increased poleward heat fluxes in these sectors, as well as the Ross Sea. The Weddell Sea, conversely, tends to become colder with more sea ice during El Niño. The exact opposite heating and atmospheric pressure anomalies occur during La Niña. This pattern of variability is known as the Antarctic dipole mode, although the Antarctic response to ENSO forcing is not ubiquitous. Asia In Western Asia, during the region's November–April rainy season, there is increased precipitation in the El Niño phase and reduced precipitation in the La Niña phase on average. During El Niño years: As warm water spreads from the west Pacific and the Indian Ocean to the east Pacific, it takes the rain with it, causing extensive drought in the western Pacific and rainfall in the normally dry eastern Pacific. Singapore experienced the driest February in 2010 since records began in 1869, with only 6.3 mm of rain falling in the month. The years 1968 and 2005 had the next driest Februaries, when 8.4 mm of rain fell. During La Niña years, the formation of tropical cyclones, along with the subtropical ridge position, shifts westward across the western Pacific Ocean, which increases the landfall threat in China. In March 2008, La Niña caused a drop in sea surface temperatures over Southeast Asia by . It also caused heavy rains over the Philippines, Indonesia, and Malaysia. Australia Across most of the continent, El Niño and La Niña have more impact on climate variability than any other factor. There is a strong correlation between the strength of La Niña and rainfall: the greater the sea surface temperature and Southern Oscillation difference from normal, the larger the rainfall change. During El Niño events, the shift in rainfall away from the Western Pacific may mean that rainfall across Australia is reduced. Over the southern part of the continent, warmer than average temperatures can be recorded as weather systems are more mobile and fewer blocking areas of high pressure occur. The onset of the Indo-Australian Monsoon in tropical Australia is delayed by two to six weeks, which as a consequence means that rainfall is reduced over the northern tropics. The risk of a significant bushfire season in south-eastern Australia is higher following an El Niño event, especially when it is combined with a positive Indian Ocean Dipole event. Europe El Niño's effects on Europe are controversial, complex and difficult to analyze, as it is one of several factors that influence the weather over the continent and other factors can overwhelm the signal. 
North America La Niña causes mostly the opposite effects of El Niño: above-average precipitation across the northern Midwest, the northern Rockies, Northern California, and the Pacific Northwest's southern and eastern regions. Meanwhile, precipitation in the southwestern and southeastern states, as well as southern California, is below average. This also allows for the development of many stronger-than-average hurricanes in the Atlantic and fewer in the Pacific. ENSO is linked to rainfall over Puerto Rico. During an El Niño, snowfall is greater than average across the southern Rockies and Sierra Nevada mountain range, and is well-below normal across the Upper Midwest and Great Lakes states. During a La Niña, snowfall is above normal across the Pacific Northwest and western Great Lakes. In Canada, La Niña will, in general, cause a cooler, snowier winter, such as the near-record-breaking amounts of snow recorded in the La Niña winter of 2007–2008 in eastern Canada. In the spring of 2022, La Niña caused above-average precipitation and below-average temperatures in the state of Oregon. April was one of the wettest months on record, and La Niña effects, while less severe, were expected to continue into the summer. Over North America, the main temperature and precipitation impacts of El Niño generally occur in the six months between October and March. In particular, the majority of Canada generally has milder than normal winters and springs, with the exception of eastern Canada where no significant impacts occur. Within the United States, the impacts generally observed during the six-month period include wetter-than-average conditions along the Gulf Coast between Texas and Florida, while drier conditions are observed in Hawaii, the Ohio Valley, Pacific Northwest and the Rocky Mountains. Study of more recent weather events over California and the southwestern United States indicate that there is a variable relationship between El Niño and above-average precipitation, as it strongly depends on the strength of the El Niño event and other factors. Though it has been historically associated with high rainfall in California, the effects of El Niño depend more strongly on the "flavor" of El Niño than its presence or absence, as only "persistent El Niño" events lead to consistently high rainfall. To the north across Alaska, La Niña events lead to drier than normal conditions, while El Niño events do not have a correlation towards dry or wet conditions. During El Niño events, increased precipitation is expected in California due to a more southerly, zonal, storm track. During La Niña, increased precipitation is diverted into the Pacific Northwest due to a more northerly storm track. During La Niña events, the storm track shifts far enough northward to bring wetter than normal winter conditions (in the form of increased snowfall) to the Midwestern states, as well as hot and dry summers. During the El Niño portion of ENSO, increased precipitation falls along the Gulf coast and Southeast due to a stronger than normal, and more southerly, polar jet stream. Isthmus of Tehuantepec The synoptic condition for the Tehuantepecer, a violent mountain-gap wind in between the mountains of Mexico and Guatemala, is associated with high-pressure system forming in Sierra Madre of Mexico in the wake of an advancing cold front, which causes winds to accelerate through the Isthmus of Tehuantepec. 
Tehuantepecers primarily occur during the cold season months for the region in the wake of cold fronts, between October and February, with a summer maximum in July caused by the westward extension of the Azores-Bermuda high pressure system. Wind magnitude is greater during El Niño years than during La Niña years, due to the more frequent cold frontal incursions during El Niño winters. Tehuantepec winds can reach gale force, and on rare occasions hurricane force. The wind's direction is from the north to north-northeast. It leads to a localized acceleration of the trade winds in the region, and can enhance thunderstorm activity when it interacts with the Intertropical Convergence Zone. The effects can last from a few hours to six days. Between 1942 and 1957, La Niña caused isotope changes in the plants of Baja California, which has helped scientists to study its impact. Pacific islands During an El Niño event, New Zealand tends to experience stronger or more frequent westerly winds during its summer, which leads to an elevated risk of drier than normal conditions along the east coast. There is more rain than usual, though, on New Zealand's West Coast, because of the barrier effect of the North Island mountain ranges and the Southern Alps. Fiji generally experiences drier than normal conditions during an El Niño, which can lead to drought becoming established over the islands. However, the main impacts on the island nation are felt about a year after the event becomes established. Within the Samoan Islands, below-average rainfall and higher than normal temperatures are recorded during El Niño events, which can lead to droughts and forest fires on the islands. Other impacts include a decrease in sea level, the possibility of coral bleaching in the marine environment, and an increased risk of a tropical cyclone affecting Samoa. In the late winter and spring during El Niño events, drier than average conditions can be expected in Hawaii. On Guam during El Niño years, dry season precipitation averages below normal, but the probability of a tropical cyclone is more than triple what is normal, so extreme short-duration rainfall events are possible. On American Samoa during El Niño events, precipitation averages about 10 percent above normal, while La Niña events are associated with precipitation averaging about 10 percent below normal. South America The effects of El Niño in South America are direct and stronger than in North America. An El Niño is associated with warm and very wet weather from April to October along the coasts of northern Peru and Ecuador, causing major flooding whenever the event is strong or extreme. Because El Niño's warm pool feeds thunderstorms above, it creates increased rainfall across the east-central and eastern Pacific Ocean, including several portions of the South American west coast. The effects during the months of February, March, and April may become critical along the west coast of South America; El Niño reduces the upwelling of cold, nutrient-rich water that sustains large fish populations, which in turn sustain abundant sea birds, whose droppings support the fertilizer industry. The reduction in upwelling leads to fish kills off the shore of Peru. 
The local fishing industry along the affected coastline can suffer during long-lasting El Niño events. The Peruvian anchoveta fishery, previously the world's largest, collapsed during the 1970s due to overfishing that followed the reduction of the anchoveta population during the 1972 El Niño. During the 1982–83 event, jack mackerel and anchoveta populations were reduced, scallops increased in warmer water, but hake followed cooler water down the continental slope, while shrimp and sardines moved southward, so some catches decreased while others increased. Horse mackerel have increased in the region during warm events. Shifting locations and types of fish due to changing conditions create challenges for the fishing industry. Peruvian sardines have moved during El Niño events to Chilean areas. Other conditions provide further complications, such as the government of Chile in 1991 creating restrictions on the fishing areas for self-employed fishermen and industrial fleets. Southern Brazil and northern Argentina also experience wetter than normal conditions during El Niño years, but mainly during the spring and early summer. Central Chile receives a mild winter with large rainfall, and the Peruvian-Bolivian Altiplano is sometimes exposed to unusual winter snowfall events. Drier and hotter weather occurs in parts of the Amazon River Basin, Colombia, and Central America. During a time of La Niña, drought affects the coastal regions of Peru and Chile. From December to February, northern Brazil is wetter than normal. La Niña causes higher than normal rainfall in the central Andes, which in turn causes catastrophic flooding on the Llanos de Mojos of Beni Department, Bolivia. Such flooding is documented from 1853, 1865, 1872, 1873, 1886, 1895, 1896, 1907, 1921, 1928, 1929 and 1931. Galápagos Islands The Galápagos Islands are a chain of volcanic islands nearly 600 miles west of Ecuador, South America, in the eastern Pacific Ocean. These islands support a wide diversity of terrestrial and marine species. The ecosystem is based on the normal trade winds, which drive the upwelling of cold, nutrient-rich waters around the islands. During an El Niño event the trade winds weaken and sometimes blow from west to east, which causes the Equatorial Current to weaken, raising surface water temperatures and decreasing nutrients in waters surrounding the Galápagos. El Niño causes a trophic cascade which impacts entire ecosystems, starting with primary producers and ending with critical animals such as sharks, penguins, and seals. The effects of El Niño can become detrimental to populations that often starve and die back during these years. Rapid evolutionary adaptations are displayed amongst animal groups during El Niño years to mitigate El Niño conditions. History In geologic timescales Evidence is also strong for El Niño events during the early Holocene epoch 10,000 years ago. Different modes of ENSO-like events have been registered in paleoclimatic archives, showing different triggering methods, feedbacks and environmental responses to the geological, atmospheric and oceanographic characteristics of the time. These paleorecords can be used to provide a qualitative basis for conservation practices. Scientists have also found chemical signatures of warmer sea surface temperatures and increased rainfall caused by El Niño in coral specimens that are around 13,000 years old. 
In a paleoclimate study published in 2024, the authors suggest that El Niños had a strong influence on Earth's hothouse climate during the Permian-Triassic extinction event. The increasing intensity and duration of El Niño events were associated with active volcanism, which resulted in the dieback of vegetation, an increase in the amount of carbon dioxide in the atmosphere, a significant warming and disturbances in the circulation of air masses. During human history ENSO conditions have occurred at two- to seven-year intervals for at least the past 300 years, but most of them have been weak. El Niño may have led to the demise of the Moche and other pre-Columbian Peruvian cultures. A recent study suggests a strong El Niño effect between 1789 and 1793 caused poor crop yields in Europe, which in turn helped touch off the French Revolution. The extreme weather produced by El Niño in 1876–77 gave rise to the most deadly famines of the 19th century. The 1876 famine alone in northern China killed up to 13 million people. The phenomenon had long been of interest because of its effects on the guano industry and other enterprises that depend on biological productivity of the sea. It is recorded that as early as 1822, cartographer Joseph Lartigue, of the French frigate La Clorinde under Baron Mackau, noted the "counter-current" and its usefulness for traveling southward along the Peruvian coast. Charles Todd, in 1888, suggested droughts in India and Australia tended to occur at the same time; Norman Lockyer noted the same in 1904. An El Niño connection with flooding was reported in 1894 by Victor Eguiguren (1852–1919) and in 1895 by Federico Alfonso Pezet (1859–1929). In 1924, Gilbert Walker (for whom the Walker circulation is named) coined the term "Southern Oscillation". He and others (including Norwegian-American meteorologist Jacob Bjerknes) are generally credited with identifying the El Niño effect. The major 1982–83 El Niño led to an upsurge of interest from the scientific community. The period 1990–95 was unusual in that El Niños have rarely occurred in such rapid succession. An especially intense El Niño event in 1998 caused an estimated 16% of the world's reef systems to die. The event temporarily warmed air temperature by 1.5 °C, compared to the usual increase of 0.25 °C associated with El Niño events. Since then, mass coral bleaching has become common worldwide, with all regions having suffered "severe bleaching". Around 1525, when Francisco Pizarro made landfall in Peru, he noted rainfall in the deserts, the first written record of the impacts of El Niño. 
Related patterns Madden–Julian oscillation Link to the El Niño-Southern oscillation Pacific decadal oscillation Mechanisms Pacific Meridional Mode See also For La Niña: 2000 Mozambique flood (attributed to La Niña) 2010 Pakistan floods (attributed to La Niña) 2010–2011 Queensland floods (attributed to La Niña) 2010–2012 La Niña event 2010–2011 Southern Africa floods (attributed to La Niña) 2010–2013 Southern United States and Mexico drought (attributed to La Niña) 2011 East Africa drought (attributed to La Niña) 2020 Atlantic hurricane season (unprecedented severity fueled by La Niña) 2021 eastern Australia floods (severity fueled by La Niña) 2022 Suriname floods (attributed to La Niña) 2023 Auckland Anniversary Weekend floods (attributed to La Niña) 2020–2023 La Niña event For El Niño: 1982–83 El Niño event 1997 Pacific hurricane season (severity fueled by El Niño) 1997–98 El Niño event 2014–2016 El Niño event 2015 Pacific hurricane season (severity fueled by El Niño) 2023–2024 El Niño event References External links Provides current phase of ENSO according to the Australian interpretation. Tropical meteorology Physical oceanography Natural history of the Americas Natural history of Oceania Effects of climate change Regional climate effects Weather hazards Spanish words and phrases Climate oscillations
El Niño–Southern Oscillation
[ "Physics" ]
11,055
[ "Physical phenomena", "Applied and interdisciplinary physics", "Weather hazards", "Weather", "Physical oceanography" ]
4,068,993
https://en.wikipedia.org/wiki/Microecosystem
Microecosystems can exist in locations which are precisely defined by critical environmental factors within small or tiny spaces. Such factors may include temperature, pH, chemical milieu, nutrient supply, presence of symbionts or solid substrates, gaseous atmosphere (aerobic or anaerobic) etc. Some examples Pond microecosystems These microecosystems with limited water volume are often only of temporary duration and hence colonized by organisms which possess a drought-resistant spore stage in the lifecycle, or by organisms which do not need to live in water continuously. The ecosystem conditions applying at a typical pond edge can be quite different from those further from shore. Extremely space-limited water ecosystems can be found in, for example, the water collected in bromeliad leaf bases and the "pitchers" of Nepenthes. Animal gut microecosystems These include the buccal region (especially cavities in the gingiva), rumen, caecum etc. of mammalian herbivores or even invertebrate digestive tracts. In the case of mammalian gastrointestinal microecology, microorganisms such as protozoa, bacteria, as well as curious incompletely defined organisms (such as certain large structurally complex Selenomonads, Quinella ovalis "Quin's Oval", Magnoovum eadii "Eadie's Oval", Oscillospira etc.) can exist in the rumen as incredibly complex, highly enriched mixed populations, (see Moir and Masson images ). This type of microecosystem can adjust rapidly to changes in the nutrition or health of the host animal (usually a ruminant such as cow, sheep, goat etc.); see Hungate's "The Rumen and its microbes 1966). Even within a small closed system such as the rumen there may exist a range of ecological conditions: Many organisms live freely in the rumen fluid whereas others require the substrate and metabolic products supplied by the stomach wall tissue with its folds and interstices. Interesting questions are also posed concerning the transfer of the strict anaerobe organisms in the gut microflora/microfauna to the next host generation. Here, mutual licking and coprophagia certainly play important roles. Soil microecosystems A typical soil microecosystem may be restricted to less than a millimeter in its total depth range owing to steep variation in humidity and/or atmospheric gas composition. The soil grain size and physical and chemical properties of the substrate may also play important roles. Because of the predominant solid phase in these systems they are notoriously difficult to study microscopically without simultaneously disrupting the fine spatial distribution of their components. Terrestrial hot-spring microecosystems These are defined by gradients of water temperature, nutrients, dissolved gases, salt concentrations etc. Along the path of terrestrial water flow the resulting temperature gradient continuum alone may provide many different minute microecosystems, starting with thermophilic bacteria such as Archaea "Archaebacteria" ( or more), followed by conventional thermophiles (), cyanobacteria (blue-green algae) such as the motile filaments of Oscillatoria (), protozoa such as Amoeba, rotifers, then green algae () etc. Of course other factors than temperature also play important roles. Hot springs can provide classic and straightforward ecosystems for microecology studies as well as providing a haven for hitherto undescribed organisms. 
Deep-sea microecosystems The best known contain rare specialized organisms, found only in the immediate vicinity (sometimes within centimeters) of underwater volcanic vents (or "smokers"). These ecosystems require extremely advanced diving and collection techniques for their scientific exploration. Closed microecosystem One that is sealed and completely independent of outside factors, except for temperature and light. A good example would be a plant contained in a sealed jar and submerged under water. No new factors would be able to enter this ecosystem. References Ecosystems Environmental science Ecology
Microecosystem
[ "Biology", "Environmental_science" ]
861
[ "Symbiosis", "Ecosystems", "Ecology", "nan" ]
4,069,108
https://en.wikipedia.org/wiki/IEC%2061508
IEC 61508 is an international standard published by the International Electrotechnical Commission (IEC) consisting of methods on how to apply, design, deploy and maintain automatic protection systems called safety-related systems. It is titled Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems (E/E/PE, or E/E/PES). IEC 61508 is a basic functional safety standard applicable to all industries. It defines functional safety as: “part of the overall safety relating to the EUC (Equipment Under Control) and the EUC control system which depends on the correct functioning of the E/E/PE safety-related systems, other technology safety-related systems and external risk reduction facilities.” The fundamental concept is that any safety-related system must work correctly or fail in a predictable (safe) way. The standard has two fundamental principles: An engineering process called the safety life cycle is defined based on best practices in order to discover and eliminate design errors and omissions. A probabilistic failure approach to account for the safety impact of device failures. The safety life cycle has 16 phases which roughly can be divided into three groups as follows: Phases 1–5 address analysis Phases 6–13 address realisation Phases 14–16 address operation. All phases are concerned with the safety function of the system. The standard has seven parts: Parts 1–3 contain the requirements of the standard (normative) Part 4 contains definitions Parts 5–7 are guidelines and examples for development and thus informative. Central to the standard are the concepts of probabilistic risk for each safety function. The risk is a function of frequency (or likelihood) of the hazardous event and the event consequence severity. The risk is reduced to a tolerable level by applying safety functions which may consist of E/E/PES, associated mechanical devices, or other technologies. Many requirements apply to all technologies but there is strong emphasis on programmable electronics especially in Part 3. IEC 61508 has the following views on risks: Zero risk can never be reached, only probabilities can be reduced Non-tolerable risks must be reduced (ALARP) Optimal, cost effective safety is achieved when addressed in the entire safety lifecycle Specific techniques ensure that mistakes and errors are avoided across the entire life-cycle. Errors introduced anywhere from the initial concept, risk analysis, specification, design, installation, maintenance and through to disposal could undermine even the most reliable protection. IEC 61508 specifies techniques that should be used for each phase of the life-cycle. The seven parts of the first edition of IEC 61508 were published in 1998 and 2000. The second edition was published in 2010. Hazard and risk analysis The standard requires that hazard and risk assessment be carried out for bespoke systems: 'The EUC (equipment under control) risk shall be evaluated, or estimated, for each determined hazardous event'. The standard advises that 'Either qualitative or quantitative hazard and risk analysis techniques may be used' and offers guidance on a number of approaches. One of these, for the qualitative analysis of hazards, is a framework based on 6 categories of likelihood of occurrence and 4 of consequence. 
The categories of likelihood of occurrence and the consequence categories are typically combined into a risk class matrix, where: Class I: Unacceptable in any circumstance; Class II: Undesirable, tolerable only if risk reduction is impracticable or if the costs are grossly disproportionate to the improvement gained; Class III: Tolerable if the cost of risk reduction would exceed the improvement; Class IV: Acceptable as it stands, though it may need to be monitored. Safety integrity level The safety integrity level (SIL) provides a target to attain for each safety function; a risk assessment effort yields this target SIL. For any given design the achieved SIL is evaluated by three measures: 1. Systematic Capability (SC), which is a measure of design quality. Each device in the design has an SC rating. The SIL of the safety function is limited to the smallest SC rating of the devices used. Requirements for SC are presented in a series of tables in Part 2 and Part 3. The requirements include appropriate quality control, management processes, validation and verification techniques, failure analysis etc. so that one can reasonably justify that the final system attains the required SIL. 2. Architecture Constraints, which are minimum levels of safety redundancy presented via two alternative methods, Route 1h and Route 2h. 3. Probability of Dangerous Failure Analysis. Probabilistic analysis The probability metric used in step 3 above depends on whether the functional component will be exposed to high or low demand: high demand is defined as more than once per year and low demand is defined as less than or equal to once per year (IEC 61508-4). For functions that operate continuously (continuous mode) or functions that operate frequently (high demand mode), the SIL specifies an allowable frequency of dangerous failure. For functions that operate intermittently (low demand mode), the SIL specifies an allowable probability that the function will fail to respond on demand. Note the difference between function and system. The system implementing the function might be in operation frequently (like an ECU for deploying an air-bag), but the function (like air-bag deployment) might be in demand intermittently. IEC 61508 certification Certification is third-party attestation that a product, process, or system meets all requirements of the certification program. Those requirements are listed in a document called the certification scheme. IEC 61508 certification programs are operated by impartial third-party organizations called certification bodies (CBs). These CBs are accredited to operate following other international standards including ISO/IEC 17065 and ISO/IEC 17025. Certification bodies are accredited to perform the auditing, assessment, and testing work by an accreditation body (AB). There is often one national AB in each country. These ABs operate per the requirements of ISO/IEC 17011, a standard that contains requirements for the competence, consistency, and impartiality of accreditation bodies when accrediting conformity assessment bodies. ABs are members of the International Accreditation Forum (IAF) for work in management systems, products, services, and personnel accreditation or the International Laboratory Accreditation Cooperation (ILAC) for laboratory accreditation. A Multilateral Recognition Arrangement (MLA) between ABs ensures global recognition of accredited CBs. IEC 61508 certification programs have been established by several global certification bodies. 
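For the low demand mode just described, IEC 61508 associates each SIL with a target band for the average probability of failure on demand (PFDavg), running from 10^-2 to 10^-1 for SIL 1 down to 10^-5 to 10^-4 for SIL 4. The sketch below maps a PFDavg estimate to one of those bands; the failure rate and proof-test interval are hypothetical, the single-channel approximation PFDavg ≈ lambda_DU × TI / 2 is a commonly used simplification rather than the standard's full formula set, and a real assessment would also have to satisfy the systematic capability and architectural constraints listed above.

def sil_band_low_demand(pfd_avg):
    # Map an average probability of failure on demand to the IEC 61508
    # low-demand SIL target bands (SIL 4 is the most demanding).
    bands = [(1e-5, 1e-4, 4), (1e-4, 1e-3, 3), (1e-3, 1e-2, 2), (1e-2, 1e-1, 1)]
    for lower, upper, sil in bands:
        if lower <= pfd_avg < upper:
            return sil
    return None  # outside the tabulated SIL 1 to SIL 4 ranges

# Hypothetical single-channel (1oo1) device with a dangerous-undetected failure
# rate lambda_DU and a yearly proof test; PFDavg ~ lambda_DU * TI / 2 is a
# common simplification for this architecture.
lambda_du = 2.0e-7            # dangerous undetected failures per hour (assumed)
proof_test_interval = 8760.0  # hours, i.e. roughly one year
pfd_avg = lambda_du * proof_test_interval / 2
print(pfd_avg, sil_band_low_demand(pfd_avg))  # about 8.8e-4, in the SIL 3 target band

Estimates of this kind form part of the evidence reviewed by the certification bodies mentioned above.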
Each has defined their own scheme based upon IEC 61508 and other functional safety standards. The scheme lists the referenced standards and specifies procedures which describes their test methods, surveillance audit policy, public documentation policies, and other specific aspects of their program. IEC 61508 certification programs are being offered globally by several recognized CBs including exida, Intertek, SGS-TÜV Saar, TÜV Nord, TÜV Rheinland, TÜV SÜD and UL. Industry/application specific variants Automotive ISO 26262 is an adaptation of IEC 61508 for Automotive Electric/Electronic Systems. It is being widely adopted by the major car manufacturers. Before the launch of ISO 26262, the development of software for safety related automotive systems was predominantly covered by the Motor Industry Software Reliability Association (MISRA) guidelines. The MISRA project was conceived to develop guidelines for the creation of embedded software in road vehicle electronic systems. A set of guidelines for the development of vehicle based software was published in November 1994. This document provided the first automotive industry interpretation of the principles of the, then emerging, IEC 61508 standard. Today MISRA is most widely known for its guidelines on how to use the C and C++ languages. MISRA C has gone on to become the de facto standard for embedded C programming in the majority of safety-related industries, and is also used to improve software quality even where safety is not the main consideration. Rail IEC 62279 provides a specific interpretation of IEC 61508 for railway applications. It is intended to cover the development of software for railway control and protection including communications, signaling and processing systems. EN 50128 and EN 50657 are equivalent CENELEC standards of IEC 62279. Process industries The process industry sector includes many types of manufacturing processes, such as refineries, petrochemical, chemical, pharmaceutical, pulp and paper, and power. IEC 61511 is a technical standard which sets out practices in the engineering of systems that ensure the safety of an industrial process through the use of instrumentation. Power plants IEC 61513 provides requirements and recommendations for the instrumentation and control for systems important to safety of nuclear power plants. It indicates the general requirements for systems that contain conventional hardwired equipment, computer-based equipment or a combination of both types of equipment. An overview list of safety norms specific for nuclear power plants is published by ISO. Machinery IEC 62061 is the machinery-specific implementation of IEC 61508. It provides requirements that are applicable to the system level design of all types of machinery safety-related electrical control systems and also for the design of non-complex subsystems or devices. Testing software Software written in accordance with IEC 61508 may need to be unit tested, depending up on the SIL it needs to achieve. The main requirement in Unit Testing is to ensure that the software is fully tested at the function level and that all possible branches and paths are taken through the software. In some higher SIL level applications, the software code coverage requirement is much tougher and an MC/DC code coverage criterion is used rather than simple branch coverage. To obtain the MC/DC (modified condition/decision coverage) coverage information, one will need a Unit Testing tool, sometimes referred to as a Software Module Testing tool. 
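The difference between simple branch coverage and MC/DC can be seen on even a tiny decision. The sketch below uses a hypothetical two-condition safety check: branch coverage only needs the decision to evaluate both true and false, whereas MC/DC additionally needs test cases showing that each condition independently changes the outcome (for a decision with N conditions this typically means N + 1 test cases). The function and test vectors are illustrative and are not drawn from the standard.

def interlock_open(pressure_ok, temperature_ok):
    # Hypothetical safety decision with two conditions combined by AND.
    return pressure_ok and temperature_ok

# Branch (decision) coverage: one True outcome and one False outcome suffice.
branch_tests = [(True, True), (False, False)]

# MC/DC: each condition must be shown to independently affect the outcome.
# (True, True) vs (False, True) isolates pressure_ok;
# (True, True) vs (True, False) isolates temperature_ok.
mcdc_tests = [(True, True), (False, True), (True, False)]

for p, t in mcdc_tests:
    print(p, t, interlock_open(p, t))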
See also Functional safety Safety standards FMEDA Spurious trip level Time-triggered system (a software architecture used to achieve IEC 61508 compliance) Software quality References Further reading Related safety standards ISO 26262 (an adaptation of IEC 61508 with minor differences) IEC 60730 (Household) DO-178C (Aerospace) Textbooks W. Goble, "Control Systems Safety Evaluation and Reliability" (3rd Edition, Hardcover, 458 pages). I. van Beurden, W. Goble, "Safety Instrumented System Design-Techniques and Design Verification" (1st Edition, 430 pages). M.J.M. Houtermans, "SIL and Functional Safety in a Nutshell" (Risknowlogy Best Practices, 1st Edition, eBook in PDF, ePub, and iBook format, 40 pages). SIL and Functional Safety in a Nutshell - eBook introducing SIL and Functional Safety M. Medoff, R. Faller, "Functional Safety - An IEC 61508 SIL 3 Compliant Development Process" (3rd Edition, Hardcover, 371 pages, www.exida.com). C. O'Brien, L. Stewart, L. Bredemeyer, "Final Elements in Safety Instrumented Systems - IEC 61511 Compliant Systems and IEC 61508 Compliant Products" (1st Edition, 2018, Hardcover, 305 pages, www.exida.com). Münch, Jürgen; Armbrust, Ove; Soto, Martín; Kowalczyk, Martin, "Software Process Definition and Management", Springer, 2012. M. Punch, "Functional Safety for the Mining Industry – An Integrated Approach Using AS(IEC)61508, AS(IEC) 62061 and AS4024.1" (1st Edition, A4 paperback, 150 pages). D. Smith, K. Simpson, "Safety Critical Systems Handbook: A Straightforward Guide to Functional Safety, IEC 61508 (2010 Edition) And Related Standards, Including Process IEC 61511 and Machinery IEC 62061 and ISO 13849" (3rd Edition, Hardcover, 288 pages). External links IEC 61508-1:2010 Functional safety of electrical/electronic/programmable electronic safety-related systems – Part 1 IEC Functional Safety zone 61508 Association A cross-industry group of organizations with an interest in achieving a dependable and cost-effective method for demonstrating compliance with IEC 61508 and related standards. Electrical standards 61508 Safety engineering
IEC 61508
[ "Physics", "Technology", "Engineering" ]
2,506
[ "Systems engineering", "Electrical standards", "Electrical systems", "Safety engineering", "Computer standards", "IEC standards", "Physical systems" ]
4,069,702
https://en.wikipedia.org/wiki/Tuyere
A tuyere or tuyère is a tube, nozzle or pipe allowing the blowing of air into a furnace or hearth. Air or oxygen is injected into a hearth under pressure from bellows or a blowing engine or other devices. This causes the fire to become hotter in front of the blast than it would otherwise have been, enabling metals to be smelted or melted or made hot enough to be worked in a forge, though forges are blown only with air. This applies to any process where a blast is delivered under pressure to make a fire hotter. Archeologists have discovered tuyeres dating from the Iron Age; one example dates from between 770 BCE and 515 BCE. Following the introduction of hot blast, tuyeres are often water-cooled. Around the year 1500 new ironmaking techniques, including the blast furnace and finery forge, were introduced into England from France, along with the French technical terms relating to the new technology. "Tuyere" is one of these French words, sometimes Anglicised as tue-iron or tue iron. Examples A bloomery normally had one tuyere. Early blast furnaces also had one tuyere, but were fed from bellows perhaps 12 feet (3.7 m) long operated by a waterwheel. During the Industrial Revolution, the blast began to be provided using steam engines, initially Watt engines working blowing cylinders. Improvements in foundry practice enabled gas-tight cast iron pipes to be produced, enabling one engine to deliver blast to several sides of a furnace through multiple tuyeres. A finery forge contained finery and chafery hearths, usually one of the latter and one to three of the former. Each hearth was equipped with its own set of bellows, blowing into it through a tuyere. The blacksmith's hearth at their forge has a tuyere, often blown by foot-operated bellows. Tuyeres were also used in smelting lead and copper in smeltmills. As of 2009, the world's largest blast furnace, in Caofeidian, China, operated by Shougang Jing Tang United Iron and Steel Ltd, had 42 tuyeres through which the hot blast is injected into the furnace. They are usually made from copper and cooled with a water jacket to withstand the extreme temperatures. References Steelmaking Industrial furnaces
Tuyere
[ "Chemistry" ]
485
[ "Metallurgical processes", "Steelmaking", "Industrial furnaces" ]
4,070,704
https://en.wikipedia.org/wiki/Floorcloth
A floorcloth, or floor-cloth, is a household furnishing used for warmth, for decoration, or to protect expensive carpets. They were primarily produced and used from the early 18th to the early 20th century and were also referred to as oilcloth, wax cloths, and painted canvas. Some still use floorcloths as a customizable alternative to rugs, and some artists have elected to use floorcloths as a medium of expression. Most modern floorcloths are made of heavy, unstretched canvas with two or more coats of gesso. They are then painted and varnished to make them waterproof. History Floorcloths had their start in 18th-century England, and may have evolved from painted wall tapestries from the 1500s. Textiles were too costly to be used on the floor at that time. From 1578 to 1694 a number of British patents were issued for treating cloth with an oil-type covering, but it is not known if these were for floor coverings. A British receipt from 1722 refers to "a floor oyled cloth," indicating that they were being used underfoot at that time. A London painter and stainer, Nathan Smith, was issued a patent in 1763 for waxed cloth specifically as a floor covering. His recipe for the liquid coating included resin, tar, Spanish brown, beeswax and linseed oil. He set up a factory in Knightsbridge, in London, where the waxed cloth was manufactured and painted, initially freehand or with stencils, but later with wallpaper printing blocks. When American colonists became independent from England, they also began to create their own floorcloths. The first three US presidents, George Washington, John Adams, and Thomas Jefferson, all used floorcloths, and Jefferson had plain green ones in the White House. It is hard to place a standard value on floorcloths, as they varied so much in cost and quality. While some were made at home, commercially produced floor cloths were to be found in shops: in Boston, Samuel Perkins & Sons advertised "painted floor cloths or canvass carpets" in 1816, when they could be purchased for anywhere from $1.37 to $2.25 per square yard. In addition, some itinerant painters traveling in rural areas would sell their services as floorcloth painters. When floorcloths became worn, they were often cut up and reused in less prominent places in the home, and might later be cut up further for use in small spaces such as closets or pantries. Thus, old floorcloths are not often found in museums, and are rarely found in the possession of collectors. Uses Floorcloths served several purposes: they protected floors, decorated a room, and also helped to insulate a space. Floorcloths might be covered with a carpet during cold weather, or might themselves have straw or newspaper put underneath them to help to keep the cold out. Historical floorcloths varied in size. They might cover a smaller space as an area rug does today, they might be of a size to reach wall to wall, or they might be of a size to be placed under a dining table to protect a costly carpet. These small protective floorcloths were called "covers" in the 18th century and "druggets" in the 19th. Design Initially used by the wealthy, the designs and patterns mimicked a range of other substances, including parquet flooring, tile, and marble. As these useful furnishings found their way into middle-class homes, the variety of patterns grew. The painting of floorcloths might be done at home, by professional painters, or in a factory, and thus the quality, intricacy, and value of the floorcloths varied enormously. 
Freehand painting of the cloths gave way to printed and stenciled patterns, and the stenciled floorcloths might be very intricate. One floorcloth at the Melrose Plantation in Natchez, Mississippi, mimicked an intensively patterned Brussels carpet. Waning use of floorcloths By the end of the 19th century, the single term still in use to refer to floorcloths was oil cloth. New materials and processes began to provide some competition for oil cloths, although they did continue to be produced through the early 20th century. A patent was issued in 1844 for kamptulicon, which was well regarded in Great Britain but did not see much use in the United States. Interest in kamptulicon encouraged more experimentation. One result was the issuance of a patent to Frederick Walton in 1863 for linoleum. Both oil cloth and linoleum were being produced in the same factories, with linoleum more aggressively marketed. In the past few decades, the desire to decorate homes in a more personal way has revived the popularity of floorcloths. Unique designs are made in a variety of styles and colors, using many techniques. This allows today's floorcloths to be created to suit any style of interior. References External links Linens Rugs and carpets Floors
Floorcloth
[ "Engineering" ]
1,031
[ "Structural engineering", "Floors" ]
4,071,245
https://en.wikipedia.org/wiki/Liesegang%20rings
Liesegang rings () are a phenomenon seen in many, if not most, chemical systems undergoing a precipitation reaction under certain conditions of concentration and in the absence of convection. Rings are formed when weakly soluble salts are produced from reaction of two soluble substances, one of which is dissolved in a gel medium. The phenomenon is most commonly seen as rings in a Petri dish or bands in a test tube; however, more complex patterns have been observed, such as dislocations of the ring structure in a Petri dish, helices, and "Saturn rings" in a test tube. Despite continuous investigation since rediscovery of the rings in 1896, the mechanism for the formation of Liesegang rings is still unclear. History The phenomenon was first noticed in 1855 by the German chemist Friedlieb Ferdinand Runge. He observed them in the course of experiments on the precipitation of reagents in blotting paper. In 1896 the German chemist Raphael E. Liesegang noted the phenomenon when he dropped a solution of silver nitrate onto a thin layer of gel containing potassium dichromate. After a few hours, sharp concentric rings of insoluble silver dichromate formed. It has aroused the curiosity of chemists for many years. When formed in a test tube by diffusing one component from the top, layers or bands of precipitate form, rather than rings. Silver nitrate–potassium dichromate reaction The reactions are most usually carried out in test tubes into which a gel is formed that contains a dilute solution of one of the reactants. If a hot solution of agar gel also containing a dilute solution of potassium dichromate is poured in a test tube, and after the gel solidifies a more concentrated solution of silver nitrate is poured on top of the gel, the silver nitrate will begin to diffuse into the gel. It will then encounter the potassium dichromate and will form a continuous region of precipitate at the top of the tube. After some hours, the continuous region of precipitation is followed by a clear region with no sensible precipitate, followed by a short region of precipitate further down the tube. This process continues down the tube forming several, up to perhaps a couple dozen, alternating regions of clear gel and precipitate rings. Some general observations Over the decades huge number of precipitation reactions have been used to study the phenomenon, and it seems quite general. Chromates, metal hydroxides, carbonates, and sulfides, formed with lead, copper, silver, mercury and cobalt salts are sometimes favored by investigators, perhaps because of the pretty, colored precipitates formed. The gels used are usually gelatin, agar or silicic acid gel. The concentration ranges over which the rings form in a given gel for a precipitating system can usually be found for any system by a little systematic empirical experimentation in a few hours. Often the concentration of the component in the agar gel should be substantially less concentrated (perhaps an order of magnitude or more) than the one placed on top of the gel. The first feature usually noted is that the bands which form farther away from the liquid-gel interface are generally farther apart. Some investigators measure this distance and report in some systems, at least, a systematic formula for the distance that they form at. The most frequent observation is that the distance apart that the rings form is proportional to the distance from the liquid-gel interface. 
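The proportionality just described is usually stated in the literature as a spacing law (often attributed to Jablczynski); the form given here is the standard statement from that literature rather than anything derived above, and p is an empirical constant of the particular system: if x_n denotes the distance of the n-th band from the liquid-gel interface, then x_{n+1} − x_n ≈ p·x_n, or equivalently x_{n+1}/x_n → 1 + p as n → ∞.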
This is by no means universal, however, and sometimes they form at essentially random, irreproducible distances. Another feature often noted is that the bands themselves do not move with time, but rather form in place and stay there. For very many systems the precipitate that forms is not the fine coagulant or flocs seen on mixing the two solutions in the absence of the gel, but rather coarse, crystalline dispersions. Sometimes the crystals are well separated from one another, and only a few form in each band. The precipitate that forms a band is not always a binary insoluble compound, but may be even a pure metal. Water glass of density 1.06 made acidic by sufficient acetic acid to make it gel, with 0.05 N copper sulfate in it, covered by a 1 percent solution of hydroxylamine hydrochloride produces large tetrahedrons of metallic copper in the bands. It is not possible to make any general statement of the effect of the composition of the gel. A system that forms nicely for one set of components, might fail altogether and require a different set of conditions if the gel is switched, say, from agar to gelatin. The essential feature of the gel required is that thermal convection in the tube be prevented altogether. Most systems will form rings in the absence of the gelling system if the experiment is carried out in a capillary, where convection does not disturb their formation. In fact, the system does not have to even be liquid. A tube plugged with cotton with a little ammonium hydroxide at one end, and a solution of hydrochloric acid at the other will show rings of deposited ammonium chloride where the two gases meet, if the conditions are chosen correctly. Ring formation has also been observed in solid glasses containing a reducible species. For example, bands of silver have been generated by immersing silicate glass in molten AgNO3 for extended periods of time (Pask and Parmelee, 1943). Theories Several different theories have been proposed to explain the formation of Liesegang rings. The chemist Wilhelm Ostwald in 1897 proposed a theory based on the idea that a precipitate is not formed immediately upon the concentration of the ions exceeding a solubility product, but a region of supersaturation occurs first. When the limit of stability of the supersaturation is reached, the precipitate forms, and a clear region forms ahead of the diffusion front because the precipitate that is below the solubility limit diffuses into the precipitate. This was argued to be a critically flawed theory when it was shown that seeding the gel with a colloidal dispersion of the precipitate (which would arguably prevent any significant region of supersaturation) did not prevent the formation of the rings. Another theory focuses on the adsorption of one or the other of the precipitating ions onto the colloidal particles of the precipitate which forms. If the particles are small, the absorption is large, diffusion is "hindered" and this somehow results in the formation of the rings. Still another proposal, the "coagulation theory" states that the precipitate first forms as a fine colloidal dispersion, which then undergoes coagulation by an excess of the diffusing electrolyte and this somehow results in the formation of the rings. Some more recent theories invoke an auto-catalytic step in the reaction that results in the formation of the precipitate. This would seem to contradict the notion that auto-catalytic reactions are, actually, quite rare in nature. 
It appears that the diffusion equation with proper boundary conditions, together with a set of good assumptions on supersaturation, adsorption, auto-catalysis, and coagulation, alone or in some combination, has not yet been solved in a way that makes a quantitative comparison with experiment possible. However, a theoretical approach has been provided for the Matalon-Packter law, which predicts the position of the precipitate bands when the experiments are performed in a test tube. A general theory based on Ostwald's 1897 theory has recently been proposed. It can account for several important features sometimes seen, such as revert and helical banding. References Liesegang, R. E., "Ueber einige Eigenschaften von Gallerten", Naturwissenschaftliche Wochenschrift, Vol. 11, Nr. 30, 353-362 (1896). J.A. Pask and C.W. Parmelee, "Study of Diffusion in Glass," Journal of the American Ceramic Society, Vol. 26, Nr. 8, 267-277 (1943). K. H. Stern, "The Liesegang Phenomenon," Chem. Rev. 54, 79-99 (1954). Ernest S. Hedges, Liesegang Rings and Other Periodic Structures, Chapman and Hall (1932). External links Liesegang rings Tout ce que la nature ne peut pas faire VI : Liesegang Rings A thesis with a summary of reaction-diffusion processes and Liesegang banding (pp. 1-36) Chemical reactions Diffusion Petrology Physical chemistry Thermodynamics Articles containing video clips
Liesegang rings
[ "Physics", "Chemistry", "Mathematics" ]
1,822
[ "Transport phenomena", "Physical phenomena", "Applied and interdisciplinary physics", "Diffusion", "Thermodynamics", "nan", "Physical chemistry", "Dynamical systems" ]
4,072,055
https://en.wikipedia.org/wiki/Cogging%20torque
Cogging torque of electrical motors is the torque due to the interaction between the permanent magnets of the rotor and the stator slots of a permanent magnet machine. It is also known as detent or no-current torque. This torque is position-dependent, and its periodicity per revolution depends on the number of magnetic poles and the number of teeth on the stator. Cogging torque is an undesirable component for the operation of such a motor. It is especially prominent at lower speeds, with the symptom of jerkiness. Cogging torque results in torque as well as speed ripple; however, at high speed the motor moment of inertia filters out the effect of cogging torque. Reducing the cogging torque A summary of techniques used for reducing cogging torque: Skewing the stator stack or magnets Using fractional slots per pole Optimizing the magnet pole arc or width Almost all the techniques used against cogging torque also reduce the motor counter-electromotive force and so reduce the resultant running torque. A slotless and coreless permanent magnet motor does not have any cogging torque. See also Torque ripple Dual-rotor permanent magnet induction motor Footnotes and References Islam, M.S.; Mir, S.; Sebastian, T. (Delphi Steering, Saginaw, MI, USA), "Issues in reducing the cogging torque of mass-produced permanent-magnet brushless DC motor". External links D. Hanselman Electric motors Torque
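As an illustration of the periodicity statement above, the sketch below applies a rule of thumb that is common in the motor-design literature, though not spelled out in this entry: the number of cogging cycles per mechanical revolution equals the least common multiple of the slot count and the pole count. The 12-slot, 10-pole machine is an assumption chosen only for the demonstration.

#include <stdio.h>

/* Greatest common divisor by the Euclidean algorithm. */
static unsigned gcd(unsigned a, unsigned b)
{
    while (b != 0) {
        unsigned t = a % b;
        a = b;
        b = t;
    }
    return a;
}

/* Least common multiple, used here as the usual estimate of cogging
 * cycles per mechanical revolution of a slotted permanent magnet machine. */
static unsigned lcm(unsigned a, unsigned b)
{
    return (a / gcd(a, b)) * b;
}

int main(void)
{
    unsigned slots = 12;  /* assumed number of stator teeth/slots */
    unsigned poles = 10;  /* assumed number of rotor poles        */

    unsigned cycles_per_rev = lcm(slots, poles);              /* 60 for 12 and 10 */
    double   period_deg     = 360.0 / (double)cycles_per_rev; /* 6 degrees        */

    printf("Cogging cycles per revolution: %u\n", cycles_per_rev);
    printf("Cogging period: %.2f mechanical degrees\n", period_deg);
    return 0;
}

Slot and pole combinations with a larger least common multiple spread the slot-magnet interaction more evenly around the air gap, which is one reason fractional slots per pole appear in the mitigation list above.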
Cogging torque
[ "Physics", "Technology", "Engineering" ]
299
[ "Force", "Physical quantities", "Engines", "Electric motors", "Electrical engineering", "Wikipedia categories named after physical quantities", "Torque" ]
4,074,700
https://en.wikipedia.org/wiki/NACA%20airfoil
The NACA airfoil series is a set of standardized airfoil shapes developed by the National Advisory Committee for Aeronautics (NACA), which became widely used in the design of aircraft wings. Origins NACA initially developed the numbered airfoil system which was further refined by the United States Air Force at Langley Research Center. According to the NASA website: Four-digit series The NACA four-digit wing sections define the profile by: First digit describing maximum camber as percentage of the chord. Second digit describing the distance of maximum camber from the airfoil leading edge in tenths of the chord. Last two digits describing maximum thickness of the airfoil as percent of the chord. For example, the NACA 2412 airfoil has a maximum camber of 2% located 40% (0.4 chords) from the leading edge with a maximum thickness of 12% of the chord. The NACA 0015 airfoil is symmetrical, the 00 indicating that it has no camber. The 15 indicates that the airfoil has a 15% thickness to chord length ratio: it is 15% as thick as it is long. Equation for a symmetrical 4-digit NACA airfoil The formula for the shape of a NACA 00xx foil, with "xx" being replaced by the percentage of thickness to chord, is y_t = 5t (0.2969√x − 0.1260x − 0.3516x^2 + 0.2843x^3 − 0.1015x^4), where: x is the position along the chord from 0 to 1.00 (0 to 100%), y_t is the half thickness at a given value of x (centerline to surface), t is the maximum thickness as a fraction of the chord (so t gives the last two digits in the NACA 4-digit denomination divided by 100). In this equation, at x = 1 (the trailing edge of the airfoil), the thickness is not quite zero. If a zero-thickness trailing edge is required, for example for computational work, one of the coefficients should be modified such that they sum to zero. Modifying the last coefficient (i.e. to −0.1036) results in the smallest change to the overall shape of the airfoil. The leading edge approximates a cylinder with a chord-normalized radius of r = 1.1019 t^2. Now the coordinates (x_U, y_U) of the upper airfoil surface and (x_L, y_L) of the lower airfoil surface are x_U = x_L = x, y_U = +y_t and y_L = −y_t. Symmetrical 4-digit series airfoils by default have maximum thickness at 30% of the chord from the leading edge. Equation for a cambered 4-digit NACA airfoil The simplest asymmetric foils are the NACA 4-digit series foils, which use the same formula as that used to generate the 00xx symmetric foils, but with the line of mean camber bent. The formula used to calculate the mean camber line is y_c = (m/p^2)(2px − x^2) for 0 ≤ x ≤ p and y_c = (m/(1 − p)^2)((1 − 2p) + 2px − x^2) for p ≤ x ≤ 1, where m is the maximum camber (100 m is the first of the four digits), p is the location of maximum camber (10 p is the second digit in the NACA xxxx description). For example, a NACA 2412 airfoil uses a 2% camber (first digit) 40% (second digit) along the chord of a 0012 symmetrical airfoil having a thickness 12% (digits 3 and 4) of the chord. For this cambered airfoil, because the thickness needs to be applied perpendicular to the camber line, the coordinates (x_U, y_U) and (x_L, y_L), of respectively the upper and lower airfoil surface, become x_U = x − y_t sin θ, y_U = y_c + y_t cos θ and x_L = x + y_t sin θ, y_L = y_c − y_t cos θ, where θ = arctan(dy_c/dx), with dy_c/dx = (2m/p^2)(p − x) for 0 ≤ x ≤ p and dy_c/dx = (2m/(1 − p)^2)(p − x) for p ≤ x ≤ 1. Five-digit series The NACA five-digit series describes more complex airfoil shapes. Its format is LPSTT, where: L: a single digit representing the theoretical optimal lift coefficient at ideal angle of attack CLI = 0.15 L (this is not the same as the lift coefficient CL), P: a single digit for the x coordinate of the point of maximum camber (max. camber at x = 0.05 P), S: a single digit indicating whether the camber is simple (S = 0) or reflex (S = 1), TT: the maximum thickness in percent of chord, as in a four-digit NACA airfoil code. 
For example, the NACA 23112 profile describes an airfoil with a design lift coefficient of 0.3 (0.15 × 2), the point of maximum camber located at 15% chord (5 × 3), reflex camber (1), and maximum thickness of 12% of chord length (12). The camber line for the simple case (S = 0) is defined in two sections, with the chordwise location and the ordinate normalized by the chord. One constant (m in the usual notation) is chosen so that the maximum camber occurs at the designated position; for the 230 camber line, for example, this places the maximum camber at 15% chord. A second constant (k1 in the usual notation) is then determined to give the desired lift coefficient; for a 230 camber-line profile (the first 3 numbers in the 5-digit series), tabulated values of these constants are used. Non-reflexed 3-digit camber lines 3-digit camber lines provide a far forward location for the maximum camber. The camber line and its gradient are defined piecewise in terms of these constants. Tabulated camber-line profile coefficients are given for a theoretical design lift coefficient of 0.3; the values must be linearly scaled for a different desired design lift coefficient. Reflexed 3-digit camber lines Camber lines such as 231 make the negative trailing-edge camber of the 230-series profile positive. This results in a theoretical pitching moment of 0. Tabulated camber-line profile coefficients are again given for a theoretical design lift coefficient of 0.3; the values must be linearly scaled for a different desired design lift coefficient. Modifications Four- and five-digit series airfoils can be modified with a two-digit code preceded by a hyphen in the following sequence: One digit describing the roundness of the leading edge, with 0 being sharp, 6 being the same as the original airfoil, and larger values indicating a more rounded leading edge. One digit describing the distance of maximum thickness from the leading edge in tenths of the chord. For example, the NACA 1234-05 is a NACA 1234 airfoil with a sharp leading edge and maximum thickness 50% of the chord (0.5 chords) from the leading edge. In addition, for a more precise description of the airfoil all numbers can be presented as decimals. 1-series A new approach to airfoil design was pioneered in the 1930s, in which the airfoil shape was mathematically derived from the desired lift characteristics. Prior to this, airfoil shapes were first created and then had their characteristics measured in a wind tunnel. The 1-series airfoils are described by five digits in the following sequence: The number "1" indicating the series. One digit describing the distance of the minimum-pressure area in tenths of chord. A hyphen. One digit describing the lift coefficient in tenths. Two digits describing the maximum thickness in percent of chord. For example, the NACA 16-123 airfoil has minimum pressure 60% of the chord back with a lift coefficient of 0.1 and maximum thickness of 23% of the chord. 6-series An improvement over 1-series airfoils with emphasis on maximizing laminar flow. The airfoil is described using six digits in the following sequence: The number "6" indicating the series. One digit describing the distance of the minimum pressure area in tenths of the chord. The subscript digit gives the range of lift coefficient in tenths above and below the design lift coefficient in which favorable pressure gradients exist on both surfaces. A hyphen. One digit describing the design lift coefficient in tenths. Two digits describing the maximum thickness as percent of chord. 
"a=" followed by a decimal number describing the fraction of chord over which laminar flow is maintained. a=1 is the default if no value is given. For example, the NACA 654-415 has the minimum pressure placed at 50% of the chord, a maximum thickness of 15% of the chord, a design lift coefficient of 0.4, and maintains laminar flow for lift coefficients between 0 and 0.8. 7-series Further advancement in maximizing laminar flow was achieved by separately identifying the low-pressure zones on upper and lower surfaces of the airfoil. The airfoil is described by seven digits in the following sequence: The number "7" indicating the series. One digit describing the distance of the minimum pressure area on the upper surface in tenths of the chord. One digit describing the distance of the minimum pressure area on the lower surface in tenths of the chord. One letter referring to a standard profile from the earlier NACA series. One digit describing the lift coefficient in tenths. Two digits describing the maximum thickness as percent of chord. For example, the NACA 712A315 has the area of minimum pressure 10% of the chord back on the upper surface and 20% of the chord back on the lower surface, uses the standard "A" profile, has a lift coefficient of 0.3, and has a maximum thickness of 15% of the chord. 8-series Supercritical airfoils designed to independently maximize laminar flow above and below the wing. The numbering is identical to the 7-series airfoils except that the sequence begins with an "8" to identify the series. See also Vought V-173 NACA cowling NACA duct References External links UIUC Airfoil Coordinate Database coordinates for nearly 1,600 airfoils John Dreese's NACA airfoil coordinate generation program Works on Windows XP, 7 and 8. NACA Airfoil Series NASA website feature on NACA airfoils Airfoil Interactive WebApp Aerodynamics Aircraft wing design Airfoil Numerical Analysis of NACA Airfoil 0012 at Different Attack Angles and Obtaining its Aerodynamic Coefficients NACA 4 & 5 digits, 16 series airfoil generator
NACA airfoil
[ "Chemistry", "Engineering" ]
2,015
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
4,075,984
https://en.wikipedia.org/wiki/Vincent%20Schaefer
Vincent Joseph Schaefer (July 4, 1906 – July 25, 1993) was an American chemist and meteorologist who developed cloud seeding. On November 13, 1946, while a researcher at the General Electric Research Laboratory, Schaefer modified clouds in the Berkshire Mountains by seeding them with dry ice. While he was self-taught and never completed high school, he was issued 14 patents. Personal life Vincent J. Schaefer was the oldest son of Peter Aloysius Schaefer and Rose Agnes (Holtslag) Schaefer. He had two younger brothers, Paul and Carl, and two younger sisters, Gertrude and Margaret. The Schaefer family lived in Schenectady, New York, and due to his mother's health, starting in 1921 the family made summer trips to the Adirondack Mountains. Vincent Schaefer had a lifelong association with the Adirondacks, as well as interests in hiking, natural history, and archeology. In his youth he was the founder of a local tribe of the Lone Scouts and with some of his tribe mates wrote and printed a tribe paper called "Archaeological Research." Schaefer credited this publication with his introduction to many prominent individuals in the Schenectady area, including Dr. Willis Rodney Whitney of the General Electric Research Laboratory. During the late 1920s and early 1930s, Schaefer built up his personal library on natural history, science, and his other areas of interest and read a great deal. He also organized groups with those who shared his many interests — the Mohawk Valley Hiking Club in 1929, the Van Epps-Hartley Chapter of the New York Archaeological Association in 1931, and the Schenectady Wintersport Club (which established snow trains to ski slopes in the Adirondacks) in 1933–34. In 1931 Schaefer began work on creating the Long Path of New York (a hiking trail beginning near New York City and ending at Whiteface Mountain in the Adirondacks). During this period Schaefer also created adult education programs on natural history topics which gave him opportunities to speak in the community. Through these many activities Schaefer continued to expand his acquaintances, including John S. Apperson, an engineer at General Electric and a devout conservationist of the Adirondacks. Apperson introduced Schaefer to Irving Langmuir, a scientist at the GE Research Laboratory who was awarded a Nobel Prize in 1932 for his work in surface chemistry. Among other things, Langmuir shared Schaefer's love of skiing and the outdoors. During his retirement, Schaefer worked with photographer John Day on A Field Guide to the Atmosphere (1981), a publication in the Peterson Field Guide series. In addition to continuing his consulting work, Schaefer was in a position to devote much more of his time to some of his lifelong interests such as environmental issues, natural and local history. This included the writing of numerous articles and the delivering of many presentations concerning the natural environment of upstate New York and the human impact on it. He also devoted much of his time to the fight for the preservation of many wilderness areas and parks, such as the Mohonk Preserve, Vroman's Nose, and the Great Flats Aquifer. Schaefer's long-term interest in Dutch barns made it possible for him to assume the editorship of Dutch Barn Miscellany for a time and to build a scale model of a Dutch barn. He also did a lot of research on the original settler families of the Schenectady and Mohawk Valley areas. 
During his retirement, Schaefer reflected on his extraordinary life, preparing timelines, an unpublished autobiography, and indexes to some of his research notebooks and film collections. Schaefer also attended to the disposition of his papers and library. He also worked on a project he entitled "Ancient Windows of the Earth." This involved the slicing of rocks thinly so as to create a translucent effect. When he mounted such pieces on lampshades or other objects, it created a stained-glass window effect from natural rock highlighting the rock's geologic history. As part of this project, Schaefer designed and built a 6-foot-diameter window in memory of his parents for the Saint James Church in North Creek in the Adirondacks. Schaefer married Lois Perret on July 27, 1935. Until their deaths they lived on Schermerhorn Road in Schenectady, in a house Schaefer built with his brothers, which they called Woestyne South. Woestyne North was the name the Schaefers gave to their camp in the Adirondacks. The Schaefers had three children, Susan, Katherine, and James. Professional career General Electric In 1922, Schaefer's parents asked him to leave high school and go to work to supplement the family income. On the advice of his maternal uncles, Schaefer joined a four-year apprentice machinist course at General Electric. During the second year of his apprenticeship, Schaefer was granted a one-month leave to accompany Dr. Arthur C. Parker, New York State Archaeologist, on an expedition to central New York. As Schaefer was concluding the apprentice course in 1926, he was assigned to work at the machine shop at the General Electric Research Laboratory, where he worked for a year as a journeyman toolmaker. Somewhat discouraged by the work of a toolmaker, Schaefer sought to satisfy a desire to work outdoors and to travel by joining, initially through a correspondence course, the Davey Institute of Tree Surgery in Kent, Ohio, in 1927. After a brief period working in Michigan, Schaefer asked to be transferred back to the Schenectady area and for a while worked as an independent landscape gardener. Upon the advice of Robert Palmer, Superintendent of the GE Research Laboratory, in 1929 Schaefer declined an opportunity to enter into a partnership for a plant nursery and instead rejoined the machine shop at the Research Laboratory, this time as a model maker. At the Research Laboratory machine shop, Schaefer built equipment for Langmuir and his research associate, Katharine B. Blodgett. In 1932 Langmuir asked Schaefer to become his research assistant. Schaefer accepted and in 1933 began his research work with Langmuir, Blodgett, Whitney, and others at the Research Lab and throughout the General Electric organization. With Langmuir, Blodgett, and others, as well as on his own, Schaefer published many reports on the areas he studied, which included surface chemistry techniques, electron microscope techniques, polarization, the affinity of ice for various surfaces, protein and other monolayers, studies of protein films, television tube brightness, and submicroscopic particulates. An example of Schaefer's lasting contribution to surface science is the description in 1938 of a technique developed by him and Langmuir (later known as the Langmuir–Schaefer method) for the controlled transfer of a monolayer to a substrate, a modification of the Langmuir–Blodgett method. 
After his promotion to research associate in 1938, Schaefer continued to work closely with Langmuir on the many projects Langmuir obtained through his involvement on national advisory committees, particularly related to military matters in the years immediately before and during the Second World War. This work included research on gas mask filtration of smokes, submarine detection with binaural sound, and the formation of artificial fogs using smoke generators—a project which reached fruition at Vrooman's Nose in the Schoharie Valley with a demonstration for military observers. During his years as Langmuir's assistant, Langmuir allowed and encouraged Schaefer to carry on his own research projects. As an example of this, in 1940 Schaefer became known in his own right for the development of a method to make replicas of individual snowflakes using a thin plastic coating. This discovery brought him national publicity in popular magazines and an abundance of correspondence from individuals, including many students, seeking to replicate his procedure. In 1943, the focus of Schaefer's and Langmuir's research shifted to precipitation static, aircraft icing, ice nuclei, and cloud physics, and many of their experiments were carried out at Mount Washington Observatory in New Hampshire. In the summer of 1946 Schaefer found his experimental "cold box" too warm for some laboratory tests he wanted to perform. Determined to get on with his work, he located some "dry ice" (solid CO2) and placed it into the bottom of the "cold box." Creating a cloud with his breath he observed a sudden and heretofore unseen bluish haze that suddenly turned into millions of microscopic ice crystals that dazzled him in the strobe lit chamber. He had stumbled onto the very principle that was hidden in all previous experiments—the stimulating effect of a sudden change in heat/cold, humidity, in supercooled water spontaneously producing billions of ice nuclei. Through scores of repeated experiments he quickly developed a method to "seed" supercooled clouds with dry ice. In November 1946 Schaefer conducted a successful field test seeding a natural cloud by airplane—with dramatic ice and snow effect. The resulting publicity brought an abundance of new correspondence, this time from people and businesses making requests for snow and water as well as scientists around the world also working on weather modification to change local weather conditions for the better. Schaefer's discovery also led to debates over the appropriateness of tampering with nature through cloud seeding. In addition, the successful field test enabled Langmuir to obtain federal funding to support additional research in cloud seeding and weather modification by the GE Research Laboratory. Schaefer was coordinator of the laboratory portion of Project Cirrus while the Air Force and Navy supplied the aircraft and pilots to carry out field tests and to collect the data used at the Research Laboratory. Field tests were conducted in the Schenectady area as well as in Puerto Rico and New Mexico. When the military pilots working on Project Cirrus were assigned to duties in connection with the Korean War, GE recommended that Project Cirrus be discontinued after comprehensive reports were prepared of the project and the discoveries made. The final Project Cirrus report was issued in March 1953. 
Munitalp Foundation While Project Cirrus was winding down, Schaefer was approached by Vernon Crudge on behalf of the trustees of the Munitalp Foundation to work on Munitalp's meteorological research program. For a time, Schaefer worked for both the Research Laboratory and Munitalp, and in 1954 he left the Research Laboratory to become the Director of Research of Munitalp. At Munitalp, Schaefer worked with the U.S. Forest Service at the Priest River Experimental Forest in northern Idaho with Harry T. Gisborne, noted fire researcher, on Project Skyfire, a program to determine the uses of cloud seeding to affect the patterns of lightning in thunderstorms (and the resulting forest fires started by lightning). Project Skyfire had its roots in an association between the Forest Service and Schaefer begun in the early days of Project Cirrus. While at Munitalp Schaefer also worked on developing a mobile atmospheric research laboratory and time-lapse films of clouds. Schaefer left Munitalp in 1958, turning down an offer to move with the Foundation to Kenya, but he remained an adviser to Munitalp for several years after that. Scientific education After leaving Munitalp, Schaefer's career turned towards scientific education, and let him put his belief in the power of experimentation and observation over book-learning into practice. He worked with the American Meteorological Society and Natural Science Foundation on an educational film program and to develop the Natural Sciences Institute summer programs which gave high school students the opportunity to work with scientists and on their own to do field research and experimentation. From 1959 to 1961 Schaefer was director of the Atmospheric Science Center at the Loomis School in Connecticut. During the 1970s he organized and led annual winter expeditions for 8-10 research scientists to Yellowstone National Park where massive amounts of supercooled clouds were produced by the many geysers, including Old Faithful. There at negative 20-50 Fahrenheit conditions enabled the assembled researchers to perform numerous experiments using dry ice, silver iodide to convert the supercooled water to ice crystals at ground level. Temperature and ice crystal formations allowed first-hand observation of the full range of halo and corona optical effects. Atmospheric Sciences Research Center (ASRC), University at Albany, State University of New York From 1962 to 1968 the NSI program was continued with Schaefer's directorship under the auspices of the Atmospheric Sciences Research Center (ASRC) at the State University of New York at Albany (as the University at Albany, State University of New York was then known). During this period Schaefer also continued his consulting work for many companies, government agencies, and universities. These consulting activities spanned most of Schaefer's career, and extended beyond his retirement from ASRC in 1976. Schaefer helped found ASRC in 1960 and served as its Director of Research until 1966 when he became Director. Schaefer brought highly qualified atmospheric science researchers to ASRC, many of whom he had met through his work at GE and Munitalp. Bernard Vonnegut, Raymond Falconer and Duncan Blanchard were all veterans of Project Cirrus who joined Schaefer at ASRC. During his years at ASRC, in addition to the NSI summer programs, Schaefer led annual research expeditions to Yellowstone National Park for atmospheric scientists to work in the outdoor laboratory it provided each January. 
In the 1970s Schaefer's own research interests focused on solar energy, aerosols, gases, air quality, and pollution particles in the atmosphere. His work in some of these areas culminated in a three-part report on Air Quality on the Global Scale in 1978. In addition, during the 1970s Schaefer was an instructor in the American Association for the Advancement of Science Chautauqua short courses for science teachers. Publications (selected) The presence of ozone, nitric acid, nitrogen dioxide and ammonia in the atmosphere, Atmospheric Sciences Research Center, State University of New York, 1978. The air quality patterns of aerosols on the global scale, Atmospheric Sciences Research Center, State University of New York, 1976. Hailstorms and hailstones of the western Great Plains, Smithsonian Institution, 1961. The possibilities of modifying lightning storms in the northern Rockies, Northern Rocky Mountain Forest & Range Experiment Station, 1949. Heat requirements for instruments and airfoils during icing storms on Mt. Washington, General Electric Research Laboratory, 1946. The Use of high speed model propellers for studying de-icing coatings at the Mt. Washington Observatory, General Electric Research Laboratory, 1946. The Liquid water content of summer clouds on the summit of Mt. Washington, General Electric Research Laboratory, 1946. The Preparation and use of water sensitive coatings for sampling cloud particles, General Electric Research Laboratory, 1946. A Heated, vaned pitot tube and a recorder for measuring air speed under severe icing conditions, General Electric Research Laboratory, 1946. Fossilizing snowflakes, 1941. Serendipity in Science: Twenty Years at Langmuir University, An Autobiography by Vincent J Schaefer, ScD, Compiled and Edited by Don Rittner, Square Circle Press, Voorheesville, NY 2013 (405 pages, 15 Chapters, illustrations and B/W photographs) Patents Filed Apr 12, 1935-"Treatment of Materials" Filed Dec 6, 1954-"Coating for Electric Devices" Filed Apr 12, 1941-"Light-Dividing Element" Filed Jun 27, 1941-"Method of Producing Solids of Desired Configuration" Filed Jun 21, 1944-"Cathode Ray Tube" Filed Mar 24, 1943-"Method and Apparatus for Producing Aerosols"(with Irving Langmuir) Filed Sep 18, 1947-"Cloud Moisture Meter" Filed Nov 5, 1947-"Method of Making Electrical Indicators of Mechanical Expansion"(with Katharine Blodgett) Filed Jan 21, 1948-"Method of Crystal Formation and Precipitation"(with Bernard Vonnegut) Filed Nov 18, 1947-"Electrical Moisture Meter" Filed Jan 29, 1948-"Method of Crystal Formation and Precipitation" Filed Nov 5, 1947-"Electrical Indicator of Mechanical Expansion"(with Katharine Blodgett) Filed Mar 6, 1952-"Method and Apparatus for Detecting Minute Crystal Forming Particles" Filed Dec 6, 1954-"Method of Depositing a Silver Film" References Our History, GE Global Research. Accessed February 14, 2006 Weather Services in the US: 1644-1970, National Weather Service Weather Forecast Office. <Serendipity in Science: Twenty Years at Langmuir University, and autobiography (1993), Compiled and Edited by Don Rittner, Square Circle Press, Voorheesville, NY> External links Finding Aid for the Papers of Vincent J. Schaefer, M.E. Grenander Department of Special Collections and Archives , University at Albany Libraries. Weather Modification: The Physical basis for Cloud Seeding Manipulating the weather, CBC. 
1906 births 1993 deaths 20th-century American chemists American meteorologists General Electric people Scientists from Schenectady, New York University at Albany, SUNY faculty Weather modification Weather modification in North America
Vincent Schaefer
[ "Engineering" ]
3,590
[ "Planetary engineering", "Weather modification" ]
4,076,593
https://en.wikipedia.org/wiki/Emergent%20design
Emergent design is a phrase coined by David Cavallo to describe a theoretical framework for the implementation of systemic change in education and learning environments. This examines how choice of design methodology contributes to the success or failure of education reforms through studies in Thailand. It is related to the theories of situated learning and of constructionist learning. The term constructionism was coined by Seymour Papert under whom Cavallo studied. Emergent design holds that education systems cannot adapt effectively to technology change unless the education is rooted in the existing skills and needs of the local culture. Applications The most notable non-theoretical application of the principles of emergent design is in the OLPC, whose concept work is supported in Cavallo's paper "Models of growth — towards fundamental change in learning environment". Emergent design in agile software development Emergent design is a consistent topic in agile software development, as a result of the methodology's focus on delivering small pieces of working code with business value. With emergent design, a development organization starts delivering functionality and lets the design emerge. Development will take a piece of functionality A and implement it using best practices and proper test coverage and then move on to delivering functionality B. Once B is built, or while it is being built, the organization will look at what A and B have in common and refactor out the commonality, allowing the design to emerge. This process continues as the organization continually delivers functionality. At the end of an agile release cycle, development is left with the smallest set of the design needed, as opposed to the design that could have been anticipated in advance. The end result is a simpler design with a smaller code base, which is more easily understood and maintained and naturally has less room for defects. Emergent design for social change Emergent design is also being used in social change movements, such as a group of Canadian NGOs that are bringing together a group of civic leaders to discuss how their work scales up and scales deep. A series of events are being organized by the Carold Institute and Ashoka Canada in 2013 through to 2015. The project goals currently include, but are not limited to: Engage emerging leaders in redefining models and systems that will support a vibrant and dynamic civil society in Canada. Strengthen and broaden the impact of their leadership Discover and disseminate new knowledge related to systems change and emerging systems Share key learning, insights, innovative strategies and new models of engagement among participants and with key stakeholders and sponsoring organizations References External links Models of Growth David Cavallo bio page David Cavallo MIT Media Lab page Emergent design and learning environments: building on indigenous knowledge Technology integration models Educational psychology Learning Systems engineering
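As a minimal illustration of the "refactor out the commonality" step described in the agile section above, the following C sketch is hypothetical; the report functions and their shared formatting are invented for the example. Two features are delivered independently, and only after both exist is the common piece extracted into one helper, which is how the design is allowed to emerge from delivered functionality rather than from an up-front design.

#include <stdio.h>

/* Feature A and feature B were delivered separately; both ended up
 * formatting a labelled total.  The shared piece was then pulled out
 * into print_labelled_total(), letting the smallest needed design emerge. */
static void print_labelled_total(const char *label, double total)
{
    printf("%s: %.2f\n", label, total);
}

/* Feature A: order summary (originally contained its own formatting code). */
static void report_order_total(double subtotal, double tax)
{
    print_labelled_total("Order total", subtotal + tax);
}

/* Feature B: refund summary (originally duplicated the same formatting code). */
static void report_refund_total(double amount, double fee)
{
    print_labelled_total("Refund total", amount - fee);
}

int main(void)
{
    report_order_total(100.0, 8.0);
    report_refund_total(40.0, 1.5);
    return 0;
}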
Emergent design
[ "Engineering" ]
538
[ "Systems engineering" ]
4,076,831
https://en.wikipedia.org/wiki/Gentzen%27s%20consistency%20proof
Gentzen's consistency proof is a result of proof theory in mathematical logic, published by Gerhard Gentzen in 1936. It shows that the Peano axioms of first-order arithmetic do not contain a contradiction (i.e. are "consistent"), as long as a certain other system used in the proof does not contain any contradictions either. This other system, today called "primitive recursive arithmetic with the additional principle of quantifier-free transfinite induction up to the ordinal ε0", is neither weaker nor stronger than the system of Peano axioms. Gentzen argued that it avoids the questionable modes of inference contained in Peano arithmetic and that its consistency is therefore less controversial. Gentzen's theorem Gentzen's theorem is concerned with first-order arithmetic: the theory of the natural numbers, including their addition and multiplication, axiomatized by the first-order Peano axioms. This is a "first-order" theory: the quantifiers extend over natural numbers, but not over sets or functions of natural numbers. The theory is strong enough to describe recursively defined integer functions such as exponentiation, factorials, or the Fibonacci sequence. Gentzen showed that the consistency of the first-order Peano axioms is provable over the base theory of primitive recursive arithmetic with the additional principle of quantifier-free transfinite induction up to the ordinal ε0. Primitive recursive arithmetic is a much simplified form of arithmetic that is rather uncontroversial. The additional principle means, informally, that there is a well-ordering on the set of finite rooted trees. Formally, ε0 is the first ordinal α such that ω^α = α, i.e. the limit of the sequence ω, ω^ω, ω^(ω^ω), ... It is a countable ordinal much smaller than large countable ordinals. To express ordinals in the language of arithmetic, an ordinal notation is needed, i.e. a way to assign natural numbers to ordinals less than ε0. This can be done in various ways, one example being provided by Cantor's normal form theorem. Gentzen's proof is based on the following assumption: for any quantifier-free formula A(x), if there is an ordinal a < ε0 for which A(a) is false, then there is a least such ordinal. Gentzen defines a notion of "reduction procedure" for proofs in Peano arithmetic. For a given proof, such a procedure produces a tree of proofs, with the given one serving as the root of the tree, and the other proofs being, in a sense, "simpler" than the given one. This increasing simplicity is formalized by attaching an ordinal < ε0 to every proof, and showing that, as one moves down the tree, these ordinals get smaller with every step. He then shows that if there were a proof of a contradiction, the reduction procedure would result in an infinite strictly descending sequence of ordinals smaller than ε0 produced by a primitive recursive operation on proofs corresponding to a quantifier-free formula. Relation to Hilbert's program and Gödel's theorem Gentzen's proof highlights one commonly missed aspect of Gödel's second incompleteness theorem. It is sometimes claimed that the consistency of a theory can only be proved in a stronger theory. Gentzen's theory, obtained by adding quantifier-free transfinite induction to primitive recursive arithmetic, proves the consistency of first-order Peano arithmetic (PA) but does not contain PA. For example, it does not prove ordinary mathematical induction for all formulae, whereas PA does (since all instances of induction are axioms of PA). 
Gentzen's theory is not contained in PA, either, however, since it can prove a number-theoretical fact—the consistency of PA—that PA cannot. Therefore, the two theories are, in one sense, incomparable. That said, there are other, finer ways to compare the strength of theories, the most important of which is defined in terms of the notion of interpretability. It can be shown that, if one theory T is interpretable in another B, then T is consistent if B is. (Indeed, this is a large point of the notion of interpretability.) And, assuming that T is not extremely weak, T itself will be able to prove this very conditional: If B is consistent, then so is T. Hence, T cannot prove that B is consistent, by the second incompleteness theorem, whereas B may well be able to prove that T is consistent. This is what motivates the idea of using interpretability to compare theories, i.e., the thought that, if B interprets T, then B is at least as strong (in the sense of 'consistency strength') as T is. A strong form of the second incompleteness theorem, proved by Pavel Pudlák, who was building on earlier work by Solomon Feferman, states that no consistent theory T that contains Robinson arithmetic, Q, can interpret Q plus Con(T), the statement that T is consistent. By contrast, Q+Con(T) does interpret T, by a strong form of the arithmetized completeness theorem. So Q+Con(T) is always stronger (in one good sense) than T is. But Gentzen's theory trivially interprets Q+Con(PA), since it contains Q and proves Con(PA), and so Gentzen's theory interprets PA. But, by Pudlák's result, PA cannot interpret Gentzen's theory, since Gentzen's theory (as just said) interprets Q+Con(PA), and interpretability is transitive. That is: If PA did interpret Gentzen's theory, then it would also interpret Q+Con(PA) and so would be inconsistent, by Pudlák's result. So, in the sense of consistency strength, as characterized by interpretability, Gentzen's theory is stronger than Peano arithmetic. Hermann Weyl made the following comment in 1946 regarding the significance of Gentzen's consistency result following the devastating impact of Gödel's 1931 incompleteness result on Hilbert's plan to prove the consistency of mathematics. It is likely that all mathematicians ultimately would have accepted Hilbert's approach had he been able to carry it out successfully. The first steps were inspiring and promising. But then Gödel dealt it a terrific blow (1931), from which it has not yet recovered. Gödel enumerated the symbols, formulas, and sequences of formulas in Hilbert's formalism in a certain way, and thus transformed the assertion of consistency into an arithmetic proposition. He could show that this proposition can neither be proved nor disproved within the formalism. This can mean only two things: either the reasoning by which a proof of consistency is given must contain some argument that has no formal counterpart within the system, i.e., we have not succeeded in completely formalizing the procedure of mathematical induction; or hope for a strictly "finitistic" proof of consistency must be given up altogether. When G. Gentzen finally succeeded in proving the consistency of arithmetic he trespassed those limits indeed by claiming as evident a type of reasoning that penetrates into Cantor's "second class of ordinal numbers." made the following comment in 1952 on the significance of Gentzen's result, particularly in the context of the formalist program which was initiated by Hilbert. 
The original proposals of the formalists to make classical mathematics secure by a consistency proof did not contemplate that such a method as transfinite induction up to ε0 would have to be used. To what extent the Gentzen proof can be accepted as securing classical number theory in the sense of that problem formulation is in the present state of affairs a matter for individual judgement, depending on how ready one is to accept induction up to ε0 as a finitary method. In contrast, commented on whether Hilbert's confinement to finitary methods was too restrictive: It thus became apparent that the 'finite Standpunkt' is not the only alternative to classical ways of reasoning and is not necessarily implied by the idea of proof theory. An enlarging of the methods of proof theory was therefore suggested: instead of a reduction to finitist methods of reasoning, it was required only that the arguments be of a constructive character, allowing us to deal with more general forms of inference. Other consistency proofs of arithmetic Gentzen's first version of his consistency proof was not published during his lifetime because Paul Bernays had objected to a method implicitly used in the proof. The modified proof, described above, was published in 1936 in the Annals. Gentzen went on to publish two more consistency proofs, one in 1938 and one in 1943. All of these are contained in . Kurt Gödel reinterpreted Gentzen's 1936 proof in a lecture in 1938 in what came to be known as the no-counterexample interpretation. Both the original proof and the reformulation can be understood in game-theoretic terms. . In 1940 Wilhelm Ackermann published another consistency proof for Peano arithmetic, also using the ordinal ε0. Another proof of consistency of Arithmetic was published by I. N. Khlodovskii, in 1959. Work initiated by Gentzen's proof Gentzen's proof is the first example of what is called proof-theoretic ordinal analysis. In ordinal analysis one gauges the strength of theories by measuring how large the (constructive) ordinals are that can be proven to be well-ordered, or equivalently for how large a (constructive) ordinal can transfinite induction be proven. A constructive ordinal is the order type of a recursive well-ordering of natural numbers. In this language, Gentzen's work establishes that the proof-theoretic ordinal of first-order Peano arithmetic is ε0. Laurence Kirby and Jeff Paris proved in 1982 that Goodstein's theorem cannot be proven in Peano arithmetic. Their proof was based on Gentzen's theorem. Notes References – Translated as "The consistency of arithmetic", in . – Translated as "New version of the consistency proof for elementary number theory", in . - an English translation of papers. Metatheorems Proof theory
Gentzen's consistency proof
[ "Mathematics" ]
2,198
[ "Mathematical logic", "Proof theory" ]
4,077,356
https://en.wikipedia.org/wiki/Neem%20cake
Neem cake organic manure is the by-product obtained from the cold-pressing of neem tree fruits and kernels and from the solvent extraction process for neem oil cake. It is a potential source of organic manure under the Bureau of Indian Standards, Specification No. 8558. Neem has demonstrated considerable potential as a fertilizer. For this purpose, neem cake and neem leaves are especially promising. Puri (1999), in his book Neem: The Divine Tree Azadirachta indica, has given details about neem seed cake as a manure and nitrification inhibitor. The author has described that, after processing, neem cake can be used for partial replacement of poultry and cattle feed. Components Neem cake has an adequate quantity of NPK in organic form for plant growth. Being a totally botanical product, it contains 100% natural NPK content and other essential micronutrients such as nitrogen (N, 2.0% to 5.0%), phosphorus (P, 0.5% to 1.0%), potassium (K, 1.0% to 2.0%), calcium (Ca, 0.5% to 3.0%), magnesium (Mg, 0.3% to 1.0%), sulphur (S, 0.2% to 3.0%), zinc (Zn, 15 ppm to 60 ppm), copper (Cu, 4 ppm to 20 ppm), iron (Fe, 500 ppm to 1200 ppm), and manganese (Mn, 20 ppm to 60 ppm). It is rich in both sulphur compounds and bitter limonoids. According to research calculations, neem cake seems to make soil more fertile due to an ingredient that blocks soil bacteria from converting nitrogenous compounds into nitrogen gas. It is a nitrification inhibitor and prolongs the availability of nitrogen to both short duration and long duration crops. Use as a fertilizer Neem cake organic manure protects plant roots from nematodes, soil grubs and termites, probably due to its residual limonoid content. It also acts as a natural fertilizer with pesticidal properties. Neem cake is widely used in India to fertilize paddy, cotton and sugarcane. Use of neem cake has been shown to increase dry matter production in Tectona grandis (teak), Acacia nilotica (gum arabic), and other forest trees. Neem seed cake can also reduce alkalinity in soil, as it produces organic acids upon decomposition. Being totally natural, it is compatible with soil microbes and rhizosphere microflora and hence helps maintain the fertility of the soil. Neem cake improves the organic matter content of the soil and helps improve soil texture, water-holding capacity, and soil aeration for better root development. Pest control Neem cake is effective in the management of insects and pests. The bitter principles of the seed and cake have been reported to have seven types of activity: (a) antifeedant, (b) attractant, (c) repellent, (d) insecticide, (e) nematicide, (f) growth disruptor and (g) antimicrobial. The cake contains salannin, nimbin, azadirachtin, meliantriol and azadiradione as the major components. Of these, azadirachtin and meliantriol are used as locust antifeedants while salannin is used as an antifeedant for the housefly. References General references Schmutterer, H. (Editor) (2002) The Neem Tree: Source of Unique Natural Products for Integrated Pest Management, Medicine, Industry And Other Purposes (Hardcover), 2nd Edition, Weinheim, Germany: VCH Verlagsgesellschaft. Tewari, D. N. (1992), Monograph on neem (Azadirachta indica A. Juss.). Dehra Dun, India: International Book Distributors. pp. 123–128 Vietmeyer, N. D. (Director) (1992), Neem: A Tree for Solving Global Problems. Report of an ad hoc panel of the Board on Science and Technology for International Development, National Research Council, Washington, DC, USA: National Academy Press. pp. 74–75. Puri, H.S. (1999) Neem: The Divine Tree. 
Azadirachta indica. Harwood Academic Publishers, Amsterdam. See also Arid Forest Research Institute (AFRI) Neem Neem oil Azadirachtin Organic farming Plant toxin insecticides
Neem cake
[ "Chemistry" ]
956
[ "Plant toxin insecticides", "Chemical ecology" ]
8,590,426
https://en.wikipedia.org/wiki/Athermalization
Athermalization, in the field of optics, is the process of achieving optothermal stability in optomechanical systems. This is done by minimizing variations in optical performance over a range of temperatures. Optomechanical systems are typically made of several materials with different thermal properties. These materials compose the optics (refractive or reflective elements) and the mechanics (optical mounts and system housing). As the temperature of these materials changes, the volume and index of refraction will change as well, increasing strain and aberration content (primarily defocus). Compensating for optical variations over a temperature range is known as athermalizing a system in optical engineering. Material property changes Thermal expansion is the driving phenomenon for the extensive and intensive property changes in an optomechanical system. Extensive properties Extensive property changes, such as volume, alter the shape of optical and mechanical components. Systems are geometrically optimized for optical performance and are sensitive to components changing shape and orientation. While volume is a three-dimensional parameter, thermal changes can be modeled in a single dimension with linear expansion, assuming an adequately small temperature range. For example, glass manufacturer Schott provides the coefficient of linear thermal expansion for a temperature range of −30 °C to 70 °C. The change in length of a material is a function of the change in temperature with respect to the standard measurement temperature $T_0$; this temperature is typically room temperature, or 22 degrees Celsius. The relation is $L = L_0 (1 + \alpha \Delta T)$, where $L$ is the length of the material at temperature $T$, $L_0$ is the length of the material at temperature $T_0$, $\Delta T = T - T_0$ is the change in temperature, and $\alpha$ is the coefficient of thermal expansion. These equations describe how diameter, thickness, radius of curvature, and element spacing change as a function of temperature. Intensive properties The dominant intensive property change, in terms of optical performance, is the index of refraction. The refractive index of glass is a function of wavelength and temperature. There are multiple formulas that can be used to define the wavelength dependence, or dispersion, of a glass. Following the notation from Schott, the empirical Sellmeier equation is $n^2(\lambda) - 1 = \frac{B_1 \lambda^2}{\lambda^2 - C_1} + \frac{B_2 \lambda^2}{\lambda^2 - C_2} + \frac{B_3 \lambda^2}{\lambda^2 - C_3}$, where $\lambda$ is wavelength and $B_1$, $B_2$, $B_3$, $C_1$, $C_2$, and $C_3$ are the Sellmeier coefficients. These coefficients can be found in glass catalogs provided by manufacturers and are usually valid from the near-ultraviolet to the near-infrared. For wavelengths beyond this range, it is necessary to know the material's transmittance with respect to wavelength. From the dispersion formula, the temperature dependence of the refractive index can be written $\Delta n_{\mathrm{abs}}(\lambda, T) = \frac{n^2(\lambda, T_0) - 1}{2\, n(\lambda, T_0)} \left( D_0 \Delta T + D_1 \Delta T^2 + D_2 \Delta T^3 + \frac{E_0 \Delta T + E_1 \Delta T^2}{\lambda^2 - \lambda_{TK}^2} \right)$ and $\frac{d n_{\mathrm{abs}}(\lambda, T)}{dT} = \frac{n^2(\lambda, T_0) - 1}{2\, n(\lambda, T_0)} \left( D_0 + 2 D_1 \Delta T + 3 D_2 \Delta T^2 + \frac{E_0 + 2 E_1 \Delta T}{\lambda^2 - \lambda_{TK}^2} \right)$, where $D_0$, $D_1$, $D_2$, $E_0$, $E_1$, and $\lambda_{TK}$ are glass-dependent constants for an optic in vacuum. The power of an optic as a function of temperature can be written from the equations for extensive and intensive property changes, in addition to the lensmaker's equation $\phi = (n - 1) \left( \frac{1}{R_1} - \frac{1}{R_2} + \frac{(n - 1)\, t}{n R_1 R_2} \right)$, where $\phi$ is optical power, $R_1$ and $R_2$ are the radii of curvature of the two surfaces, and $t$ is the thickness of the lens; the index, the radii, and the thickness all vary with temperature as described above. These equations assume spherical surfaces of curvature. If a system is not in vacuum, the index of refraction for air will vary with temperature and pressure according to the Ciddor equation, a modified version of the Edlén equation. Athermalization techniques To account for optical variations introduced by extensive and intensive property changes in materials, systems can be athermalized through material selection or feedback loops.
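A brief numerical sketch of these relations follows, assuming placeholder coefficients: the values below are chosen only to be of plausible magnitude for a crown glass and are not catalog data for any specific material.

```python
import math

def expanded_length(L0, alpha, dT):
    """Linear thermal expansion: L = L0 * (1 + alpha * dT)."""
    return L0 * (1 + alpha * dT)

def sellmeier_index(lam_um, B, C):
    """Refractive index from the Sellmeier equation.
    B = (B1, B2, B3) and C = (C1, C2, C3) in um^2; lam_um is the wavelength in microns."""
    lam2 = lam_um ** 2
    n2 = 1 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C))
    return math.sqrt(n2)

def delta_n_abs(n0, lam_um, dT, D0, D1, D2, E0, E1, lam_tk):
    """Schott-style model for the change of the absolute refractive index with temperature."""
    pref = (n0 ** 2 - 1) / (2 * n0)
    return pref * (D0 * dT + D1 * dT ** 2 + D2 * dT ** 3
                   + (E0 * dT + E1 * dT ** 2) / (lam_um ** 2 - lam_tk ** 2))

# Placeholder coefficients, roughly borosilicate-crown-like in magnitude (illustration only).
B = (1.0396, 0.2318, 1.0105)
C = (0.0060, 0.0200, 103.56)
L0, alpha, dT = 100.0, 7.1e-6, 40.0   # 100 mm spacer, CTE in 1/K, +40 K excursion
lam = 0.5876                          # helium d-line, in microns

print(expanded_length(L0, alpha, dT))              # spacer length after heating, in mm
n = sellmeier_index(lam, B, C)
print(n)                                           # index at the reference temperature
print(n + delta_n_abs(n, lam, dT,                  # index after the temperature change
                      1.9e-6, 1.3e-8, -1.4e-11, 4.3e-7, 6.3e-10, 0.17))
```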
Passive athermalization Passive athermalization works by choosing materials for a system that will compensate the overall change in system performance. The simplest way to do this is to choose materials for the optics and mechanics which have low CTE and dn/dT values. This technique is not always possible as glass types are primarily chosen based on their refractive index and dispersion characteristics at operating temperature. Alternatively, mechanical materials can be chosen which have CTE values complementary to the change in focus introduced by the optics. A material with the preferred CTE is not always available, so two materials can be used in conjunction to effectively get the desired CTE value. Negative thermal expansion materials have recently increased the range of potential CTEs available, expanding passive athermalization options. Active athermalization When optical designs do not permit the selection of materials based on their thermal characteristics, passive athermalization may not be a viable technique. For example, the use of germanium in mid to long wave infrared systems is common because of its exceptional optical properties (high index of refraction and low dispersion). Unfortunately, germanium is also known for its large dn/dT value, which makes it difficult to passively athermalize. Because the primary aberration induced by temperature change is defocus, an optical element, group, or focal plane can be mechanically moved to refocus a system and account for thermal changes. Actively athermalized systems are designed with a feedback loop including a motor for the focusing mechanism and a temperature sensor to indicate the magnitude of the focus adjustment. Temperature gradients When a system is not in thermal equilibrium, it complicates the process of determining system performance. A common temperature gradient to encounter is an axial gradient. This involves temperatures changing in a lens as a function of the thickness of the lens, or often along the optical axis. In optical lens design it is standard notation for the optical axis to be co-linear with the Z-axis in Cartesian coordinates. A difference between the temperature of the first and second surface of a lens will cause the lens to bend. This affects each radius of curvature, therefore changing the optical power of the lens. The change in radius of curvature is a function of the temperature gradient across the thickness of the optic. Radial gradients are less predictable as they may cause the shape of curvature to change, making spherical surfaces aspherical. Determining temperature gradients in an optomechanical system can quickly become an arduous task, requiring an intimate understanding of the heat sources and sinks in a system. Temperature gradients are determined by heat flow and can be a result of conduction, convection, or radiation. Whether steady-state or transient solutions are adequate for an analysis is determined by operating requirements, system design, and the environment. It can be beneficial to leverage the computational power of the finite element method to solve the applicable heat flow equations to determine the temperature gradients of optical and mechanical components. External links Refractive index of air calculator Table of common material CTE values Information on glass from Schott Information on glass from Hoya Information on glass from Ohara Information on glass from CDGM References Optics Temperature
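As an illustration of the passive, two-material approach described above, the sketch below splits a metering structure between two housing materials so that their combined expansion mimics a chosen effective CTE. The material values are placeholders for illustration, not recommendations for any particular design.

```python
def two_material_lengths(L_total, alpha1, alpha2, alpha_target):
    """Split a spacer of total length L_total between two materials with CTEs
    alpha1 and alpha2 so the assembly expands like one material with alpha_target:
        L1 * alpha1 + L2 * alpha2 = L_total * alpha_target,  with  L1 + L2 = L_total."""
    if alpha1 == alpha2:
        raise ValueError("the two materials must have different CTEs")
    L1 = L_total * (alpha_target - alpha2) / (alpha1 - alpha2)
    L2 = L_total - L1
    if not 0 <= L1 <= L_total:
        raise ValueError("target CTE is not reachable with a simple series stack")
    return L1, L2

# Placeholder values: aluminium (~23e-6 /K) and a low-expansion alloy (~1e-6 /K)
# combined to mimic an effective CTE of 12e-6 /K over a 150 mm metering structure.
print(two_material_lengths(150.0, 23e-6, 1e-6, 12e-6))   # -> (75.0, 75.0) mm
```

The same arithmetic underlies bi-material athermal mounts; a negative thermal expansion material simply widens the range of effective CTEs that can be reached.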
Athermalization
[ "Physics", "Chemistry" ]
1,315
[ "Scalar physical quantities", "Temperature", "Thermodynamic properties", "Applied and interdisciplinary physics", "Physical quantities", "Optics", "SI base quantities", "Intensive quantities", " molecular", "Thermodynamics", "Atomic", "Wikipedia categories named after physical quantities", "...
8,590,528
https://en.wikipedia.org/wiki/ATM25
ATM25 is an ATM (Asynchronous Transfer Mode) version wherein data is transferred at 25.6 Mbit/s over Category 3 cable. Background ATM25 has no particular distinctions from other ATM versions. However, ATM25 chipsets were, at one time, inexpensive in comparison to faster ATM chipsets, having the result of making ATM technology available for small office/home office environments. However, these networks no longer have much potential for expansion, as Ethernet has become the first choice in this domain. ATM25 chips can typically achieve speeds of around , and typically support around 32 devices in a single loop. ATM25 is still supported today by Cat 6a cables. The WAN connection side of ATM25 systems often takes place over a fast DSL variant such as RADSL. DSL is often considered in this case, as its technology is based on an ATM core. Criticisms In March 2001, Network World described ATM25 as a "solution looking for a problem": Classified mostly as a solution looking for a problem, ATM to the desktop failed before it really got rolling. While many folks thought the idea of providing all that bandwidth to user PCs was worthwhile, the idea of paying twice as much for the luxury compared with switched Ethernet didn't fly. ATM25 was criticised for being more expensive to use than 10BASE-T Ethernet. References Networking standards Asynchronous Transfer Mode
ATM25
[ "Technology", "Engineering" ]
278
[ "Networking standards", "Computer standards", "Asynchronous Transfer Mode", "Computer networks engineering" ]
8,593,160
https://en.wikipedia.org/wiki/Forisome
Forisomes are proteins occurring in the sieve tubes of Fabaceae. Their molecules are about 1–3 μm wide and 10–30 μm long. They expand and contract anisotropically in response to changes of electric field, pH, or concentration of Ca2+ ions. Unlike most other moving proteins, the change is not dependent on ATP. Forisomes function as valves in sieve tubes of the phloem system, by reversibly changing shape between low-volume ordered crystalloid spindles and high-volume disordered spherical conformations. The change from ordered to disordered conformation involves tripling of the protein's volume, loss of the birefringence present in the crystalline phase, 120% radial expansion and 30% longitudinal shrinkage. In Vicia it was shown that forisomes are associated with the endoplasmic reticulum at sieve plates. There is evidence that the forisomes' behavior could depend on changes in Ca2+ concentration produced by Ca2+-permeable ion channels located on the endoplasmic reticulum and plasma membrane of sieve elements, which would be responsible for the shape changes. Forisomes have possible applications as biomimetic smart materials (e.g. valves in microdevices) or smart composite materials. References External links Forisome: A smart plant protein inside a phloem system Forisome based biomimetic smart materials Forisome Protein, a Key to Biomimetic Materials Motor proteins Smart materials Plant proteins
Forisome
[ "Chemistry", "Materials_science", "Engineering" ]
316
[ "Protein stubs", "Motor proteins", "Materials science", "Biochemistry stubs", "Molecular machines", "Smart materials" ]
8,593,762
https://en.wikipedia.org/wiki/300B
In electronics, the 300B is a directly-heated power triode vacuum tube with a four-pin base, introduced in 1938 by Western Electric to amplify telephone signals. It measures high and wide, and the anode can dissipate 40 watts thermal. In the 1980s it began to be used increasingly by audiophiles in home audio equipment. The 300B has good linearity, low noise and good reliability; it is often used in single-ended triode (SET) audio amplifiers of about eight watts output. A push-pull pair can output 20 watts. Manufacturers of the 300B and other tubes of similar characteristics have included EkspoPUL (Electro Harmonix brand), ELROG, Emission Labs - EML, JJ Electronic, KR Audio, TJ FullMusic, Hengyang Electronics (Psvane brand), Linlai, Takatsuki Electric and Western Electric. Prices for new 300B tubes ranged from US$175 to $2,000 per matched pair. Western Electric (tube manufacturer), a small, privately owned company in Rossville, Georgia, resumed production of the original 300B in 2018 using the original 1938 manufacturing standards on a modernized assembly line housed at the Rossville Works. See also List of vacuum tubes References http://www.aes.org/e-lib/browse.cfm?elib=6058 The 300B's history 300B data sheet External links Stereophile: In Search of the Perfect 300B Tube Reviews of 300B tubes. Vacuum tubes
300B
[ "Physics" ]
316
[ "Vacuum tubes", "Vacuum", "Matter" ]
8,594,100
https://en.wikipedia.org/wiki/EXIT%20chart
An extrinsic information transfer chart, commonly called an EXIT chart, is a technique to aid the construction of good iteratively-decoded error-correcting codes (in particular low-density parity-check (LDPC) codes and Turbo codes). EXIT charts were developed by Stephan ten Brink, building on the concept of extrinsic information developed in the Turbo coding community. An EXIT chart includes the response of an element of the decoder (for example, a convolutional decoder of a Turbo code, the LDPC parity-check nodes or the LDPC variable nodes). The response can either be seen as extrinsic information or a representation of the messages in belief propagation. If there are two components which exchange messages, the behaviour of the decoder can be plotted on a two-dimensional chart. One component is plotted with its input on the horizontal axis and its output on the vertical axis. The other component is plotted with its input on the vertical axis and its output on the horizontal axis. The decoding path followed is found by stepping between the two curves. For a successful decoding, there must be a clear swath between the curves so that iterative decoding can proceed from 0 bits of extrinsic information to 1 bit of extrinsic information. A key assumption is that the messages to and from an element of the decoder can be described by a single number, the extrinsic information. This is exactly true when decoding codes transmitted over a binary erasure channel; otherwise the messages are often modelled as samples from a Gaussian distribution consistent with the given extrinsic information. The other key assumption is that the messages are independent (equivalent to an infinite block-size code without local structure between the components). To make an optimal code, the two transfer curves need to lie close to each other. This observation is supported by the theoretical result that, for capacity to be reached by a code over a binary-erasure channel, there must be no area between the curves, and also by the insight that a large number of iterations are required for information to be spread throughout all bits of a code. References T. Richardson and R. Urbanke: "Modern Coding Theory" External links Lecture notes on EXIT charts (PDF) Error detection and correction Information theory
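For the binary erasure channel the two transfer curves have simple closed forms, so the chart can be sketched numerically. The snippet below computes the variable-node and check-node curves of a regular LDPC code and checks whether the decoding tunnel between them is open; the degrees and erasure probabilities are illustrative choices, not values taken from the article.

```python
def vnd_curve(i_a, eps, dv):
    """Variable-node EXIT curve of a (dv, dc)-regular LDPC code on BEC(eps)."""
    return 1 - eps * (1 - i_a) ** (dv - 1)

def cnd_curve(i_a, dc):
    """Check-node EXIT curve of a (dv, dc)-regular LDPC code on the BEC."""
    return i_a ** (dc - 1)

def tunnel_open(eps, dv, dc, steps=1000):
    """True if one full iteration (check-node update followed by variable-node
    update) strictly increases the extrinsic information at every point in (0, 1),
    i.e. the two curves do not cross and decoding can proceed to completion."""
    for k in range(1, steps):
        i = k / steps
        if vnd_curve(cnd_curve(i, dc), eps, dv) <= i:
            return False
    return True

# Illustrative (3,6)-regular LDPC code: its BEC threshold is roughly eps = 0.429.
for eps in (0.35, 0.40, 0.43, 0.45):
    print(eps, tunnel_open(eps, dv=3, dc=6))   # True, True, False, False
```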
EXIT chart
[ "Mathematics", "Technology", "Engineering" ]
468
[ "Telecommunications engineering", "Reliability engineering", "Applied mathematics", "Error detection and correction", "Computer science", "Information theory" ]
8,594,839
https://en.wikipedia.org/wiki/Comparison%20of%20disk%20encryption%20software
This is a technical feature comparison of different disk encryption software. Background information Operating systems Features Hidden containers: Whether hidden containers (an encrypted container (A) within another encrypted container (B) so the existence of container A can not be established) can be created for deniable encryption. Note that some modes of operation like CBC with a plain IV can be more prone to watermarking attacks than others. Pre-boot authentication: Whether authentication can be required before booting the computer, thus allowing one to encrypt the boot disk. Single sign-on: Whether credentials provided during pre-boot authentication will automatically log the user into the host operating system, thus preventing password fatigue and reducing the need to remember multiple passwords. Custom authentication: Whether custom authentication mechanisms can be implemented with third-party applications. Multiple keys: Whether an encrypted volume can have more than one active key. Passphrase strengthening: Whether key strengthening is used with plain text passwords to frustrate dictionary attacks, usually using PBKDF2 or Argon2. Hardware acceleration: Whether dedicated cryptographic accelerator expansion cards can be taken advantage of. Trusted Platform Module: Whether the implementation can use a TPM cryptoprocessor. Filesystems: What filesystems are supported. Two-factor authentication: Whether optional security tokens (hardware security modules, such as Aladdin eToken and smart cards) are supported (for example using PKCS#11) Layering Whole disk: Whether the whole physical disk or logical volume can be encrypted, including the partition tables and master boot record. Note that this does not imply that the encrypted disk can be used as the boot disk itself; refer to pre-boot authentication in the features comparison table. Partition: Whether individual disk partitions can be encrypted. File: Whether the encrypted container can be stored in a file (usually implemented as encrypted loop devices). Swap space: Whether the swap space (called a "pagefile" on Windows) can be encrypted individually/explicitly. Hibernation file: Whether the hibernation file is encrypted (if hibernation is supported). Modes of operation Different modes of operation supported by the software. Note that an encrypted volume can only use one mode of operation. CBC with predictable IVs: The CBC (cipher block chaining) mode where initialization vectors are statically derived from the sector number and are not secret; this means that IVs are re-used when overwriting a sector and the vectors can easily be guessed by an attacker, leading to watermarking attacks. CBC with secret IVs: The CBC mode where initialization vectors are statically derived from the encryption key and sector number. The IVs are secret, but they are re-used with overwrites. Methods for this include ESSIV and encrypted sector numbers (CGD). CBC with random per-sector keys: The CBC mode where random keys are generated for each sector when it is written to, thus does not exhibit the typical weaknesses of CBC with re-used initialization vectors. The individual sector keys are stored on disk and encrypted with a master key. (See GBDE for details) LRW: The Liskov-Rivest-Wagner tweakable narrow-block mode, a mode of operation specifically designed for disk encryption. Superseded by the more secure XTS mode due to security concerns. 
XTS: XEX-based Tweaked CodeBook mode (TCB) with CipherText Stealing (CTS), the SISWG (IEEE P1619) standard for disk encryption. Authenticated encryption: Protection against ciphertext modification by an attacker See also Cold boot attack Comparison of encrypted external drives Disk encryption software Disk encryption theory List of cryptographic file systems Notes and references External links DiskCryptor vs Truecrypt – Comparison between DiskCryptor and TrueCrypt Buyer's Guide to Full Disk Encryption – Overview of full-disk encryption, how it works, and how it differs from file-level encryption Disk encryption software Disk encryption software
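As an illustration of how a sector-level tweakable mode such as XTS is typically driven, the sketch below encrypts a single 512-byte sector with AES-XTS, deriving the 16-byte tweak from the sector number. It assumes the third-party Python cryptography package; the key handling, sector size and little-endian tweak encoding are simplified placeholders and do not describe any specific product in this comparison.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 512

def xts_encrypt_sector(key, sector_number, plaintext):
    """Encrypt one disk sector with AES-XTS.
    key is 64 bytes (two AES-256 keys concatenated, as XTS requires); the tweak
    is the sector number encoded as a 16-byte little-endian value."""
    assert len(plaintext) == SECTOR_SIZE
    tweak = sector_number.to_bytes(16, "little")
    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

def xts_decrypt_sector(key, sector_number, ciphertext):
    tweak = sector_number.to_bytes(16, "little")
    decryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    return decryptor.update(ciphertext) + decryptor.finalize()

key = os.urandom(64)              # in practice derived from a passphrase via PBKDF2 or Argon2
sector = os.urandom(SECTOR_SIZE)
ciphertext = xts_encrypt_sector(key, 42, sector)
assert xts_decrypt_sector(key, 42, ciphertext) == sector
```

Because the tweak depends only on the sector number, identical plaintext written to different sectors encrypts differently, which is the property that distinguishes XTS and the secret-IV CBC variants from CBC with predictable IVs.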
Comparison of disk encryption software
[ "Mathematics", "Technology" ]
870
[ "Computing-related lists", "Cryptographic software", "Cryptography lists and comparisons", "Mathematical software" ]
8,597,086
https://en.wikipedia.org/wiki/Von%20Neumann%20universal%20constructor
John von Neumann's universal constructor is a self-replicating machine in a cellular automaton (CA) environment. It was designed in the 1940s, without the use of a computer. The fundamental details of the machine were published in von Neumann's book Theory of Self-Reproducing Automata, completed in 1966 by Arthur W. Burks after von Neumann's death. It is regarded as foundational for automata theory, complex systems, and artificial life. Indeed, Nobel Laureate Sydney Brenner considered Von Neumann's work on self-reproducing automata (together with Turing's work on computing machines) central to biological theory as well, allowing us to "discipline our thoughts about machines, both natural and artificial." Von Neumann's goal, as specified in his lectures at the University of Illinois in 1949, was to design a machine whose complexity could grow automatically akin to biological organisms under natural selection. He asked what is the threshold of complexity that must be crossed for machines to be able to evolve. His answer was to specify an abstract machine which, when run, would replicate itself. In his design, the self-replicating machine consists of three parts: a "description" of ('blueprint' or program for) itself, a universal constructor mechanism that can read any description and construct the machine (sans description) encoded in that description, and a universal copy machine that can make copies of any description. After the universal constructor has been used to construct a new machine encoded in the description, the copy machine is used to create a copy of that description, and this copy is passed on to the new machine, resulting in a working replication of the original machine that can keep on reproducing. Some machines will do this backwards, copying the description and then building a machine. Crucially, the self-reproducing machine can evolve by accumulating mutations of the description, not the machine itself, thus gaining the ability to grow in complexity. To define his machine in more detail, von Neumann invented the concept of a cellular automaton. The one he used consists of a two-dimensional grid of cells, each of which can be in one of 29 states at any point in time. At each timestep, each cell updates its state depending on the states of the surrounding cells at the prior timestep. The rules governing these updates are identical for all cells. The universal constructor is a certain pattern of cell states in this cellular automaton. It contains one line of cells that serve as the description (akin to Turing's tape), encoding a sequence of instructions that serve as a 'blueprint' for the machine. The machine reads these instructions one by one and performs the corresponding actions. The instructions direct the machine to use its 'construction arm' (another automaton that functions like an operating system) to build a copy of the machine, without the description tape, at some other location in the cell grid. The description cannot contain instructions to build an equally long description tape, just as a container cannot contain a container of the same size. Therefore, the machine includes the separate copy machine which reads the description tape and passes a copy to the newly constructed machine. The resulting new set of universal constructor and copy machines plus description tape is identical to the old one, and it proceeds to replicate again. 
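The cellular-automaton machinery underlying the design can be sketched in a few lines of code. The fragment below shows only the generic synchronous update loop described above, with cell states held in a dictionary and a placeholder transition rule; von Neumann's actual 29-state rule table is far too large to reproduce here, so the transition function is a stand-in, not his rule.

```python
def step(grid, transition, width, height):
    """One synchronous update of a two-dimensional cellular automaton.
    grid maps (x, y) -> state, with 0 the quiescent state; every cell applies the
    same transition function to its own state and its four orthogonal neighbours
    (the von Neumann neighbourhood used by the 29-state automaton)."""
    new_grid = {}
    for y in range(height):
        for x in range(width):
            centre = grid.get((x, y), 0)
            neighbours = (
                grid.get((x, y - 1), 0),   # north
                grid.get((x + 1, y), 0),   # east
                grid.get((x, y + 1), 0),   # south
                grid.get((x - 1, y), 0),   # west
            )
            state = transition(centre, neighbours)
            if state:                      # store only non-quiescent cells
                new_grid[(x, y)] = state
    return new_grid

def toy_transition(centre, neighbours):
    """Placeholder rule (not von Neumann's): an excitation spreads outwards and decays."""
    return max(centre - 1, *(n - 1 for n in neighbours), 0)

grid = {(5, 5): 5}
for _ in range(3):
    grid = step(grid, toy_transition, width=11, height=11)
print(len(grid))   # number of non-quiescent cells after three steps
```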
Purpose Von Neumann's design has traditionally been understood to be a demonstration of the logical requirements for machine self-replication. However, it is clear that far simpler machines can achieve self-replication. Examples include trivial crystal-like growth, template replication, and Langton's loops. But von Neumann was interested in something more profound: construction, universality, and evolution. Note that the simpler self-replicating CA structures (especially Byl's loop and the Chou–Reggia loop) cannot exist in a wide variety of forms and thus have very limited evolvability. Other CA structures such as the Evoloop are somewhat evolvable but still don't support open-ended evolution. Commonly, simple replicators do not fully contain the machinery of construction, there being a degree to which the replicator is information copied by its surrounding environment. Although the Von Neumann design is a logical construction, it is in principle a design that could be instantiated as a physical machine. Indeed, this universal constructor can be seen as an abstract simulation of a physical universal assembler. The issue of the environmental contribution to replication is somewhat open, since there are different conceptions of raw material and its availability. Von Neumann's crucial insight is that the description of the machine, which is copied and passed to offspring separately via the universal copier, has a double use, being both an active component of the construction mechanism in reproduction and the target of a passive copying process. This part is played by the description (akin to Turing's tape of instructions) in Von Neumann's combination of universal constructor and universal copier. The combination of a universal constructor and copier, plus a tape of instructions, conceptualizes and formalizes i) self-replication, and ii) open-ended evolution, or growth of complexity observed in biological organisms. This insight is all the more remarkable because it preceded the discovery of the structure of the DNA molecule by Watson and Crick and how it is separately translated and replicated in the cell, though it followed the Avery–MacLeod–McCarty experiment which identified DNA as the molecular carrier of genetic information in living organisms. The DNA molecule is processed by separate mechanisms that carry out its instructions (translation) and copy (replicate) the DNA for newly constructed cells. The ability to achieve open-ended evolution lies in the fact that, just as in nature, errors (mutations) in the copying of the genetic tape can lead to viable variants of the automaton, which can then evolve via natural selection. Evolution of Complexity Von Neumann's goal, as specified in his lectures at the University of Illinois in 1949, was to design a machine whose complexity could grow automatically akin to biological organisms under natural selection. He asked what is the threshold of complexity that must be crossed for machines to be able to evolve and grow in complexity. His "proof-of-principle" designs showed how it is logically possible. By using an architecture that separates a general purpose programmable ("universal") constructor from a general purpose copier, he showed how the descriptions (tapes) of machines could accumulate mutations in self-replication and thus evolve more complex machines.
This is a very important result, as prior to that, it might have been conjectured that there is a fundamental logical barrier to the existence of such machines; in which case, biological organisms, which do evolve and grow in complexity, could not be "machines", as conventionally understood. Von Neumann's insight was to think of life as a Turing machine, which is similarly defined by a state-determined machine "head" separated from a memory tape. In practice, when we consider the particular automata implementation Von Neumann pursued, we conclude that it does not yield much evolutionary dynamics because the machines are too fragile - the vast majority of perturbations cause them effectively to disintegrate. Thus, it is the conceptual model outlined in his Illinois lectures that is of greater interest today because it shows how a machine can in principle evolve. This insight is all the more remarkable because the model preceded the discovery of the structure of the DNA molecule as discussed above. It is also noteworthy that Von Neumann's design considers that mutations towards greater complexity need to occur in the (descriptions of) subsystems not involved in self-reproduction itself, as conceptualized by the additional automaton D he considered to perform all functions not directly involved in reproduction. Indeed, in biological organisms only very minor variations of the genetic code have been observed, which matches Von Neumann's rationale that the universal constructor (A) and copier (B) would not themselves evolve, leaving all evolution (and growth of complexity) to automaton D. In his unfinished work, Von Neumann also briefly considers conflict and interactions between his self-reproducing machines, towards understanding the evolution of ecological and social interactions from his theory of self-reproducing machines. Implementations In automata theory, the concept of a universal constructor is non-trivial because of the existence of Garden of Eden patterns (configurations that have no predecessor). But a simple definition is that a universal constructor is able to construct any finite pattern of non-excited (quiescent) cells. Arthur Burks and others extended the work of von Neumann, giving a much clearer and more complete set of details regarding the design and operation of von Neumann's self-replicator. The work of J. W. Thatcher is particularly noteworthy, for he greatly simplified the design. Still, their work did not yield a complete design, cell by cell, of a configuration capable of demonstrating self-replication. Renato Nobili and Umberto Pesavento published the first fully implemented self-reproducing cellular automaton in 1995, nearly fifty years after von Neumann's work. They used a 32-state cellular automaton instead of von Neumann's original 29-state specification, extending it to allow for easier signal-crossing, explicit memory function and a more compact design. They also published an implementation of a general constructor within the original 29-state CA but not one capable of complete replication - the configuration cannot duplicate its tape, nor can it trigger its offspring; the configuration can only construct. In 2004, D. Mange et al. reported an implementation of a self-replicator that is consistent with the designs of von Neumann. In 2007, Nobili published a 32-state implementation that uses run-length encoding to greatly reduce the size of the tape. 
In 2008, William R. Buckley published two configurations which are self-replicators within the original 29-state CA of von Neumann. Buckley claims that the crossing of signal within von Neumann 29-state cellular automata is not necessary to the construction of self-replicators. Buckley also points out that for the purposes of evolution, each replicator should return to its original configuration after replicating, in order to be capable (in theory) of making more than one copy. As published, the 1995 design of Nobili-Pesavento does not fulfill this requirement but the 2007 design of Nobili does; the same is true of Buckley's configurations. In 2009, Buckley published with Golly a third configuration for von Neumann 29-state cellular automata, which can perform either holistic self-replication, or self-replication by partial construction. This configuration also demonstrates that signal crossing is not necessary to the construction of self-replicators within von Neumann 29-state cellular automata. C. L. Nehaniv in 2002, and also Y. Takada et al. in 2004, proposed a universal constructor directly implemented upon an asynchronous cellular automaton, rather than upon a synchronous cellular automaton. Comparison of implementations As defined by von Neumann, universal construction entails the construction of passive configurations, only. As such, the concept of universal construction constituted nothing more than a literary (or, in this case, mathematical) device. It facilitated other proof, such as that a machine well constructed may engage in self-replication, while universal construction itself was simply assumed over a most minimal case. Universal construction under this standard is trivial. Hence, while all the configurations given here can construct any passive configuration, none can construct the real-time crossing organ devised by Gorman. Practicality and computational cost All the implementations of von Neumann's self-reproducing machine require considerable resources to run on computer. For example, in the Nobili-Pesavento 32-state implementation shown above, while the body of the machine is just 6,329 non-empty cells (within a rectangle of size 97x170), it requires a tape that is 145,315 cells long, and takes 63 billion timesteps to replicate. A simulator running at 1,000 timesteps per second would take over 2 years to make the first copy. In 1995, when the first implementation was published, the authors had not seen their own machine replicate. However, in 2008, the hashlife algorithm was extended to support the 29-state and 32-state rulesets in Golly. On a modern desktop PC, replication now takes only a few minutes, although a significant amount of memory is required. Animation gallery See also Codd's cellular automaton Langton's loops Nobili cellular automata Quine, a program that produces itself as output Santa Claus machine Wireworld References External links John von Neumann's 29 state Cellular Automata Implemented in OpenLaszlo by Don Hopkins Artificial life Cellular automaton patterns Self-replicating machines 3D printing
Von Neumann universal constructor
[ "Physics", "Technology", "Biology" ]
2,690
[ "Physical systems", "Self-replication", "Machines", "Self-replicating machines" ]
15,894,497
https://en.wikipedia.org/wiki/Bing%27s%20recognition%20theorem
In topology, a branch of mathematics, Bing's recognition theorem, named for R. H. Bing, asserts that a necessary and sufficient condition for a 3-manifold M to be homeomorphic to the 3-sphere is that every Jordan curve in M be contained within a topological ball. It is a weak version of the Poincaré conjecture. References 3-manifolds Geometric topology Theorems in topology
Bing's recognition theorem
[ "Mathematics" ]
77
[ "Mathematical theorems", "Theorems in topology", "Geometric topology", "Topology stubs", "Topology", "Mathematical problems" ]
15,895,296
https://en.wikipedia.org/wiki/Equisatisfiability
In mathematical logic (a subtopic within the field of formal logic), two formulae are equisatisfiable if the first formula is satisfiable whenever the second is and vice versa; in other words, either both formulae are satisfiable or both are not. Equisatisfiable formulae may disagree, however, for a particular assignment of their variables. Equisatisfiability is therefore different from logical equivalence: two equivalent formulae always have the same models, whereas equisatisfiable formulae are only guaranteed to agree on whether some satisfying assignment exists at all. Equisatisfiability is generally used in the context of translating formulae, so that one can define a translation to be correct if the original and resulting formulae are equisatisfiable. Examples of translations that preserve equisatisfiability are Skolemization and some translations into conjunctive normal form such as the Tseytin transformation. Examples A translation from propositional logic into propositional logic in which every binary disjunction a ∨ b is replaced by (a ∨ n) ∧ (¬n ∨ b), where n is a new variable (one for each replaced disjunction), is a transformation in which satisfiability is preserved: the original and resulting formulae are equisatisfiable. Note that these two formulae are not equivalent: the first formula has the model in which b is true while a and n are false (the model's truth value for n being irrelevant to the truth value of the formula), but this is not a model of the second formula, in which n has to be true in this case. References Metalogic Model theory Concepts in logic
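A brute-force truth-table check of the example above; encoding the two formulae as Python functions is only a convenience for illustration.

```python
from itertools import product

original   = lambda a, b, n: a or b
translated = lambda a, b, n: (a or n) and ((not n) or b)

assignments = list(product([False, True], repeat=3))

equivalent      = all(original(*v) == translated(*v) for v in assignments)
equisatisfiable = any(original(*v) for v in assignments) == any(translated(*v) for v in assignments)

print(equivalent)        # False: a=False, b=True, n=False satisfies the original but not the translation
print(equisatisfiable)   # True: both formulae have at least one satisfying assignment
```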
Equisatisfiability
[ "Mathematics" ]
344
[ "Mathematical logic", "Model theory" ]
15,895,901
https://en.wikipedia.org/wiki/Prescribed%20Ricci%20curvature%20problem
In Riemannian geometry, a branch of mathematics, the prescribed Ricci curvature problem is as follows: given a smooth manifold M and a symmetric 2-tensor h, construct a metric on M whose Ricci curvature tensor equals h. See also Prescribed scalar curvature problem References Thierry Aubin, Some nonlinear problems in Riemannian geometry. Springer Monographs in Mathematics, 1998. Arthur L. Besse. Einstein manifolds. Reprint of the 1987 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2008. xii+516 pp. Dennis M. DeTurck, Existence of metrics with prescribed Ricci curvature: local theory. Invent. Math. 65 (1981/82), no. 1, 179–207. Riemannian geometry Mathematical problems ricci curvature
Prescribed Ricci curvature problem
[ "Physics", "Mathematics" ]
161
[ "Geometric measurement", "Mathematical problems", "Physical quantities", "Curvature (mathematics)" ]
15,898,987
https://en.wikipedia.org/wiki/Calcium%20channel%2C%20voltage-dependent%2C%20T%20type%2C%20alpha%201H%20subunit
Calcium channel, voltage-dependent, T type, alpha 1H subunit, also known as CACNA1H, is a protein which in humans is encoded by the CACNA1H gene. Function This gene encodes Cav3.2, a T-type member of the α1 subunit family, a protein in the voltage-dependent calcium channel complex. Calcium channels mediate the influx of calcium ions into the cell upon membrane polarization and consist of a complex of α1, α2δ, β, and γ subunits in a 1:1:1:1 ratio. The α1 subunit has 24 transmembrane segments and forms the pore through which ions pass into the cell. There are multiple isoforms of each of the proteins in the complex, either encoded by different genes or the result of alternative splicing of transcripts. Alternate transcriptional splice variants, encoding different isoforms, have been characterized for the gene described here. Clinical significance Studies suggest certain mutations in this gene lead to childhood absence epilepsy (CAE). Variants of Cav3.2 with increased channel activity contribute to susceptibility to idiopathic generalized epilepsy (IGE), but are not sufficient to induce epilepsy on their own. The SFARIgene database lists CACNA1H with an autism score of 2.1, indicating a candidate causal relationship with autism. See also T-type calcium channel References External links Further reading Ion channels Integral membrane proteins
Calcium channel, voltage-dependent, T type, alpha 1H subunit
[ "Chemistry" ]
309
[ "Neurochemistry", "Ion channels" ]
15,899,375
https://en.wikipedia.org/wiki/GTF2A1
Transcription initiation factor IIA subunit 1 is a protein that in humans is encoded by the GTF2A1 gene. Interactions GTF2A1 has been shown to interact with TATA binding protein and TBPL1. See also Transcription Factor II A References Further reading Transcription factors
GTF2A1
[ "Chemistry", "Biology" ]
55
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
13,213,280
https://en.wikipedia.org/wiki/Kachurovskii%27s%20theorem
In mathematics, Kachurovskii's theorem is a theorem relating the convexity of a function on a Banach space to the monotonicity of its Fréchet derivative. Statement of the theorem Let K be a convex subset of a Banach space V and let f : K → R ∪ {+∞} be an extended real-valued function that is Fréchet differentiable with derivative df(x) : V → R at each point x in K. (In fact, df(x) is an element of the continuous dual space V∗.) Then the following are equivalent: f is a convex function; for all x and y in K, f(x) + df(x)(y − x) ≤ f(y); df is an (increasing) monotone operator, i.e., for all x and y in K, (df(x) − df(y))(x − y) ≥ 0. References (Proposition 7.4) Convex analysis Theorems in functional analysis
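A one-dimensional sanity check of the equivalence (not a proof); the function x⁴ and the grid of sample points are arbitrary illustrative choices.

```python
f  = lambda x: x ** 4          # a convex function on K = V = R
df = lambda x: 4 * x ** 3      # its derivative

pts = [i / 10 for i in range(-30, 31)]
tol = 1e-9

# Gradient inequality: f(x) + df(x)*(y - x) <= f(y) for all x, y in K.
gradient_inequality = all(f(x) + df(x) * (y - x) <= f(y) + tol for x in pts for y in pts)

# Monotonicity of df: (df(x) - df(y))*(x - y) >= 0 for all x, y in K.
monotone = all((df(x) - df(y)) * (x - y) >= -tol for x in pts for y in pts)

print(gradient_inequality, monotone)   # True True, as the theorem predicts for a convex f
```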
Kachurovskii's theorem
[ "Mathematics" ]
179
[ "Theorems in mathematical analysis", "Theorems in functional analysis" ]
13,213,701
https://en.wikipedia.org/wiki/K%C5%8Dmura%27s%20theorem
In mathematics, Kōmura's theorem is a result on the differentiability of absolutely continuous Banach space-valued functions, and is a substantial generalization of Lebesgue's theorem on the differentiability of the indefinite integral, which is that Φ : [0, T] → R given by Φ(t) = ∫₀ᵗ φ(s) ds is differentiable at t for almost every 0 < t < T when φ : [0, T] → R lies in the Lp space L1([0, T]; R). Statement Let (X, || ||) be a reflexive Banach space and let φ : [0, T] → X be absolutely continuous. Then φ is (strongly) differentiable almost everywhere, the derivative φ′ lies in the Bochner space L1([0, T]; X), and, for all 0 ≤ t ≤ T, φ(t) = φ(0) + ∫₀ᵗ φ′(s) ds. References (Theorem III.1.7) Theorems in measure theory Theorems in functional analysis
Kōmura's theorem
[ "Mathematics" ]
199
[ "Theorems in mathematical analysis", "Theorems in measure theory", "Theorems in functional analysis" ]
13,215,001
https://en.wikipedia.org/wiki/Conveyor%20system
A conveyor system is a common piece of mechanical handling equipment that moves materials from one location to another. Conveyors are especially useful in applications involving the transport of heavy or bulky materials. Conveyor systems allow quick and efficient transport for a wide variety of materials, which makes them very popular in the material handling and packaging industries. They also have popular consumer applications, as they are often found in supermarkets and airports, constituting the final leg of item/bag delivery to customers. Many kinds of conveying systems are available and are used according to the various needs of different industries. There are chain conveyors (floor and overhead) as well. Chain conveyors consist of enclosed tracks, I-beam, towline, power & free, and hand-pushed trolleys. Industries where used Conveyor systems are used widely across a range of industries due to the numerous benefits they provide. Conveyors are able to safely transport materials from one level to another, which when done by human labor would be strenuous and expensive. They can be installed almost anywhere, and are much safer than using a forklift or other machine to move materials. They can move loads of all shapes, sizes and weights. Also, many have advanced safety features that help prevent accidents. There are a variety of options available for running conveying systems, including hydraulic, mechanical and fully automated systems, which are equipped to fit individual needs. Conveyor systems are commonly used in many industries, including mining, automotive, agricultural, computer, electronic, food processing, aerospace, pharmaceutical, chemical, bottling and canning, print finishing and packaging. Although a wide variety of materials can be conveyed, some of the most common include food items such as beans and nuts, bottles and cans, automotive components, scrap metal, pills and powders, wood and furniture, and grain and animal feed. Many factors are important in the accurate selection of a conveyor system. It is important to know how the conveyor system will be used beforehand. Some individual areas that are helpful to consider are the required conveyor operations, such as transport, accumulation and sorting, the material sizes, weights and shapes, and where the loading and pickup points need to be. Care and maintenance A conveyor system is often the lifeline to a company's ability to effectively move its product in a timely fashion. The steps that a company can take to ensure that it performs at peak capacity include regular inspections and system audits, close monitoring of motors and reducers, keeping key parts in stock, and proper training of personnel. Increasing the service life of a conveyor system involves choosing the right conveyor type, the right system design and paying attention to regular maintenance practices. A conveyor system that is designed properly will last a long time with proper maintenance. Overhead conveyor systems have been used in numerous applications, from shop displays and assembly lines to paint finishing plants and more. Impact and wear-resistant materials used in manufacturing Conveyor systems require materials suited to the displacement of heavy loads and the wear-resistance to hold up over time without seizing due to deformation. Where static control is a factor, special materials designed to either dissipate or conduct electrical charges are used. 
Examples of conveyor handling materials include UHMW, nylon, Nylatron NSM, HDPE, Tivar, Tivar ESd, and polyurethane. Growth in various industries Makers of material handling and conveyor systems have their greatest exposure in industries such as automotive, pharmaceutical and packaging, and in various production plants. Portable conveyors are likewise growing fast in the construction sector, and the purchase rate for conveyor systems in North America, Europe and Asia was expected to grow further by 2014. The most commonly purchased types of conveyors are line-shaft roller conveyors, chain conveyors and conveyor belts at packaging factories and industrial plants, where product finishing and monitoring are usually carried out. Commercial and civil sectors are increasingly implementing conveyors at airports, shopping malls, etc. Types Aero-mechanical conveyor Automotive conveyor Belt conveyor Belt-driven live roller conveyor Bucket conveyor Chain conveyor Chain-driven live roller conveyor Drag conveyor Dust-proof conveyor Electric track vehicle system Flexible conveyor Gravity conveyor Gravity skate-wheel conveyor Lineshaft roller conveyor Motorized-drive roller conveyor Overhead I-beam conveyor Overland conveyor Pharmaceutical conveyor Plastic belt conveyor Pneumatic conveyor Screw or auger conveyor Spiral conveyor Tube chain conveyor Tubular Gallery conveyor Vacuum conveyor Vertical conveyor Vibrating conveyor Walking Beam Wire mesh conveyor Pneumatic Every pneumatic system uses pipes or ducts called transport lines that carry a mixture of materials and a stream of air. These materials are typically free-flowing powdery materials such as cement and fly ash. Products are moved through tubes by air pressure. Pneumatic conveyors are either carrier systems or dilute-phase systems; carrier systems simply push items from one entry point to one exit point, such as the money-exchanging pneumatic tubes used at a bank drive-through window. Dilute-phase systems use push-pull pressure to guide materials through various entry and exit points. Air compressors or blowers can be used to generate the air flow. Three systems are used to generate the high-velocity air stream: Suction or vacuum systems, utilizing a vacuum created in the pipeline to draw the material along with the surrounding air. The system operates at a low pressure, practically 0.4–0.5 atm below atmospheric, and is utilized mainly in conveying light, free-flowing materials. Pressure-type systems, in which a positive pressure is used to push material from one point to the next. The system is ideal for conveying material from one loading point to a number of unloading points. It operates at a pressure of 6 atm and upwards. Combination systems, in which a suction system is used to convey material from a number of loading points and a pressure system is employed to deliver it to a number of unloading points. Vibrating A vibrating conveyor is a machine with a solid conveying surface which is turned up on the side to form a trough. They are used extensively in food-grade applications to convey dry bulk solids where sanitation, washdown, and low maintenance are essential. Vibrating conveyors are also suitable for harsh, very hot, dirty, or corrosive environments. They can be used to convey newly-cast metal parts which may reach upwards of . Due to the fixed nature of the conveying pans, vibrating conveyors can also perform tasks such as sorting, screening, classifying and orienting parts. 
Vibrating conveyors have been built to convey material at angles exceeding 45° from horizontal using special pan shapes. Flat pans will convey most materials at a 5° incline from horizontal line. Flexible The flexible conveyor is based on a conveyor beam in aluminum or stainless steel, with low-friction slide rails guiding a plastic multi-flexing chain. Products to be conveyed travel directly on the conveyor, or on pallets/carriers. These conveyors can be worked around obstacles and keep production lines flowing. They are made at varying levels and can work in multiple environments. They are used in food packaging, case packing, and pharmaceutical industries and also in large retail stores such as Wal-Mart and Kmart. Spiral Like vertical conveyors, spiral conveyors raise and lower materials to different levels of a facility. In contrast, spiral conveyors are able to transport material loads in a continuous flow. A helical spiral or screw rotates within a sealed tube and the speed makes the product in the conveyor rotate with the screw. The tumbling effect provides a homogeneous mix of particles in the conveyor, which is essential when feeding pre-mixed ingredients and maintaining mixing integrity. Industries that require a higher output of materials - food and beverage, retail case packaging, pharmaceuticals - typically incorporate these conveyors into their systems over standard vertical conveyors due to their ability to facilitate high throughput. Most spiral conveyors also have a lower angle of incline or decline (11 degrees or less) to prevent sliding and tumbling during operation. Vertical Vertical conveyors, also commonly referred to as freight lifts and material lifts, are conveyor systems used to raise or lower materials to different levels of a facility during the handling process. Examples of these conveyors applied in the industrial assembly process include transporting materials to different floors. While similar in look to freight elevators, vertical conveyors are not equipped to transport people, only materials. Vertical lift conveyors contain two adjacent, parallel conveyors for simultaneous upward movement of adjacent surfaces of the parallel conveyors. One of the conveyors normally has spaced apart flights (pans) for transporting bulk food items. The dual conveyors rotate in opposite directions, but are operated from one gear box to ensure equal belt speed. One of the conveyors is pivotally hinged to the other conveyor for swinging the attached conveyor away from the remaining conveyor for access to the facing surfaces of the parallel conveyors. Vertical lift conveyors can be manually or automatically loaded and controlled. Almost all vertical conveyors can be systematically integrated with horizontal conveyors, since both of these conveyor systems work in tandem to create a cohesive material handling assembly line. Like spiral conveyors, vertical conveyors that use forks can transport material loads in a continuous flow. With these forks the load can be taken from one horizontal conveyor and put down on another horizontal conveyor on a different level. By adding more forks, more products can be lifted at the same time. Conventional vertical conveyors must have input and output of material loads moving in the same direction. By using forks many combinations of different input- and output- levels in different directions are possible. A vertical conveyor with forks can even be used as a vertical sorter. 
Compared to a spiral conveyor, a vertical conveyor - with or without forks - takes up less space. Vertical reciprocating conveyors (or VRC) are another type of unit handling system. Typical applications include moving unit loads between floor levels, working with multiple accumulation conveyors, and interfacing with overhead conveyor lines. Common material to be conveyed includes pallets, sacks, custom fixtures or product racks and more. Motorized Drive Roller (MDR) Motorized drive roller (MDR) conveyors utilize drive rollers that have a brushless DC (BLDC) motor embedded within a conveyor roller tube. A single motorized roller tube is then mechanically linked to a small number of non-powered rollers to create a controllable zone of powered conveyor. A linear collection of these individually powered zones is arranged end to end to form a line of contiguous conveyor. The mechanical performance (torque, speed, efficiency, etc.) of drive rollers equipped with BLDC motors is in the range needed for roller conveyor zones that convey general-use carton boxes of the size and weight seen in typical modern warehouse and distribution applications. A typical motorized roller conveyor zone can handle carton items weighing up to approximately 35 kg (75 lbs.). Heavy-duty roller Heavy-duty roller conveyors are used for moving items that weigh at least . This type of conveyor makes the handling of such heavy equipment/products easier and more time effective. Many of the heavy duty roller conveyors can move as fast as . Other types of heavy-duty roller conveyors are gravity roller conveyors, chain-driven live roller conveyors, pallet accumulation conveyors, multi-strand chain conveyors, and chain and roller transfers. Gravity roller conveyors are easy to use and are used in many different types of industries such as automotive and retail. Chain-driven live roller conveyors are used for single or bi-directional material handling. Large, heavy loads are moved by chain-driven live roller conveyors. Pallet accumulation conveyors are powered through a mechanical clutch. This is used instead of individually powered and controlled sections of conveyors. Multi-strand chain conveyors are used for double-pitch roller chains. Products that cannot be moved on traditional roller conveyors can be moved by a multi-strand chain conveyor. Chain and roller conveyors are short runs of two or more strands of double-pitch chain conveyors built into a chain-driven live roller conveyor. These pop up under the load and move the load off of the conveyor. Walking Beam A walking beam conveyor usually consists of two fluid power cylinders, or it can use a motor-driven cam. For the cylinder-driven fluid power type, one axis is for vertical motion and the other for horizontal. Both cam and fluid power types require nests at each station to retain the part that is being moved. The beam is raised, raising the part from its station nest and holding the part in a nest on the walking beam, then moved horizontally, transporting the part to the next nest, then lowered vertically, placing the part in the next station's nest. The beam is then returned to its home position while it is in the lowered position out of the way of the parts. This type of conveying system is useful for parts that need to be accurately physically located or for relatively heavy parts. All stations are equidistant and require a nest to retain the part. 
See also Belt conveyor Chain conveyor Checkweigher Manufacturing Material handling Moving bed heat exchanger Moving walkway Treadmill References M. Marcu, "Pipeline Conveyors (theory, photos, state of the art 1990 - pneumatic pipeline conveyors with wheeled containers)", p. 45 in Material Handling in Pyrometallurgy: Proceedings of the International Symposium on Materials Handling in Pyrometallurgy, Hamilton, Ontario, August 26–30, 1990, Pergamon Press. External links The importance of conveyor and sortation systems for controlled growth (2:03 min. video) Freight transport Packaging machinery Material-handling equipment
Conveyor system
[ "Engineering" ]
2,816
[ "Packaging machinery", "Industrial machinery" ]
13,221,438
https://en.wikipedia.org/wiki/Stephen%20Parke
Stephen Parke is a New Zealand-American theoretical physicist. He is a distinguished scientist and former head (2010–2015) of the Theoretical Physics Department at the Fermi National Accelerator Laboratory in Batavia, Illinois. Born in Gisborne, New Zealand, Parke attended Campion College, Gisborne and St Peter's College, Auckland. He did his undergraduate studies in mathematics and physics at the University of Auckland in New Zealand, where his mentor was Dan Walls. He obtained a Fulbright Travel Grant and was awarded a Frank Knox Memorial Fellowship to attend graduate school at Harvard University. He was a graduate student of Sidney Coleman, obtaining a PhD in theoretical particle physics in 1980. He held a postdoctoral fellowship at the Stanford Linear Accelerator Center (1980–1983), collaborating with Sidney Drell, before moving to the Fermi National Accelerator Laboratory as an Associate Scientist. He became an APS fellow in 1996, and in 2018 he was awarded a Doctorate of Science from the University of Auckland for his work on "Amplitudes in Gauge Theories". Parke's Erdős number is 3, having written papers with both Sidney Coleman and mathematician Terence Tao. Contributions to physics He is an originator of Parke–Taylor amplitudes, which he developed with his colleague Tomasz Taylor. Parke–Taylor amplitudes represent a new approach to computing scattering amplitudes in quantum chromodynamics using symmetry methods such as supersymmetry. This work was further extended in collaboration with Michelangelo Mangano and Xu Zhan. The discovery of the Parke–Taylor amplitudes ignited the amplitude revolution: a major advance in the understanding and calculability of scattering amplitudes in gauge theories, the foundation of particle physics. This advance is discussed in detail in Chapter 11 of Graham Farmelo's book "The Universe Speaks in Numbers". With collaborator Gregory Mahon and others, he pioneered the study of spin correlations in top quark pair production at hadron colliders, work that has led to the observation of quantum entanglement at the highest energies to date by ATLAS and CMS. Parke is also an expert on neutrino physics. He gave the first analytical solution to the MSW effect including the non-adiabatic region, and has made important contributions to the physics of long-baseline neutrino oscillation experiments (T2K, NOvA, Hyper-Kamiokande and DUNE) as well as the reactor experiments RENO, Daya Bay and JUNO. He has also written papers on magnetic monopoles and the decay of the false vacuum in curved spacetime. Personal life Parke's father was the orthopedic surgeon William Parke and his mother was Muriel Parke (née Stephens), a school teacher. His parents, both born in Liverpool, emigrated from the UK to New Zealand in 1949 to help with the polio epidemic then raging there. Parke is a nephew of marine botanist Mary Parke. Parke is married to Winifred Haun, a MacArthur Foundation and 3Arts award-winning choreographer and the artistic director of the contemporary dance company Winifred Haun & Dancers, one of the more diverse and innovative dance companies in Chicago. They have three daughters: Athena, Iris and Selene. See also List of alumni of St Peter's College, Auckland for more biographical details References External links Parke's scientific publications are available on the INSPIRE-HEP Literature Database.
HEPNames profile: Stephen Parke Stephen Parke at Fermilab Theoretical Physics Department Particle physicists Neutrino physicists American theoretical physicists Fellows of the American Physical Society 20th-century American physicists 21st-century American physicists Harvard University alumni American people of New Zealand descent New Zealand scientists New Zealand physicists University of Auckland alumni People from Gisborne, New Zealand Living people People educated at St Peter's College, Auckland Theoretical physicists People associated with Fermilab People educated at Campion College, Gisborne Year of birth missing (living people)
Stephen Parke
[ "Physics" ]
846
[ "Theoretical physics", "Theoretical physicists", "Particle physics", "Particle physicists" ]
13,222,289
https://en.wikipedia.org/wiki/Roman%20concrete
Roman concrete, also called , was used in construction in ancient Rome. Like its modern equivalent, Roman concrete was based on a hydraulic-setting cement added to an aggregate. Many buildings and structures still standing today, such as bridges, reservoirs and aqueducts, were built with this material, which attests to both its versatility and its durability. Its strength was sometimes enhanced by the incorporation of pozzolanic ash where available (particularly in the Bay of Naples). The addition of ash prevented cracks from spreading. Recent research has shown that the incorporation of mixtures of different types of lime, forming conglomerate "clasts" allowed the concrete to self-repair cracks. Roman concrete was in widespread use from about 150 BC; some scholars believe it was developed a century before that. It was often used in combination with facings and other supports, and interiors were further decorated by stucco, fresco paintings, or coloured marble. Further innovative developments in the material, part of the so-called concrete revolution, contributed to structurally complicated forms. The most prominent example of these is the Pantheon dome, the world's largest and oldest unreinforced concrete dome. Roman concrete differs from modern concrete in that the aggregates often included larger components; hence, it was laid rather than poured. Roman concretes, like any hydraulic concrete, were usually able to set underwater, which was useful for bridges and other waterside construction. Historic references Vitruvius, writing around 25 BC in his Ten Books on Architecture, distinguished types of materials appropriate for the preparation of lime mortars. For structural mortars, he recommended pozzolana ( in Latin), the volcanic sand from the beds of Pozzuoli, which are brownish-yellow-gray in colour in that area around Naples, and reddish-brown near Rome. Vitruvius specifies a ratio of 1 part lime to 3 parts pozzolana for mortar used in buildings and a 1:2 ratio for underwater work. The Romans first used hydraulic concrete in coastal underwater structures, probably in the harbours around Baiae before the end of the 2nd century BC. The harbour of Caesarea is an example (22-15 BC) of the use of underwater Roman concrete technology on a large scale, for which enormous quantities of pozzolana were imported from Puteoli. For rebuilding Rome after the fire in 64 AD which destroyed large portions of the city, Nero's new building code largely called for brick-faced concrete. This appears to have encouraged the development of the brick and concrete industries. Material properties Roman concrete, like any concrete, consists of an aggregate and hydraulic mortar, a binder mixed with water that hardens over time. The composition of the aggregate varied, and included pieces of rock, ceramic tile, lime clasts, and brick rubble from the remains of previously demolished buildings. In Rome, readily available tuff was often used as an aggregate. Gypsum and quicklime were used as binders. Volcanic dusts, called pozzolana or "pit sand", were favoured where they could be obtained. Pozzolana makes the concrete more resistant to salt water than modern-day concrete. Pozzolanic mortar had a high content of alumina and silica. Research in 2023 found that lime clasts, previously considered a sign of poor aggregation technique, react with water seeping into any cracks. This produces reactive calcium, which allows new calcium carbonate crystals to form and reseal the cracks. 
These lime clasts have a brittle structure that was most likely created in a "hot-mixing" technique with quicklime rather than traditional slaked lime, causing cracks to preferentially move through the lime clasts, thus potentially playing a critical role in the self-healing mechanism. Concrete, and in particular the hydraulic mortar responsible for its cohesion, was a type of structural ceramic whose utility derived largely from its rheological plasticity in the paste state. The setting and hardening of hydraulic cements derived from hydration of materials and the subsequent chemical and physical interaction of these hydration products. This differed from the setting of slaked lime mortars, the most common cements of the pre-Roman world. Once set, Roman concrete exhibited little plasticity, although it retained some resistance to tensile stresses. The setting of pozzolanic cements has much in common with the setting of their modern counterpart, Portland cement. The high silica composition of Roman pozzolana cements is very close to that of modern cement to which blast furnace slag, fly ash, or silica fume have been added. The strength and longevity of Roman 'marine' concrete are understood to benefit from a reaction of seawater with a mixture of volcanic ash and quicklime to create a rare crystal called tobermorite, which may resist fracturing. As seawater percolated within the tiny cracks in the Roman concrete, it reacted with phillipsite naturally found in the volcanic rock and created aluminous tobermorite crystals. The result is a candidate for "the most durable building material in human history". In contrast, modern concrete exposed to saltwater deteriorates within decades. The Roman concrete at the Tomb of Caecilia Metella is another variation, higher in potassium, that triggered changes that "reinforce interfacial zones and potentially contribute to improved mechanical performance". Seismic technology For an environment as prone to earthquakes as the Italian peninsula, interruptions and internal constructions within walls and domes created discontinuities in the concrete mass. Portions of the building could then shift slightly when there was movement of the earth to accommodate such stresses, enhancing the overall strength of the structure. It was in this sense that bricks and concrete were flexible. It may have been precisely for this reason that, although many buildings sustained serious cracking from a variety of causes, they continue to stand to this day. Another technology used to improve the strength and stability of concrete was its gradation in domes. One example is the Pantheon, where the aggregate of the upper dome region consists of alternating layers of light tuff and pumice, giving the concrete a density of . The foundation of the structure used travertine as an aggregate, having a much higher density of . Modern use Scientific studies of Roman concrete since 2010 have attracted both media and industry attention. Because of its unusual durability, longevity, and lessened environmental footprint, corporations and municipalities are starting to explore the use of Roman-style concrete in North America. This involves replacing the volcanic ash with coal fly ash that has similar properties. Proponents say that concrete made with fly ash can cost up to 60% less, because it requires less cement. It also has a reduced environmental footprint, due to its lower cooking temperature and much longer lifespan.
Usable examples of Roman concrete exposed to harsh marine environments have been found to be 2000 years old with little or no wear. In 2013, the University of California Berkeley published an article that described for the first time the mechanism by which the suprastable calcium-aluminium-silicate-hydrate compound binds the material together. During its production, less carbon dioxide is released into the atmosphere than any modern concrete production process. It is no coincidence that the walls of Roman buildings are thicker than those of modern buildings. However, Roman concrete was still gaining its strength for several decades after construction had been completed. See also Literature References External links Ancient Roman architecture Concrete Concrete buildings and structures Building materials Masonry Pavements Sculpture materials Ancient inventions Architecture in Italy Architectural history Ancient Roman construction techniques
Roman concrete
[ "Physics", "Engineering" ]
1,531
[ "Structural engineering", "Architectural history", "Masonry", "Building engineering", "Architecture", "Construction", "Materials", "Concrete", "Matter", "Building materials" ]
966,654
https://en.wikipedia.org/wiki/DNA-binding%20protein
DNA-binding proteins are proteins that have DNA-binding domains and thus have a specific or general affinity for single- or double-stranded DNA. Sequence-specific DNA-binding proteins generally interact with the major groove of B-DNA, because it exposes more functional groups that identify a base pair. Examples DNA-binding proteins include transcription factors which modulate the process of transcription, various polymerases, nucleases which cleave DNA molecules, and histones which are involved in chromosome packaging and transcription in the cell nucleus. DNA-binding proteins can incorporate such domains as the zinc finger, the helix-turn-helix, and the leucine zipper (among many others) that facilitate binding to nucleic acid. There are also more unusual examples such as transcription activator like effectors. Non-specific DNA-protein interactions Structural proteins that bind DNA are well-understood examples of non-specific DNA-protein interactions. Within chromosomes, DNA is held in complexes with structural proteins. These proteins organize the DNA into a compact structure called chromatin. In eukaryotes, this structure involves DNA binding to a complex of small basic proteins called histones. In prokaryotes, multiple types of proteins are involved. The histones form a disk-shaped complex called a nucleosome, which contains two complete turns of double-stranded DNA wrapped around its surface. These non-specific interactions are formed through basic residues in the histones making ionic bonds to the acidic sugar-phosphate backbone of the DNA, and are therefore largely independent of the base sequence. Chemical modifications of these basic amino acid residues include methylation, phosphorylation and acetylation. These chemical changes alter the strength of the interaction between the DNA and the histones, making the DNA more or less accessible to transcription factors and changing the rate of transcription. Other non-specific DNA-binding proteins in chromatin include the high-mobility group (HMG) proteins, which bind to bent or distorted DNA. Biophysical studies show that these architectural HMG proteins bind, bend and loop DNA to perform its biological functions. These proteins are important in bending arrays of nucleosomes and arranging them into the larger structures that form chromosomes. Recently FK506 binding protein 25 (FBP25) was also shown to non-specifically bind to DNA which helps in DNA repair. Proteins that specifically bind single-stranded DNA A distinct group of DNA-binding proteins are the DNA-binding proteins that specifically bind single-stranded DNA. In humans, replication protein A is the best-understood member of this family and is used in processes where the double helix is separated, including DNA replication, recombination and DNA repair. These binding proteins seem to stabilize single-stranded DNA and protect it from forming stem-loops or being degraded by nucleases. Binding to specific DNA sequences In contrast, other proteins have evolved to bind to specific DNA sequences. The most intensively studied of these are the various transcription factors, which are proteins that regulate transcription. Each transcription factor binds to one specific set of DNA sequences and activates or inhibits the transcription of genes that have these sequences near their promoters. The transcription factors do this in two ways. 
Firstly, they can bind the RNA polymerase responsible for transcription, either directly or through other mediator proteins; this locates the polymerase at the promoter and allows it to begin transcription. Alternatively, transcription factors can bind enzymes that modify the histones at the promoter. This alters the accessibility of the DNA template to the polymerase. These DNA targets can occur throughout an organism's genome. Thus, changes in the activity of one type of transcription factor can affect thousands of genes. As a result, these proteins are often the targets of the signal transduction processes that control responses to environmental changes or cellular differentiation and development. The specificity of these transcription factors' interactions with DNA comes from the proteins making multiple contacts to the edges of the DNA bases, allowing them to read the DNA sequence. Most of these base-interactions are made in the major groove, where the bases are most accessible. Mathematical descriptions of protein-DNA binding that take into account sequence-specificity, and competitive and cooperative binding of proteins of different types, are usually performed with the help of lattice models. Computational methods to identify the DNA binding sequence specificity have been proposed to make good use of the abundant sequence data in the post-genomic era. In addition, progress has been made on structure-based prediction of binding specificity across protein families using deep learning. Protein–DNA interactions Protein–DNA interactions occur when a protein binds a molecule of DNA, often to regulate the biological function of DNA, usually the expression of a gene. Among the proteins that bind to DNA are transcription factors that activate or repress gene expression by binding to DNA motifs and histones that form part of the structure of DNA and bind to it less specifically. Proteins that repair DNA, such as uracil-DNA glycosylase, also interact closely with it. In general, proteins bind to DNA in the major groove; however, there are exceptions. Protein–DNA interactions are mainly of two types: specific and non-specific. Recent single-molecule experiments showed that DNA binding proteins undergo rapid rebinding in order to bind in the correct orientation for recognizing the target site. Design Designing DNA-binding proteins that have a specified DNA-binding site has been an important goal for biotechnology. Zinc finger proteins have been designed to bind to specific DNA sequences, and this is the basis of zinc finger nucleases. Recently, transcription activator-like effector nucleases (TALENs) have been created, which are based on natural proteins secreted by Xanthomonas bacteria via their type III secretion system when they infect various plant species. Detection methods There are many in vitro and in vivo techniques which are useful in detecting DNA–protein interactions. The following lists some methods currently in use: Electrophoretic mobility shift assay (EMSA) is a widespread qualitative technique to study protein–DNA interactions of known DNA binding proteins. DNA-Protein-Interaction Enzyme-Linked ImmunoSorbent Assay (DPI-ELISA) allows the qualitative and quantitative analysis of DNA-binding preferences of known proteins in vitro. This technique allows the analysis of protein complexes that bind to DNA (DPI-Recruitment-ELISA), and is suited for automated screening of several nucleotide probes due to its standard ELISA plate format.
DNase footprinting assay can be used to identify the specific sites of binding of a protein to DNA at basepair resolution. Chromatin immunoprecipitation is used to identify the in vivo DNA target regions of a known transcription factor. This technique when combined with high throughput sequencing is known as ChIP-Seq and when combined with microarrays it is known as ChIP-chip. Yeast one-hybrid System (Y1H) is used to identify which protein binds to a particular DNA fragment. Bacterial one-hybrid system (B1H) is used to identify which protein binds to a particular DNA fragment. Structure determination using X-ray crystallography has been used to give a highly detailed atomic view of protein–DNA interactions. Besides these methods, other techniques such as SELEX, PBM (protein binding microarrays), DNA microarray screens, DamID, FAIRE or more recently DAP-seq are used in the laboratory to investigate DNA-protein interaction in vivo and in vitro. Manipulating the interactions The protein–DNA interactions can be modulated using stimuli like ionic strength of the buffer, macromolecular crowding, temperature, pH and electric field. This can lead to reversible dissociation/association of the protein–DNA complex. See also bZIP domain ChIP-exo Comparison of nucleic acid simulation software DNA-binding domain Helix-loop-helix Helix-turn-helix HMG-box Leucine zipper Lexitropsin (a semi-synthetic DNA-binding ligand) Deoxyribonucleoprotein Protein–DNA interaction site prediction software RNA-binding protein Single-strand binding protein Zinc finger References External links Protein-DNA binding: data, tools & models (annotated list, constantly updated) Abalone tool for modeling DNA-ligand interactions. DBD database of predicted transcription factors Uses a curated set of DNA-binding domains to predict transcription factors in all completely sequenced genomes DNA-binding proteins Molecular genetics DNA replication Transcription factors Biophysics
DNA-binding protein
[ "Physics", "Chemistry", "Biology" ]
1,747
[ "Genetics techniques", "Applied and interdisciplinary physics", "Gene expression", "Signal transduction", "DNA replication", "Molecular genetics", "Biophysics", "Induced stem cells", "Molecular biology", "Transcription factors" ]
966,794
https://en.wikipedia.org/wiki/Isoelectric%20focusing
Isoelectric focusing (IEF), also known as electrofocusing, is a technique for separating different molecules by differences in their isoelectric point (pI). It is a type of zone electrophoresis usually performed on proteins in a gel that takes advantage of the fact that overall charge on the molecule of interest is a function of the pH of its surroundings. Procedure IEF involves adding an ampholyte solution into immobilized pH gradient (IPG) gels. IPGs are the acrylamide gel matrix co-polymerized with the pH gradient, which result in completely stable gradients except the most alkaline (>12) pH values. The immobilized pH gradient is obtained by the continuous change in the ratio of immobilines. An immobiline is a weak acid or base defined by its pK value. A protein that is in a pH region below its isoelectric point (pI) will be positively charged and so will migrate toward the cathode (negatively charged electrode). As it migrates through a gradient of increasing pH, however, the protein's overall charge will decrease until the protein reaches the pH region that corresponds to its pI. At this point it has no net charge and so migration ceases (as there is no electrical attraction toward either electrode). As a result, the proteins become focused into sharp stationary bands with each protein positioned at a point in the pH gradient corresponding to its pI. The technique is capable of extremely high resolution with proteins differing by a single charge being fractionated into separate bands. Molecules to be focused are distributed over a medium that has a pH gradient (usually created by aliphatic ampholytes). An electric current is passed through the medium, creating a "positive" anode and "negative" cathode end. Negatively charged molecules migrate through the pH gradient in the medium toward the "positive" end while positively charged molecules move toward the "negative" end. As a particle moves toward the pole opposite of its charge it moves through the changing pH gradient until it reaches a point in which the pH of that molecule's isoelectric point is reached. At this point the molecule no longer has a net electric charge (due to the protonation or deprotonation of the associated functional groups) and as such will not proceed any further within the gel. The gradient is established before adding the particles of interest by first subjecting a solution of small molecules such as polyampholytes with varying pI values to electrophoresis. The method is applied particularly often in the study of proteins, which separate based on their relative content of acidic and basic residues, whose value is represented by the pI. Proteins are introduced into an immobilized pH gradient gel composed of polyacrylamide, starch, or agarose where a pH gradient has been established. Gels with large pores are usually used in this process to eliminate any "sieving" effects, or artifacts in the pI caused by differing migration rates for proteins of differing sizes. Isoelectric focusing can resolve proteins that differ in pI value by as little as 0.01. Isoelectric focusing is the first step in two-dimensional gel electrophoresis, in which proteins are first separated by their pI value and then further separated by molecular weight through SDS-PAGE. Isoelectric focusing, on the other hand, is the only step in preparative native PAGE at constant pH. 
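Because a protein stops migrating where its net charge falls to zero, its pI can be estimated from the Henderson-Hasselbalch equation. The sketch below is a minimal illustration and not part of the original article: the pKa table uses common textbook averages, and the example peptide is arbitrary; dedicated pI predictors use more refined pKa sets.

```python
# Estimate a peptide's net charge vs. pH and its isoelectric point (pI).
# pKa values are typical textbook averages (an assumption, not a standard).
PKA_POS = {"K": 10.5, "R": 12.5, "H": 6.0, "N_term": 9.0}            # protonated forms carry +1
PKA_NEG = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1, "C_term": 2.0}    # deprotonated forms carry -1

def net_charge(seq: str, pH: float) -> float:
    """Henderson-Hasselbalch estimate of the peptide's net charge at a given pH."""
    charge = 1.0 / (1.0 + 10 ** (pH - PKA_POS["N_term"]))    # free N-terminus
    charge -= 1.0 / (1.0 + 10 ** (PKA_NEG["C_term"] - pH))   # free C-terminus
    for aa in seq:
        if aa in PKA_POS:
            charge += 1.0 / (1.0 + 10 ** (pH - PKA_POS[aa]))
        elif aa in PKA_NEG:
            charge -= 1.0 / (1.0 + 10 ** (PKA_NEG[aa] - pH))
    return charge

def isoelectric_point(seq: str) -> float:
    """Find the pH at which the net charge crosses zero, by bisection."""
    lo, hi = 0.0, 14.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if net_charge(seq, mid) > 0:   # still positively charged: pI lies at higher pH
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    peptide = "ACDEFGHIKLMNPQRSTVWY"   # one of each residue, purely illustrative
    print(f"estimated pI = {isoelectric_point(peptide):.2f}")
```

In an IEF gel, this estimated pI corresponds to the slice of the immobilized pH gradient where the protein's electrophoretic driving force vanishes and the band focuses.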
Living cells According to some opinions, living eukaryotic cells perform isoelectric focusing of proteins in their interior to overcome a limitation of the rate of metabolic reaction by diffusion of enzymes and their reactants, and to regulate the rate of particular biochemical processes. By concentrating the enzymes of particular metabolic pathways into distinct and small regions of its interior, the cell can increase the rate of particular biochemical pathways by several orders of magnitude. By modification of the isoelectric point (pI) of molecules of an enzyme by, e.g., phosphorylation or dephosphorylation, the cell can transfer molecules of the enzyme between different parts of its interior, to switch on or switch off particular biochemical processes. Microfluidic chip based Microchip based electrophoresis is a promising alternative to capillary electrophoresis since it has the potential to provide rapid protein analysis, straightforward integration with other microfluidic unit operations, whole channel detection, nitrocellulose films, smaller sample sizes and lower fabrication costs. Multi-junction The increased demand for faster and easy-to-use protein separation tools has accelerated the evolution of IEF towards in-solution separations. In this context, a multi-junction IEF system was developed to perform fast and gel-free IEF separations. The multi-junction IEF system utilizes a series of vessels with a capillary passing through each vessel. Part of the capillary in each vessel is replaced by a semipermeable membrane. The vessels contain buffer solutions with different pH values, so that a pH gradient is effectively established inside the capillary. The buffer solution in each vessel has an electrical contact with a voltage divider connected to a high-voltage power supply, which establishes an electrical field along the capillary. When a sample (a mixture of peptides or proteins) is injected in the capillary, the presence of the electrical field and the pH gradient separates these molecules according to their isoelectric points. The multi-junction IEF system has been used to separate tryptic peptide mixtures for two-dimensional proteomics and blood plasma proteins from Alzheimer's disease patients for biomarker discovery. References Electrophoresis Industrial processes Protein methods Molecular biology techniques
Isoelectric focusing
[ "Chemistry", "Biology" ]
1,180
[ "Biochemistry methods", "Instrumental analysis", "Protein methods", "Protein biochemistry", "Biochemical separation processes", "Molecular biology techniques", "Molecular biology", "Electrophoresis" ]
966,859
https://en.wikipedia.org/wiki/Total%20Ozone%20Mapping%20Spectrometer
The Total Ozone Mapping Spectrometer (TOMS) was a NASA satellite instrument, specifically a spectrometer, for measuring the ozone layer. Of the five TOMS instruments which were built, four entered successful orbit. The satellites carrying TOMS instruments were: Nimbus 7; launched October 24, 1978. Operated until 1 August 1994. Carried TOMS instrument number 1. Meteor-3-5; launched 15 August 1991. Operated until December 1994. It was the first and last Soviet satellite to carry a US-made instrument. Carried TOMS instrument number 2. ADEOS I; launched 17 August 1996. Operated until 30 June 1997. The mission was cut short by a spacecraft failure. TOMS-Earth Probe; launched on July 2, 1996. Operated until 2 December 2006. Carried TOMS instrument number 3. QuikTOMS; launched 21 September 2001. Suffered a launch failure and did not enter orbit. Nimbus 7 and Meteor-3-5 provided global measurements of total column ozone on a daily basis and together provided a complete data set of daily ozone from November 1978 to December 1994. After an eighteen-month period when the program had no on-orbit capability, TOMS-Earth Probe launched on 2 July 1996, followed by ADEOS I. ADEOS I was launched on August 17, 1996, and the TOMS instrument on board provided data until the satellite which housed it lost power on June 30, 1997. TOMS-Earth Probe (Total Ozone Mapping Spectrometer - Earth Probe, TOMS-EP, originally just TOMS, COSPAR 1996-037A) was launched on July 2, 1996, from Vandenberg AFB by a Pegasus XL rocket. The satellite project was originally known as TOMS, back in 1989 when it was selected as a SMEX mission in the Explorer program. However, it found no funding as an Explorer mission and was transferred to NASA's Earth Probe program, gaining funding and becoming TOMS-EP. The small, 295 kg satellite was built for NASA by TRW; the single instrument was the TOMS 3 spectrometer. The satellite had a two-year planned life. TOMS-EP suffered a two-year delay to its launch due to launch failures of the first two Pegasus XL rockets. The launch delays led to alterations to the mission; the satellite was placed in a lower than originally planned orbit to achieve higher resolution and to enable more thorough study of UV-absorbing aerosols in the troposphere. The lower orbit was meant to complement measurements from ADEOS I, enabling TOMS-EP to provide supplemental measurements. After ADEOS I failed in orbit, TOMS-EP was boosted to a higher orbit to replace ADEOS I. The transmitter for TOMS-Earth Probe failed on December 2, 2006. The only total failure in the series was QuikTOMS, which was launched on September 21, 2001, on a Taurus rocket from Vandenberg AFB, but did not achieve orbit. Since January 1, 2006, data from the Aura Ozone Monitoring Instrument (OMI) has replaced data from TOMS-Earth Probe. The Ozone Mapping and Profiler Suite instruments on Suomi NPP and NOAA-20 have further continued the data record.
Total Ozone Mapping Spectrometer
[ "Physics", "Chemistry", "Technology", "Engineering" ]
690
[ "Spectrum (physical sciences)", "Scientific instruments", "Measuring instruments", "Spectrometers", "Spectroscopy" ]
967,164
https://en.wikipedia.org/wiki/Edman%20degradation
Edman degradation, developed by Pehr Edman, is a method of sequencing amino acids in a peptide. In this method, the amino-terminal residue is labeled and cleaved from the peptide without disrupting the peptide bonds between other amino acid residues. Mechanism Phenyl isothiocyanate is reacted with an uncharged N-terminal amino group, under mildly alkaline conditions, to form a cyclical phenylthiocarbamoyl derivative. Then, under acidic conditions, this derivative of the terminal amino acid is cleaved as a thiazolinone derivative. The thiazolinone amino acid is then selectively extracted into an organic solvent and treated with acid to form the more stable phenylthiohydantoin (PTH)-amino acid derivative, which can be identified by using chromatography or electrophoresis. This procedure can then be repeated to identify the next amino acid. A major drawback of this technique is that the peptides being sequenced in this manner cannot have more than 50 to 60 residues (and in practice, under 30). The peptide length is limited because the cyclical derivatization does not always go to completion, and the losses compound: at 99% efficiency per cycle, for example, only about 74% of the chains remain in phase after 30 cycles. The derivatization problem can be resolved by cleaving large peptides into smaller peptides before proceeding with the reaction. Modern machines can accurately sequence up to about 30 amino acids, with over 99% efficiency per residue. An advantage of the Edman degradation is that it only uses 10 to 100 picomoles of peptide for the sequencing process. The Edman degradation reaction was automated in 1967 by Edman and Beggs to speed up the process, and 100 automated devices were in use worldwide by 1973. Limitations Because the Edman degradation proceeds from the N-terminus of the protein, it will not work if the N-terminus has been chemically modified (e.g. by acetylation or formation of pyroglutamic acid). Sequencing will stop if a non-α-amino acid is encountered (e.g. isoaspartic acid), since the favored five-membered ring intermediate is unable to be formed. Edman degradation is generally not useful for determining the positions of disulfide bridges. It also requires peptide amounts of 1 picomole or above for discernible results. Coupled analysis Following 2D SDS-PAGE, the proteins can be transferred to a polyvinylidene difluoride (PVDF) blotting membrane for further analysis. Edman degradations can be performed directly from a PVDF membrane. N-terminal sequencing of as few as five to ten amino acids is often sufficient to identify a protein of interest (POI). See also Bergmann degradation Dansyl chloride References Molecular biology Organic reactions Protein structure Proteomics Name reactions Chemical tests Degradation reactions
Edman degradation
[ "Chemistry", "Biology" ]
583
[ "Chemical tests", "Organic reactions", "Name reactions", "Structural biology", "Molecular biology", "Biochemistry", "Degradation reactions", "Protein structure" ]
967,315
https://en.wikipedia.org/wiki/Temporal%20paradox
A temporal paradox, time paradox, or time travel paradox, is a paradox (an apparent or logical contradiction) associated with the idea of time travel or other foreknowledge of the future. While the notion of time travel to the future complies with the current understanding of physics via relativistic time dilation, temporal paradoxes arise from circumstances involving hypothetical time travel to the past – and are often used to demonstrate its impossibility. Types Temporal paradoxes fall into three broad groups: bootstrap paradoxes, consistency paradoxes, and Newcomb's paradox. Bootstrap paradoxes violate causality by allowing future events to influence the past and cause themselves, or "bootstrapping", a term which derives from the idiom "pulling oneself up by one's bootstraps". Consistency paradoxes, on the other hand, are those where future events influence the past to cause an apparent contradiction, exemplified by the grandfather paradox, where a person travels to the past to prevent the conception of one of their ancestors, thus eliminating all the ancestor's descendants. Newcomb's paradox stems from the apparent contradictions that arise from assuming both free will and foreknowledge of future events. All of these are sometimes referred to individually as "causal loops." A "time loop" is sometimes described as a causal loop, but although they appear similar, causal loops are unchanging and self-originating, whereas time loops are constantly resetting. Bootstrap paradox A bootstrap paradox, also known as an information loop, an information paradox, an ontological paradox, or a "predestination paradox", is a paradox of time travel that occurs when any event, such as an action, information, an object, or a person, ultimately causes itself, as a consequence of either retrocausality or time travel. Backward time travel would allow information, people, or objects whose histories seem to "come from nowhere". Such causally looped events then exist in spacetime, but their origin cannot be determined. The notion of objects or information that are "self-existing" in this way is often viewed as paradoxical. A notable example occurs in the 1958 science fiction short story "—All You Zombies—", by Robert A. Heinlein, wherein the main character, an intersex individual, becomes both their own mother and father; it was adapted with great fidelity in the 2014 film Predestination. Allen Everett gives the movie Somewhere in Time as an example involving an object with no origin: an old woman gives a watch to a playwright who later travels back in time and meets the same woman when she was young, and gives her the same watch that she will later give to him. An example of information which "came from nowhere" is in the movie Star Trek IV: The Voyage Home, in which a 23rd-century engineer travels back in time, and gives the formula for transparent aluminum to the 20th-century engineer who supposedly invented it. Predestination paradox Smeenk uses the term "predestination paradox" to refer specifically to situations in which a time traveler goes back in time to try to prevent some event in the past. The "predestination paradox" is a concept in time travel and temporal mechanics, often explored in science fiction. It occurs when a future event is the cause of a past event, which in turn becomes the cause of the future event, forming a self-sustaining loop in time. This paradox challenges conventional understandings of cause and effect, as the events involved are both the origin and the result of each other.
A notable example is found in the TV series Doctor Who, where a character saves her father in the past, fulfilling a memory he had shared with her as a child about a strange woman having saved his life. The predestination paradox raises philosophical questions about free will, determinism, and the nature of time itself. It is commonly used as a narrative device in fiction to highlight the interconnectedness of events and the inevitability of certain outcomes. Consistency paradox The consistency paradox or grandfather paradox occurs when the past is changed in any way that directly negates the conditions required for the time travel to occur in the first place, thus creating a contradiction. A common example given is traveling to the past and intervening with the conception of one's ancestors (such as causing the death of the parent beforehand), thus affecting the conception of oneself. If the time traveler were not born, then it would not be possible for the traveler to undertake such an act in the first place. Therefore, the ancestor lives to conceive the time-traveler's next-generation ancestor, and eventually the time traveler. There is thus no predicted outcome to this. Consistency paradoxes occur whenever changing the past is possible. A possible resolution is that a time traveller can do anything that did happen, but cannot do anything that did not happen. Doing something that did not happen results in a contradiction. This is referred to as the Novikov self-consistency principle. Variants The grandfather paradox encompasses any change to the past, and it is presented in many variations, including killing one's past self. Both the "retro-suicide paradox" and the "grandfather paradox" appeared in letters written into Amazing Stories in the 1920s. Another variant of the grandfather paradox is the "Hitler paradox" or "Hitler's murder paradox", in which the protagonist travels back in time to murder Adolf Hitler before he can instigate World War II and the Holocaust. Rather than necessarily physically preventing time travel, the action removes any reason for the travel, along with any knowledge that the reason ever existed. Physicist John Garrison et al. give a variation of the paradox of an electronic circuit that sends a signal through a time machine to shut itself off, and receives the signal before it sends it. Newcomb's paradox Newcomb's paradox is a thought experiment showing an apparent contradiction between the expected utility principle and the strategic dominance principle. The thought experiment is often extended to explore causality and free will by allowing for "perfect predictors": if perfect predictors of the future exist, for example if time travel exists as a mechanism for making perfect predictions, then perfect predictions appear to contradict free will because decisions apparently made with free will are already known to the perfect predictor. Predestination does not necessarily involve a supernatural power, and could be the result of other "infallible foreknowledge" mechanisms. Problems arising from infallibility and influencing the future are explored in Newcomb's paradox. Proposed resolutions Logical impossibility Even without knowing whether time travel to the past is physically possible, it is possible to show using modal logic that changing the past results in a logical contradiction. If it is necessarily true that the past happened in a certain way, then it is false and impossible for the past to have occurred in any other way. 
A time traveler would not be able to change the past from the way it is, but would only act in a way that is already consistent with what necessarily happened. Consideration of the grandfather paradox has led some to the idea that time travel is by its very nature paradoxical and therefore logically impossible. For example, the philosopher Bradley Dowden made this sort of argument in the textbook Logical Reasoning, arguing that the possibility of creating a contradiction rules out time travel to the past entirely. However, some philosophers and scientists believe that time travel into the past need not be logically impossible provided that there is no possibility of changing the past, as suggested, for example, by the Novikov self-consistency principle. Dowden revised his view after being convinced of this in an exchange with the philosopher Norman Swartz. Illusory time Consideration of the possibility of backward time travel in a hypothetical universe described by a Gödel metric led famed logician Kurt Gödel to assert that time might itself be a sort of illusion. He suggests something along the lines of the block time view, in which time is just another dimension like space, with all events at all times being fixed within this four-dimensional "block". Physical impossibility Sergey Krasnikov writes that these bootstrap paradoxes – information or an object looping through time – are the same; the primary apparent paradox is a physical system evolving into a state in a way that is not governed by its laws. He does not find these paradoxical and attributes problems regarding the validity of time travel to other factors in the interpretation of general relativity. Self-sufficient loops A 1992 paper by physicists Andrei Lossev and Igor Novikov labeled such items without origin as Jinn, with the singular term Jinnee. This terminology was inspired by the Jinn of the Quran, which are described as leaving no trace when they disappear. Lossev and Novikov allowed the term "Jinn" to cover both objects and information with the reflexive origin; they called the former "Jinn of the first kind", and the latter "Jinn of the second kind". They point out that an object making circular passage through time must be identical whenever it is brought back to the past, otherwise it would create an inconsistency; the second law of thermodynamics seems to require that the object tends to a lower energy state throughout its history, and such objects that are identical in repeating points in their history seem to contradict this, but Lossev and Novikov argued that since the second law only requires entropy to increase in closed systems, a Jinnee could interact with its environment in such a way as to regain "lost" entropy. They emphasize that there is no "strict difference" between Jinn of the first and second kind. Krasnikov equivocates between "Jinn", "self-sufficient loops", and "self-existing objects", calling them "lions" or "looping or intruding objects", and asserts that they are no less physical than conventional objects, "which, after all, also could appear only from either infinity or a singularity." Novikov self-consistency principle The self-consistency principle developed by Igor Dmitriyevich Novikov expresses one view as to how backward time travel would be possible without the generation of paradoxes. 
According to this hypothesis, even though general relativity permits some exact solutions that allow for time travel that contain closed timelike curves that lead back to the same point in spacetime, physics in or near closed timelike curves (time machines) can only be consistent with the universal laws of physics, and thus only self-consistent events can occur. Anything a time traveler does in the past must have been part of history all along, and the time traveler can never do anything to prevent the trip back in time from happening, since this would represent an inconsistency. The authors concluded that time travel need not lead to unresolvable paradoxes, regardless of what type of object was sent to the past. Physicist Joseph Polchinski considered a potentially paradoxical situation involving a billiard ball that is fired into a wormhole at just the right angle such that it will be sent back in time and collides with its earlier self, knocking it off course, which would stop it from entering the wormhole in the first place. Kip Thorne referred to this problem as "Polchinski's paradox". Thorne and two of his students at Caltech, Fernando Echeverria and Gunnar Klinkhammer, went on to find a solution that avoided any inconsistencies, and found that there was more than one self-consistent solution, with slightly different angles for the glancing blow in each case. Later analysis by Thorne and Robert Forward showed that for certain initial trajectories of the billiard ball, there could be an infinite number of self-consistent solutions. It is plausible that there exist self-consistent extensions for every possible initial trajectory, although this has not been proven. The lack of constraints on initial conditions only applies to spacetime outside of the chronology-violating region of spacetime; the constraints on the chronology-violating region might prove to be paradoxical, but this is not yet known. Novikov's views are not widely accepted. Visser views causal loops and Novikov's self-consistency principle as an ad hoc solution, and supposes that there are far more damaging implications of time travel. Krasnikov similarly finds no inherent fault in causal loops but finds other problems with time travel in general relativity. Another conjecture, the cosmic censorship hypothesis, suggests that every closed timelike curve passes through an event horizon, which prevents such causal loops from being observed. Parallel universes The interacting-multiple-universes approach is a variation of the many-worlds interpretation of quantum mechanics that involves time travelers arriving in a different universe than the one from which they came; it has been argued that, since travelers arrive in a different universe's history and not their history, this is not "genuine" time travel. Stephen Hawking has argued for the chronology protection conjecture, that even if the MWI is correct, we should expect each time traveler to experience a single self-consistent history so that time travelers remain within their world rather than traveling to a different one. David Deutsch has proposed that quantum computation with a negative delay—backward time travel—produces only self-consistent solutions, and the chronology-violating region imposes constraints that are not apparent through classical reasoning. 
However Deutsch's self-consistency condition has been demonstrated as capable of being fulfilled to arbitrary precision by any system subject to the laws of classical statistical mechanics, even if it is not built up by quantum systems. Allen Everett has also argued that even if Deutsch's approach is correct, it would imply that any macroscopic object composed of multiple particles would be split apart when traveling back in time, with different particles emerging in different worlds. See also Quantum mechanics of time travel Fermi paradox Cosmic censorship hypothesis Retrocausality Wormhole Causality Causal structure Chronology protection conjecture Münchhausen trilemma Time loop Time travel in fiction Time travel References Causality Physical paradoxes Thought experiments in physics Time travel
Temporal paradox
[ "Physics" ]
2,873
[ "Spacetime", "Physical quantities", "Time", "Time travel" ]
967,440
https://en.wikipedia.org/wiki/Riesz%E2%80%93Thorin%20theorem
In mathematics, the Riesz–Thorin theorem, often referred to as the Riesz–Thorin interpolation theorem or the Riesz–Thorin convexity theorem, is a result about interpolation of operators. It is named after Marcel Riesz and his student G. Olof Thorin. This theorem bounds the norms of linear maps acting between spaces. Its usefulness stems from the fact that some of these spaces have rather simpler structure than others. Usually that refers to which is a Hilbert space, or to and . Therefore one may prove theorems about the more complicated cases by proving them in two simple cases and then using the Riesz–Thorin theorem to pass from the simple cases to the complicated cases. The Marcinkiewicz theorem is similar but applies also to a class of non-linear maps. Motivation First we need the following definition: Definition. Let be two numbers such that . Then for define by: . By splitting up the function in as the product and applying Hölder's inequality to its power, we obtain the following result, foundational in the study of -spaces: This result, whose name derives from the convexity of the map on , implies that . On the other hand, if we take the layer-cake decomposition , then we see that and , whence we obtain the following result: In particular, the above result implies that is included in , the sumset of and in the space of all measurable functions. Therefore, we have the following chain of inclusions: In practice, we often encounter operators defined on the sumset . For example, the Riemann–Lebesgue lemma shows that the Fourier transform maps boundedly into , and Plancherel's theorem shows that the Fourier transform maps boundedly into itself, hence the Fourier transform extends to by setting for all and . It is therefore natural to investigate the behavior of such operators on the intermediate subspaces . To this end, we go back to our example and note that the Fourier transform on the sumset was obtained by taking the sum of two instantiations of the same operator, namely These really are the same operator, in the sense that they agree on the subspace . Since the intersection contains simple functions, it is dense in both and . Densely defined continuous operators admit unique extensions, and so we are justified in considering and to be the same. Therefore, the problem of studying operators on the sumset essentially reduces to the study of operators that map two natural domain spaces, and , boundedly to two target spaces: and , respectively. Since such operators map the sumset space to , it is natural to expect that these operators map the intermediate space to the corresponding intermediate space . Statement of the theorem There are several ways to state the Riesz–Thorin interpolation theorem; to be consistent with the notations in the previous section, we shall use the sumset formulation. In other words, if is simultaneously of type and of type , then is of type for all . In this manner, the interpolation theorem lends itself to a pictorial description. Indeed, the Riesz diagram of is the collection of all points in the unit square such that is of type . The interpolation theorem states that the Riesz diagram of is a convex set: given two points in the Riesz diagram, the line segment that connects them will also be in the diagram. The interpolation theorem was originally stated and proved by Marcel Riesz in 1927. The 1927 paper establishes the theorem only for the lower triangle of the Riesz diagram, viz., with the restriction that and . 
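The inline formulas in this record appear to have been stripped during extraction, so the definitions and the statement above read with gaps. As a reference point, the display below restates the interpolated exponents and the theorem in their standard textbook form; the notation is an assumption and may differ from what the original used.

\[
\frac{1}{p_\theta} = \frac{1-\theta}{p_0} + \frac{\theta}{p_1},
\qquad
\frac{1}{q_\theta} = \frac{1-\theta}{q_0} + \frac{\theta}{q_1},
\qquad 0 < \theta < 1,
\]
with the interpolation (convexity) inequality \(\|f\|_{p_\theta} \le \|f\|_{p_0}^{1-\theta}\,\|f\|_{p_1}^{\theta}\) and the chain of inclusions \(L^{p_0}\cap L^{p_1} \subset L^{p_\theta} \subset L^{p_0}+L^{p_1}\). In this notation the theorem reads: if \(T\) is a linear operator between \(\sigma\)-finite measure spaces satisfying
\[
\|Tf\|_{q_0} \le M_0\,\|f\|_{p_0}
\quad\text{and}\quad
\|Tf\|_{q_1} \le M_1\,\|f\|_{p_1},
\]
then for every \(0 < \theta < 1\),
\[
\|Tf\|_{q_\theta} \le M_0^{1-\theta} M_1^{\theta}\,\|f\|_{p_\theta}.
\]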
Olof Thorin extended the interpolation theorem to the entire square, removing the lower-triangle restriction. The proof of Thorin was originally published in 1938 and was subsequently expanded upon in his 1948 thesis. Proof We will first prove the result for simple functions and eventually show how the argument can be extended by density to all measurable functions. Simple functions By symmetry, let us assume (the case trivially follows from ()). Let be a simple function, that is for some finite , and , . Similarly, let denote a simple function , namely for some finite , and , . Note that, since we are assuming and to be -finite metric spaces, and for all . Then, by proper normalization, we can assume and , with and with , as defined by the theorem statement. Next, we define the two complex functions Note that, for , and . We then extend and to depend on a complex parameter as follows: so that and . Here, we are implicitly excluding the case , which yields : In that case, one can simply take , independently of , and the following argument will only require minor adaptations. Let us now introduce the function where are constants independent of . We readily see that is an entire function, bounded on the strip . Then, in order to prove (), we only need to show that for all and as constructed above. Indeed, if () holds true, by Hadamard three-lines theorem, for all and . This means, by fixing , that where the supremum is taken with respect to all simple functions with . The left-hand side can be rewritten by means of the following lemma. In our case, the lemma above implies for all simple function with . Equivalently, for a generic simple function, Proof of () Let us now prove that our claim () is indeed certain. The sequence consists of disjoint subsets in and, thus, each belongs to (at most) one of them, say . Then, for , which implies that . With a parallel argument, each belongs to (at most) one of the sets supporting , say , and We can now bound : By applying Hölder’s inequality with conjugate exponents and , we have We can repeat the same process for to obtain , and, finally, Extension to all measurable functions in Lpθ So far, we have proven that when is a simple function. As already mentioned, the inequality holds true for all by the density of simple functions in . Formally, let and let be a sequence of simple functions such that , for all , and pointwise. Let and define , , and . Note that, since we are assuming , and, equivalently, and . Let us see what happens in the limit for . Since , and , by the dominated convergence theorem one readily has Similarly, , and imply and, by the linearity of as an operator of types and (we have not proven yet that it is of type for a generic ) It is now easy to prove that and in measure: For any , Chebyshev’s inequality yields and similarly for . Then, and a.e. for some subsequence and, in turn, a.e. Then, by Fatou’s lemma and recalling that () holds true for simple functions, Interpolation of analytic families of operators The proof outline presented in the above section readily generalizes to the case in which the operator is allowed to vary analytically. 
In fact, an analogous proof can be carried out to establish a bound on the entire function from which we obtain the following theorem of Elias Stein, published in his 1956 thesis: The theory of real Hardy spaces and the space of bounded mean oscillations permits us to wield the Stein interpolation theorem argument in dealing with operators on the Hardy space and the space of bounded mean oscillations; this is a result of Charles Fefferman and Elias Stein. Applications Hausdorff–Young inequality It has been shown in the first section that the Fourier transform maps boundedly into and into itself. A similar argument shows that the Fourier series operator, which transforms periodic functions into functions whose values are the Fourier coefficients maps boundedly into and into . The Riesz–Thorin interpolation theorem now implies the following: where and . This is the Hausdorff–Young inequality. The Hausdorff–Young inequality can also be established for the Fourier transform on locally compact Abelian groups. The norm estimate of 1 is not optimal. See the main article for references. Convolution operators Let be a fixed integrable function and let be the operator of convolution with , i.e., for each function we have . It follows from Fubini's theorem that is bounded from to and it is trivial that it is bounded from to (both bounds are by ). Therefore the Riesz–Thorin theorem gives We take this inequality and switch the role of the operator and the operand, or in other words, we think of as the operator of convolution with , and get that is bounded from to Lp. Further, since is in we get, in view of Hölder's inequality, that is bounded from to , where again . So interpolating we get where the connection between p, r and s is The Hilbert transform The Hilbert transform of is given by where p.v. indicates the Cauchy principal value of the integral. The Hilbert transform is a Fourier multiplier operator with a particularly simple multiplier: It follows from the Plancherel theorem that the Hilbert transform maps boundedly into itself. Nevertheless, the Hilbert transform is not bounded on or , and so we cannot use the Riesz–Thorin interpolation theorem directly. To see why we do not have these endpoint bounds, it suffices to compute the Hilbert transform of the simple functions and . We can show, however, that for all Schwartz functions , and this identity can be used in conjunction with the Cauchy–Schwarz inequality to show that the Hilbert transform maps boundedly into itself for all . Interpolation now establishes the bound for all , and the self-adjointness of the Hilbert transform can be used to carry over these bounds to the case. Comparison with the real interpolation method While the Riesz–Thorin interpolation theorem and its variants are powerful tools that yield a clean estimate on the interpolated operator norms, they suffer from numerous defects: some minor, some more severe. Note first that the complex-analytic nature of the proof of the Riesz–Thorin interpolation theorem forces the scalar field to be . For extended-real-valued functions, this restriction can be bypassed by redefining the function to be finite everywhere—possible, as every integrable function must be finite almost everywhere. A more serious disadvantage is that, in practice, many operators, such as the Hardy–Littlewood maximal operator and the Calderón–Zygmund operators, do not have good endpoint estimates. 
In the case of the Hilbert transform in the previous section, we were able to bypass this problem by explicitly computing the norm estimates at several midway points. This is cumbersome and is often not possible in more general scenarios. Since many such operators satisfy the weak-type estimates real interpolation theorems such as the Marcinkiewicz interpolation theorem are better-suited for them. Furthermore, a good number of important operators, such as the Hardy-Littlewood maximal operator, are only sublinear. This is not a hindrance to applying real interpolation methods, but complex interpolation methods are ill-equipped to handle non-linear operators. On the other hand, real interpolation methods, compared to complex interpolation methods, tend to produce worse estimates on the intermediate operator norms and do not behave as well off the diagonal in the Riesz diagram. The off-diagonal versions of the Marcinkiewicz interpolation theorem require the formalism of Lorentz spaces and do not necessarily produce norm estimates on the -spaces. Mityagin's theorem B. Mityagin extended the Riesz–Thorin theorem; this extension is formulated here in the special case of spaces of sequences with unconditional bases (cf. below). Assume: Then for any unconditional Banach space of sequences , that is, for any and any , . The proof is based on the Krein–Milman theorem. See also Marcinkiewicz interpolation theorem Interpolation space Notes References . . Translated from the Russian and edited by G. P. Barker and G. Kuerti. . . External links Theorems involving convexity Theorems in Fourier analysis Theorems in functional analysis Theorems in harmonic analysis Banach spaces Lp spaces Operator theory
Riesz–Thorin theorem
[ "Mathematics" ]
2,551
[ "Theorems in mathematical analysis", "Theorems in functional analysis", "Theorems in harmonic analysis" ]
968,734
https://en.wikipedia.org/wiki/Integrability%20conditions%20for%20differential%20systems
In mathematics, certain systems of partial differential equations are usefully formulated, from the point of view of their underlying geometric and algebraic structure, in terms of a system of differential forms. The idea is to take advantage of the way a differential form restricts to a submanifold, and the fact that this restriction is compatible with the exterior derivative. This is one possible approach to certain over-determined systems, for example, including Lax pairs of integrable systems. A Pfaffian system is specified by 1-forms alone, but the theory includes other types of example of differential system. To elaborate, a Pfaffian system is a set of 1-forms on a smooth manifold (which one sets equal to 0 to find solutions to the system). Given a collection of differential 1-forms on an -dimensional manifold , an integral manifold is an immersed (not necessarily embedded) submanifold whose tangent space at every point is annihilated by (the pullback of) each . A maximal integral manifold is an immersed (not necessarily embedded) submanifold such that the kernel of the restriction map on forms is spanned by the at every point of . If in addition the are linearly independent, then is ()-dimensional. A Pfaffian system is said to be completely integrable if admits a foliation by maximal integral manifolds. (Note that the foliation need not be regular; i.e. the leaves of the foliation might not be embedded submanifolds.) An integrability condition is a condition on the to guarantee that there will be integral submanifolds of sufficiently high dimension. Necessary and sufficient conditions The necessary and sufficient conditions for complete integrability of a Pfaffian system are given by the Frobenius theorem. One version states that if the ideal algebraically generated by the collection of αi inside the ring Ω(M) is differentially closed, in other words then the system admits a foliation by maximal integral manifolds. (The converse is obvious from the definitions.) Example of a non-integrable system Not every Pfaffian system is completely integrable in the Frobenius sense. For example, consider the following one-form : If dθ were in the ideal generated by θ we would have, by the skewness of the wedge product But a direct calculation gives which is a nonzero multiple of the standard volume form on R3. Therefore, there are no two-dimensional leaves, and the system is not completely integrable. On the other hand, for the curve defined by then θ defined as above is 0, and hence the curve is easily verified to be a solution (i.e. an integral curve) for the above Pfaffian system for any nonzero constant c. Examples of applications In pseudo-Riemannian geometry, we may consider the problem of finding an orthogonal coframe θi, i.e., a collection of 1-forms that form a basis of the cotangent space at every point with that are closed (dθi = 0, ). By the Poincaré lemma, the θi locally will have the form dxi for some functions xi on the manifold, and thus provide an isometry of an open subset of M with an open subset of Rn. Such a manifold is called locally flat. This problem reduces to a question on the coframe bundle of M. Suppose we had such a closed coframe If we had another coframe , then the two coframes would be related by an orthogonal transformation If the connection 1-form is ω, then we have On the other hand, But is the Maurer–Cartan form for the orthogonal group. 
Therefore, it obeys the structural equation , and this is just the curvature of M: After an application of the Frobenius theorem, one concludes that a manifold M is locally flat if and only if its curvature vanishes. Generalizations Many generalizations exist to integrability conditions on differential systems that are not necessarily generated by one-forms. The most famous of these are the Cartan–Kähler theorem, which only works for real analytic differential systems, and the Cartan–Kuranishi prolongation theorem. See for details. The Newlander–Nirenberg theorem gives integrability conditions for an almost-complex structure. Further reading Bryant, Chern, Gardner, Goldschmidt, Griffiths, Exterior Differential Systems, Mathematical Sciences Research Institute Publications, Springer-Verlag, Olver, P., Equivalence, Invariants, and Symmetry, Cambridge, Ivey, T., Landsberg, J.M., Cartan for Beginners: Differential Geometry via Moving Frames and Exterior Differential Systems, American Mathematical Society, Dunajski, M., Solitons, Instantons and Twistors, Oxford University Press, Partial differential equations Differential topology Differential systems
Integrability conditions for differential systems
[ "Mathematics" ]
1,011
[ "Topology", "Differential topology" ]
968,834
https://en.wikipedia.org/wiki/VO2%20max
{{DISPLAYTITLE:VO2 max}} V̇O2 max (also maximal oxygen consumption, maximal oxygen uptake or maximal aerobic capacity) is the maximum rate of oxygen consumption attainable during physical exertion. The name is derived from three abbreviations: "V̇" for volume (the dot over the V indicates "per unit of time" in Newton's notation), "O2" for oxygen, and "max" for maximum and usually normalized per kilogram of body mass. A similar measure is V̇O2 peak (peak oxygen consumption), which is the measurable value from a session of physical exercise, be it incremental or otherwise. It could match or underestimate the actual V̇O2 max. Confusion between the values in older and popular fitness literature is common. The capacity of the lung to exchange oxygen and carbon dioxide is constrained by the rate of blood oxygen transport to active tissue. The measurement of V̇O2 max in the laboratory provides a quantitative value of endurance fitness for comparison of individual training effects and between people in endurance training. Maximal oxygen consumption reflects cardiorespiratory fitness and endurance capacity in exercise performance. Elite athletes, such as competitive distance runners, racing cyclists or Olympic cross-country skiers, can achieve V̇O2 max values exceeding 90 mL/(kg·min), while some endurance animals, such as Alaskan huskies, have V̇O2 max values exceeding 200 mL/(kg·min). In physical training, especially in its academic literature, V̇O2 max is often used as a reference level to quantify exertion levels, such as 65% V̇O2 max as a threshold for sustainable exercise, which is generally regarded as more rigorous than heart rate, but is more elaborate to measure. Normalization per body mass V̇O2 max is expressed either as an absolute rate in (for example) litres of oxygen per minute (L/min) or as a relative rate in (for example) millilitres of oxygen per kilogram of the body mass per minute (e.g., mL/(kg·min)). The latter expression is often used to compare the performance of endurance sports athletes. However, V̇O2 max generally does not vary linearly with body mass, either among individuals within a species or among species, so comparisons of the performance capacities of individuals or species that differ in body size must be done with appropriate statistical procedures, such as analysis of covariance. Measurement and calculation Measurement Accurately measuring V̇O2 max involves a physical effort sufficient in duration and intensity to fully tax the aerobic energy system. In general clinical and athletic testing, this usually involves a graded exercise test in which exercise intensity is progressively increased while measuring: ventilation and oxygen and carbon dioxide concentration of the inhaled and exhaled air. V̇O2 max is measured during a cardiopulmonary exercise test (CPX test). The test is done on a treadmill or cycle ergometer. In untrained subjects, V̇O2 max is 10% to 20% lower when using a cycle ergometer compared with a treadmill. However, trained cyclists' results on the cycle ergometer are equal to or even higher than those obtained on the treadmill. The classic V̇O2 max, in the sense of Hill and Lupton (1923), is reached when oxygen consumption remains at a steady state ("plateau") despite an increase in workload. The occurrence of a plateau is not guaranteed and may vary by person and sampling interval, leading to modified protocols with varied results. 
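A concrete illustration of the body-mass normalization described above: converting an absolute uptake in L/min to a relative value in mL/(kg·min) is a single division by body mass. The short Python sketch below is only illustrative; the function name and the example figures (4.2 L/min, 70 kg) are not taken from the article.

def relative_vo2(absolute_l_per_min, body_mass_kg):
    # Convert an absolute oxygen uptake (L/min) to a mass-normalized
    # value in mL/(kg.min), as described in the normalization section above.
    return absolute_l_per_min * 1000.0 / body_mass_kg

# Illustrative figures only: a 70 kg athlete consuming 4.2 L of oxygen per minute
print(relative_vo2(4.2, 70.0))  # 60.0 mL/(kg.min)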
Calculation: the Fick equation V̇O2 may also be calculated by the Fick equation: , when these values are obtained during exertion at a maximal effort. Here Q is the cardiac output of the heart, CaO2 is the arterial oxygen content, and CvO2 is the venous oxygen content. (CaO2 – CvO2) is also known as the arteriovenous oxygen difference. The Fick equation may be used to measure V̇O2 in critically ill patients, but its usefulness is low even in non-exerted cases. Using a breath-based VO2 to estimate cardiac output, on the other hand, seems to be reliable enough. Estimation using submaximal exercise testing The necessity for a subject to exert maximum effort in order to accurately measure V̇O2 max can be dangerous in those with compromised respiratory or cardiovascular systems; thus, sub-maximal tests for estimating V̇O2 max have been developed. The heart rate ratio method An estimate of V̇O2 max is based on maximum and resting heart rates. In the Uth et al. (2004) formulation, it is given by: This equation uses the ratio of maximum heart rate (HRmax) to resting heart rate (HRrest) to predict V̇O2 max. The researchers cautioned that the conversion rule was based on measurements on well-trained men aged 21 to 51 only, and may not be reliable when applied to other sub-groups. They also advised that the formula is most reliable when based on actual measurement of maximum heart rate, rather than an age-related estimate. The Uth constant factor of 15.3 is given for well-trained men. Later studies have revised the constant factor for different populations. According to Voutilainen et al. 2020, the constant factor should be 14 in around 40-year-old normal weight never-smoking men with no cardiovascular diseases, bronchial asthma, or cancer. Every 10 years of age reduces the coefficient by one, as well as does the change in body weight from normal weight to obese or the change from never-smoker to current smoker. Consequently, V̇O2 max of 60-year-old obese current smoker men should be estimated by multiplying the HRmax to HRrest ratio by 10. Cooper test Kenneth H. Cooper conducted a study for the United States Air Force in the late 1960s. One of the results of this was the Cooper test in which the distance covered running in 12 minutes is measured. Based on the measured distance, an estimate of V̇O2 max [in mL/(kg·min)] can be calculated by inverting the linear regression equation, giving us: where d12 is the distance (in metres) covered in 12 minutes. An alternative equation is: where d′12 is distance (in miles) covered in 12 minutes. Multi-stage fitness test There are several other reliable tests and V̇O2 max calculators to estimate V̇O2 max, most notably the multi-stage fitness test (or beep test). Rockport fitness walking test Estimation of V̇O2 max from a timed one-mile track walk (as fast as possible) in decimal minutes (, e.g.: 20:35 would be specified as 20.58), sex, age in years, body weight in pounds (, lbs), and 60-second heart rate in beats-per-minute (, bpm) at the end of the mile. The constant is 6.3150 for males, 0 for females. Correlation coefficient for the generalized formula is 0.88. Reference values Men have a V̇O2 max that is 26% higher (6.6 mL/(kg·min)) than women for treadmill and 37.9% higher (7.6 mL/(kg·min)) than women for cycle ergometer on average. V̇O2 max is on average 22% higher (4.5 mL/(kg·min)) when measured using a treadmill compared with a cycle ergometer. 
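The estimation methods above (the Fick principle, the heart rate ratio method, and the Cooper test) reduce to simple closed-form expressions. The Python sketch below gathers them in one place as a hedged illustration: the factor 15.3 for the heart rate ratio method is quoted in the text, but the Cooper-test constants (504.9 and 44.73) and the exact Fick form V̇O2 = Q × (CaO2 − CvO2) are the commonly published versions, reproduced here as assumptions because the original equations did not survive in this copy of the article; function names and sample inputs are illustrative only.

def vo2max_uth(hr_max, hr_rest, factor=15.3):
    # Heart rate ratio method (Uth et al. 2004), in mL/(kg.min).
    # The default factor 15.3 applies to well-trained men; later studies
    # suggest smaller factors for other groups (see text).
    return factor * hr_max / hr_rest

def vo2max_cooper(distance_m):
    # Cooper 12-minute run test, in mL/(kg.min).
    # The constants below are the commonly quoted values (assumed here).
    return (distance_m - 504.9) / 44.73

def vo2_fick(cardiac_output_l_min, ca_o2_ml_per_l, cv_o2_ml_per_l):
    # Fick principle: VO2 = Q x (CaO2 - CvO2).
    # With Q in L/min and oxygen contents in mL O2 per litre of blood,
    # the result is in mL O2 per minute (absolute, not mass-normalized).
    return cardiac_output_l_min * (ca_o2_ml_per_l - cv_o2_ml_per_l)

# Illustrative inputs only
print(vo2max_uth(185, 50))    # about 56.6 mL/(kg.min)
print(vo2max_cooper(2800))    # about 51.3 mL/(kg.min)
print(vo2_fick(25, 200, 50))  # 3750 mL O2/min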
Effect of training Non-athletes The average untrained healthy male has a V̇O2 max of approximately 35–40 mL/(kg·min). The average untrained healthy female has a V̇O2 max of approximately 27–31 mL/(kg·min). These scores can improve with training and decrease with age, though the degree of trainability also varies widely. Athletes In sports where endurance is an important component in performance, such as road cycling, rowing, cross-country skiing, swimming, and long-distance running, world-class athletes typically have high V̇O2 max values. Elite male runners can consume up to 85 mL/(kg·min), and female elite runners can consume about 77 mL/(kg·min). Norwegian cyclist Oskar Svendsen holds the record for the highest V̇O2 ever tested with 97.5 mL/(kg·min). Animals V̇O2 max has been measured in other animal species. During loaded swimming, mice had a V̇O2 max of around 140 mL/(kg·min). Thoroughbred horses had a V̇O2 max of around 193 mL/(kg·min) after 18 weeks of high-intensity training. Alaskan huskies running in the Iditarod Trail Sled Dog Race had V̇O2 max values as high as 240 mL/(kg·min). Estimated V̇O2 max for pronghorn antelopes was as high as 300 mL/(kg·min). Limiting factors The factors affecting V̇O2 may be separated into supply and demand. Supply is the transport of oxygen from the lungs to the mitochondria (combining pulmonary function, cardiac output, blood volume, and capillary density of the skeletal muscle) while demand is the rate at which the mitochondria can reduce oxygen in the process of oxidative phosphorylation. Of these, the supply factors may be more limiting. However, it has also been argued that while trained subjects are probably supply limited, untrained subjects can indeed have a demand limitation. General characteristics that affect V̇O2 max include age, sex, fitness and training, and altitude. V̇O2 max can be a poor predictor of performance in runners due to variations in running economy and fatigue resistance during prolonged exercise. The body works as a system. If one of these factors is sub-par, then the whole system's normal capacity is reduced. The drug erythropoietin (EPO) can boost V̇O2 max by a significant amount in both humans and other mammals. This makes EPO attractive to athletes in endurance sports, such as professional cycling. EPO has been banned since the 1990s as an illicit performance-enhancing substance, but by 1998 it had become widespread in cycling and led to the Festina affair as well as being mentioned ubiquitously in the USADA 2012 report on the U.S. Postal Service Pro Cycling Team. Greg LeMond has suggested establishing a baseline for riders' V̇O2 max (and other attributes) to detect abnormal performance increases. Clinical use to assess cardiorespiratory fitness and mortality V̇O2 max/peak is widely used as an indicator of cardiorespiratory fitness (CRF) in select groups of athletes or, rarely, in people under assessment for disease risk. In 2016, the American Heart Association (AHA) published a scientific statement recommending that CRF quantifiable as V̇O2 max/peak be regularly assessed and used as a clinical vital sign; ergometry (exercise wattage measurement) may be used if V̇O2 is unavailable. This statement was based on evidence that lower fitness levels are associated with a higher risk of cardiovascular disease, all-cause mortality, and mortality rates. 
In addition to risk assessment, the AHA recommendation cited the value for measuring fitness to validate exercise prescriptions, physical activity counseling, and improve both management and health of people being assessed. A 2023 meta-analysis of observational cohort studies showed an inverse and independent association between V̇O2 max and all-cause mortality risk. Every one metabolic equivalent increase in estimated cardiorespiratory fitness was associated with an 11% reduction in mortality. The top third of V̇O2 max scores represented a 45% lower mortality in people compared with the lowest third. As of 2023, V̇O2 max is rarely employed in routine clinical practice to assess cardiorespiratory fitness or mortality due to its considerable demand for resources and costs. History British physiologist Archibald Hill introduced the concepts of maximal oxygen uptake and oxygen debt in 1922. Hill and German physician Otto Meyerhof shared the 1922 Nobel Prize in Physiology or Medicine for their independent work related to muscle energy metabolism. Building on this work, scientists began measuring oxygen consumption during exercise. Key contributions were made by Henry Taylor at the University of Minnesota, Scandinavian scientists Per-Olof Åstrand and Bengt Saltin in the 1950s and 60s, the Harvard Fatigue Laboratory, German universities, and the Copenhagen Muscle Research Centre. See also Anaerobic exercise Arteriovenous oxygen difference Cardiorespiratory fitness Comparative physiology Oxygen pulse Respirometry Running economy Training effect VDOT vVO2max References Exercise biochemistry Sports terminology Respiratory physiology
VO2 max
[ "Chemistry", "Biology" ]
2,589
[ "Biochemistry", "Exercise biochemistry" ]
968,879
https://en.wikipedia.org/wiki/Vortex%20tube
The vortex tube, also known as the Ranque-Hilsch vortex tube, is a mechanical device that separates a compressed gas into hot and cold streams. The gas emerging from the hot end can reach temperatures of , and the gas emerging from the cold end can reach . It has no moving parts and is considered an environmentally friendly technology because it can work solely on compressed air and does not use Freon. Its efficiency is low, however, counteracting its other environmental advantages. Pressurised gas is injected tangentially into a swirl chamber near one end of a tube, leading to a rapid rotation—the first vortex—as it moves along the inner surface of the tube to the far end. A conical nozzle allows gas specifically from this outer layer to escape at that end through a valve. The remainder of the gas is forced to return in an inner vortex of reduced diameter within the outer vortex. Gas from the inner vortex transfers energy to the gas in the outer vortex, so the outer layer is hotter at the far end than it was initially. The gas in the central vortex is likewise cooler upon its return to the starting-point, where it is released from the tube. Method of operation To explain the temperature separation in a vortex tube, there are two main approaches: Fundamental approach: the physics This approach is based on first-principles physics alone and is not limited to vortex tubes only, but applies to moving gas in general. It shows that temperature separation in a moving gas is due only to enthalpy conservation in a moving frame of reference. The thermal process in the vortex tube can be estimated in the following way: The main physical phenomenon of the vortex tube is the temperature separation between the cold vortex core and the warm vortex periphery. The "vortex tube effect" is fully explained with the work equation of Euler, also known as Euler's turbine equation, which can be written in its most general vectorial form as: , where is the total, or stagnation temperature of the rotating gas at radial position , the absolute gas velocity as observed from the stationary frame of reference is denoted with ; the angular velocity of the system is and is the isobaric heat capacity of the gas. This equation was published in 2012; it explains the fundamental operating principle of vortex tubes (Here's a video with animated demonstration of how this works). The search for this explanation began in 1933 when the vortex tube was discovered and continued for more than 80 years. The above equation is valid for an adiabatic turbine passage; it clearly shows that while gas moving towards the center is getting colder, the peripheral gas in the passage is "getting faster". Therefore, vortex cooling is due to angular propulsion. The more the gas cools by reaching the center, the more rotational energy it delivers to the vortex and thus the vortex rotates even faster. This explanation stems directly from the law of energy conservation. Compressed gas at room temperature is expanded in order to gain speed through a nozzle; it then climbs the centrifugal barrier of rotation during which energy is also lost. The lost energy is delivered to the vortex, which speeds its rotation. In a vortex tube, the cylindrical surrounding wall confines the flow at periphery and thus forces conversion of kinetic into internal energy, which produces hot air at the hot exit. Therefore, the vortex tube is a rotorless turboexpander. 
It consists of a rotorless radial inflow turbine (cold end, center) and a rotorless centrifugal compressor (hot end, periphery). The work output of the turbine is converted into heat by the compressor at the hot end. Phenomenological approach This approach relies on observation and experimental data. It is specifically tailored to the geometrical shape of the vortex tube and the details of its flow and is designed to match the particular observables of the complex vortex tube flow, namely turbulence, acoustic phenomena, pressure fields, air velocities and many others. The earlier published models of the vortex tube are phenomenological. They are: Radial pressure difference: centrifugal compression and air expansion Radial transfer of angular momentum Radial acoustic streaming of energy Radial heat pumping More on these models can be found in recent review articles on vortex tubes. The phenomenological models were developed at an earlier time when the turbine equation of Euler was not thoroughly analyzed; in the engineering literature, this equation is studied mostly to show the work output of a turbine; while temperature analysis is not performed since turbine cooling has more limited application unlike power generation, which is the main application of turbines. Phenomenological studies of the vortex tube in the past have been useful in presenting empirical data. However, due to the complexity of the vortex flow this empirical approach was able to show only aspects of the effect but was unable to explain its operating principle. Dedicated to empirical details, for a long time the empirical studies made the vortex tube effect appear enigmatic and its explanation – a matter of debate. History The vortex tube was invented in 1931 by French physicist Georges J. Ranque. It was rediscovered by Paul Dirac in 1934 while he was searching for a device to perform isotope separation, leading to development of the Helikon vortex separation process. German physicist improved the design and published a widely read paper in 1947 on the device, which he called a Wirbelrohr (literally, whirl pipe). In 1954, Westley published a comprehensive survey entitled "A bibliography and survey of the vortex tube", which included over 100 references. In 1951 Curley and McGree, in 1956 Kalvinskas, in 1964 Dobratz, in 1972 Nash, and in 1979 Hellyar made important contribution to the RHVT literature by their extensive reviews on the vortex tube and its applications. From 1952 to 1963, C. Darby Fulton, Jr. obtained four U.S. patents relating to the development of the vortex tube. In 1961, Fulton began manufacturing the vortex tube under the company name Fulton Cryogenics. Fulton sold the company to Vortec, Inc. The vortex tube was used to separate gas mixtures, oxygen and nitrogen, carbon dioxide and helium, carbon dioxide and air in 1967 by Linderstrom-Lang. Vortex tubes also seem to work with liquids to some extent, as demonstrated by Hsueh and Swenson in a laboratory experiment where free body rotation occurs from the core and a thick boundary layer at the wall. Air is separated causing a cooler air stream coming out the exhaust hoping to chill as a refrigerator. In 1988 R. T. Balmer applied liquid water as the working medium. It was found that when the inlet pressure is high, for instance 20-50 bar, the heat energy separation process exists in incompressible (liquids) vortex flow as well. 
Note that this separation is only due to heating; there is no longer cooling observed since cooling requires compressibility of the working fluid. Efficiency Vortex tubes have lower efficiency than traditional air conditioning equipment. They are commonly used for inexpensive spot cooling, when compressed air is available. Applications Current applications Commercial vortex tubes are designed for industrial applications to produce a temperature drop of up to . With no moving parts, no electricity, and no refrigerant, a vortex tube can produce refrigeration up to using 100 standard cubic feet per minute (2.832 m3/min) of filtered compressed air at . A control valve in the hot air exhaust adjusts temperatures, flows and refrigeration over a wide range. Vortex tubes are used for cooling of cutting tools (lathes and mills, both manually-operated and CNC machines) during machining. The vortex tube is well-matched to this application: machine shops generally already use compressed air, and a fast jet of cold air provides both cooling and removal of the chips produced by the tool. This eliminates or drastically reduces the need for liquid coolant, which is messy, expensive, and environmentally hazardous. See also Heat pump Maxwell's demon Windhexe References Further reading G. Ranque, (1933) "Expériences sur la détente giratoire avec productions simultanées d'un echappement d'air chaud et d'un echappement d'air froid," Journal de Physique et Le Radium, Supplement, 7th series, 4 : 112 S – 114 S. H. C. Van Ness, Understanding Thermodynamics, New York: Dover, 1969, starting on page 53. A discussion of the vortex tube in terms of conventional thermodynamics. Mark P. Silverman, And Yet it Moves: Strange Systems and Subtle Questions in Physics, Cambridge, 1993, Chapter 6 Samuel B. Hsueh and Frank R. Swenson,"Vortex Diode Interior Flows," 1970 Missouri Academy of Science Proceedings, Warrensburg, Mo. C. L. Stong, The Amateur Scientist, London: Heinemann Educational Books Ltd, 1962, Chapter IX, Section 4, The "Hilsch" Vortex Tube, p514-519. M. Kurosaka, Acoustic Streaming in Swirling Flow and the Ranque-Hilsch (vortex-tube) Effect, Journal of Fluid Mechanics, 1982, 124:139-172 M. Kurosaka, J.Q. Chu, J.R. Goodman, Ranque-Hilsch Effect Revisited: Temperature Separation Traced to Orderly Spinning Waves or 'Vortex Whistle', Paper AIAA-82-0952 presented at the AIAA/ASME 3rd Joint Thermophysics Conference (June 1982) R. Ricci, A. Secchiaroli, V. D’Alessandro, S. Montelpare. Numerical analysis of compressible turbulent helical flow in a Ranque-Hilsch vortex tube. Computational Methods and Experimental Measurement XIV, pp. 353–364, Ed. C. Brebbia, C.M. Carlomagno, . A. Secchiaroli, R. Ricci, S. Montelpare, V. D’Alessandro. Fluid Dynamics Analysis of a Ranque-Hilsch Vortex-Tube. Il Nuovo Cimento C, vol.32, 2009, . A. Secchiaroli, R. Ricci, S. Montelpare, V. D’Alessandro. Numerical simulation of turbulent flow in a Ranque-Hilsch vortex-tube. International Journal of Heat and Mass Transfer, Vol. 52, Issues 23–24, November 2009, pp. 5496–5511, . N. Pourmahmoud, A. Hassanzadeh, O. Moutaby. Numerical Analysis of The Effect of Helical Nozzles Gap on The Cooling Capacity of Ranque Hilsch Vortex Tube. International Journal of Refrigeration, Vol. 35, Issue 5, 2012, pp. 1473–1483, . M. G. Ranque, 1933, "Experiences sur la detente giratoire avec production simulanees d’un echappement d’air chaud et d’air froid", Journal de Physique et le Radium (in French), Supplement, 7th series, Vol. 4, pp. 112 S–114 S. R. 
Hilsch, 1947, "The Use of the Expansion of Gases in a Centrifugal Field as Cooling Process", Review of Scientific Instruments, Vol. 18, No. 2, pp. 108–113. J Reynolds, 1962, "A Note on Vortex Tube Flows", Journal of Fluid Mechanics, Vol. 14, pp. 18–20. T. T. Cockerill, 1998, "Thermodynamics and Fluid Mechanics of a Ranque-Hilsch Vortex Tube", Ph.D. Thesis, University of Cambridge, Department of Engineering. W. Fröhlingsdorf, and H. Unger, 1999, "Numerical Investigations of the Compressible Flow and the Energy Separation in the Ranque-Hilsch Vortex Tube", Int. J. Heat Mass Transfer, Vol. 42, pp. 415–422. J. Lewins, and A. Bejan, 1999, "Vortex Tube Optimization Theory", Energy, Vol. 24, pp. 931–943. J. P. Hartnett, and E. R. G. Eckert, 1957, "Experimental Study of the Velocity and Temperature Distribution in a high-velocity vortex-type flow", Transactions of the ASME, Vol. 79, No. 4, pp. 751–758. M. Kurosaka, 1982, "Acoustic Streaming in Swirling Flows", Journal of Fluid Mechanics, Vol. 124, pp. 139–172. K. Stephan, S. Lin, M. Durst, F. Huang, and D. Seher, 1983, "An Investigation of Energy Separation in a Vortex Tube", International Journal of Heat and Mass Transfer, Vol. 26, No. 3, pp. 341–348. B. K. Ahlborn, and J. M. Gordon, 2000, "The Vortex Tube as a Classical Thermodynamic Refrigeration Cycle", Journal of Applied Physics, Vol. 88, No. 6, pp. 3645–3653. G. W. Sheper, 1951, Refrigeration Engineering, Vol. 59, No. 10, pp. 985–989. J. M. Nash, 1991, "Vortex Expansion Devices for High Temperature Cryogenics", Proc. of the 26th Intersociety Energy Conversion Engineering Conference, Vol. 4, pp. 521–525. D. Li, J. S. Baek, E. A. Groll, and P. B. Lawless, 2000, "Thermodynamic Analysis of Vortex Tube and Work Output Devices for the Transcritical Carbon Dioxide Cycle", Preliminary Proceedings of the 4th IIR-Gustav Lorentzen Conference on Natural Working Fluids at Purdue, E. A. Groll & D. M. Robinson, editors, Ray W. Herrick Laboratories, Purdue University, pp. 433–440. H. Takahama, 1965, "Studies on Vortex Tubes", Bulletin of JSME, Vol. 8, No. 3, pp. 433–440. B. Ahlborn, and S. Groves, 1997, "Secondary Flow in a Vortex Tube", Fluid Dyn. Research, Vol. 21, pp. 73–86. H. Takahama, and H. Yokosawa, 1981, "Energy Separation in Vortex Tubes with a Divergent Chamber", ASME Journal of Heat Transfer, Vol. 103, pp. 196–203. M. Sibulkin, 1962, "Unsteady, Viscous, Circular Flow. Part 3: Application to the Ranque-Hilsch Vortex Tube", Journal of Fluid Mechanics, Vol. 12, pp. 269–293. K. Stephan, S. Lin, M. Durst, F. Huang, and D. Seher, 1984, "A Similarity Relation for Energy Separation in a Vortex Tube", Int. J. Heat Mass Transfer, Vol. 27, No. 6, pp. 911–920. H. Takahama, and H. Kawamura, 1979, "Performance Characteristics of Energy Separation in a Steam-Operated Vortex Tube", International Journal of Engineering Science, Vol. 17, pp. 735–744. G. Lorentzen, 1994, "Revival of Carbon Dioxide as a Refrigerant", H&V Engineer, Vol. 66. No. 721, pp. 9–14. D. M. Robinson, and E. A. Groll, 1996, "Using Carbon Dioxide in a Transcritical Vapor Compression Refrigeration Cycle", Proceedings of the 1996 International Refrigeration Conference at Purdue, J. E. Braun and E. A. Groll, editors, Ray W. Herrick Laboratories, Purdue University, pp. 329–336. W. A. Little, 1998, "Recent Developments in Joule-Thomson Cooling: Gases, Coolers, and Compressors", Proc. Of the 5th Int. Cryocooler Conference, pp. 3–11. A. P. 
Kleemenko, 1959, "One Flow Cascade Cycle (in schemes of Natural Gas Liquefaction and Separation)", Proceedings of the 10th International Congress on Refrigeration, Pergamon Press, London, p. 34. J. Marshall, 1977, "Effect of Operating Conditions, Physical Size, and Fluid Characteristics on the Gas Separation Performance of a Linderstrom-Lang Vortex Tube", Int. J. Heat Mass Transfer, Vol. 20, pp. 227–231 External links G. J. Ranque's U.S. Patent Detailed explanation of the vortex tube effect with many pictures Oberlin college physics demo Building a Vortex Tube This Old Tony, YouTube Vortex'n 2 This Old Tony, YouTube Cooling technology Thermodynamics Gas technologies
Vortex tube
[ "Physics", "Chemistry", "Mathematics" ]
3,514
[ "Thermodynamics", "Dynamical systems" ]
969,136
https://en.wikipedia.org/wiki/Temporal%20Key%20Integrity%20Protocol
Temporal Key Integrity Protocol (TKIP ) is a security protocol used in the IEEE 802.11 wireless networking standard. TKIP was designed by the IEEE 802.11i task group and the Wi-Fi Alliance as an interim solution to replace WEP without requiring the replacement of legacy hardware. This was necessary because the breaking of WEP had left Wi-Fi networks without viable link-layer security, and a solution was required for already deployed hardware. However, TKIP itself is no longer considered secure, and was deprecated in the 2012 revision of the 802.11 standard. Background On October 31, 2002, the Wi-Fi Alliance endorsed TKIP under the name Wi-Fi Protected Access (WPA). The IEEE endorsed the final version of TKIP, along with more robust solutions such as 802.1X and the AES based CCMP, when they published IEEE 802.11i-2004 on 23 July 2004. The Wi-Fi Alliance soon afterwards adopted the full specification under the marketing name WPA2. TKIP was resolved to be deprecated by the IEEE in January 2009. Technical details TKIP and the related WPA standard implement three new security features to address security problems encountered in WEP protected networks. First, TKIP implements a key mixing function that combines the secret root key with the initialization vector before passing it to the RC4 cipher initialization. WEP, in comparison, merely concatenated the initialization vector to the root key, and passed this value to the RC4 routine. This permitted the vast majority of the RC4 based WEP related key attacks. Second, WPA implements a sequence counter to protect against replay attacks. Packets received out of order will be rejected by the access point. Finally, TKIP implements a 64-bit Message Integrity Check (MIC) and re-initializes the sequence number each time when a new key (Temporal Key) is used. To be able to run on legacy WEP hardware with minor upgrades, TKIP uses RC4 as its cipher. TKIP also provides a rekeying mechanism. TKIP ensures that every data packet is sent with a unique encryption key(Interim Key/Temporal Key + Packet Sequence Counter). Key mixing increases the complexity of decoding the keys by giving an attacker substantially less data that has been encrypted using any one key. WPA2 also implements a new message integrity code, MIC. The message integrity check prevents forged packets from being accepted. Under WEP it was possible to alter a packet whose content was known even if it had not been decrypted. Security TKIP uses the same underlying mechanism as WEP, and consequently is vulnerable to a number of similar attacks. The message integrity check, per-packet key hashing, broadcast key rotation, and a sequence counter discourage many attacks. The key mixing function also eliminates the WEP key recovery attacks. Notwithstanding these changes, the weakness of some of these additions have allowed for new, although narrower, attacks. Packet spoofing and decryption TKIP is vulnerable to a MIC key recovery attack that, if successfully executed, permits an attacker to transmit and decrypt arbitrary packets on the network being attacked. The current publicly available TKIP-specific attacks do not reveal the Pairwise Master Key or the Pairwise Temporal Keys. On November 8, 2008, Martin Beck and Erik Tews released a paper detailing how to recover the MIC key and transmit a few packets. This attack was improved by Mathy Vanhoef and Frank Piessens in 2013, where they increase the amount of packets an attacker can transmit, and show how an attacker can also decrypt arbitrary packets. 
The basis of the attack is an extension of the WEP chop-chop attack. Because WEP uses a cryptographically insecure checksum mechanism (CRC32), an attacker can guess individual bytes of a packet, and the wireless access point will confirm or deny whether or not the guess is correct. If the guess is correct, the attacker will be able to detect the guess is correct and continue to guess other bytes of the packet. However, unlike the chop-chop attack against a WEP network, the attacker must wait for at least 60 seconds after an incorrect guess (a successful circumvention of the CRC32 mechanism) before continuing the attack. This is because although TKIP continues to use the CRC32 checksum mechanism, it implements an additional MIC code named Michael. If two incorrect Michael MIC codes are received within 60 seconds, the access point will implement countermeasures, meaning it will rekey the TKIP session key, thus changing future keystreams. Accordingly, attacks on TKIP will wait an appropriate amount of time to avoid these countermeasures. Because ARP packets are easily identified by their size, and the vast majority of the contents of this packet would be known to an attacker, the number of bytes an attacker must guess using the above method is rather small (approximately 14 bytes). Beck and Tews estimate recovery of 12 bytes is possible in about 12 minutes on a typical network, which would allow an attacker to transmit 3–7 packets of at most 28 bytes. Vanhoef and Piessens improved this technique by relying on fragmentation, allowing an attacker to transmit arbitrarily many packets, each at most 112 bytes in size. The Vanhoef–Piessens attacks also can be used to decrypt arbitrary packets of the attack's choice. An attacker already has access to the entire ciphertext packet. Upon retrieving the entire plaintext of the same packet, the attacker has access to the keystream of the packet, as well as the MIC code of the session. Using this information the attacker can construct a new packet and transmit it on the network. To circumvent the WPA implemented replay protection, the attacks use QoS channels to transmit these newly constructed packets. An attacker able to transmit these packets may be able to implement any number of attacks, including ARP poisoning attacks, denial of service, and other similar attacks, with no need of being associated with the network. Royal Holloway attack A group of security researchers at the Information Security Group at Royal Holloway, University of London reported a theoretical attack on TKIP which exploits the underlying RC4 encryption mechanism. TKIP uses a similar key structure to WEP with the low 16-bit value of a sequence counter (used to prevent replay attacks) being expanded into the 24-bit "IV", and this sequence counter always increments on every new packet. An attacker can use this key structure to improve existing attacks on RC4. In particular, if the same data is encrypted multiple times, an attacker can learn this information from only 224 connections. While they claim that this attack is on the verge of practicality, only simulations were performed, and the attack has not been demonstrated in practice. NOMORE attack In 2015, security researchers from KU Leuven presented new attacks against RC4 in both TLS and WPA-TKIP. Dubbed the Numerous Occurrence MOnitoring & Recovery Exploit (NOMORE) attack, it is the first attack of its kind that was demonstrated in practice. 
The attack against WPA-TKIP can be completed within an hour, and allows an attacker to decrypt and inject arbitrary packets. Legacy ZDNet reported on June 18, 2010, that WEP & TKIP would soon be disallowed on Wi-Fi devices by the Wi-Fi Alliance. However, a survey in 2013 showed that TKIP was still in widespread use. The IEEE 802.11n standard prohibits data rates from exceeding 54 Mbps if TKIP is used as the Wi-Fi cipher. See also Wireless network interface controller CCMP Wi-Fi Protected Access IEEE 802.11i-2004 References Broken cryptography algorithms Cryptographic protocols Key management Secure communication Wireless networking IEEE 802.11
Temporal Key Integrity Protocol
[ "Technology", "Engineering" ]
1,632
[ "Wireless networking", "Computer networks engineering" ]
970,579
https://en.wikipedia.org/wiki/Nationwide%20Urban%20Runoff%20Program
The Nationwide Urban Runoff Program (NURP) was a research project conducted by the United States Environmental Protection Agency (EPA) between 1979 and 1983. It was the first comprehensive study of urban stormwater pollution across the United States. Study objectives The principal focus areas of the study consisted of: Examine the water quality aspects of urban runoff, and a comparison of results across various urban sites Assess the impact of urban runoff on overall water quality Implement stormwater management best practices. A major component of the project was an analysis of water samples collected during 2,300 storms in 28 major metropolitan areas. Findings Among the conclusions of the report are the following: "Heavy metals (especially copper, lead and zinc) are by far the most prevalent priority pollutant constituents found in urban runoff...Copper is suggested to be the most significant [threat] of the three." "Coliform bacteria are present at high levels in urban runoff." "Nutrients are generally present in urban runoff, but... [generally] concentrations do not appear to be high in comparison with other possible discharges." "Oxygen demanding substances are present in urban runoff at concentrations approximating those in secondary treatment plant discharges." "The physical aspects of urban runoff, e.g. erosion and scour, can be a significant cause of habitat disruption and can affect the type of fishery present." "Detention basins... [and] recharge devices are capable of providing very effective removal of pollutants in urban runoff." "Wet basins (designs which maintain a permanent water pool) have the greatest performance capabilities." "Wetlands are considered to be a promising technique for control of urban runoff quality." An interesting finding of the NURP was that street sweeping was considered to be, "ineffective as a technique for improving the quality of urban runoff". Impact of the report In 1987, the results of the report were used as the basis of an amendment to the Clean Water Act requiring local governments and industry to address the pollution sources indicated by the report. The amendment requires all industrial stormwater dischargers (including many construction sites) and municipal storm sewer systems, affecting virtually all cities and towns in the country, to obtain discharge permits. EPA published national stormwater regulations in 1990 and 1999. EPA and state agencies began issuing stormwater permits in 1991. See Stormwater management permits. About "NURP ponds" The term "NURP ponds" refers to retention basins (also called "wet ponds") that capture sediment from stormwater runoff as it is detained, and that are designed to perform to the level of the more effective ponds observed in the NURP studies. Some practitioners may assume that a "NURP pond" design conforms to some particular standard issued by EPA, but in fact EPA has issued no regulations or other requirements regarding the design of stormwater ponds. (However, some states and municipalities have issued stormwater design manuals, and these publications may include a reference to a "NURP pond".) See also Green infrastructure Stormwater management Water pollution in the United States References External links EPA Stormwater Permit Program EPA Nonpoint Source Management Program Stormwater management Water pollution in the United States United States Environmental Protection Agency
Nationwide Urban Runoff Program
[ "Chemistry", "Environmental_science" ]
648
[ "Water treatment", "Stormwater management", "Water pollution" ]
970,650
https://en.wikipedia.org/wiki/Specific%20absorption%20rate
Specific absorption rate (SAR) is a measure of the rate at which energy is absorbed per unit mass by a human body when exposed to a radio frequency (RF) electromagnetic field. It is defined as the power absorbed per mass of tissue and has units of watts per kilogram (W/kg). SAR is usually averaged either over the whole body, or over a small sample volume (typically 1 g or 10 g of tissue). The value cited is then the maximum level measured in the body part studied over the stated volume or mass. Calculation SAR for electromagnetic energy can be calculated from the electric field within the tissue as where is the sample electrical conductivity, is the RMS electric field, is the sample density, is the volume of the sample. SAR measures exposure to fields between 100 kHz and 10 GHz (known as radio waves). It is commonly used to measure power absorbed from mobile phones and during MRI scans. The value depends heavily on the geometry of the part of the body that is exposed to the RF energy and on the exact location and geometry of the RF source. Thus tests must be made with each specific source, such as a mobile-phone model and at the intended position of use. Mobile phone SAR testing When measuring the SAR due to a mobile phone the phone is placed against a representation of a human head (a "SAR Phantom") in a talk position. The SAR value is then measured at the location that has the highest absorption rate in the entire head, which in the case of a mobile phone is often as close to the phone's antenna as possible. Measurements are made for different positions on both sides of the head and at different frequencies representing the frequency bands at which the device can transmit. Depending on the size and capabilities of the phone, additional testing may also be required to represent usage of the device while placed close to the user's body and/or extremities. Various governments have defined maximum SAR levels for RF energy emitted by mobile devices: United States: the FCC requires that phones sold have a SAR level at or below 1.6 watts per kilogram (W/kg) taken over the volume containing a mass of 1 gram of tissue that is absorbing the most signal. European Union: CENELEC specify SAR limits within the EU, following IEC standards. For mobile phones, and other such hand-held devices, the SAR limit is 2 W/kg averaged over the 10 g of tissue absorbing the most signal (IEC 62209-1). India: switched from the EU limits to the US limits for mobile handsets in 2012. Unlike the US, India will not rely solely on SAR measurements provided by manufacturers; random compliance tests are done by a government-run Telecommunication Engineering Center (TEC) SAR Laboratory on handsets and 10% of towers. All handsets must have a hands-free mode. SAR values are heavily dependent on the size of the averaging volume. Without information about the averaging volume used, comparisons between different measurements cannot be made. Thus, the European 10-gram ratings should be compared among themselves, and the American 1-gram ratings should only be compared among themselves. To check SAR on your mobile phone, review the documentation provided with the phone, dial *#07# (only works on some models) or visit the manufacturer's website. MRI scanner SAR testing For magnetic resonance imaging the limits (described in IEC 60601-2-33) are slightly more complicated: Note: Averaging time of 6 minutes. (a) Local SAR is determined over the mass of 10 g. 
(b) The limit scales dynamically with the ratio "exposed patient mass / patient mass": Normal operating mode: Partial body SAR = 10 W/kg − (8 W/kg × exposed patient mass / patient mass). 1st level controlled: Partial body SAR = 10 W/kg − (6 W/kg × exposed patient mass / patient mass). (c) In cases where the orbit is in the field of a small local RF transmit coil, care should be taken to ensure that the temperature rise is limited to 1 °C. Criticism SAR limits set by law do not consider that the human body is particularly sensitive to the power peaks or frequencies responsible for the microwave hearing effect. Frey reports that the microwave hearing effect occurs with average power density exposures of 400 μW/cm2, well below SAR limits (as set by government regulations). Notes: In comparison to the short term, relatively intensive exposures described above, for long-term environmental exposure of the general public there is a limit of 0.08 W/kg averaged over the whole body. A whole-body average SAR of 0.4 W/kg has been chosen as the restriction that provides adequate protection for occupational exposure. An additional safety factor of 5 is introduced for exposure of the public, giving an average whole-body SAR limit of 0.08 W/kg. FCC advice The FCC guide "Specific Absorption Rate (SAR) For Cell Phones: What It Means For You", after detailing the limitations of SAR values, offers the following "bottom line" editorial: MSBE (minimum SAR with biological effect) In order to find out possible advantages and the interaction mechanisms of electromagnetic fields (EMF), the minimum SAR (or intensity) that could have biological effect (MSBE) would be much more valuable in comparison to studying high-intensity fields. Such studies can possibly shed light on thresholds of non-ionizing radiation effects and cell capabilities (e.g., oxidative response). In addition, it is more likely to reduce the complexity of the EMF interaction targets in cell cultures by lowering the exposure power, which at least reduces the overall rise in temperature. This parameter might differ regarding the case under study and depends on the physical and biological conditions of the exposed target. FCC regulations The FCC regulations for SAR are contained in 47 C.F.R. 1.1307(b), 1.1310, 2.1091, 2.1093 and also discussed in OET Bulletin No. 56, "Questions and Answers About the Biological Effects and Potential Hazards of Radiofrequency Electromagnetic Fields." European regulations Specific energy absorption rate (SAR) averaged over the whole body or over parts of the body, is defined as the rate at which energy is absorbed per unit mass of body tissue and is expressed in watts per kilogram (W/kg). Whole body SAR is a widely accepted measure for relating adverse thermal effects to RF exposure. Legislative acts in the European Union include directive 2013/35/EU of the European Parliament and of the Council of 26 June 2013 on the minimum health and safety requirements regarding the exposure of workers to the risks arising from physical agents (electromagnetic fields) (20th individual Directive within the meaning of Article 16(1) of Directive 89/391/EEC) and repealing Directive 2004/40/EC) in its annex III "THERMAL EFFECTS" for "EXPOSURE LIMIT VALUES AND ACTION LEVELS IN THE FREQUENCY RANGE FROM 100 kHz TO 300 GHz". 
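To make the definition in the Calculation section concrete, the following Python sketch evaluates the local SAR, σE²/ρ, on a set of small tissue volume elements and then averages over the most strongly exposed mass, mimicking the 1 g (US) and 10 g (EU) averaging masses mentioned above. It is a simplified illustration of the definition only: the tissue properties and field values are assumed for the example, the voxel ranking ignores the contiguity requirement of real test protocols, and the function names are not from any standard.

def local_sar(sigma_s_per_m, e_rms_v_per_m, rho_kg_per_m3):
    # Point-wise SAR in W/kg: conductivity times squared RMS E-field, divided by density.
    return sigma_s_per_m * e_rms_v_per_m ** 2 / rho_kg_per_m3

def mass_averaged_sar(voxels, averaging_mass_kg):
    # Average SAR over the highest-SAR voxels until the averaging mass
    # (e.g. 0.001 kg or 0.010 kg) is reached. Each voxel is a tuple
    # (sigma, e_rms, rho, volume_m3). Contiguity of the voxels is ignored here.
    ranked = sorted(voxels, key=lambda v: local_sar(v[0], v[1], v[2]), reverse=True)
    absorbed_power = 0.0  # watts
    mass = 0.0            # kilograms
    for sigma, e_rms, rho, vol in ranked:
        absorbed_power += sigma * e_rms ** 2 * vol  # P = sigma * E^2 * V
        mass += rho * vol
        if mass >= averaging_mass_kg:
            break
    return absorbed_power / mass

# Assumed, muscle-like tissue: sigma = 1 S/m, rho = 1000 kg/m3, E_rms = 40 V/m
tissue = [(1.0, 40.0, 1000.0, 1e-7) for _ in range(20)]  # twenty 0.1 cm^3 voxels
print(local_sar(1.0, 40.0, 1000.0))      # 1.6 W/kg, which equals the US 1 g limit
print(mass_averaged_sar(tissue, 0.001))  # same value here, since the tissue is uniform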
See also Dielectric heating Electromagnetic radiation and health References External links Specific Absorption Rate (SAR) for Cellular Telephones at the US Federal Communications Commission (FCC) "Evaluating Compliance with FCC Guidelines for Human Exposure to Radiofrequency Electromagnetic Field" (Supplement C to OET Bulletin 65), June 2001; a detailed technical document about measuring SAR Electromagnetic fields and public health at the World Health Organization (WHO) "An Update on SAR Standards and the Basic Requirements for SAR Assessment" at ETS-Lindgren website (Archive.org link), April 2005 Example of a detailed SAR report from the FCC web site (for an Apple iPod Touch 4th generation); hosted at 3rd party website Manufacturers' SAR official websites Apple Samsung Sony Huawei FCC Regulations OET Bulletin No. 56, "Questions and Answers About the Biological Effects and Potential Hazards of Radiofrequency Electromagnetic Fields." Radiobiology Biophysics Rates
Specific absorption rate
[ "Physics", "Chemistry", "Biology" ]
1,614
[ "Radiobiology", "Radioactivity", "Applied and interdisciplinary physics", "Biophysics" ]
14,367,590
https://en.wikipedia.org/wiki/P1-derived%20artificial%20chromosome
A P1-derived artificial chromosome, or PAC, is a DNA construct derived from the DNA of P1 bacteriophages and bacterial artificial chromosomes. It can carry large amounts (about 100–300 kilobases) of other sequences for a variety of bioengineering purposes in bacteria. It is an efficient type of cloning vector used to clone DNA fragments (100- to 300-kb insert size; average, 150 kb) in Escherichia coli cells. History of PAC The bacteriophage P1 was first isolated by Dr. Giuseppe Bertani. In his study, he noticed that the lysogen produced abnormal non-continuous phages, and later found that phage P1 was produced from the Lisbonne lysogen strain, in addition to bacteriophages P2 and P3. P1 has the ability to copy a bacterial host's genome and integrate that DNA into other bacterial hosts, a process known as generalized transduction. Later on, P1 was developed as a cloning vector by Nat Sternberg and colleagues in the 1990s. It is capable of Cre-Lox recombination. The P1 vector system was first developed to carry relatively large DNA fragments in plasmids (95–100 kb). Construction PAC has two loxP sites, which can be recognized by the phage Cre recombinase during Cre-Lox recombination. This process circularizes the DNA strand, forming a plasmid, which can then be inserted into bacteria such as Escherichia coli. The transformation is usually done by electroporation, which uses electricity to allow the plasmids to permeate into the cells. If high expression levels are desired, the P1 lytic replicon can be used in constructs. Electroporation allows for lysogeny of PACs so that they can replicate within cells without disturbing other chromosomes. Comparison with other artificial chromosomes PAC is one of the artificial chromosome vectors. Some other artificial chromosomes include: the bacterial artificial chromosome, the yeast artificial chromosome and the human artificial chromosome. Compared to other artificial chromosomes, it can carry relatively large DNA fragments, however less so than the yeast artificial chromosome (YAC). Some advantages of PACs compared to YACs include easier manipulation of the bacterial system, easier separation from host DNA, a higher transformation rate, more stable inserts, and the fact that they are non-chimeric, which means they do not rearrange and ligate to form new DNA strands, making them a user-friendly vector choice. Applications PAC is commonly used as a large-capacity vector that allows propagation of large DNA inserts in Escherichia coli. This feature has been commonly used for: building genome libraries for humans, mice, and other organisms, which has helped projects such as the Human Genome Project; such libraries have served as templates for gene sequencing (for example, as gene templates in mouse gene-function analysis); genome analysis of the functions of specific genes in more complex organisms (plants, animals, etc.); and facilitating gene expression. Since PACs are derived from phages, PACs and their variants are also useful in PAC-based phage therapy and antibiotic studies. See also Bacterial artificial chromosome Human artificial chromosome Yeast artificial chromosome References External links Online Medical Dictionary P1-derived artificial chromosome P1-derived artificial chromosome (PAC) definition DNA Bacteriophages Molecular biology
P1-derived artificial chromosome
[ "Chemistry", "Biology" ]
699
[ "Biochemistry", "Molecular biology" ]
14,369,650
https://en.wikipedia.org/wiki/Strengthening%20mechanisms%20of%20materials
Methods have been devised to modify the yield strength, ductility, and toughness of both crystalline and amorphous materials. These strengthening mechanisms give engineers the ability to tailor the mechanical properties of materials to suit a variety of different applications. For example, the favorable properties of steel result from interstitial incorporation of carbon into the iron lattice. Brass, a binary alloy of copper and zinc, has superior mechanical properties compared to its constituent metals due to solution strengthening. Work hardening (such as beating a red-hot piece of metal on anvil) has also been used for centuries by blacksmiths to introduce dislocations into materials, increasing their yield strengths. Basic description Plastic deformation occurs when large numbers of dislocations move and multiply so as to result in macroscopic deformation. In other words, it is the movement of dislocations in the material which allows for deformation. If we want to enhance a material's mechanical properties (i.e. increase the yield and tensile strength), we simply need to introduce a mechanism which prohibits the mobility of these dislocations. Whatever the mechanism may be, (work hardening, grain size reduction, etc.) they all hinder dislocation motion and render the material stronger than previously. The stress required to cause dislocation motion is orders of magnitude lower than the theoretical stress required to shift an entire plane of atoms, so this mode of stress relief is energetically favorable. Hence, the hardness and strength (both yield and tensile) critically depend on the ease with which dislocations move. Pinning points, or locations in the crystal that oppose the motion of dislocations, can be introduced into the lattice to reduce dislocation mobility, thereby increasing mechanical strength. Dislocations may be pinned due to stress field interactions with other dislocations and solute particles, creating physical barriers from second phase precipitates forming along grain boundaries. There are five main strengthening mechanisms for metals, each is a method to prevent dislocation motion and propagation, or make it energetically unfavorable for the dislocation to move. For a material that has been strengthened, by some processing method, the amount of force required to start irreversible (plastic) deformation is greater than it was for the original material. In amorphous materials such as polymers, amorphous ceramics (glass), and amorphous metals, the lack of long range order leads to yielding via mechanisms such as brittle fracture, crazing, and shear band formation. In these systems, strengthening mechanisms do not involve dislocations, but rather consist of modifications to the chemical structure and processing of the constituent material. The strength of materials cannot infinitely increase. Each of the mechanisms explained below involves some trade-off by which other material properties are compromised in the process of strengthening. Strengthening mechanisms in metals Work hardening The primary species responsible for work hardening are dislocations. Dislocations interact with each other by generating stress fields in the material. The interaction between the stress fields of dislocations can impede dislocation motion by repulsive or attractive interactions. Additionally, if two dislocations cross, dislocation line entanglement occurs, causing the formation of a jog which opposes dislocation motion. 
These entanglements and jogs act as pinning points, which oppose dislocation motion. As both of these processes are more likely to occur when more dislocations are present, there is a correlation between dislocation density and shear strength. The shear strengthening provided by dislocation interactions can be described by: where is a proportionality constant, is the shear modulus, is the Burgers vector, and is the dislocation density. Dislocation density is defined as the dislocation line length per unit volume: Similarly, the axial strengthening will be proportional to the dislocation density. This relationship does not apply when dislocations form cell structures. When cell structures are formed, the average cell size controls the strengthening effect. Increasing the dislocation density increases the yield strength which results in a higher shear stress required to move the dislocations. This process is easily observed while working a material (by a process of cold working in metals). Theoretically, the strength of a material with no dislocations will be extremely high () because plastic deformation would require the breaking of many bonds simultaneously. However, at moderate dislocation density values of around 107-109 dislocations/m2, the material will exhibit a significantly lower mechanical strength. Analogously, it is easier to move a rubber rug across a surface by propagating a small ripple through it than by dragging the whole rug. At dislocation densities of 1014 dislocations/m2 or higher, the strength of the material becomes high once again. Also, the dislocation density cannot be infinitely high, because then the material would lose its crystalline structure. Solid solution strengthening and alloying For this strengthening mechanism, solute atoms of one element are added to another, resulting in either substitutional or interstitial point defects in the crystal (see Figure on the right). The solute atoms cause lattice distortions that impede dislocation motion, increasing the yield stress of the material. Solute atoms have stress fields around them which can interact with those of dislocations. The presence of solute atoms impart compressive or tensile stresses to the lattice, depending on solute size, which interfere with nearby dislocations, causing the solute atoms to act as potential barriers. The shear stress required to move dislocations in a material is: where is the solute concentration and is the strain on the material caused by the solute. Increasing the concentration of the solute atoms will increase the yield strength of a material, but there is a limit to the amount of solute that can be added, and one should look at the phase diagram for the material and the alloy to make sure that a second phase is not created. In general, the solid solution strengthening depends on the concentration of the solute atoms, shear modulus of the solute atoms, size of solute atoms, valency of solute atoms (for ionic materials), and the symmetry of the solute stress field. The magnitude of strengthening is higher for non-symmetric stress fields because these solutes can interact with both edge and screw dislocations, whereas symmetric stress fields, which cause only volume change and not shape change, can only interact with edge dislocations. Precipitation hardening In most binary systems, alloying above a concentration given by the phase diagram will cause the formation of a second phase. A second phase can also be created by mechanical or thermal treatments. 
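Before turning to the second-phase particles themselves, the two relations quoted above can be illustrated numerically: a Taylor-type work-hardening term of the form delta_tau = alpha*G*b*sqrt(rho), and a solid-solution term that grows with solute concentration and lattice strain (written below in one commonly quoted form). This is only a rough sketch; the shear modulus, Burgers vector, prefactor and concentrations are assumed, copper-like placeholder values rather than data from this article.

```python
# Rough numeric sketch of the two hardening contributions discussed above.
# All parameter values are illustrative assumptions (roughly copper-like).
import math

G = 48e9      # shear modulus, Pa (assumed)
b = 0.256e-9  # Burgers vector magnitude, m (assumed)

def forest_hardening(rho, alpha=0.3):
    """Taylor-type relation: delta_tau = alpha * G * b * sqrt(rho)."""
    return alpha * G * b * math.sqrt(rho)

def solid_solution_hardening(c_s, eps_s):
    """One commonly quoted solid-solution form: delta_tau ~ G * sqrt(c_s) * eps_s**1.5."""
    return G * math.sqrt(c_s) * eps_s ** 1.5

for rho in (1e10, 1e12, 1e14):  # annealed to heavily cold-worked dislocation densities, m^-2
    print(f"rho = {rho:.0e} m^-2  ->  delta_tau = {forest_hardening(rho) / 1e6:6.1f} MPa")

# 1 at.% solute producing a 5% local lattice strain (assumed numbers)
print(f"solid solution: delta_tau = {solid_solution_hardening(0.01, 0.05) / 1e6:.1f} MPa")
```

Note the square-root dependence: increasing the dislocation density a hundredfold, as in heavy cold work, raises the hardening contribution by only a factor of ten.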
The particles that compose the second phase precipitates act as pinning points in a similar manner to solutes, though the particles are not necessarily single atoms. The dislocations in a material can interact with the precipitate atoms in one of two ways (see Figure 2). If the precipitate atoms are small, the dislocations would cut through them. As a result, new surfaces (b in Figure 2) of the particle would get exposed to the matrix and the particle-matrix interfacial energy would increase. For larger precipitate particles, looping or bowing of the dislocations would occur and result in dislocations getting longer. Hence, at a critical radius of about 5 nm, dislocations will preferably cut across the obstacle, while for a radius of 30 nm, the dislocations will readily bow or loop to overcome the obstacle. The mathematical descriptions are as follows: For particle bowing- For particle cutting- Dispersion strengthening Dispersion strengthening is a type of particulate strengthening in which incoherent precipitates attract and pin dislocations. These particles are typically larger than those in the Orowon precipitation hardening discussed above. The effect of dispersion strengthening is effective at high temperatures whereas precipitation strengthening from heat treatments are typically limited to temperatures much lower than the melting temperature of the material. One common type of dispersion strengthening is oxide dispersion strengthening. Grain boundary strengthening In a polycrystalline metal, grain size has a tremendous influence on the mechanical properties. Because grains usually have varying crystallographic orientations, grain boundaries arise. While undergoing deformation, slip motion will take place. Grain boundaries act as an impediment to dislocation motion for the following two reasons: 1. Dislocation must change its direction of motion due to the differing orientation of grains. 2. Discontinuity of slip planes from grain one to grain two. The stress required to move a dislocation from one grain to another in order to plastically deform a material depends on the grain size. The average number of dislocations per grain decreases with average grain size (see Figure 3). A lower number of dislocations per grain results in a lower dislocation 'pressure' building up at grain boundaries. This makes it more difficult for dislocations to move into adjacent grains. This relationship is the Hall-Petch relationship and can be mathematically described as follows: , where is a constant, is the average grain diameter and is the original yield stress. The fact that the yield strength increases with decreasing grain size is accompanied by the caveat that the grain size cannot be decreased infinitely. As the grain size decreases, more free volume is generated resulting in lattice mismatch. Below approximately 10 nm, the grain boundaries will tend to slide instead; a phenomenon known as grain-boundary sliding. If the grain size gets too small, it becomes more difficult to fit the dislocations in the grain and the stress required to move them is less. It was not possible to produce materials with grain sizes below 10 nm until recently, so the discovery that strength decreases below a critical grain size is still finding new applications. Transformation hardening This method of hardening is used for steels. High-strength steels generally fall into three basic categories, classified by the strengthening mechanism employed. 
1- solid-solution-strengthened steels (rephos steels) 2- grain-refined steels or high strength low alloy steels (HSLA) 3- transformation-hardened steels Transformation-hardened steels are the third type of high-strength steels. These steels use predominantly higher levels of C and Mn along with heat treatment to increase strength. The finished product will have a duplex micro-structure of ferrite with varying levels of degenerate martensite. This allows for varying levels of strength. There are three basic types of transformation-hardened steels. These are dual-phase (DP), transformation-induced plasticity (TRIP), and martensitic steels. The annealing process for dual -phase steels consists of first holding the steel in the alpha + gamma temperature region for a set period of time. During that time C and Mn diffuse into the austenite leaving a ferrite of greater purity. The steel is then quenched so that the austenite is transformed into martensite, and the ferrite remains on cooling. The steel is then subjected to a temper cycle to allow some level of marten-site decomposition. By controlling the amount of martensite in the steel, as well as the degree of temper, the strength level can be controlled. Depending on processing and chemistry, the strength level can range from 350 to 960 MPa. TRIP steels also use C and Mn, along with heat treatment, in order to retain small amounts of austenite and bainite in a ferrite matrix. Thermal processing for TRIP steels again involves annealing the steel in the a + g region for a period of time sufficient to allow C and Mn to diffuse into austenite. The steel is then quenched to a point above the martensite start temperature and held there. This allows the formation of bainite, an austenite decomposition product. While at this temperature, more C is allowed to enrich the retained austenite. This, in turn, lowers the martensite start temperature to below room temperature. Upon final quenching a metastable austenite is retained in the predominantly ferrite matrix along with small amounts of bainite (and other forms of decomposed austenite). This combination of micro-structures has the added benefits of higher strengths and resistance to necking during forming. This offers great improvements in formability over other high-strength steels. Essentially, as the TRIP steel is being formed, it becomes much stronger. Tensile strengths of TRIP steels are in the range of 600-960 MPa. Martensitic steels are also high in C and Mn. These are fully quenched to martensite during processing. The martensite structure is then tempered back to the appropriate strength level, adding toughness to the steel. Tensile strengths for these steels range as high as 1500 MPa. Strengthening mechanisms in amorphous materials Polymer Polymers fracture via breaking of inter- and intra molecular bonds; hence, the chemical structure of these materials plays a huge role in increasing strength. For polymers consisting of chains which easily slide past each other, chemical and physical cross linking can be used to increase rigidity and yield strength. In thermoset polymers (thermosetting plastic), disulfide bridges and other covalent cross links give rise to a hard structure which can withstand very high temperatures. These cross-links are particularly helpful in improving tensile strength of materials which contain much free volume prone to crazing, typically glassy brittle polymers. 
In thermoplastic elastomer, phase separation of dissimilar monomer components leads to association of hard domains within a sea of soft phase, yielding a physical structure with increased strength and rigidity. If yielding occurs by chains sliding past each other (shear bands), the strength can also be increased by introducing kinks into the polymer chains via unsaturated carbon-carbon bonds. Adding filler materials such as fibers, platelets, and particles is a commonly employed technique for strengthening polymer materials. Fillers such as clay, silica, and carbon network materials have been extensively researched and used in polymer composites in part due to their effect on mechanical properties. Stiffness-confinement effects near rigid interfaces, such as those between a polymer matrix and stiffer filler materials, enhance the stiffness of composites by restricting polymer chain motion. This is especially present where fillers are chemically treated to strongly interact with polymer chains, increasing the anchoring of polymer chains to the filler interfaces and thus further restricting the motion of chains away from the interface. Stiffness-confinement effects have been characterized in model nanocomposites, and shows that composites with length scales on the order of nanometers increase the effect of the fillers on polymer stiffness dramatically. Increasing the bulkiness of the monomer unit via incorporation of aryl rings is another strengthening mechanism. The anisotropy of the molecular structure means that these mechanisms are heavily dependent on the direction of applied stress. While aryl rings drastically increase rigidity along the direction of the chain, these materials may still be brittle in perpendicular directions. Macroscopic structure can be adjusted to compensate for this anisotropy. For example, the high strength of Kevlar arises from a stacked multilayer macrostructure where aromatic polymer layers are rotated with respect to their neighbors. When loaded oblique to the chain direction, ductile polymers with flexible linkages, such as oriented polyethylene, are highly prone to shear band formation, so macroscopic structures which place the load parallel to the draw direction would increase strength. Mixing polymers is another method of increasing strength, particularly with materials that show crazing preceding brittle fracture such as atactic polystyrene (APS). For example, by forming a 50/50 mixture of APS with polyphenylene oxide (PPO), this embrittling tendency can be almost completely suppressed, substantially increasing the fracture strength. Interpenetrating polymer networks (IPNs), consisting of interlacing crosslinked polymer networks that are not covalently bonded to one another, can lead to enhanced strength in polymer materials. The use of an IPN approach imposes compatibility (and thus macroscale homogeneity) on otherwise immiscible blends, allowing for a blending of mechanical properties. For example, silicone-polyurethane IPNs show increased tear and flexural strength over base silicone networks, while preserving the high elastic recovery of the silicone network at high strains. Increased stiffness can also be achieved by pre-straining polymer networks and then sequentially forming a secondary network within the strained material. 
This takes advantage of the anisotropic strain hardening of the original network (chain alignment from stretching of the polymer chains) and provides a mechanism whereby the two networks transfer stress to one another due to the imposed strain on the pre-strained network. Glass Many silicate glasses are strong in compression but weak in tension. By introducing compression stress into the structure, the tensile strength of the material can be increased. This is typically done via two mechanisms: thermal treatment (tempering) or chemical bath (via ion exchange). In tempered glasses, air jets are used to rapidly cool the top and bottom surfaces of a softened (hot) slab of glass. Since the surface cools quicker, there is more free volume at the surface than in the bulk melt. The core of the slab then pulls the surface inward, resulting in an internal compressive stress at the surface. This substantially increases the tensile strength of the material as tensile stresses exerted on the glass must now resolve the compressive stresses before yielding. Alternately, in chemical treatment, a glass slab treated containing network formers and modifiers is submerged into a molten salt bath containing ions larger than those present in the modifier. Due to a concentration gradient of the ions, mass transport must take place. As the larger cation diffuses from the molten salt into the surface, it replaces the smaller ion from the modifier. The larger ion squeezing into surface introduces compressive stress in the glass's surface. A common example is treatment of sodium oxide modified silicate glass in molten potassium chloride. Examples of chemically strengthened glass are Gorilla Glass developed and manufactured by Corning, AGC Inc.'s Dragontrail and Schott AG's Xensation. Composite strengthening Many of the basic strengthening mechanisms can be classified based on their dimensionality. At 0-D there is precipitate and solid solution strengthening with particulates strengthening structure, at 1-D there is work/forest hardening with line dislocations as the hardening mechanism, and at 2-D there is grain boundary strengthening with surface energy of granular interfaces providing strength improvement. The two primary types of composite strengthening, fiber reinforcement and laminar reinforcement, fall in the 1-D and 2-D classes, respectively. The anisotropy of fiber and laminar composite strength reflects these dimensionalities. The primary idea behind composite strengthening is to combine materials with opposite strengths and weaknesses to create a material which transfers load onto the stiffer material but benefits from the ductility and toughness of the softer material. Fiber reinforcement Fiber-reinforced composites (FRCs) consist of a matrix of one material containing parallel embedded fibers. There are two variants of fiber-reinforced composites, one with stiff fibers and a ductile matrix and one with ductile fibers and a stiff matrix. The former variant is exemplified by fiberglass which contains very strong but delicate glass fibers embedded in a softer plastic matrix resilient to fracture. The latter variant is found in almost all buildings as reinforced concrete with ductile, high tensile-strength steel rods embedded in brittle, high compressive-strength concrete. In both cases, the matrix and fibers have complimentary mechanical properties and the resulting composite material is therefore more practical for applications in the real world. 
For a composite containing aligned, stiff fibers which span the length of the material and a soft, ductile matrix, the following descriptions provide a rough model. Four stages of deformation The condition of a fiber-reinforced composite under applied tensile stress along the direction of the fibers can be decomposed into four stages from small strain to large strain. Since the stress is parallel to the fibers, the deformation is described by the isostrain condition, i.e., the fiber and matrix experience the same strain. At each stage, the composite stress () is given in terms of the volume fractions of the fiber and matrix (), the Young's moduli of the fiber and matrix (), the strain of the composite (), and the stress of the fiber and matrix as read from a stress-strain curve (). Both fiber and composite remain in the elastic strain regime. In this stage, we also note that the composite Young's modulus is a simple weighted sum of the two component moduli. The fiber remains in the elastic regime but the matrix yields and plastically deforms. Both fiber and composite yield and plastically deform. This stage often features significant Poisson strain which is not captured by model below. The fiber fractures while the matrix continues to plastically deform. While in reality the fractured pieces of fiber still contribute some strength, it is left out of this simple model. Tensile strength Due to the heterogeneous nature of FRCs, they also feature multiple tensile strengths (TS), one corresponding to each component. Given the assumptions outlined above, the first tensile strength would correspond to failure of the fibers, with some support from the matrix plastic deformation strength, and the second with failure of the matrix. Anisotropy (Orientation effects) As a result of the aforementioned dimensionality (1-D) of fiber reinforcement, significant anisotropy is observed in its mechanical properties. The following equations model the tensile strength of a FRC as a function of the misalignment angle () between the fibers and the applied force, the stresses in the parallel and perpendicular, or and , cases (), and the shear strength of the matrix (). Small Misalignment Angle (longitudinal fracture) The angle is small enough to maintain load transfer onto the fibers and prevent delamination of fibers and the misaligned stress samples a slightly larger cross-sectional area of the fiber so the strength of the fiber is not just maintained but actually increases compared to the parallel case. Significant Misalignment Angle (shear failure) The angle is large enough that the load is not effectively transferred to the fibers and the matrix experiences enough strain to fracture. Near Perpendicular Misalignment Angle (transverse fracture) The angle is close to 90 so most of the load remains in the matrix and thus tensile transverse matrix fracture is the dominant failure condition. This can be seen as complementary to the small angle case, with similar form but with an angle . Laminar reinforcement Applications Strengthening of materials is useful in many applications. A primary application of strengthened materials is for construction. In order to have stronger buildings and bridges, one must have a strong frame that can support high tensile or compressive load and resist plastic deformation. The steel frame used to make the building should be as strong as possible so that it does not bend under the entire weight of the building. 
Polymeric roofing materials would also need to be strong so that the roof does not cave in when there is build-up of snow on the rooftop. Research is also currently being done to increase the strength of metallic materials through the addition of polymer materials such as bonded carbon fiber reinforced polymer to (CFRP). Current research Molecular dynamics simulation assisted studies The molecular dynamics (MD) method has been widely applied in materials science as it can yield information about the structure, properties, and dynamics on the atomic scale that cannot be easily resolved with experiments. The fundamental mechanism behind MD simulation is based on classical mechanics, from which we know the force exerted on a particle is caused by the negative gradient of the potential energy with respect to the particle position. Therefore, a standard procedure to conduct MD simulation is to divide the time into discrete time steps and solve the equations of motion over these intervals repeatedly to update the positions and energies of the particles. Direct observation of atomic arrangements and energetics of particles on the atomic scale makes it a powerful tool to study microstructural evolution and strengthening mechanisms. Grain boundary strengthening There have been extensive studies on different strengthening mechanisms using MD simulation. These studies reveal the microstructural evolution that cannot be either easily observed from an experiment or predicted by a simplified model. Han et al. investigated the grain boundary strengthening mechanism and the effects of grain size in nanocrystalline graphene through a series of MD simulations. Previous studies observed inconsistent grain size dependence of the strength of graphene at the length scale of nm and the conclusions remained unclear. Therefore, Han et al. utilized MD simulation to observe the structural evolution of graphene with nanosized grains directly. The nanocrystalline graphene samples were generated with random shapes and distribution to simulate well-annealed polycrystalline samples. The samples were then loaded with uniaxial tensile stress, and the simulations were carried out at room temperature. By decreasing the grain size of graphene, Han et al. observed a transition from an inverse pseudo Hall-Petch behavior to pseudo Hall-Petch behavior and the critical grain size is 3.1 nm. Based on the arrangement and energetics of simulated particles, the inverse pseudo Hall-Petch behavior can be attributed to the creation of stress concentration sites due to the increase in the density of grain boundary junctions. Cracks then preferentially nucleate on these sites and the strength decreases. However, when the grain size is below the critical value, the stress concentration at the grain boundary junctions decreases because of stress cancellation between 5 and 7 defects. This cancellation helps graphene sustain the tensile load and exhibit a pseudo Hall-Petch behavior. This study explains the previous inconsistent experimental observations and provides an in-depth understanding of the grain boundary strengthening mechanism of nanocrystalline graphene, which cannot be easily obtained from either in-situ or ex-situ experiments. Precipitate strengthening There are also MD studies done on precipitate strengthening mechanisms. Shim et al. applied MD simulations to study the precipitate strengthening effects of nanosized body-centered-cubic (bcc) Cu on face-centered-cubic (fcc) Fe. 
As discussed in the previous section, the precipitate strengthening effects are caused by the interaction between dislocations and precipitates. Therefore, the characteristics of dislocation play an important role on the strengthening effects. It is known that a screw dislocation in bcc metals has very complicated features, including a non-planar core and the twinning-anti-twinning asymmetry. This complicates the strengthening mechanism analysis and modeling and it cannot be easily revealed by high resolution electron microscopy. Thus, Shim et al. simulated coherent bcc Cu precipitates with diameters ranging from 1 to 4 nm embedded in the fcc Fe matrix. A screw dislocation is then introduced and driven to glide on a {112} plane by an increasing shear stress until it detaches from the precipitates. The shear stress that causes the detachment is regarded as the critical resolved shear stress (CRSS). Shim et al. observed that the screw dislocation velocity in the twinning direction is 2-4 times larger than that in the anti-twinning direction. The reduced velocity in the anti-twinning direction is mainly caused by a transition in the screw dislocation glide from the kink-pair to the cross-kink mechanism. In contrast, a screw dislocation overcomes the precipitates of 1–3.5 nm by shearing in the twinning direction. In addition, it also has been observed that the screw dislocation detachment mechanism with the larger, transformed precipitates involves annihilation-and-renucleation and Orowan looping in the twinning and anti-twinning direction, respectively. To fully characterize the involved mechanisms, it requires intensive transmission electron microscopy analysis and it is normally hard to give a comprehensive characterization. Solid solution strengthening and alloying A similar study has been done by Zhang et al. on studying the solid solution strengthening of Co, Ru, and Re of different concentrations in fcc Ni. The edge dislocation was positioned at the center of Ni and its slip system was set to be <110> {111}. Shear stress was then applied to the top and bottom surfaces of the Ni with a solute atom (Co, Ru, or Re) embedded at the center at 300 K. Previous studies have shown that the general view of size and modulus effects cannot fully explain the solid solution strengthening caused by Re in this system due to their small values. Zhang et al. took a step further to combine the first-principle DFT calculations with MD to study the influence of stacking fault energy (SFE) on strengthening, as partial dislocations can easily form in this material structure. MD simulation results indicate that Re atoms strongly drag to edge dislocation motion and the DFT calculation reveals a dramatic increase in SFE, which is due to the interaction between host atoms and solute atoms located in the slip plane. Further, similar relations have also been found in fcc Ni embedded with Ru and Co. Limitation of the MD studies of strengthening mechanisms These studies show great examples of how the MD method can assist the studies of strengthening mechanisms and provides more insights on the atomic scale. However, it is important to note the limitations of the method. To obtain accurate MD simulation results, it is essential to build a model that properly describes the interatomic potential based on bonding. The interatomic potentials are approximations rather than exact descriptions of interactions. The accuracy of the description varies significantly with the system and complexity of the potential form. 
For example, if the bonding is dynamic, meaning that the bonding changes depending on atomic positions, a dedicated interatomic potential is required for the MD simulation to yield accurate results. Therefore, interatomic potentials need to be tailored to the bonding. The following interatomic potential models are commonly used in materials science: the Born-Mayer potential, the Morse potential, the Lennard-Jones potential, and the Mie potential. Although they give very similar results for the variation of potential energy with respect to particle position, there is a non-negligible difference in their repulsive tails. These characteristics make each of them better suited to describing materials systems with particular types of chemical bond. In addition to the inherent errors in interatomic potentials, the number of atoms and the number of time steps in MD are limited by the available computational power. Nowadays, it is common to simulate an MD system with many millions of atoms, but this still limits the length scale of the simulation to roughly a micron. The time steps in MD are also very small, so a long simulation will only yield results on the time scale of a few nanoseconds. To further extend the simulation time, it is common to apply a bias potential that changes the barrier height, thereby accelerating the dynamics. This method is called hyperdynamics. Properly applied, it can typically extend simulation times to microseconds. Nanostructure fabrication for material strengthening Based on the strengthening mechanisms discussed above, work is also being done to enhance strength by deliberately fabricating nanostructures in materials. Several representative methods are introduced here, including hierarchical nanotwinned structures, pushing the limit of grain size for strengthening, and dislocation engineering. Hierarchical nanotwinned structures As discussed above, hindering dislocation motion strengthens materials considerably. Nanoscale twins – crystalline regions related by symmetry – can effectively block dislocation motion because of the change in microstructure at the interface. The formation of hierarchical nanotwinned structures pushes this hindrance to the extreme through the construction of a complex three-dimensional nanotwinned network. The careful design of hierarchical nanotwinned structures is therefore of great importance for creating materials with very high strength. For instance, Yue et al. constructed a diamond composite with a hierarchically nanotwinned structure by manipulating the synthesis pressure. The resulting composite showed higher strength than typical engineering metals and ceramics. Pushing the limit of grain size for strengthening The Hall-Petch effect shows that the yield strength of materials increases with decreasing grain size. However, many researchers have found that nanocrystalline materials soften when the grain size decreases below a critical point, a phenomenon called the inverse Hall-Petch effect. The interpretation of this phenomenon is that extremely small grains cannot support the dislocation pile-ups that provide extra stress concentration in larger grains. At this point, the strengthening mechanism changes from dislocation-dominated strain hardening to growth softening and grain rotation.
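The ordinary and inverse Hall-Petch regimes can be sketched with the relation quoted earlier, sigma_y = sigma_0 + k*d**(-1/2), together with a flag below an assumed critical grain size where the relation is expected to break down. The constants and the 15 nm crossover below are placeholder assumptions, not measured values; the point is only that the d**(-1/2) strengthening trend stops applying once the grain size falls below some critical value.

```python
# Toy Hall-Petch sketch: strengthening with decreasing grain size, flagged once the
# grain size falls below an assumed critical value where the relation breaks down.
import math

SIGMA_0 = 100e6     # friction (single-crystal) yield stress, Pa (assumed)
K_HP = 0.15e6       # Hall-Petch coefficient, Pa * m**0.5 (assumed)
D_CRITICAL = 15e-9  # assumed crossover into the inverse Hall-Petch regime, m

def hall_petch_yield(d):
    sigma_y = SIGMA_0 + K_HP / math.sqrt(d)
    regime = "Hall-Petch" if d >= D_CRITICAL else "inverse regime: relation not valid"
    return sigma_y, regime

for d in (10e-6, 1e-6, 100e-9, 10e-9):
    sigma_y, regime = hall_petch_yield(d)
    print(f"d = {d * 1e9:8.0f} nm  ->  sigma_y ~ {sigma_y / 1e6:5.0f} MPa   [{regime}]")
```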
The inverse Hall-Petch effect typically occurs at grain sizes ranging from 10 nm to 30 nm, which makes it hard for nanocrystalline materials to achieve high strength. To push the limit of grain size for strengthening, grain rotation and growth can be hindered by grain boundary stabilization. Constructing nanolaminated structures with low-angle grain boundaries is one method of obtaining ultrafine-grained materials with very high strength. Lu et al. applied very high-rate shear deformation with large strain gradients to the top surface layer of a bulk Ni sample and introduced nanolaminated structures. This material exhibits an ultra-high hardness, higher than that of any reported ultrafine-grained nickel. The exceptional strength results from the presence of low-angle grain boundaries, whose low-energy states are effective in enhancing structural stability. Another method of stabilizing grain boundaries is the addition of nonmetallic impurities. Nonmetallic impurities often aggregate at grain boundaries and can affect the strength of materials by changing the grain boundary energy. Rupert et al. conducted first-principles simulations to study the impact of adding common nonmetallic impurities on the Σ5 (310) grain boundary energy in Cu. They reported that decreasing the covalent radius of the impurity and increasing its electronegativity lead to an increase in the grain boundary energy and further strengthen the material. For instance, boron stabilizes the grain boundary by enhancing the charge density among the adjacent Cu atoms, improving the bonding across the boundary between the two grains. Dislocation engineering Previous studies on the impact of dislocation motion on material strengthening mainly focused on high dislocation densities, which are effective for enhancing strength at the cost of reduced ductility. Engineering the structure and distribution of dislocations is a promising route to comprehensively improving material performance. Solutes tend to aggregate at dislocations and are therefore promising for dislocation engineering. Kimura et al. performed atom probe tomography and observed the aggregation of niobium atoms at dislocations. The segregation energy was calculated to be almost the same as the grain boundary segregation energy. That is to say, the interaction between niobium atoms and dislocations hindered the recovery of dislocations and thus strengthened the material. Introducing dislocations with heterogeneous characteristics can also be used for material strengthening. Lu et al. introduced ordered oxygen complexes into a TiZrHfNb alloy. Unlike traditional interstitial strengthening, the introduction of the ordered oxygen complexes enhanced the strength of the alloy without sacrificing ductility. The mechanism was that the ordered oxygen complexes changed the dislocation motion mode from planar slip to wavy slip and promoted double cross-slip. See also Grain boundary strengthening Precipitation strengthening Solid solution strengthening Strength of materials Tempering (metallurgy) Work hardening References External links Grain boundary strengthening in alumina by rare earth impurities Mechanism of grain boundary strengthening of steels An open source Matlab toolbox for analysis of slip transfer through grain boundaries Materials science
Strengthening mechanisms of materials
[ "Physics", "Materials_science", "Engineering" ]
7,476
[ "Strengthening mechanisms of materials", "Applied and interdisciplinary physics", "Materials science", "nan" ]
14,370,049
https://en.wikipedia.org/wiki/Schwarzschild%20criterion
Discovered by Martin Schwarzschild, the Schwarzschild criterion is a criterion in astrophysics stating that a stellar medium is stable against convection when the rate of change in temperature (T) with altitude (Z) satisfies $\frac{dT}{dZ} > -\frac{g}{c_p}$, where $g$ is the gravitational acceleration and $c_p$ is the heat capacity at constant pressure. If a gas is unstable against convection, an element displaced upwards is buoyant and keeps rising, while an element displaced downwards is denser than its surroundings and continues to sink. The Schwarzschild criterion therefore dictates whether an element of a star will rise or sink if displaced by random fluctuations within the star, or whether the forces the element experiences will return it to its original position. For the Schwarzschild criterion to hold, the displaced element must have a bulk velocity which is highly subsonic. If this is the case, the time over which the pressure surrounding the element changes is much longer than the time it takes for a sound wave to travel through the element and smooth out pressure differences between the element and its surroundings. If this were not the case, the element would not hold together as it travelled through the star. In order to keep rising or sinking in the star, the displaced element must not be able to reach the same density as the gas surrounding it. In other words, it must respond adiabatically to its surroundings, and for this to be true it must move fast enough that there is insufficient time for it to exchange heat with its surroundings. The Schwarzschild criterion is often written as $\left|\frac{dT}{dZ}\right|_{\mathrm{ad}} < \left|\frac{dT}{dZ}\right|_{\mathrm{rad}}$, which indicates that convection takes place whenever the adiabatic temperature gradient is less steep than the radiative temperature gradient (both gradients are usually negative). Stellar-structure models indicate that the two gradients are seldom of the same order of magnitude, so that the smaller can usually be neglected, even if both are always present. See also Archimedes' principle Brunt–Väisälä frequency Convection References Concepts in stellar astronomy
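As a small worked example of the criterion above, the adiabatic reference gradient -g/c_p can be compared directly with a trial temperature gradient. The surface gravity and heat capacity below are rough, assumed solar-photosphere-like numbers used only for illustration.

```python
# Stability check against the Schwarzschild criterion: stable if dT/dZ > -g / c_p.
# g and c_p are rough assumed values for a hydrogen-rich stellar envelope.

g = 274.0      # m s^-2, approximate solar surface gravity
c_p = 2.1e4    # J kg^-1 K^-1, assumed heat capacity at constant pressure

adiabatic_gradient = -g / c_p          # about -0.013 K per metre with these numbers

def convectively_stable(dT_dZ):
    """True if the actual gradient is shallower (less negative) than the adiabatic one."""
    return dT_dZ > adiabatic_gradient

for gradient in (-0.005, -0.05):
    verdict = "stable" if convectively_stable(gradient) else "unstable: convection sets in"
    print(f"dT/dZ = {gradient:+.3f} K/m  ->  {verdict}")
```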
Schwarzschild criterion
[ "Physics", "Astronomy" ]
403
[ "Concepts in astrophysics", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Concepts in stellar astronomy" ]
7,100,728
https://en.wikipedia.org/wiki/Quantum%20vortex
In physics, a quantum vortex represents a quantized flux circulation of some physical quantity. In most cases, quantum vortices are a type of topological defect exhibited in superfluids and superconductors. The existence of quantum vortices was first predicted by Lars Onsager in 1949 in connection with superfluid helium. Onsager reasoned that quantisation of vorticity is a direct consequence of the existence of a superfluid order parameter as a spatially continuous wavefunction. Onsager also pointed out that quantum vortices describe the circulation of superfluid and conjectured that their excitations are responsible for superfluid phase transitions. These ideas of Onsager were further developed by Richard Feynman in 1955 and in 1957 were applied to describe the magnetic phase diagram of type-II superconductors by Alexei Alexeyevich Abrikosov. In 1935 Fritz London published a very closely related work on magnetic flux quantization in superconductors. London's fluxoid can also be viewed as a quantum vortex. Quantum vortices are observed experimentally in type-II superconductors (the Abrikosov vortex), liquid helium, and atomic gases (see Bose–Einstein condensate), as well as in photon fields (optical vortex) and exciton-polariton superfluids. In a superfluid, a quantum vortex "carries" quantized orbital angular momentum, thus allowing the superfluid to rotate; in a superconductor, the vortex carries quantized magnetic flux. The term "quantum vortex" is also used in the study of few body problems. Under the de Broglie–Bohm theory, it is possible to derive a "velocity field" from the wave function. In this context, quantum vortices are zeros on the wave function, around which this velocity field has a solenoidal shape, similar to that of irrotational vortex on potential flows of traditional fluid dynamics. Vortex-quantisation in a superfluid In a superfluid, a quantum vortex is a hole with the superfluid circulating around the vortex axis; the inside of the vortex may contain excited particles, air, vacuum, etc. The thickness of the vortex depends on a variety of factors; in liquid helium, the thickness is of the order of a few Angstroms. A superfluid has the special property of having phase, given by the wavefunction, and the velocity of the superfluid is proportional to the gradient of the phase (in the parabolic mass approximation). The circulation around any closed loop in the superfluid is zero if the region enclosed is simply connected. The superfluid is deemed irrotational; however, if the enclosed region actually contains a smaller region with an absence of superfluid, for example a rod through the superfluid or a vortex, then the circulation is: where is the Planck constant divided by , m is the mass of the superfluid particle, and is the total phase difference around the vortex. Because the wave-function must return to its same value after an integer number of turns around the vortex (similar to what is described in the Bohr model), then , where is an integer. Thus, the circulation is quantized: London's flux quantization in a superconductor A principal property of superconductors is that they expel magnetic fields; this is called the Meissner effect. If the magnetic field becomes sufficiently strong it will, in some cases, “quench” the superconductive state by inducing a phase transition. In other cases, however, it will be energetically favorable for the superconductor to form a lattice of quantum vortices, which carry quantized magnetic flux through the superconductor. 
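The two quantised quantities that appear in this article, the circulation of a superfluid vortex and the magnetic flux of a superconducting vortex, can be evaluated from the standard relations kappa = n*h/m and Phi = n*h/(2e). The constants below are standard physical constants, and the factor 2e reflects the Cooper-pair charge discussed in the next section; the relations themselves are the usual textbook forms rather than anything specific to this text.

```python
# Quantised circulation of a superfluid helium-4 vortex and the superconducting flux
# quantum, from the standard relations kappa = n*h/m and Phi = n*h/(2e).

h = 6.62607015e-34      # Planck constant, J s
m_he4 = 6.6464731e-27   # mass of a helium-4 atom, kg
e = 1.602176634e-19     # elementary charge, C

def circulation(n=1, m=m_he4):
    """Circulation carried by an n-quantum vortex in a superfluid, m^2/s."""
    return n * h / m

def flux_quantum(n=1):
    """Magnetic flux carried by an n-quantum vortex in a superconductor, Wb."""
    return n * h / (2.0 * e)   # 2e because the carriers are Cooper pairs

print(f"kappa (n = 1, helium-4): {circulation():.3e} m^2/s")   # ~1.0e-7 m^2/s
print(f"Phi_0 (n = 1):           {flux_quantum():.3e} Wb")     # ~2.07e-15 Wb
```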
A superconductor that is capable of supporting vortex lattices is called a type-II superconductor, vortex-quantization in superconductors is general. Over some enclosed area S, the magnetic flux is where is the vector potential of the magnetic induction Substituting a result of London's equation: , we find (with ): where ns, m, and es are, respectively, number density, mass, and charge of the Cooper pairs. If the region, S, is large enough so that along , then The flow of current can cause vortices in a superconductor to move, causing the electric field due to the phenomenon of electromagnetic induction. This leads to energy dissipation and causes the material to display a small amount of electrical resistance while in the superconducting state. Constrained vortices in ferromagnets and antiferromagnets The vortex states in ferromagnetic or antiferromagnetic material are also important, mainly for information technology. They are exceptional, since in contrast to superfluids or superconducting material one has a more subtle mathematics: instead of the usual equation of the type where is the vorticity at the spatial and temporal coordinates, and where is the Dirac function, one has: where now at any point and at any time there is the constraint . Here is constant, the constant magnitude of the non-constant magnetization vector . As a consequence the vector in eqn. (*) has been modified to a more complex entity . This leads, among other points, to the following fact: In ferromagnetic or antiferromagnetic material a vortex can be moved to generate bits for information storage and recognition, corresponding, e.g., to changes of the quantum number n. But although the magnetization has the usual azimuthal direction, and although one has vorticity quantization as in superfluids, as long as the circular integration lines surround the central axis at far enough perpendicular distance, this apparent vortex magnetization will change with the distance from an azimuthal direction to an upward or downward one, as soon as the vortex center is approached. Thus, for each directional element there are now not two, but four bits to be stored by a change of vorticity: The first two bits concern the sense of rotation, clockwise or counterclockwise; the remaining bits three and four concern the polarization of the central singular line, which may be polarized up- or downwards. The change of rotation and/or polarization involves subtle topology. Statistical mechanics of vortex lines As first discussed by Onsager and Feynman, if the temperature in a superfluid or a superconductor is raised, the vortex loops undergo a second-order phase transition. This happens when the configurational entropy overcomes the Boltzmann factor, which suppresses the thermal or heat generation of vortex lines. The lines form a condensate. Since the centre of the lines, the vortex cores, are normal liquid or normal conductors, respectively, the condensation transforms the superfluid or superconductor into the normal state. The ensembles of vortex lines and their phase transitions can be described efficiently by a gauge theory. Statistical mechanics of point vortices In 1949 Onsager analysed a toy model consisting of a neutral system of point vortices confined to a finite area. He was able to show that, due to the properties of two-dimensional point vortices the bounded area (and consequently, bounded phase space), allows the system to exhibit negative temperatures. 
Onsager provided the first prediction that some isolated systems can exhibit negative Boltzmann temperature. Onsager's prediction was confirmed experimentally for a system of quantum vortices in a Bose-Einstein condensate in 2019. Pair-interactions of quantum vortices In a nonlinear quantum fluid, the dynamics and configurations of the vortex cores can be studied in terms of effective vortex–vortex pair interactions. The effective intervortex potential is predicted to affect quantum phase transitions and giving rise to different few-vortex molecules and many-body vortex patterns. Preliminary experiments in the specific system of exciton-polaritons fluids showed an effective attractive–repulsive intervortex dynamics between two cowinding vortices, whose attractive component can be modulated by the nonlinearity amount in the fluid. Spontaneous vortices Quantum vortices can form via the Kibble–Zurek mechanism. As a condensate forms by quench cooling, separate protocondensates form with independent phases. As these phase domains merge quantum vortices can be trapped in the emerging condensate order parameter. Spontaneous quantum vortices were observed in atomic Bose–Einstein condensates in 2008. See also Vortex Optical vortex Macroscopic quantum phenomena Abrikosov vortex Josephson vortex Fractional vortices Superfluid helium-4 Superfluid film Superconductor Type-II superconductor Type-1.5 superconductor Quantum turbulence Bose–Einstein condensate Negative temperature References Vortices Quantum mechanics Superconductivity Superfluidity
Quantum vortex
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,859
[ "Electrical resistance and conductance", "Physical phenomena", "Phase transitions", "Physical quantities", "Vortices", "Superconductivity", "Phases of matter", "Theoretical physics", "Quantum mechanics", "Superfluidity", "Materials science", "Condensed matter physics", "Exotic matter", "Dy...
7,102,158
https://en.wikipedia.org/wiki/A514%20steel
A514 is a particular type of high strength steel, which is quenched and tempered alloy steel, with a yield strength of 100,000 psi (100 ksi or approximately 700 MPa). The ArcelorMittal trademarked name is T-1. A514 is primarily used as a structural steel for building construction. A517 is a closely related alloy that is used for the production of high-strength pressure vessels. This is a standard set by the standards organization ASTM International, a voluntary standards development organization that sets technical standards for materials, products, systems, and services. Specifications A514 The tensile yield strength of A514 alloys is specified as at least for thicknesses up to thick plate, and at least ultimate tensile strength, with a specified ultimate range of . Plates from thick have specified strength of (yield) and (ultimate). A517 A517 steel has equal tensile yield strength, but slightly higher specified ultimate strength of for thicknesses up to and for thicknesses . Usage A514 steels are used where a weldable, machinable, very high strength steel is required to save weight or meet ultimate strength requirements. It is normally used as a structural steel in building construction, cranes, or other large machines supporting high loads. In addition, A514 steels are specified by military standards (ETL 18-11) for use as small-arms firing range baffles and deflector plates. References Steels Structural steel
A514 steel
[ "Engineering" ]
308
[ "Steels", "Structural engineering", "Alloys", "Structural steel" ]
7,102,599
https://en.wikipedia.org/wiki/Lutz%E2%80%93Kelker%20bias
The Lutz–Kelker bias is a supposed systematic bias that results from the assumption that the probability of a star being at distance increases with the square of the distance which is equivalent to the assumption that the distribution of stars in space is uniform. In particular, it causes measured parallaxes to stars to be larger than their actual values. The bias towards measuring larger parallaxes in turn results in an underestimate of distance and therefore an underestimate on the object's luminosity. For a given parallax measurement with an accompanying uncertainty, both stars closer and farther may, because of uncertainty in measurement, appear at the given parallax. Assuming uniform stellar distribution in space, the probability density of the true parallax per unit range of parallax will be proportional to (where is the true parallax), and therefore, there will be more stars in the volume shells at farther distance. As a result of this dependence, more stars will have their true parallax smaller than the observed parallax. Thus, the measured parallax will be systematically biased towards a value larger than the true parallax. This causes inferred luminosities and distances to be too small, which poses an apparent problem to astronomers trying to measure distance. The existence (or otherwise) of this bias and the necessity of correcting for it has become relevant in astronomy with the precision parallax measurements made by the Hipparcos satellite and more recently with the high-precision data releases of the Gaia mission. The correction method due to Lutz and Kelker placed a bound on the true parallax of stars. This is not valid because true parallax (as distinct from measured parallax) cannot be known. Integrating over all true parallaxes (all space) assumes that stars are equally visible at all distances, and leads to divergent integrals yielding an invalid calculation. Consequently, the Lutz-Kelker correction should not be used. In general, other corrections for systematic bias are required, depending on the selection criteria of the stars under consideration. The scope of effects of the bias are also discussed in the context of the current higher-precision measurements and the choice of stellar sample where the original stellar distribution assumptions are not valid. These differences result in the original discussion of effects to be largely overestimated and highly dependent on the choice of stellar sample. It also remains possible that relations to other forms of statistical bias such as the Malmquist bias may have a counter-effect on the Lutz–Kelker bias for at least some samples. Mathematical Description Original Description The Distribution Function Mathematically, the Lutz-Kelker Bias originates from the dependence of the number density on the observed parallax that is translated into the conditional probability of parallax measurements. Assuming a Gaussian distribution of the observed parallax about the true parallax due to errors in measurement, we can write the conditional probability distribution function of measuring a parallax of given that the true parallax is as since the estimation is of a true parallax based on the measured parallax, the conditional probability of the true parallax being , given that the observed parallax is is of interest. In the original treatment of the phenomenon by Lutz & Kelker, this probability, using Bayes theorem, is given as where and are the prior probabilities of the true and observed parallaxes respectively. 
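A short numerical sketch of the Bayesian setup just described: a Gaussian measurement likelihood for the observed parallax is combined with a prior on the true parallax. The uniform-space-density prior proportional to the inverse fourth power of the parallax is the one derived in the next subsection; the observed parallax, its uncertainty and the grid limits are made-up numbers for illustration, and the grid is truncated away from zero because, as discussed below, the prior alone is not normalisable.

```python
# Sketch of the Lutz-Kelker posterior: Gaussian likelihood times a pi**-4 prior
# (uniform space density). All numbers are illustrative assumptions.
import numpy as np

pi_obs = 10.0     # observed parallax, mas (assumed)
sigma = 1.5       # parallax uncertainty, mas (assumed, ~15% of the observed value)

# Grid of candidate true parallaxes; truncated above zero since the prior diverges there.
pi_true = np.linspace(3.0, 20.0, 2000)

likelihood = np.exp(-0.5 * ((pi_obs - pi_true) / sigma) ** 2)
prior = pi_true ** -4.0
posterior = likelihood * prior
posterior /= np.trapz(posterior, pi_true)

mode = pi_true[np.argmax(posterior)]
mean = np.trapz(pi_true * posterior, pi_true)
print(f"observed parallax:  {pi_obs:.2f} mas")
print(f"posterior mode:     {mode:.2f} mas   (pulled below the observed value)")
print(f"posterior mean:     {mean:.2f} mas")
```

With a fractional uncertainty of about 15 percent, the peak of this posterior already sits noticeably below the measured parallax, which is the systematic shift the article describes.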
Dependence on Distance The probability density of finding a star with apparent magnitude at a distance can be similarly written as where is the probability density of finding a star with apparent magnitude m with a given distance . Here, will be dependent on the luminosity function of the star, which depends on its absolute magnitude. is the probability density function of the apparent magnitude independent of distance. The probability of a star being at distance will be proportional to such that Assuming a uniform distribution of stars in space, the number density becomes a constant and we can write , where . Since we deal with the probability distribution of the true parallax based on a fixed observed parallax, the probability density becomes irrelevant and we can conclude that the distribution will have the proportionality and thus, Normalization The conditional probability of the true parallax based on the observed parallax is divergent around zero for the true parallax. Therefore, it is not possible to normalize this probability. Following the original description of the bias, we can define a normalization by including the observed parallax as The inclusion of does not affect proportionality since it is a fixed constant. Moreover, in this defined "normalization", we will get a probability of 1 when the true parallax is equal to the observed parallax, regardless of the errors in measurement. Therefore, we can define a dimensionless parallax and get the dimensionless distribution of the true parallax as Here, represents the point where the measurement in parallax is equal to its true value, where the probability distribution should be centered. However, this distribution, due to factor will deviate from the point to smaller values. This presents the systematic Lutz-Kelker Bias. The value of this bias will be based on the value of , the marginal uncertainty in parallax measurement. Scope of Effects Original Treatment In the original treatment of the Lutz–Kelker bias as it was first proposed the uncertainty in parallax measurement is considered to be the sole source of bias. As a result of the parallax dependence of stellar distributions, smaller uncertainty in the observed parallax will result in only a slight bias from the true parallax value. Larger uncertainties in contrast would yield higher systematic deviations of the observed parallax from its true value. Large errors in parallax measurement become apparent in luminosity calculations and are therefore easy to detect. Consequently, the original treatment of the phenomenon considered the bias to be effective when the uncertainty in the observed parallax, , is close to about 15% of the measured value, . This was a very strong statement indicating that if the uncertainty in the parallax in about 15–20%, the bias is so effective that we lose most of the parallax and distance information. Several subsequent work on the phenomenon refuted this argument and it was shown that the scope is actually very sample based and may be dependent on other sources of bias. Therefore, more recently it is argued that the scope for most stellar samples is not as drastic as first proposed. Subsequent Discussions Following the original statement, the scope of the effects of the bias, as well as its existence and relative methods of correction have been discussed in many works in recent literature, including subsequent work of Lutz himself. 
Several subsequent work state that the assumption of uniform stellar distribution may not be applicable depending on the choice of stellar sample. Moreover, the effects of different distributions of stars in space as well as that of measurement errors would yield different forms of bias. This suggests the bias is largely dependent on the specific choice of sample and measurement error distributions, although the term Lutz–Kelker bias is commonly used generically for the phenomenon on all stellar samples. It is also questioned whether other sources of error and bias such as the Malmquist Bias actually counter-effect or even cancel the Lutz–Kelker bias, so that the effects are not as drastic as initially described by Lutz and Kelker. Overall, such differences are discussed to result in effects of the bias to be largely overestimated in the original treatment. More recently, the effects of the Lutz–Kelker bias became relevant in the context of the high-precision measurements of Gaia mission. The scope of effects of Lutz–Kelker bias on certain samples is discussed in the recent Gaia data releases, including the original assumptions and the possibility of different distributions. It remains important to take bias effects with caution regarding sample selection as stellar distribution is expected to be non-uniform at large distance scales. As a result, it is questioned whether correction methods, including the Lutz-Kelker correction proposed in the original work, are applicable for a given stellar sample, since effects are expected to depend on the stellar distribution. Moreover, following the original description and the dependence of the bias on the measurement errors, the effects are expected to be lower due to the higher precision of current instruments such as Gaia. History The original description of the phenomenon was presented in a paper by Thomas E. Lutz and Douglas H. Kelker in the Publications of the Astronomical Society of the Pacific, Vol. 85, No. 507, p. 573 article entitled "On the Use of Trigonometric Parallaxes for the Calibration of Luminosity Systems: Theory." although it was known following the work of Trumpler & Weaver in 1953. The discussion on statistical bias on measurements in astronomy date back to as early as to Eddington in 1913. References Astrometry
Lutz–Kelker bias
[ "Astronomy" ]
1,834
[ "Astrometry", "Astronomical sub-disciplines" ]
7,102,642
https://en.wikipedia.org/wiki/Leak-down%20tester
A leak-down tester is a measuring instrument used to determine the condition of internal combustion engines by introducing compressed air into the cylinder and measuring the rate at which it leaks out. Compression testing is a crude form of leak-down testing which also includes effects due to compression ratio, valve timing, cranking speed, and other factors. Compression tests should normally be done with all spark plugs removed to maximize cranking speed. Cranking compression is a dynamic test of the actual low-speed pumping action, where peak cylinder pressure is measured and stored. Leak-down testing is a static test. A leak-down test measures the cylinder's leakage paths. Leak-down primarily tests pistons and rings, seated valve sealing, and the head gasket. Leak-down will not show valve timing and movement problems, or piston movement related sealing problems. Any test should include both compression and leak-down. Testing is done on an engine which is not running, and normally with the tested cylinder at top dead center on the compression stroke, although testing can be done at other points in the compression and power stroke. Pressure is fed into a cylinder via the spark plug hole and the flow, which represents any leakage from the cylinder, is measured. Leak-down tests tend to rotate the engine, and often require some method of holding the crankshaft in the proper position for each tested cylinder. This can be as simple as a breaker bar on a crankshaft bolt in an automatic transmission vehicle, or leaving a manual transmission vehicle in a high gear with the parking brake locked. Leakage is given in wholly arbitrary percentages, but these “percentages” do not relate to any actual quantity or real dimension. The meaning of the readings is only relative to other tests done with the same tester design. Leak-down readings of up to 20% are usually acceptable. Leakages over 20% generally indicate internal repairs are required. Racing engines would be in the 1–10% range for top performance, although this number can vary. Ideally, a baseline number should be taken on a fresh engine and recorded, so that the same leakage tester, or the same leakage tester design, can later be used to determine wear. In the United States, FAA specifications for aircraft engines up to a given displacement prescribe the restriction orifice diameter and length and a 60-degree approach angle, as well as the input pressure and the minimum acceptable cylinder pressure. While the leak-down tester pressurizes the cylinder, the mechanic can listen to various parts to determine where any leak may originate. For example, a leaking exhaust valve will make a hissing noise in the exhaust pipe while a head gasket may cause bubbling in the cooling system. How it works A leak-down tester is essentially a miniature flow meter similar in concept to an air flow bench. The measuring element is the restriction orifice, and the leakage in the engine is compared to the flow of this orifice. There will be a pressure drop across the orifice and another across any points of leakage in the engine. Since the meter and engine are connected in series, the flow is the same across both. (For example: if the meter were unconnected so that all the air escapes, the reading would be 0, or 100% leakage. Conversely, if there is no leakage there will be no pressure drop across either the orifice or the leak, giving a reading of 100, or 0% leakage.) Gauge meter faces can be numbered 0–100 or 100–0, indicating either 0% at full pressure or 100% at full pressure.
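The series-orifice picture above can be illustrated with a short sketch. This is a deliberately simplified model that assumes incompressible square-root orifice flow and ignores the turbulence, choking and seating effects discussed below, so the function and the numbers are illustrative only.

```python
import math

def leakdown_reading(input_psi, orifice_dia_in, leak_dia_in):
    """Percent leakage indicated by an idealized two-orifice leak-down tester.

    Model: the metering orifice and the cylinder leak act as two restrictions in
    series, each passing a flow proportional to area * sqrt(pressure drop)
    (simplified incompressible orifice flow).  Equating the two flows gives the
    intermediate (cylinder) pressure, and the gauge reading follows from it.
    """
    a_orifice = math.pi * (orifice_dia_in / 2) ** 2
    a_leak = math.pi * (leak_dia_in / 2) ** 2
    # Flow balance: a_orifice * sqrt(P_in - P_cyl) = a_leak * sqrt(P_cyl - 0)
    p_cyl = input_psi * a_orifice**2 / (a_orifice**2 + a_leak**2)
    leakage_percent = 100.0 * (1.0 - p_cyl / input_psi)
    return p_cyl, leakage_percent

for leak in (0.0, 0.020, 0.040, 0.080):   # effective leak "diameters" in inches
    p_cyl, pct = leakdown_reading(100.0, orifice_dia_in=0.040, leak_dia_in=leak)
    print(f"leak {leak:.3f} in -> cylinder {p_cyl:5.1f} psi, indicated leakage {pct:4.1f}%")
# A leak path matching the 0.040 in metering orifice reads 50%, no leak reads 0%,
# and a much larger leak approaches 100%, as described in the text.
```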
There is no standard regarding the size of the restriction orifice for non-aviation use, and that is what leads to differences in readings between leak-down testers generally available from different manufacturers. Most often quoted, though, is a restriction with a .040 in. hole drilled in it. Some poorly designed units do not include a restriction orifice at all, relying on the internal restriction of the regulator, and give much less accurate results. In addition, large engines and small engines will be measured in the same way (compared to the same orifice), but a small leak in a large engine would be a large leak in a small engine. A locomotive engine which gives a leak-down of 10% on a leak-down tester is virtually perfectly sealed, while the same tester giving a 10% reading on a model airplane engine indicates a catastrophic leak. With a non-turbulent .040" orifice, and with a cylinder leakage effective orifice size of .040", leakage would be 50% at any pressure. At higher leakages the orifice can become turbulent, and this makes flow non-linear. Also, leakage paths in cylinders can be turbulent at fairly low flow rates. This makes leakage non-linear with test pressure. Further complicating things, nonstandard restriction orifice sizes will cause different indicated leakage percentages with the same cylinder leakage. Leak-down testers are most accurate at low leakage levels, and the exact leakage reading is just a relative indication that can vary significantly between instruments. Some manufacturers use only a single gauge. In these instruments, the orifice inlet pressure is maintained automatically by the pressure regulator. A single gauge works well as long as leakage flow is much less than regulator flow. Any error in the input pressure will produce a corresponding error in the reading. As a single gauge instrument approaches 100% leakage, the leakage scale error reaches its maximum. This may or may not induce significant error, depending on regulator flow and orifice flow. At low and modest leakage percentages, there is little or no difference between single and dual gauges. In instruments with two gauges, the operator manually resets the pressure to 100 after connection to the engine, guaranteeing consistent input pressure and greater accuracy. Most instruments use 100 psi as the input pressure simply because ordinary 100 psi gauges can be used, which corresponds to 100%, but there is no necessity for that particular pressure. Lower pressures will function just as well for measurement purposes, although the sound of leaks will not be quite as loud. Besides leakage noise, indicated percentage of leakage will sometimes vary with regulator pressure and orifice size. With 100 psi and a .030" orifice, a given cylinder might show 20% leakage. At 50 psi, the same cylinder might show 30% leakage or 15% leakage with the same orifice. This happens because leakage flow is almost always very turbulent. Because of turbulence and other factors, such as seating pressures, changes in test pressure almost always change the effective orifice formed by cylinder leakage paths. Metering orifice size has a direct effect on leakage percentage. Generally, a typical automotive engine pressurized to more than 30–40 psi must be locked or it will rotate under test pressure. The exact test pressure tolerated before rotation is highly dependent on connecting rod angle, bore, compression of other cylinders, and friction. There is less tendency to rotate when the piston is at top dead center, especially with small bore engines.
Maximum tendency to rotate occurs at about half stroke, when the rod is at right angles to the crankshaft's throw. Due to the simple construction, many mechanics build their own testers. Homemade instruments can function as well as commercial testers, provided they employ proper orifice sizes, good pressure gauges, and good regulators. References External links Vacuum Leak Tester Engine tuning instruments
Leak-down tester
[ "Technology", "Engineering" ]
1,513
[ "Engine tuning instruments", "Mechanical engineering", "Measuring instruments" ]
7,104,660
https://en.wikipedia.org/wiki/Metal-induced%20crystallization
Combined with certain metallic species, amorphous films can crystallize in a process known as metal-induced crystallization (MIC). The effect was discovered in 1969, when amorphous germanium (a-Ge) films crystallized at surprisingly low temperatures when in contact with Al, Ag, Cu, or Sn. The effect was also verified in amorphous silicon (a-Si) films, as well as in amorphous carbon and various metal-oxide films. Over time, MIC has evolved from simple temperature-driven annealing approaches to others involving, for example, laser or microwave radiation. A very common variant of the MIC procedure is metal-induced lateral crystallization (MILC). In this case, the metal is deposited on top of, or beneath, selected areas of the desired amorphous film. Upon annealing, crystallization starts from the portion of the amorphous film that is in contact with the metal species, and the MIC proceeds laterally. Many studies have been carried out to investigate the MIC phenomenon, using a wide range of sample production methods and characterization tools. According to them, the MIC process is highly sensitive to the type and amount of the metallic species and to the sample history (production method, geometry, and annealing details), as well as to the methodology used to determine crystallization. Moreover, the MIC process goes well beyond the mere diffusion of species (as it is usually discussed in studies involving layered sample structures) and involves many complex atomic and thermodynamic processes at the microscopic level. References Semiconductor device fabrication Inorganic chemistry Chemical processes Crystallography
Metal-induced crystallization
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
334
[ "Microtechnology", "Materials science", "Chemical processes", "Semiconductor device fabrication", "Crystallography", "Condensed matter physics", "nan", "Chemical process engineering" ]
7,107,008
https://en.wikipedia.org/wiki/Critical%20relative%20humidity
The critical relative humidity (CRH) of a salt is defined as the relative humidity of the surrounding atmosphere (at a certain temperature) at which the material begins to absorb moisture from the atmosphere, and below which it will not absorb atmospheric moisture. When the humidity of the atmosphere is equal to (or greater than) the critical relative humidity of a sample of salt, the sample will take up water until all of the salt is dissolved to yield a saturated solution. All water-soluble salts and mixtures have characteristic critical humidities; it is a unique material property. The critical relative humidity of most salts decreases with increasing temperature. For instance, the critical relative humidity of ammonium nitrate decreases by 22% as the temperature rises from 0 °C to 40 °C (32 °F to 104 °F). The critical relative humidity of several fertilizer salts is given in table 1: Table 1: Critical relative humidities of pure salts at 30 °C. Mixtures of salts usually have lower critical humidities than either of the constituents. Fertilizers that contain urea as an ingredient usually exhibit a much lower critical relative humidity than fertilizers without urea. Table 2 shows CRH data for two-component mixtures: Table 2: Critical relative humidities of mixtures of salts at 30 °C (values are percent relative humidity). As shown, the effect of salt mixing is most dramatic in the case of ammonium nitrate with urea. This mixture has an extremely low critical relative humidity and can therefore only be used in liquid fertilizers (so-called UAN solutions). See also Deliquescent Hygroscopy Humidity References Chemical properties Agricultural chemicals Atmospheric thermodynamics Humidity
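As a small illustration of the definition (not a substitute for the tabulated data, which is not reproduced here), the following sketch applies the threshold test for moisture uptake; the CRH values in the dictionary are rough, illustrative placeholders rather than the missing table entries.

```python
# Illustrative sketch of applying the CRH definition: a salt begins to absorb
# atmospheric moisture once the ambient relative humidity reaches its CRH.
# The CRH values below are rough placeholders for illustration only.
CRH_AT_30C = {
    "ammonium nitrate": 59.0,          # percent RH, illustrative
    "urea": 73.0,                      # percent RH, illustrative
    "ammonium nitrate + urea": 18.0,   # mixtures lie below either constituent
}

def absorbs_moisture(salt: str, ambient_rh_percent: float) -> bool:
    """True if the ambient RH is at or above the salt's critical relative humidity."""
    return ambient_rh_percent >= CRH_AT_30C[salt]

for salt in CRH_AT_30C:
    print(salt, "absorbs moisture at 60% RH:", absorbs_moisture(salt, 60.0))
```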
Critical relative humidity
[ "Physics", "Chemistry", "Mathematics" ]
346
[ "Physical phenomena", "Physical quantities", "Quantity", "nan", "Physical properties" ]
7,107,162
https://en.wikipedia.org/wiki/Calcium-binding%20protein
Calcium-binding proteins are proteins that participate in calcium cell signaling pathways by binding to Ca2+, the calcium ion that plays an important role in many cellular processes. Calcium-binding proteins have specific domains that bind to calcium and are known to be heterogeneous. One of the functions of calcium-binding proteins is to regulate the amount of free (unbound) Ca2+ in the cytosol of the cell. The cellular regulation of calcium is known as calcium homeostasis. Types Many different calcium-binding proteins exist, with different cellular and tissue distributions and involvement in specific functions. Calcium-binding proteins also serve an important physiological role for cells. The most ubiquitous Ca2+-sensing protein, found in all eukaryotic organisms including yeasts, is calmodulin. Intracellular storage and release of Ca2+ from the sarcoplasmic reticulum is associated with the high-capacity, low-affinity calcium-binding protein calsequestrin. Calretinin is another calcium-binding protein, with a molecular weight of 29 kDa. It is involved in cell signaling and has been shown to exist in neurons. It is also found in large quantities in malignant mesothelial cells, which can thereby be differentiated from carcinomas; this differentiation is also applied in the diagnosis of ovarian stromal tumors. Another member of the EF-hand superfamily is the S100B protein, which regulates p53. p53 is known as a tumor suppressor protein and in this case acts as a transcriptional activator or repressor of numerous genes. S100B proteins are found in abundance in cancerous tumor cells, where they are overexpressed, making them useful for classifying tumors; this abundance also explains how the protein can readily interact with p53 when transcriptional regulation takes place. Calcium-binding proteins can be either intracellular or extracellular. Those that are intracellular can contain or lack a structural EF-hand domain. Extracellular calcium-binding proteins are classified into six groups. Since Ca2+ is an important second messenger, it can act as an activator or inhibitor in gene transcription. Members of the EF-hand superfamily such as calmodulin and calcineurin have been linked to transcription regulation. When levels of Ca2+ increase in the cell, these members of the EF-hand superfamily regulate transcription indirectly by phosphorylating/dephosphorylating transcription factors. Secretory calcium-binding phosphoprotein The secretory calcium-binding phosphoprotein (SCPP) gene family consists of an ancient group of genes emerging around the same time as bony fish. SCPP genes are roughly divided into acidic and P/Q-rich types: the former mostly participate in bone and dentin formation, while the latter usually participate in enamel/enameloid formation. In mammals, P/Q-rich SCPPs are also found in saliva and milk and include unorthodox members such as MUC7 (a mucin) and casein. SCPP genes are recognized by exon structure rather than protein sequence. Functions With their role in signal transduction, calcium-binding proteins contribute to all aspects of the cell's functioning, from homeostasis to learning and memory. For example, the neuron-specific calexcitin has been found to have an excitatory effect on neurons, and interacts with proteins that control the firing state of neurons, such as the voltage-dependent potassium channel.
Compartmentalization of calcium binding proteins such as calretinin and calbindin-28 kDa has been noted within cells, suggesting that these proteins perform distinct functions in localized calcium signaling. It also indicates that in addition to freely diffusing through the cytoplasm to attain a homogeneous distribution, calcium binding proteins can bind to cellular structures through interactions that are likely important for their functions. See also Calbindin Calmodulin Calsequestrin Troponin References External links Proteins by function Calcium signaling
Calcium-binding protein
[ "Chemistry" ]
836
[ "Calcium signaling", "Signal transduction" ]
7,107,907
https://en.wikipedia.org/wiki/Power%20shovel
A power shovel, also known as a motor shovel, stripping shovel, front shovel, mining shovel or rope shovel, is a bucket-equipped machine usually powered by steam, diesel fuel, gasoline or electricity and used for digging and loading earth or fragmented rock and for mineral extraction. Power shovels are a type of rope/cable excavator, where the digging arm is controlled and powered by winches and steel ropes, rather than by hydraulics as in modern hydraulic excavators. Basic parts of a power shovel include the track system, cabin, cables, rack, stick, boom foot-pin, saddle block, boom, boom point sheaves and bucket. The size of the bucket varies from 0.73 to 53 cubic meters. Design Power shovels normally consist of a revolving deck with a power plant, drive and control mechanisms, usually a counterweight, and a front attachment, such as a crane ("boom") which supports a handle ("dipper" or "dipper stick") with a digger ("bucket") at the end. The term "dipper" is also sometimes used to refer to the handle and digger combined. The machinery is mounted on a base platform with tracks or wheels. Modern bucket capacities range from 8 m3 to nearly 80 m3. Use Power shovels are used principally for excavation and removal of overburden in open-cut mining operations; they may also be used for the loading of minerals, such as coal. They are the classic equivalent of excavators, and operate in a similar fashion. Other uses of the power shovel include: Close range work. Digging very hard materials. Removing large boulders. Excavating material and loading trucks. Various other types of jobs such as digging in gravel banks, in clay pits, cuts in support of road work, road-side berms, etc. Operation The shovel operates using several main motions including: Hoisting - Pulling the bucket up through the bank of material being dug. Crowding - Moving the dipper handle in or out in order to control the depth of cut or to position for dumping. Swinging - Rotating the shovel between the dig site and dumping location. Propelling - Moving the shovel unit to different locations or dig positions. A shovel's work cycle, or digging cycle, consists of four phases: 1. Digging, 2. Swinging, 3. Dumping, 4. Returning. The digging phase consists of crowding the dipper into the bank, hoisting the dipper to fill it, then retracting the full dipper from the bank. The swinging phase occurs once the dipper is clear of the bank both vertically and horizontally. The operator controls the dipper through a planned swing path and dump height until it is suitably positioned over the haul unit (e.g. truck). Dumping involves opening the dipper door to dump the load, while maintaining the correct dump height. Returning is when the dipper swings back to the bank, and involves lowering the dipper into the tuck position to close the dipper door. Giant stripping shovels In the 1950s, with the demand for coal at a peak and more coal companies turning to the cheaper method of strip mining, excavator manufacturers started offering a new super class of power shovels, commonly called giant stripping shovels. Most were built between the 1950s and the 1970s. The world's first giant stripping shovel for the coal fields was the Marion 5760. Unofficially known to its crew and eastern Ohio residents alike as The Mountaineer, it was erected in 1955/56 near Cadiz, Ohio, off Interstate 70. Larger models followed the successful 5760, culminating in the mid-1960s with the gigantic 12,700-ton Marion 6360, nicknamed The Captain.
One stripping shovel, the Bucyrus-Erie 1850-B known as "Big Brutus", has been preserved as a national landmark and a museum with tours and camping. Another stripping shovel, the Bucyrus-Erie 3850-B known as "Big Hog", was eventually cut down in 1985 and buried on the Peabody Sinclair Surface Mining site near the Paradise Mining Plant where it was operated. It remains there on non-public, government-owned land. Notable examples Ranked by bucket capacity. See also P&H Mining Bucyrus International Dragline excavator Marion Power Shovel Steam shovel Hulett Further reading Extreme Mining Machines - Stripping shovels and walking draglines, by Keith Haddock, published by MBI. References Stripping shovels Engineering vehicles Maintenance of way equipment
Power shovel
[ "Engineering" ]
914
[ "Engineering vehicles", "Stripping shovels", "Mining equipment" ]
9,233,284
https://en.wikipedia.org/wiki/Reef%20Ball%20Foundation
Reef Ball Foundation, Inc. is a 501(c)(3) non-profit organization that functions as an international environmental non-governmental organization. The foundation uses reef ball artificial reef technology, combined with coral propagation, transplant technology, public education, and community training to build, restore and protect coral reefs. The foundation has established "reef ball reefs" in 59 countries. Over 550,000 reef balls have been deployed in more than 4,000 projects. History Reef Ball Development Group was founded in 1993 by Todd Barber, with the goal of helping to preserve and protect coral reefs for the benefit of future generations. Barber witnessed his favorite coral reef on Grand Cayman destroyed by Hurricane Gilbert, and wanted to do something to help increase the resiliency of eroding coral reefs. Barber and his father patented the idea of building reef substrate modules with a central inflatable bladder, so that the modules would be buoyant, making them easy to deploy by hand or with a small boat, rather than requiring heavy machinery. Over the next few years, with the help of research colleagues at the University of Georgia, Nationwide Artificial Reef Coordinators and the Florida Institute of Technology (FIT), Barber, his colleagues, and business partners worked to perfect the design. In 1997, Kathy Kirbo established The Reef Ball Foundation, Inc. as a non-profit organization, with the original founders being Todd Barber as chairman and charter member; Kathy Kirbo as founding executive director, board secretary, and charter member; Larry Beggs as vice president and charter member; Eric Krasle as treasurer and charter member; and Jay Jorgensen as a charter member. Reef balls can be found in almost every coastal state in the United States, and on every continent including Antarctica. The foundation has expanded the scope of its projects to include coral rescue, propagation and transplant operations, beach restorations, mangrove restorations and nursery development. Reef Ball also participates in education and outreach regarding environmental stewardship and coral reefs. In 2001, Reef Ball Foundation took control of the Reef Ball Development Group, and operates all aspects of the business as a non-profit organization. By 2007, the foundation had deployed 550,000 reef balls worldwide. In 2019, Reef Ball Foundation deployed 1,400 reef balls off the shores of Progreso, Yucatán, in Mexico. Artificial reefs were also built in Quintana Roo, Baja California, Colima, Veracruz, and Campeche. Almost 25,000 reef balls have been established in the seas surrounding Mexico. Technology and research The Reef Ball Foundation manufactures reef balls for open-ocean deployment in a range of diameters and weights. Reef balls are hollow, and typically have several convex-concave holes of varying sizes to most closely approximate natural coral reef conditions by creating whirlpools. Reef balls are made from pH-balanced microsilica concrete, and are treated to create a rough surface texture, in order to promote settling by marine organisms such as corals, algae, coralline algae and sponges. Over the last decade, research has been conducted with respect to the ability of artificial reefs to produce or attract biomass, the effectiveness of reef balls in replicating natural habitat, and their role in mitigating disasters. The use of reef balls as breakwaters and for beach stabilization has been extensively studied.
Projects The foundation undertakes an array of projects including artificial reef deployment, estuary restoration, mangrove plantings, oyster reef creation, coral propagation, natural disaster recovery, erosion control, and education. Notable projects include: In Antigua, 4,700 modules were deployed around the island. In Malaysia, 5,000 reef balls were deployed around protected sea turtle nesting islands to deter netting, successfully increasing nesting numbers. In Campeche, Mexico, over 4,000 reef balls were deployed by local fishing communities to enhance fishery resources. In Tampa Bay, USA, reef balls were installed beneath docks, in front of sea walls, and as a submerged breakwater to create oyster reefs. In Phuket, Thailand, reef balls were planted with corals after the Boxing Day Tsunami to help restore tourism. In Indonesia, locals and P.T. Newmont used reef balls to mitigate damage from mining operations and restore thousands of coral heads. In Australia, reef balls have been used to enhance fisheries in New South Wales. Designed artificial reefs The trend in artificial reef development has been toward the construction of designed artificial reefs, built from materials specifically designed to function as reefs. Designed systems (such as reef balls) can be modified to achieve a variety of goals. These include coral reef rehabilitation, fishery enhancement, snorkeling and diving trails, beach erosion protection, surfing enhancement, fish spawning sites, planters for mangrove replanting, enhancement of lobster fisheries, creation of oyster reefs, estuary rehabilitation, and even exotic uses such as deep-water Oculina coral replanting. Designed systems can overcome many of the problems associated with "materials of opportunity", such as stability in storms, durability, biological fit, lack of potential pollution problems, availability, and reduction in long-term artificial reef costs. Designed reefs have been developed specifically for coral reef rehabilitation, and can therefore be used in a more specific niche than materials of opportunity. Some examples of specialized adaptations which "designed reefs" can use include: specialized surface textures, coral planting attachment points, specialized pH-neutral surfaces (such as neutralized concrete, ceramics, or mineral accretion surfaces), fissures to create currents for corals, and avoidance of materials such as iron (which may cause algae to overgrow coral). Other types of designed systems can create aquaculture opportunities for lobsters, create oyster beds, or be used for a large variety of other specialized needs. See also Underwater sculptures Project AWARE Reef Check References External links Reef Ball Foundation Environmental organizations based in Georgia (U.S. state) Environmental engineering Ecology organizations 501(c)(3) organizations Coral reefs Fisheries organizations
Reef Ball Foundation
[ "Chemistry", "Engineering", "Biology" ]
1,195
[ "Coral reefs", "Chemical engineering", "Biogeomorphology", "Civil engineering", "Environmental engineering" ]
9,233,359
https://en.wikipedia.org/wiki/Vibrational%20circular%20dichroism
Vibrational circular dichroism (VCD) is a spectroscopic technique which detects differences in attenuation of left and right circularly polarized light passing through a sample. It is the extension of circular dichroism spectroscopy into the infrared and near-infrared ranges. Because VCD is sensitive to the mutual orientation of distinct groups in a molecule, it provides three-dimensional structural information. Thus, it is a powerful technique, as VCD spectra of enantiomers can be simulated using ab initio calculations, thereby allowing the identification of absolute configurations of small molecules in solution from VCD spectra. Among such quantum computations of VCD spectra resulting from the chiral properties of small organic molecules are those based on density functional theory (DFT) and gauge-including atomic orbitals (GIAO). A simple example of the experimental results obtained by VCD is the spectral data obtained within the carbon-hydrogen (C-H) stretching region of 21 amino acids in heavy water solutions. Measurements of vibrational optical activity (VOA) thus have numerous applications, not only for small molecules, but also for large and complex biopolymers such as muscle proteins (myosin, for example) and DNA. Vibrational modes Theory While the fundamental quantity associated with infrared absorption is the dipole strength, the differential absorption is also proportional to the rotational strength, a quantity which depends on both the electric and magnetic dipole transition moments. The sensitivity to the handedness of a molecule with respect to circularly polarized light results from the form of the rotational strength. A rigorous theoretical formulation of VCD was developed concurrently by the late Professor P.J. Stephens, FRS, at the University of Southern California, and the group of Professor A.D. Buckingham, FRS, at Cambridge University in the UK, and was first implemented analytically in the Cambridge Analytical Derivative Package (CADPAC) by R.D. Amos. Previous developments by D.P. Craig and T. Thirunamachandran at the Australian National University and by Larry A. Nafie and Teresa B. Freedman at Syracuse University, though theoretically correct, could not be straightforwardly implemented, which prevented their use. Only with the development of the Stephens formalism as implemented in CADPAC did fast, efficient and theoretically rigorous calculation of the VCD spectra of chiral molecules become feasible. This also stimulated the commercialization of VCD instruments by BioTools, Bruker, Jasco and Thermo-Nicolet (now Thermo-Fisher). Peptides and proteins Extensive VCD studies have been reported for both polypeptides and several proteins in solution; several recent reviews have also been compiled. An extensive, but not comprehensive, list of VCD publications is also provided in the "References" section. The published reports over the last 22 years have established VCD as a powerful technique with improved results over those previously obtained by visible/UV circular dichroism (CD) or optical rotatory dispersion (ORD) for proteins and nucleic acids. The effects of solvent on stabilizing the structures (conformers and zwitterionic species) of amino acids and peptides, and the corresponding effects seen in the vibrational circular dichroism (VCD) and Raman optical activity (ROA) spectra, have recently been documented by a combined theoretical and experimental work on L-alanine and N-acetyl L-alanine N'-methylamide.
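The relationship between the dipole strength, the rotational strength and the measured differential absorption mentioned in the Theory section can be illustrated with a short numerical sketch; the transition-moment vectors below are made-up illustrative values in arbitrary units, not computed results, and the textbook approximation g = 4R/D is used for the dissymmetry ratio.

```python
import numpy as np

# Hypothetical electric (mu) and magnetic (m) transition dipole moments for one
# vibrational transition, in arbitrary consistent units (illustrative values only).
mu = np.array([0.10 + 0.0j, 0.02 + 0.0j, 0.0 + 0.0j])
m  = np.array([0.0 + 2.5e-7j, 0.0 + 1.0e-7j, 0.0 + 0.0j])

dipole_strength     = np.vdot(mu, mu).real        # D = |mu|^2, governs ordinary IR absorbance
rotational_strength = np.imag(np.dot(mu, m))      # R = Im(mu . m), governs VCD sign and size
g_factor = 4.0 * rotational_strength / dipole_strength  # dissymmetry ratio, Delta A / A

print(f"dipole strength D      = {dipole_strength:.3e}")
print(f"rotational strength R  = {rotational_strength:.3e}")
print(f"dissymmetry factor g   = {g_factor:.3e}")
# For the mirror-image enantiomer the magnetic transition moment changes sign,
# so R and g flip sign while D is unchanged; this is the basis for assigning
# absolute configuration from VCD spectra.
```

With these illustrative numbers g comes out near 10⁻⁵, in line with the statement below that the differential absorbance is several orders of magnitude smaller than the ordinary absorbance.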
Similar effects have also been seen in nuclear magnetic resonance (NMR) spectra by the Weise and Weisshaar NMR groups at the University of Wisconsin–Madison. Nucleic acids VCD spectra of nucleotides, synthetic polynucleotides and several nucleic acids, including DNA, have been reported and assigned in terms of the type and number of helices present in A-, B-, and Z-DNA. Instrumentation VCD can be regarded as a relatively recent technique. Although vibrational optical activity, and in particular vibrational circular dichroism, has been known for a long time, the first VCD instrument was developed in 1973, and commercial instruments have been available only since 1997. For biopolymers such as proteins and nucleic acids, the difference in absorbance between the levo- and dextro-configurations is five orders of magnitude smaller than the corresponding (unpolarized) absorbance. Therefore, VCD of biopolymers requires the use of very sensitive, specially built instrumentation as well as time-averaging over relatively long intervals of time, even with such sensitive VCD spectrometers. Most CD instruments produce left- and right-circularly polarized light which is then either sine-wave or square-wave modulated, with subsequent phase-sensitive detection and lock-in amplification of the detected signal. In the case of FT-VCD, a photo-elastic modulator (PEM) is employed in conjunction with an FTIR interferometer set-up. An example is that of a Bomem model MB-100 FTIR interferometer equipped with the additional polarizing optics and accessories needed for recording VCD spectra. A parallel beam emerges through a side port of the interferometer and passes first through a wire grid linear polarizer and then through an octagonal-shaped ZnSe crystal PEM which modulates the polarized beam at a fixed, lower frequency such as 37.5 kHz. A mechanically stressed crystal such as ZnSe exhibits birefringence when stressed by an adjacent piezoelectric transducer. The linear polarizer is positioned close to, and at 45 degrees with respect to, the ZnSe crystal axis. The polarized radiation focused onto the detector is doubly modulated, both by the PEM and by the interferometer setup. A very low-noise detector, such as MCT (HgCdTe), is also selected for phase-sensitive detection of the VCD signal. The first dedicated VCD spectrometer brought to market was the ChiralIR from Bomem/BioTools, Inc. in 1997. Today, Thermo-Electron, Bruker, Jasco and BioTools offer either VCD accessories or stand-alone instrumentation. To prevent detector saturation, an appropriate long-wave-pass filter is placed before the very low-noise MCT detector, which allows only radiation below 1750 cm−1 to reach the MCT detector; the latter, however, measures radiation only down to 750 cm−1. FT-VCD spectra of the selected sample solution are then accumulated, digitized and stored by an in-line computer. Published reviews that compare various VCD methods are also available. Magnetic VCD VCD spectra have also been reported in the presence of an applied external magnetic field. This method can enhance the VCD spectral resolution for small molecules. Raman optical activity (ROA) ROA is a technique complementary to VCD, especially useful in the 50–1600 cm−1 spectral region; it is considered the technique of choice for determining optical activity for photon energies less than 600 cm−1.
See also Amino acid Birefringence Circular dichroism Density functional theory DNA DNA structure Hyper–Rayleigh scattering optical activity IR spectroscopy Magnetic circular dichroism Molecular models of DNA Nucleic acid Optical rotatory dispersion Photoelastic modulator Polarization Protein Protein structure Quantum chemistry Raman optical activity (ROA) References Polarization (waves) Physical chemistry Proteins Peptides Nucleic acids Infrared spectroscopy Biochemistry Biophysics DNA Molecular biology Molecular geometry Quantum chemistry
Vibrational circular dichroism
[ "Physics", "Chemistry", "Biology" ]
1,546
[ "Biomolecules by chemical classification", "Quantum mechanics", "Theoretical chemistry", "Proteins", "Spectroscopy", "Physical chemistry", "Biophysics", " molecular", "Peptides", " and optical physics", "Spectrum (physical sciences)", "Molecular geometry", "Atomic", "Nucleic acids", "App...
9,234,084
https://en.wikipedia.org/wiki/Fluorometer
A fluorometer, fluorimeter or fluormeter is a device used to measure parameters of visible-spectrum fluorescence: its intensity and the wavelength distribution of the emission spectrum after excitation by a certain spectrum of light. These parameters are used to identify the presence and the amount of specific molecules in a medium. Modern fluorometers are capable of detecting fluorescent molecule concentrations as low as 1 part per trillion. Fluorescence analysis can be orders of magnitude more sensitive than other techniques. Applications include chemistry/biochemistry, medicine, and environmental monitoring. For instance, fluorometers are used to measure chlorophyll fluorescence to investigate plant physiology. Components and design Typically fluorometers utilize a double beam. These two beams work in tandem to decrease the noise created by radiant power fluctuations. The upper beam is passed through a filter or monochromator and passes through the sample. The lower beam is passed through an attenuator and adjusted to try to match the fluorescent power given off from the sample. Light from the fluorescence of the sample and the lower, attenuated beam are detected by separate transducers and converted to an electrical signal that is interpreted by a computer system. Within the machine, the transducer that detects fluorescence created by the upper beam is located a distance away from the sample and at a 90-degree angle from the incident, upper beam; the machine is constructed like this to decrease the stray light from the upper beam that may strike the detector, and 90 degrees is the optimal angle. There are two different approaches to handling the selection of incident light, which give rise to different types of fluorometers. If filters are used to select the wavelengths of light, the machine is called a filter fluorometer. A spectrofluorometer will typically use two monochromators, although some spectrofluorometers may use one filter and one monochromator; in this case, the broad-band filter acts to reduce stray light, including light from unwanted diffraction orders of the diffraction grating in the monochromator. Light sources for fluorometers are often chosen depending on the type of sample being tested. Among the most common light sources for fluorometers is the low-pressure mercury lamp. This provides many excitation wavelengths, making it the most versatile. However, this lamp is not a continuous source of radiation. The xenon arc lamp is used when a continuous source of radiation is needed. Both of these sources provide a suitable spectrum of ultraviolet light to induce fluorescence. These are just two of the many possible light sources. Glass and silica cuvettes are often the vessels in which the sample is placed. Care must be taken not to leave fingerprints or any other sort of mark on the outside of the cuvette, because this can produce unwanted fluorescence. "Spectro grade" solvents such as methanol are sometimes used to clean the vessel surfaces to minimize these problems. Uses Dairy industry Fluorimetry is widely used by the dairy industry to verify whether pasteurization has been successful. This is done using a reagent which is hydrolysed to a fluorophore and phosphoric acid by alkaline phosphatase in milk. If pasteurization has been successful then alkaline phosphatase will be entirely denatured and the sample will not fluoresce. This works because pathogens in milk are killed by any heat treatment which denatures alkaline phosphatase.
Fluorescence assays are required by milk producers in the UK to prove successful pasteurization has occurred, so all UK dairies contain fluorimetry equipment. Protein aggregation and TSE detection Thioflavins are dyes used for histology staining and biophysical studies of protein aggregation. For example, thioflavin T is used in the RT-QuIC technique to detect transmissible spongiform encephalopathy-causing misfolded prions. Oceanography Fluorometers are widely used in oceanography to measure chlorophyll concentrations based on chlorophyll fluorescence by phytoplankton cell pigments. Chlorophyll fluorescence is a widely used proxy for the quantity (biomass) of microscopic algae in the water. In the lab after water sampling, researchers extract the pigments out of a filter that has phytoplankton cells on it, then measure the fluorescence of the extract in a benchtop fluorometer in a dark room. To directly measure chlorophyll fluorescence "in situ" (in the water), researchers use instruments designed to measure fluorescence optically (for example, sondes with extra electronic optical sensors attached). The optical sensors emit blue light to excite phytoplankton pigments and make them fluoresce (emit red light). The sensor measures this induced fluorescence by measuring the red light as a voltage, and the instrument saves it to a data file. The voltage signal of the sensor gets converted to a concentration with a calibration curve in the lab, using either red-colored dyes like rhodamine, standards like fluorescein, or live phytoplankton cultures. Ocean chlorophyll fluorescence is measured on research vessels, small boats, buoys, docks, and piers all over the world. Fluorometry measurements are used to map chlorophyll concentrations in support of ocean color remote sensing. Special fluorometers for ocean waters can measure properties beyond the total amount of fluorescence, such as the quantum yield of photochemistry, the timing of the fluorescence, and the fluorescence of cells when subjected to increasing amounts of light. Aquaculture operations such as fish farms use fluorometers to measure food availability for filter-feeding animals like mussels and to detect the onset of harmful algal blooms (HABs) and/or "red tides" (not necessarily the same thing). Molecular biology Fluorometers can be used to determine the nucleic acid concentration in a sample. Fluorometer types There are two basic types of fluorometers: filter fluorometers and spectrofluorometers. The difference between them is the way they select the wavelengths of incident light; filter fluorometers use filters while spectrofluorometers use grating monochromators. Filter fluorometers can often be purchased or built at a lower cost but are less sensitive and have less resolution than spectrofluorometers. Filter fluorometers are also capable of operating only at the wavelengths of the available filters, whereas monochromators are generally freely tunable over a relatively wide range. The potential disadvantage of monochromators arises from that same property, because the monochromator is capable of miscalibration or misadjustment, whereas the wavelengths of filters are fixed when manufactured. Filter fluorometer Spectrofluorometer Integrated fluorometer See also Fluorescence spectroscopy, for a fuller discussion of instrumentation Chlorophyll fluorescence, to investigate plant ecophysiology. Integrated fluorometer to measure gas exchange and chlorophyll fluorescence of leaves.
Radiometer, to measure various electromagnetic radiation Spectrometer, to analyze spectrum of electromagnetic radiation Scatterometer, to measure scattered radiation Microfluorimetry, to measure fluorescence on a microscopic level Interference filter, thin film filters that work by optical interference, showing how they can be tuned in some cases References Laboratory equipment Electromagnetic radiation meters Spectrometers
Fluorometer
[ "Physics", "Chemistry", "Technology", "Engineering" ]
1,573
[ "Spectrum (physical sciences)", "Electromagnetic radiation meters", "Electromagnetic spectrum", "Measuring instruments", "Spectrometers", "Spectroscopy" ]
9,235,009
https://en.wikipedia.org/wiki/Records%20of%20Early%20English%20Drama
The Records of Early English Drama (REED) is a performance history research project, based at the University of Toronto, Ontario, Canada. It was founded in 1976 by a group of international scholars interested in understanding “the native tradition of English playmaking that apparently flourished in late medieval provincial towns” and formed the context for the development of the English Renaissance theatre, including the work of Shakespeare and his contemporaries. REED's primary focus is to locate, transcribe, edit, and publish historical documents from England, Wales, and Scotland containing evidence of drama, secular music, and other communal entertainment and mimetic ceremony from the late Middle Ages until 1642, when the Puritans closed the London public theatres. From its inception in 1976 to 2016, REED published twenty-seven print collections of records edited by over thirty international scholars. REED is also engaged in creating a collection of free digital resources for research and education including Patrons and Performances (2003) and Early Modern London Theatres (2011). In March 2017, REED moved to digital publication of records with the launch of REED Online, a publication site where records will be freely available. History During a 1970-71 research trip in York, England, to study manuscripts related to the York cycle of biblical plays (also known as the York Mystery Plays), Alexandra F. Johnston, an early drama scholar from the University of Toronto, came across a manuscript transcription of a 1433 indenture agreement between the leaders of the medieval Mercers' Guild and their pageant masters. The document contained details of a medieval pageant wagon and sophisticated staging unknown to researchers of the time. Johnston also met Margaret Dorrell, an Australian graduate student at the University of Leeds, who was working on a similar project related to the York records; the two women decided to collaborate. Within the next two years, Johnston and Dorrell met other scholars of medieval and Renaissance drama working independently on manuscripts from other English cities (David Galloway of the University of New Brunswick on Norwich, Reginald Ingram of the University of British Columbia on Coventry, and Lawrence Clopper of Indiana University Bloomington on Chester). The idea of a scholarly publishing project to find, transcribe, and edit documentary evidence of performance arose from these meetings and was met with interest by the individual researchers and their academic communities. In January 1974, Johnston circulated a position paper on the project. Discussions and planning followed and, in February 1975, the inaugural REED meeting was held at Victoria University in the University of Toronto. In 1975–76, Johnston received a Canada Council personal grant for the publication of the York records as a pilot project, and in late 1976, REED was officially launched with a Canada Council ten-year Major Editorial Grant for the proposed series of collections, establishing REED as a long-term research and publishing project. Because three of the four initial collections were edited by Canadian researchers, Toronto, Canada, became the home of the project. In 1979, REED published its first two collections of records: York, edited by Alexandra F. Johnston and Margaret Rogerson (née Dorrell), and Chester, edited by Lawrence D. Clopper. 
Since then the project has expanded its scope from major cities and towns to all the counties of England, Wales, and Scotland, based on historic pre-1642 county borders. After its inception in 1976, REED produced the bi-annual REED Newsletter which, in 1997, became the refereed scholarly journal Early Theatre. REED has had close ties to the English Department, the Centre for Medieval Studies (CMS), the Centre for Reformation and Renaissance Studies (CRRS), and the Graduate Centre for Study of Drama. From 1976 to 2009 the project was based at Victoria University in the University of Toronto. In 2009 the offices of the project moved to the English Department. REED retains active relationships with the English Department, the CMS, and the University of Toronto Libraries. REED's internal governance is provided by an executive board of senior scholars in early drama and related fields, with digital advisors and collections editors drawn from Canada, the United States, Australia, New Zealand, and the United Kingdom. REED has collaborated with the Poculi Ludique Societas (PLS) to mount four productions of full cycles of medieval biblical dramas: the York Plays (also called the York Mystery Plays) in 1977 and 1998, and the Chester Plays (also called the Chester Mystery Plays) in 1983 and 2010, with participation from international amateur theatre groups. In November 2002, REED, in partnership with the Art Gallery of Ontario, hosted the Picturing Shakespeare symposium, an exhibition of and an accompanying public symposium regarding the Sanders portrait, an Elizabethan painting reputed to be the only one of Shakespeare made during his lifetime. In addition to revealing evidence of vernacular entertainment activities, the research work for the collections produces a body of knowledge regarding professional travelling entertainers, their patronage, and their performance venues. This cumulative information was first launched for public use through the Patrons and Performances website in 2003. In 2011, REED collaborated with the Department of Digital Humanities, King's College London, and the Department of English at the University of Southampton to create Early Modern London Theatres (EMLoT), a research database and educational resource, with learning modules. EMLoT gathers documents related to professional theatres north and south of the Thames up to 1642 and bibliographic information about their subsequent transcriptions, documenting how scholars “got [their] information about the early theatres, from whom and when.” In 2016, to mark the 400th anniversary of Shakespeare's death, REED collaborated with the BBC and The British Library to produce an ongoing public website titled Shakespeare on Tour. Many REED editors contributed stories and images from their research in the Elizabethan period to help raise “the curtain on performances of The Bard’s plays countrywide from the 16th Century to the present day.” Throughout its existence, REED maintained its primary focus and published about six collections each decade. In 2015, REED published its last print collection (Civic London to 1558, edited by Anne Lancashire), and in March 2017, the first digital collection (Staffordshire, edited by Alan B. Somerset) was made freely available on its publication website, REED Online. All subsequent collections will be added to this database and website. 
REED has received substantial funding from private individuals and foundations (including the Jackman Foundation), the Canada Council, and the Social Sciences and Humanities Research Council in Canada; the National Endowment for the Humanities and the Andrew W. Mellon Foundation in the U.S.; as well as the Arts and Humanities Research Council and The British Academy in the U.K. Notes References External links Records of Early English Drama—Official website Medieval drama English drama Folk plays 16th-century theatre 17th-century theatre History of theatre Digital humanities Text Encoding Initiative Renaissance and early modern research centres
Records of Early English Drama
[ "Technology" ]
1,370
[ "Digital humanities", "Computing and society" ]
9,238,840
https://en.wikipedia.org/wiki/Speiss
Speisses are alloys of heavy metals like iron, cobalt, nickel and copper with arsenic, antimony and, occasionally, tin. The latter elements lower the melting point to around 1000 °C. Speisses commonly occur in lead smelting operations and copper smelting operations. Speisses are only partially miscible with mattes, and if there is enough arsenic or antimony in the copper feed to a matte smelting furnace, a separate speiss melt can form. Speisses show high affinities for platinum group metals and gold. The mass concentration of platinum group metals in the speiss phase is about 1000 times that of the concentration in the matte phase, while the ratio for gold is about 100 times. Speisses are also immiscible in liquid lead and flow out of lead blast furnaces as a separate phase. See also Agglomerate (Steel industry) References Metallurgical processes
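A small mass-balance sketch shows how partition ratios of the size quoted above distribute a precious metal between coexisting speiss and matte phases; the phase masses and the use of a single equilibrium distribution coefficient are illustrative assumptions, not data from a particular operation.

```python
def fraction_in_speiss(distribution_ratio, mass_speiss_kg, mass_matte_kg):
    """Fraction of a precious metal reporting to the speiss phase, assuming its
    concentration in speiss is `distribution_ratio` times that in matte
    (a simple distribution coefficient by mass)."""
    # total metal = c_matte * mass_matte + (distribution_ratio * c_matte) * mass_speiss
    return (distribution_ratio * mass_speiss_kg) / (
        mass_matte_kg + distribution_ratio * mass_speiss_kg
    )

# Illustrative furnace inventory: a small speiss phase coexisting with a larger matte phase.
speiss, matte = 1_000.0, 20_000.0   # kg, made-up values for illustration
print(f"PGM  to speiss: {fraction_in_speiss(1000, speiss, matte):.1%}")  # ~1000x ratio (text)
print(f"Gold to speiss: {fraction_in_speiss(100,  speiss, matte):.1%}")  # ~100x  ratio (text)
# Even a relatively small speiss phase collects most of the platinum group metals
# and a large share of the gold under these assumptions.
```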
Speiss
[ "Chemistry", "Materials_science" ]
196
[ "Metallurgical processes", "Metallurgy" ]
9,239,056
https://en.wikipedia.org/wiki/Matte%20%28metallurgy%29
Matte is a term used in the field of pyrometallurgy given to the molten metal sulfide phases typically formed during smelting of copper, nickel, and other base metals. Typically, a matte is the phase in which the principal metal being extracted is recovered prior to a final reduction process (usually converting) to produce blister copper. The matte may also collect some valuable minor constituents such as noble metals, minor base metals, selenium or tellurium. Mattes may also be used to collect impurities from a metal phase, such as in the case of antimony smelting. Molten mattes are insoluble in both slag and metal phases. This insolubility, combined with differences in specific gravities between mattes, slags, and metals, allows for separation of the molten phases. References Metallurgy
Matte (metallurgy)
[ "Chemistry", "Materials_science", "Engineering" ]
182
[ "Metallurgy", "Materials science stubs", "Materials science", "nan" ]
154,242
https://en.wikipedia.org/wiki/PH%20meter
A pH meter is a scientific instrument that measures the hydrogen-ion activity in water-based solutions, indicating its acidity or alkalinity expressed as pH. The pH meter measures the difference in electrical potential between a pH electrode and a reference electrode, and so the pH meter is sometimes referred to as a "potentiometric pH meter". The difference in electrical potential relates to the acidity or pH of the solution. Testing of pH via pH meters (pH-metry) is used in many applications ranging from laboratory experimentation to quality control. Applications The rate and outcome of chemical reactions taking place in water often depend on the acidity of the water, which is therefore useful to know and is typically measured by means of a pH meter. Knowledge of pH is useful or critical in many situations, including chemical laboratory analyses. pH meters are used for soil measurements in agriculture; water quality for municipal water supplies, swimming pools and environmental remediation; brewing of wine or beer; manufacturing; healthcare and clinical applications such as blood chemistry; and many other applications. Advances in the instrumentation and in detection have expanded the number of applications in which pH measurements can be conducted. The devices have been miniaturized, enabling direct measurement of pH inside living cells. In addition to measuring the pH of liquids, specially designed electrodes are available to measure the pH of semi-solid substances, such as foods. These have tips suitable for piercing semi-solids, have electrode materials compatible with ingredients in food, and are resistant to clogging. Design and use Principle of operation Potentiometric pH meters measure the voltage between two electrodes and display the result converted into the corresponding pH value. They comprise a simple electronic amplifier and a pair of electrodes, or alternatively a combination electrode, and some form of display calibrated in pH units. The electrode pair usually consists of a glass electrode and a reference electrode, or the two are combined into a single combination electrode. The electrodes, or probes, are inserted into the solution to be tested. pH meters may also be based on the antimony electrode (typically used for rough conditions) or the quinhydrone electrode. In order to accurately measure the potential difference between the two sides of the glass membrane, a reference electrode, typically a silver chloride electrode or a calomel electrode, is required on each side of the membrane. The purpose of these reference electrodes is to measure changes in the potential on their respective sides. One is built into the glass electrode. The other, which makes contact with the test solution through a porous plug, may be a separate reference electrode or may be built into a combination electrode. The resulting voltage will be the potential difference between the two sides of the glass membrane, possibly offset by some difference between the two reference electrodes, which can be compensated for. The article on the glass electrode has a good description and figure. The design of the electrodes is the key part: these are rod-like structures usually made of glass, with a bulb containing the sensor at the bottom. The glass electrode for measuring the pH has a glass bulb specifically designed to be selective to hydrogen-ion concentration. On immersion in the solution to be tested, hydrogen ions in the test solution exchange for other positively charged ions on the glass bulb, creating an electrochemical potential across the bulb.
The electronic amplifier detects the difference in electrical potential between the two electrodes generated in the measurement and converts the potential difference to pH units. The magnitude of the electrochemical potential across the glass bulb is linearly related to the pH according to the Nernst equation. The reference electrode is insensitive to the pH of the solution, being composed of a metallic conductor, which connects to the display. This conductor is immersed in an electrolyte solution, typically potassium chloride, which comes into contact with the test solution through a porous ceramic membrane. The display consists of a voltmeter, which displays voltage in units of pH. On immersion of the glass electrode and the reference electrode in the test solution, an electrical circuit is completed, in which there is a potential difference created and detected by the voltmeter. The circuit can be thought of as going from the conductive element of the reference electrode to the surrounding potassium-chloride solution, through the ceramic membrane to the test solution, the hydrogen-ion-selective glass of the glass electrode, to the solution inside the glass electrode, to the silver of the glass electrode, and finally the voltmeter of the display device. The voltage varies from test solution to test solution depending on the potential difference created by the difference in hydrogen-ion concentrations on each side of the glass membrane between the test solution and the solution inside the glass electrode. All other potential differences in the circuit do not vary with pH and are corrected for by means of the calibration. For simplicity, many pH meters use a combination probe, constructed with the glass electrode and the reference electrode contained within a single probe. A detailed description of combination electrodes is given in the article on glass electrodes. The pH meter is calibrated with solutions of known pH, typically before each use, to ensure accuracy of measurement. To measure the pH of a solution, the electrodes are used as probes, which are dipped into the test solutions and held there sufficiently long for the hydrogen ions in the test solution to equilibrate with the ions on the surface of the bulb on the glass electrode. This equilibration provides a stable pH measurement. pH electrode and reference electrode design Details of the fabrication and resulting microstructure of the glass membrane of the pH electrode are maintained as trade secrets by the manufacturers. However, certain aspects of design are published. Glass is a solid electrolyte, for which alkali-metal ions can carry current. The pH-sensitive glass membrane is generally spherical to simplify the manufacture of a uniform membrane. These membranes are up to 0.4 millimeters in thickness, thicker than original designs, so as to render the probes durable. The glass has silicate chemical functionality on its surface, which provides binding sites for alkali-metal ions and hydrogen ions from the solutions. This provides an ion-exchange capacity in the range of 10−6 to 10−8 mol/cm2. Selectivity for hydrogen ions (H+) arises from a balance of ionic charge, volume requirements versus other ions, and the coordination number of other ions. Electrode manufacturers have developed compositions that suitably balance these factors, most notably lithium glass. The silver chloride electrode is most commonly used as a reference electrode in pH meters, although some designs use the saturated calomel electrode. 
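A minimal sketch of the ideal Nernst conversion just described, from measured cell voltage to pH, is shown below. The assumption of zero offset at pH 7 and the example voltages are illustrative; practical meters determine the actual slope and offset empirically by buffer calibration, as described in the calibration section.

```python
import math

R, F = 8.314, 96485.0   # gas constant (J/(mol*K)), Faraday constant (C/mol)

def nernst_slope_volts(temp_c):
    """Ideal Nernst slope in volts per pH unit (about 0.0592 V at 25 degrees C)."""
    return math.log(10) * R * (temp_c + 273.15) / F

def ph_from_voltage(e_cell_volts, temp_c=25.0, e_iso_volts=0.0, ph_iso=7.0):
    """Convert electrode voltage to pH, assuming an ideal electrode whose output
    equals e_iso_volts at ph_iso (commonly about 0 V at pH 7 for combination electrodes)."""
    return ph_iso - (e_cell_volts - e_iso_volts) / nernst_slope_volts(temp_c)

print(f"slope at 25 C: {nernst_slope_volts(25.0) * 1000:.2f} mV/pH")
print(f"+0.177 V at 25 C -> pH {ph_from_voltage(0.177):.2f}")   # acidic sample
print(f"-0.118 V at 25 C -> pH {ph_from_voltage(-0.118):.2f}")  # basic sample
```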
The silver chloride electrode is simple to manufacture and provides high reproducibility. The reference electrode usually consists of a platinum wire that has contact with a silver/silver chloride mixture, which is immersed in a potassium chloride solution. There is a ceramic plug, which serves as a contact to the test solution, providing low resistance while preventing mixing of the two solutions. With these electrode designs, the voltmeter detects potential differences of ±1400 millivolts. The electrodes are further designed to equilibrate rapidly with test solutions to facilitate ease of use. The equilibration times are typically less than one second, although they increase as the electrodes age. Maintenance Because of the sensitivity of the electrodes to contaminants, cleanliness of the probes is essential for accuracy and precision. Probes are generally kept moist when not in use with a medium appropriate for the particular probe, which is typically an aqueous solution available from probe manufacturers. Probe manufacturers provide instructions for cleaning and maintaining their probe designs. For illustration, one maker of laboratory-grade pH probes gives cleaning instructions for specific contaminants: general cleaning (15-minute soak in a solution of bleach and detergent), salt (hydrochloric acid solution followed by sodium hydroxide and water), grease (detergent or methanol), clogged reference junction (KCl solution), protein deposits (pepsin and HCl, 1% solution), and air bubbles. Calibration and operation The German Institute for Standardization publishes a standard for pH measurement using pH meters, DIN 19263. Very precise measurements require that the pH meter is calibrated before each measurement; more typically, calibration is performed once per day of operation. Calibration is needed because the glass electrode does not give reproducible electrostatic potentials over longer periods of time. Consistent with principles of good laboratory practice, calibration is performed with at least two standard buffer solutions that span the range of pH values to be measured. For general purposes, buffers at pH 4.00 and pH 10.00 are suitable. The pH meter has one calibration control to set the meter reading equal to the value of the first standard buffer and a second control to adjust the meter reading to the value of the second buffer. A third control allows the temperature to be set. Standard buffer sachets, available from a variety of suppliers, usually document how the buffer value depends on temperature. More precise measurements sometimes require calibration at three different pH values. Some pH meters provide built-in temperature-coefficient correction, with temperature thermocouples in the electrode probes. The calibration process correlates the voltage produced by the probe (approximately 0.06 volts per pH unit) with the pH scale. Good laboratory practice dictates that, after each measurement, the probes are rinsed with distilled or deionized water to remove any traces of the solution being measured, blotted with a scientific wipe to absorb any remaining water (which could dilute the sample and thus alter the reading), and then immersed in a storage solution suitable for the particular probe type. Types of pH meters In general there are three major categories of pH meters. Benchtop pH meters are often used in laboratories to measure samples that are brought to the instrument for analysis. 
Portable, or field pH meters, are handheld pH meters that are used to take the pH of a sample in a field or production site. In-line or in situ pH meters, also called pH analyzers, are used to measure pH continuously in a process, and can stand-alone, or be connected to a higher level information system for process control. pH meters range from simple and inexpensive pen-like devices to complex and expensive laboratory instruments with computer interfaces and several inputs for indicator and temperature measurements to be entered to adjust for the variation in pH caused by temperature. The output can be digital or analog, and the devices can be battery-powered or rely on line power. Some versions use telemetry to connect the electrodes to the voltmeter display device. Specialty meters and probes are available for use in special applications, such as harsh environments and biological microenvironments. There are also holographic pH sensors, which allow pH measurement colorimetrically, making use of the variety of pH indicators that are available. Additionally, there are commercially available pH meters based on solid state electrodes, rather than conventional glass electrodes. History The concept of pH was defined in 1909 by S. P. L. Sørensen, and electrodes were used for pH measurement in the 1920s. In October 1934, Arnold Orville Beckman registered the first patent for a complete chemical instrument for the measurement of pH, U.S. Patent No. 2,058,761, for his "acidimeter", later renamed the pH meter. Beckman developed the prototype as an assistant professor of chemistry at the California Institute of Technology, when asked to devise a quick and accurate method for measuring the acidity of lemon juice for the California Fruit Growers Exchange (Sunkist). On April 8, 1935, Beckman's renamed National Technical Laboratories focused on the manufacture of scientific instruments, with the Arthur H. Thomas Company as a distributor for its pH meter. In its first full year of sales, 1936, the company sold 444 pH meters for $60,000 in sales. In years to come, the company sold millions of the units. In 2004 the Beckman pH meter was designated an ACS National Historic Chemical Landmark in recognition of its significance as the first commercially successful electronic pH meter. The Radiometer Corporation of Denmark was founded in 1935, and began marketing a pH meter for medical use around 1936, but "the development of automatic pH-meters for industrial purposes was neglected. Instead American instrument makers successfully developed industrial pH-meters with a wide variety of applications, such as in breweries, paper works, alum works, and water treatment systems." In the 1940s the electrodes for pH meters were often difficult to make, or unreliable due to brittle glass. Dr. Werner Ingold began to industrialize the production of single-rod measuring cells, a combination of measurement and reference electrode in one construction unit, which led to broader acceptance in a wide range of industries including pharmaceutical production. Beckman marketed a portable "Pocket pH Meter" as early as 1956, but it did not have a digital read-out. In the 1970s Jenco Electronics of Taiwan designed and manufactured the first portable digital pH meter. This meter was sold under the label of the Cole-Parmer Corporation. Building a pH meter Specialized manufacturing is required for the electrodes, and details of their design and construction are typically trade secrets. 
However, with the purchase of suitable electrodes, a standard multimeter can be used to complete the construction of a pH meter. Commercial suppliers also offer voltmeter displays that simplify use, including calibration and temperature compensation. See also Antimony electrode Ion-selective electrodes ISFET pH electrode Potentiometry Quinhydrone electrode Saturated calomel electrode Silver chloride electrode Standard hydrogen electrode References External links Introduction to pH measurement – Overview of pH and pH measurement at the Omega Engineering website Development of the Beckman pH Meter – National Historic Chemical Landmark of the American Chemical Society pH Measurement Handbook – A publication of the Thermo-Scientific Co. Acid–base chemistry Electrochemistry Measuring instruments Scientific instruments
The Fermi level of a solid-state body is the thermodynamic work required to add one electron to the body. It is a thermodynamic quantity usually denoted by μ or EF for brevity. The Fermi level does not include the work required to remove the electron from wherever it came from. A precise understanding of the Fermi level—how it relates to electronic band structure in determining electronic properties; how it relates to the voltage and flow of charge in an electronic circuit—is essential to an understanding of solid-state physics. In band structure theory, used in solid state physics to analyze the energy levels in a solid, the Fermi level can be considered to be a hypothetical energy level of an electron, such that at thermodynamic equilibrium this energy level would have a 50% probability of being occupied at any given time. The position of the Fermi level in relation to the band energy levels is a crucial factor in determining electrical properties. The Fermi level does not necessarily correspond to an actual energy level (in an insulator the Fermi level lies in the band gap), nor does it require the existence of a band structure. Nonetheless, the Fermi level is a precisely defined thermodynamic quantity, and differences in Fermi level can be measured simply with a voltmeter. Voltage measurement Sometimes it is said that electric currents are driven by differences in electrostatic potential (Galvani potential), but this is not exactly true. As a counterexample, multi-material devices such as p–n junctions contain internal electrostatic potential differences at equilibrium, yet without any accompanying net current; if a voltmeter is attached to the junction, one simply measures zero volts. Clearly, the electrostatic potential is not the only factor influencing the flow of charge in a material—Pauli repulsion, carrier concentration gradients, electromagnetic induction, and thermal effects also play an important role. In fact, the quantity called voltage as measured in an electronic circuit has a simple relationship to the chemical potential for electrons (Fermi level). When the leads of a voltmeter are attached to two points in a circuit, the displayed voltage is a measure of the total work transferred when a unit charge is allowed to move from one point to the other. If a simple wire is connected between two points of differing voltage (forming a short circuit), current will flow from positive to negative voltage, converting the available work into heat. The Fermi level of a body expresses the work required to add an electron to it, or equally the work obtained by removing an electron. Therefore, VA − VB, the observed difference in voltage between two points, A and B, in an electronic circuit is exactly related to the corresponding chemical potential difference, μA − μB, in Fermi level by the formula where −e is the electron charge. From the above discussion it can be seen that electrons will move from a body of high μ (low voltage) to low μ (high voltage) if a simple path is provided. This flow of electrons will cause the lower μ to increase (due to charging or other repulsion effects) and likewise cause the higher μ to decrease. Eventually, μ will settle down to the same value in both bodies. This leads to an important fact regarding the equilibrium (off) state of an electronic circuit: This also means that the voltage (measured with a voltmeter) between any two points will be zero, at equilibrium. 
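In symbols, with −e the charge of the electron, the relationship between voltmeter readings and Fermi levels described above is usually written as:

```latex
V_A - V_B \;=\; -\,\frac{\mu_A - \mu_B}{e}
```

so the body with the higher Fermi level sits at the lower voltage, consistent with the direction of electron flow stated above.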
Note that thermodynamic equilibrium here requires that the circuit be internally connected and not contain any batteries or other power sources, nor any variations in temperature. Band structure of solids In the band theory of solids, electrons occupy a series of bands composed of single-particle energy eigenstates each labelled by ϵ. Although this single particle picture is an approximation, it greatly simplifies the understanding of electronic behaviour and it generally provides correct results when applied correctly. The Fermi–Dirac distribution, , gives the probability that (at thermodynamic equilibrium) a state having energy ϵ is occupied by an electron: Here, T is the absolute temperature and kB is the Boltzmann constant. If there is a state at the Fermi level (ϵ = μ), then this state will have a 50% chance of being occupied. The distribution is plotted in the left figure. The closer f is to 1, the higher chance this state is occupied. The closer f is to 0, the higher chance this state is empty. The location of μ within a material's band structure is important in determining the electrical behaviour of the material. In an insulator, μ lies within a large band gap, far away from any states that are able to carry current. In a metal, semimetal or degenerate semiconductor, μ lies within a delocalized band. A large number of states nearby μ are thermally active and readily carry current. In an intrinsic or lightly doped semiconductor, μ is close enough to a band edge that there are a dilute number of thermally excited carriers residing near that band edge. In semiconductors and semimetals the position of μ relative to the band structure can usually be controlled to a significant degree by doping or gating. These controls do not change μ which is fixed by the electrodes, but rather they cause the entire band structure to shift up and down (sometimes also changing the band structure's shape). For further information about the Fermi levels of semiconductors, see (for example) Sze. Local conduction band referencing, internal chemical potential and the parameter ζ If the symbol ℰ is used to denote an electron energy level measured relative to the energy of the edge of its enclosing band, ϵC, then in general we have We can define a parameter ζ that references the Fermi level with respect to the band edge:It follows that the Fermi–Dirac distribution function can be written asThe band theory of metals was initially developed by Sommerfeld, from 1927 onwards, who paid great attention to the underlying thermodynamics and statistical mechanics. Confusingly, in some contexts the band-referenced quantity ζ may be called the Fermi level, chemical potential, or electrochemical potential, leading to ambiguity with the globally-referenced Fermi level. In this article, the terms conduction-band referenced Fermi level or internal chemical potential are used to refer to ζ. ζ is directly related to the number of active charge carriers as well as their typical kinetic energy, and hence it is directly involved in determining the local properties of the material (such as electrical conductivity). For this reason it is common to focus on the value of ζ when concentrating on the properties of electrons in a single, homogeneous conductive material. By analogy to the energy states of a free electron, the ℰ of a state is the kinetic energy of that state and ϵC is its potential energy. With this in mind, the parameter, ζ, could also be labelled the Fermi kinetic energy. 
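Written out, the Fermi–Dirac occupation function and the band-referenced quantities discussed above take the standard forms:

```latex
f(\epsilon) = \frac{1}{e^{(\epsilon-\mu)/k_B T} + 1},
\qquad
\mathcal{E} = \epsilon - \epsilon_C,
\qquad
\zeta = \mu - \epsilon_C,
\qquad
f(\mathcal{E}) = \frac{1}{e^{(\mathcal{E}-\zeta)/k_B T} + 1}
```

A small numerical sketch of the occupation function; the 300 K temperature and the 0.1 eV offset are illustrative choices, not values from the text:

```python
import math

def fermi_dirac(energy_eV, mu_eV, temperature_K):
    """Occupation probability of a single-particle state at the given energy."""
    kT = 8.617e-5 * temperature_K  # Boltzmann constant in eV/K
    return 1.0 / (math.exp((energy_eV - mu_eV) / kT) + 1.0)

print(fermi_dirac(0.0, 0.0, 300))  # exactly at the Fermi level: 0.5
print(fermi_dirac(0.1, 0.0, 300))  # 0.1 eV above the Fermi level: roughly 0.02
```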
Unlike μ, the parameter, ζ, is not a constant at equilibrium, but rather varies from location to location in a material due to variations in ϵC, which is determined by factors such as material quality and impurities/dopants. Near the surface of a semiconductor or semimetal, ζ can be strongly controlled by externally applied electric fields, as is done in a field effect transistor. In a multi-band material, ζ may even take on multiple values in a single location. For example, in a piece of aluminum there are two conduction bands crossing the Fermi level (even more bands in other materials); each band has a different edge energy, ϵC, and a different ζ. The value of ζ at zero temperature is widely known as the Fermi energy, sometimes written ζ0. Confusingly (again), the name Fermi energy sometimes is used to refer to ζ at non-zero temperature. Temperature out of equilibrium The Fermi level, μ, and temperature, T, are well defined constants for a solid-state device in thermodynamic equilibrium situation, such as when it is sitting on the shelf doing nothing. When the device is brought out of equilibrium and put into use, then strictly speaking the Fermi level and temperature are no longer well defined. Fortunately, it is often possible to define a quasi-Fermi level and quasi-temperature for a given location, that accurately describe the occupation of states in terms of a thermal distribution. The device is said to be in quasi-equilibrium when and where such a description is possible. The quasi-equilibrium approach allows one to build a simple picture of some non-equilibrium effects as the electrical conductivity of a piece of metal (as resulting from a gradient of μ) or its thermal conductivity (as resulting from a gradient in T). The quasi-μ and quasi-T can vary (or not exist at all) in any non-equilibrium situation, such as: If the system contains a chemical imbalance (as in a battery). If the system is exposed to changing electromagnetic fields (as in capacitors, inductors, and transformers). Under illumination from a light-source with a different temperature, such as the sun (as in solar cells), When the temperature is not constant within the device (as in thermocouples), When the device has been altered, but has not had enough time to re-equilibrate (as in piezoelectric or pyroelectric substances). In some situations, such as immediately after a material experiences a high-energy laser pulse, the electron distribution cannot be described by any thermal distribution. One cannot define the quasi-Fermi level or quasi-temperature in this case; the electrons are simply said to be non-thermalized. In less dramatic situations, such as in a solar cell under constant illumination, a quasi-equilibrium description may be possible but requiring the assignment of distinct values of μ and T to different bands (conduction band vs. valence band). Even then, the values of μ and T may jump discontinuously across a material interface (e.g., p–n junction) when a current is being driven, and be ill-defined at the interface itself. Technicalities Nomenclature The term Fermi level is mainly used in discussing the solid state physics of electrons in semiconductors, and a precise usage of this term is necessary to describe band diagrams in devices comprising different materials with different levels of doping. In these contexts, however, one may also see Fermi level used imprecisely to refer to the band-referenced Fermi level, μ − ϵC, called ζ above. 
It is common to see scientists and engineers refer to "controlling", "pinning", or "tuning" the Fermi level inside a conductor, when they are in fact describing changes in ϵC due to doping or the field effect. In fact, thermodynamic equilibrium guarantees that the Fermi level in a conductor is always fixed to be exactly equal to the Fermi level of the electrodes; only the band structure (not the Fermi level) can be changed by doping or the field effect (see also band diagram). A similar ambiguity exists between the terms, chemical potential and electrochemical potential. It is also important to note that Fermi level is not necessarily the same thing as Fermi energy. In the wider context of quantum mechanics, the term Fermi energy usually refers to the maximum kinetic energy of a fermion in an idealized non-interacting, disorder free, zero temperature Fermi gas. This concept is very theoretical (there is no such thing as a non-interacting Fermi gas, and zero temperature is impossible to achieve). However, it finds some use in approximately describing white dwarfs, neutron stars, atomic nuclei, and electrons in a metal. On the other hand, in the fields of semiconductor physics and engineering, Fermi energy often is used to refer to the Fermi level described in this article. Fermi level referencing and the location of zero Fermi level Much like the choice of origin in a coordinate system, the zero point of energy can be defined arbitrarily. Observable phenomena only depend on energy differences. When comparing distinct bodies, however, it is important that they all be consistent in their choice of the location of zero energy, or else nonsensical results will be obtained. It can therefore be helpful to explicitly name a common point to ensure that different components are in agreement. On the other hand, if a reference point is inherently ambiguous (such as "the vacuum", see below) it will instead cause more problems. A practical and well-justified choice of common point is a bulky, physical conductor, such as the electrical ground or earth. Such a conductor can be considered to be in a good thermodynamic equilibrium and so its μ is well defined. It provides a reservoir of charge, so that large numbers of electrons may be added or removed without incurring charging effects. It also has the advantage of being accessible, so that the Fermi level of any other object can be measured simply with a voltmeter. Why it is not advisable to use "the energy in vacuum" as a reference zero In principle, one might consider using the state of a stationary electron in the vacuum as a reference point for energies. This approach is not advisable unless one is careful to define exactly where the vacuum is. The problem is that not all points in the vacuum are equivalent. At thermodynamic equilibrium, it is typical for electrical potential differences of order 1 V to exist in the vacuum (Volta potentials). The source of this vacuum potential variation is the variation in work function between the different conducting materials exposed to vacuum. Just outside a conductor, the electrostatic potential depends sensitively on the material, as well as which surface is selected (its crystal orientation, contamination, and other details). The parameter that gives the best approximation to universality is the Earth-referenced Fermi level suggested above. This also has the advantage that it can be measured with a voltmeter. 
Discrete charging effects in small systems In cases where the "charging effects" due to a single electron are non-negligible, the above definitions should be clarified. For example, consider a capacitor made of two identical parallel-plates. If the capacitor is uncharged, the Fermi level is the same on both sides, so one might think that it should take no energy to move an electron from one plate to the other. But when the electron has been moved, the capacitor has become (slightly) charged, so this does take a slight amount of energy. In a normal capacitor, this is negligible, but in a nano-scale capacitor it can be more important. In this case one must be precise about the thermodynamic definition of the chemical potential as well as the state of the device: is it electrically isolated, or is it connected to an electrode? When the body is able to exchange electrons and energy with an electrode (reservoir), it is described by the grand canonical ensemble. The value of chemical potential can be said to be fixed by the electrode, and the number of electrons on the body may fluctuate. In this case, the chemical potential of a body is the infinitesimal amount of work needed to increase the average number of electrons by an infinitesimal amount (even though the number of electrons at any time is an integer, the average number varies continuously.): where is the free energy function of the grand canonical ensemble. If the number of electrons in the body is fixed (but the body is still thermally connected to a heat bath), then it is in the canonical ensemble. We can define a "chemical potential" in this case literally as the work required to add one electron to a body that already has exactly electrons, where is the free energy function of the canonical ensemble, alternatively, These chemical potentials are not equivalent, , except in the thermodynamic limit. The distinction is important in small systems such as those showing Coulomb blockade. The parameter, , (i.e., in the case where the number of electrons is allowed to fluctuate) remains exactly related to the voltmeter voltage, even in small systems. To be precise, then, the Fermi level is defined not by a deterministic charging event by one electron charge, but rather a statistical charging event by an infinitesimal fraction of an electron. Notes References Electronic band structures Fermi–Dirac statistics th:ระดับพลังงานแฟร์มี vi:Mức Fermi
In fluid dynamics, turbulence or turbulent flow is fluid motion characterized by chaotic changes in pressure and flow velocity. It is in contrast to laminar flow, which occurs when a fluid flows in parallel layers with no disruption between those layers. Turbulence is commonly observed in everyday phenomena such as surf, fast flowing rivers, billowing storm clouds, or smoke from a chimney, and most fluid flows occurring in nature or created in engineering applications are turbulent. Turbulence is caused by excessive kinetic energy in parts of a fluid flow, which overcomes the damping effect of the fluid's viscosity. For this reason, turbulence is commonly realized in low viscosity fluids. In general terms, in turbulent flow, unsteady vortices appear of many sizes which interact with each other, consequently drag due to friction effects increases. The onset of turbulence can be predicted by the dimensionless Reynolds number, the ratio of kinetic energy to viscous damping in a fluid flow. However, turbulence has long resisted detailed physical analysis, and the interactions within turbulence create a very complex phenomenon. Physicist Richard Feynman described turbulence as the most important unsolved problem in classical physics. The turbulence intensity affects many fields, for examples fish ecology, air pollution, precipitation, and climate change. Examples of turbulence Smoke rising from a cigarette. For the first few centimeters, the smoke is laminar. The smoke plume becomes turbulent as its Reynolds number increases with increases in flow velocity and characteristic length scale. Flow over a golf ball. (This can be best understood by considering the golf ball to be stationary, with air flowing over it.) If the golf ball were smooth, the boundary layer flow over the front of the sphere would be laminar at typical conditions. However, the boundary layer would separate early, as the pressure gradient switched from favorable (pressure decreasing in the flow direction) to unfavorable (pressure increasing in the flow direction), creating a large region of low pressure behind the ball that creates high form drag. To prevent this, the surface is dimpled to perturb the boundary layer and promote turbulence. This results in higher skin friction, but it moves the point of boundary layer separation further along, resulting in lower drag. Clear-air turbulence experienced during airplane flight, as well as poor astronomical seeing (the blurring of images seen through the atmosphere). Most of the terrestrial atmospheric circulation. The oceanic and atmospheric mixed layers and intense oceanic currents. The flow conditions in many industrial equipment (such as pipes, ducts, precipitators, gas scrubbers, dynamic scraped surface heat exchangers, etc.) and machines (for instance, internal combustion engines and gas turbines). The external flow over all kinds of vehicles such as cars, airplanes, ships, and submarines. The motions of matter in stellar atmospheres. A jet exhausting from a nozzle into a quiescent fluid. As the flow emerges into this external fluid, shear layers originating at the lips of the nozzle are created. These layers separate the fast moving jet from the external fluid, and at a certain critical Reynolds number they become unstable and break down to turbulence. Biologically generated turbulence resulting from swimming animals affects ocean mixing. Snow fences work by inducing turbulence in the wind, forcing it to drop much of its snow load near the fence. 
Bridge supports (piers) in water. When river flow is slow, water flows smoothly around the support legs. When the flow is faster, a higher Reynolds number is associated with the flow. The flow may start off laminar but is quickly separated from the leg and becomes turbulent. In many geophysical flows (rivers, atmospheric boundary layer), the flow turbulence is dominated by the coherent structures and turbulent events. A turbulent event is a series of turbulent fluctuations that contain more energy than the average flow turbulence. The turbulent events are associated with coherent flow structures such as eddies and turbulent bursting, and they play a critical role in terms of sediment scour, accretion and transport in rivers as well as contaminant mixing and dispersion in rivers and estuaries, and in the atmosphere. In the medical field of cardiology, a stethoscope is used to detect heart sounds and bruits, which are due to turbulent blood flow. In normal individuals, heart sounds are a product of turbulent flow as heart valves close. However, in some conditions turbulent flow can be audible due to other reasons, some of them pathological. For example, in advanced atherosclerosis, bruits (and therefore turbulent flow) can be heard in some vessels that have been narrowed by the disease process. Recently, turbulence in porous media became a highly debated subject. Strategies used by animals for olfactory navigation, and their success, are heavily influenced by turbulence affecting the odor plume. Features Turbulence is characterized by the following features: Irregularity Turbulent flows are always highly irregular. For this reason, turbulence problems are normally treated statistically rather than deterministically. Turbulent flow is chaotic. However, not all chaotic flows are turbulent. Diffusivity The readily available supply of energy in turbulent flows tends to accelerate the homogenization (mixing) of fluid mixtures. The characteristic which is responsible for the enhanced mixing and increased rates of mass, momentum and energy transports in a flow is called "diffusivity". Turbulent diffusion is usually described by a turbulent diffusion coefficient. This turbulent diffusion coefficient is defined in a phenomenological sense, by analogy with the molecular diffusivities, but it does not have a true physical meaning, being dependent on the flow conditions, and not a property of the fluid itself. In addition, the turbulent diffusivity concept assumes a constitutive relation between a turbulent flux and the gradient of a mean variable similar to the relation between flux and gradient that exists for molecular transport. In the best case, this assumption is only an approximation. Nevertheless, the turbulent diffusivity is the simplest approach for quantitative analysis of turbulent flows, and many models have been postulated to calculate it. For instance, in large bodies of water like oceans this coefficient can be found using Richardson's four-third power law and is governed by the random walk principle. In rivers and large ocean currents, the diffusion coefficient is given by variations of Elder's formula. Rotationality Turbulent flows have non-zero vorticity and are characterized by a strong three-dimensional vortex generation mechanism known as vortex stretching. In fluid dynamics, they are essentially vortices subjected to stretching associated with a corresponding increase of the component of vorticity in the stretching direction—due to the conservation of angular momentum. 
On the other hand, vortex stretching is the core mechanism on which the turbulence energy cascade relies to establish and maintain identifiable structure function. In general, the stretching mechanism implies thinning of the vortices in the direction perpendicular to the stretching direction due to volume conservation of fluid elements. As a result, the radial length scale of the vortices decreases and the larger flow structures break down into smaller structures. The process continues until the small scale structures are small enough that their kinetic energy can be transformed by the fluid's molecular viscosity into heat. Turbulent flow is always rotational and three dimensional. For example, atmospheric cyclones are rotational but their substantially two-dimensional shapes do not allow vortex generation and so are not turbulent. On the other hand, oceanic flows are dispersive but essentially non rotational and therefore are not turbulent. Dissipation To sustain turbulent flow, a persistent source of energy supply is required because turbulence dissipates rapidly as the kinetic energy is converted into internal energy by viscous shear stress. Turbulence causes the formation of eddies of many different length scales. Most of the kinetic energy of the turbulent motion is contained in the large-scale structures. The energy "cascades" from these large-scale structures to smaller scale structures by an inertial and essentially inviscid mechanism. This process continues, creating smaller and smaller structures which produces a hierarchy of eddies. Eventually this process creates structures that are small enough that molecular diffusion becomes important and viscous dissipation of energy finally takes place. The scale at which this happens is the Kolmogorov length scale. Via this energy cascade, turbulent flow can be realized as a superposition of a spectrum of flow velocity fluctuations and eddies upon a mean flow. The eddies are loosely defined as coherent patterns of flow velocity, vorticity and pressure. Turbulent flows may be viewed as made of an entire hierarchy of eddies over a wide range of length scales and the hierarchy can be described by the energy spectrum that measures the energy in flow velocity fluctuations for each length scale (wavenumber). The scales in the energy cascade are generally uncontrollable and highly non-symmetric. Nevertheless, based on these length scales these eddies can be divided into three categories. Integral time scale The integral time scale for a Lagrangian flow can be defined as: where u′ is the velocity fluctuation, and is the time lag between measurements. Integral length scales Large eddies obtain energy from the mean flow and also from each other. Thus, these are the energy production eddies which contain most of the energy. They have the large flow velocity fluctuation and are low in frequency. Integral scales are highly anisotropic and are defined in terms of the normalized two-point flow velocity correlations. The maximum length of these scales is constrained by the characteristic length of the apparatus. For example, the largest integral length scale of pipe flow is equal to the pipe diameter. In the case of atmospheric turbulence, this length can reach up to the order of several hundreds kilometers.: The integral length scale can be defined as where r is the distance between two measurement locations, and u′ is the velocity fluctuation in that same direction. 
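One standard way of writing the autocorrelation-based definitions referred to above, using angle brackets for the ensemble (or time) average, is:

```latex
T_{\mathrm{int}} = \int_0^{\infty} \frac{\langle u'(t)\,u'(t+\tau)\rangle}{\langle u'(t)^2\rangle}\, d\tau,
\qquad
L_{\mathrm{int}} = \int_0^{\infty} \frac{\langle u'(x)\,u'(x+r)\rangle}{\langle u'(x)^2\rangle}\, dr
```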
Kolmogorov length scales Smallest scales in the spectrum that form the viscous sub-layer range. In this range, the energy input from nonlinear interactions and the energy drain from viscous dissipation are in exact balance. The small scales have high frequency, causing turbulence to be locally isotropic and homogeneous. Taylor microscales The intermediate scales between the largest and the smallest scales which make the inertial subrange. Taylor microscales are not dissipative scales, but pass down the energy from the largest to the smallest without dissipation. Some literatures do not consider Taylor microscales as a characteristic length scale and consider the energy cascade to contain only the largest and smallest scales; while the latter accommodate both the inertial subrange and the viscous sublayer. Nevertheless, Taylor microscales are often used in describing the term "turbulence" more conveniently as these Taylor microscales play a dominant role in energy and momentum transfer in the wavenumber space. Although it is possible to find some particular solutions of the Navier–Stokes equations governing fluid motion, all such solutions are unstable to finite perturbations at large Reynolds numbers. Sensitive dependence on the initial and boundary conditions makes fluid flow irregular both in time and in space so that a statistical description is needed. The Russian mathematician Andrey Kolmogorov proposed the first statistical theory of turbulence, based on the aforementioned notion of the energy cascade (an idea originally introduced by Richardson) and the concept of self-similarity. As a result, the Kolmogorov microscales were named after him. It is now known that the self-similarity is broken so the statistical description is presently modified. A complete description of turbulence is one of the unsolved problems in physics. According to an apocryphal story, Werner Heisenberg was asked what he would ask God, given the opportunity. His reply was: "When I meet God, I am going to ask him two questions: Why relativity? And why turbulence? I really believe he will have an answer for the first." A similar witticism has been attributed to Horace Lamb in a speech to the British Association for the Advancement of Science: "I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather more optimistic." Onset of turbulence The onset of turbulence can be, to some extent, predicted by the Reynolds number, which is the ratio of inertial forces to viscous forces within a fluid which is subject to relative internal movement due to different fluid velocities, in what is known as a boundary layer in the case of a bounding surface such as the interior of a pipe. A similar effect is created by the introduction of a stream of higher velocity fluid, such as the hot gases from a flame in air. This relative movement generates fluid friction, which is a factor in developing turbulent flow. Counteracting this effect is the viscosity of the fluid, which as it increases, progressively inhibits turbulence, as more kinetic energy is absorbed by a more viscous fluid. The Reynolds number quantifies the relative importance of these two types of forces for given flow conditions, and is a guide to when turbulent flow will occur in a particular situation. 
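As a concrete illustration of how the Reynolds number serves as such a guide, the sketch below applies the standard definition Re = ρuL/μ (given in full further on) to a pipe flow; the fluid properties and pipe dimensions are assumed example values, not data from the text:

```python
def reynolds_number(density, velocity, length, dynamic_viscosity):
    """Re = rho * u * L / mu for a flow with characteristic velocity u and length L."""
    return density * velocity * length / dynamic_viscosity

# Assumed example: water near 20 degC flowing at 1 m/s through a 50 mm pipe.
re = reynolds_number(density=998.0, velocity=1.0, length=0.05, dynamic_viscosity=1.0e-3)

# Rough regime guide for pipe (Poiseuille) flow, using the thresholds quoted in the text.
if re < 2040:
    regime = "laminar"
elif re < 4000:
    regime = "transitional, turbulence interspersed with laminar flow"
else:
    regime = "typically turbulent"
print(f"Re = {re:.0f}: {regime}")  # Re = 49900: typically turbulent
```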
This ability to predict the onset of turbulent flow is an important design tool for equipment such as piping systems or aircraft wings, but the Reynolds number is also used in scaling of fluid dynamics problems, and is used to determine dynamic similitude between two different cases of fluid flow, such as between a model aircraft, and its full size version. Such scaling is not always linear and the application of Reynolds numbers to both situations allows scaling factors to be developed. A flow situation in which the kinetic energy is significantly absorbed due to the action of fluid molecular viscosity gives rise to a laminar flow regime. For this the dimensionless quantity the Reynolds number () is used as a guide. With respect to laminar and turbulent flow regimes: laminar flow occurs at low Reynolds numbers, where viscous forces are dominant, and is characterized by smooth, constant fluid motion; turbulent flow occurs at high Reynolds numbers and is dominated by inertial forces, which tend to produce chaotic eddies, vortices and other flow instabilities. The Reynolds number is defined as where: is the density of the fluid (SI units: kg/m3) is a characteristic velocity of the fluid with respect to the object (m/s) is a characteristic linear dimension (m) is the dynamic viscosity of the fluid (Pa·s or N·s/m2 or kg/(m·s)). While there is no theorem directly relating the non-dimensional Reynolds number to turbulence, flows at Reynolds numbers larger than 5000 are typically (but not necessarily) turbulent, while those at low Reynolds numbers usually remain laminar. In Poiseuille flow, for example, turbulence can first be sustained if the Reynolds number is larger than a critical value of about 2040; moreover, the turbulence is generally interspersed with laminar flow until a larger Reynolds number of about 4000. The transition occurs if the size of the object is gradually increased, or the viscosity of the fluid is decreased, or if the density of the fluid is increased. Heat and momentum transfer When flow is turbulent, particles exhibit additional transverse motion which enhances the rate of energy and momentum exchange between them thus increasing the heat transfer and the friction coefficient. Assume for a two-dimensional turbulent flow that one was able to locate a specific point in the fluid and measure the actual flow velocity of every particle that passed through that point at any given time. Then one would find the actual flow velocity fluctuating about a mean value: and similarly for temperature () and pressure (), where the primed quantities denote fluctuations superposed to the mean. This decomposition of a flow variable into a mean value and a turbulent fluctuation was originally proposed by Osborne Reynolds in 1895, and is considered to be the beginning of the systematic mathematical analysis of turbulent flow, as a sub-field of fluid dynamics. While the mean values are taken as predictable variables determined by dynamics laws, the turbulent fluctuations are regarded as stochastic variables. The heat flux and momentum transfer (represented by the shear stress ) in the direction normal to the flow for a given time are where is the heat capacity at constant pressure, is the density of the fluid, is the coefficient of turbulent viscosity and is the turbulent thermal conductivity. Kolmogorov's theory of 1941 Richardson's notion of turbulence was that a turbulent flow is composed by "eddies" of different sizes. 
The sizes define a characteristic length scale for the eddies, which are also characterized by flow velocity scales and time scales (turnover time) dependent on the length scale. The large eddies are unstable and eventually break up originating smaller eddies, and the kinetic energy of the initial large eddy is divided into the smaller eddies that stemmed from it. These smaller eddies undergo the same process, giving rise to even smaller eddies which inherit the energy of their predecessor eddy, and so on. In this way, the energy is passed down from the large scales of the motion to smaller scales until reaching a sufficiently small length scale such that the viscosity of the fluid can effectively dissipate the kinetic energy into internal energy. In his original theory of 1941, Kolmogorov postulated that for very high Reynolds numbers, the small-scale turbulent motions are statistically isotropic (i.e. no preferential spatial direction could be discerned). In general, the large scales of a flow are not isotropic, since they are determined by the particular geometrical features of the boundaries (the size characterizing the large scales will be denoted as ). Kolmogorov's idea was that in the Richardson's energy cascade this geometrical and directional information is lost, while the scale is reduced, so that the statistics of the small scales has a universal character: they are the same for all turbulent flows when the Reynolds number is sufficiently high. Thus, Kolmogorov introduced a second hypothesis: for very high Reynolds numbers the statistics of small scales are universally and uniquely determined by the kinematic viscosity and the rate of energy dissipation . With only these two parameters, the unique length that can be formed by dimensional analysis is This is today known as the Kolmogorov length scale (see Kolmogorov microscales). A turbulent flow is characterized by a hierarchy of scales through which the energy cascade takes place. Dissipation of kinetic energy takes place at scales of the order of Kolmogorov length , while the input of energy into the cascade comes from the decay of the large scales, of order . These two scales at the extremes of the cascade can differ by several orders of magnitude at high Reynolds numbers. In between there is a range of scales (each one with its own characteristic length ) that has formed at the expense of the energy of the large ones. These scales are very large compared with the Kolmogorov length, but still very small compared with the large scale of the flow (i.e. ). Since eddies in this range are much larger than the dissipative eddies that exist at Kolmogorov scales, kinetic energy is essentially not dissipated in this range, and it is merely transferred to smaller scales until viscous effects become important as the order of the Kolmogorov scale is approached. Within this range inertial effects are still much larger than viscous effects, and it is possible to assume that viscosity does not play a role in their internal dynamics (for this reason this range is called "inertial range"). Hence, a third hypothesis of Kolmogorov was that at very high Reynolds number the statistics of scales in the range are universally and uniquely determined by the scale and the rate of energy dissipation . The way in which the kinetic energy is distributed over the multiplicity of scales is a fundamental characterization of a turbulent flow. 
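In terms of the kinematic viscosity ν and the mean rate of energy dissipation ε, the Kolmogorov length mentioned above is:

```latex
\eta = \left( \frac{\nu^{3}}{\varepsilon} \right)^{1/4}
```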
For homogeneous turbulence (i.e., statistically invariant under translations of the reference frame) this is usually done by means of the energy spectrum function , where is the modulus of the wavevector corresponding to some harmonics in a Fourier representation of the flow velocity field : where is the Fourier transform of the flow velocity field. Thus, represents the contribution to the kinetic energy from all the Fourier modes with , and therefore, where is the mean turbulent kinetic energy of the flow. The wavenumber corresponding to length scale is . Therefore, by dimensional analysis, the only possible form for the energy spectrum function according with the third Kolmogorov's hypothesis is where would be a universal constant. This is one of the most famous results of Kolmogorov 1941 theory, describing transport of energy through scale space without any loss or gain. The Kolmogorov five-thirds law was first observed in a tidal channel, and considerable experimental evidence has since accumulated that supports it. Outside of the inertial area, one can find the formula below : In spite of this success, Kolmogorov theory is at present under revision. This theory implicitly assumes that the turbulence is statistically self-similar at different scales. This essentially means that the statistics are scale-invariant and non-intermittent in the inertial range. A usual way of studying turbulent flow velocity fields is by means of flow velocity increments: that is, the difference in flow velocity between points separated by a vector (since the turbulence is assumed isotropic, the flow velocity increment depends only on the modulus of ). Flow velocity increments are useful because they emphasize the effects of scales of the order of the separation when statistics are computed. The statistical scale-invariance without intermittency implies that the scaling of flow velocity increments should occur with a unique scaling exponent , so that when is scaled by a factor , should have the same statistical distribution as with independent of the scale . From this fact, and other results of Kolmogorov 1941 theory, it follows that the statistical moments of the flow velocity increments (known as structure functions in turbulence) should scale as where the brackets denote the statistical average, and the would be universal constants. There is considerable evidence that turbulent flows deviate from this behavior. The scaling exponents deviate from the value predicted by the theory, becoming a non-linear function of the order of the structure function. The universality of the constants have also been questioned. For low orders the discrepancy with the Kolmogorov value is very small, which explain the success of Kolmogorov theory in regards to low order statistical moments. In particular, it can be shown that when the energy spectrum follows a power law with , the second order structure function has also a power law, with the form Since the experimental values obtained for the second order structure function only deviate slightly from the value predicted by Kolmogorov theory, the value for is very near to (differences are about 2%). Thus the "Kolmogorov − spectrum" is generally observed in turbulence. However, for high order structure functions, the difference with the Kolmogorov scaling is significant, and the breakdown of the statistical self-similarity is clear. 
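The Kolmogorov forms referred to in this passage are, up to the (nominally universal) constants C and Cn:

```latex
E(k) = C\,\varepsilon^{2/3} k^{-5/3},
\qquad
\big\langle \big(\delta u(r)\big)^{n} \big\rangle = C_n\,(\varepsilon r)^{n/3},
\qquad
\big\langle \big(\delta u(r)\big)^{2} \big\rangle \sim \varepsilon^{2/3} r^{2/3}
```

Measured higher-order exponents fall increasingly below n/3, which is the departure from self-similarity noted above.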
This behavior, and the lack of universality of the constants, are related with the phenomenon of intermittency in turbulence and can be related to the non-trivial scaling behavior of the dissipation rate averaged over scale . This is an important area of research in this field, and a major goal of the modern theory of turbulence is to understand what is universal in the inertial range, and how to deduce intermittency properties from the Navier-Stokes equations, i.e. from first principles. See also Astronomical seeing Atmospheric dispersion modeling Chaos theory Clear-air turbulence Different types of boundary conditions in fluid dynamics Eddy covariance Fluid dynamics Darcy–Weisbach equation Eddy Navier–Stokes equations Large eddy simulation Hagen–Poiseuille equation Kelvin–Helmholtz instability Lagrangian coherent structure Turbulence kinetic energy Mesocyclones Navier–Stokes existence and smoothness Swing bowling Taylor microscale Turbulence modeling Velocimetry Vertical draft Vortex Vortex generator Wake turbulence Wave turbulence Wingtip vortices Wind tunnel Notes References Further reading Original scientific research papers and classic monographs Translated into English: Translated into English: External links Center for Turbulence Research, Scientific papers and books on turbulence Center for Turbulence Research, Stanford University Scientific American article Air Turbulence Forecast international CFD database iCFDdatabase Fluid Mechanics website with movies, Q&A, etc Johns Hopkins public database with direct numerical simulation data TurBase public database with experimental data from European High Performance Infrastructures in Turbulence (EuHIT) Concepts in physics Aerodynamics Chaos theory Transport phenomena Fluid dynamics Flow regimes
Corrosion is a natural process that converts a refined metal into a more chemically stable oxide. It is the gradual deterioration of materials (usually a metal) by chemical or electrochemical reaction with their environment. Corrosion engineering is the field dedicated to controlling and preventing corrosion. In the most common use of the word, this means electrochemical oxidation of metal in reaction with an oxidant such as oxygen, hydrogen, or hydroxide. Rusting, the formation of red-orange iron oxides, is a well-known example of electrochemical corrosion. This type of corrosion typically produces oxides or salts of the original metal and results in a distinctive coloration. Corrosion can also occur in materials other than metals, such as ceramics or polymers, although in this context, the term "degradation" is more common. Corrosion degrades the useful properties of materials and structures including mechanical strength, appearance, and permeability to liquids and gases. Corrosive is distinguished from caustic: the former implies mechanical degradation, the latter chemical. Many structural alloys corrode merely from exposure to moisture in air, but the process can be strongly affected by exposure to certain substances. Corrosion can be concentrated locally to form a pit or crack, or it can extend across a wide area, more or less uniformly corroding the surface. Because corrosion is a diffusion-controlled process, it occurs on exposed surfaces. As a result, methods to reduce the activity of the exposed surface, such as passivation and chromate conversion, can increase a material's corrosion resistance. However, some corrosion mechanisms are less visible and less predictable. The chemistry of corrosion is complex; it can be considered an electrochemical phenomenon. During corrosion at a particular spot on the surface of an object made of iron, oxidation takes place and that spot behaves as an anode. The electrons released at this anodic spot move through the metal to another spot on the object, and reduce oxygen at that spot in presence of H+ (which is believed to be available from carbonic acid () formed due to dissolution of carbon dioxide from air into water in moist air condition of atmosphere. Hydrogen ion in water may also be available due to dissolution of other acidic oxides from the atmosphere). This spot behaves as a cathode. Galvanic corrosion Galvanic corrosion occurs when two different metals have physical or electrical contact with each other and are immersed in a common electrolyte, or when the same metal is exposed to electrolyte with different concentrations. In a galvanic couple, the more active metal (the anode) corrodes at an accelerated rate and the more noble metal (the cathode) corrodes at a slower rate. When immersed separately, each metal corrodes at its own rate. What type of metal(s) to use is readily determined by following the galvanic series. For example, zinc is often used as a sacrificial anode for steel structures. Galvanic corrosion is of major interest to the marine industry and also anywhere water (containing salts) contacts pipes or metal structures. Factors such as relative size of anode, types of metal, and operating conditions (temperature, humidity, salinity, etc.) affect galvanic corrosion. The surface area ratio of the anode and cathode directly affects the corrosion rates of the materials. Galvanic corrosion is often prevented by the use of sacrificial anodes. 
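The anodic and cathodic half-reactions described above for rusting iron in aerated, moist conditions can be written as:

```latex
\text{anode:}\quad \mathrm{Fe} \longrightarrow \mathrm{Fe^{2+}} + 2e^{-}
\qquad\qquad
\text{cathode:}\quad \mathrm{O_2} + 4\,\mathrm{H^{+}} + 4e^{-} \longrightarrow 2\,\mathrm{H_2O}
```

The Fe2+ ions released at the anodic sites are subsequently oxidized further and hydrated, giving the familiar red-orange iron(III) oxides of rust.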
Galvanic series In any given environment (one standard medium is aerated, room-temperature seawater), one metal will be either more noble or more active than others, based on how strongly its ions are bound to the surface. Two metals in electrical contact share the same electrons, so that the "tug-of-war" at each surface is analogous to competition for free electrons between the two materials. Using the electrolyte as a host for the flow of ions in the same direction, the noble metal will take electrons from the active one. The resulting mass flow or electric current can be measured to establish a hierarchy of materials in the medium of interest. This hierarchy is called a galvanic series and is useful in predicting and understanding corrosion. Corrosion removal Often, it is possible to chemically remove the products of corrosion. For example, phosphoric acid in the form of naval jelly is often applied to ferrous tools or surfaces to remove rust. Corrosion removal should not be confused with electropolishing, which removes some layers of the underlying metal to make a smooth surface. For example, phosphoric acid may also be used to electropolish copper but it does this by removing copper, not the products of copper corrosion. Resistance to corrosion Some metals are more intrinsically resistant to corrosion than others (for some examples, see galvanic series). There are various ways of protecting metals from corrosion (oxidation) including painting, hot-dip galvanization, cathodic protection, and combinations of these. Intrinsic chemistry The materials most resistant to corrosion are those for which corrosion is thermodynamically unfavorable. Any corrosion products of gold or platinum tend to decompose spontaneously into pure metal, which is why these elements can be found in metallic form on Earth and have long been valued. More common "base" metals can only be protected by more temporary means. Some metals have naturally slow reaction kinetics, even though their corrosion is thermodynamically favorable. These include such metals as zinc, magnesium, and cadmium. While corrosion of these metals is continuous and ongoing, it happens at an acceptably slow rate. An extreme example is graphite, which releases large amounts of energy upon oxidation, but has such slow kinetics that it is effectively immune to electrochemical corrosion under normal conditions. Passivation Passivation refers to the spontaneous formation of an ultrathin film of corrosion products, known as a passive film, on the metal's surface that act as a barrier to further oxidation. The chemical composition and microstructure of a passive film are different from the underlying metal. Typical passive film thickness on aluminium, stainless steels, and alloys is within 10 nanometers. The passive film is different from oxide layers that are formed upon heating and are in the micrometer thickness range – the passive film recovers if removed or damaged whereas the oxide layer does not. Passivation in natural environments such as air, water and soil at moderate pH is seen in such materials as aluminium, stainless steel, titanium, and silicon. Passivation is primarily determined by metallurgical and environmental factors. The effect of pH is summarized using Pourbaix diagrams, but many other factors are influential. 
Some conditions that inhibit passivation include high pH for aluminium and zinc, low pH or the presence of chloride ions for stainless steel, high temperature for titanium (in which case the oxide dissolves into the metal, rather than the electrolyte) and fluoride ions for silicon. On the other hand, unusual conditions may result in passivation of materials that are normally unprotected, as the alkaline environment of concrete does for steel rebar. Exposure to a liquid metal such as mercury or hot solder can often circumvent passivation mechanisms. It has been shown using electrochemical scanning tunneling microscopy that during iron passivation, an n-type semiconductor Fe(III) oxide grows at the interface with the metal that leads to the buildup of an electronic barrier opposing electron flow and an electronic depletion region that prevents further oxidation reactions. These results indicate a mechanism of "electronic passivation". The electronic properties of this semiconducting oxide film also provide a mechanistic explanation of corrosion mediated by chloride, which creates surface states at the oxide surface that lead to electronic breakthrough, restoration of anodic currents, and disruption of the electronic passivation mechanism. Corrosion in passivated materials Passivation is extremely useful in mitigating corrosion damage, however even a high-quality alloy will corrode if its ability to form a passivating film is hindered. Proper selection of the right grade of material for the specific environment is important for the long-lasting performance of this group of materials. If breakdown occurs in the passive film due to chemical or mechanical factors, the resulting major modes of corrosion may include pitting corrosion, crevice corrosion, and stress corrosion cracking. Pitting corrosion Certain conditions, such as low concentrations of oxygen or high concentrations of species such as chloride which compete as anions, can interfere with a given alloy's ability to re-form a passivating film. In the worst case, almost all of the surface will remain protected, but tiny local fluctuations will degrade the oxide film in a few critical points. Corrosion at these points will be greatly amplified, and can cause corrosion pits of several types, depending upon conditions. While the corrosion pits only nucleate under fairly extreme circumstances, they can continue to grow even when conditions return to normal, since the interior of a pit is naturally deprived of oxygen and locally the pH decreases to very low values and the corrosion rate increases due to an autocatalytic process. In extreme cases, the sharp tips of extremely long and narrow corrosion pits can cause stress concentration to the point that otherwise tough alloys can shatter; a thin film pierced by an invisibly small hole can hide a thumb sized pit from view. These problems are especially dangerous because they are difficult to detect before a part or structure fails. Pitting remains among the most common and damaging forms of corrosion in passivated alloys, but it can be prevented by control of the alloy's environment. Pitting results when a small hole, or cavity, forms in the metal, usually as a result of de-passivation of a small area. This area becomes anodic, while part of the remaining metal becomes cathodic, producing a localized galvanic reaction. The deterioration of this small area penetrates the metal and can lead to failure. 
This form of corrosion is often difficult to detect because it is usually relatively small and may be covered and hidden by corrosion-produced compounds. Weld decay and knifeline attack Stainless steel can pose special corrosion challenges, since its passivating behavior relies on the presence of a major alloying component (chromium, at least 11.5%). Because of the elevated temperatures of welding and heat treatment, chromium carbides can form in the grain boundaries of stainless alloys. This chemical reaction robs the material of chromium in the zone near the grain boundary, making those areas much less resistant to corrosion. This creates a galvanic couple with the well-protected alloy nearby, which leads to "weld decay" (corrosion of the grain boundaries in the heat-affected zones) in highly corrosive environments. This process can seriously reduce the mechanical strength of welded joints over time. A stainless steel is said to be "sensitized" if chromium carbides are formed in the microstructure. A typical microstructure of a normalized type 304 stainless steel shows no signs of sensitization, while a heavily sensitized steel shows the presence of grain-boundary precipitates; the dark lines in such a sensitized microstructure are networks of chromium carbides formed along the grain boundaries. Special alloys, either with low carbon content or with added carbon "getters" such as titanium and niobium (in types 321 and 347, respectively), can prevent this effect, but the latter require special heat treatment after welding to prevent the similar phenomenon of "knifeline attack". As its name implies, this corrosion is limited to a very narrow zone adjacent to the weld, often only a few micrometers across, making it even less noticeable. Crevice corrosion Crevice corrosion is a localized form of corrosion occurring in confined spaces (crevices), to which the access of the working fluid from the environment is limited. Formation of a differential aeration cell leads to corrosion inside the crevices. Examples of crevices are gaps and contact areas between parts, under gaskets or seals, inside cracks and seams, spaces filled with deposits, and under sludge piles. Crevice corrosion is influenced by the crevice type (metal-metal, metal-non-metal), crevice geometry (size, surface finish), and metallurgical and environmental factors. The susceptibility to crevice corrosion can be evaluated with ASTM standard procedures. A critical crevice corrosion temperature is commonly used to rank a material's resistance to crevice corrosion. Hydrogen grooving In the chemical industry, hydrogen grooving is the corrosion of piping at grooves created by the interaction of a corrosive agent, corroded pipe constituents, and hydrogen gas bubbles. For example, when sulfuric acid (H2SO4) flows through steel pipes, the iron in the steel reacts with the acid to form a passivating coating of iron sulfate (FeSO4) and hydrogen gas (H2). The iron sulfate coating protects the steel from further reaction; however, if hydrogen bubbles contact this coating, it is removed. A groove can thus be formed by a travelling bubble, exposing more steel to the acid and setting up a vicious cycle, which is exacerbated by the tendency of subsequent bubbles to follow the same path. High-temperature corrosion High-temperature corrosion is chemical deterioration of a material (typically a metal) as a result of heating. 
This non-galvanic form of corrosion can occur when a metal is subjected to a hot atmosphere containing oxygen, sulfur ("sulfidation"), or other compounds capable of oxidizing (or assisting the oxidation of) the material concerned. For example, materials used in aerospace, power generation, and even in car engines must resist sustained periods at high temperature, during which they may be exposed to an atmosphere containing the potentially highly-corrosive products of combustion. Some products of high-temperature corrosion can potentially be turned to the advantage of the engineer. The formation of oxides on stainless steels, for example, can provide a protective layer preventing further atmospheric attack, allowing for a material to be used for sustained periods at both room and high temperatures in hostile conditions. Such high-temperature corrosion products, in the form of compacted oxide layer glazes, prevent or reduce wear during high-temperature sliding contact of metallic (or metallic and ceramic) surfaces. Thermal oxidation is also commonly used to produce controlled oxide nanostructures, including nanowires and thin films. Microbial corrosion Microbial corrosion, or commonly known as microbiologically influenced corrosion (MIC), is a corrosion caused or promoted by microorganisms, usually chemoautotrophs. It can apply to both metallic and non-metallic materials, in the presence or absence of oxygen. Sulfate-reducing bacteria are active in the absence of oxygen (anaerobic); they produce hydrogen sulfide, causing sulfide stress cracking. In the presence of oxygen (aerobic), some bacteria may directly oxidize iron to iron oxides and hydroxides, other bacteria oxidize sulfur and produce sulfuric acid causing biogenic sulfide corrosion. Concentration cells can form in the deposits of corrosion products, leading to localized corrosion. Accelerated low-water corrosion (ALWC) is a particularly aggressive form of MIC that affects steel piles in seawater near the low water tide mark. It is characterized by an orange sludge, which smells of hydrogen sulfide when treated with acid. Corrosion rates can be very high and design corrosion allowances can soon be exceeded leading to premature failure of the steel pile. Piles that have been coated and have cathodic protection installed at the time of construction are not susceptible to ALWC. For unprotected piles, sacrificial anodes can be installed locally to the affected areas to inhibit the corrosion or a complete retrofitted sacrificial anode system can be installed. Affected areas can also be treated using cathodic protection, using either sacrificial anodes or applying current to an inert anode to produce a calcareous deposit, which will help shield the metal from further attack. Metal dusting Metal dusting is a catastrophic form of corrosion that occurs when susceptible materials are exposed to environments with high carbon activities, such as synthesis gas and other high-CO environments. The corrosion manifests itself as a break-up of bulk metal to metal powder. The suspected mechanism is firstly the deposition of a graphite layer on the surface of the metal, usually from carbon monoxide (CO) in the vapor phase. This graphite layer is then thought to form metastable M3C species (where M is the metal), which migrate away from the metal surface. However, in some regimes, no M3C species is observed indicating a direct transfer of metal atoms into the graphite layer. 
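The thickening of protective oxide scales during high-temperature exposure and thermal oxidation, described above, is often modeled with the Deal–Grove relation that the article returns to under "Rate of corrosion". A minimal sketch, in which A, B, and tau are illustrative placeholders rather than measured values for any particular material or temperature:

```python
import math

# Minimal sketch of the Deal-Grove model for oxide growth:
#     x**2 + A*x = B*(t + tau)
# where x is the oxide thickness, B the parabolic rate constant,
# B/A the linear rate constant, and tau accounts for any initial oxide.
# The A, B, and tau defaults below are illustrative placeholders only.

def oxide_thickness(t_hours: float, A: float = 0.165, B: float = 0.0117,
                    tau: float = 0.37) -> float:
    """Oxide thickness (same length unit as A) after t_hours of oxidation."""
    return (A / 2.0) * (math.sqrt(1.0 + 4.0 * B * (t_hours + tau) / A**2) - 1.0)

for t in (0.1, 1, 10, 100):
    print(f"t = {t:6.1f} h  ->  x = {oxide_thickness(t):.4f}")
# Short exposures grow roughly linearly (reaction-limited); long exposures
# approach x ~ sqrt(B*t), i.e. diffusion-limited, parabolic growth.
```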
Protection from corrosion Various treatments are used to slow corrosion damage to metallic objects which are exposed to the weather, salt water, acids, or other hostile environments. Some unprotected metallic alloys are extremely vulnerable to corrosion, such as those used in neodymium magnets, which can spall or crumble into powder even in dry, temperature-stable indoor environments unless properly treated. Surface treatments When surface treatments are used to reduce corrosion, great care must be taken to ensure complete coverage, without gaps, cracks, or pinhole defects. Small defects can act as an "Achilles' heel", allowing corrosion to penetrate the interior and causing extensive damage even while the outer protective layer remains apparently intact for a period of time. Applied coatings Plating, painting, and the application of enamel are the most common anti-corrosion treatments. They work by providing a barrier of corrosion-resistant material between the damaging environment and the structural material. Aside from cosmetic and manufacturing issues, there may be tradeoffs in mechanical flexibility versus resistance to abrasion and high temperature. Platings usually fail only in small sections, but if the plating is more noble than the substrate (for example, chromium on steel), a galvanic couple will cause any exposed area to corrode much more rapidly than an unplated surface would. For this reason, it is often wise to plate with active metal such as zinc or cadmium. If the zinc coating is not thick enough the surface soon becomes unsightly with rusting obvious. The design life is directly related to the metal coating thickness. Painting either by roller or brush is more desirable for tight spaces; spray would be better for larger coating areas such as steel decks and waterfront applications. Flexible polyurethane coatings, like Durabak-M26 for example, can provide an anti-corrosive seal with a highly durable slip resistant membrane. Painted coatings are relatively easy to apply and have fast drying times although temperature and humidity may cause dry times to vary. Reactive coatings If the environment is controlled (especially in recirculating systems), corrosion inhibitors can often be added to it. These chemicals form an electrically insulating or chemically impermeable coating on exposed metal surfaces, to suppress electrochemical reactions. Such methods make the system less sensitive to scratches or defects in the coating, since extra inhibitors can be made available wherever metal becomes exposed. Chemicals that inhibit corrosion include some of the salts in hard water (Roman water systems are known for their mineral deposits), chromates, phosphates, polyaniline, other conducting polymers, and a wide range of specially designed chemicals that resemble surfactants (i.e., long-chain organic molecules with ionic end groups). Anodization Aluminium alloys often undergo a surface treatment. Electrochemical conditions in the bath are carefully adjusted so that uniform pores, several nanometers wide, appear in the metal's oxide film. These pores allow the oxide to grow much thicker than passivating conditions would allow. At the end of the treatment, the pores are allowed to seal, forming a harder-than-usual surface layer. If this coating is scratched, normal passivation processes take over to protect the damaged area. 
Anodized coatings are very resilient to weathering and corrosion, so anodizing is commonly used for building facades and other areas where the surface will come into regular contact with the elements. Although resilient, anodized surfaces must be cleaned frequently; if left uncleaned, panel-edge staining will naturally occur. Anodization is an electrolytic process in which the part being treated is made the anode of an electrochemical cell, growing a thicker and more protective oxide film than would form naturally. Biofilm coatings A new form of protection has been developed by applying certain species of bacterial films to the surface of metals in highly corrosive environments. This process increases the corrosion resistance substantially. Alternatively, antimicrobial-producing biofilms can be used to inhibit mild steel corrosion from sulfate-reducing bacteria. Controlled permeability formwork Controlled permeability formwork (CPF) is a method of preventing the corrosion of reinforcement by naturally enhancing the durability of the cover during concrete placement. CPF has been used in environments to combat the effects of carbonation, chlorides, frost, and abrasion. Cathodic protection Cathodic protection (CP) is a technique to control the corrosion of a metal surface by making it the cathode of an electrochemical cell. Cathodic protection systems are most commonly used to protect steel pipelines and tanks, steel pier piles, ships, and offshore oil platforms. Sacrificial anode protection For effective CP, the potential of the steel surface is polarized (pushed) more negative until the metal surface has a uniform potential. With a uniform potential, the driving force for the corrosion reaction is halted. For galvanic CP systems, the anode material corrodes under the influence of the steel, and eventually it must be replaced. The polarization is caused by the current flow from the anode to the cathode, driven by the difference in electrode potential between the anode and the cathode. The most common sacrificial anode materials are aluminium, zinc, magnesium, and related alloys. Aluminium has the highest capacity, while magnesium has the highest driving voltage and is thus used where circuit resistance is higher. Zinc is general purpose and the basis for galvanizing. A number of problems are associated with sacrificial anodes. Among these, from an environmental perspective, is the release of zinc, magnesium, aluminium, and heavy metals such as cadmium into the environment, including seawater. From a working perspective, sacrificial anode systems are considered less precise than modern cathodic protection systems such as impressed current cathodic protection (ICCP) systems, and their ability to provide the requisite protection has to be checked regularly by means of underwater inspection by divers. Furthermore, as they have a finite lifespan, sacrificial anodes need to be replaced regularly over time. Impressed current cathodic protection For larger structures, galvanic anodes cannot economically deliver enough current to provide complete protection. Impressed current cathodic protection (ICCP) systems use anodes connected to a DC power source (such as a cathodic protection rectifier). Anodes for ICCP systems are tubular or solid rods of various specialized materials, including high-silicon cast iron, graphite, mixed metal oxide or platinum coated titanium, and niobium-coated rods and wires. Anodic protection Anodic protection impresses anodic current on the structure to be protected (opposite to cathodic protection). It is appropriate for metals that exhibit passivity (e.g. 
stainless steel) and suitably small passive current over a wide range of potentials. It is used in aggressive environments, such as solutions of sulfuric acid. Anodic protection is an electrochemical method of corrosion protection that keeps the metal in a passive state. Rate of corrosion The formation of an oxide layer is described by the Deal–Grove model, which is used to predict and control oxide layer formation in diverse situations. A simple test for measuring corrosion is the weight loss method. The method involves exposing a clean, weighed piece of the metal or alloy to the corrosive environment for a specified time, followed by cleaning to remove corrosion products and weighing the piece to determine the loss of weight. The rate of corrosion R is calculated as R = kW/(ρAt), where k is a constant, W is the weight loss of the metal in time t, A is the surface area of the metal exposed, and ρ is the density of the metal (in g/cm3). Other common expressions for the corrosion rate are penetration depth and change of mechanical properties. Economic impact In 2002, the US Federal Highway Administration released a study titled "Corrosion Costs and Preventive Strategies in the United States" on the direct costs associated with metallic corrosion in US industry. In 1998, the total annual direct cost of corrosion in the US was roughly $276 billion (or 3.2% of the US gross domestic product at the time). Broken down into five specific sectors, the economic losses are $22.6 billion in infrastructure, $17.6 billion in production and manufacturing, $29.7 billion in transportation, $20.1 billion in government, and $47.9 billion in utilities. Rust is one of the most common causes of bridge accidents. As rust displaces a much higher volume than the originating mass of iron, its build-up can also cause failure by forcing apart adjacent components. It was the cause of the collapse of the Mianus River Bridge in 1983, when support bearings rusted internally and pushed one corner of the road slab off its support. Three drivers on the roadway at the time died as the slab fell into the river below. The subsequent NTSB investigation showed that a drain in the road had been blocked for road re-surfacing and had not been unblocked; as a result, runoff water penetrated the support hangers. Rust was also an important factor in the Silver Bridge disaster of 1967 in West Virginia, when a steel suspension bridge collapsed within a minute, killing 46 drivers and passengers who were on the bridge at the time. Similarly, corrosion of concrete-covered steel and iron can cause the concrete to spall, creating severe structural problems. It is one of the most common failure modes of reinforced concrete bridges. Measuring instruments based on the half-cell potential can detect potential corrosion spots before total failure of the concrete structure is reached. Until 20–30 years ago, galvanized steel pipe was used extensively in potable water systems for single- and multi-family residences as well as commercial and public construction. Today, these systems have long since consumed their protective zinc and are corroding internally, resulting in poor water quality and pipe failures. The economic impact on homeowners, condo dwellers, and the public infrastructure is estimated at $22 billion as the insurance industry braces for a wave of claims due to pipe failures. Corrosion in nonmetals Most ceramic materials are almost entirely immune to corrosion. 
The strong chemical bonds that hold them together leave very little free chemical energy in the structure; they can be thought of as already corroded. When corrosion does occur, it is almost always a simple dissolution of the material or a chemical reaction, rather than an electrochemical process. A common example of corrosion protection in ceramics is the lime added to soda–lime glass to reduce its solubility in water; though it is not nearly as soluble as pure sodium silicate, normal glass does form sub-microscopic flaws when exposed to moisture. Because glass is brittle, such flaws cause a dramatic reduction in the strength of a glass object during its first few hours at room temperature. Corrosion of polymers Polymer degradation involves several complex and often poorly understood physicochemical processes. These are strikingly different from the other processes discussed here, and so the term "corrosion" is only applied to them in a loose sense of the word. Because of their large molecular weight, very little entropy can be gained by mixing a given mass of polymer with another substance, making polymers generally quite difficult to dissolve. While dissolution is a problem in some polymer applications, it is relatively simple to design against. A more common and related problem is "swelling", where small molecules infiltrate the structure, reducing strength and stiffness and causing a volume change. Conversely, many polymers (notably flexible vinyl) are intentionally swelled with plasticizers, which can be leached out of the structure, causing brittleness or other undesirable changes. The most common form of degradation, however, is a decrease in polymer chain length. Mechanisms which break polymer chains are familiar to biologists because of their effect on DNA: ionizing radiation (most commonly ultraviolet light), free radicals, and oxidizers such as oxygen, ozone, and chlorine. Ozone cracking, for example, is a well-known problem affecting natural rubber. Plastic additives can slow these processes very effectively, and can be as simple as a UV-absorbing pigment (e.g., titanium dioxide or carbon black). Plastic shopping bags often do not include these additives so that they break down more easily as ultrafine particles of litter. Corrosion of glass Glass is characterized by a high degree of corrosion resistance. Because of its high water resistance, it is often used as a primary packaging material in the pharmaceutical industry, since most medicines are preserved in aqueous solution. Besides its water resistance, glass is also robust when exposed to certain chemically aggressive liquids or gases. Glass disease is the corrosion of silicate glasses in aqueous solutions. It is governed by two mechanisms: diffusion-controlled leaching (ion exchange) and hydrolytic dissolution of the glass network. Both mechanisms strongly depend on the pH of the contacting solution: the rate of ion exchange decreases with pH as 10^(−0.5·pH), whereas the rate of hydrolytic dissolution increases with pH as 10^(0.5·pH). Mathematically, corrosion rates of glasses are characterized by normalized corrosion rates of elements, NRi (g/(cm2·d)), determined as the ratio of the total amount mi of a species released into the water (g) to the water-contacting surface area S (cm2), the time of contact t (days), and the weight fraction fi of the element in the glass: NRi = mi/(fi·S·t). The overall corrosion rate is the sum of contributions from both mechanisms (leaching + dissolution): NRi = NRx,i + NRh. 
Diffusion-controlled leaching (ion exchange) is characteristic of the initial phase of corrosion and involves replacement of alkali ions in the glass by hydronium (H3O+) ions from the solution. It causes an ion-selective depletion of the near-surface layers of glasses and gives an inverse-square-root dependence of corrosion rate on exposure time. The diffusion-controlled normalized leaching rate of cations from glasses, NRx,i (g/(cm2·d)), is given by NRx,i = ρ·(Di/(π·t))^(1/2), where t is time, Di is the effective diffusion coefficient of the i-th cation (cm2/d), which depends on the pH of the contacting water as Di ∝ 10^(−pH) (consistent with the 10^(−0.5·pH) dependence of the leaching rate noted above), and ρ is the density of the glass (g/cm3). Glass network dissolution is characteristic of the later phases of corrosion and causes a congruent release of ions into the water solution at a time-independent rate in dilute solutions (g/(cm2·d)): NRh = ρ·rh, where rh is the stationary hydrolysis (dissolution) rate of the glass (cm/d). In closed systems, the consumption of protons from the aqueous phase increases the pH and causes a fast transition to hydrolysis. However, further saturation of the solution with silica impedes hydrolysis and returns the glass to the ion-exchange, i.e. diffusion-controlled, regime of corrosion. In typical natural conditions, normalized corrosion rates of silicate glasses are very low, of the order of 10^−7 to 10^−5 g/(cm2·d). The very high durability of silicate glasses in water makes them suitable for hazardous and nuclear waste immobilisation. Glass corrosion tests Numerous standardized procedures exist for measuring the corrosion (also called chemical durability) of glasses in neutral, basic, and acidic environments, under simulated environmental conditions, in simulated body fluid, at high temperature and pressure, and under other conditions. The standard procedure ISO 719 describes a test of the extraction of water-soluble basic compounds under neutral conditions: 2 g of glass, particle size 300–500 μm, is kept for 60 min in 50 mL de-ionized water of grade 2 at 98 °C; 25 mL of the obtained solution is titrated against 0.01 mol/L HCl solution. The volume of HCl required for neutralization is classified according to the table below. The standardized test ISO 719 is not suitable for glasses with little or no extractable alkaline components that are nevertheless still attacked by water, e.g., quartz glass, B2O3 glass or P2O5 glass. Usual glasses are differentiated into the following classes: Hydrolytic class 1 (Type I): This class, also called neutral glass, includes borosilicate glasses (e.g., Duran, Pyrex, Fiolax). Glass of this class contains essential quantities of boron oxides, aluminium oxides and alkaline earth oxides. Through its composition, neutral glass has a high resistance against temperature shocks and the highest hydrolytic resistance; because of its low alkali content it shows high chemical resistance against acid and neutral solutions, but only moderate resistance against alkaline solutions. Hydrolytic class 2 (Type II): This class usually contains sodium silicate glasses with a high hydrolytic resistance achieved through surface finishing. Sodium silicate glass is a silicate glass which contains alkali and alkaline earth oxides, primarily sodium oxide and calcium oxide. Hydrolytic class 3 (Type III): Glass of the 3rd hydrolytic class usually contains sodium silicate glasses and has a mean hydrolytic resistance two times poorer than that of type 1 glasses. Acid class DIN 12116 and alkali class DIN 52322 (ISO 695) are to be distinguished from the hydrolytic class DIN 12111 (ISO 719). 
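Returning to the two-mechanism rate model above, a minimal numerical sketch follows. The density, diffusion coefficient, and hydrolysis rate used here are illustrative placeholders rather than data for any particular glass; only the functional forms come from the text.

```python
import math

# Minimal sketch of the glass-corrosion rate model described above:
#   ion exchange (leaching):  NR_x = rho * sqrt(D / (pi * t))
#   network dissolution:      NR_h = rho * r_h
#   total:                    NR   = NR_x + NR_h
# rho in g/cm3, D in cm2/day, r_h in cm/day, t in days; rates in g/(cm2*day).
# All numerical values below are illustrative placeholders.

RHO = 2.5             # glass density, g/cm3
D_EFF = 1e-12         # effective cation diffusion coefficient, cm2/day
R_HYDROLYSIS = 1e-7   # stationary hydrolysis rate, cm/day

def leach_rate(t_days: float, d_eff: float = D_EFF, rho: float = RHO) -> float:
    """Diffusion-controlled (ion-exchange) normalized rate, g/(cm2*day)."""
    return rho * math.sqrt(d_eff / (math.pi * t_days))

def dissolution_rate(r_h: float = R_HYDROLYSIS, rho: float = RHO) -> float:
    """Time-independent network-dissolution rate, g/(cm2*day)."""
    return rho * r_h

for t in (1, 10, 100, 1000):
    nr_x = leach_rate(t)
    nr_h = dissolution_rate()
    print(f"t = {t:5d} d: leach {nr_x:.2e}, dissolve {nr_h:.2e}, "
          f"total {nr_x + nr_h:.2e}")
# Early on the t**-0.5 leaching term dominates; at long times the constant
# dissolution term takes over, as described in the text.
```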
See also References Further reading Glass chemistry Metallurgy
Corrosion
[ "Chemistry", "Materials_science", "Engineering" ]
7,008
[ "Glass engineering and science", "Glass chemistry", "Metallurgy", "Materials science", "Corrosion", "Electrochemistry", "nan", "Materials degradation" ]
155,558
https://en.wikipedia.org/wiki/Sandia%20National%20Laboratories
Sandia National Laboratories (SNL), also known as Sandia, is one of three research and development laboratories of the United States Department of Energy's National Nuclear Security Administration (NNSA). Headquartered in Kirtland Air Force Base in Albuquerque, New Mexico, it has a second principal facility next to Lawrence Livermore National Laboratory in Livermore, California, and a test facility in Waimea, Kauai, Hawaii. Sandia is owned by the U.S. federal government but privately managed and operated by National Technology and Engineering Solutions of Sandia, a wholly owned subsidiary of Honeywell International. Established in 1949, SNL is a "multimission laboratory" with the primary goal of advancing U.S. national security by developing various science-based technologies. Its work spans roughly 70 areas of activity, including nuclear deterrence, arms control, nonproliferation, hazardous waste disposal, and climate change. Sandia hosts a wide variety of research initiatives, including computational biology, physics, materials science, alternative energy, psychology, MEMS, and cognitive science. Most notably, it hosted some of the world's earliest and fastest supercomputers, ASCI Red and ASCI Red Storm, and is currently home to the Z Machine, the largest X-ray generator in the world, which is designed to test materials in conditions of extreme temperature and pressure. Sandia conducts research through partnership agreements with academic, governmental, and commercial entities; educational opportunities are available through several programs, including the Securing Top Academic Research & Talent at Historically Black Colleges and Universities (START HBCU) Program and the Sandia University Partnerships Network (a collaboration with Purdue University, University of Texas at Austin, Georgia Institute of Technology, University of Illinois Urbana–Champaign, and University of New Mexico). Lab history Sandia National Laboratories' roots go back to World War II and the Manhattan Project. Prior to the United States formally entering the war, the U.S. Army leased land near an Albuquerque, New Mexico airport known as Oxnard Field to service transient Army and U.S. Navy aircraft. In January 1941 construction began on the Albuquerque Army Air Base, leading to establishment of the Bombardier School-Army Advanced Flying School near the end of the year. Soon thereafter it was renamed Kirtland Field, after early Army military pilot Colonel Roy C. Kirtland, and in mid-1942 the Army acquired Oxnard Field. During the war years facilities were expanded further and Kirtland Field served as a major Army Air Forces training installation. In the many months leading up to successful detonation of the first atomic bomb, the Trinity test, and delivery of the first airborne atomic weapon, Project Alberta, J. Robert Oppenheimer, Director of Los Alamos Laboratory, and his technical advisor, Hartly Rowe, began looking for a new site convenient to Los Alamos for the continuation of weapons development especially its non-nuclear aspects. They felt a separate division would be best to perform these functions. Kirtland had fulfilled Los Alamos' transportation needs for both the Trinity and Alberta projects, thus, Oxnard Field was transferred from the jurisdiction of the Army Air Corps to the U.S. Army Service Forces Chief of Engineer District, and thereafter, assigned to the Manhattan Engineer District. 
In July 1945, the forerunner of Sandia Laboratory, known as "Z" Division, was established at Oxnard Field to handle future weapons development, testing, and bomb assembly for the Manhattan Engineer District. The District-directive calling for establishing a secure area and construction of "Z" Division facilities referred to this as "Sandia Base" , after the nearby Sandia Mountains — apparently the first official recognition of the "Sandia" name. Sandia Laboratory was operated by the University of California until 1949, when President Harry S. Truman asked Western Electric, a subsidiary of American Telephone and Telegraph (AT&T), to assume the operation as an "opportunity to render an exceptional service in the national interest." Sandia Corporation, a wholly owned subsidiary of Western Electric, was formed on October 5, 1949, and, on November 1, 1949, took over management of the Laboratory. The United States Congress designated Sandia Laboratories as a National laboratory in 1979. In October 1993, Sandia National Laboratories (SNL) was managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin. In December 2016, it was announced that National Technology and Engineering Solutions of Sandia, under the direction of Honeywell International, would take over the management of Sandia National Laboratories beginning May 1, 2017; this contract remains in effect as of November 2022, covering government-owned facilities in Albuquerque, New Mexico (SNL/NM); Livermore, California (SNL/CA); Tonopah, Nevada; Shoreview, Minnesota; and Kauai, Hawaii. SNL/NM is the headquarters and the largest laboratory, employing more than 12,000 employees, while SNL/CA is a smaller laboratory, with around 1,700 employees. Tonopah and Kauai are occupied on a "campaign" basis, as test schedules dictate. The lab also managed the DOE/SNL Scaled Wind Farm Technology (SWiFT) Facility in Lubbock, Texas. Sandia led a project that studied how to decontaminate a subway system in the event of a biological weapons attack (such as anthrax). As of September 2017, the process to decontaminate subways in such an event is "virtually ready to implement," said a lead Sandia engineer. Sandia's integration with its local community includes a program through the Department of Energy's Tribal Energy program to deliver alternative renewable power to remote Navajo communities, spearheaded by senior engineer Sandra Begay. Legal issues On February 13, 2007, a New Mexico State Court found Sandia Corporation liable for $4.7 million in damages for the firing of a former network security analyst, Shawn Carpenter, who had reported to his supervisors that hundreds of military installations and defense contractors' networks were compromised and sensitive information was being stolen including hundreds of sensitive Lockheed documents on the Mars Reconnaissance Orbiter project. When his supervisors told him to drop the investigation and do nothing with the information, he went to intelligence officials in the United States Army and later the Federal Bureau of Investigation to address the national security breaches. When Sandia managers discovered his actions months later, they revoked his security clearance and fired him. In 2014, an investigation determined Sandia Corp. used lab operations funds to pay for lobbying related to the renewal of its $2 billion contract to operate the lab. Sandia Corp. and its parent company, Lockheed Martin, agreed to pay a $4.8 million fine. 
Technical areas SNL/NM consists of five technical areas (TA) and several additional test areas. Each TA has its own distinctive operations; however, the operations of some groups at Sandia may span more than one TA, with one part of a team working on a problem from one angle, and another subset of the same team located in a different building or area working with other specialized equipment. A description of each area is given below. TA-I operations are dedicated primarily to three activities: the design, research, and development of weapon systems; limited production of weapon system components; and energy programs. TA-I facilities include the main library and offices, laboratories, and shops used by administrative and technical staff. TA-II is a facility that was established in 1948 for the assembly of chemical high explosive main charges for nuclear weapons and later for production scale assembly of nuclear weapons. Activities in TA-II include the decontamination, decommissioning, and remediation of facilities and landfills used in past research and development activities. Remediation of the Classified Waste Landfill which started in March 1998, neared completion in FY2000. A testing facility, the Explosive Component Facility, integrates many of the previous TA-II test activities as well as some testing activities previously performed in other remote test areas. The Access Delay Technology Test Facility is also located in TA-II. TA-III is adjacent to and south of TA-V [both are approximately seven miles (11 km) south of TA-I]. TA-III facilities include extensive design-test facilities such as rocket sled tracks, centrifuges and a radiant heat facility. Other facilities in TA-III include a paper destructor, the Melting and Solidification Laboratory and the Radioactive and Mixed Waste Management Facility (RMWMF). RMWMF serves as central processing facility for packaging and storage of low-level and mixed waste. The remediation of the Chemical Waste Landfill, which started in September 1998, is an ongoing activity in TA-III. TA-IV, located approximately south of TA-I, consists of several inertial-confinement fusion research and pulsed power research facilities, including the High Energy Radiation Megavolt Electron Source (Hermes-III), the Z Facility, the Short Pulsed High Intensity Nanosecond X-Radiator (SPHINX) Facility, and the Saturn Accelerator. TA-IV also hosts some computer science and cognition research. TA-V contains two research reactor facilities, an intense gamma irradiation facility (using cobalt-60 and caesium-137 sources), and the Hot Cell Facility. SNL/NM also has test areas outside of the five technical areas listed above. These test areas, collectively known as Coyote Test Field, are located southeast of TA-III and/or in the canyons on the west side of the Manzanita Mountains. Facilities in the Coyote Canyon Test Field include the Solar Tower Facility (34.9623 N, 106.5097 W), the Lurance Canyon Burn Site and the Aerial Cable Facility. DOE/SNL Scaled Wind Farm Technology (SWIFT) Facility In collaboration with the Wind Energy Technologies Office (WETO) of U.S. Department of Energy, Texas Tech University, and the Vestas wind turbine corporation, SNL operates the Scaled Wind Farm Technology (SWiFT) Facility in Lubbock, Texas. Open-source software In the 1970s, the Sandia, Los Alamos, Air Force Weapons Laboratory Technical Exchange Committee initiated the development of the SLATEC library of mathematical and statistical routines, written in FORTRAN 77. 
Today, Sandia National Laboratories is home to several open-source software projects: FCLib (Feature Characterization Library) is a library for the identification and manipulation of coherent regions or structures from spatio-temporal data. FCLib focuses on providing data structures that are "feature-aware" and support feature-based analysis. It is written in C and developed under a "BSD-like" license. LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a molecular dynamics library that can be used to model parallel atomic/subatomic processes at large scale. It is produced under the GNU General Public License (GPL) and distributed on the Sandia National Laboratories website as well as SourceForge. LibVMI is a library for simplifying the reading and writing of memory in running virtual machines, a technique known as virtual machine introspection. It is licensed under the GNU Lesser General Public License. MapReduce-MPI Library is an implementation of MapReduce for distributed-memory parallel machines, utilizing the Message Passing Interface (MPI) for communication. It is developed under a modified Berkeley Software Distribution license. MultiThreaded Graph Library (MTGL) is a collection of graph-based algorithms designed to take advantage of parallel, shared-memory architectures such as the Cray XMT, Symmetric Multiprocessor (SMP) machines, and multi-core workstations. It is developed under a BSD License. ParaView is a cross-platform application for performing data analysis and visualization. It is a collaborative effort, developed by Sandia National Laboratories, Los Alamos National Laboratories, and the United States Army Research Laboratory, and funded by the Advanced Simulation and Computing Program. It is developed under a BSD license. Pyomo is a python-based optimization Mathematical Programming Language which supports most commercial and open-source solver engines. Soccoro, a collaborative effort with Wake Forest and Vanderbilt Universities, is object-oriented software for performing electronic-structure calculations based on density-functional theory. It utilizes libraries such as MPI, BLAS, and LAPACK and is developed under the GNU General Public License. Titan Informatics Toolkit is a collection of cross-platform libraries for ingesting, analyzing, and displaying scientific and informatics data. It is a collaborative effort with Kitware, Inc., and uses various open-source components such as the Boost Graph Library. It is developed under a New BSD license. Trilinos is an object-oriented library for building scalable scientific and engineering applications, with a focus on linear algebra techniques. Most Trilinos packages are licensed under a Modified BSD License. Xyce is an open source, SPICE-compatible, high-performance analog circuit simulator, capable of solving extremely large circuit problems. Charon is a TCAD simulator which was open-sourced by Sandia in 2020. It is significant as previously there were no major TCAD simulators for large-scale simulations that were open source. In addition, Sandia National Laboratories collaborates with Kitware, Inc. in developing the Visualization Toolkit (VTK), a cross-platform graphics and visualization software suite. This collaboration has focused on enhancing the information visualization capabilities of VTK and has in turn fed back into other projects such as ParaView and Titan. Self-guided bullet On January 30, 2012, Sandia announced that it successfully test-fired a self-guided dart that can hit targets at . 
The dart is long, has its center of gravity at the nose, and is made to be fired from a small-caliber smoothbore gun. It is kept straight in flight by four electromagnetically actuated fins encased in a plastic puller sabot that falls off when the dart leaves the bore. The dart cannot be fired from conventional rifled barrels because the gyroscopic stability provided by rifling grooves for regular bullets would prevent the self-guided bullet from reliably turning towards a target when in flight, so fins are responsible for stabilizing rather than spinning. A laser designator marks a target, which is tracked by the dart's optical sensor and 8-bit CPU. The guided projectile is kept cheap because it does not need an inertial measurement unit, since its small size allows it to make the fast corrections necessary without the aid of an IMU. The natural body frequency of the bullet is about 30 hertz, so corrections can be made 30 times per second in flight. Muzzle velocity with commercial gunpowder is (Mach 2.1), but military customized gunpowder can increase its speed and range. Computer modeling shows that a standard bullet would miss a target at by , while an equivalent guided bullet would hit within . Accuracy increases as distances get longer, since the bullet's motions settle more the longer it is in flight. Supercomputers List of supercomputers that have been operated by or resided at Sandia: Intel Paragon XP/S 140, 1993 to ? ASCI Red, 1997 to 2006 Red Storm, 2005 to 2012 Cielo, 2010 to 2016 Trinity, 2015 to current Astra, 2018 to current, based on ARM processors Attaway, 2019 to current See also Brookhaven National Laboratory Decontamination foam Jess (programming language) Lawrence Livermore National Laboratory National Renewable Energy Laboratory Test Readiness Program Titan Rain VxInsight References Further reading Computerworld article "Reverse Hacker Case Gets Costlier for Sandia Labs" San Jose Mercury News article "Ill Lab Workers Fight For Federal Compensation" Wired Magazine article "Linkin Park's Mysterious Cyberstalker" Slate article "Stalking Linkin Park" FedSmith.com article "Linkin Park, Nuclear Research and Obsession" The Santa Fe New Mexican article "Judge Upholds $4.3 Million Jury Award to Fired Sandia Lab Analyst" TIME article "A Security Analyst Wins Big in Court" The Santa Fe New Mexican article "Jury Awards Fired Sandia Analyst $4.3 Million" HPCwire article "Sandia May Unwittingly Have Sold Supercomputer to China" Federal Computer Weekly article "Intercepts: Chinese Checkers" Congressional Research Service report "China: Suspected Acquisition of U.S. 
Nuclear Weapon Secrets" Sandia National Laboratory Cooperative Monitoring Center article "Engagement with China" BBC News "Security Overhaul at US Nuclear Labs" Fox News "Iowa Republican Demands Tighter Nuclear Lab Security" UPI article "Workers Get Bonus After Being Disciplined" IndustryWeek article "3D Silicon Photonic Lattice" October 6, 2005 The Santa Fe New Mexican article "Sandia Security Managers Recorded Workers' Calls" May 17, 2002 New Mexico Business Weekly article "Sandia National Laboratories Says it's Worthless" External links DOE Laboratory Fact Sheet 1949 establishments in New Mexico Economy of Albuquerque, New Mexico Federally Funded Research and Development Centers Honeywell Livermore, California Lockheed Martin Military research of the United States Nuclear weapons infrastructure of the United States Plasma physics facilities Research institutes in New Mexico Supercomputer sites United States Department of Energy national laboratories Weapons manufacturing companies
Sandia National Laboratories
[ "Physics" ]
3,534
[ "Plasma physics facilities", "Plasma physics" ]
155,650
https://en.wikipedia.org/wiki/Fluoride
Fluoride () is an inorganic, monatomic anion of fluorine, with the chemical formula (also written ), whose salts are typically white or colorless. Fluoride salts typically have distinctive bitter tastes, and are odorless. Its salts and minerals are important chemical reagents and industrial chemicals, mainly used in the production of hydrogen fluoride for fluorocarbons. Fluoride is classified as a weak base since it only partially associates in solution, but concentrated fluoride is corrosive and can attack the skin. Fluoride is the simplest fluorine anion. In terms of charge and size, the fluoride ion resembles the hydroxide ion. Fluoride ions occur on Earth in several minerals, particularly fluorite, but are present only in trace quantities in bodies of water in nature. Nomenclature Fluorides include compounds that contain ionic fluoride and those in which fluoride does not dissociate. The nomenclature does not distinguish these situations. For example, sulfur hexafluoride and carbon tetrafluoride are not sources of fluoride ions under ordinary conditions. The systematic name fluoride, the valid IUPAC name, is determined according to the additive nomenclature. However, the name fluoride is also used in compositional IUPAC nomenclature which does not take the nature of bonding involved into account. Fluoride is also used non-systematically, to describe compounds which release fluoride upon dissolving. Hydrogen fluoride is itself an example of a non-systematic name of this nature. However, it is also a trivial name, and the preferred IUPAC name for fluorane. Occurrence Fluorine is estimated to be the 13th-most abundant element in Earth's crust and is widely dispersed in nature, entirely in the form of fluorides. The vast majority is held in mineral deposits, the most commercially important of which is fluorite (CaF2). Natural weathering of some kinds of rocks, as well as human activities, releases fluorides into the biosphere through what is sometimes called the fluorine cycle. In water Fluoride is naturally present in groundwater, fresh and saltwater sources, as well as in rainwater, particularly in urban areas. Seawater fluoride levels are usually in the range of 0.86 to 1.4 mg/L, and average 1.1 mg/L (milligrams per litre). For comparison, chloride concentration in seawater is about 19 g/L. The low concentration of fluoride reflects the insolubility of the alkaline earth fluorides, e.g., CaF2. Concentrations in fresh water vary more significantly. Surface water such as rivers or lakes generally contains between 0.01 and 0.3 mg/L. Groundwater (well water) concentrations vary even more, depending on the presence of local fluoride-containing minerals. For example, natural levels of under 0.05 mg/L have been detected in parts of Canada but up to 8 mg/L in parts of China; in general levels rarely exceed 10 mg/litre In parts of Asia the groundwater can contain dangerously high levels of fluoride, leading to serious health problems. Worldwide, 50 million people receive water from water supplies that naturally have close to the "optimal level". In other locations the level of fluoride is very low, sometimes leading to fluoridation of public water supplies to bring the level to around 0.7–1.2 ppm. Mining can increase local fluoride levels Fluoride can be present in rain, with its concentration increasing significantly upon exposure to volcanic activity or atmospheric pollution derived from burning fossil fuels or other sorts of industry, particularly aluminium smelters. 
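To put the seawater figures quoted earlier in this section on a common molar basis, a short sketch follows; the atomic masses are standard values, and everything else comes from the concentrations given above.

```python
# Minimal sketch: converting the quoted seawater concentrations to molarity.
# Assumed atomic masses: F ~ 19.00 g/mol, Cl ~ 35.45 g/mol.

def molarity(mass_g_per_L: float, molar_mass_g_per_mol: float) -> float:
    """Convert a mass concentration (g/L) to molarity (mol/L)."""
    return mass_g_per_L / molar_mass_g_per_mol

fluoride_m = molarity(1.1e-3, 19.00)  # 1.1 mg/L average seawater fluoride
chloride_m = molarity(19.0, 35.45)    # ~19 g/L seawater chloride

print(f"F-:  {fluoride_m:.2e} mol/L")  # ~5.8e-05 mol/L
print(f"Cl-: {chloride_m:.2e} mol/L")  # ~5.4e-01 mol/L
print(f"Cl-/F- molar ratio ~ {chloride_m / fluoride_m:,.0f}")  # ~9,000
```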
In plants All vegetation contains some fluoride, which is absorbed from soil and water. Some plants concentrate fluoride from their environment more than others. All tea leaves contain fluoride; however, mature leaves contain as much as 10 to 20 times the fluoride levels of young leaves from the same plant. Chemical properties Basicity Fluoride can act as a base. It can combine with a proton (): This neutralization reaction forms hydrogen fluoride (HF), the conjugate acid of fluoride. In aqueous solution, fluoride has a pKb value of 10.8. It is therefore a weak base, and tends to remain as the fluoride ion rather than generating a substantial amount of hydrogen fluoride. That is, the following equilibrium favours the left-hand side in water: However, upon prolonged contact with moisture, soluble fluoride salts will decompose to their respective hydroxides or oxides, as the hydrogen fluoride escapes. Fluoride is distinct in this regard among the halides. The identity of the solvent can have a dramatic effect on the equilibrium shifting it to the right-hand side, greatly increasing the rate of decomposition. Structure of fluoride salts Salts containing fluoride are numerous and adopt myriad structures. Typically the fluoride anion is surrounded by four or six cations, as is typical for other halides. Sodium fluoride and sodium chloride adopt the same structure. For compounds containing more than one fluoride per cation, the structures often deviate from those of the chlorides, as illustrated by the main fluoride mineral fluorite (CaF2) where the Ca2+ ions are surrounded by eight F− centers. In CaCl2, each Ca2+ ion is surrounded by six Cl− centers. The difluorides of the transition metals often adopt the rutile structure whereas the dichlorides have cadmium chloride structures. Inorganic chemistry Upon treatment with a standard acid, fluoride salts convert to hydrogen fluoride and metal salts. With strong acids, it can be doubly protonated to give . Oxidation of fluoride gives fluorine. Solutions of inorganic fluorides in water contain F− and bifluoride . Few inorganic fluorides are soluble in water without undergoing significant hydrolysis. In terms of its reactivity, fluoride differs significantly from chloride and other halides, and is more strongly solvated in protic solvents due to its smaller radius/charge ratio. Its closest chemical relative is hydroxide, since both have similar geometries. Naked fluoride Most fluoride salts dissolve to give the bifluoride () anion. Sources of true F− anions are rare because the highly basic fluoride anion abstracts protons from many, even adventitious, sources. Relative unsolvated fluoride, which does exist in aprotic solvents, is called "naked". Naked fluoride is a strong Lewis base, and a powerful nucleophile. Some quaternary ammonium salts of naked fluoride include tetramethylammonium fluoride and tetrabutylammonium fluoride. Cobaltocenium fluoride is another example. However, they all lack structural characterization in aprotic solvents. Because of their high basicity, many so-called naked fluoride sources are in fact bifluoride salts. In late 2016 imidazolium fluoride was synthesized that is the closest approximation of a thermodynamically stable and structurally characterized example of a "naked" fluoride source in an aprotic solvent (acetonitrile). The sterically demanding imidazolium cation stabilizes the discrete anions and protects them from polymerization. 
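The acid–base behaviour described above can be made quantitative with the Henderson–Hasselbalch relation, using the pKb of 10.8 quoted in the text (so pKa(HF) ≈ 14.0 − 10.8 = 3.2). A minimal sketch under ideal-solution, 25 °C assumptions; the function name is for illustration only:

```python
# Minimal sketch: HF/F- speciation as a function of pH, using the pKb of 10.8
# quoted above, so pKa(HF) = 14.0 - 10.8 = 3.2 (assuming 25 degC, ideal solution).

PKA_HF = 14.0 - 10.8  # = 3.2

def fraction_fluoride(ph: float, pka: float = PKA_HF) -> float:
    """Fraction of total fluorine present as the F- anion (the rest is HF)."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

for ph in (1, 3.2, 5, 7.4):
    f = fraction_fluoride(ph)
    print(f"pH {ph:>4}: {100 * f:5.1f}% F-, {100 * (1 - f):5.1f}% HF")
# At physiological pH (~7.4) essentially all fluoride is ionized,
# consistent with the biochemistry discussion that follows.
```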
Biochemistry At physiological pHs, hydrogen fluoride is usually fully ionised to fluoride. In biochemistry, fluoride and hydrogen fluoride are equivalent. Fluorine, in the form of fluoride, is considered to be a micronutrient for human health, necessary to prevent dental cavities, and to promote healthy bone growth. The tea plant (Camellia sinensis L.) is a known accumulator of fluorine compounds, released upon forming infusions such as the common beverage. The fluorine compounds decompose into products including fluoride ions. Fluoride is the most bioavailable form of fluorine, and as such, tea is potentially a vehicle for fluoride dosing. Approximately, 50% of absorbed fluoride is excreted renally with a twenty-four-hour period. The remainder can be retained in the oral cavity, and lower digestive tract. Fasting dramatically increases the rate of fluoride absorption to near 100%, from a 60% to 80% when taken with food. Per a 2013 study, it was found that consumption of one litre of tea a day, can potentially supply the daily recommended intake of 4 mg per day. Some lower quality brands can supply up to a 120% of this amount. Fasting can increase this to 150%. The study indicates that tea drinking communities are at an increased risk of dental and skeletal fluorosis, in the case where water fluoridation is in effect. Fluoride ion in low doses in the mouth reduces tooth decay. For this reason, it is used in toothpaste and water fluoridation. At much higher doses and frequent exposure, fluoride causes health complications and can be toxic. Applications Fluoride salts and hydrofluoric acid are the main fluorides of industrial value. Organofluorine chemistry Organofluorine compounds are pervasive. Many drugs, many polymers, refrigerants, and many inorganic compounds are made from fluoride-containing reagents. Often fluorides are converted to hydrogen fluoride, which is a major reagent and precursor to reagents. Hydrofluoric acid and its anhydrous form, hydrogen fluoride, are particularly important. Production of metals and their compounds The main uses of fluoride, in terms of volume, are in the production of cryolite, Na3AlF6. It is used in aluminium smelting. Formerly, it was mined, but now it is derived from hydrogen fluoride. Fluorite is used on a large scale to separate slag in steel-making. Mined fluorite (CaF2) is a commodity chemical used in steel-making. Uranium hexafluoride is employed in the purification of uranium isotopes. Cavity prevention Fluoride-containing compounds, such as sodium fluoride or sodium monofluorophosphate are used in topical and systemic fluoride therapy for preventing tooth decay. They are used for water fluoridation and in many products associated with oral hygiene. Originally, sodium fluoride was used to fluoridate water; hexafluorosilicic acid (H2SiF6) and its salt sodium hexafluorosilicate (Na2SiF6) are more commonly used additives, especially in the United States. The fluoridation of water is known to prevent tooth decay and is considered by the U.S. Centers for Disease Control and Prevention to be "one of 10 great public health achievements of the 20th century". In some countries where large, centralized water systems are uncommon, fluoride is delivered to the populace by fluoridating table salt. For the method of action for cavity prevention, see Fluoride therapy. Fluoridation of water has its critics . Fluoridated toothpaste is in common use. Meta-analysis show the efficacy of 500 ppm fluoride in toothpastes. 
However, no beneficial effect can be detected when more than one fluoride source is used for daily oral care. Laboratory reagent Fluoride salts are commonly used in biological assay processing to inhibit the activity of phosphatases, such as serine/threonine phosphatases. Fluoride mimics the nucleophilic hydroxide ion in these enzymes' active sites. Beryllium fluoride and aluminium fluoride are also used as phosphatase inhibitors, since these compounds are structural mimics of the phosphate group and can act as analogues of the transition state of the reaction. Dietary recommendations The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for some minerals in 1997. Where there was not sufficient information to establish EARs and RDAs, an estimate designated Adequate Intake (AI) was used instead. AIs are typically matched to actual average consumption, with the assumption that there appears to be a need, and that need is met by what people consume. The current AI for women 19 years and older is 3.0 mg/day (includes pregnancy and lactation). The AI for men is 4.0 mg/day. The AI for children ages 1–18 increases from 0.7 to 3.0 mg/day. The major known risk of fluoride deficiency appears to be an increased risk of bacteria-caused tooth cavities. As for safety, the IOM sets tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of fluoride the UL is 10 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women ages 18 and older the AI is set at 2.9 mg/day (including pregnancy and lactation). For men, the value is 3.4 mg/day. For children ages 1–17 years, the AIs increase with age from 0.6 to 3.2 mg/day. These AIs are comparable to the U.S. AIs. The EFSA reviewed safety evidence and set an adult UL at 7.0 mg/day (lower for children). For U.S. food and dietary supplement labeling purposes, the amount of a vitamin or mineral in a serving is expressed as a percent of Daily Value (%DV). Although there is information to set Adequate Intake, fluoride does not have a Daily Value and is not required to be shown on food labels. Estimated daily intake Daily intakes of fluoride can vary significantly according to the various sources of exposure. Values ranging from 0.46 to 3.6–5.4 mg/day have been reported in several studies (IPCS, 1984). In areas where water is fluoridated this can be expected to be a significant source of fluoride, however fluoride is also naturally present in virtually all foods and beverages at a wide range of concentrations. The maximum safe daily consumption of fluoride is 10 mg/day for an adult (U.S.) or 7 mg/day (European Union). The upper limit of fluoride intake from all sources (fluoridated water, food, beverages, fluoride dental products and dietary fluoride supplements) is set at 0.10 mg/kg/day for infants, toddlers, and children through to 8 years old. For older children and adults, who are no longer at risk for dental fluorosis, the upper limit of fluoride is set at 10 mg/day regardless of weight. Safety Ingestion According to the U.S. 
Department of Agriculture's Dietary Reference Intakes, the tolerable upper intake level (the "highest level of daily nutrient intake that is likely to pose no risk of adverse health effects") is 10 mg/day for most people, corresponding to 10 L of fluoridated water with no risk. For young children the values are smaller, ranging from 0.7 mg/day (infants) to 2.2 mg/day. Water and food sources of fluoride include community water fluoridation, seafood, tea, and gelatin. Soluble fluoride salts, of which sodium fluoride is the most common, are toxic, and have resulted in both accidental and self-inflicted deaths from acute poisoning. The lethal dose for most adult humans is estimated at 5 to 10 g (which is equivalent to 32 to 64 mg elemental fluoride per kg body weight). A case of a fatal poisoning of an adult with 4 grams of sodium fluoride is documented, and a dose of 120 g sodium fluoride has been survived. For sodium fluorosilicate (Na2SiF6), the median lethal dose (LD50) orally in rats is 125 mg/kg, corresponding to 12.5 g for a 100 kg adult. Treatment may involve oral administration of dilute calcium hydroxide or calcium chloride to prevent further absorption, and injection of calcium gluconate to increase the calcium levels in the blood. Hydrogen fluoride is more dangerous than salts such as NaF because it is corrosive and volatile, and can result in fatal exposure through inhalation or upon contact with the skin; calcium gluconate gel is the usual antidote. At the higher doses used to treat osteoporosis, sodium fluoride can cause pain in the legs and incomplete stress fractures when the doses are too high; it also irritates the stomach, sometimes so severely as to cause ulcers. Slow-release and enteric-coated versions of sodium fluoride do not have gastric side effects in any significant way, and have milder and less frequent complications in the bones. In the lower doses used for water fluoridation, the only clear adverse effect is dental fluorosis, which can alter the appearance of children's teeth during tooth development; this is mostly mild and is unlikely to represent any real effect on aesthetic appearance or on public health. Fluoride was known to enhance bone mineral density at the lumbar spine, but it was not effective against vertebral fractures and provoked more nonvertebral fractures. In areas that have naturally occurring high levels of fluoride in groundwater which is used for drinking water, both dental and skeletal fluorosis can be prevalent and severe. Hazard maps for fluoride in groundwater Around one-third of the human population drinks water from groundwater resources. Of this, about 10%, approximately 300 million people, obtain water from groundwater resources that are heavily contaminated with arsenic or fluoride. These trace elements derive mainly from minerals. Maps locating potential problematic wells are available. Topical Concentrated fluoride solutions are corrosive. Gloves made of nitrile rubber are worn when handling fluoride compounds. The hazards of solutions of fluoride salts depend on the concentration. In the presence of strong acids, fluoride salts release hydrogen fluoride, which is corrosive, especially toward glass. Other derivatives Organic and inorganic anions are produced from fluoride, including: Bifluoride, used as an etchant for glass Tetrafluoroberyllate Hexafluoroplatinate Tetrafluoroborate, used in organometallic synthesis Hexafluorophosphate, used as an electrolyte in commercial secondary batteries. 
Trifluoromethanesulfonate See also Per- and polyfluoroalkyl substances Fluorine-19 nuclear magnetic resonance spectroscopy Fluoride deficiency Fluoride selective electrode Fluoride therapy Sodium monofluorophosphate References External links "Fluoride in Drinking Water: A Review of Fluoridation and Regulation Issues", Congressional Research Service U.S. government site for checking status of local water fluoridation Anions Biology and pharmacology of chemical elements Nephrotoxins
Fluoride
[ "Physics", "Chemistry", "Biology" ]
4,104
[ "Pharmacology", "Matter", "Properties of chemical elements", "Anions", "Biology and pharmacology of chemical elements", "Salts", "Biochemistry", "Fluorides", "Ions" ]
155,715
https://en.wikipedia.org/wiki/Pyroelectricity
Pyroelectricity (from Greek: pyr (πυρ), "fire" and electricity) is a property of certain crystals which are naturally electrically polarized and as a result contain large electric fields. Pyroelectricity can be described as the ability of certain materials to generate a temporary voltage when they are heated or cooled. The change in temperature modifies the positions of the atoms slightly within the crystal structure, so that the polarization of the material changes. This polarization change gives rise to a voltage across the crystal. If the temperature stays constant at its new value, the pyroelectric voltage gradually disappears due to leakage current. The leakage can be due to electrons moving through the crystal, ions moving through the air, or current leaking through a voltmeter attached across the crystal. Explanation Pyroelectric charge in minerals develops on the opposite faces of asymmetric crystals. The direction in which the charge tends to propagate is usually constant throughout a pyroelectric material, but, in some materials, this direction can be changed by a nearby electric field. These materials are said to exhibit ferroelectricity. All known pyroelectric materials are also piezoelectric. Despite being pyroelectric, novel materials such as boron aluminum nitride (BAlN) and boron gallium nitride (BGaN) have zero piezoelectric response for strain along the c-axis at certain compositions, the two properties being closely related. However, note that some piezoelectric materials have a crystal symmetry that does not allow pyroelectricity. Pyroelectric materials are mostly hard crystals; however, soft pyroelectricity can be achieved by using electrets. Pyroelectricity is measured as the change in net polarization (a vector) proportional to a change in temperature. The total pyroelectric coefficient measured at constant stress is the sum of the pyroelectric coefficients at constant strain (primary pyroelectric effect) and the piezoelectric contribution from thermal expansion (secondary pyroelectric effect). Under normal circumstances, even polar materials do not display a net dipole moment. As a consequence, there are no electric dipole equivalents of bar magnets because the intrinsic dipole moment is neutralized by "free" electric charge that builds up on the surface by internal conduction or from the ambient atmosphere. Polar crystals only reveal their nature when perturbed in some fashion that momentarily upsets the balance with the compensating surface charge. Spontaneous polarization is temperature dependent, so a good perturbation probe is a change in temperature which induces a flow of charge to and from the surfaces. This is the pyroelectric effect. All polar crystals are pyroelectric, so the 10 polar crystal classes are sometimes referred to as the pyroelectric classes. Pyroelectric materials can be used as infrared and millimeter wavelength radiation detectors. An electret is the electrical equivalent of a permanent magnet. Mathematical description The pyroelectric coefficient may be described as the change in the spontaneous polarization vector with temperature: p_i = ∂P_(S,i)/∂T, where p_i (C·m−2·K−1) is the vector of pyroelectric coefficients (a short numerical illustration appears below). History The first record of the pyroelectric effect was made in 1707 by Johann Georg Schmidt, who noted that the "[hot] tourmaline could attract the ashes from the warm or burning coals, as the magnet does iron, but also repelling them again [after the contact]". 
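As a brief aside on the mathematical description above, the sketch below turns a temperature step into a released surface charge by treating the change in polarization as a bound surface-charge density. The coefficient, electrode area, and temperature step are assumed placeholder values of a typical order of magnitude, not figures taken from this article.

```python
# Charge released by a pyroelectric element for a small temperature step, using
# Delta_P = p * Delta_T and Q = Delta_P * A (the polarization change appears as
# bound surface charge on the electrodes).  All three values below are assumed,
# order-of-magnitude placeholders; real coefficients depend on the material.

p_coeff = 2e-4   # pyroelectric coefficient, C m^-2 K^-1 (assumed placeholder)
area = 1e-6      # electrode area, m^2 (about 1 mm^2)
delta_T = 2.0    # temperature change, K

delta_P = p_coeff * delta_T   # change in polarization, C m^-2
charge = delta_P * area       # released charge, C

print(f"polarization change: {delta_P:.2e} C/m^2")
print(f"released charge:     {charge:.2e} C")
```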
In 1717 Louis Lemery noticed, as Schmidt had, that small scraps of non-conducting material were first attracted to tourmaline, but then repelled by it once they contacted the stone. In 1747 Linnaeus first related the phenomenon to electricity (he called tourmaline Lapidem Electricum, "the electric stone"), although this was not proven until 1756 by Franz Ulrich Theodor Aepinus. Research into pyroelectricity became more sophisticated in the 19th century. In 1824 Sir David Brewster gave the effect the name it has today. Both William Thomson in 1878 and Woldemar Voigt in 1897 helped develop a theory for the processes behind pyroelectricity. Pierre Curie and his brother, Jacques Curie, studied pyroelectricity in the 1880s, leading to their discovery of some of the mechanisms behind piezoelectricity. The first record of pyroelectricity is sometimes mistakenly attributed to Theophrastus (c. 314 BC). The misconception arose soon after the discovery of the pyroelectric properties of tourmaline, which led mineralogists of the time to associate the legendary stone Lyngurium with it. Lyngurium is described in the work of Theophrastus as being similar to amber, without any pyroelectric properties being specified. Crystal classes All crystal structures belong to one of thirty-two crystal classes based on the number of rotational axes and reflection planes they possess that leave the crystal structure unchanged (point groups). Of the thirty-two crystal classes, twenty-one are non-centrosymmetric (not having a centre of symmetry). Of these twenty-one, twenty exhibit direct piezoelectricity, the remaining one being the cubic class 432. Ten of these twenty piezoelectric classes are polar, i.e., they possess a spontaneous polarization, having a dipole in their unit cell, and exhibit pyroelectricity. If this dipole can be reversed by the application of an electric field, the material is said to be ferroelectric. Any dielectric material develops a dielectric polarization when an electric field is applied, but a substance which has such a natural charge separation even in the absence of a field is called a polar material. Whether or not a material is polar is determined solely by its crystal structure. Only 10 of the 32 point groups are polar. All polar crystals are pyroelectric, so the ten polar crystal classes are sometimes referred to as the pyroelectric classes. Piezoelectric crystal classes: 1, 2, m, 222, mm2, 4, -4, 422, 4mm, -42m, 3, 32, 3m, 6, -6, 622, 6mm, -62m, 23, -43m Pyroelectric: 1, 2, m, mm2, 3, 3m, 4, 4mm, 6, 6mm Related effects Two effects which are closely related to pyroelectricity are ferroelectricity and piezoelectricity. Normally materials are very nearly electrically neutral on the macroscopic level. However, the positive and negative charges which make up the material are not necessarily distributed in a symmetric manner. If the sum of charge times distance for all elements of the basic cell does not equal zero the cell will have an electric dipole moment (a vector quantity). The dipole moment per unit volume is defined as the dielectric polarization. If this dipole moment changes with the effect of applied temperature changes, applied electric field, or applied pressure, the material is pyroelectric, ferroelectric, or piezoelectric, respectively. The ferroelectric effect is exhibited by materials which possess an electric polarization in the absence of an externally applied electric field such that the polarization can be reversed if the electric field is reversed. 
Since all ferroelectric materials exhibit a spontaneous polarization, all ferroelectric materials are also pyroelectric (but not all pyroelectric materials are ferroelectric). The piezoelectric effect is exhibited by crystals (such as quartz or ceramic) for which an electric voltage across the material appears when pressure is applied. Similar to pyroelectric effect, the phenomenon is due to the asymmetric structure of the crystals that allows ions to move more easily along one axis than the others. As pressure is applied, each side of the crystal takes on an opposite charge, resulting in a voltage drop across the crystal. Pyroelectricity should not be confused with thermoelectricity: In a typical demonstration of pyroelectricity, the whole crystal is changed from one temperature to another, and the result is a temporary voltage across the crystal. In a typical demonstration of thermoelectricity, one part of the device is kept at one temperature and the other part at a different temperature, and the result is a permanent voltage across the device as long as there is a temperature difference. Both effects convert temperature change to electrical potential, but the pyroelectric effect converts temperature change over time into electrical potential, while the thermoelectric effect converts temperature change with position into electrical potential. Pyroelectric materials Although artificial pyroelectric materials have been engineered, the effect was first discovered in minerals such as tourmaline. The pyroelectric effect is also present in bone and tendon. The most important example is gallium nitride, a semiconductor. The large electric fields in this material are detrimental in light emitting diodes (LEDs), but useful for the production of power transistors. Progress has been made in creating artificial pyroelectric materials, usually in the form of a thin film, using gallium nitride (GaN), caesium nitrate (CsNO3), polyvinyl fluorides, derivatives of phenylpyridine, and cobalt phthalocyanine. Lithium tantalate (LiTaO3) is a crystal exhibiting both piezoelectric and pyroelectric properties, which has been used to create small-scale nuclear fusion ("pyroelectric fusion"). Recently, pyroelectric and piezoelectric properties have been discovered in doped hafnium oxide (HfO2), which is a standard material in CMOS manufacturing. Applications Heat sensors Very small changes in temperature can produce a pyroelectric potential. Passive infrared sensors are often designed around pyroelectric materials, as the heat of a human or animal from several feet away is enough to generate a voltage. Power generation A pyroelectric can be repeatedly heated and cooled (analogously to a heat engine) to generate usable electrical power. An example of a heat engine is the movement of the pistons in an internal combustion engine like that found in a gasoline powered automobile. One group calculated that a pyroelectric in an Ericsson cycle could reach 50% of Carnot efficiency, while a different study found a material that could, in theory, reach 84-92% of Carnot efficiency (these efficiency values are for the pyroelectric itself, ignoring losses from heating and cooling the substrate, other heat-transfer losses, and all other losses elsewhere in the system). 
Possible advantages of pyroelectric generators for generating electricity (as compared to the conventional heat engine plus electrical generator) include: Harvesting energy from waste-heat Potentially lower operating temperatures Less bulky equipment Fewer moving parts. Although a few patents have been filed for such a device, such generators do not appear to be anywhere close to commercialization. Nuclear fusion Pyroelectric materials have been used to generate large electric fields necessary to steer deuterium ions in a nuclear fusion process. This is known as pyroelectric fusion. See also Electrocaloric effect, an opposite effect of pyroelectricity Kelvin probe force microscope Lithium tantalate Thermoelectricity Zinc oxide References Gautschi, Gustav, 2002, Piezoelectric Sensorics, Springer, Piezoelectric Sensorics: Force Strain Pressure Acceleration and Acoustic Emission Sensors Materials and Amplifiers External links Pyroelectric Detectors for THz applications WiredSense Pyroelectric Infrared Detectors DIAS Infrared DoITPoMS Teaching and Learning Package- "Pyroelectric Materials" Lithium Tantalate (LiTaO3) Lithium Tantalate (LiTaO3) laser detection with lithium tantalate Optical and Dielectric Properties of Sr(x)Ba(1-x)Nb(2)O(6) Dielectric and Electrical Properties of Ce,Mn:SBN Thermodynamics Electrical phenomena Energy conversion Crystals
Pyroelectricity
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
2,581
[ "Physical phenomena", "Crystallography", "Electrical phenomena", "Crystals", "Thermodynamics", "Dynamical systems" ]
155,758
https://en.wikipedia.org/wiki/Gravity%20assist
A gravity assist, gravity assist maneuver, swing-by, or generally a gravitational slingshot in orbital mechanics, is a type of spaceflight flyby which makes use of the relative movement (e.g. orbit around the Sun) and gravity of a planet or other astronomical object to alter the path and speed of a spacecraft, typically to save propellant and reduce expense. Gravity assistance can be used to accelerate a spacecraft, that is, to increase or decrease its speed or redirect its path. The "assist" is provided by the motion of the gravitating body as it pulls on the spacecraft. Any gain or loss of kinetic energy and linear momentum by a passing spacecraft is correspondingly lost or gained by the gravitational body, in accordance with Newton's Third Law. The gravity assist maneuver was first used in 1959 when the Soviet probe Luna 3 photographed the far side of Earth's Moon, and it was used by interplanetary probes from Mariner 10 onward, including the two Voyager probes' notable flybys of Jupiter and Saturn. Explanation A gravity assist around a planet changes a spacecraft's velocity (relative to the Sun) by entering and leaving the gravitational sphere of influence of a planet. The sum of the kinetic energies of both bodies remains constant (see elastic collision). A slingshot maneuver can therefore be used to change the spaceship's trajectory and speed relative to the Sun. A close terrestrial analogy is provided by a tennis ball bouncing off the front of a moving train. Imagine standing on a train platform, and throwing a ball at 30 km/h toward a train approaching at 50 km/h. The driver of the train sees the ball approaching at 80 km/h and then departing at 80 km/h after the ball bounces elastically off the front of the train. Because of the train's motion, however, that departure is at 130 km/h relative to the train platform; the ball has added twice the train's velocity to its own. Translating this analogy into space: in the planet reference frame, the spaceship has a vertical velocity of v relative to the planet. After the slingshot occurs the spaceship is leaving on a course 90 degrees to that which it arrived on. It will still have a velocity of v, but in the horizontal direction. In the Sun reference frame, the planet has a horizontal velocity of v, and by using the Pythagorean Theorem, the spaceship initially has a total velocity of √2·v. After the spaceship leaves the planet, it will have a velocity of v + v = 2v, gaining approximately 0.6v. This oversimplified example cannot be refined without additional details regarding the orbit, but if the spaceship travels in a path which forms a hyperbola, it can leave the planet in the opposite direction without firing its engine. This example is one of many trajectories and gains of speed the spaceship can experience. This explanation might seem to violate the conservation of energy and momentum, apparently adding velocity to the spacecraft out of nothing, but the spacecraft's effects on the planet must also be taken into consideration to provide a complete picture of the mechanics involved. The linear momentum gained by the spaceship is equal in magnitude to that lost by the planet, so the spacecraft gains velocity and the planet loses velocity. However, the planet's enormous mass compared to the spacecraft makes the resulting change in its speed negligibly small even when compared to the orbital perturbations planets undergo due to interactions with other celestial bodies on astronomically short timescales. 
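A minimal numeric sketch of the simplified 90-degree flyby and the momentum bookkeeping described above; the planet speed and the two masses are illustrative stand-ins rather than values for any particular mission.

```python
import math

# Simplified 90-degree flyby from the text: in the planet's frame the spacecraft
# arrives and leaves with speed v, while in the Sun's frame the planet moves at v.
# The planet speed and both masses are illustrative stand-ins, not mission values.

v = 13.0           # km/s, planet's heliocentric speed (illustrative)
m_craft = 1_000.0  # kg, spacecraft mass (typical order for a probe)
m_planet = 1.9e27  # kg, roughly a Jupiter-sized planet

v_before = math.sqrt(2) * v   # heliocentric speed before the flyby (Pythagoras)
v_after = 2 * v               # heliocentric speed after the flyby
gain = v_after - v_before     # about 0.59 v, the "approximately 0.6v" above

# Momentum conservation: the planet recoils by a vanishingly small amount.
planet_recoil = (m_craft / m_planet) * gain

print(f"spacecraft gain: {gain:.2f} km/s (~{gain / v:.2f} v)")
print(f"planet slows by: {planet_recoil:.2e} km/s")
```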
For example, one metric ton is a typical mass for an interplanetary space probe whereas Jupiter has a mass of almost 2 × 10²⁴ metric tons. Therefore, a one-ton spacecraft passing Jupiter will theoretically cause the planet to lose approximately 5 × 10⁻²⁵ km/s of orbital velocity for every km/s of velocity relative to the Sun gained by the spacecraft. For all practical purposes the effects on the planet can be ignored in the calculation. Realistic portrayals of encounters in space require the consideration of three dimensions. The same principles apply as above, except that adding the planet's velocity to that of the spacecraft requires vector addition. Due to the reversibility of orbits, gravitational slingshots can also be used to reduce the speed of a spacecraft. Both Mariner 10 and MESSENGER performed this maneuver to reach Mercury. If more speed is needed than available from gravity assist alone, a rocket burn near the periapsis (closest planetary approach) uses the least fuel. A given rocket burn always provides the same change in velocity (Δv), but the change in kinetic energy is proportional to the vehicle's velocity at the time of the burn. Therefore the maximum kinetic energy is obtained when the burn occurs at the vehicle's maximum velocity (periapsis). The Oberth effect describes this technique in more detail. Historical origins In his paper "To Those Who Will Be Reading in Order to Build", published in 1938 but dated 1918–1919, Yuri Kondratyuk suggested that a spacecraft traveling between two planets could be accelerated at the beginning and end of its trajectory by using the gravity of the two planets' moons. The portion of his manuscript considering gravity assists received no later development and was not published until the 1960s. In his 1925 paper "Problems of Flight by Jet Propulsion: Interplanetary Flights", Friedrich Zander showed a deep understanding of the physics behind the concept of gravity assist and its potential for the interplanetary exploration of the solar system. Italian engineer Gaetano Crocco was the first to calculate an interplanetary journey considering multiple gravity assists. The gravity assist maneuver was first used in 1959 when the Soviet probe Luna 3 photographed the far side of the Moon. The maneuver relied on research performed under the direction of Mstislav Keldysh at the Keldysh Institute of Applied Mathematics. In 1961, Michael Minovitch, a UCLA graduate student who worked at NASA's Jet Propulsion Laboratory (JPL), developed a gravity assist technique that would later be used for Gary Flandro's Planetary Grand Tour idea. During the summer of 1964 at NASA JPL, Gary Flandro was assigned the task of studying techniques for exploring the outer planets of the solar system. In this study he discovered the rare alignment of the outer planets (Jupiter, Saturn, Uranus, and Neptune) and conceived the Planetary Grand Tour multi-planet mission utilizing gravity assist to reduce mission duration from forty years to less than ten. Purpose A spacecraft traveling from Earth to an inner planet will increase its relative speed because it is falling toward the Sun, and a spacecraft traveling from Earth to an outer planet will decrease its speed because it is leaving the vicinity of the Sun. 
Although the orbital speed of an inner planet is greater than that of the Earth, a spacecraft traveling to an inner planet, even at the minimum speed needed to reach it, is still accelerated by the Sun's gravity to a speed notably greater than the orbital speed of that destination planet. If the spacecraft's purpose is only to fly by the inner planet, then there is typically no need to slow the spacecraft. However, if the spacecraft is to be inserted into orbit about that inner planet, then there must be some way to slow it down. Similarly, while the orbital speed of an outer planet is less than that of the Earth, a spacecraft leaving the Earth at the minimum speed needed to travel to some outer planet is slowed by the Sun's gravity to a speed far less than the orbital speed of that outer planet. Therefore, there must be some way to accelerate the spacecraft when it reaches that outer planet if it is to enter orbit about it. Rocket engines can certainly be used to increase and decrease the speed of the spacecraft. However, rocket thrust takes propellant, propellant has mass, and even a small change in velocity (known as Δv, or "delta-v", the delta symbol being used to represent a change and "v" signifying velocity) translates to a far larger requirement for propellant needed to escape Earth's gravity well. This is because not only must the primary-stage engines lift the extra propellant, they must also lift the extra propellant beyond that which is needed to lift that additional propellant. The liftoff mass requirement increases exponentially with an increase in the required delta-v of the spacecraft. Because additional fuel is needed to lift fuel into space, space missions are designed with a tight propellant "budget", known as the "delta-v budget". The delta-v budget is in effect the total propellant that will be available after leaving the earth, for speeding up, slowing down, stabilization against external buffeting (by particles or other external effects), or direction changes, if it cannot acquire more propellant. The entire mission must be planned within that capability. Therefore, methods of speed and direction change that do not require fuel to be burned are advantageous, because they allow extra maneuvering capability and course enhancement, without spending fuel from the limited amount which has been carried into space. Gravity assist maneuvers can greatly change the speed of a spacecraft without expending propellant, and can save significant amounts of propellant, so they are a very common technique to save fuel. Limits The main practical limit to the use of a gravity assist maneuver is that planets and other large masses are seldom in the right places to enable a voyage to a particular destination. For example, the Voyager missions which started in the late 1970s were made possible by the "Grand Tour" alignment of Jupiter, Saturn, Uranus and Neptune. A similar alignment will not occur again until the middle of the 22nd century. That is an extreme case, but even for less ambitious missions there are years when the planets are scattered in unsuitable parts of their orbits. Another limitation is the atmosphere, if any, of the available planet. The closer the spacecraft can approach, the faster its periapsis speed as gravity accelerates the spacecraft, allowing for more kinetic energy to be gained from a rocket burn. However, if a spacecraft gets too deep into the atmosphere, the energy lost to drag can exceed that gained from the planet's velocity. 
On the other hand, the atmosphere can be used to accomplish aerobraking. There have also been theoretical proposals to use aerodynamic lift as the spacecraft flies through the atmosphere. This maneuver, called an aerogravity assist, could bend the trajectory through a larger angle than gravity alone, and hence increase the gain in energy. Even in the case of an airless body, there is a limit to how close a spacecraft may approach. The magnitude of the achievable change in velocity depends on the spacecraft's approach velocity and the planet's escape velocity at the point of closest approach (limited by either the surface or the atmosphere). Interplanetary slingshots using the Sun itself are not possible because the Sun is at rest relative to the Solar System as a whole. However, thrusting when near the Sun has the same effect as the powered slingshot described by the Oberth effect. This has the potential to magnify a spacecraft's thrusting power enormously, but is limited by the spacecraft's ability to resist the heat. A rotating black hole might provide additional assistance, if its spin axis is aligned the right way. General relativity predicts that a large spinning mass produces frame-dragging: close to the object, space itself is dragged around in the direction of the spin. Any ordinary rotating object produces this effect. Although attempts to measure frame dragging about the Sun have produced no clear evidence, experiments performed by Gravity Probe B have detected frame-dragging effects caused by Earth. General relativity predicts that a spinning black hole is surrounded by a region of space, called the ergosphere, within which standing still (with respect to the black hole's spin) is impossible, because space itself is dragged at the speed of light in the same direction as the black hole's spin. The Penrose process may offer a way to gain energy from the ergosphere, although it would require the spaceship to dump some "ballast" into the black hole, and the spaceship would have had to expend energy to carry the "ballast" to the black hole. Notable examples of use Luna 3 The gravity assist maneuver was first attempted in 1959 for Luna 3, to photograph the far side of the Moon. The satellite did not gain speed, but its orbit was changed in a way that allowed successful transmission of the photos. Pioneer 10 NASA's Pioneer 10 is a space probe launched in 1972 that completed the first mission to the planet Jupiter. Thereafter, Pioneer 10 became the first of five artificial objects to achieve the escape velocity needed to leave the Solar System. In December 1973, the Pioneer 10 spacecraft became the first to use the gravitational slingshot effect to reach the escape velocity needed to leave the Solar System. Pioneer 11 Pioneer 11 was launched by NASA in 1973, to study the asteroid belt, the environment around Jupiter and Saturn, solar winds, and cosmic rays. It was the first probe to encounter Saturn, the second to fly through the asteroid belt, and the second to fly by Jupiter. To get to Saturn, the spacecraft received a gravity assist at Jupiter. Mariner 10 The Mariner 10 probe was the first spacecraft to use the gravitational slingshot effect to reach another planet, passing by Venus on 5 February 1974 on its way to becoming the first spacecraft to explore Mercury. Voyager 1 Voyager 1 was launched by NASA on September 5, 1977. It gained the energy to escape the Sun's gravity by performing slingshot maneuvers around Jupiter and Saturn. 
The spacecraft still communicates with the Deep Space Network to receive routine commands and to transmit data to Earth. Real-time distance and velocity data are provided by NASA and JPL. As of January 12, 2020, it is the most distant human-made object from Earth. Voyager 2 Voyager 2 was launched by NASA on August 20, 1977, to study the outer planets. Its trajectory took longer to reach Jupiter and Saturn than that of its twin spacecraft but enabled further encounters with Uranus and Neptune. Galileo The Galileo spacecraft was launched by NASA in 1989 and on its route to Jupiter received three gravity assists, one from Venus (February 10, 1990) and two from Earth (December 8, 1990 and December 8, 1992). The spacecraft reached Jupiter in December 1995. Gravity assists also allowed Galileo to fly by two asteroids, 243 Ida and 951 Gaspra. Ulysses In 1990, NASA launched the ESA spacecraft Ulysses to study the polar regions of the Sun. All the planets orbit approximately in a plane aligned with the equator of the Sun. Thus, to enter an orbit passing over the poles of the Sun, the spacecraft would have to eliminate the speed it inherited from the Earth's orbit around the Sun and gain the speed needed to orbit the Sun in the pole-to-pole plane. This was achieved by a gravity assist from Jupiter on February 8, 1992. MESSENGER The MESSENGER mission (launched in August 2004) made extensive use of gravity assists to slow its speed before orbiting Mercury. The MESSENGER mission included one flyby of Earth, two flybys of Venus, and three flybys of Mercury before finally arriving at Mercury in March 2011 with a velocity low enough to permit orbit insertion with available fuel. Although the flybys were primarily orbital maneuvers, each provided an opportunity for significant scientific observations. Cassini The Cassini–Huygens spacecraft was launched from Earth on 15 October 1997, followed by gravity assist flybys of Venus (26 April 1998 and 21 June 1999), Earth (18 August 1999), and Jupiter (30 December 2000). Transit to Saturn took 6.7 years; the spacecraft arrived on 1 July 2004. Its trajectory was called "the Most Complex Gravity-Assist Trajectory Flown to Date" in 2019. After entering orbit around Saturn, the Cassini spacecraft used multiple Titan gravity assists to achieve significant changes in the inclination of its orbit, so that instead of staying nearly in the equatorial plane, the spacecraft's flight path was inclined well out of the plane of the rings. A typical Titan encounter changed the spacecraft's velocity by 0.75 km/s, and the spacecraft made 127 Titan encounters. These encounters enabled an orbital tour with a wide range of periapsis and apoapsis distances, various alignments of the orbit with respect to the Sun, and orbital inclinations from 0° to 74°. The multiple flybys of Titan also allowed Cassini to fly by other moons, such as Rhea and Enceladus. Rosetta The Rosetta probe, launched in March 2004, used four gravity assist maneuvers (including one just 250 km from the surface of Mars, and three assists from Earth) to accelerate throughout the inner Solar System. That enabled it to fly by the asteroids 21 Lutetia and 2867 Šteins as well as eventually match the velocity of the 67P/Churyumov–Gerasimenko comet at the rendezvous point in August 2014. New Horizons New Horizons was launched by NASA in 2006, and reached Pluto in 2015. In 2007 it performed a gravity assist at Jupiter. Juno The Juno spacecraft was launched on August 5, 2011 (UTC). 
The trajectory used a gravity assist speed boost from Earth, accomplished by an Earth flyby in October 2013, two years after its launch on August 5, 2011. In that way Juno changed its orbit (and speed) toward its final goal, Jupiter, which it reached after only five years. Parker Solar Probe The Parker Solar Probe, launched by NASA in 2018, has seven planned Venus gravity assists. Each gravity assist brings the Parker Solar Probe progressively closer to the Sun. As of 2022, the spacecraft had performed five of its seven assists. The Parker Solar Probe's mission will make the closest approach to the Sun by any space mission. The mission's final planned gravity assist maneuver, completed on November 6, 2024, prepared it for three final solar flybys passing within just 3.8 million miles of the surface of the Sun, the first on December 24, 2024. Solar Orbiter Solar Orbiter was launched by ESA in 2020. In its initial cruise phase, which lasted until November 2021, Solar Orbiter performed two gravity-assist manoeuvres around Venus and one around Earth to alter the spacecraft's trajectory, guiding it towards the innermost regions of the Solar System. The first close solar pass took place on 26 March 2022 at around a third of Earth's distance from the Sun. BepiColombo BepiColombo is a joint mission of the European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA) to the planet Mercury. It was launched on 20 October 2018. It will use the gravity assist technique once with Earth, twice with Venus, and six times with Mercury. It will arrive in 2025. BepiColombo is named after Giuseppe (Bepi) Colombo, a pioneering thinker behind this type of maneuver. Lucy Lucy was launched by NASA on 16 October 2021. It gained one gravity assist from Earth on 16 October 2022, and after a flyby of the main-belt asteroid 152830 Dinkinesh it will gain another in 2024. In 2025, it will fly by the inner main-belt asteroid 52246 Donaldjohanson. In 2027, it will arrive at the Trojan cloud (the Greek camp of asteroids that orbits about 60° ahead of Jupiter), where it will fly by four Trojans, 3548 Eurybates (with its satellite), 15094 Polymele, 11351 Leucus, and 21900 Orus. After these flybys, Lucy will return to Earth in 2031 for another gravity assist toward the Trojan cloud (the Trojan camp which trails about 60° behind Jupiter), where it will visit the binary Trojan 617 Patroclus with its satellite Menoetius in 2033. In fiction In the novel 2001: A Space Odyssey – but not the movie – Discovery performs such a manoeuvre to gain speed as it goes around Jupiter. As Arthur C. Clarke made clear at various times, the location of TMA-2 was switched from near Saturn (in the novel) to near Jupiter (in the movie). See also 3753 Cruithne, an asteroid which periodically has gravitational slingshot encounters with Earth Delta-v budget Low-energy transfer, a type of gravitational assist where a spacecraft is gravitationally snagged into orbit by a celestial body. This method is usually executed in the Earth-Moon system. 
Dynamical friction Flyby anomaly, an anomalous delta-v increase during gravity assists Gravitational keyhole Interplanetary Transport Network n-body problem Oberth effect, applying thrust near closest approach in a gravity well Pioneer H, first Out-Of-The-Ecliptic mission (OOE) proposed, for Jupiter and solar (Sun) observations STEREO, a gravity-assisted mission which used Earth's Moon to eject two spacecraft from Earth's orbit into heliocentric orbit Notes References External links Basics of Space Flight: A Gravity Assist Primer at NASA.gov Spaceflight and Spacecraft: Gravity Assist, discussion at Phy6.org Double-ball drop experiment Astrodynamics Soviet inventions Orbital maneuvers Spacecraft propulsion Assist Articles containing video clips
Gravity assist
[ "Engineering" ]
4,356
[ "Astrodynamics", "Aerospace engineering" ]
155,760
https://en.wikipedia.org/wiki/Hohmann%20transfer%20orbit
In astronautics, the Hohmann transfer orbit is an orbital maneuver used to transfer a spacecraft between two orbits of different altitudes around a central body. For example, a Hohmann transfer could be used to raise a satellite's orbit from low Earth orbit to geostationary orbit. In the idealized case, the initial and target orbits are both circular and coplanar. The maneuver is accomplished by placing the craft into an elliptical transfer orbit that is tangential to both the initial and target orbits. The maneuver uses two impulsive engine burns: the first establishes the transfer orbit, and the second adjusts the orbit to match the target. The Hohmann maneuver often uses the lowest possible amount of impulse (which consumes a proportional amount of delta-v, and hence propellant) to accomplish the transfer, but requires a relatively longer travel time than higher-impulse transfers. In some cases where one orbit is much larger than the other, a bi-elliptic transfer can use even less impulse, at the cost of even greater travel time. The maneuver was named after Walter Hohmann, the German scientist who published a description of it in his 1925 book Die Erreichbarkeit der Himmelskörper (The Attainability of Celestial Bodies). Hohmann was influenced in part by the German science fiction author Kurd Lasswitz and his 1897 book Two Planets. When used for traveling between celestial bodies, a Hohmann transfer orbit requires that the starting and destination points be at particular locations in their orbits relative to each other. Space missions using a Hohmann transfer must wait for this required alignment to occur, which opens a launch window. For a mission between Earth and Mars, for example, these launch windows occur every 26 months. A Hohmann transfer orbit also determines a fixed time required to travel between the starting and destination points; for an Earth-Mars journey this travel time is about 9 months. When a transfer is performed between orbits close to celestial bodies with significant gravitation, much less delta-v is usually required, as the Oberth effect may be employed for the burns. Hohmann transfers are also often used for these situations, but low-energy transfers, which take into account the thrust limitations of real engines and take advantage of the gravity wells of both planets, can be more fuel efficient. Example The diagram shows a Hohmann transfer orbit to bring a spacecraft from a lower circular orbit into a higher one. It is an elliptic orbit that is tangential both to the lower circular orbit the spacecraft is to leave (cyan, labeled 1 on diagram) and the higher circular orbit that it is to reach (red, labeled 3 on diagram). The transfer orbit (yellow, labeled 2 on diagram) is initiated by firing the spacecraft's engine to add energy and raise the apoapsis. When the spacecraft reaches the apoapsis, a second engine firing adds energy to raise the periapsis, putting the spacecraft in the larger circular orbit. Due to the reversibility of orbits, a similar Hohmann transfer orbit can be used to bring a spacecraft from a higher orbit into a lower one; in this case, the spacecraft's engine is fired in the opposite direction to its current path, slowing the spacecraft and lowering the periapsis of the elliptical transfer orbit to the altitude of the lower target orbit. The engine is then fired again at the lower distance to slow the spacecraft into the lower circular orbit. The Hohmann transfer orbit is based on two instantaneous velocity changes. 
Extra fuel is required to compensate for the fact that the bursts take time; this is minimized by using high-thrust engines to minimize the duration of the bursts. For transfers in Earth orbit, the two burns are labelled the perigee burn and the apogee burn (or apogee kick); more generally, for bodies that are not the Earth, they are labelled periapsis and apoapsis burns. Alternatively, the second burn to circularize the orbit may be referred to as a circularization burn. Type I and Type II An ideal Hohmann transfer orbit transfers between two circular orbits in the same plane and traverses exactly 180° around the primary. In the real world, the destination orbit may not be circular, and may not be coplanar with the initial orbit. Real world transfer orbits may traverse slightly more, or slightly less, than 180° around the primary. An orbit which traverses less than 180° around the primary is called a "Type I" Hohmann transfer, while an orbit which traverses more than 180° is called a "Type II" Hohmann transfer. Transfer orbits can go more than 360° around the primary. These multiple-revolution transfers are sometimes referred to as Type III and Type IV, where a Type III is a Type I plus 360°, and a Type IV is a Type II plus 360°. Uses A Hohmann transfer orbit can be used to transfer an object's orbit toward another object, as long as they co-orbit a more massive body. In the context of Earth and the Solar System, this includes any object which orbits the Sun. An example of where a Hohmann transfer orbit could be used is to bring an asteroid, orbiting the Sun, into contact with the Earth. Calculation For a small body orbiting another much larger body, such as a satellite orbiting Earth, the total energy of the smaller body is the sum of its kinetic energy and potential energy, and this total energy also equals half the potential at the average distance a (the semi-major axis): E = (1/2)mv² − GMm/r = −GMm/(2a). Solving this equation for velocity results in the vis-viva equation, v² = μ(2/r − 1/a), where: v is the speed of an orbiting body, μ = G(M + m) is the standard gravitational parameter of the primary body, assuming the orbiting mass m is not significant compared with the central mass M (which makes μ ≈ GM; for Earth, this is μ ≈ 3.986 × 10¹⁴ m³ s⁻²), r is the distance of the orbiting body from the primary focus, and a is the semi-major axis of the body's orbit. Therefore, the delta-v (Δv) required for the Hohmann transfer can be computed as follows, under the assumption of instantaneous impulses: Δv₁ = √(μ/r₁) · (√(2r₂/(r₁ + r₂)) − 1) to enter the elliptical orbit at r = r₁ from the r₁ circular orbit, where r₂ is the aphelion of the resulting elliptical orbit, and Δv₂ = √(μ/r₂) · (1 − √(2r₁/(r₁ + r₂))) to leave the elliptical orbit at r = r₂ to the r₂ circular orbit, where r₁ and r₂ are respectively the radii of the departure and arrival circular orbits; the smaller (greater) of r₁ and r₂ corresponds to the periapsis distance (apoapsis distance) of the Hohmann elliptical transfer orbit. Typically, μ is given in units of m³/s², so be sure to use meters, not kilometers, for r₁ and r₂. The total Δv is then Δv₁ + Δv₂. Whether moving into a higher or lower orbit, by Kepler's third law, the time taken to transfer between the orbits is t_H = π √(a_H³/μ) (one half of the orbital period for the whole ellipse), where a_H = (r₁ + r₂)/2 is the length of the semi-major axis of the Hohmann transfer orbit. When traveling from one celestial body to another, it is crucial to start the maneuver at a time when the two bodies are properly aligned. 
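Before turning to that alignment condition, the short sketch below is a numeric check of the Δv and transfer-time formulas above. The function and variable names are ours, and the example values reproduce the geostationary-transfer figures worked through in the next paragraph.

```python
import math

MU_EARTH = 3.986e14  # m^3 s^-2, standard gravitational parameter of Earth (from the text)

def hohmann(mu, r1, r2):
    """Delta-v pair (m/s) and transfer time (s) for a Hohmann transfer between
    coplanar circular orbits of radii r1 and r2 in metres, per the formulas above."""
    a_h = (r1 + r2) / 2.0                                     # transfer-ellipse semi-major axis
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
    t_h = math.pi * math.sqrt(a_h ** 3 / mu)                  # half the period of the ellipse
    return dv1, dv2, t_h

# LEO at 300 km altitude to geostationary orbit, matching the worked example below.
dv1, dv2, t_h = hohmann(MU_EARTH, 6_678e3, 42_164e3)
print(f"dv1 = {dv1 / 1000:.2f} km/s, dv2 = {dv2 / 1000:.2f} km/s, "
      f"total = {(dv1 + dv2) / 1000:.2f} km/s")
print(f"transfer time = {t_h / 3600:.2f} h")
```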
Returning to the alignment condition: taking the angular velocity of the target orbit to be ω₂ = √(μ/r₂³), the angular alignment α (in radians) required at the time of the start between the source object and the target object is α = π − ω₂·t_H. Example Consider a geostationary transfer orbit, beginning at r1 = 6,678 km (altitude 300 km) and ending in a geostationary orbit with r2 = 42,164 km (altitude 35,786 km). In the smaller circular orbit the speed is 7.73 km/s; in the larger one, 3.07 km/s. In the elliptical orbit in between the speed varies from 10.15 km/s at the perigee to 1.61 km/s at the apogee. Therefore the Δv for the first burn is 10.15 − 7.73 = 2.42 km/s, for the second burn 3.07 − 1.61 = 1.46 km/s, and for both together 3.88 km/s. This is greater than the Δv required for an escape orbit: 10.93 − 7.73 = 3.20 km/s. Applying a Δv at the Low Earth orbit (LEO) of only 0.78 km/s more (3.20 − 2.42) would give the rocket the escape velocity, which is less than the Δv of 1.46 km/s required to circularize the geosynchronous orbit. This illustrates the Oberth effect: at large speeds the same Δv provides more specific orbital energy, and the energy increase is maximized if one spends the Δv as quickly as possible, rather than spending some, being decelerated by gravity, and then spending some more to overcome the deceleration (of course, the objective of a Hohmann transfer orbit is different). Worst case, maximum delta-v As the example above demonstrates, the Δv required to perform a Hohmann transfer between two circular orbits is not the greatest when the destination radius is infinite. (Escape speed is √2 times orbital speed, so the Δv required to escape is √2 − 1, or 41.4%, of the orbital speed.) The Δv required is greatest (53.0% of the smaller orbital speed) when the radius of the larger orbit is 15.5817... times that of the smaller orbit. This number is the positive root of x³ = 15x² + 9x + 1. For higher orbit ratios the Δv required for the second burn decreases faster than the first increases. Application to interplanetary travel When used to move a spacecraft from orbiting one planet to orbiting another, the Oberth effect allows the use of less delta-v than the sum of the delta-v for separate manoeuvres to escape the first planet, followed by a Hohmann transfer to the second planet, followed by insertion into an orbit around the other planet. For example, consider a spacecraft travelling from Earth to Mars. At the beginning of its journey, the spacecraft will already have a certain velocity and kinetic energy associated with its orbit around Earth. During the burn the rocket engine applies its delta-v, but the kinetic energy increases as a square law, until it is sufficient to escape the planet's gravitational potential, and then it burns more so as to gain enough energy to get into the Hohmann transfer orbit (around the Sun). Because the rocket engine is able to make use of the initial kinetic energy of the propellant, far less delta-v is required over and above that needed to reach escape velocity, and the optimum situation is when the transfer burn is made at minimum altitude (low periapsis) above the planet. The delta-v needed is only 3.6 km/s, only about 0.4 km/s more than needed to escape Earth, even though this results in the spacecraft going 2.9 km/s faster than the Earth as it heads off for Mars (see table below). At the other end, the spacecraft must decelerate for the gravity of Mars to capture it. This capture burn should optimally be done at low altitude to also make best use of the Oberth effect. 
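As a rough cross-check of the Earth-Mars figures quoted earlier (a transfer time of about 9 months and the need for a launch window), the sketch below applies the transfer-time and alignment formulas to circular, coplanar approximations of the two orbits. The solar gravitational parameter and orbital radii are standard approximate values supplied here, not numbers taken from this article.

```python
import math

MU_SUN = 1.327e20           # m^3 s^-2, solar gravitational parameter (approximate, supplied here)
AU = 1.496e11               # m
R_EARTH_ORBIT = 1.000 * AU  # circular-orbit approximation
R_MARS_ORBIT = 1.524 * AU   # circular-orbit approximation

# Transfer time: half the period of the transfer ellipse, t_H = pi * sqrt(a_H^3 / mu).
a_transfer = (R_EARTH_ORBIT + R_MARS_ORBIT) / 2
t_transfer = math.pi * math.sqrt(a_transfer ** 3 / MU_SUN)

# Required phase angle of Mars ahead of Earth at departure: alpha = pi - omega_2 * t_H.
omega_mars = math.sqrt(MU_SUN / R_MARS_ORBIT ** 3)
alpha = math.pi - omega_mars * t_transfer

print(f"transfer time ~ {t_transfer / 86400:.0f} days")
print(f"departure phase angle ~ {math.degrees(alpha):.0f} degrees")
```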
Therefore, relatively small amounts of thrust at either end of the trip are needed to arrange the transfer compared to the free space situation. However, with any Hohmann transfer, the alignment of the two planets in their orbits is crucial – the destination planet and the spacecraft must arrive at the same point in their respective orbits around the Sun at the same time. This requirement for alignment gives rise to the concept of launch windows. The term lunar transfer orbit (LTO) is used for the Moon. It is possible to apply the formula given above to calculate the Δv in km/s needed to enter a Hohmann transfer orbit to arrive at various destinations from Earth (assuming circular orbits for the planets). In this table, the column labeled "Δv to enter Hohmann orbit from Earth's orbit" gives the change from Earth's velocity to the velocity needed to get on a Hohmann ellipse whose other end will be at the desired distance from the Sun. The column labeled "LEO height" gives the velocity needed (in a non-rotating frame of reference centered on the earth) when 300 km above the Earth's surface. This is obtained by adding to the specific kinetic energy the square of the escape velocity (10.9 km/s) from this height. The column "LEO" is simply the previous speed minus the LEO orbital speed of 7.73 km/s. Note that in most cases, Δv from LEO is less than the Δv to enter Hohmann orbit from Earth's orbit. To get to the Sun, it is actually not necessary to use a Δv of 24 km/s. One can use 8.8 km/s to go very far away from the Sun, then use a negligible Δv to bring the angular momentum to zero, and then fall into the Sun. This can be considered a sequence of two Hohmann transfers, one up and one down. Also, the table does not give the values that would apply when using the Moon for a gravity assist. There are also possibilities of using one planet, like Venus which is the easiest to get to, to assist getting to other planets or the Sun. Comparison to other transfers Bi-elliptic transfer The bi-elliptic transfer consists of two half-elliptic orbits. From the initial orbit, a first burn expends delta-v to boost the spacecraft into the first transfer orbit with an apoapsis at some point away from the central body. At this point a second burn sends the spacecraft into the second elliptical orbit with periapsis at the radius of the final desired orbit, where a third burn is performed, injecting the spacecraft into the desired orbit. While they require one more engine burn than a Hohmann transfer and generally require a greater travel time, some bi-elliptic transfers require a lower amount of total delta-v than a Hohmann transfer when the ratio of final to initial semi-major axis is 11.94 or greater, depending on the intermediate semi-major axis chosen. The idea of the bi-elliptical transfer trajectory was first published by Ary Sternfeld in 1934. Low-thrust transfer Low-thrust engines can perform an approximation of a Hohmann transfer orbit, by creating a gradual enlargement of the initial circular orbit through carefully timed engine firings. This requires a change in velocity (delta-v) that is greater than the two-impulse transfer orbit and takes longer to complete. Engines such as ion thrusters are more difficult to analyze with the delta-v model. These engines offer a very low thrust and at the same time, much higher delta-v budget, much higher specific impulse, lower mass of fuel and engine. 
A 2-burn Hohmann transfer maneuver would be impractical with such a low thrust; the maneuver mainly optimizes the use of fuel, but in this situation there is relatively plenty of it. If only low-thrust maneuvers are planned on a mission, then continuously firing a low-thrust, but very high-efficiency engine might generate a higher delta-v and at the same time use less propellant than a conventional chemical rocket engine. Going from one circular orbit to another by gradually changing the radius simply requires the same delta-v as the difference between the two speeds. Such maneuver requires more delta-v than a 2-burn Hohmann transfer maneuver, but does so with continuous low thrust rather than the short applications of high thrust. The amount of propellant mass used measures the efficiency of the maneuver plus the hardware employed for it. The total delta-v used measures the efficiency of the maneuver only. For electric propulsion systems, which tend to be low-thrust, the high efficiency of the propulsive system usually compensates for the higher delta-V compared to the more efficient Hohmann maneuver. Transfer orbits using electrical propulsion or low-thrust engines optimize the transfer time to reach the final orbit and not the delta-v as in the Hohmann transfer orbit. For geostationary orbit, the initial orbit is set to be supersynchronous and by thrusting continuously in the direction of the velocity at apogee, the transfer orbit transforms to a circular geosynchronous one. This method however takes much longer to achieve due to the low thrust injected into the orbit. Interplanetary Transport Network In 1997, a set of orbits known as the Interplanetary Transport Network (ITN) was published, providing even lower propulsive delta-v (though much slower and longer) paths between different orbits than Hohmann transfer orbits. The Interplanetary Transport Network is different in nature than Hohmann transfers because Hohmann transfers assume only one large body whereas the Interplanetary Transport Network does not. The Interplanetary Transport Network is able to achieve the use of less propulsive delta-v by employing gravity assist from the planets. See also Bi-elliptic transfer Delta-v budget Geostationary transfer orbit Halo orbit Lissajous orbit List of orbits Orbital mechanics Citations General and cited sources Further reading Astrodynamics Spacecraft propulsion Orbital maneuvers Types of orbit
Hohmann transfer orbit
[ "Engineering" ]
3,516
[ "Astrodynamics", "Aerospace engineering" ]
155,823
https://en.wikipedia.org/wiki/Sievert
The sievert (symbol: Sv) is a unit in the International System of Units (SI) intended to represent the stochastic health risk of ionizing radiation, which is defined as the probability of causing radiation-induced cancer and genetic damage. The sievert is important in dosimetry and radiation protection. It is named after Rolf Maximilian Sievert, a Swedish medical physicist renowned for work on radiation dose measurement and research into the biological effects of radiation. The sievert is used for radiation dose quantities such as equivalent dose and effective dose, which represent the risk of external radiation from sources outside the body, and committed dose, which represents the risk of internal irradiation due to inhaled or ingested radioactive substances. According to the International Commission on Radiological Protection (ICRP), one sievert results in a 5.5% probability of eventually developing fatal cancer based on the disputed linear no-threshold model of ionizing radiation exposure. To calculate the value of stochastic health risk in sieverts, the physical quantity absorbed dose is converted into equivalent dose and effective dose by applying factors for radiation type and biological context, published by the ICRP and the International Commission on Radiation Units and Measurements (ICRU). One sievert equals 100 rem, which is an older, CGS radiation unit. Conventionally, deterministic health effects due to acute tissue damage that is certain to happen, produced by high dose rates of radiation, are compared to the physical quantity absorbed dose measured by the unit gray (Gy). Definition CIPM definition of the sievert The SI definition given by the International Committee for Weights and Measures (CIPM) says: "The quantity dose equivalent H is the product of the absorbed dose D of ionizing radiation and the dimensionless factor Q (quality factor) defined as a function of linear energy transfer by the ICRU" H = Q × D The value of Q is not defined further by CIPM, but it requires the use of the relevant ICRU recommendations to provide this value. The CIPM also says that "in order to avoid any risk of confusion between the absorbed dose D and the dose equivalent H, the special names for the respective units should be used, that is, the name gray should be used instead of joules per kilogram for the unit of absorbed dose D and the name sievert instead of joules per kilogram for the unit of dose equivalent H". In summary: gray: quantity D—absorbed dose 1 Gy = 1 joule/kilogram—a physical quantity. 1 Gy is the deposit of a joule of radiation energy per kilogram of matter or tissue. sievert: quantity H—equivalent dose 1 Sv = 1 joule/kilogram—a biological effect. The sievert represents the equivalent biological effect of the deposit of a joule of radiation energy in a kilogram of human tissue. The ratio to absorbed dose is denoted by Q. ICRP definition of the sievert The ICRP definition of the sievert is: "The sievert is the special name for the SI unit of equivalent dose, effective dose, and operational dose quantities. The unit is joule per kilogram." The sievert is used for a number of dose quantities which are described in this article and are part of the international radiological protection system devised and defined by the ICRP and ICRU. 
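As a minimal illustration of the CIPM relationship H = Q × D given above: the quality factors used below are illustrative placeholders, since in practice Q is a function of linear energy transfer defined by the ICRU rather than a single constant per radiation type.

```python
# Dose equivalent from absorbed dose using the CIPM definition H = Q * D.
# The quality factors below are illustrative placeholders: in practice Q is a
# function of linear energy transfer defined by the ICRU, not a single constant.

ASSUMED_Q = {
    "photons": 1.0,  # commonly taken as 1 for photons and electrons
    "alpha": 20.0,   # heavy charged particles carry a much larger factor
}

def dose_equivalent_sv(absorbed_dose_gy, radiation="photons"):
    """Return the dose equivalent H in sieverts for an absorbed dose D in grays."""
    return ASSUMED_Q[radiation] * absorbed_dose_gy

print(dose_equivalent_sv(0.002))            # 2 mGy of photons -> 0.002 Sv
print(dose_equivalent_sv(0.002, "alpha"))   # 2 mGy of alphas  -> 0.04 Sv
```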
External dose quantities When the sievert is used to represent the stochastic effects of external ionizing radiation on human tissue, the radiation doses received are measured in practice by radiometric instruments and dosimeters and are called operational quantities. To relate these actual received doses to likely health effects, protection quantities have been developed to predict the likely health effects using the results of large epidemiological studies. Consequently, this has required the creation of a number of different dose quantities within a coherent system developed by the ICRU working with the ICRP. The external dose quantities and their relationships are shown in the accompanying diagram. The ICRU is primarily responsible for the operational dose quantities, based upon the application of ionising radiation metrology, and the ICRP is primarily responsible for the protection quantities, based upon modelling of dose uptake and biological sensitivity of the human body. Naming conventions The ICRU/ICRP dose quantities have specific purposes and meanings, but some use common words in a different order. There can be confusion between, for instance, equivalent dose and dose equivalent. Although the CIPM definition states that the linear energy transfer function (Q) of the ICRU is used in calculating the biological effect, the ICRP in 1990 developed the "protection" dose quantities effective and equivalent dose which are calculated from more complex computational models and are distinguished by not having the phrase dose equivalent in their name. Only the operational dose quantities which still use Q for calculation retain the phrase dose equivalent. However, there are joint ICRU/ICRP proposals to simplify this system by changes to the operational dose definitions to harmonise with those of protection quantities. These were outlined at the 3rd International Symposium on Radiological Protection in October 2015, and if implemented would make the naming of operational quantities more logical by introducing "dose to lens of eye" and "dose to local skin" as equivalent doses. In the USA there are differently named dose quantities which are not part of the ICRP nomenclature. Physical quantities These are directly measurable physical quantities in which no allowance has been made for biological effects. Radiation fluence is the number of radiation particles impinging per unit area per unit time, kerma is the ionising effect on air of gamma rays and X-rays and is used for instrument calibration, and absorbed dose is the amount of radiation energy deposited per unit mass in the matter or tissue under consideration. Operational quantities Operational quantities are measured in practice, and are the means of directly measuring dose uptake due to exposure, or predicting dose uptake in a measured environment. In this way they are used for practical dose control, by providing an estimate or upper limit for the value of the protection quantities related to an exposure. They are also used in practical regulations and guidance. The calibration of individual and area dosimeters in photon fields is performed by measuring the collision "air kerma free in air" under conditions of secondary electron equilibrium. Then the appropriate operational quantity is derived applying a conversion coefficient that relates the air kerma to the appropriate operational quantity. The conversion coefficients for photon radiation are published by the ICRU. 
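To make the calibration chain concrete, here is a minimal sketch of deriving an operational quantity such as the ambient dose equivalent H*(10) from a measured air kerma. The single conversion coefficient of 1.2 Sv/Gy is an assumed, order-of-magnitude figure for mid-energy photons used only for illustration; the real coefficients are energy dependent and published by the ICRU.

```python
def operational_quantity_sv(air_kerma_gy, conversion_coeff_sv_per_gy=1.2):
    """Derive an operational quantity (e.g. H*(10)) from air kerma via a conversion coefficient.
    The default coefficient is an assumed illustrative value, not an ICRU-published one."""
    return air_kerma_gy * conversion_coeff_sv_per_gy

# A hypothetical calibration point: 1 mGy of air kerma
print(f"{operational_quantity_sv(1e-3) * 1000:.2f} mSv")  # about 1.20 mSv of ambient dose equivalent
```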
Simple (non-anthropomorphic) "phantoms" are used to relate operational quantities to measured free-air irradiation. The ICRU sphere phantom is based on the definition of an ICRU 4-element tissue-equivalent material which does not really exist and cannot be fabricated. The ICRU sphere is a theoretical 30 cm diameter "tissue equivalent" sphere consisting of a material with a density of 1 g·cm−3 and a mass composition of 76.2% oxygen, 11.1% carbon, 10.1% hydrogen and 2.6% nitrogen. This material is specified to most closely approximate human tissue in its absorption properties. According to the ICRP, the ICRU "sphere phantom" in most cases adequately approximates the human body as regards the scattering and attenuation of penetrating radiation fields under consideration. Thus radiation of a particular energy fluence will have roughly the same energy deposition within the sphere as it would in the equivalent mass of human tissue. To allow for back-scattering and absorption of the human body, the "slab phantom" is used to represent the human torso for practical calibration of whole body dosimeters. The slab phantom has a depth chosen to represent the thickness of the human torso. The joint ICRU/ICRP proposals outlined at the 3rd International Symposium on Radiological Protection in October 2015 to change the definition of operational quantities would not change the present use of calibration phantoms or reference radiation fields. Protection quantities Protection quantities are calculated models, and are used as "limiting quantities" to specify exposure limits to ensure, in the words of ICRP, "that the occurrence of stochastic health effects is kept below unacceptable levels and that tissue reactions are avoided". These quantities cannot be measured in practice but their values are derived using models of external dose to internal organs of the human body, using anthropomorphic phantoms. These are 3D computational models of the body which take into account a number of complex effects such as body self-shielding and internal scattering of radiation. The calculation starts with organ absorbed dose, and then applies radiation and tissue weighting factors. As protection quantities cannot practically be measured, operational quantities must be used to relate them to practical radiation instrument and dosimeter responses. Instrument and dosimetry response This is an actual reading obtained from an instrument such as an ambient dose gamma monitor, or a personal dosimeter. Such instruments are calibrated using radiation metrology techniques which will trace them to a national radiation standard, and thereby relate them to an operational quantity. The readings of instruments and dosimeters are used to prevent the uptake of excessive dose and to provide records of dose uptake to satisfy radiation safety legislation, such as, in the UK, the Ionising Radiations Regulations 1999. Calculating protection dose quantities The sievert is used in external radiation protection for equivalent dose (the external-source, whole-body exposure effects, in a uniform field), and effective dose (which depends on the body parts irradiated). These dose quantities are weighted averages of absorbed dose designed to be representative of the stochastic health effects of radiation, and use of the sievert implies that appropriate weighting factors have been applied to the absorbed dose measurement or calculation (expressed in grays). The ICRP calculation provides two weighting factors to enable the calculation of protection quantities.
1. The radiation weighting factor WR, which is specific for radiation type R – This is used in calculating the equivalent dose HT, which can be for the whole body or for individual organs. 2. The tissue weighting factor WT, which is specific for tissue type T being irradiated. This is used with WR to calculate the contributory organ doses to arrive at an effective dose E for non-uniform irradiation. When a whole body is irradiated uniformly only the radiation weighting factor WR is used, and the effective dose equals the whole body equivalent dose. But if the irradiation of a body is partial or non-uniform the tissue factor WT is used to calculate the dose to each organ or tissue. These are then summed to obtain the effective dose. In the case of uniform irradiation of the human body, these summate to 1, but in the case of partial or non-uniform irradiation, they will summate to a lower value depending on the organs concerned, reflecting the lower overall health effect. The calculation process is shown on the accompanying diagram. This approach calculates the biological risk contribution to the whole body, taking into account complete or partial irradiation, and the radiation type or types. The values of these weighting factors are conservatively chosen to be greater than the bulk of experimental values observed for the most sensitive cell types, based on averages of those obtained for the human population. Radiation type weighting factor WR Since different radiation types have different biological effects for the same deposited energy, a corrective radiation weighting factor WR, which is dependent on the radiation type and on the target tissue, is applied to convert the absorbed dose measured in the unit gray to determine the equivalent dose. The result is given the unit sievert. The equivalent dose is calculated by multiplying the absorbed energy, averaged by mass over an organ or tissue of interest, by a radiation weighting factor appropriate to the type and energy of radiation. To obtain the equivalent dose for a mix of radiation types and energies, a sum is taken over all types of radiation energy dose: HT = ΣR WR × DT,R, where HT is the equivalent dose absorbed by tissue T, DT,R is the absorbed dose in tissue T by radiation type R and WR is the radiation weighting factor defined by regulation. Thus for example, an absorbed dose of 1 Gy by alpha particles will lead to an equivalent dose of 20 Sv. This may seem to be a paradox. It implies that the energy of the incident radiation field in joules has increased by a factor of 20, thereby violating the laws of conservation of energy. However, this is not the case. The sievert is used only to convey the fact that a gray of absorbed alpha particles would cause twenty times the biological effect of a gray of absorbed x-rays. It is this biological component that is being expressed when using sieverts rather than the actual energy delivered by the incident absorbed radiation. Tissue type weighting factor WT The second weighting factor is the tissue factor WT, but it is used only if there has been non-uniform irradiation of a body. If the body has been subject to uniform irradiation, the effective dose equals the whole body equivalent dose, and only the radiation weighting factor WR is used. But if there is partial or non-uniform body irradiation the calculation must take account of the individual organ doses received, because the sensitivity of each organ to irradiation depends on its tissue type.
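A minimal computational sketch of this two-step weighting (radiation type, then tissue type) is given below. The weighting factors shown are an assumed, illustrative subset in the spirit of the ICRP values (photons 1 and alpha particles 20 for WR, small example values for WT); the authoritative tables published by the ICRP should be used for any real assessment.

```python
# Illustrative weighting factors only (assumed subset in the spirit of ICRP values)
W_R = {"photon": 1.0, "alpha": 20.0}   # radiation weighting factors
W_T = {"lung": 0.12, "skin": 0.01}     # tissue weighting factors

def equivalent_dose_sv(absorbed_doses_gy):
    """H_T = sum over radiation types R of W_R * D_T,R for a single tissue T."""
    return sum(W_R[r] * d for r, d in absorbed_doses_gy.items())

def effective_dose_sv(organ_doses_gy):
    """E = sum over tissues T of W_T * H_T."""
    return sum(W_T[t] * equivalent_dose_sv(d) for t, d in organ_doses_gy.items())

# Example: lungs receive 1 mGy of photons and 0.1 mGy of alpha radiation; skin receives 2 mGy of photons
exposure = {"lung": {"photon": 1e-3, "alpha": 1e-4}, "skin": {"photon": 2e-3}}
print(f"Effective dose: {effective_dose_sv(exposure) * 1000:.2f} mSv")  # 0.38 mSv
```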
This summed dose from only those organs concerned gives the effective dose for the whole body. The tissue weighting factor is used to calculate those individual organ dose contributions. The ICRP values for WT are given in the table shown here. The article on effective dose gives the method of calculation. The absorbed dose is first corrected for the radiation type to give the equivalent dose, and then corrected for the tissue receiving the radiation. Some tissues like bone marrow are particularly sensitive to radiation, so they are given a weighting factor that is disproportionately large relative to the fraction of body mass they represent. Other tissues like the hard bone surface are particularly insensitive to radiation and are assigned a disproportionately low weighting factor. In summary, the sum of tissue-weighted doses to each irradiated organ or tissue of the body adds up to the effective dose for the body. The use of effective dose enables comparisons of overall dose received regardless of the extent of body irradiation. Operational quantities The operational quantities are used in practical applications for monitoring and investigating external exposure situations. They are defined for practical operational measurements and assessment of doses in the body. Three external operational dose quantities were devised to relate operational dosimeter and instrument measurements to the calculated protection quantities. Also devised were two phantoms, the ICRU "slab" and "sphere" phantoms, which relate these quantities to incident radiation quantities using the Q(L) calculation. Ambient dose equivalent This is used for area monitoring of penetrating radiation and is usually expressed as the quantity H*(10). This means the radiation is equivalent to that found 10 mm within the ICRU sphere phantom in the direction of origin of the field. An example of penetrating radiation is gamma rays. Directional dose equivalent This is used for monitoring of low penetrating radiation and is usually expressed as the quantity H′(0.07). This means the radiation is equivalent to that found at a depth of 0.07 mm in the ICRU sphere phantom. Examples of low penetrating radiation are alpha particles, beta particles and low-energy photons. This dose quantity is used for the determination of equivalent dose to tissues such as the skin and the lens of the eye. In radiological protection practice, the value of the direction Ω is usually not specified, as the dose is usually at a maximum at the point of interest. Personal dose equivalent This is used for individual dose monitoring, such as with a personal dosimeter worn on the body. The recommended depth for assessment is 10 mm, which gives the quantity Hp(10). Proposals for changing the definition of protection dose quantities In order to simplify the means of calculating operational quantities and assist in the comprehension of radiation dose protection quantities, ICRP Committee 2 and ICRU Report Committee 26 started in 2010 an examination of different means of achieving this by dose coefficients related to effective dose or absorbed dose. Specifically: 1. For area monitoring of effective dose of the whole body it would be: H = Φ × conversion coefficient. The driver for this is that H*(10) is not a reasonable estimate of effective dose due to high energy photons, as a result of the extension of particle types and energy ranges to be considered in ICRP report 116. This change would remove the need for the ICRU sphere and introduce a new quantity called Emax.
2. For individual monitoring, to measure deterministic effects on the eye lens and skin, it would be: D = Φ × conversion coefficient for absorbed dose. The driver for this is the need to measure the deterministic effect, which, it is suggested, is more appropriate than the stochastic effect. This would calculate the equivalent dose quantities Hlens and Hskin. This would remove the need for the ICRU sphere and the Q-L function. Any changes would replace ICRU report 51, and part of report 57. A final draft report was issued in July 2017 by ICRU/ICRP for consultation. Internal dose quantities The sievert is used for human internal dose quantities in calculating committed dose. This is dose from radionuclides which have been ingested or inhaled into the human body, and thereby "committed" to irradiate the body for a period of time. The concepts of calculating protection quantities as described for external radiation apply, but as the source of radiation is within the tissue of the body, the calculation of absorbed organ dose uses different coefficients and irradiation mechanisms. The ICRP defines the committed effective dose, E(t), as the sum of the products of the committed organ or tissue equivalent doses and the appropriate tissue weighting factors WT, where t is the integration time in years following the intake. The commitment period is taken to be 50 years for adults, and to age 70 years for children. The ICRP further states "For internal exposure, committed effective doses are generally determined from an assessment of the intakes of radionuclides from bioassay measurements or other quantities (e.g., activity retained in the body or in daily excreta). The radiation dose is determined from the intake using recommended dose coefficients". A committed dose from an internal source is intended to carry the same effective risk as the same amount of equivalent dose applied uniformly to the whole body from an external source, or the same amount of effective dose applied to part of the body. Health effects Ionizing radiation has deterministic and stochastic effects on human health. Deterministic (acute tissue effect) events happen with certainty, with the resulting health conditions occurring in every individual who received the same high dose. Stochastic (cancer induction and genetic) events are inherently random, with most individuals in a group failing to ever exhibit any causal negative health effects after exposure, while an indeterministic random minority do, often with the resulting subtle negative health effects being observable only after large detailed epidemiology studies. The use of the sievert implies that only stochastic effects are being considered, and to avoid confusion deterministic effects are conventionally compared to values of absorbed dose expressed by the SI unit gray (Gy). Stochastic effects Stochastic effects are those that occur randomly, such as radiation-induced cancer. The consensus of nuclear regulators, governments and the UNSCEAR is that the incidence of cancers due to ionizing radiation can be modeled as increasing linearly with effective dose at a rate of 5.5% per sievert. This is known as the linear no-threshold model (LNT model). Some argue that this LNT model is now outdated and should be replaced with a threshold below which the body's natural cell processes repair damage and/or replace damaged cells.
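The linear no-threshold coefficient just quoted lends itself to a one-line calculation. The sketch below applies the 5.5% per sievert figure to a cumulative effective dose; it is only an illustration of the (disputed) LNT model as stated above, not a health assessment.

```python
LNT_RISK_PER_SV = 0.055  # ICRP nominal probability of eventual fatal cancer per sievert under the LNT model

def lnt_excess_cancer_risk(effective_dose_sv):
    """Naive linear no-threshold estimate of excess lifetime fatal-cancer probability."""
    return effective_dose_sv * LNT_RISK_PER_SV

# Example: a cumulative effective dose of 100 mSv
print(f"{lnt_excess_cancer_risk(0.100):.2%}")  # 0.55%
```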
There is general agreement that the risk is much higher for infants and fetuses than adults, higher for the middle-aged than for seniors, and higher for women than for men, though there is no quantitative consensus about this. Deterministic effects The deterministic (acute tissue damage) effects that can lead to acute radiation syndrome only occur in the case of acute high doses (≳ 0.1 Gy) and high dose rates (≳ 0.1 Gy/h) and are conventionally not measured using the unit sievert, but use the unit gray (Gy). A model of deterministic risk would require different weighting factors (not yet established) than are used in the calculation of equivalent and effective dose. ICRP dose limits The ICRP recommends a number of limits for dose uptake in table 8 of report 103. These limits are "situational", for planned, emergency and existing situations. Within these situations, limits are given for the following groups: Planned exposure – limits given for occupational, medical and public Emergency exposure – limits given for occupational and public exposure Existing exposure – All persons exposed For occupational exposure, the limit is 50 mSv in a single year with a maximum of 100 mSv in a consecutive five-year period, and for the public to an average of 1 mSv (0.001 Sv) of effective dose per year, not including medical and occupational exposures. For comparison, natural radiation levels inside the United States Capitol are such that a human body would receive an additional dose rate of 0.85 mSv/a, close to the regulatory limit, because of the uranium content of the granite structure. According to the conservative ICRP model, someone who spent 20 years inside the capitol building would have an extra one in a thousand chance of getting cancer, over and above any other existing risk (calculated as: 20 a·0.85 mSv/a·0.001 Sv/mSv·5.5%/Sv ≈ 0.1%). However, that "existing risk" is much higher; an average American would have a 10% chance of getting cancer during this same 20-year period, even without any exposure to artificial radiation (see natural Epidemiology of cancer and cancer rates). Dose examples Significant radiation doses are not frequently encountered in everyday life. The following examples can help illustrate relative magnitudes; these are meant to be examples only, not a comprehensive list of possible radiation doses. An "acute dose" is one that occurs over a short and finite period of time, while a "chronic dose" is a dose that continues for an extended period of time so that it is better described by a dose rate. Dose examples Dose rate examples All conversions between hours and years have assumed continuous presence in a steady field, disregarding known fluctuations, intermittent exposure and radioactive decay. Converted values are shown in parentheses. "/a" is "per annum", which means per year. "/h" means "per hour". Notes on examples: History The sievert has its origin in the röntgen equivalent man (rem) which was derived from CGS units. The International Commission on Radiation Units and Measurements (ICRU) promoted a switch to coherent SI units in the 1970s, and announced in 1976 that it planned to formulate a suitable unit for equivalent dose. The ICRP pre-empted the ICRU by introducing the sievert in 1977. The sievert was adopted by the International Committee for Weights and Measures (CIPM) in 1980, five years after adopting the gray. The CIPM then issued an explanation in 1984, recommending when the sievert should be used as opposed to the gray. 
That explanation was updated in 2002 to bring it closer to the ICRP's definition of equivalent dose, which had changed in 1990. Specifically, the ICRP had introduced equivalent dose, renamed the quality factor (Q) to radiation weighting factor (WR), and dropped another weighting factor "N" in 1990. In 2002, the CIPM similarly dropped the weighting factor "N" from their explanation but otherwise kept other old terminology and symbols. This explanation only appears in the appendix to the SI brochure and is not part of the definition of the sievert. Common SI usage Frequently used SI prefixes are the millisievert (1 mSv = 0.001 Sv) and microsievert (1 μSv = 0.000 001 Sv) and commonly used units for time derivative or "dose rate" indications on instruments and warnings for radiological protection are μSv/h and mSv/h. Regulatory limits and chronic doses are often given in units of mSv/a or Sv/a, where they are understood to represent an average over the entire year. In many occupational scenarios, the hourly dose rate might fluctuate to levels thousands of times higher for a brief period of time, without infringing on the annual limits. The conversion from hours to years varies because of leap years and exposure schedules, but approximate conversions are: 1 mSv/h = 8.766 Sv/a 114.1 μSv/h = 1 Sv/a Conversion from hourly rates to annual rates is further complicated by seasonal fluctuations in natural radiation, decay of artificial sources, and intermittent proximity between humans and sources. The ICRP once adopted fixed conversion for occupational exposure, although these have not appeared in recent documents: 8 h = 1 day 40 h = 1 week 50 weeks = 1 year Therefore, for occupation exposures of that time period, 1 mSv/h = 2 Sv/a 500 μSv/h = 1 Sv/a Ionizing radiation quantities The following table shows radiation quantities in SI and non-SI units: Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union European units of measurement directives required that their use for "public health ... purposes" be phased out by 31 December 1985. Rem equivalence An older unit for the dose equivalent is the rem, still often used in the United States. One sievert is equal to 100 rem: See also Acute radiation syndrome Becquerel (disintegrations per second) Counts per minute Exposure (radiation) Rutherford (unit) Sverdrup (a non-SI unit of volume transport with the same symbol Sv as sievert) Explanatory notes References External links Eurados - The European radiation dosimetry group Radiation health effects Radiobiology Radioactivity Units of radiation dose Units of radioactivity Radiation protection SI derived units
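The hour-to-year conversions listed above can be wrapped in a small helper; the sketch below uses 8,766 hours for a continuous year and 2,000 hours for the older ICRP occupational convention (8 h × 5 days × 50 weeks), both taken from the figures in this section.

```python
HOURS_PER_YEAR = 8766                 # 365.25 days x 24 h, continuous presence in the field
OCCUPATIONAL_HOURS_PER_YEAR = 2000    # 8 h/day x 5 days/week x 50 weeks, older ICRP convention

def usv_per_hour_to_msv_per_year(rate_usv_per_h, hours_per_year=HOURS_PER_YEAR):
    """Convert a steady dose rate in microsieverts per hour to millisieverts per year."""
    return rate_usv_per_h * hours_per_year / 1000

print(usv_per_hour_to_msv_per_year(0.2))                               # ~1.75 mSv/a, continuous exposure
print(usv_per_hour_to_msv_per_year(0.5, OCCUPATIONAL_HOURS_PER_YEAR))  # 1.0 mSv/a, working hours only
```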
Sievert
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Biology" ]
5,398
[ "Radiation health effects", "Units of measurement", "Radiobiology", "Quantity", "Units of radioactivity", "Units of radiation dose", "Nuclear physics", "Radiation effects", "Radioactivity" ]
155,829
https://en.wikipedia.org/wiki/Curie%20%28unit%29
The curie (symbol Ci) is a non-SI unit of radioactivity originally defined in 1910. According to a notice in Nature at the time, it was to be named in honour of Pierre Curie, but was considered at least by some to be in honour of Marie Curie as well, and is in later literature considered to be named for both. It was originally defined as "the quantity or mass of radium emanation in equilibrium with one gram of radium (element)", but is currently defined as 1 Ci = 3.7 × 10¹⁰ decays per second after more accurate measurements of the activity of ²²⁶Ra (which has a specific activity of approximately 3.7 × 10¹⁰ Bq/g). In 1975 the General Conference on Weights and Measures gave the becquerel (Bq), defined as one nuclear decay per second, official status as the SI unit of activity. Therefore: 1 Ci = 3.7 × 10¹⁰ Bq = 37 GBq and 1 Bq ≅ 2.7 × 10⁻¹¹ Ci ≅ 27 pCi. While its continued use is discouraged by the National Institute of Standards and Technology (NIST) and other bodies, the curie is still widely used throughout government, industry and medicine in the United States and in other countries. At the 1910 meeting, which originally defined the curie, it was proposed to make it equivalent to 10 nanograms of radium (a practical amount). But Marie Curie, after initially accepting this, changed her mind and insisted on one gram of radium. According to Bertram Boltwood, Marie Curie thought that "the use of the name 'curie' for so infinitesimally small [a] quantity of anything was altogether inappropriate". The power emitted in radioactive decay corresponding to one curie can be calculated by multiplying the decay energy by approximately 5.93 mW / MeV. A radiotherapy machine may have roughly 1000 Ci of a radioisotope such as caesium-137 or cobalt-60. This quantity of radioactivity can produce serious health effects with only a few minutes of close-range, unshielded exposure. Radioactive decay can lead to the emission of particulate radiation or electromagnetic radiation. Ingesting even small quantities of some particulate-emitting radionuclides may be fatal. For example, the median lethal dose (LD-50) for ingested polonium-210 is 240 μCi, about 53.5 nanograms. The typical human body contains roughly 0.1 μCi (14 mg) of naturally occurring potassium-40. The carbon in a typical human body (see Composition of the human body) would also contribute about 24 nanograms or 0.1 μCi of carbon-14. Together, these would result in a total of approximately 0.2 μCi or 7400 decays per second inside the person's body (mostly from beta decay but some from gamma decay). As a measure of quantity Units of activity (the curie and the becquerel) also refer to a quantity of radioactive atoms. Because the probability of decay is a fixed physical quantity, for a known number of atoms of a particular radionuclide, a predictable number will decay in a given time. The number of decays that will occur in one second in one gram of atoms of a particular radionuclide is known as the specific activity of that radionuclide. The activity of a sample decreases with time because of decay. The rules of radioactive decay may be used to convert activity to an actual number of atoms. They state that 1 Ci of radioactive atoms would follow the expression N (atoms) × λ (s⁻¹) = 1 Ci = 3.7 × 10¹⁰ Bq, and so N = 3.7 × 10¹⁰ Bq / λ, where λ is the decay constant in s⁻¹.
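The relation N = A / λ at the end of the passage above can be evaluated directly once a half-life is known, since λ = ln 2 / t½. The following is a minimal sketch using approximate, assumed half-lives for two common radionuclides purely as an illustration.

```python
from math import log

CI_IN_BQ = 3.7e10              # 1 curie expressed in becquerels
YEAR_S, DAY_S = 3.156e7, 8.64e4

def atoms_for_activity(activity_bq, half_life_s):
    """N = A / lambda, with lambda = ln 2 / t_half (law of radioactive decay)."""
    decay_constant = log(2) / half_life_s
    return activity_bq / decay_constant

# Approximate half-lives assumed for the example: cobalt-60 ~5.27 years, iodine-131 ~8.0 days
print(f"Atoms in 1 Ci of Co-60: {atoms_for_activity(CI_IN_BQ, 5.27 * YEAR_S):.2e}")  # ~8.9e18
print(f"Atoms in 1 Ci of I-131: {atoms_for_activity(CI_IN_BQ, 8.0 * DAY_S):.2e}")    # ~3.7e16
```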
Here are some examples, ordered by half-life: Radiation related quantities The following table shows radiation quantities in SI and non-SI units: See also Geiger counter Ionizing radiation Radiation burn Radiation exposure Radiation poisoning United Nations Scientific Committee on the Effects of Atomic Radiation References Non-SI metric units Radioactivity Units of radioactivity Pierre Curie Radium
Curie (unit)
[ "Physics", "Chemistry", "Mathematics" ]
832
[ "Non-SI metric units", "Quantity", "Units of radioactivity", "Radioactivity", "Nuclear physics", "Units of measurement" ]
155,835
https://en.wikipedia.org/wiki/Becquerel
The becquerel (symbol: Bq) is the unit of radioactivity in the International System of Units (SI). One becquerel is defined as an activity of one decay per second, on average, for aperiodic activity events referred to a radionuclide. For applications relating to human health this is a small quantity, and SI multiples of the unit are commonly used. The becquerel is named after Henri Becquerel, who shared a Nobel Prize in Physics with Pierre and Marie Curie in 1903 for their work in discovering radioactivity. Definition 1 Bq = 1 s⁻¹ A special name was introduced for the reciprocal second (s⁻¹) to represent radioactivity to avoid potentially dangerous mistakes with prefixes. For example, 1 μs⁻¹ would mean 10⁶ disintegrations per second (1·(10⁻⁶ s)⁻¹ = 10⁶ s⁻¹), whereas 1 μBq would mean 1 disintegration per 1 million seconds. Other names considered were hertz (Hz), a special name already in use for the reciprocal second (for periodic events of any kind), and fourier (Fr; after Joseph Fourier). The hertz is now only used for periodic phenomena. While 1 Hz replaces the deprecated term cycle per second, 1 Bq refers to one event per second on average for aperiodic radioactive decays. The gray (Gy) and the becquerel (Bq) were introduced in 1975. Between 1953 and 1975, absorbed dose was often measured with the rad. Decay activity was given with the curie before 1946 and often with the rutherford between 1946 and 1975. Unit capitalization and prefixes As with every International System of Units (SI) unit named after a person, the first letter of its symbol is uppercase (Bq). However, when an SI unit is spelled out in English, it should always begin with a lowercase letter (becquerel)—except in a situation where any word in that position would be capitalized, such as at the beginning of a sentence or in material using title case. Like any SI unit, Bq can be prefixed; commonly used multiples are kBq (kilobecquerel, 10³ Bq), MBq (megabecquerel, 10⁶ Bq, equivalent to 1 rutherford), GBq (gigabecquerel, 10⁹ Bq), TBq (terabecquerel, 10¹² Bq), and PBq (petabecquerel, 10¹⁵ Bq). Large prefixes are common for practical uses of the unit. Examples For practical applications, 1 Bq is a small unit. For example, there is roughly 0.017 g of potassium-40 in a typical human body, producing about 4,400 decays per second (Bq). The activity of radioactive americium in a home smoke detector is about 37 kBq (1 μCi). The global inventory of carbon-14 is estimated to be 8.5 × 10¹⁸ Bq (8.5 EBq, 8.5 exabecquerel). These examples are useful for comparing the amount of activity of these radioactive materials, but should not be confused with the amount of exposure to ionizing radiation that these materials represent. The level of exposure and thus the absorbed dose received are what should be considered when assessing the effects of ionizing radiation on humans. Relation to the curie The becquerel succeeded the curie (Ci), an older, non-SI unit of radioactivity based on the activity of 1 gram of radium-226. The curie is defined as 3.7 × 10¹⁰ Bq, or 37 GBq. Conversion factors: 1 Ci = 3.7 × 10¹⁰ Bq = 37 GBq; 1 μCi = 3.7 × 10⁴ Bq = 37 kBq; 1 Bq ≅ 2.7 × 10⁻¹¹ Ci ≅ 27 pCi; 1 MBq = 0.027 mCi. Relation to other radiation-related quantities The following table shows radiation quantities in SI and non-SI units. WR (formerly the 'Q' factor) is a factor that scales the biological effect for different types of radiation, relative to x-rays (e.g. 1 for beta radiation, 20 for alpha radiation, and a complicated function of energy for neutrons). 
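As a rough cross-check of the potassium-40 example above, activity can be computed as A = λN from a sample mass. The half-life (about 1.25 × 10⁹ years) and molar mass (about 40 g/mol) used below are assumed approximate values for the sketch, not authoritative nuclear data.

```python
from math import log

AVOGADRO = 6.022e23
YEAR_S = 3.156e7

def activity_bq(mass_g, molar_mass_g_per_mol, half_life_s):
    """A = lambda * N: decay constant times the number of atoms in the sample."""
    n_atoms = mass_g / molar_mass_g_per_mol * AVOGADRO
    return log(2) / half_life_s * n_atoms

# Assumed approximate data for potassium-40: half-life ~1.25e9 years, molar mass ~40 g/mol
k40 = activity_bq(0.017, 40.0, 1.25e9 * YEAR_S)
print(f"0.017 g of K-40 ≈ {k40:.0f} Bq")  # on the order of the ~4,400 Bq quoted above
```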
In general, conversion between rates of emission, the density of radiation, the fraction absorbed, and the biological effects, requires knowledge of the geometry between source and target, the energy and the type of the radiation emitted, among other factors. See also Background radiation Banana equivalent dose Counts per minute Ionizing radiation Orders of magnitude (radiation) Radiation poisoning Relative biological effectiveness References External links Derived units on the International Bureau of Weights and Measures (BIPM) web site SI derived units Units of radioactivity Units of frequency
Becquerel
[ "Chemistry", "Mathematics" ]
939
[ "Quantity", "Units of radioactivity", "Radioactivity", "Units of frequency", "Units of measurement" ]
156,267
https://en.wikipedia.org/wiki/Cosplay
Cosplay, a blend word of "costume play", is an activity and performance art in which participants called cosplayers wear costumes and fashion accessories to represent a specific character. Cosplayers often interact to create a subculture, and a broader use of the term "cosplay" applies to any costumed role-playing in venues apart from the stage. Any entity that lends itself to dramatic interpretation may be taken up as a subject. Favorite sources include anime, cartoons, comic books, manga, television series, rock music performances, video games and in some cases, original characters. The term has been adopted as slang, often in politics, to mean someone pretending to play a role or take on a personality disingenuously. Cosplay grew out of the practice of fan costuming at science fiction conventions, beginning with Morojo's "futuristicostumes" created for the 1st World Science Fiction Convention held in New York City in 1939. The Japanese term was coined in 1984. A rapid growth in the number of people cosplaying as a hobby since the 1990s has made the phenomenon a significant aspect of popular culture in Japan, as well as in other parts of East Asia and in the Western world. Cosplay events are common features of fan conventions, and today there are many dedicated conventions and competitions, as well as social networks, websites, and other forms of media centered on cosplay activities. Cosplay is very popular among all genders, and it is not unusual to see crossplay, also referred to as gender-bending. Etymology The term "cosplay" is a Japanese blend word of the English terms costume and play. The term was coined by of Studio Hard after he attended the 1984 World Science Fiction Convention (Worldcon) in Los Angeles and saw costumed fans, which he later wrote about in an article for the Japanese magazine . Takahashi decided to coin a new word rather than use the existing translation of the English term "masquerade" because that translates into Japanese as "an aristocratic costume party", which did not match his experience of the Worldcon. The coinage reflects a common Japanese method of abbreviation in which the first two moras of a pair of words are used to form an independent compound: 'costume' becomes kosu (コス) and 'play' becomes pure (プレ). History Pre-20th century Masquerade balls were a feature of the Carnival season in the 15th century, and involved increasingly elaborate allegorical Royal Entries, pageants, and triumphal processions celebrating marriages and other dynastic events of late medieval court life. They were extended into costumed public festivities in Italy during the 16th century Renaissance, generally elaborate dances held for members of the upper classes, which were particularly popular in Venice. In April 1877, Jules Verne sent out almost 700 invitations for an elaborate costume ball, where several of the guests showed up dressed as characters from Verne's novels. Costume parties (American English) or fancy dress parties (British English) were popular from the 19th century onwards. Costuming guides of the period, such as Samuel Miller's Male Character Costumes (1884) or Ardern Holt's Fancy Dresses Described (1887), feature mostly generic costumes, whether that be period costumes, national costumes, objects or abstract concepts such as "Autumn" or "Night". Most specific costumes described therein are for historical figures although some are sourced from fiction, like The Three Musketeers or Shakespeare characters. 
By March 1891, a literal call by one Herbert Tibbits for what would today be described as "cosplayers" was advertised for an event held from 5–10 March that year at the Royal Albert Hall in London, for the so-named Vril-Ya Bazaar and Fete based on a science fiction novel and its characters, published two decades earlier. Fan costuming A.D. Condo's science fiction comic strip character Mr. Skygack, from Mars (a Martian ethnographer who comically misunderstands many Earthly affairs) is arguably the first fictional character that people emulated by wearing costumes, as in 1908 Mr. and Mrs. William Fell of Cincinnati, Ohio, are reported to have attended a masquerade at a skating rink wearing Mr. Skygack and Miss Dillpickles costumes. Later, in 1910, an unnamed woman won first prize at masquerade ball in Tacoma, Washington, wearing another Skygack costume. The first people to wear costumes to attend a convention were science fiction fans Forrest J Ackerman and Myrtle R. Douglas, known in fandom as Morojo. They attended the 1939 1st World Science Fiction Convention (Nycon or 1st Worldcon) in the Caravan Hall, New York, US dressed in "futuristicostumes", including green cape and breeches, based on the pulp magazine artwork of Frank R. Paul and the 1936 film Things to Come, designed and created by Douglas. Ackerman later stated that he thought everyone was supposed to wear a costume at a science fiction convention, although only he and Douglas did. Fan costuming caught on, however, and the 2nd Worldcon (1940) had both an unofficial masquerade held in Douglas' room and an official masquerade as part of the programme. David Kyle won the masquerade wearing a Ming the Merciless costume created by Leslie Perri, while Robert A. W. Lowndes received second place with a Bar Senestro costume (from the novel The Blind Spot by Austin Hall and Homer Eon Flint). Other costumed attendees included guest of honor E. E. Smith as Northwest Smith (from C. L. Moore's series of short stories) and both Ackerman and Douglas wearing their futuristicostumes again. Masquerades and costume balls continued to be part of World Science Fiction Convention tradition thereafter. Early Worldcon masquerade balls featured a band, dancing, food and drinks. Contestants either walked across a stage or a cleared area of the dance floor. Ackerman wore a "Hunchbackerman of Notre Dame" costume to the 3rd Worldcon (1941), which included a mask designed and created by Ray Harryhausen, but soon stopped wearing costumes to conventions. Douglas wore an Akka costume (from A. Merritt's novel The Moon Pool), the mask again made by Harryhausen, to the 3rd Worldcon and a Snake Mother costume (another Merritt costume, from The Snake Mother) to the 4th Worldcon (1946). Terminology was yet unsettled; the 1944 edition of Jack Speer's Fancyclopedia used the term costume party. Rules governing costumes became established in response to specific costumes and costuming trends. The first nude contestant at a Worldcon masquerade was in 1952; but the height of this trend was in the 1970s and early 1980s, with a few every year. This eventually led to "No Costume is No Costume" rule, which banned full nudity, although partial nudity was still allowed as long as it was a legitimate representation of the character. Mike Resnick describes the best of the nude costumes as Kris Lundi wearing a harpy costume to the 32nd Worldcon (1974) (she received an honorable mention in the competition). 
Another costume that instigated a rule change was an attendee at the 20th Worldcon (1962) whose blaster prop fired a jet of real flame; which led to fire being banned. At the 30th WorldCon (1972), artist Scott Shaw wore a costume composed largely of peanut butter to represent his own underground comix character called "The Turd". The peanut butter rubbed off, doing damage to soft furnishings and other peoples' costumes, and then began to go rancid under the heat of the lighting. Food, odious, and messy substances were banned as costume elements after that event. Costuming spread with the science fiction conventions and the interaction of fandom. The earliest known instance of costuming at a convention in the United Kingdom was at the London Science Fiction Convention (1953) but this was only as part of a play. However, members of the Liverpool Science Fantasy Society attended the 1st Cytricon (1955), in Kettering, wearing costumes and continued to do so in subsequent years. The 15th Worldcon (1957) brought the first official convention masquerade to the UK. The 1960 Eastercon in London may have been the first British-based convention to hold an official fancy dress party as part of its programme. The joint winners were Ethel Lindsay and Ina Shorrock as two of the titular witches from the novel The Witches of Karres by James H. Schmitz. Star Trek conventions began in 1969 and major conventions began in 1972 and they have featured cosplay throughout. In Japan, costuming at conventions was a fan activity from at least the 1970s, especially after the launch of the Comiket convention in December 1975. Costuming at this time was known as . The first documented case of costuming at a fan event in Japan was at Ashinocon (1978), in Hakone, at which future science fiction critic Mari Kotani wore a costume based on the cover art for Edgar Rice Burroughs' novel A Fighting Man of Mars. In an interview Kotani states that there were about twenty costumed attendees at the convention's costume party—made up of members of her Triton of the Sea fan club and , antecedent of the Gainax anime studio—with most attendees in ordinary clothing. One of the Kansai group, an unnamed friend of Yasuhiro Takeda, wore an impromptu Tusken Raider costume (from the film Star Wars) made from one of the host-hotel's rolls of toilet paper. Costume contests became a permanent part of the Nihon SF Taikai conventions from Tokon VII in 1980. Possibly the first costume contest held at a comic book convention was at the 1st Academy Con held at Broadway Central Hotel in New York in August 1965. Roy Thomas, future editor-in-chief of Marvel Comics but then just transitioning from a fanzine editor to a professional comic book writer, attended in a Plastic Man costume. The first Masquerade Ball held at San Diego Comic-Con was in 1974 during the convention's 6th event. Voice actress June Foray was the master of ceremonies. Future scream queen Brinke Stevens won first place wearing a Vampirella costume. Ackerman (who was the creator of Vampirella) was in attendance and posed with Stevens for photographs. They became friends and, according to Stevens "Forry and his wife, Wendayne, soon became like my god parents." Photographer Dan Golden saw a photograph of Stevens in the Vampirella costume while visiting Ackerman's house, leading to him hiring her for a non-speaking role in her first student film, Zyzak is King (1980), and later photographing her for the cover of the first issue of Femme Fatales (1992). 
Stevens attributes these events to launching her acting career. As early as a year after the 1975 release of The Rocky Horror Picture Show, audience members began dressing as characters from the movie and role-playing (although the initial incentive for dressing-up was free admission) in often highly accurate costumes. Costume-Con, a conference dedicated to costuming, was first held in January 1983. The International Costumers Guild, Inc., originally known as the Greater Columbia Fantasy Costumer's Guild, was launched after the 3rd Costume-Con (1985) as a parent organization and to support costuming. Cosplay Costuming had been a fan activity in Japan from the 1970s, and it became much more popular in the wake of Takahashi's report. The new term did not catch on immediately, however. It was a year or two after the article was published before it was in common use among fans at conventions. It was in the 1990s, after exposure on television and in magazines, that the term and practice of cosplaying became common knowledge in Japan. The first cosplay cafés appeared in the Akihabara area of Tokyo in the late 1990s. A temporary maid café was set up at the Tokyo Character Collection event in August 1998 to promote the video game Welcome to Pia Carrot 2 (1997). An occasional Pia Carrot Restaurant was held at the shop Gamers in Akihabara in the years up to 2000. Being linked to specific intellectual properties limited the lifespan of these cafés, which was solved by using generic maids, leading to the first permanent establishment, Cure Maid Café, which opened in March 2001. The first World Cosplay Summit was held on 12 October 2003 at the Rose Court Hotel in Nagoya, Japan, with five cosplayers invited from Germany, France and Italy. There was no contest until 2005, when the World Cosplay Championship began. The first winners were the Italian team of , Francesca Dani and Emilia Fata Livia. Worldcon masquerade attendance peaked in the 1980s and started to fall thereafter. This trend was reversed when the concept of cosplay was re-imported from Japan. Practice of cosplay Cosplay costumes vary greatly and can range from simple themed clothing to highly detailed costumes. It is generally considered different from Halloween and Mardi Gras costume wear, as the intention is to replicate a specific character, rather than to reflect the culture and symbolism of a holiday event. As such, when in costume, some cosplayers often seek to adopt the affect, mannerisms, and body language of the characters they portray (with "out of character" breaks). The characters chosen to be cosplayed may be sourced from any movie, TV series, book, comic book, video game, music band, anime, or manga. Some cosplayers even choose to cosplay an original character of their own design or a fusion of different genres (e.g., a steampunk version of a character), and it is a part of the ethos of cosplay that anybody can be anything, as with genderbending, crossplay, or drag, a cosplayer playing a character of another ethnicity, or a hijabi portraying Captain America. Costumes Cosplayers obtain their apparel through many different methods. Manufacturers produce and sell packaged outfits for use in cosplay, with varying levels of quality. These costumes are often sold online, but also can be purchased from dealers at conventions. Japanese manufacturers of cosplay costumes reported a profit of 35 billion yen in 2008. 
A number of individuals also work on commission, creating custom costumes, props, or wigs designed and fitted to the individual. Other cosplayers, who prefer to create their own costumes, still provide a market for individual elements, and various raw materials, such as unstyled wigs, hair dye, cloth and sewing notions, liquid latex, body paint, costume jewelry, and prop weapons. Cosplay represents an act of embodiment. Cosplay has been closely linked to the presentation of self, yet cosplayers' ability to perform is limited by their physical features. The accuracy of a cosplay is judged based on the ability to accurately represent a character through the body, and individual cosplayers frequently are faced by their own "bodily limits" such as level of attractiveness, body size, and disability that often restrict and confine how accurate the cosplay is perceived to be. Authenticity is measured by a cosplayer's individual ability to translate on-screen manifestation to the cosplay itself. Some have argued that cosplay can never be a true representation of the character; instead, it can only be read through the body, and that true embodiment of a character is judged based on nearness to the original character form. Cosplaying can also help some of those with self-esteem problems. Many cosplayers create their own outfits, referencing images of the characters in the process. In the creation of the outfits, much time is given to detail and qualities, thus the skill of a cosplayer may be measured by how difficult the details of the outfit are and how well they have been replicated. Because of the difficulty of replicating some details and materials, cosplayers often educate themselves in crafting specialties such as textiles, sculpture, face paint, fiberglass, fashion design, woodworking, and other uses of materials in the effort to render the look and texture of a costume accurately. Cosplayers often wear wigs in conjunction with their outfit to further improve the resemblance to the character. This is especially necessary for anime and manga or video-game characters who often have unnaturally colored and uniquely styled hair. Simpler outfits may be compensated for their lack of complexity by paying attention to material choice and overall high quality. To look more like the characters they are portraying, cosplayers might also engage in various forms of body modification. Cosplayers may opt to change their skin color utilizing make-up to more simulate the race of the character they are adopting. Contact lenses that match the color of their character's eyes are a common form of this, especially in the case of characters with particularly unique eyes as part of their trademark look. Contact lenses that make the pupil look enlarged to visually echo the large eyes of anime and manga characters are also used. Another form of body modification in which cosplayers engage is to copy any tattoos or special markings their character might have. Temporary tattoos, permanent marker, body paint, and in rare cases, permanent tattoos, are all methods used by cosplayers to achieve the desired look. Permanent and temporary hair dye, spray-in hair coloring, and specialized extreme styling products are all used by some cosplayers whose natural hair can achieve the desired hairstyle. It is also commonplace for them to shave off their eyebrows to gain a more accurate look. 
Some anime and video game characters have weapons or other accessories that are hard to replicate, and conventions have strict rules regarding those weapons, but most cosplayers engage in some combination of methods to obtain all the items necessary for their costumes; for example, they may commission a prop weapon, sew their own clothing, buy character jewelry from a cosplay accessory manufacturer, or buy a pair of off-the-rack shoes, and modify them to match the desired look. Presentation Cosplay may be presented in a number of ways and places. A subset of cosplay culture is centered on sex appeal, with cosplayers specifically choosing characters known for their attractiveness or revealing costumes. However, wearing a revealing costume can be a sensitive issue while appearing in public. People appearing naked at American science fiction fandom conventions during the 1970s were so common, a "no costume is no costume" rule was introduced. Some conventions throughout the United States, such as Phoenix Comicon (now known as Phoenix Fan Fusion) and Penny Arcade Expo, have also issued rules upon which they reserve the right to ask attendees to leave or change their costumes if deemed to be inappropriate to a family-friendly environment or something of a similar nature. Conventions The most popular form of presenting a cosplay publicly is by wearing it to a fan convention. Multiple conventions dedicated to anime and manga, comics, TV shows, video games, science fiction, and fantasy may be found all around the world. Cosplay-centered conventions include Cosplay Mania in the Philippines and EOY Cosplay Festival in Singapore. The single largest event featuring cosplay is the semiannual doujinshi market, Comic Market (Comiket), held in Japan during summer and winter. Comiket attracts hundreds of thousands of manga and anime fans, where thousands of cosplayers congregate on the roof of the exhibition center. In North America, the highest-attended fan conventions featuring cosplayers are San Diego Comic-Con and New York Comic Con held in the United States, and the anime-specific Anime North in Toronto, Otakon held in Washington, D.C. and Anime Expo held in Los Angeles. Europe's largest event is Japan Expo held in Paris, while the London MCM Expo and the London Super Comic Convention are the most notable in the UK. Supanova Pop Culture Expo is Australia's biggest event. Star Trek conventions have featured cosplay for many decades. These include Destination Star Trek, a UK convention, and Star Trek Las Vegas, a US convention. In different comic fairs, "Thematic Areas" are set up where cosplayers can take photos in an environment that follows that of the game or animation product from which they are taken. Sometimes the cosplayers are part of the area, playing the role of staff with the task of entertaining the other visitors. Some examples are the thematic areas dedicated to Star Wars or to Fallout. The areas are set up by not for profit associations of fans, but in some major fairs it is possible to visit areas set up directly by the developers of the video games or the producers of the anime. Photography The appearance of cosplayers at public events makes them a popular draw for photographers. As this became apparent in the late 1980s, a new variant of cosplay developed in which cosplayers attended events mainly for the purpose of modeling their characters for still photography rather than engaging in continuous role play. 
Rules of etiquette were developed to minimize awkward situations involving boundaries. Cosplayers pose for photographers and photographers do not press them for personal contact information or private sessions, follow them out of the area, or take photos without permission. The rules allow the collaborative relationship between photographers and cosplayers to continue with the least inconvenience to each other. Some cosplayers choose to have a professional photographer take high quality images of them in their costumes posing as the character. Cosplayers and photographers frequently exhibit their work online and sometimes sell their images. Competitions As the popularity of cosplay has grown, many conventions have come to feature a contest surrounding cosplay that may be the main feature of the convention. Contestants present their cosplay, and often to be judged for an award, the cosplay must be self-made. The contestants may choose to perform a skit, which may consist of a short performed script or dance with optional accompanying audio, video, or images shown on a screen overhead. Other contestants may simply choose to pose as their characters. Often, contestants are briefly interviewed on stage by a master of ceremonies. The audience is given a chance to take photos of the cosplayers. Cosplayers may compete solo or in a group. Awards are presented, and these awards may vary greatly. Generally, a best cosplayer award, a best group award, and runner-up prizes are given. Awards may also go to the best skit and a number of cosplay skill subcategories, such as master tailor, master weapon-maker, master armorer, and so forth. The most well-known cosplay contest event is the World Cosplay Summit, selecting cosplayers from 40 countries to compete in the final round in Nagoya, Japan. Some other international events include European Cosplay Gathering (finals taking place at Japan Expo in Paris), EuroCosplay (finals taking place at London MCM Comic Con), and the Nordic Cosplay Championship (finals taking place at NärCon in Linköping, Sweden). Common cosplay judging criteria This table contains a list of the most common cosplay competition judging criteria, as seen from World Cosplay Summit, Cyprus Comic Con, and ReplayFX. Gender issues Portraying a character of the opposite sex is called crossplay. The practicality of crossplay and cross-dress stems in part from the abundance in manga of male characters with delicate and somewhat androgynous features. Such characters, known as (lit. "pretty boy"), are Asian equivalent of the elfin boy archetype represented in Western tradition by figures such as Peter Pan and Ariel. Male to female cosplayers may experience issues when trying to portray a female character because it is hard to maintain the sexualized femininity of a character. Male cosplayers may also be subjected to discrimination, including homophobic comments and being touched without permission. This affects men possibly even more often than it affects women, despite inappropriate contact already being a problem for women who cosplay, as is "slut-shaming". Animegao kigurumi players, a niche group in the realm of cosplay, are often male cosplayers who use zentai and stylized masks to represent female anime characters. These cosplayers completely hide their real features so the original appearance of their characters may be reproduced as literally as possible, and to display all the abstractions and stylizations such as oversized eyes and tiny mouths often seen in Japanese cartoon art. 
This does not mean that only males perform animegao or that masks are only female. Harassment issues "Cosplay Is Not Consent", a movement started in 2013 by Rochelle Keyhan, Erin Filson, and Anna Kegler, brought attention to the issue of sexual harassment in the convention attending cosplay community. Harassment of cosplayers include photography without permission, verbal abuse, touching, and groping. Harassment is not limited to women in provocative outfits as male cosplayers talked about being bullied for not fitting certain costume and characters. Starting in 2014, New York Comic Con placed large signs at the entrance stating that "Cosplay is Not Consent". Attendees were reminded to ask permission for photos and respect the person's right to say no. The movement against sexual harassment against cosplayers has continued to gain momentum and awareness since being publicized. Traditional mainstream news media like The Mercury News and Los Angeles Times have reported on the topic, bringing awareness of sexual harassment to those outside of the cosplay community. Ethnicity issues As cosplay has entered more mainstream media, ethnicity becomes a controversial point. Cosplayers of different skin color than the character are often ridiculed for not being 'accurate' or 'faithful'. Many cosplayers feel as if anyone can cosplay any character, but it becomes complicated when cosplayers are not respectful of the character's ethnicity. These views against non-white cosplayers within the community have been attributed to the lack of representation in the industry and in media. Issues such as blackface, brownface, and yellowface are still controversial since a large part of the cosplay community see these as separate problems, or simply an acceptable part of cosplay. Cosplay models Cosplay has influenced the advertising industry, in which cosplayers are often used for event work previously assigned to agency models. Some cosplayers have thus transformed their hobby into profitable, professional careers. Japan's entertainment industry has been home to the professional cosplayers since the rise of Comiket and Tokyo Game Show. The phenomenon is most apparent in Japan but exists to some degree in other countries as well. Professional cosplayers who profit from their art may experience problems related to copyright infringement. A cosplay model, also known as a cosplay idol, cosplays costumes for anime and manga or video game companies. Good cosplayers are viewed as fictional characters in the flesh, in much the same way that film actors come to be identified in the public mind with specific roles. Cosplayers have modeled for print magazines like Cosmode and a successful cosplay model can become the brand ambassador for companies like Cospa. Some cosplay models can achieve significant recognition. While there are many significant cosplay models, Yaya Han was described as having emerged "as a well-recognized figure both within and outside cosplay circuits". Jessica Nigri, used her recognition in cosplay to gain other opportunities such as voice acting and her own documentary on Rooster Teeth. Liz Katz used her fanbase to take her cosplay from a hobby to a successful business venture, sparking debate through the cosplay community whether cosplayers should be allowed to fund and profit from their work. In the 2000s, cosplayers started to push the boundaries of cosplay into eroticism paving the way to "erocosplay". 
The advent of social media coupled with crowdfuding platforms like Patreon and OnlyFans have allowed cosplay models to turn cosplay into profitable full-time careers. Cosplay by country or region Cosplay in Japan Cosplayers in Japan used to refer to themselves as , pronounced "layer". Currently in Japan, cosplayers are more commonly called , pronounced "ko-su-pray", as reiyā is more often used to describe layers (i.e. hair, clothes, etc.). Words like cute (kawaii (可愛い)) and cool (kakko ī (かっこ いい)) were often used to describe these changes, expressions that were tied with notions of femininity and masculinity. Those who photograph players are called cameko, short for camera kozō or camera boy. Originally, the cameko gave prints of their photos to players as gifts. Increased interest in cosplay events, both on the part of photographers and cosplayers willing to model for them, has led to formalization of procedures at events such as Comiket. Photography takes place within a designated area removed from the exhibit hall. In Japan, costumes are generally not welcome outside of conventions or other designated areas. Since 1998, Tokyo's Akihabara district contains a number of cosplay restaurants, catering to devoted anime and cosplay fans, where the waitresses at such cafés dress as video game or anime characters; maid cafés are particularly popular. In Japan, Tokyo's Harajuku district is the favorite informal gathering place to engage in cosplay in public. Events in Akihabara also draw many cosplayers. is a form of Japanese cosplay where the players use body paint to make their skin color match that of the character they are playing. This allows them to represent anime or video game characters with non-human skin colors. A 2014 survey for the Comic Market convention in Japan noted that approximately 75% of cosplayers attending the event are female. Cosplay in other Asian countries Cosplay is common in many East Asian countries. For example, it is a major part of the Comic World conventions taking place regularly in South Korea, Hong Kong and Taiwan. Historically, the practice of dressing up as characters from works of fiction can be traced as far as the 17th century late Ming dynasty China. Cosplay in Western countries Western cosplay's origins are based primarily in science fiction and fantasy fandoms. It is also more common for Western cosplayers to recreate characters from live-action series than it is for Japanese cosplayers. Western costumers also include subcultures of hobbyists who participate in Renaissance faires, live action role-playing games, and historical reenactments. Competition at science fiction conventions typically include the masquerade (where costumes are presented on stage and judged formally) and hall costumes The increasing popularity of Japanese animation outside of Asia during the late 2000s led to an increase in American and other Western cosplayers who portray manga and anime characters. Anime conventions have become more numerous in the West in the previous decade, now competing with science fiction, comic book and historical conferences in attendance. At these gatherings, cosplayers, like their Japanese counterparts, meet to show off their work, be photographed, and compete in costume contests. Convention attendees also just as often dress up as Western comic book or animated characters, or as characters from movies and video games. 
Differences in taste still exist across cultures: some costumes that are worn without hesitation by Japanese cosplayers tend to be avoided by Western cosplayers, such as outfits that evoke Nazi uniforms. Some Western cosplayers have also encountered questions of legitimacy when playing characters of canonically different racial backgrounds, and people can be insensitive to cosplayers playing as characters who are canonically of other skin color. Western cosplayers of anime characters may also be subjected to particular mockery. In contrast to Japan, the wearing of costumes in public is more accepted in the UK, Ireland, US, Canada and other western countries. These countries have a longer tradition of Halloween costumes, fan costuming and other such activities. As a result, for example, costumed convention attendees can often be seen at local restaurants and eateries, beyond the boundaries of the convention or event. Media Magazines and books Japan is home to two especially popular cosplay magazines, Cosmode (コスモード) and ASCII Media Works' Dengeki Layers (電撃Layers). Cosmode has the largest share in the market and an English-language digital edition. Another magazine, aimed at a broader, worldwide audience is CosplayGen. In the United States, Cosplay Culture began publication in February 2015. Other magazines include CosplayZine featuring cosplayers from all over the world since October 2015, and Cosplay Realm Magazine which was started in April 2017. There are many books on the subject of cosplay as well. Documentaries and reality shows Cosplay Encyclopedia, a 1996 film about Japanese cosplay released by Japan Media Supply. It was released in subtitled VHS by Anime Works in 1999, eventually being released onto DVD in 2002. Otaku Unite!, a 2004 film about otaku subculture, features extensive footage of cosplayers. Akihabara Geeks, a 2005 Japanese short film. Animania: The Documentary is a 2007 film that explores the cosplay cultural phenomenon in North America, following four cosplayers from various ethnicities as they prepare to compete at Anime North, Canada's largest anime convention. Conventional Dress is a short documentary about cosplay at Dragon Con made by Celia Pearce and her students in 2008. Cosplayers: The Movie, released in 2009 by Martell Brothers Studios for free viewing on YouTube and Crunchyroll, explores the anime subculture in North America with footage from anime conventions and interviews with fans, voice actors and artists. "I'm a Fanboy", a 2009 episode of the MTV series True Life, focusing on fandom and cosplay. Fanboy Confessional, a 2011 Space Channel series that featured an episode on cosplay and cosplayers from the perspective of an insider. Comic-Con Episode IV: A Fan's Hope, a 2011 film about four attendees of San Diego Comic-Con, including a cosplayer. America's Greatest Otaku, a 2011 TV series where contenders included cosplayers. Cosplayers UK: The Movie, a 2011 film following a small selection of cosplayers at the London MCM Expo. My Other Me: A Film About Cosplayers, chronicling a year in the life of three different cosplayers: a veteran cosplayer who launched a career from cosplay, a young 14-year-old first-timer, and a transgender man who found himself through cosplay. It was released in 2013 and was a featured segment on The Electric Playground. Heroes of Cosplay, a reality show on cosplay that premiered in 2013 on the Syfy network. It follows nine cosplayers as they create their costumes, travel to conventions and compete in contests. 
"24 Hours With A Comic Con Character", a segment from CNNMoney following around a known cosplayer while she prepared for and attended New York Comic Con. WTF is Cosplay?, a reality show that premiered in 2015 on the Channel 4 network. It follows six cosplayers throughout their day-to-day lives and what cosplay means to them. Call to Cosplay, a competition reality show that premiered in 2014 on Myx TV. It is a cosplay design competition show where contestants were tasked to create a costumes based on theme and time constraints. Cosplay Melee, a competition reality show on cosplay that premiered in 2017 on the Syfy network. Cosplay Culture, a 90 minutes documentary that follows cosplayers during preparation and conventions in Canada, Japan and Romania. Includes a visit of Akihabara (Japan), a geek Mardi Gras parade in New Orleans and a historic overview explaining the origin of cosplay. Other media Cosplay Complex, a 2002 anime miniseries. Downtown no Gaki no Tsukai ya Arahende!!, a Japanese TV variety show that includes the Cosplay Bus Tour series segment. Super Cosplay War Ultra, a 2004 freeware fighting game. A large number of erotic and pornographic films featuring cosplaying actresses; many of such films come from the Japanese company TMA. Cosplay groups and organizations 501st Legion Rebel Legion See also Anime and manga fandom Costume party Costumed character Escapism Fan labor Furry fandom Halloween costume Iga Ueno Ninja Festa Japanese pop culture in the United States Japanese street fashion List of cosplayers Lolita fashion Look-alike Quadrobers Real-life superhero Sexual roleplay Uniform fetishism Zombie walk Notes References External links International Cosplay Day 1984 neologisms Anime and manga terminology Fandom Costume design Japanese subcultures Japanese youth culture Nerd culture Otaku Video game culture
Cosplay
[ "Engineering" ]
7,730
[ "Costume design", "Design" ]
156,428
https://en.wikipedia.org/wiki/Microtechnology
Microtechnology is technology whose features have dimensions of the order of one micrometre (one millionth of a metre, or 10⁻⁶ m, or 1 μm). It focuses on physical and chemical processes as well as the production or manipulation of structures on the one-micrometre scale. Development Around 1970, scientists learned that by arraying large numbers of microscopic transistors on a single chip, microelectronic circuits could be built that dramatically improved performance, functionality, and reliability, all while reducing cost and increasing volume. This development led to the Information Revolution. More recently, scientists have learned that not only electrical devices, but also mechanical devices, may be miniaturized and batch-fabricated, promising the same benefits to the mechanical world as integrated circuit technology has given to the electrical world. While electronics now provide the ‘brains’ for today's advanced systems and products, micro-mechanical devices can provide the sensors and actuators — the eyes and ears, hands and feet — which interface to the outside world. Today, micromechanical devices are the key components in a wide range of products such as automobile airbags, ink-jet printers, blood pressure monitors, and projection display systems. It seems clear that in the not-too-distant future these devices will be as pervasive as electronics. The process has also become more precise, driving the dimensions of the technology down to the sub-micrometre range, as demonstrated by advanced microelectronic circuits that have reached below 20 nm. Micro electromechanical systems The term MEMS, for Micro Electro Mechanical Systems, was coined in the 1980s to describe new, sophisticated mechanical systems on a chip, such as micro electric motors, resonators, gears, and so on. Today, the term MEMS is in practice used to refer to any microscopic device with a mechanical function that can be fabricated in a batch process (for example, an array of microscopic gears fabricated on a microchip would be considered a MEMS device, but a tiny laser-machined stent or watch component would not). In Europe, the term MST, for Micro System Technology, is preferred, and in Japan MEMS are simply referred to as "micromachines". The distinctions between these terms are relatively minor, and they are often used interchangeably. Though MEMS processes are generally classified into a number of categories – such as surface machining, bulk machining, LIGA, and EFAB – there are in fact thousands of different MEMS processes. Some produce fairly simple geometries, while others offer more complex 3-D geometries and more versatility. A company making accelerometers for airbags would need a completely different design and process to produce an accelerometer for inertial navigation. Changing from an accelerometer to another inertial device such as a gyroscope requires an even greater change in design and process, and most likely a completely different fabrication facility and engineering team. MEMS technology has generated a tremendous amount of excitement, due to the vast range of important applications where MEMS can offer previously unattainable standards of performance and reliability. In an age where everything must be smaller, faster, and cheaper, MEMS offers a compelling solution. MEMS have already had a profound impact on certain applications such as automotive sensors and inkjet printers. The emerging MEMS industry is already a multibillion-dollar market. 
It is expected to grow rapidly and become one of the major industries of the 21st century. Cahners In-Stat Group has projected sales of MEMS to reach $12B by 2005. The European NEXUS group projects even larger revenues, using a more inclusive definition of MEMS. Microtechnology is often constructed using photolithography. Lightwaves are focused through a mask onto a surface. They solidify a chemical film. The soft, unexposed parts of the film are washed away. Then acid etches away the material not protected. Microtechnology's most famous success is the integrated circuit. It has also been used to construct micromachinery. As an offshoot of researchers attempting to further miniaturize microtechnology, nanotechnology emerged in the 1980s, particularly after the invention of new microscopy techniques. These produced materials and structures that have 1-100 nm in dimensions. Items constructed at the microscopic level The following items have been constructed on a scale of 1 micrometre using photolithography: Electronics: wires resistors transistors thermionic valves diodes sensors capacitors Machinery: electric motors gears levers bearings hinges Fluidics: valves channels pumps turbines See also Microfabrication References External links Institute for Micromachine and Microfabrication Research at Simon Fraser University Nanotechnology Semiconductor device fabrication Technology by type
Microtechnology
[ "Materials_science", "Engineering" ]
978
[ "Nanotechnology", "Semiconductor device fabrication", "Materials science", "Microtechnology" ]
10,731,502
https://en.wikipedia.org/wiki/Honeycomb%20structure
Honeycomb structures are natural or man-made structures that have the geometry of a honeycomb to allow the minimization of the amount of used material to reach minimal weight and minimal material cost. The geometry of honeycomb structures can vary widely but the common feature of all such structures is an array of hollow cells formed between thin vertical walls. The cells are often columnar and hexagonal in shape. A honeycomb-shaped structure provides a material with minimal density and relative high out-of-plane compression properties and out-of-plane shear properties. Man-made honeycomb structural materials are commonly made by layering a honeycomb material between two thin layers that provide strength in tension. This forms a plate-like assembly. Honeycomb materials are widely used where flat or slightly curved surfaces are needed and their high specific strength is valuable. They are widely used in the aerospace industry for this reason, and honeycomb materials in aluminum, fibreglass and advanced composite materials have been featured in aircraft and rockets since the 1950s. They can also be found in many other fields, from packaging materials in the form of paper-based honeycomb cardboard, to sporting goods like skis and snowboards. Introduction Natural honeycomb structures include beehives, honeycomb weathering in rocks, tripe, and bone. Man-made honeycomb structures include sandwich-structured composites with honeycomb cores. Man-made honeycomb structures are manufactured by using a variety of different materials, depending on the intended application and required characteristics, from paper or thermoplastics, used for low strength and stiffness for low load applications, to high strength and stiffness for high performance applications, from aluminum or fiber reinforced plastics. The strength of laminated or sandwich panels depends on the size of the panel, facing material used and the number or density of the honeycomb cells within it. Honeycomb composites are used widely in many industries, from aerospace industries, automotive and furniture to packaging and logistics. The material takes its name from its visual resemblance to a bee's honeycomb – a hexagonal sheet structure. History The hexagonal comb of the honey bee has been admired and wondered about from ancient times. The first man-made honeycomb, according to Greek mythology, is said to have been manufactured by Daedalus from gold by lost wax casting more than 3000 years ago. Marcus Varro reports that the Greek geometers Euclid and Zenodorus found that the hexagon shape makes most efficient use of space and building materials. The interior ribbing and hidden chambers in the dome of the Pantheon in Rome is an early example of a honeycomb structure. Galileo Galilei discusses in 1638 the resistance of hollow solids: "Art, and nature even more, makes use of these in thousands of operations in which robustness is increased without adding weight, as is seen in the bones of birds and in many stalks that are light and very resistant to bending and breaking”. Robert Hooke discovers in 1665 that the natural cellular structure of cork is similar to the hexagonal honeybee comb. and Charles Darwin states in 1859 that "the comb of the hive-bee, as far as we can see, is absolutely perfect in economizing labour and wax”. The first paper honeycomb structures might have been made by the Chinese 2000 years ago for ornaments, but no reference for this has been found. 
Paper honeycombs and the expansion production process has been invented in Halle/Saale in Germany by Hans Heilbrun in 1901 for decorative applications. First honeycomb structures from corrugated metal sheets had been proposed for bee keeping in 1890. For the same purpose, as foundation sheets to harvest more honey, a honeycomb moulding process using a paper paste glue mixture had been patented in 1878. The three basic techniques for honeycomb production that are still used today—expansion, corrugation and moulding—were already developed by 1901 for non-sandwich applications. Hugo Junkers first explored the idea of a honeycomb core within a laminate structure. He proposed and patented the first honeycomb cores for aircraft application in 1915. He described in detail his concept to replace the fabric covered aircraft structures by metal sheets and reasoned that a metal sheet can also be loaded in compression if it is supported at very small intervals by arranging side by side a series of square or rectangular cells or triangular or hexagonal hollow bodies. The problem of bonding a continuous skin to cellular cores led Junkers later to the open corrugated structure, which could be riveted or welded together. The first use of honeycomb structures for structural applications had been independently proposed for building application and published already in 1914. In 1934 Edward G. Budd patented a welded steel honeycomb sandwich panel from corrugated metal sheets and Claude Dornier aimed 1937 to solve the core-skin bonding problem by rolling or pressing a skin which is in a plastic state into the core cell walls. The first successful structural adhesive bonding of honeycomb sandwich structures was achieved by Norman de Bruyne of Aero Research Limited, who patented an adhesive with the right viscosity to form resin fillets on the honeycomb core in 1938. The North American XB-70 Valkyrie made extensive use of stainless steel honeycomb panels using a brazing process they developed. A summary of the important developments in the history of honeycomb technology is given below: 60 BC Diodorus Siculus reports a golden honeycomb manufactured by Daedalus via lost wax casting. 36 BC Marcus Varro reports most efficient use of space and building materials by hexagonal shape. 126 The Pantheon was rebuilt in Rome using a coffer structure, sunken panel in the shape of a square structure, to support its dome. 1638 Galileo Galilei discusses hollow solids and their increase of resistance without adding weight. 1665 Robert Hooke discovers that the natural cellular structure of cork is similar to the hexagonal honeybee comb. 1859 Charles Darwin states that the comb of the hive-bee is absolutely perfect in economizing labour and wax. 1877 F. H. Küstermann invents a honeycomb moulding process using a paper paste glue mixture. 1890 Julius Steigel invents the honeycomb production process from corrugated metal sheets. 1901 Hans Heilbrun invents the hexagonal paper honeycombs and the expansion production process. 1914 R. Höfler and S. Renyi patent the first use of honeycomb structures for structural applications. 1915 Hugo Junkers patents the first honeycomb cores for aircraft application. 1931 George Thomson proposes to use decorative expended paper honeycombs for lightweight plasterboard panels. 1934 Edward G. Budd patents welded steel honeycomb sandwich panel from corrugated metal sheets. 1937 Claude Dornier patents a honeycomb sandwich panel with skins pressed in a plastic state into the core cell walls. 
1938 Norman de Bruyne patents the structural adhesive bonding of honeycomb sandwich structures. 1941 John D. Lincoln proposes the use of expanded paper honeycombs for aircraft radomes 1948 Roger Steele applies the expansion production process using fiber reinforced composite sheets. 1969 Boeing 747 incorporates extensive fire-resistant honeycombs from Hexcel Composites using DuPont's Nomex aramid fiber paper. 1980s Thermoplastic honeycombs produced by extrusion processes are introduced. Manufacture The three traditional honeycomb production techniques, expansion, corrugation, and moulding, were all developed by 1901 for non-sandwich applications. For decorative applications the expanded honeycomb production reached a remarkable degree of automation in the first decade of the 20th century. Today honeycomb cores are manufactured via the expansion process and the corrugation process from composite materials such as glass-reinforced plastic (also known as fiberglass), carbon fiber reinforced plastic, Nomex aramide paper reinforced plastic, or from a metal (usually aluminum). Honeycombs from metals (like aluminum) are today produced by the expansion process. Continuous processes of folding honeycombs from a single aluminum sheet after cutting slits had been developed already around 1920. Continuous in-line production of metal honeycomb can be done from metal rolls by cutting and bending. Thermoplastic honeycomb cores (usually from polypropylene) are usually made by extrusion processed via a block of extruded profiles or extruded tubes from which the honeycomb sheets are sliced. Recently a new, unique process to produce thermoplastic honeycombs has been implemented, allowing a continuous production of a honeycomb core as well as in-line production of honeycombs with direct lamination of skins into cost efficient sandwich panel. Applications Composite honeycomb structures have been used in numerous engineering and scientific applications. More recent developments show that honeycomb structures are also advantageous in applications involving nanohole arrays in anodized alumina, microporous arrays in polymer thin films, activated carbon honeycombs, and photonic band gap honeycomb structures. Aerodynamics A honeycomb mesh is often used in aerodynamics to reduce or to create wind turbulence. It is also used to obtain a standard profile in a wind tunnel (temperature, flow speed). A major factor in choosing the right mesh is the length ratio (length vs honeycomb cell diameter) L/d. Length ratio < 1: Honeycomb meshes of low length ratio can be used on vehicles front grille. Beside the aesthetic reasons, these meshes are used as screens to get a uniform profile and to reduce the intensity of turbulence. Length ratio >> 1: Honeycomb meshes of large length ratio reduce lateral turbulence and eddies of the flow. Early wind tunnels used them with no screens; unfortunately, this method introduced high turbulence intensity in the test section. Most modern tunnels use both honeycomb and screens. While aluminium honeycombs are common use in the industry, other materials are offered for specific applications. People using metal structures should take care of removing burrs as they can introduce additional turbulences. Polycarbonate structures are a low-cost alternative. The honeycombed, screened center of this open-circuit air intake for Langley's first wind tunnel ensured a steady, non-turbulent flow of air. 
Honeycomb is not the only cross-section available for reducing eddies in an airflow. Square, rectangular, circular and hexagonal cross-sections are other available choices, although honeycomb is generally the preferred one. Properties In combination with two skins applied on the honeycomb, the structure offers a sandwich panel with excellent rigidity at minimal weight. The behavior of honeycomb structures is orthotropic, meaning the panels react differently depending on the orientation of the structure. It is therefore necessary to distinguish between the directions of symmetry, the so-called L- and W-directions. The L-direction is the strongest and stiffest direction. The weakest direction is at 60° from the L-direction (in the case of a regular hexagon), and the most compliant direction is the W-direction. Another important property of a honeycomb sandwich core is its compression strength. Due to the efficient hexagonal configuration, in which the walls support each other, the compression strength of honeycomb cores is typically higher (at the same weight) than that of other sandwich core structures such as foam cores or corrugated cores. The mechanical properties of honeycombs depend on their cell geometry, the properties of the material from which the honeycomb is constructed (often referred to as the solid), including its Young's modulus, yield stress, and fracture stress, and the relative density of the honeycomb (the density of the honeycomb normalized by that of the solid, ρ*/ρs). The ratios of the effective elastic moduli of low-density honeycombs to the Young's modulus of the solid are independent of the solid. The mechanical properties of honeycombs also vary with the direction in which the load is applied. In-plane loading: Under in-plane loading, it is often assumed that the wall thickness of the honeycomb is small compared to the length of the wall. For a regular honeycomb, the relative density is proportional to the wall thickness to wall length ratio (t/L), and the Young's modulus is proportional to (t/L)³. Under a high enough compressive load, the honeycomb reaches a critical stress and fails by one of the following mechanisms – elastic buckling, plastic yielding, or brittle crushing. The mode of failure depends on the material of the solid from which the honeycomb is made: elastic buckling of the cell walls is the mode of failure for elastomeric materials, ductile materials fail by plastic yielding, and brittle crushing is the mode of failure when the solid is brittle. The elastic buckling stress is proportional to the relative density cubed, the plastic collapse stress is proportional to the relative density squared, and the brittle crushing stress is proportional to the relative density squared. Following the critical stress and failure of the material, a plateau stress is observed, in which the strain increases while the stress of the honeycomb remains roughly constant. Once a certain strain is reached, the material begins to undergo densification as further compression pushes the cell walls together. Out-of-plane loading: Under out-of-plane loading, the out-of-plane Young's modulus of a regular hexagonal honeycomb is proportional to the relative density of the honeycomb. 
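The proportionalities above (relative density proportional to t/L, in-plane modulus proportional to (t/L)³, out-of-plane modulus proportional to relative density) are easy to turn into rough numbers. The following minimal Python sketch does so; the numerical prefactors are the usual Gibson–Ashby textbook values for a regular hexagon and are an assumption here, since the text only states the scaling laws.

import math

def regular_hex_honeycomb(t_over_L, E_solid):
    """Rough property estimates for a regular hexagonal honeycomb core.

    Only the scaling laws (rho*/rho_s ~ t/L, in-plane E ~ (t/L)^3,
    out-of-plane E ~ rho*/rho_s) come from the text above; the
    prefactors are assumed Gibson-Ashby values for a regular hexagon.
    """
    rel_density = (2.0 / math.sqrt(3.0)) * t_over_L   # rho*/rho_s
    E_in_plane = 2.3 * E_solid * t_over_L ** 3        # in-plane Young's modulus
    E_out_of_plane = E_solid * rel_density            # out-of-plane Young's modulus
    return rel_density, E_in_plane, E_out_of_plane

# Example: aluminium core (E ~ 70 GPa) with t/L = 0.02
rho_rel, E_ip, E_op = regular_hex_honeycomb(0.02, 70e9)
print(f"relative density {rho_rel:.3f}, in-plane E {E_ip/1e6:.1f} MPa, out-of-plane E {E_op/1e9:.2f} GPa")

Such estimates mainly illustrate how strongly the in-plane stiffness collapses, relative to the out-of-plane stiffness, as the walls get thinner.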
The elastic buckling stress is proportional to (t/L)³, while the plastic buckling stress is proportional to (t/L)^(5/3). The shape of the honeycomb cell is often varied to meet different engineering applications. Shapes that are commonly used besides the regular hexagonal cell include triangular cells, square cells, circular-cored hexagonal cells, and circular-cored square cells. The relative densities of these cells will depend on their new geometry. See also Lightening holes Metal foam Hollow structural section Composite material Sandwich structured composite Sandwich plate system Timoshenko beam theory Plate theory Sandwich panel Triangle structure References Buildings and structures by type Composite materials Aerospace materials Pantheon, Rome
Honeycomb structure
[ "Physics", "Engineering" ]
2,923
[ "Buildings and structures by type", "Aerospace materials", "Composite materials", "Materials", "Aerospace engineering", "Matter", "Architecture" ]
10,732,685
https://en.wikipedia.org/wiki/Phenylalanine%20racemase%20%28ATP-hydrolysing%29
The enzyme phenylalanine racemase (also known as phenylalanine racemase (adenosine triphosphate-hydrolysing) or gramicidin S synthetase I) acts on amino acids and derivatives. It activates both the L and D stereoisomers of phenylalanine to form L-phenylalanyl adenylate and D-phenylalanyl adenylate, which are bound to the enzyme. These bound compounds are then transferred to the thiol group of the enzyme, followed by conversion of configuration, the D-isomer being the more favorable configuration of the two, with a 7 to 3 ratio between the two isomers. The racemisation reaction of phenylalanine is coupled with the highly favorable hydrolysis of adenosine triphosphate (ATP) to adenosine monophosphate (AMP) and pyrophosphate (PP), thermodynamically allowing it to proceed. The reaction is then drawn forward by further hydrolysis of PP to inorganic phosphate (Pi), via Le Chatelier's principle. Other names phenylalanine racemase; phenylalanine racemase (adenosine triphosphate-hydrolysing); gramicidin S synthetase I Pathway Phenylalanine metabolism Substrate L-phenylalanine Product D-phenylalanine Cofactor Pyridoxal phosphate (the active form of vitamin B6) Links to disease Problems in the conversion of phenylalanine (Phe) to tyrosine (Tyr) lead to the buildup of both Phe and phenylpyruvate, in a disease called phenylketonuria (PKU). These two compounds build up in the bloodstream and cerebrospinal fluid, which can lead to intellectual disability if left untreated. Treatment consists of a diet that restricts foods containing Phe or compounds that can break down into Phe. Children in the US are routinely tested for this at birth. For more information see the Phenylketonuria page or the link below. Quick facts pH range = 7.2–8.6 Equilibrium ratio L-Phe:D-Phe = 3:7 Specific activity: 0.019 The reaction Overall, L-phenylalanine is converted to D-phenylalanine, with the coupled hydrolysis of ATP to AMP and pyrophosphate. See also Phenylalanine Racemase Phenylketonuria References External links Protein Data Bank 1amu Metabolism EC 5.1.1
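The 3:7 L:D equilibrium ratio quoted in the quick facts corresponds to only a small free-energy difference between the two isomers. A quick worked estimate (assuming a temperature of 298 K, which the source does not state) is:

\[
K = \frac{[\text{D-Phe}]}{[\text{L-Phe}]} = \frac{7}{3} \approx 2.33,
\qquad
\Delta G^{\circ} = -RT \ln K
\approx -(8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(298\ \mathrm{K})\ln 2.33
\approx -2.1\ \mathrm{kJ\,mol^{-1}}.
\]

That is, the D-isomer is favoured, but only weakly; the strong thermodynamic driving force for the overall reaction comes from the coupled ATP hydrolysis described above.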
Phenylalanine racemase (ATP-hydrolysing)
[ "Chemistry", "Biology" ]
533
[ "Biochemistry", "Metabolism", "Cellular processes" ]
10,736,521
https://en.wikipedia.org/wiki/Succinylmonocholine
Succinylmonocholine is an ester of succinic acid and choline created by the metabolism of suxamethonium chloride. See also Succinic acid Choline References Choline esters Carboxylic acids Quaternary ammonium compounds
Succinylmonocholine
[ "Chemistry" ]
57
[ "Carboxylic acids", "Functional groups" ]
10,737,785
https://en.wikipedia.org/wiki/Floating%20nuclear%20power%20plant
A floating nuclear power plant is a floating power station that derives its energy from a nuclear reactor. Instead of a stationary complex on land, such plants consist of a floating structure such as an offshore platform, barge or conventional ship. Since the reactors employed are smaller in size and power than most commercial land-based reactors, being mostly derived from nuclear ship and submarine power plants, the power output is generally a fraction of that of a conventional nuclear power plant, usually around 100 MWe, although some are planned to have as much as 800 MWe. The advantage of such power plants is their relative mobility and their ability to deliver in-situ electric power "on demand" even to remote regions, since they can be moved or towed into position with relative ease within large water bodies, and then docked with coastal facilities to transfer the produced power and heat to a land power grid. However, environmental groups are concerned that floating nuclear power plants are more exposed to accidents than onshore power stations and also pose a threat to marine habitats. History 20th century The first floating nuclear power station was the MH-1A, using a pressurized water reactor built into a converted Liberty ship, which achieved criticality in 1967. Proposals to build floating nuclear power plants off the coast of New Jersey and off Jacksonville, Florida, were considered in the 1970s but ultimately scrapped. 21st century In the 21st century, Russia has led in the practical development of floating nuclear power stations. On 14 September 2019, Russia's first floating nuclear power plant, Akademik Lomonosov, arrived at its permanent location in the Chukotka region. It started operation on 19 December 2019. In 2022, the United States Department of Energy funded a three-year research study of offshore floating nuclear power generation. In October 2022, NuScale Power and the Canadian company Prodigy announced a joint project to bring a North American small modular reactor-based floating plant to market. Samsung and UK-based Core Power are also looking into using compact molten salt reactor technology in floating platforms, with the former aiming at a modular power barge of up to 800 MWe. Advantages Virtually no land or concrete is used. Earthquake resistant. Easily transported for relocation, refueling, refurbishment and decommissioning. Surrounded by water that can be used for active or passive cooling. Available to remote locations where a conventional power plant would be unfeasible. See also Floating solar Floating wind turbine Footnotes External links Sevmash, a leading Russian manufacturer of floating nuclear power plants Floating nuclear power stations raise spectre of Chernobyl at sea Nuclear power Nuclear power stations Floating nuclear power stations
Floating nuclear power plant
[ "Physics" ]
526
[ "Power (physics)", "Physical quantities", "Nuclear power" ]
5,436,866
https://en.wikipedia.org/wiki/Hamaker%20theory
After the explanation of van der Waals forces by Fritz London, several scientists soon realised that his treatment could be extended from the interaction of two molecules with induced dipoles to macro-scale objects by summing all of the forces between the molecules in each of the bodies involved. The theory is named after H. C. Hamaker, who derived the interaction between two spheres, and between a sphere and a wall, and presented a general discussion in a heavily cited 1937 paper. The interaction of two bodies is then treated as the pairwise interaction of a set of N molecules at positions \(\mathbf{R}_i\), \(i = 1, 2, \ldots, N\). The distance between molecules i and j is then \(r_{ij} = \lvert \mathbf{R}_i - \mathbf{R}_j \rvert\). The interaction energy of the system is taken to be \(E = \sum_{i=1}^{N} \sum_{j>i} u_{ij}(r_{ij})\), where \(u_{ij}\) is the interaction of molecules i and j in the absence of the influence of the other molecules. The theory is, however, only an approximation: it assumes that the pairwise interactions can be treated independently (i.e. additively), and it must also be adjusted to take quantum perturbation theory into account. References Physical chemistry Intermolecular forces
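A minimal sketch of this pairwise ("Hamaker") summation in Python, assuming a simple non-retarded London interaction of the form -C/r^6 between individual molecules; the cluster geometry and the value of C below are purely illustrative.

import itertools
import numpy as np

def pairwise_vdw_energy(body_a, body_b, C):
    """Sum the London interactions (-C / r^6) between every molecule of
    body_a and every molecule of body_b (each an (N, 3) array of positions).
    This is the additive, non-retarded approximation discussed above."""
    energy = 0.0
    for r_i in body_a:
        for r_j in body_b:
            r = np.linalg.norm(r_i - r_j)
            energy += -C / r**6
    return energy

# Toy example: two 3x3x3 clusters of "molecules", 2 units apart (units arbitrary)
grid = 0.3 * np.array(list(itertools.product(range(3), repeat=3)), dtype=float)
body_a = grid
body_b = grid + np.array([2.0, 0.0, 0.0])
print(pairwise_vdw_energy(body_a, body_b, C=1.0))

For bodies of simple shape (two spheres, or a sphere and a wall) this double sum can be carried out analytically, which is what Hamaker did in the 1937 paper.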
Hamaker theory
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
217
[ "Molecular physics", "Applied and interdisciplinary physics", "Materials science", "Intermolecular forces", "nan", "Physical chemistry", "Physical chemistry stubs" ]
5,442,545
https://en.wikipedia.org/wiki/Induction%20hardening
Induction hardening is a type of surface hardening in which a metal part is induction-heated and then quenched. The quenched metal undergoes a martensitic transformation, increasing the hardness and brittleness of the part. Induction hardening is used to selectively harden areas of a part or assembly without affecting the properties of the part as a whole. Process Induction heating is a non contact heating process which uses the principle of electromagnetic induction to produce heat inside the surface layer of a work-piece. By placing a conductive material into a strong alternating magnetic field, electric current can be made to flow in the material thereby creating heat due to the I2R losses in the material. In magnetic materials, further heat is generated below the curie point due to hysteresis losses. The current generated flows predominantly in the surface layer, the depth of this layer being dictated by the frequency of the alternating field, the surface power density, the permeability of the material, the heat time and the diameter of the bar or material thickness. By quenching this heated layer in water, oil, or a polymer based quench, the surface layer is altered to form a martensitic structure which is harder than the base metal. Definition A widely used process for the surface hardening of steel. The components are heated by means of an alternating magnetic field to a temperature within or above the transformation range followed by immediate quenching. The core of the component remains unaffected by the treatment and its physical properties are those of the bar from which it was machined, whilst the hardness of the case can be within the range 37/58 HRC. Carbon and alloy steels with an equivalent carbon content in the range 0.40/0.45% are most suitable for this process. A large alternating current is driven through a coil, generating a very intense and rapidly changing magnetic field in the space within. The workpiece to be heated is placed within this alternating magnetic field where eddy currents are generated within the workpiece and resistance leads to Joule heating of the metal. Many mechanical parts, such as shafts, gears, and springs, are subjected to surface treatments after machining in order to improve wear behavior. The effectiveness of these treatments depends both on surface materials properties modification and on the introduction of residual stress. Among these treatments, induction hardening is one of the most widely employed to improve component durability. It determines in the work-piece a tough core with tensile residual stresses and a hard surface layer with compressive stress, which have proved to be very effective in extending the component fatigue life and wear resistance. Induction surface hardened low alloyed medium carbon steels are widely used for critical automotive and machine applications which require high wear resistance. Wear resistance behavior of induction hardened parts depends on hardening depth and the magnitude and distribution of residual compressive stress in the surface layer. History The basis of all induction heating systems was discovered in 1831 by Michael Faraday. Faraday proved that by winding two coils of wire around a common magnetic core it was possible to create a momentary electromotive force in the second winding by switching the electric current in the first winding on and off. 
He further observed that if the current was kept constant, no EMF was induced in the second winding and that this current flowed in opposite directions subject to whether the current was increasing or decreasing in the circuit. Faraday concluded that an electric current can be produced by a changing magnetic field. As there was no physical connection between the primary and secondary windings, the emf in the secondary coil was said to be induced and so Faraday's law of induction was born. Once discovered, these principles were employed over the next century or so in the design of dynamos (electrical generators and electric motors, which are variants of the same thing) and in forms of electrical transformers. In these applications, any heat generated in either the electrical or magnetic circuits was felt to be undesirable. Engineers went to great lengths and used laminated cores and other methods to minimise the effects. Early last century the principles were explored as a means to melt steel, and the motor generator was developed to provide the power required for the induction furnace. After general acceptance of the methodology for melting steel, engineers began to explore other possibilities for the use of the process. It was already understood that the depth of current penetration in steel was a function of its magnetic permeability, resistivity and the frequency of the applied field. Engineers at Midvale Steel and The Ohio Crankshaft Company drew on this knowledge to develop the first surface hardening induction heating systems using motor generators. The need for rapid easily automated systems led to massive advances in the understanding and use of the induction hardening process and by the late 1950s many systems using motor generators and thermionic emission triode oscillators were in regular use in a vast array of industries. Modern day induction heating units use the latest in semiconductor technology and digital control systems to develop a range of powers from 1 kW to many megawatts. Principal methods Single shot hardening In single shot systems the component is held statically or rotated in the coil and the whole area to be treated is heated simultaneously for a pre-set time followed by either a flood quench or a drop quench system. Single shot is often used in cases where no other method will achieve the desired result for example for flat face hardening of hammers, edge hardening complex shaped tools or the production of small gears. In the case of shaft hardening a further advantage of the single shot methodology is the production time compared with progressive traverse hardening methods. In addition the ability to use coils which can create longitudinal current flow in the component rather than diametric flow can be an advantage with certain complex geometry. There are disadvantages with the single shot approach. The coil design can be an extremely complex and involved process. Often the use of ferrite or laminated loading materials is required to influence the magnetic field concentrations in given areas thereby to refine the heat pattern produced. Another drawback is that much more power is required due to the increased surface area being heated compared with a traverse approach. Traverse hardening In traverse hardening systems the work piece is passed through the induction coil progressively and a following quench spray or ring is used. 
Traverse hardening is used extensively in the production of shaft type components such as axle shafts, excavator bucket pins, steering components, power tool shafts and drive shafts. The component is fed through a ring type inductor which normally features a single turn. The width of the turn is dictated by the traverse speed, the available power and frequency of the generator. This creates a moving band of heat which when quenched creates the hardened surface layer. The quench ring can be either integral a following arrangement or a combination of both subject to the requirements of the application. By varying speed and power it is possible to create a shaft which is hardened along its whole length or just in specific areas and also to harden shafts with steps in diameter or splines. It is normal when hardening round shafts to rotate the part during the process to ensure any variations due to concentricity of the coil and the component are removed. Traverse methods also feature in the production of edge components, such as paper knives, leather knives, lawnmower bottom blades, and hacksaw blades. These types of application normally use a hairpin coil or a transverse flux coil which sits over the edge of the component. The component is progressed through the coil and a following spray quench consisting of nozzles or drilled blocks. Many methods are used to provide the progressive movement through the coil and both vertical and horizontal systems are used. These normally employ a digital encoder and programmable logic controller for the positional control, switching, monitoring, and setting. In all cases the speed of traverse needs to be closely controlled and consistent as variation in speed will have an effect on the depth of hardness and the hardness value achieved. Equipment Power required Power supplies for induction hardening vary in power from a few kilowatts to hundreds of kilowatts depending on the size of the component to be heated and the production method employed i.e. single shot hardening, traverse hardening or submerged hardening. In order to select the correct power supply it is first necessary to calculate the surface area of the component to be heated. Once this has been established then a variety of methods can be used to calculate the power density required, heat time and generator operating frequency. Traditionally this was done using a series of graphs, complex empirical calculations and experience. Modern techniques typically use finite element analysis and computer-aided manufacturing techniques, however as with all such methods a thorough working knowledge of the induction heating process is still required. For single shot applications the total area to be heated needs to be calculated. In the case of traverse hardening the circumference of the component is multiplied by the face width of the coil. Care must be exercised when selecting a coil face width that it is practical to construct the coil of the chosen width and that it will live at the power required for the application. Frequency Induction heating systems for hardening are available in a variety of different operating frequencies typically from 1 kHz to 400 kHz. Higher and lower frequencies are available but typically these will be used for specialist applications. The relationship between operating frequency and current penetration depth and therefore hardness depth is inversely proportional. i.e. the lower the frequency the deeper the case. 
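To make the frequency–depth relationship concrete, here is a small sketch using the standard electromagnetic skin-depth formula, delta = sqrt(rho / (pi f mu)). The formula and the material values for hot steel are textbook assumptions and are not given in the text itself; the actual hardened depth also depends on power density and heat time, as described above.

import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def skin_depth(resistivity, mu_r, frequency):
    """Reference depth of the induced current layer, delta = sqrt(rho / (pi * f * mu))."""
    return math.sqrt(resistivity / (math.pi * frequency * MU_0 * mu_r))

# Rough values for steel heated above the Curie point (assumed): mu_r ~ 1, rho ~ 1e-6 ohm*m
for f in (1e3, 10e3, 100e3, 400e3):   # span of typical hardening frequencies
    d = skin_depth(1.0e-6, 1.0, f)
    print(f"{f/1e3:6.0f} kHz  ->  {d*1e3:5.2f} mm")

The numbers reproduce the trend stated above: dropping from 400 kHz to 1 kHz increases the reference depth from well under a millimetre to over a centimetre, hence the deeper case at lower frequency.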
Such guideline figures are purely illustrative: good results can be obtained outside these ranges by balancing power density, frequency and other practical considerations, including cost, which may influence the final selection, heat time and coil width. As well as the power density and frequency, the time for which the material is heated will influence the depth to which the heat flows by conduction. The time in the coil can be influenced by the traverse speed and the coil width; however, this will also have an effect on the overall power requirement and the equipment throughput. It can be seen that the selection of the correct equipment for any application can be extremely complex, as more than one combination of power, frequency and speed can be used for a given result. However, in practice many selections are immediately obvious based on previous experience and practicality. Advantages Fast process, no holding time required, hence a higher production rate No scaling or decarburizing Greater case depth, up to 8 mm Selective hardening High wear and fatigue resistance Applications The process is applicable to electrically conductive magnetic materials such as steel. Long workpieces such as axles can be processed. See also Case hardening Induction forging Induction heater Induction shrink fitting References Notes Bibliography External links Frequently Asked Questions About The Induction Hardening Process with examples of Induction Heating Applications The National Metals Centre offering Design, Modeling & Simulation (DMS) technologies relating to Induction Hardening processes - NAMTEC Metal heat treatments
Induction hardening
[ "Chemistry" ]
2,227
[ "Metallurgical processes", "Metal heat treatments" ]
5,443,884
https://en.wikipedia.org/wiki/Ornstein%20isomorphism%20theorem
In mathematics, the Ornstein isomorphism theorem is a deep result in ergodic theory. It states that if two Bernoulli schemes have the same Kolmogorov entropy, then they are isomorphic. The result, given by Donald Ornstein in 1970, is important because it shows that many systems previously believed to be unrelated are in fact isomorphic; these include all finite stationary stochastic processes, including Markov chains and subshifts of finite type, Anosov flows and Sinai's billiards, ergodic automorphisms of the n-torus, and the continued fraction transform. Discussion The theorem is actually a collection of related theorems. The first theorem states that if two different Bernoulli shifts have the same Kolmogorov entropy, then they are isomorphic as dynamical systems. The third theorem extends this result to flows: namely, that there exists a flow \(\varphi_t\) such that its time-one map \(\varphi_1\) is a Bernoulli shift. The fourth theorem states that, for a given fixed entropy, this flow is unique, up to a constant rescaling of time. The fifth theorem states that there is a single, unique flow (up to a constant rescaling of time) that has infinite entropy. The phrase "up to a constant rescaling of time" means simply that if \(\varphi_t\) and \(\psi_t\) are two Bernoulli flows with the same entropy, then \(\varphi_t\) is isomorphic to \(\psi_{ct}\) for some constant c. The developments also included proofs that factors of Bernoulli shifts are isomorphic to Bernoulli shifts, and gave criteria for a given measure-preserving dynamical system to be isomorphic to a Bernoulli shift. A corollary of these results is a solution to the root problem for Bernoulli shifts: for example, given a Bernoulli shift T, there is another Bernoulli shift whose square is isomorphic to T (a "square root" of T). History The question of isomorphism dates to von Neumann, who asked whether the two Bernoulli schemes BS(1/2, 1/2) and BS(1/3, 1/3, 1/3) were isomorphic. In 1959, Ya. Sinai and Kolmogorov replied in the negative, showing that two different schemes cannot be isomorphic if they do not have the same entropy. Specifically, they showed that the entropy of a Bernoulli scheme BS(p1, p2, ..., pn) is given by \(H = -\sum_{i=1}^{n} p_i \log p_i\). The Ornstein isomorphism theorem, proved by Donald Ornstein in 1970, states that two Bernoulli schemes with the same entropy are isomorphic. The result is sharp, in that very similar, non-scheme systems do not have this property; specifically, there exist Kolmogorov systems with the same entropy that are not isomorphic. Ornstein received the Bôcher prize for this work. A simplified proof of the isomorphism theorem for symbolic Bernoulli schemes was given by Michael S. Keane and M. Smorodinsky in 1979. References Further reading Steven Kalikow, Randall McCutcheon (2010) Outline of Ergodic Theory, Cambridge University Press Donald Ornstein (2008), "Ornstein theory" Scholarpedia, 3(3):3957. Daniel J. Rudolph (1990) Fundamentals of measurable dynamics: Ergodic theory on Lebesgue spaces, Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1990. Ergodic theory Symbolic dynamics
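As a concrete instance of the Sinai–Kolmogorov entropy argument just described, the two schemes in von Neumann's question can be distinguished directly:

\[
H\bigl(\mathrm{BS}(\tfrac{1}{2},\tfrac{1}{2})\bigr)
  = -\tfrac{1}{2}\log\tfrac{1}{2} - \tfrac{1}{2}\log\tfrac{1}{2} = \log 2,
\qquad
H\bigl(\mathrm{BS}(\tfrac{1}{3},\tfrac{1}{3},\tfrac{1}{3})\bigr)
  = -3\cdot\tfrac{1}{3}\log\tfrac{1}{3} = \log 3 .
\]

Since log 2 and log 3 differ, the two schemes cannot be isomorphic; conversely, Ornstein's theorem says that equality of this single invariant is already sufficient for isomorphism of Bernoulli schemes.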
Ornstein isomorphism theorem
[ "Mathematics" ]
696
[ "Symbolic dynamics", "Ergodic theory", "Dynamical systems" ]
6,936,536
https://en.wikipedia.org/wiki/Prandtl%E2%80%93Meyer%20expansion%20fan
A supersonic expansion fan, technically known as a Prandtl–Meyer expansion fan (a two-dimensional simple wave), is a centered expansion process that occurs when a supersonic flow turns around a convex corner. The fan consists of an infinite number of Mach waves, diverging from a sharp corner. When a flow turns around a smooth and circular corner, these waves can be extended backwards to meet at a point. Each wave in the expansion fan turns the flow gradually (in small steps). It is physically impossible for the flow to turn through a single "shock" wave because this would violate the second law of thermodynamics. Across the expansion fan, the flow accelerates (the velocity increases) and the Mach number increases, while the static pressure, temperature and density decrease. Since the process is isentropic, the stagnation properties (e.g. the total pressure and total temperature) remain constant across the fan. The theory was described by Theodor Meyer in his 1908 thesis, together with his advisor Ludwig Prandtl, who had already discussed the problem a year before. Flow properties The expansion fan consists of an infinite number of expansion waves or Mach lines. The first Mach line is at an angle \(\mu_1 = \arcsin(1/M_1)\) with respect to the initial flow direction, and the last Mach line is at an angle \(\mu_2 = \arcsin(1/M_2)\) with respect to the final flow direction. Since the flow turns in small angles and the changes across each expansion wave are small, the whole process is isentropic. This simplifies the calculations of the flow properties significantly. Since the flow is isentropic, the stagnation properties, such as the stagnation pressure (\(p_0\)), stagnation temperature (\(T_0\)) and stagnation density (\(\rho_0\)), remain constant. The final static properties are a function of the final flow Mach number (\(M_2\)) and can be related to the initial flow conditions as follows, where \(\gamma\) is the heat capacity ratio of the gas (1.4 for air):

\[
\frac{T_2}{T_1} = \frac{1 + \frac{\gamma-1}{2}M_1^2}{1 + \frac{\gamma-1}{2}M_2^2}, \qquad
\frac{p_2}{p_1} = \left(\frac{1 + \frac{\gamma-1}{2}M_1^2}{1 + \frac{\gamma-1}{2}M_2^2}\right)^{\gamma/(\gamma-1)}, \qquad
\frac{\rho_2}{\rho_1} = \left(\frac{1 + \frac{\gamma-1}{2}M_1^2}{1 + \frac{\gamma-1}{2}M_2^2}\right)^{1/(\gamma-1)}.
\]

The Mach number after the turn (\(M_2\)) is related to the initial Mach number (\(M_1\)) and the turn angle (\(\theta\)) by

\[
\theta = \nu(M_2) - \nu(M_1),
\]

where \(\nu(M)\) is the Prandtl–Meyer function. This function determines the angle through which a sonic flow (M = 1) must turn to reach a particular Mach number (M). Mathematically,

\[
\nu(M) = \sqrt{\frac{\gamma+1}{\gamma-1}}\,
\arctan\!\sqrt{\frac{\gamma-1}{\gamma+1}\,(M^2-1)}
\;-\; \arctan\!\sqrt{M^2-1}.
\]

By convention, \(\nu(1) = 0\). Thus, given the initial Mach number (\(M_1\)), one can calculate \(\nu(M_1)\) and, using the turn angle, find \(\nu(M_2) = \nu(M_1) + \theta\). From the value of \(\nu(M_2)\) one can obtain the final Mach number (\(M_2\)) and the other flow properties. The velocity field inside the expansion fan can also be written in polar coordinates centered on the corner, in terms of the specific enthalpy and the stagnation specific enthalpy. Maximum turn angle As the Mach number varies from 1 to \(\infty\), \(\nu\) takes values from 0 to \(\nu_{\max}\), where

\[
\nu_{\max} = \frac{\pi}{2}\left(\sqrt{\frac{\gamma+1}{\gamma-1}} - 1\right).
\]

This places a limit on how much a supersonic flow can turn through, with the maximum turn angle given by

\[
\theta_{\max} = \nu_{\max} - \nu(M_1).
\]

One can also look at it as follows. A flow has to turn so that it can satisfy the boundary conditions. In an ideal flow, there are two kinds of boundary condition that the flow has to satisfy: the velocity boundary condition, which dictates that the component of the flow velocity normal to the wall be zero (also known as the no-penetration boundary condition), and the pressure boundary condition, which states that there cannot be a discontinuity in the static pressure inside the flow (since there are no shocks in the flow). If the flow turns enough so that it becomes parallel to the wall, we do not need to worry about the pressure boundary condition. However, as the flow turns, its static pressure decreases (as described earlier). If there is not enough pressure to start with, the flow will not be able to complete the turn and will not be parallel to the wall. 
This shows up as the maximum angle through which a flow can turn. The lower the Mach number is to start with (i.e. the smaller \(M_1\)), the greater the maximum angle through which the flow can turn. The streamline which separates the final flow direction and the wall is known as a slipstream (often drawn as a dashed line in sketches of the fan). Across this line there is a jump in the temperature, density and tangential component of the velocity (the normal component being zero). Beyond the slipstream the flow is stagnant (which automatically satisfies the velocity boundary condition at the wall). In the case of a real flow, a shear layer is observed instead of a slipstream, because of the additional no-slip boundary condition. Notes See also Gas dynamics Mach wave Oblique shock Shock wave Shadowgraph technique Schlieren photography Sonic boom References External links Expansion fan (NASA) Prandtl–Meyer expansion fan calculator (Java applet). Aerodynamics Conservation equations Fluid dynamics
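A small numerical sketch of the relations above, assuming a calorically perfect gas with gamma = 1.4 (air); the bisection used to invert the Prandtl–Meyer function is just one convenient choice, not the only way to solve for the downstream Mach number.

import math

def prandtl_meyer(M, gamma=1.4):
    """Prandtl-Meyer function nu(M), in radians, for M >= 1."""
    a = math.sqrt((gamma + 1.0) / (gamma - 1.0))
    return a * math.atan(math.sqrt((M * M - 1.0) / (a * a))) - math.atan(math.sqrt(M * M - 1.0))

def mach_after_turn(M1, theta_deg, gamma=1.4):
    """Solve nu(M2) = nu(M1) + theta for M2 by bisection (nu is monotonically increasing)."""
    target = prandtl_meyer(M1, gamma) + math.radians(theta_deg)
    lo, hi = 1.0 + 1e-12, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if prandtl_meyer(mid, gamma) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: Mach 2 flow expanding around a 10 degree convex corner
M1, theta = 2.0, 10.0
M2 = mach_after_turn(M1, theta)
p2_over_p1 = ((1 + 0.2 * M1**2) / (1 + 0.2 * M2**2)) ** (1.4 / 0.4)
print(f"M2 = {M2:.3f}, p2/p1 = {p2_over_p1:.3f}")   # Mach number rises, static pressure falls

For this case the downstream Mach number comes out around 2.38 and the static pressure drops to roughly 55% of its upstream value, consistent with the qualitative description of the fan above.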
Prandtl–Meyer expansion fan
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
955
[ "Chemical engineering", "Conservation laws", "Mathematical objects", "Equations", "Aerodynamics", "Aerospace engineering", "Piping", "Fluid dynamics", "Conservation equations", "Symmetry", "Physics theorems" ]
6,938,470
https://en.wikipedia.org/wiki/Rotary%20vacuum-drum%20filter
A Rotary Vacuum Filter Drum consists of a cylindrical filter membrane that is partly sub-merged in a slurry to be filtered. The inside of the drum is held lower than the ambient pressure. As the drum rotates through the slurry, the liquid is sucked through the membrane, leaving solids to cake on the membrane surface while the drum is submerged. A knife or blade is positioned to scrape the product from the surface. The technique is well suited to slurries, flocculated suspensions, and liquids with a high solid content, which could clog other forms of filter. It is common to pre-coated with a filter aid, typically of diatomaceous earth (DE) or Perlite. In some implementations, the knife also cuts off a small portion of the filter media to reveal a fresh media surface that will enter the liquid as the drum rotates. Such systems advance the knife automatically as the surface is removed. Basic fundamentals Rotary vacuum drum filter Rotary vacuum drum filter (RVDF), patented in 1872, is one of the oldest filters used in the industrial liquid-solids separation. It offers a wide range of industrial processing flow sheets and provides a flexible application of dewatering, washing and/or clarification. A rotary vacuum filter consists of a large rotating drum covered by a cloth. The drum is suspended on an axial over a trough containing liquid or solids slurry with approximately 50-80% of the screen area immersed in the slurry. As the drum rotates into and out of the trough, the slurry is sucked on the surface of the cloth and rotated out of the liquid or solids suspension as a cake. When the cake is rotating out, it is dewatered in the drying zone. The cake is dry because the vacuum drum is continuously sucking the cake and taking the water out of it. At the final step of the separation, the cake is discharged as solids products and the drum rotates continuously to another separation cycle. Range of application Applications: The rotary filter is most suitable for continuous operation on large quantities of slurry. If the slurry contains considerable amount of solids, that is, in the range of 15-30%. Examples of pharmaceutical applications include the collection of calcium carbonate, magnesium carbonate and starch. The separation of the mycelia from the fermentation liquor in the manufacture of antibiotics. block and instant yeast production. Advantages and limitations The advantages and limitations of rotary vacuum drum filter compared to other separation methods are: Advantages The rotary vacuum drum filter is a continuous and automatic operation, so the operating cost is low. The variation of the drum speed rotating can be used to control the cake thickness. The process can be easily modified (pre-coating filter process). Can produce relatively clean product by adding a showering device. Disadvantages Due to the structure, the pressure difference is theoretically limited to atmospheric pressure (1 bar), and in practice somewhat lower. Besides the drum, other accessories, for example, agitators and vacuum pump, vacuum receivers, slurry pumps are required. The discharge cake contains residual moisture. The cake tends to crack due to their air drawn through by the vacuum system, so that washing and drying are not efficient. High energy consumption by the vacuum pump. Designs available Basically there are five types of discharge that are used for the rotary vacuum drum filter such as belt, scraper, roll, string and pre coat discharge. 
Belt discharge The filter cloth is washed on both sides during each drum rotation while the filter cake is discharged. The cakes produced by this mechanism are usually sticky, wet and thin, and therefore require the aid of a discharge roll. Belt discharge is used for slurries of moderate solids concentration, for slurries that filter easily into a well-formed cake, or where longer cloth wear resistance is desired. Scraper discharge This is the standard drum filter discharge. A scraper blade, which serves to redirect the filter cake into the discharge chute, removes the cake from the filter cloth just before the drum re-enters the vat. Scraper discharge is used where the separation requires a high filtration rate, for heavy solids slurries, for slurries that filter easily into a well-formed cake, or where longer cloth wear resistance is desired. Roll discharge This is a suitable discharge option for cakes that are thin and tend to stick to one another. The filter cake on the drum and on the discharge roll are pressed against one another so that the thin cake is peeled or pulled from the drum; solids are then removed from the discharge roll by a knife blade. Roll discharge is used where the separation requires a high filtration rate, for high solids content slurries, for slurries that filter easily into a well-formed cake, or where the discharged solid is a sticky or mud-like cake. String discharge Thin, fragile filter cakes are the usual products of this discharge type. The materials may change phase, from solid to liquid, due to instability and disturbance. Two rollers guide the strings back to the drum surface, and the cake separates from the cloth as the strings pass over the rollers. String discharge is applied in the pharmaceutical and starch industries. It is used for high solids concentration slurries, for slurries that filter easily into a well-formed cake, where the discharged solid is fibrous, stringy or pulpy, or where longer cloth wear resistance is desired. Pre coat discharge This discharge is usually applied where the filter cake would otherwise blind the filter medium thoroughly, or in processes with a low solids concentration slurry. Pre coat discharge is used for slurries with a very low solids concentration that makes cake formation difficult, or for slurries that are difficult to filter into a cake. Main process characteristics and assessment Generally, the main process in a rotary vacuum drum filter is continuous filtration, whereby solids are separated from liquids through a filter medium by a vacuum. The filter cloth is one of the most important components of the filter and is typically woven from polymer yarns; good cloth selection improves filtration performance. Initially, slurry is pumped into the trough and, as the drum rotates, it is partially submerged in the slurry. The vacuum draws liquid and air through the filter medium and out through the shaft, forming a layer of cake. An agitator is used to keep the slurry in suspension if it is coarse and settles rapidly. Solids trapped on the surface of the drum are washed and then dried over about two-thirds of a revolution, removing the free moisture. During the washing stage, the wash liquid can either be poured onto the drum or sprayed onto the cake.
Cake pressing is optional, but it prevents cake cracking and removes more moisture. Cake discharge is the stage in which the solids are removed from the drum surface by a scraper blade, leaving a clean surface as the drum re-enters the slurry; the main discharge types are scraper, roll, string, endless belt and pre coat. The filtrate and air flow through internal pipes and a valve into the vacuum receiver, where liquid and gas are separated, producing a clear filtrate. Pre coat filtration is an ideal method to produce a filtrate of high clarity. The drum surface is pre coated with a filter aid such as diatomaceous earth (DE) or perlite to improve filtration and increase cake permeability. It then undergoes the same process cycle as the conventional rotary vacuum drum filter, except that pre coat filtration uses a higher-precision blade to scrape off the cake. The filter is assessed by the size of the drum, or filter area, and its possible output. Typically, the output is given in pounds per hour of dry solids per square foot of filter area; the size of the auxiliary parts depends on the filter area and the type of usage. Rotary vacuum filters are flexible in handling a variety of materials, so the estimated solids yield ranges from 5 to 200 pounds per hour per square foot, and for pre coat discharge the output is approximately 2 to 40 gallons per hour per square foot (a rough sizing sketch based on these figures is given at the end of this article). Filtration efficiency, in terms of the dryness of the filter cake, can also be improved by preventing filtrate liquid from being held up in the filter drum during the filtration phase. Using more filters, for example running 3 filter units instead of 2, yields a thicker cake and hence a clearer filtrate, which is beneficial in terms of both production cost and quality. Heuristics design process Basic operation parameters heuristics Vat level and drum speed are the two basic operating parameters for any rotary vacuum drum filter. These parameters are adjusted together to optimize filtration performance. Vat level determines the proportions of the filter cycle, which consists of the drum rotation, the formation of cake from the slurry and the drying period for the formed cake, as shown in figure 1. By default, operate the vat at its maximum level to maximise the rate of filtration. Reduce the vat level if the discharged solid is a thin, slimy cake or if the discharged solid is very thick. Decreasing the vat level decreases the portion of the drum submerged in the slurry and exposes more of the surface for cake drying, giving a larger cake formation to drying time ratio. This results in a lower moisture content and a thinner formed cake. In addition, at a lower vat level the flow rate per drum revolution decreases, ultimately giving thinner cake formation. In the case of pre coat discharge, the filter aid efficiency increases. Drum speed is the driving factor for the filter output and is expressed in minutes per drum revolution. At steady operating conditions, the filter throughput is proportional to the drum speed, as shown in figure 2. Discharge mechanism adjustment heuristics Endless belt Select the filter cloth to obtain a good surface for cake formation. Use a twill weave variation in the construction pattern of the fabric for better wear resistance.
The belt tension, de-mooning bar height, wash water quantity and discharge roll speed are carefully tuned to maintain a good path for the formed cake and to prevent excessive wear of the filter cloth. Scraper Select the filter cloth for good wear and solids binding characteristics. Use a moderate blowback pressure to avoid high wear, and keep the duration of the blowback just long enough to remove the cake from the filter cloth. Tuning of the valve body is important so that the blowback does not force excess filtrate back out of the pipe along with the released cake solids, as this minimises wear and filter media maintenance. Roll Select the filter cloth for resistance to solids binding and good cake release. Use a coated fabric for more effective cake release and a longer-lasting cloth. The discharge roll speed and drum speed must be the same. Adjust the scraper knife to leave a significant heel of cake on the discharge roll to produce continuous cake transfer. String Minimise the lateral pressure on the strings by adjusting the alignment tine bar, to avoid the strings being cut off. Place a ceramic tube over each aligning tine bar to act as a bearing surface for the strings. Pre coat Select the filter cloth based on the type of filter aid used (refer to filter aid selection) and adjust the advancing knife to optimize the knife advance rate per drum revolution (detailed in the advance blade section). Pre coat filter operation heuristics Filter aid selection: the filter aid forms a pre coat cake that acts as the actual filter medium; the two common types are diatomaceous earth and perlite. An important parameter to consider is the penetration of solids into the pre coat cake, which is limited to about 0.002 to 0.005 inch. If a large amount of filter aid is used, i.e. an "open" grade, more filter aid is removed, which leads to higher disposal cost; if too little filter aid is used, i.e. a "tight" grade, there will be no flow into the drum. This comparison is illustrated in figure 5. Advance blade The approximate knife advance rate can be determined for a set of operating conditions using table 6; the table indicates the number of hours that the filter can operate on a one-inch pre coat cake, provided the advance blade is kept at a constant setting. This method can be used to check for the optimum operating range. If the operating parameter is higher than the optimum range, the user can reduce the knife advance rate and use a tighter grade of filter aid; this results in less filter aid used (lower capital cost) and less filter aid removed (lower disposal cost). If the operating parameter is lower than the optimum range, the user can increase the knife advance rate (more production) and decrease the drum speed for less filter aid usage (reduced operating cost). Necessary post treatment for waste stream from thickener Chlorination The most commonly used post treatment, in which chlorine is dissolved in water to form hydrochloric acid and hypochlorous acid. The latter acts as a disinfectant that eliminates pathogens such as bacteria, viruses and protozoa by penetrating their cell walls. UV radiation The waste stream is irradiated with ultraviolet radiation, which disinfects by mutating pathogen cells and preventing them from replicating. Eventually the mutated cells die off, and this process also eliminates odour.
Ozonation The stream is exposed to ozone, which is unstable at atmospheric conditions. The ozone (O3) decomposes into oxygen (O2), and more oxygen is dissolved into the stream. Pathogens are oxidised, forming carbon dioxide. This process eliminates the odour of the stream but results in a slightly acidic product due to the carbon dioxide present. Necessary post treatment for waste stream from clarifier Land reclamation The waste discharge can be dried into bio-solids and distributed to market as a land stabilizer, used in reclaiming marginal land such as mining waste land. This helps restore the land to its original appearance. Incineration The waste discharge can be sent to an incineration plant, where the organic solids undergo combustion. The combustion process produces heat that can be used to generate electricity. New developments The rotary vacuum drum filter designs available vary in physical aspects and characteristics, with filtration areas ranging from 0.5 m2 to 125 m2. Regardless of the size of the design, filter cloth washing is a priority, as it ensures efficient cake washing and an effective vacuum. A smaller design is more economical, as its maintenance, energy usage and investment costs are lower than those of a larger rotary vacuum drum filter. Over the years, technology has pushed development of the rotary vacuum drum filter further in terms of design, performance, maintenance and cost. This has also led to the development of smaller rotary vacuum drum filters, ranging from laboratory scale to pilot scale, which can be used for smaller applications (such as in a university laboratory). High throughput and optimised filtrate drainage with low flow resistance and minimal pressure loss are just a few of the benefits. Advanced control systems have brought automation, reducing the operator attention needed and hence the operational cost. Advances in technology also mean that the precoat can be cut to 1/20th the thickness of a human hair, making the use of precoat more efficient. Lower operational and capital costs can also be achieved due to easier maintenance and cleaning, and complete cell emptying can be done quickly with the installation of leading and trailing pipes. Given that the filter cloth is usually one of the more expensive components in a rotary vacuum drum filter, its maintenance must be given high priority; a long lifetime, protection from damage and consistent performance are criteria that must not be overlooked. Besides production cost and quality, cake washing and cake thickness are essential issues in the process. Methods have been developed to ensure a minimal amount of cake moisture while achieving good cake washing with a large cake dewatering angle; an even filter cake thickness and complete cake discharge are also achievable. See also Vacuum ceramic filter Dewatering References Further reading John J. McKetta, John J. McKetta Jr, "Unit Operations Handbook: Mechanical separations and materials handling", CRC Press, 1992, pp. 274–288. Hiroaki Masuda, Kō Higashitani, Hideto Yoshida, "Powder Technology: Handling and Operations, Process Instrumentation, and Working Hazards", CRC Press, 2006, pp. 194–195.
External links Rotary drum filter, United States Patent 308143 Rotary drum filter, United States Patent 5006136 Luthi rotary drum filter Filter, patent number 2362300 Drum Filter Made in Viet Nam Filters Separation processes
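To make the assessment figures above concrete, the following rough sizing sketch estimates the required filter area from a target dry-solids throughput and a specific yield taken from the 5–200 lb/(h·ft²) range quoted earlier. The throughput, the chosen mid-range yield and the drum dimensions are illustrative assumptions, not values from this article.

```python
import math

# Rough rotary vacuum drum filter sizing sketch.
# Assumptions (illustrative only): a specific yield from the 5-200 lb/(h*ft^2)
# range quoted in the text, an example duty, and an example drum size.

def required_filter_area(throughput_lb_per_h, specific_yield_lb_per_h_ft2):
    """Filter area (ft^2) needed to achieve a dry-solids throughput."""
    return throughput_lb_per_h / specific_yield_lb_per_h_ft2

def drum_area_ft2(diameter_ft, length_ft):
    """Cylindrical drum surface area (ft^2) = pi * D * L."""
    return math.pi * diameter_ft * length_ft

if __name__ == "__main__":
    target = 10_000.0   # lb/h of dry solids (assumed duty)
    yield_est = 50.0    # lb/(h*ft^2), mid-range assumption
    area = required_filter_area(target, yield_est)
    print(f"Required filter area: {area:.0f} ft^2")

    # Check against an assumed 10 ft diameter x 12 ft long drum.
    print(f"One 10 ft x 12 ft drum offers about {drum_area_ft2(10, 12):.0f} ft^2")
```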
Rotary vacuum-drum filter
[ "Chemistry", "Engineering" ]
3,533
[ "Separation processes", "Chemical equipment", "Filters", "Filtration", "nan" ]
6,945,419
https://en.wikipedia.org/wiki/Blondel%27s%20theorem
Blondel's theorem, named after its discoverer, French electrical engineer André Blondel, is the result of his attempt to simplify both the measurement of electrical energy and the validation of such measurements. The result is a simple rule that specifies the minimum number of watt-hour meters required to measure the consumption of energy in any system of electrical conductors. The theorem states that the power provided to a system of N conductors is equal to the algebraic sum of the power measured by N watt-meters. The N watt-meters are separately connected such that each one measures the current level in one of the N conductors and the potential level between that conductor and a common point. In a further simplification, if that common point is located on one of the conductors, that conductor's meter can be removed and only N-1 meters are required. An electrical energy meter is a watt-meter whose measurements are integrated over time, thus the theorem applies to watt-hour meters as well. Blondel wrote a paper on his results that was delivered to the International Electric Congress held in Chicago in 1893. Although he was not present at the Congress, his paper is included in the published Proceedings. Instead of using N-1 separate meters, the meters are combined into a single housing for commercial purposes such as measuring energy delivered to homes and businesses. Each pairing of a current measuring unit plus a potential measuring unit is then termed a stator or element. Thus, for example, a meter for a four wire service will include three elements. Blondel's Theorem simplifies the work of an electrical utility worker by specifying that an N wire service will be correctly measured by a N-1 element meter. Unfortunately, confusion arises for such workers due to the existence of meters that don't contain tidy pairings of single potential measuring units with single current measuring units. For example, a meter was previously used for four wire services containing two potential coils and three current coils and called a 2.5 element meter. Blondel Noncompliance Electric energy meters that meet the requirement of N-1 elements for an N wire service are often said to be Blondel Compliant. This label identifies the meter as one that will measure correctly under all conditions when correctly installed. However, a meter doesn't have to be Blondel compliant in order to provide suitably accurate measurements and industry practice often includes the use of such non compliant meters. The form 2S meter is extensively used in the metering of residential three wire services, despite being non compliant in such services. This common residential service consists of two 120 volt wires and one neutral wire. A Blondel compliant meter for such a service would need two elements (and a five jaw socket to accept such a meter), but the 2S meter is a single element meter. The 2S meter includes one potential measuring device (a coil or a voltmeter) and two current measuring devices. The current measuring devices provide a measurement equal to one half of the actual current value. The combination of a single potential coil and two so called half coils provides highly accurate metering under most conditions. The meter has been used since the early days of the electrical industry. The advantages were the lower cost of a single potential coil and the avoidance of interference between two elements driving a single disc in an induction meter. For line to line loads, the meter is Blondel compliant. 
Such loads are two wire loads and a single element meter suffices. The non compliance of the meter occurs in measuring line to neutral loads. The meter design approximates a two element measurement by combining a half current value with the potential value of the line to line connection. The line to line potential is exactly twice the line to neutral connection if the two line to neutral connections are exactly balanced. Twice the potential times half the current then approximates the actual power value with equality under balanced potential. In the case of line to line loads, two times the half current value times the potential value equals the actual power. Error is introduced if the two line to neutral potentials are not balanced and if the line to neutral loads are not equally distributed. That error is given by 0.5(V1-V2)(I1-I2) where V1 and I1 are the potential and current connected between one line and neutral and V2 and I2 are those connected between the other line and neutral. Since the industry typically maintains five percent accuracy in potential, the error will be acceptably low if the loads aren't heavily unbalanced. This same meter has been modified or installed in modified sockets and used for two wire, 120 volt services (relabeled as 2W on the meter face). The modification places the two half coils in series such that a full coil is created. In such installations, the single element meter is Blondel compliant. There is also a three wire 240/480 volt version that is not Blondel compliant. Also in use are three phase meters that are not Blondel compliant, such as forms 14S and 15S, but they can be easily replaced by modern meters and can be considered obsolete. References Electric power Eponymous theorems of physics
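As a concrete check of the error expression above, the sketch below compares the true power of a three-wire service carrying two line-to-neutral loads with what a single-element 2S-style meter registers (line-to-line potential times half of each current), and verifies that the difference equals 0.5(V1−V2)(I1−I2). The voltage and current values are made-up examples, not measurements from the article, and the loads are treated as purely resistive.

```python
# Single-element (2S-style) meter on a 120/240 V three-wire service:
# true power vs. metered power for line-to-neutral loads (in-phase, resistive).

def true_power(v1, i1, v2, i2):
    """Actual power of the two line-to-neutral loads."""
    return v1 * i1 + v2 * i2

def metered_power_2s(v1, i1, v2, i2):
    """2S approximation: full line-to-line voltage times half of each current."""
    return (v1 + v2) * 0.5 * (i1 + i2)

def blondel_error(v1, i1, v2, i2):
    """Error term quoted in the text: 0.5*(V1 - V2)*(I1 - I2)."""
    return 0.5 * (v1 - v2) * (i1 - i2)

if __name__ == "__main__":
    # Example (made-up) condition: small voltage imbalance,
    # strongly unbalanced line-to-neutral loading.
    v1, v2 = 121.0, 118.6
    i1, i2 = 30.0, 5.0

    p_true = true_power(v1, i1, v2, i2)
    p_meter = metered_power_2s(v1, i1, v2, i2)
    print(f"true power          = {p_true:.1f} W")
    print(f"metered power       = {p_meter:.1f} W")
    print(f"difference          = {p_true - p_meter:.1f} W")
    print(f"0.5*(V1-V2)*(I1-I2) = {blondel_error(v1, i1, v2, i2):.1f} W")
```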
Blondel's theorem
[ "Physics", "Engineering" ]
1,052
[ "Equations of physics", "Physical quantities", "Eponymous theorems of physics", "Power (physics)", "Electric power", "Electrical engineering", "Physics theorems" ]
15,901,488
https://en.wikipedia.org/wiki/Prescribed%20scalar%20curvature%20problem
In Riemannian geometry, a branch of mathematics, the prescribed scalar curvature problem is as follows: given a closed, smooth manifold M and a smooth, real-valued function ƒ on M, construct a Riemannian metric on M whose scalar curvature equals ƒ. Due primarily to the work of J. Kazdan and F. Warner in the 1970s, this problem is well understood. The solution in higher dimensions If the dimension of M is three or greater, then any smooth function ƒ which takes on a negative value somewhere is the scalar curvature of some Riemannian metric. The assumption that ƒ be negative somewhere is needed in general, since not all manifolds admit metrics which have strictly positive scalar curvature. (For example, the three-dimensional torus is such a manifold.) However, Kazdan and Warner proved that if M does admit some metric with strictly positive scalar curvature, then any smooth function ƒ is the scalar curvature of some Riemannian metric. See also Prescribed Ricci curvature problem Yamabe problem References Aubin, Thierry. Some nonlinear problems in Riemannian geometry. Springer Monographs in Mathematics, 1998. Kazdan, J., and Warner F. Scalar curvature and conformal deformation of Riemannian structure. Journal of Differential Geometry. 10 (1975). 113–134. Riemannian geometry Mathematical problems Scalar curvature
Prescribed scalar curvature problem
[ "Physics", "Mathematics" ]
285
[ "Geometric measurement", "Mathematical problems", "Physical quantities", "Curvature (mathematics)" ]
2,964,718
https://en.wikipedia.org/wiki/Electronic%20remittance%20advice
An electronic remittance advice (ERA) is an electronic data interchange (EDI) version of a medical insurance payment explanation. It provides details about providers' claims payments and, if a claim is denied, the required explanation. The explanations include the denial codes and their descriptions, which appear at the bottom of the ERA. ERAs are provided by health plans to providers. In the United States the industry standard ERA is HIPAA X12N 835 (HIPAA = Health Insurance Portability and Accountability Act; X12N = insurance subcommittees of ASC X12; 835 is the specific code number for ERA), which is sent from insurer to provider either directly or via a bank. See also Remittance advice References Citations Data interchange standards Health insurance
Electronic remittance advice
[ "Technology" ]
158
[ "Computer standards", "Data interchange standards" ]
2,970,014
https://en.wikipedia.org/wiki/Beam%20emittance
In accelerator physics, emittance is a property of a charged particle beam. It refers to the area occupied by the beam in a position-and-momentum phase space. Each particle in a beam can be described by its position and momentum along each of three orthogonal axes, for a total of six position and momentum coordinates. When the position and momentum for a single axis are plotted on a two dimensional graph, the average spread of the coordinates on this plot are the emittance. As such, a beam will have three emittances, one along each axis, which can be described independently. As particle momentum along an axis is usually described as an angle relative to that axis, an area on a position-momentum plot will have dimensions of length × angle (for example, millimeters × milliradian). Emittance is important for analysis of particle beams. As long as the beam is only subjected to conservative forces, Liouville's theorem shows that emittance is a conserved quantity. If the distribution over phase space is represented as a cloud in a plot (see figure), emittance is the area of the cloud. A variety of more exact definitions handle the fuzzy borders of the cloud and the case of a cloud that does not have an elliptical shape. In addition, the emittance along each axis is independent unless the beam passes through beamline elements (such as solenoid magnets) which correlate them. A low-emittance particle beam is a beam where the particles are confined to a small distance and have nearly the same momentum, which is a desirable property for ensuring that the entire beam is transported to its destination. In a colliding beam accelerator, keeping the emittance small means that the likelihood of particle interactions will be greater resulting in higher luminosity. In a synchrotron light source, low emittance means that the resulting x-ray beam will be small, and result in higher brightness. Definitions The coordinate system used to describe the motion of particles in an accelerator has three orthogonal axes, but rather than being centered on a fixed point in space, they are oriented with respect to the trajectory of an "ideal" particle moving through the accelerator with no deviation from the intended speed, position, or direction. Motion along this design trajectory is referred to as the longitudinal axis, and the two axes perpendicular to this trajectory (usually oriented horizontally and vertically) are referred to as transverse axes. The most common convention is for the longitudinal axis to be labelled and the transverse axes to be labelled and . Emittance has units of length, but is usually referred to as "length × angle", for example, "millimeter × milliradians". It can be measured in all three spatial dimensions. Geometric transverse emittance When a particle moves through a circular accelerator or storage ring, the position and angle of the particle in the x direction will trace an ellipse in phase space. (All of this section applies equivalently to and ) This ellipse can be described by the following equation: where x and are the position and angle of the particle, and are the Courant–Snyder (Twiss) parameters, calculated from the shape of the ellipse. The emittance is given by , and has units of length × angle. However, many sources will move the factor of into the units of emittance rather than including the specific value, giving units of "length × angle × ." This formula is the single particle emittance, which describes the area enclosed by the trajectory of a single particle in phase space. 
However, emittance is more useful as a description of the collective properties of the particles in a beam, rather than of a single particle. Since beam particles are not necessarily distributed uniformly in phase space, definitions of emittance for an entire beam will be based on the area of the ellipse required to enclose a specific fraction of the beam particles. If the beam is distributed in phase space with a Gaussian distribution, the emittance of the beam may be specified in terms of the root mean square value of and the fraction of the beam to be included in the emittance. The equation for the emittance of a Gaussian beam is: where is the root mean square width of the beam, is the Courant-Snyder , and is the fraction of the beam to be enclosed in the ellipse, given as a number between 0 and 1. Here the factor of is shown on the right of the equation, and would often be included in the units of emittance, rather than being multiplied in to the computed value. The value chosen for will depend on the application and the author, and a number of different choices exist in the literature. Some common choices and their equivalent definition of emittance are: {| class="wikitable" |- ! !! |- | || 0.15 |- | || 0.39 |- | || 0.87 |- | || 0.95 |} While the x and y axes are generally equivalent mathematically, in horizontal rings where the x coordinate represents the plane of the ring, consideration of dispersion can be added to the equation of the emittance. Because the magnetic force of a bending magnet is dependent on the energy of the particle being bent, particles of different energies will be bent along different trajectories through the magnet, even if their initial position and angle are the same. The effect of this dispersion on the beam emittance is given by: where is the dispersion at location s, is the ideal particle momentum, and is the root mean square of the momentum difference of the particles in the beam from the ideal momentum. (This definition assumes F=0.15) Longitudinal emittance The geometrical definition of longitudinal emittance is more complex than that of transverse emittance. While the and coordinates represent deviation from a reference trajectory which remains static, the coordinate represents deviation from a reference particle, which is itself moving with a specified energy. This deviation can be expressed in terms of distance along the reference trajectory, time of flight along the reference trajectory (how "early" or "late" the particle is compared to the reference), or phase (for a specified reference frequency). In turn, the coordinate is generally not expressed as an angle. Since represents the change in z over time, it corresponds to the forward motion of the particle. This can be given in absolute terms, as a velocity, momentum, or energy, or in relative terms, as a fraction of the position, momentum, or energy of the reference particle. However, the fundamental concept of emittance is the same—the positions of the particles in a beam are plotted along one axis of a phase space plot, the rate of change of those positions over time is plotted on the other axis, and the emittance is a measure of the area occupied on that plot. One possible definition of longitudinal emittance is given by: where the integral is taken along a path which tightly encloses the beam particles in phase space. Here is the reference frequency and the longitudinal coordinate is the phase of the particles relative to a reference particle. 
Longitudinal equations such as this one often must be solved numerically, rather than analytically. RMS emittance The geometric definition of emittance assumes that the distribution of particles in phase space can be reasonably well characterized by an ellipse. In addition, the definitions using the root mean square of the particle distribution assume a Gaussian particle distribution. In cases where these assumptions do not hold, it is still possible to define a beam emittance using the moments of the distribution. Here, the RMS emittance () is defined to be, where is the variance of the particle's position, is the variance of the angle a particle makes with the direction of travel in the accelerator ( with along the direction of travel), and represents an angle-position correlation of particles in the beam. This definition is equivalent to the geometric emittance in the case of an elliptical particle distribution in phase space. The emittance may also be expressed as the determinant of the variance-covariance matrix of the beam's phase space coordinates where it becomes clear that quantity describes an effective area occupied by the beam in terms of its second order statistics. Depending on context, some definitions of RMS emittance will add a scaling factor to correspond to a fraction of the total distribution, to facilitate comparison with geometric emittances using the same fraction. RMS emittance in higher dimensions It is sometimes useful to talk about phase space area for either four dimensional transverse phase space (IE , , , ) or the full six dimensional phase space of particles (IE , , , , , ). The RMS emittance generalizes to full three dimensional space as shown: In the absences of correlations between different axes in the particle accelerator, most of these matrix elements become zero and we are left with a product of the emittance along each axis. Normalized emittance Although the previous definitions of emittance remain constant for linear beam transport, they do change when the particles undergo acceleration (an effect called adiabatic damping). In some applications, such as for linear accelerators, photoinjectors, and the accelerating sections of larger systems, it becomes important to compare beam quality across different energies. Normalized emittance, which is invariant under acceleration, is used for this purpose. Normalized emittance in one dimension is given by: The angle in the prior definition has been replaced with the normalized transverse momentum , where is the Lorentz factor and is the normalized transverse velocity. Normalized emittance is related to the previous definitions of emittance through and the normalized velocity in the direction of the beam's travel (): The normalized emittance does not change as a function of energy and so can be used to indicate beam degradation if the particles are accelerated. For speeds close to the speed of light, where is close to one, the emittance is approximately inversely proportional to the energy. In this case, the physical width of the beam will vary inversely with the square root of the energy. Higher dimensional versions of the normalized emittance can be defined in analogy to the RMS version by replacing all angles with their corresponding momenta. Measurement Quadrupole scan technique One of the most fundamental methods of measuring beam emittance is the quadrupole scan method. 
The emittance of the beam for a particular plane of interest (i.e., horizontal or vertical) can be obtained by varying the field strength of a quadrupole (or quadrupoles) upstream of a monitor (i.e., a wire or a screen). The properties of a beam can be described as the following beam matrix. where is the derivative of x with respect to the longitudinal coordinate. The forces experienced by the beam as it travels down the beam line and passes through the quadrupole(s) are described using the transfer matrix (referenced to transfer maps page) of the beam line, including the quadrupole(s) and other beam line components such as drifts: Here is the transfer matrix between the original beam position and the quadrupole(s), is the transfer matrix of the quadrupole(s), and is the transfer matrix between the quadrupole(s) and the monitor screen. During the quadrupole scan process, and stay constant, and changes with the field strength of the quadrupole(s). The final beam when it reaches the monitor screen at distance s from its original position can be described as another beam matrix : The final beam matrix can be calculated from the original beam matrix by doing matrix multiplications with the beam line transfer matrix : Where is the transpose of . Now, focusing on the (1,1) element of the final beam matrix throughout the matrix multiplications, we get the equation: Here the middle term has a factor of 2 because . Now divide both sides of the above equation by , the equation becomes: Which is a quadratic equation of the variable . Since the RMS emittance RMS is defined to be the following. The RMS emittance of the original beam can be calculated using its beam matrix elements: To obtain the emittance measurement, the following procedure is employed: For each value (or value combination) of the quadrupole(s), the beam line transfer transfer matrix is calculated to determine values of and . The beam propagates through the varied beam line, and is observed at the monitor screen, where the beam size is measured. Repeat step 1 and 2 to obtain a series of values for and , fit the results with a parabola . Equate parabola fit parameters with original beam matrix elements: , , . Calculate RMS emittance of the original beam: If the length of the quadrupole is short compared to its focal length , where is the field strength of the quadrupole, its transfer matrix can be approximated by the thin lens approximation: Then the RMS emittance can be calculated by fitting a parabola to values of measured beam size versus quadrupole strength . By adding additional quadrupoles, this technique can be extended to a full 4-D reconstruction. Mask-based reconstruction Another fundamental method for measuring emittance is by using a predefined mask to imprint a pattern on the beam and sample the remaining beam at a screen downstream.  Two such masks are pepper pots and TEM grids.  A schematic of the TEM grid measurement is shown below. By using the knowledge of the spacing of the features in the mask one can extract information about the beam size at the mask plane.  By measuring the spacing between the same features on the sampled beam downstream, one can extract information about the angles in the beam.  The quantities of merit can be extracted as described in Marx et al. The choice of mask is generally dependent on the charge of the beam; low-charge beams are better suited to the TEM grid mask over the pepper pot, as more of the beam is transmitted. 
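The reconstruction described above is linear in the unknown beam-matrix elements, so it can be written as a small least-squares problem. The sketch below is a schematic illustration under simplifying assumptions: a thin-lens quadrupole followed by a pure drift, made-up Twiss parameters, emittance and scan range, and no dispersion or space charge. It generates a Gaussian beam of known emittance, "measures" the squared beam size at a downstream screen for several quadrupole strengths, solves for the sigma-matrix elements, and recovers the RMS emittance via the standard relation ε = sqrt(σ11·σ22 − σ12²) used in the RMS emittance section above.

```python
import numpy as np

# Quadrupole-scan emittance reconstruction sketch (thin-lens quad + drift).
# All numbers below are illustrative assumptions, not values from the text.

rng = np.random.default_rng(0)

# "True" beam: Twiss beta = 5 m, alpha = 0, geometric emittance 1e-6 m*rad.
eps_true, beta_t, alpha_t = 1e-6, 5.0, 0.0
gamma_t = (1 + alpha_t**2) / beta_t
cov = eps_true * np.array([[beta_t, -alpha_t], [-alpha_t, gamma_t]])
particles = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)  # columns: x, x'

def transfer(k, lq=0.1, d=2.0):
    """Thin-lens quadrupole of strength k (integrated k*lq) followed by a drift d."""
    quad = np.array([[1.0, 0.0], [-k * lq, 1.0]])
    drift = np.array([[1.0, d], [0.0, 1.0]])
    return drift @ quad

ks = np.linspace(-8, 8, 9)          # quadrupole strengths (assumed scan range)
rows, meas = [], []
for k in ks:
    m11, m12 = transfer(k)[0]
    x_screen = particles @ np.array([m11, m12])   # propagate each particle's x
    meas.append(np.var(x_screen))                 # "measured" sigma11 at the screen
    rows.append([m11**2, 2 * m11 * m12, m12**2])  # linear model in (s11, s12, s22)

s11, s12, s22 = np.linalg.lstsq(np.array(rows), np.array(meas), rcond=None)[0]
eps_rms = np.sqrt(s11 * s22 - s12**2)
print(f"reconstructed emittance: {eps_rms:.3e} m*rad (true {eps_true:.0e})")
```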
Emittance of electrons versus heavy particles To understand why the RMS emittance takes on a particular value in a storage ring, one needs to distinguish between electron storage rings and storage rings with heavier particles (such as protons). In an electron storage ring, radiation is an important effect, whereas when other particles are stored, it is typically a small effect. When radiation is important, the particles undergo radiation damping (which slowly decreases emittance turn after turn) and quantum excitation causing diffusion which leads to an equilibrium emittance. When no radiation is present, the emittances remain constant (apart from impedance effects and intrabeam scattering). In this case, the emittance is determined by the initial particle distribution. In particular if one injects a "small" emittance, it remains small, whereas if one injects a "large" emittance, it remains large. Acceptance The acceptance, also called admittance, is the maximum emittance that a beam transport system or analyzing system is able to transmit. This is the size of the chamber transformed into phase space and does not suffer from the ambiguities of the definition of beam emittance. Conservation of emittance Lenses can focus a beam, reducing its size in one transverse dimension while increasing its angular spread, but cannot change the total emittance. This is a result of Liouville's theorem. Ways of reducing the beam emittance include radiation damping, stochastic cooling, and electron cooling. Emittance and brightness Emittance is also related to the brightness of the beam. In microscopy brightness is very often used, because it includes the current in the beam and most systems are circularly symmetric. Consider the brightness of the incident beam at the sample, where indicates the beam current and represents the total emittance of the incident beam and the wavelength of the incident electron. The intrinsic emittance , describing a normal distribution in the initial phase space, is diffused by the emittance introduced by aberrations . The total emittance is approximately the sum in quadrature. Under the assumption of uniform illumination of the aperture with current per unit angle , we have the following emittance-brightness relation, See also Accelerator physics Etendue Mean transverse energy References Accelerator physics
Beam emittance
[ "Physics" ]
3,384
[ "Applied and interdisciplinary physics", "Accelerator physics", "Experimental physics" ]
2,970,044
https://en.wikipedia.org/wiki/Radiation%20damping
Radiation damping in accelerator physics is a phenomenon whereby the betatron oscillations and longitudinal oscillations of a particle are damped due to energy loss by synchrotron radiation. It can be used to reduce the beam emittance of a high-velocity charged particle beam. The two main ways of using radiation damping to reduce the emittance of a particle beam are the use of undulators and damping rings (often containing undulators), both relying on the same principle of inducing synchrotron radiation to reduce the particles' momentum, then replacing the momentum only in the desired direction of motion. Damping rings As particles move in a closed orbit, the lateral acceleration causes them to emit synchrotron radiation, thereby reducing the size of their momentum vectors (relative to the design orbit) without changing their orientation (ignoring quantum effects for the moment). In the longitudinal direction, the particle momentum lost to radiation is replaced by accelerating sections (RF cavities) installed in the beam path, so that an equilibrium is reached at the design energy of the accelerator. Since this does not happen in the transverse direction, where the emittance of the beam is only increased by the quantization of radiation losses (quantum effects), the transverse equilibrium emittance of the particle beam will be smaller with large radiation losses than with small radiation losses. Because high orbit curvatures (small curvature radii) increase the emission of synchrotron radiation, damping rings are often small. If long beams with many particle bunches are needed to fill a larger storage ring, the damping ring may be extended with long straight sections. Undulators and wigglers When faster damping is required than can be provided by the turns inherent in a damping ring, it is common to add undulator or wiggler magnets to induce more synchrotron radiation. These are devices with periodic magnetic fields that cause the particles to oscillate transversely, equivalent to many small tight turns. They operate on the same principle as damping rings, and the oscillation causes the charged particles to emit synchrotron radiation. The many small turns in an undulator have the advantage that the cone of synchrotron radiation is all in one direction, forward. This is easier to shield than the broad fan produced by a large turn. Energy loss The power radiated by a charged particle is given by a generalization of the Larmor formula derived by Liénard in 1898, where is the velocity of the particle, the acceleration, e the elementary charge, the vacuum permittivity, the Lorentz factor and the speed of light. Note: is the momentum and is the mass of the particle. Linac and RF cavities In the case of an acceleration parallel to the longitudinal axis ( ), the radiated power can be calculated as below Inserting in Larmor's formula gives Bending In the case of an acceleration perpendicular to the longitudinal axis ( ) Inserting in Larmor's formula gives (Hint: Factor and use ) Using magnetic field perpendicular to velocity Using radius of curvature and inserting in gives Electron Here are some useful formulas to calculate the power radiated by an electron accelerated by a magnetic field perpendicular to the velocity and . where , is the perpendicular magnetic field, the electron mass.
Using the classical electron radius where is the radius of curvature, can also be derived from particle coordinates (using common 6D phase space coordinates system x,x',y,y',s,): Note: The transverse magnetic field is often normalized using the magnet rigidity: Field expansion (using Laurent_series): where is the transverse field expressed in [T], the multipole field strengths (skew and normal) expressed in , the particle position and the multipole order, k=0 for a dipole,k=1 for a quadrupole,k=2 for a sextupole, etc... See also Particle beam cooling References External links SLAC damping rings home page, including a non-technical description of the damping rings at SLAC. Studies Pertaining to a Small Damping Ring for the International Linear Collider, a report describing the constraints on minimum damping ring size. Accelerator physics Synchrotron radiation
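As a worked illustration of the bending-field energy loss discussed above, the sketch below evaluates the standard ultrarelativistic expressions for the instantaneous synchrotron power, P ≈ e²cγ⁴/(6πε₀ρ²), and the energy lost per turn on a circular orbit, U₀ ≈ e²γ⁴/(3ε₀ρ), for an electron. The 3 GeV beam energy and 10 m bending radius are made-up example numbers, not values from the article, and the formulas assume β ≈ 1 with the whole circumference made of bends of radius ρ.

```python
import math

# Synchrotron radiation from an electron bent by a perpendicular magnetic field.
# Example numbers (3 GeV, 10 m bending radius) are illustrative assumptions.

E_CHARGE = 1.602176634e-19      # elementary charge, C
EPS0     = 8.8541878128e-12     # vacuum permittivity, F/m
C        = 2.99792458e8         # speed of light, m/s
ME_EV    = 0.51099895e6         # electron rest energy, eV

def power_bending(energy_eV, rho_m):
    """Instantaneous radiated power P = e^2 c gamma^4 / (6 pi eps0 rho^2), beta ~ 1."""
    gamma = energy_eV / ME_EV
    return E_CHARGE**2 * C * gamma**4 / (6 * math.pi * EPS0 * rho_m**2)

def energy_loss_per_turn_eV(energy_eV, rho_m):
    """U0 = e^2 gamma^4 / (3 eps0 rho): power integrated over one turn, beta ~ 1."""
    gamma = energy_eV / ME_EV
    u0_joule = E_CHARGE**2 * gamma**4 / (3 * EPS0 * rho_m)
    return u0_joule / E_CHARGE

if __name__ == "__main__":
    E, rho = 3e9, 10.0   # 3 GeV electron, 10 m bending radius (assumed)
    print(f"P  = {power_bending(E, rho):.3e} W per electron")
    print(f"U0 = {energy_loss_per_turn_eV(E, rho) / 1e3:.1f} keV per turn")
```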
Radiation damping
[ "Physics" ]
881
[ "Applied and interdisciplinary physics", "Accelerator physics", "Experimental physics" ]
1,500,520
https://en.wikipedia.org/wiki/Cross%20sea
A cross sea (also referred to as a squared sea or square waves) is a sea state of wind-generated ocean waves that form nonparallel wave systems. Cross seas have a large amount of directional spreading. This may occur when water waves from one weather system continue despite a shift in wind. Waves generated by the new wind run at an angle to the old. Two weather systems that are far from each other may create a cross sea when the waves from the systems meet at a place far from either weather system. Until the older waves have dissipated, they can present a perilous sea hazard. This sea state is fairly common and a large percentage of ship accidents have been found to occur in this state. Vessels fare better against large waves when sailing directly perpendicular to oncoming surf. In a cross sea scenario, that becomes impossible as sailing into one set of waves necessitates sailing parallel to the other. A cross swell is generated when the wave systems are longer-period swells, rather than short-period wind-generated waves. Notes References Water waves
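The "square" pattern of a cross sea can be visualised as the linear superposition of two wave trains travelling at an angle to each other. The sketch below is an idealised linear deep-water model with made-up amplitudes, wavelengths and an 80-degree crossing angle; real cross seas are nonlinear and far more irregular, so this only illustrates how two crossing systems combine.

```python
import numpy as np

# Idealised cross sea: two linear deep-water wave trains crossing at an angle.
# Amplitudes, wavelengths and the crossing angle are illustrative assumptions.

g = 9.81  # gravitational acceleration, m/s^2

def wave(a, wavelength, heading_deg, x, y, t):
    """Linear wave train a*cos(k.r - w*t) with deep-water dispersion w = sqrt(g*k)."""
    k = 2 * np.pi / wavelength
    w = np.sqrt(g * k)
    th = np.radians(heading_deg)
    return a * np.cos(k * (x * np.cos(th) + y * np.sin(th)) - w * t)

x, y = np.meshgrid(np.linspace(0, 400, 200), np.linspace(0, 400, 200))
t = 0.0
# Old swell running along x plus a newer wind sea crossing at 80 degrees.
eta = wave(1.0, 80.0, 0.0, x, y, t) + wave(0.8, 60.0, 80.0, x, y, t)

print(f"max crest-to-trough height in the combined field: {eta.max() - eta.min():.2f} m")
```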
Cross sea
[ "Physics", "Chemistry" ]
217
[ "Water waves", "Waves", "Physical phenomena", "Fluid dynamics" ]
1,501,608
https://en.wikipedia.org/wiki/Montmorillonite
Montmorillonite is a very soft phyllosilicate group of minerals that form when they precipitate from water solution as microscopic crystals, known as clay. It is named after Montmorillon in France. Montmorillonite, a member of the smectite group, is a 2:1 clay, meaning that it has two tetrahedral sheets of silica sandwiching a central octahedral sheet of alumina. The particles are plate-shaped with an average diameter around 1 μm and a thickness of 0.96 nm; magnification of about 25,000 times, using an electron microscope, is required to resolve individual clay particles. Members of this group include saponite, nontronite, beidellite, and hectorite. Montmorillonite is a subclass of smectite, a 2:1 phyllosilicate mineral characterized as having greater than 50% octahedral charge; its cation exchange capacity is due to isomorphous substitution of Mg for Al in the central alumina plane. The substitution of lower valence cations in such instances leaves the nearby oxygen atoms with a net negative charge that can attract cations. In contrast, beidellite is smectite with greater than 50% tetrahedral charge originating from isomorphous substitution of Al for Si in the silica sheet. The individual crystals of montmorillonite clay are not tightly bound hence water can intervene, causing the clay to swell, hence montmorillonite is a characteristic component of swelling soil. The water content of montmorillonite is variable and it increases greatly in volume when it absorbs water. Chemically, it is hydrated sodium calcium aluminium magnesium silicate hydroxide . Potassium, iron, and other cations are common substitutes, and the exact ratio of cations varies with source. It often occurs intermixed with chlorite, muscovite, illite, cookeite, and kaolinite. Cave conditions Montmorillonite can be concentrated and transformed within cave environments. The natural weathering of the cave can leave behind concentrations of aluminosilicates which were contained within the bedrock. Montmorillonite can form slowly in solutions of aluminosilicates. High concentrations and long periods of time can aid in its formation. Montmorillonite can then transform to palygorskite under dry conditions and to halloysite-10Å (endellite) in acidic conditions (pH 5 or lower). Halloysite-10Å can further transform into halloysite-7Å by drying. Uses Montmorillonite is used in the oil drilling industry as a component of drilling mud, making the mud slurry viscous, which helps to keep the drill bit cool, and to remove drilled solids. It is also used as a soil additive to hold soil water in drought-prone soils, used in the construction of earthen dams and levees, and to prevent the leakage of fluids. It is also used as a component of foundry sand and as a desiccant to remove moisture from air and gases. Montmorillonite clays have been extensively used in catalytic processes. Cracking catalysts have used montmorillonite clays for over 60 years. Other acid-based catalysts use acid-treated montmorillonite clays. Similar to many other clays, montmorillonite swells with the addition of water. Montmorillonites expand considerably more than other clays due to water penetrating the interlayer molecular spaces and concomitant adsorption. The amount of expansion is due largely to the type of exchangeable cation contained in the sample. The presence of sodium as the predominant exchangeable cation can result in the clay swelling to several times its original volume. 
Hence, sodium montmorillonite has come to be used as the major constituent in nonexplosive agents for splitting rock in natural stone quarries in an effort to limit the amount of waste, or for the demolition of concrete structures where the use of explosive charges is unacceptable. This swelling property also makes montmorillonite-containing bentonite useful as an annular seal or plug for water wells and as a protective liner for landfills. Other uses include use as an anticaking agent in animal feed, in papermaking to minimize deposit formation, and as a retention and drainage aid component. Montmorillonite has also been used in cosmetics. Sodium montmorillonite is also used as the base of some cat litter products, due to its adsorbent and clumping properties. Montmorillonite can be used to remove arsenic from wastewater. Calcined clay products Montmorillonite can be calcined to produce arcillite, a porous material. This calcined clay is sold as a soil conditioner for playing fields and in other soil products, such as bonsai soil, as an alternative to akadama. Medicine and pharmacology Montmorillonite is effective as an adsorbent of heavy metals; however, the impact this has on human health is unknown. It is assumed that heavy metal adsorption applies only when the clay is in direct contact with the metals, so it will not help when ingested, as the clay almost certainly does not pass through the intestinal mucous membranes. For external use, montmorillonite has been used to treat contact dermatitis. Pet food Montmorillonite clay is added to some dog and cat foods as an anti-caking agent and because it may provide some resistance to environmental toxins, though research on the subject is not yet conclusive. In fine powder form, it can also be used as a flocculant in ponds: tossed onto the surface, it clouds the water as it sinks, attracting minute particles and then settling with them to the bottom, cleaning the water. Koi and goldfish (carp) then feed on the clumps, which can aid the fishes' digestion. It is sold in pond supply shops. Discovery Montmorillonite was first described in 1847 for an occurrence in Montmorillon in the department of Vienne, France, more than 50 years before the discovery of bentonite in the US. It is found in many locations worldwide and known by other names. Recently, a new source of montmorillonite has been explored in the Sulaiman Mountains of Pakistan. See also References Papke, Keith G. Montmorillonite, Bentonite and Fuller’s Earth Deposits in Nevada, Nevada Bureau of Mines Bulletin 76, Mackay School of Mines, University of Nevada-Reno, 1970. Mineral Galleries Mineral web Smectite group Magnesium minerals Sodium minerals Calcium minerals Desiccants Medicinal clay Cave minerals Luminescent minerals Monoclinic minerals Minerals in space group 12 Catalysts
Montmorillonite
[ "Physics", "Chemistry" ]
1,410
[ "Catalysis", "Catalysts", "Luminescence", "Luminescent minerals", "Desiccants", "Materials", "Chemical kinetics", "Matter" ]
1,502,748
https://en.wikipedia.org/wiki/La%20Noumbi
La Noumbi is a floating production storage and offloading (FPSO) unit operated by Perenco. The vessel, converted from the former Finnish Aframax crude oil tanker Tempera by Keppel Corporation, will replace an older FPSO unit in the Yombo field off the Republic of Congo in 2018. Built at Sumitomo Heavy Industries in Japan in 2002, Tempera was the first ship to utilize the double acting tanker (DAT) concept in which the vessel is designed to travel ahead in open water and astern in severe ice conditions. Tempera and her sister ship Mastera, built in 2003, were used mainly to transport crude oil, year-round, from the Russian oil terminal in Primorsk to Neste Oil refineries in Porvoo and Naantali. In 2015, Neste sold Tempera to the oil and gas company Perenco for conversion to an FPSO. Concept Although icebreaking cargo ships had been built in the past, their hull forms were always compromises between open water performance and icebreaking capability. A good icebreaking bow, designed to break the ice by bending it under the ship's weight, has very poor open water characteristics and is subjected to slamming in heavy weather while a hydrodynamically efficient bulbous bow greatly increases the ice resistance. However, already in the late 1800s captains operating ships in icebound waters discovered that sometimes it was easier to break through ice by running their vessels astern. This was because the forward-facing propellers generated a water flow that lowered the resistance by reducing friction between the ship's hull and ice. These findings resulted in the adoption of bow propellers in older icebreakers operating in the Great Lakes and the Baltic Sea, but as forward-facing propellers have a very low propulsion efficiency and the steering ability of a ship is greatly reduced when running astern, it could not be considered a main operating mode for merchant ships. For this reason it was not until the development of electric podded propulsion, ABB's Azipod, that the concept of double acting ships became feasible. The superiority of podded propulsion in icebreaking merchant ships, especially when running astern, was proved when Finnish product tankers Uikku and Lunni were converted to Azipod propulsion in 1993 and 1994, respectively. Even though the ships were originally designed with icebreaking capability in mind, after the conversion ice resistance in level ice when running astern was only 40% of that when breaking ice ahead despite the ships being equipped with an icebreaking bow and not designed to break ice astern. History Development and construction Following the successful operation of the Azipod-converted tankers Uikku and Lunni in the Northern Sea Route, Kværner Masa-Yards Arctic Research Centre developed the first double acting tanker concept in the early 1990s. The 90,000 DWT tankers were designed to transport oil and gas condensate from the Pechora Sea in the Russian Arctic, where ice conditions during winter can be considered moderate and the ships would operate mainly in astern mode, first to Murmansk and then Rotterdam, where most of the distance can be travelled in open water year round. Other early double acting concepts included a similar ship with an icebreaking bow that would be utilized in summer time when the ship was traveling in areas with low ice concentration but with a risk of colliding with multi-year ice blocks. 
In early 2000s Fortum Shipping, the transportation arm of the Finnish energy company Fortum, started a major fleet renewal program to increase the efficiency and reduce the average age of its vessels. The program also included replacing the company's old tankers, such as the 90,000-ton Natura, that were used to transport crude oil to the company's oil refineries in the Gulf of Finland. The old ships had traffic restrictions during the worst part of the winter because of their lower ice class of 1C and could not deliver their cargo all the way to the refineries in Porvoo and Naantali because they were denied icebreaker assistance. When this happened, the oil had to be transported to smaller ships of higher ice class at the edge of the ice — a practice that was both uneconomical and hazardous. To solve these problems Kværner Masa-Yards Arctic Research Centre developed a new 100,000 DWT Aframax tanker concept together with Fortum Shipping, which ordered two vessels from Sumitomo Heavy Industries in 2001. The new ships were designed to the highest Finnish-Swedish ice class, 1A Super, and to be capable of operating in all ice conditions encountered in the Baltic Sea. The possibility to operate in the Pechora Sea was also taken into account in the design process. Extensive ice model tests confirmed the vessel's operational capability in level ice, rubble fields, ice channels and ridges. The world's first double acting tanker and the largest 1A Super class oil tanker at that time, Tempera, was delivered from Yokosuka shipyard in late August 2002. She was awarded the Ship Of The Year 2002 award by the Society of Naval Architects of Japan (SNAJ). The second ship, Mastera, was delivered in the following year. Both ships were named after the company's oil products. While the price of the contract was not made public, the company later admitted that the 60–70 million euro estimate was "quite close to the truth". The ships were owned by ABB Credit, which leased them to Fortum for ten years. The leasing business was later sold to SEB Leasing. Tempera (2002–2018) Since the beginning, Tempera and Mastera were used primarily for year-round transportation of crude oil from the Russian oil terminal of Primorsk to company's own oil terminals in Porvoo and Naantali, where they have been the only ships capable of operating without delays or problems during the harshest winters. Occasionally they have carried cargoes in the Gulf of Bothnia and even outside the Baltic Sea depending on the amount of oil in the refineries' storage tanks. Tempera has also visited Murmansk, where she loaded crude oil from FSO Belokamenka. However, due to draft restrictions the ships could not carry a full cargo of 100,000 tons to the port of Naantali until April 2010 — they had to stop at Porvoo on the way and unload 20,000 tons of oil to reduce the draft of the vessel. In 2005 Fortum's oil division was transferred to the re-established Neste Oil and the management of the ships, including the double acting tankers, was handed over to a subsidiary company Neste Shipping. Throughout their career, Tempera and Mastera were the only double acting tankers operating in the Baltic Sea. While other double acting ships have been built in the recent years, the tankers operated by Neste Shipping are also the only ones equipped with a bulbous bow designed primarily with open water performance in mind — the tankers and container ships built for the Russian Arctic have a more traditional icebreaking bow due to the more severe ice conditions. 
On 29 May 2015, Neste announced a decision to sell Tempera to Perenco. La Noumbi (2018 onwards) In 2017, Tempera left the Baltic Sea and sailed around the Cape of Good Hope, headed for Singapore, where she would be converted to a floating production storage and offloading (FPSO) unit at Keppel Corporation shipyard in 2017–2018, after which she would replace FPSO Conkouati at the Yombo field off the Republic of Congo. The conversion included installing additional accommodation capacity as well as production-related equipment. The converted vessel was renamed La Noumbi on 26 July 2018. Design (as oil tanker) General characteristics Tempera is long overall and between perpendiculars. The moulded breadth and depth of her hull are and , respectively, and from keel to mast she measured . Her gross tonnage is 64,259 and net tonnage 30,846, and the deadweight tonnage corresponding to the draft at summer freeboard, , is 106,034 tons, slightly less than in Mastera. In ballast Tempera draws only of water. The foreship of Tempera is designed for open water performance with a bulbous bow to maximize the hydrodynamic efficiency. The ship is, just like any other ice-strengthened vessel, also capable of running ahead in light ice conditions. The stern is, however, shaped like an icebreaker's bow, and Tempera is designed to operate independently in the most severe ice conditions of the Baltic Sea. For this purpose, the ship is also equipped with two bridges for navigating in both directions. The ship is served by a crew of 15 to 20, depending on operating conditions during winter and maintenance work during summer. Tempera is classified by Lloyd's Register of Shipping. Cargo tanks and handling Tempera has six pairs of heated, partially epoxy-coated cargo tanks and one pair of fully coated slop tanks, all divided by a longitudinal center bulkhead and protected by a double hull, with a combined capacity of at 98% filling. For cargo handling she has three electrically driven cargo oil pumps with a capacity of 3,500 m3/h × 130 m and one cargo oil stripping pump rated for 300 m3/h × 130 m. The cargo can be loaded in 10 hours and discharged in 12 hours. Each cargo tank has two automated tank cleaning machines and each slop tank has one, in addition to openings for portable tank cleaning machines. The ship's ballast water capacity of is divided into sixteen segregated ballast tanks, six pairs in the double hull around the cargo tanks, two fore peak tanks and two aft peak tanks. She has two electrically driven ballast pumps rated at 2,500 m3/h × 35 m and 3,000 m3/h × 70 m. The ballast capacity is needed to maintain correct trim, especially during drydocking: the empty ship has an aft trim of and an uneven weight distribution may damage the hull girder. Power and propulsion Tempera has a diesel-electric powertrain with four main generating sets, two nine-cylinder Wärtsilä 9L38B and two six-cylinder 6L38B four-stroke medium-speed diesel engines, with a combined output of . The main engines are equipped with exhaust gas economizers. In addition, Tempera has one auxiliary diesel generator that can be used when the ship is in port. The auxiliary generator, a six-cylinder Wärtsilä 6L26A, has an output of . While underway at , the fuel consumption of the ship's main engines is 56 tons of heavy fuel oil per day when loaded and 40 tons per day in ballast. Her tanks can store of heavy fuel oil for the main engines, of diesel oil for the auxiliary generator, steam boilers and inert gas system, and of lubrication oil. 
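As a rough plausibility check, the quoted fuel consumption can be converted into an implied average engine output. The heating value and engine efficiency used below are assumed typical values, not figures from the text, and the comparison with the 16 MW propulsion pod described in the next section is only indicative.

```python
# Rough estimate of the average power implied by the quoted fuel use of
# 56 tons of heavy fuel oil per day when loaded.  The heating value and
# overall engine efficiency are assumed typical values, not data from
# the article.

fuel_tons_per_day = 56.0
lower_heating_value_mj_per_kg = 40.5   # assumed, typical for heavy fuel oil
engine_efficiency = 0.45               # assumed for a medium-speed diesel

fuel_energy_mj_per_day = fuel_tons_per_day * 1000 * lower_heating_value_mj_per_kg
fuel_power_mw = fuel_energy_mj_per_day / (24 * 3600)   # MJ/s equals MW
useful_power_mw = fuel_power_mw * engine_efficiency

print(f"Fuel energy input: {fuel_power_mw:.1f} MW")
print(f"Useful output:     {useful_power_mw:.1f} MW "
      f"(the Azipod unit described below is rated at 16 MW)")
```

Under these assumptions the main engines deliver roughly 12 MW at service speed, which is consistent with a 16 MW propulsion unit running below its nominal rating.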
Tempera and her sister ship are the first tankers propelled by ABB Azipod electric azimuth thrusters capable of rotating 360 degrees around the vertical axis. The pulling-type VI2500 pods in these two ships, with a nominal output of 16 MW and fixed-pitch propellers turning at 86 rpm, are the most powerful ice-strengthened Azipod units ABB has ever produced. The forward-facing propeller increases the propulsion efficiency due to optimal water flow to the propeller and thus improves fuel efficiency. In addition, an azimuthing thruster's ability to direct thrust in any direction results in manoeuvrability characteristics that exceed those of ships utilizing traditional mechanical shaftlines and rudders. The turning circle of Mastera and Tempera is only half a kilometer at full speed, half of that of a traditional oil tanker of the same size. This is a significant safety factor, as the stopping distance of a traditional tanker can be up to . For maneuvering at low speeds in harbours, Tempera is also equipped with two 1,750 kW bow thrusters. Icebreaking capability The icebreaking capability of the double acting tankers proved superior to that of other ships from the beginning: in shuttle service between Primorsk, Russia, and the Finnish refineries the tankers require no icebreaker assistance and have even acted as an icebreaker for other merchant ships that have utilized the wide channel opened by the Aframax tanker. However, this has not been intentional; when the world's largest 1A Super class ice tanker, Stena Arctica, also owned by Neste Shipping, became stuck in ice outside the port of Primorsk during the winter of 2009–2010, a decision was made to leave the assisting to Russian icebreakers. The ships have performed beyond expectations in both level ice up to thick, which can be broken in continuous motion at , and ridges up to deep, which can be penetrated either by allowing the forward-facing propeller to mill (crush) the ice or by breaking the ridge apart with the propeller wash. While the vessels have been immobilized occasionally by pack ice, they have been able to free themselves by using the rotating propeller pod to clear the ice around the hull. References 2002 ships Double acting ships Floating production storage and offloading vessels Ships built by Sumitomo Heavy Industries
La Noumbi
[ "Chemistry" ]
2,703
[ "Floating production storage and offloading vessels", "Petroleum technology" ]
1,502,835
https://en.wikipedia.org/wiki/Magnetohydrodynamic%20generator
A magnetohydrodynamic generator (MHD generator) is a magnetohydrodynamic converter that transforms thermal energy and kinetic energy directly into electricity. An MHD generator, like a conventional generator, relies on moving a conductor through a magnetic field to generate electric current. The MHD generator uses hot conductive ionized gas (a plasma) as the moving conductor. The mechanical dynamo, in contrast, uses the motion of mechanical devices to accomplish this. MHD generators are different from traditional electric generators in that they operate without moving parts (e.g. no turbines), so there is no limit on the upper temperature at which they can operate. They have the highest known theoretical thermodynamic efficiency of any electrical generation method. MHD has been developed for use in combined cycle power plants to increase the efficiency of electric generation, especially when burning coal or natural gas. The hot exhaust gas from an MHD generator can heat the boilers of a steam power plant, increasing overall efficiency. Practical MHD generators have been developed for fossil fuels, but these were overtaken by less expensive combined cycles in which the exhaust of a gas turbine or molten carbonate fuel cell heats steam to power a steam turbine. MHD dynamos are the complement of MHD accelerators, which have been applied to pump liquid metals, seawater, and plasmas. Natural MHD dynamos are an active area of research in plasma physics and are of great interest to the geophysics and astrophysics communities since the magnetic fields of the Earth and Sun are produced by these natural dynamos. Background In a conventional thermal power plant, like a coal-fired power station or nuclear power plant, the energy released by the chemical or nuclear reactions is absorbed in a working fluid, usually water. In a coal plant, for instance, the coal burns in an open chamber which is surrounded by tubes carrying water. The heat from the combustion is absorbed by the water, which boils into steam. The steam is then sent into a steam turbine, which extracts energy from the steam by turning it into rotational motion. The steam is slowed and cooled as it passes through the turbine. The rotational motion then turns an electrical generator. The efficiency of this overall cycle, known as the Rankine cycle, is a function of the temperature difference between the steam entering the turbine and the cooled working fluid leaving it. The maximum temperature at the turbine is a function of the energy source, and the minimum temperature at the outlet is a function of the surrounding environment's ability to absorb waste heat. For many practical reasons, coal plants generally extract about 35% of the heat energy from the coal; the rest is ultimately dumped into the cooling system or escapes through other losses. MHD generators can extract more energy from the fuel source than turbine-generator systems. They do this by skipping the step where the heat is transferred to another working fluid. Instead, they use the hot exhaust directly as the working fluid. In the case of a coal plant, the exhaust is directed through a nozzle that increases its velocity, essentially a rocket nozzle, and then directs it through a magnetic system that directly generates electricity. In a conventional generator, rotating magnets move past a material filled with nearly-free electrons, typically copper wire (or vice versa, depending on the design). In the MHD system the electrons in the exhaust gas move past a stationary magnet. 
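To see why working directly at combustion temperatures matters for cycle efficiency, a quick Carnot-limit comparison is sketched below; the temperatures used are illustrative assumptions, not figures from this article.

```python
# Carnot upper bound on efficiency for two illustrative cases: a steam
# cycle limited by turbine materials, and an MHD stage working directly
# with combustion-temperature gas.  All temperatures are assumptions
# chosen only for illustration.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Ideal Carnot efficiency between hot and cold reservoirs in kelvin."""
    return 1.0 - t_cold_k / t_hot_k

t_ambient = 300.0   # K, heat rejection temperature (assumed)
t_steam = 850.0     # K, typical steam-turbine inlet temperature (assumed)
t_mhd = 2800.0      # K, combustion-gas temperature in an MHD channel (assumed)

print(f"Steam-cycle Carnot limit: {carnot_efficiency(t_steam, t_ambient):.0%}")
print(f"MHD-topped Carnot limit:  {carnot_efficiency(t_mhd, t_ambient):.0%}")
```

Real plants fall well short of these ideal limits, but the comparison shows why raising the effective top temperature of the cycle raises the ceiling on achievable efficiency.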
Ultimately the effect in both cases is the same: the working fluid is slowed down and cools as its kinetic energy is transferred to electrons, and is thereby converted to electrical power. MHD can only be used with power sources that produce large amounts of fast moving plasma, like the gas from burning coal. This means it is not suitable for systems that work at lower temperatures or do not produce an ionized gas, like a solar power tower or nuclear reactor. In the early days of development of nuclear power, one alternative design was the gaseous fission reactor, which did produce plasma, and this led to some interest in MHD for this role. This style of reactor was never built, however, and interest from the nuclear industry waned. The vast majority of work on MHD for electrical generation has been related to coal fired plants. Principle The Lorentz Force Law describes the effects of a charged particle moving in a constant magnetic field. The simplest form of this law is given by the vector equation F = Q(v × B), where F is the force acting on the particle, Q is the charge of the particle, v is the velocity of the particle, and B is the magnetic field. The vector F is perpendicular to both v and B according to the right-hand rule. Power generation Typically, for a large power station to approach the operational efficiency of computer models, steps must be taken to increase the electrical conductivity of the conductive substance. Heating a gas to its plasma state, or adding other easily ionizable substances like the salts of alkali metals, can help to accomplish this. In practice, a number of issues must be considered in the implementation of an MHD generator: generator efficiency, economics, and toxic byproducts. These issues are affected by the choice of one of the three MHD generator designs: the Faraday generator, the Hall generator, and the disc generator. Faraday generator The Faraday generator is named for Michael Faraday's experiments on moving charged particles in the Thames River. A simple Faraday generator consists of a wedge-shaped pipe or tube of some non-conductive material. When an electrically conductive fluid flows through the tube, in the presence of a significant perpendicular magnetic field, a voltage is induced in the fluid. This can be drawn off as electrical power by placing electrodes on the sides, at 90-degree angles to the magnetic field. There are limitations on the density and type of field used in this example. The amount of power that can be extracted is proportional to the cross-sectional area of the tube and the speed of the conductive flow. The conductive substance is also cooled and slowed by this process. MHD generators typically reduce the temperature of the conductive substance from plasma temperatures to just over 1000 °C. The main practical problem of a Faraday generator is that differential voltages and currents in the fluid may short through the electrodes on the sides of the duct. The generator can also experience losses from the Hall effect current, which makes the Faraday duct inefficient. Most further refinements of MHD generators have tried to solve this problem. The optimal magnetic field on duct-shaped MHD generators is a sort of saddle shape. To get this field, a large generator requires an extremely powerful magnet. Many research groups have tried to adapt superconducting magnets to this purpose, with varying success. Hall generator The typical solution has been to use the Hall effect to create a current that flows with the fluid. (See illustration.) 
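Before turning to the details of the Hall design, here is a minimal numerical illustration of the Lorentz force relation F = Q(v × B) from the Principle section above; the charge, velocity, and field values are arbitrary and chosen only for illustration.

```python
# Minimal illustration of the Lorentz force F = Q (v x B) on a single
# charged particle; the numerical values are arbitrary.

def cross(a, b):
    """Cross product of two 3-vectors given as (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

q = 1.602e-19            # C, charge of one proton (illustrative)
v = (2.0e5, 0.0, 0.0)    # m/s, particle moving along x
b = (0.0, 0.0, 4.0)      # T, field along z (an MHD-scale field strength)

f = tuple(q * c for c in cross(v, b))
print(f"F = {f} N")      # force along -y, perpendicular to both v and B
```

Because the force is always perpendicular to the particle's motion and to the field, positive and negative charges in the flowing gas are pushed toward opposite walls of the channel, which is what the electrode arrangements below exploit.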
The Hall generator design has arrays of short, segmented electrodes on the sides of the duct. The first and last electrodes in the duct power the load. Each other electrode is shorted to an electrode on the opposite side of the duct. These shorts of the Faraday current induce a powerful magnetic field within the fluid, but in a chord of a circle at right angles to the Faraday current. This secondary, induced field makes the current flow in a rainbow shape between the first and last electrodes. Losses are less than in a Faraday generator, and voltages are higher because there is less shorting of the final induced current. However, this design has problems because the speed of the material flow requires the middle electrodes to be offset to "catch" the Faraday currents. As the load varies, the fluid flow speed varies, misaligning the Faraday current with its intended electrodes, and making the generator's efficiency very sensitive to its load. Disc generator The third and, currently, the most efficient design is the Hall effect disc generator. This design currently holds the efficiency and energy density records for MHD generation. A disc generator has fluid flowing between the center of a disc, and a duct wrapped around the edge. (The ducts are not shown.) The magnetic excitation field is made by a pair of circular Helmholtz coils above and below the disk. (The coils are not shown.) The Faraday currents flow in a perfect dead short around the periphery of the disk. The Hall effect currents flow between ring electrodes near the center duct and ring electrodes near the periphery duct. The wide, flat gas flow reduces the path length, and hence the resistance, of the moving fluid. This increases efficiency. Another significant advantage of this design is that the magnets are more efficient. First, they cause simple parallel field lines. Second, because the fluid is processed in a disk, the magnet can be closer to the fluid, and in this geometry, magnetic field strengths increase as the 7th power of distance. Finally, the generator is compact, so the magnet is smaller and uses a much smaller percentage of the generated power. Generator efficiency The efficiency of the direct energy conversion in MHD power generation increases with the magnetic field strength and the plasma conductivity, which depends directly on the plasma temperature, and more precisely on the electron temperature. As very hot plasmas can only be used in pulsed MHD generators (for example using shock tubes) due to the fast thermal material erosion, it was envisaged to use nonthermal plasmas as working fluids in steady MHD generators, where only the free electrons are heated to high temperatures (10,000–20,000 kelvins) while the main gas (neutral atoms and ions) remains at a much lower temperature, typically 2500 kelvins. The goal was to preserve the materials of the generator (walls and electrodes) while improving the limited conductivity of such poor conductors to the same level as a plasma in thermodynamic equilibrium; i.e. completely heated to more than 10,000 kelvins, a temperature that no material could stand. Evgeny Velikhov first discovered theoretically in 1962 and experimentally in 1963 that an ionization instability, later called the Velikhov instability or electrothermal instability, quickly arises in any MHD converter using magnetized nonthermal plasmas with hot electrons, when a critical Hall parameter is reached, depending on the degree of ionization and the magnetic field. 
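A standard way to make the dependence on conductivity and field strength concrete is the textbook expression for the power density extracted by an idealized segmented Faraday channel, p = σu²B²k(1 − k), where k is the load factor (load voltage divided by open-circuit voltage). The plasma parameters in the sketch below are assumptions chosen only for illustration, not values from this article.

```python
# Standard estimate of the electrical power density extracted by an
# ideal segmented Faraday channel: p = sigma * u^2 * B^2 * k * (1 - k).
# The plasma parameters below are assumed values for illustration only.

def faraday_power_density(sigma, u, b, k):
    """Electrical power per unit volume (W/m^3) for load factor k in (0, 1)."""
    return sigma * u**2 * b**2 * k * (1.0 - k)

sigma = 10.0   # S/m, conductivity of a seeded combustion plasma (assumed)
u = 800.0      # m/s, gas velocity in the channel (assumed)
b = 4.0        # T, applied magnetic field (assumed)
k = 0.5        # load factor that maximises k * (1 - k)

p = faraday_power_density(sigma, u, b, k)
print(f"Power density: {p / 1e6:.1f} MW/m^3")
```

Doubling either the field strength or the flow velocity quadruples the extractable power density, which is why strong superconducting magnets and high gas velocities figure so prominently in the designs discussed in this article.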
The Velikhov instability greatly degrades the performance of nonequilibrium MHD generators. The problem crippled MHD programs all over the world, whose designs had initially predicted high efficiencies, as no solution to mitigate the instability was found at that time. Without implementing solutions to overcome the electrothermal instability, practical MHD generators had to limit the Hall parameter or use moderately heated thermal plasmas instead of cold plasmas with hot electrons, which severely lowers efficiency. As of 1994, the 22% efficiency record for closed-cycle disc MHD generators was held by the Tokyo Institute of Technology. The peak enthalpy extraction in these experiments reached 30.2%. Typical open-cycle Hall & duct coal MHD generators are lower, near 17%. These efficiencies make MHD unattractive, by itself, for utility power generation, since conventional Rankine cycle power plants can reach 40%. However, the exhaust of an MHD generator burning fossil fuel is almost as hot as a flame. By routing its exhaust gases into a heat exchanger for a turbine Brayton cycle or steam generator Rankine cycle, MHD can convert fossil fuels into electricity with an overall estimated efficiency of up to 60 percent, compared to the 40 percent of a typical coal plant. A magnetohydrodynamic generator might also be the first stage of a gas core reactor. Material and design issues MHD generators have problems in regard to materials, both for the walls and the electrodes. Materials must not melt or corrode at very high temperatures. Exotic ceramics were developed for this purpose, selected to be compatible with the fuel and ionization seed. The exotic materials and the difficult fabrication methods contribute to the high cost of MHD generators. MHDs also work better with stronger magnetic fields. The most successful magnets have been superconducting, and very close to the channel. A major difficulty was refrigerating these magnets while insulating them from the channel. The problem is worse because the magnets work better when they are closer to the channel. There are also risks of damage to the hot, brittle ceramics from cracking caused by differential thermal expansion: magnets are usually near absolute zero, while the channel is several thousand degrees. For MHDs, both alumina (Al2O3) and magnesium oxide (MgO) were reported to work for the insulating walls. Magnesium oxide degrades near moisture. Alumina is water-resistant and can be fabricated to be quite strong, so in practice, most MHDs have used alumina for the insulating walls. For the electrodes of clean MHDs (i.e. burning natural gas), one good material was a mix of 80% CeO2, 18% ZrO2, and 2% Ta2O5. Coal-burning MHDs have highly corrosive environments with slag. The slag both protects and corrodes MHD materials. In particular, migration of oxygen through the slag accelerates the corrosion of metallic anodes. Nonetheless, very good results have been reported with stainless steel electrodes at 900 K. Another, perhaps superior option is a spinel ceramic, FeAl2O4–Fe3O4. The spinel was reported to have good electronic conductivity and no resistive reaction layer, but with some diffusion of iron into the alumina. The diffusion of iron could be controlled with a thin layer of very dense alumina, and water cooling in both the electrodes and alumina insulators. Attaching the high-temperature electrodes to conventional copper bus bars is also challenging. The usual methods establish a chemical passivation layer, and cool the busbar with water. 
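The "up to 60 percent" combined-cycle estimate quoted above follows from a simple topping relation: the MHD stage converts part of the fuel energy directly, and its still-hot exhaust then drives a conventional steam plant. The sketch below uses assumed stage efficiencies for illustration and ignores heat-exchanger and other losses.

```python
# Simple topping-cycle combination: the MHD stage converts a fraction of
# the fuel energy directly, and its exhaust heat then feeds a steam plant.
# The individual stage efficiencies are illustrative assumptions.

def combined_efficiency(eta_mhd: float, eta_steam: float) -> float:
    """Overall efficiency, neglecting heat-exchanger and other losses."""
    return eta_mhd + (1.0 - eta_mhd) * eta_steam

eta_mhd = 0.25     # assumed enthalpy extraction of the MHD topping stage
eta_steam = 0.40   # typical Rankine bottoming-plant efficiency

print(f"Combined efficiency: {combined_efficiency(eta_mhd, eta_steam):.0%}")
# -> 55%, in the same range as the ~60% quoted for aggressive designs
```

The relation also shows why even a modest topping stage is attractive: every unit of energy the MHD channel extracts directly bypasses the roughly 60 percent of heat that the steam plant alone would reject.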
Economics MHD generators have not been used for large-scale mass energy conversion because other techniques with comparable efficiency have a lower lifecycle investment cost. Advances in natural gas turbines achieved similar thermal efficiencies at lower costs, by having the turbine exhaust drive a Rankine cycle steam plant. To get more electricity from coal, it is cheaper to simply add more low-temperature steam-generating capacity. A coal-fueled MHD generator is a type of Brayton power cycle, similar to the power cycle of a combustion turbine. However, unlike the combustion turbine, there are no moving mechanical parts; the electrically conducting plasma provides the moving electrical conductor. The side walls and electrodes merely withstand the pressure within, while the anode and cathode conductors collect the electricity that is generated. All Brayton cycles are heat engines. Ideal Brayton cycles also have an ideal efficiency equal to ideal Carnot cycle efficiency, hence the potential for high energy efficiency from an MHD generator. All Brayton cycles have higher potential for efficiency the higher the firing temperature. While a combustion turbine is limited in maximum temperature by the strength of its air-, water-, or steam-cooled rotating airfoils, there are no rotating parts in an open-cycle MHD generator. This upper bound on temperature limits the energy efficiency of combustion turbines. There is no corresponding upper bound on the Brayton cycle temperature of an MHD generator, so it inherently has a higher potential for energy efficiency. The temperatures at which linear coal-fueled MHD generators can operate are limited by factors that include: (a) the combustion fuel, oxidizer, and oxidizer preheat temperature, which limit the maximum temperature of the cycle; (b) the ability to protect the sidewalls and electrodes from melting; (c) the ability to protect the electrodes from electrochemical attack from the hot slag coating the walls, combined with the high current or arcs that impinge on the electrodes as they carry off the direct current from the plasma; and (d) the capability of the electrical insulators between each electrode. Coal-fired MHD plants with oxygen/air and high oxidant preheats would probably provide potassium-seeded plasmas of about 4200 °F, 10 atmospheres pressure, and begin expansion at Mach 1.2. These plants would recover MHD exhaust heat for oxidant preheat, and for combined cycle steam generation. With aggressive assumptions, one DOE-funded feasibility study of where the technology could go, the 1000 MWe Advanced Coal-Fired MHD/Steam Binary Cycle Power Plant Conceptual Design published in June 1989, showed that a large coal-fired MHD combined cycle plant could attain a HHV energy efficiency approaching 60 percent, well in excess of other coal-fueled technologies, so the potential for low operating costs exists. However, no testing at those aggressive conditions or size has yet occurred, and there are no large MHD generators now under test. There is simply an inadequate reliability track record to provide confidence in a commercial coal-fuelled MHD design. U25B MHD testing in Russia using natural gas as fuel used a superconducting magnet, and had an output of 1.4 megawatts. A series of coal-fired MHD generator tests funded by the U.S. Department of Energy (DOE) in 1992 produced MHD power from a larger superconducting magnet at the Component Development and Integration Facility (CDIF) in Butte, Montana. 
None of these tests were conducted for long-enough durations to verify the commercial durability of the technology. Neither of the test facilities was of large enough scale for a commercial unit. Superconducting magnets are used in the larger MHD generators to eliminate one of the large parasitic losses: the power needed to energize the electromagnet. Superconducting magnets, once charged, consume no power and can develop intense magnetic fields of 4 teslas and higher. The only parasitic loads for the magnets are the power needed to maintain refrigeration and to make up the small losses in the non-superconducting connections. Because of the high temperatures, the non-conducting walls of the channel must be constructed from an exceedingly heat-resistant substance such as yttrium oxide or zirconium dioxide to retard oxidation. Similarly, the electrodes must be both conductive and heat-resistant at high temperatures. The AVCO coal-fueled MHD generator at the CDIF was tested with water-cooled copper electrodes capped with platinum, tungsten, stainless steel, and electrically conducting ceramics. Toxic byproducts MHD reduces the overall production of fossil fuel wastes because it increases plant efficiency. In MHD coal plants, the patented commercial "Econoseed" process developed by the U.S. (see below) recycles potassium ionization seed from the fly ash captured by the stack-gas scrubber. However, this equipment is an additional expense. If molten metal is the armature fluid of an MHD generator, care must be taken with the coolant of the electromagnets and channel. The alkali metals commonly used as MHD fluids react violently with water. Also, the chemical byproducts of heated, electrified alkali metals and channel ceramics may be poisonous and environmentally persistent. History The first practical MHD power research was funded in 1938 in the U.S. by Westinghouse in its Pittsburgh, Pennsylvania laboratories, headed by Hungarian Bela Karlovitz. The initial patent on MHD is by B. Karlovitz, U.S. Patent No. 2,210,918, "Process for the Conversion of Energy", August 13, 1940. World War II interrupted development. In 1962, the First International Conference on MHD Power was held in Newcastle upon Tyne, UK by Dr. Brian C. Lindley of the International Research and Development Company Ltd. The group set up a steering committee to organize further conferences and disseminate ideas. In 1964, the group held a second conference in Paris, France, in consultation with the European Nuclear Energy Agency. Since membership in the ENEA was limited, the group persuaded the International Atomic Energy Agency to sponsor a third conference, in Salzburg, Austria, July 1966. Negotiations at this meeting converted the steering committee into a periodic reporting group, the ILG-MHD (international liaison group, MHD), under the ENEA, and later in 1967, also under the International Atomic Energy Agency. Further research in the 1960s by R. Rosa established the practicality of MHD for fossil-fueled systems. In the 1960s, AVCO Everett Aeronautical Research began a series of experiments, ending with the Mk. V generator of 1965. This generated 35 MW, but used about 8 MW to drive its magnet. In 1966, the ILG-MHD had its first formal meeting in Paris, France. It began issuing a periodic status report in 1967. This pattern persisted, in this institutional form, up until 1976. Toward the end of the 1960s, interest in MHD declined because nuclear power was becoming more widely available. 
In the late 1970s, as interest in nuclear power declined, interest in MHD increased. In 1975, UNESCO became persuaded that MHD might be an efficient way to utilise world coal reserves, and in 1976, sponsored the ILG-MHD. In 1976, it became clear that no nuclear reactor in the next 25 years would use MHD, so the International Atomic Energy Agency and ENEA (both nuclear agencies) withdrew support from the ILG-MHD, leaving UNESCO as the primary sponsor of the ILG-MHD. Former Yugoslavia development Engineers at the former Yugoslav Institute of Thermal and Nuclear Technology (ITEN), Energoinvest Co., Sarajevo, built and patented the first experimental magnetohydrodynamic power generation facility in 1989. U.S. development In the 1980s, the U.S. Department of Energy began a multiyear program, culminating in 1992 in a 50 MW demonstration coal combustor at the Component Development and Integration Facility (CDIF) in Butte, Montana. This program also had significant work at the Coal-Fired-In-Flow-Facility (CFIFF) at the University of Tennessee Space Institute. This program combined four parts: An integrated MHD topping cycle, with channel, electrodes, and current control units developed by AVCO, later known as Textron Defence of Boston. This system was a Hall effect duct generator heated by pulverized coal, with a potassium ionisation seed. AVCO had developed the famous Mk. V generator, and had significant experience. An integrated bottoming cycle, developed at the CDIF. A facility to regenerate the ionization seed was developed by TRW. Potassium carbonate is separated from the sulphate in the fly ash from the scrubbers. The carbonate is then removed to regain the potassium. A method to integrate MHD into preexisting coal plants. The Department of Energy commissioned two studies. Westinghouse Electric performed a study based on the Scholtz Plant of Gulf Power in Sneads, Florida. The MHD Development Corporation also produced a study based on the J.E. Corrette Plant of the Montana Power Company of Billings, Montana. Initial prototypes at the CDIF operated for short durations, with various coals: Montana Rosebud, and a high-sulphur corrosive coal, Illinois No. 6. A great deal of engineering, chemistry, and material science was completed. After the final components were developed, operational testing was completed with 4,000 hours of continuous operation: 2,000 on Montana Rosebud and 2,000 on Illinois No. 6. The testing ended in 1993. Japanese development The Japanese program in the late 1980s concentrated on closed-cycle MHD. The belief was that it would have higher efficiencies, and smaller equipment, especially in the clean, small, economical plant capacities near 100 megawatts (electrical), which are suited to Japanese conditions. Open-cycle coal-powered plants are generally thought to become economic above 200 megawatts. The first major series of experiments was FUJI-1, a blow-down system powered from a shock tube at the Tokyo Institute of Technology. These experiments extracted up to 30.2% of enthalpy, and achieved power densities near 100 megawatts per cubic meter. This facility was funded by Tokyo Electric Power, other Japanese utilities, and the Department of Education. Some authorities believe this system was a disc generator with a helium and argon carrier gas and potassium ionization seed. In 1994, there were detailed plans for FUJI-2, a 5 MWe continuous closed-cycle facility, powered by natural gas, to be built using the experience of FUJI-1. 
The basic MHD design was to be a system with inert gases using a disk generator. The aim was an enthalpy extraction of 30% and an MHD thermal efficiency of 60%. FUJI-2 was to be followed by a retrofit to a 300 MWe natural gas plant. Australian development From the 1980s, Professor Hugo Messerle at The University of Sydney researched coal-fueled MHD. This resulted in a 28 MWe topping facility that was operated outside Sydney. Messerle also wrote a key reference work on MHD, as part of a UNESCO education program. Italian development The Italian program began in 1989 with a budget of about US$20 million, and had three main development areas: MHD modelling. Superconducting magnet development. The goal in 1994 was a prototype 2 m long, storing 66 MJ, for an MHD demonstration 8 m long. The field was to be 5 teslas, with a taper of 0.15 T/m. The geometry was to resemble a saddle shape, with cylindrical and rectangular windings of niobium-titanium copper. Retrofits to natural gas powerplants. One was to be at the Enichem-Anic factory in Ravenna. In this plant, the combustion gases from the MHD would pass to the boiler. The other was a 230 MW (thermal) installation for a power station in Brindisi that would pass steam to the main power plant. Chinese development A joint U.S.-China national programme ended in 1992 by retrofitting the coal-fired No. 3 plant in Asbach. A further eleven-year program was approved in March 1994. This established centres of research in: The Institute of Electrical Engineering in the Chinese Academy of Sciences, Beijing, concerned with MHD generator design. The Shanghai Power Research Institute, concerned with overall system and superconducting magnet research. The Thermoenergy Research Engineering Institute at Southeast University in Nanjing, concerned with later developments. The 1994 study proposed a 10 MW (electrical, 108 MW thermal) generator with the MHD and bottoming cycle plants connected by steam piping, so either could operate independently. Russian developments In 1971, the natural-gas-fired U-25 plant was completed near Moscow, with a designed capacity of 25 megawatts. By 1974 it delivered 6 megawatts of power. By 1994, Russia had developed and operated the coal-operated facility U-25, at the High-Temperature Institute of the Russian Academy of Science in Moscow. U-25's bottoming plant was operated under contract with the Moscow utility, and fed power into Moscow's grid. There was substantial interest in Russia in developing a coal-powered disc generator. In 1986 the first industrial power plant with an MHD generator was built, but in 1989 the project was cancelled before the MHD unit was launched, and the power plant was later incorporated into the Ryazan Power Station as a seventh unit of conventional construction. See also Computational magnetohydrodynamics Electrohydrodynamics Electromagnetic pump Ferrofluid Magnetic flow meter Magnetohydrodynamic turbulence MHD sensor Plasma stability Shocks and discontinuities (magnetohydrodynamics) References Further reading Hugo K. Messerle, Magnetohydrodynamic Power Generation, 1994, John Wiley, Chichester, Part of the UNESCO Energy Engineering Series (This is the source of the historical and generator design information). Shioda, S. "Results of Feasibility Studies on Closed-Cycle MHD Power Plants", Proc. Plasma Tech. Conf., 1991, Sydney, Australia, pp. 189–200. R.J. Rosa, Magnetohydrodynamic Energy Conversion, 1987, Hemisphere Publishing, Washington, D.C. G.J. Womac, MHD Power Generation, 1969, Chapman and Hall, London. 
External links MHD generator Research at the University of Tennessee Space Institute (archive) - 2004 Model of an MHD-generator at the Institute of Computational Modelling, Akademgorodok, Russia - 2003 The Magnetohydrodynamic Engineering Laboratory Of The University Of Bologna, Italy - 2003 High Efficiency Magnetohydrodynamic Power Generation - 2015 Chemical engineering Electrical generators Energy conversion American inventions Plasma technology and applications Power station technology
Magnetohydrodynamic generator
[ "Physics", "Chemistry", "Technology", "Engineering" ]
5,936
[ "Electrical generators", "Machines", "Plasma physics", "Plasma technology and applications", "Chemical engineering", "Physical systems", "nan" ]
1,503,112
https://en.wikipedia.org/wiki/Domed%20city
A domed city is a hypothetical structure that encloses a large urban area under a single roof. In most descriptions, the dome is airtight and pressurized, creating a habitat that can be controlled for air temperature, composition and quality, typically due to an external atmosphere (or lack thereof) that is inimical to habitation for one or more reasons. Domed cities have been a fixture of science fiction and futurology since the early 20th century, offer inspiration for potential utopias, and may be situated on Earth, a moon or other planet. Origin In the early 19th century, the social reformer Charles Fourier proposed that an ideal city must be connected by glass galleries. Such ideas inspired several architectural projects throughout the 19th and 20th centuries. The most famous of these is The Crystal Palace, built in 1851 in Hyde Park. In fiction Domed cities appear frequently in underwater environments. In Robert Ellis Dudgeon's novel Colymbia (1873), glass domes are used for underwater conversation. In William Delisle Hay's novel Three Hundred Years Hence (1881), whole cities are covered by domes beneath the sea. Survivors of Atlantis are found living in an underwater glass-domed city in André Laurie's novel Atlantis (1895). The same idea is found later in David M. Parry's The Scarlet Empire (1906) and Stanton A. Coblentz's The Sunken World (1928). In William Gibson's Sprawl trilogy, the namesake of the series is a massive supercity in the USA, stretching from Boston to Atlanta and housed in a series of geodesic domes. Authors used domed cities in response to many problems, sometimes to the benefit of the people living in them and sometimes not. The problems of air pollution and other environmental destruction are a common motive, particularly in stories of the middle to late 20th century, as in the Pure trilogy of books by Julianna Baggott. In some works, the domed city represents the last stand of a dying human race. The 1976 film Logan's Run shows both of these themes. The characters have a comfortable life within a domed city, but the city also serves to control the populace and to ensure that humanity never again outgrows its means. The domed city in fiction has been interpreted as a symbolic womb that both nourishes and protects humanity. Where other science fiction stories emphasize the vast expanse of the universe, the domed city places limits on its inhabitants, with the subtext that chaos will ensue if they interact with the world outside. In some works, cities are domed in order to quarantine their inhabitants. Engineering proposals During the 1960s and 1970s, the domed city concept was widely discussed outside the confines of science fiction. In 1960, visionary engineer Buckminster Fuller described the Dome over Manhattan, a 3 km geodesic dome spanning Midtown Manhattan that would regulate weather and reduce air pollution. A domed city was proposed in 1979 for Winooski, Vermont and in 2010 for Houston. Seward's Success, Alaska, was a domed city proposed in 1968 and designed to hold over 40,000 people along with commercial, recreational and office space. Intended to capitalize on the economic boom following the discovery of oil in northern Alaska, the project was canceled in 1972 due to delays in constructing the Trans-Alaska Pipeline. In order to test whether an artificial closed ecological system was feasible, Biosphere 2 (a complex of interconnected domes and glass pyramids) was constructed in the late 1980s. 
Its original experiment housed eight people and remains the largest such system attempted to date. In 2010, a domed city for 100,000 people, known as Eco-city 2020, was proposed for the Mir mine in Siberia. In 2014, the ruler of Dubai announced plans for a climate-controlled domed city, named the Mall of the World, covering an area of 48 million square feet (4.5 square kilometers), but as of 2016 the project had been redesigned without the dome. See also Closed ecosystems: Notes Space colonization Science fiction themes Fictional populated places Architecture related to utopias
Domed city
[ "Technology", "Engineering" ]
831
[ "Exploratory engineering", "Proposed arcologies", "Architecture related to utopias", "Architecture" ]
1,503,166
https://en.wikipedia.org/wiki/Terminal%20yield
In formal language theory, the terminal yield (or fringe) of a tree is the sequence of leaves encountered in an ordered walk of the tree. Parse trees and/or derivation trees are encountered in the study of phrase structure grammars such as context-free grammars or linear grammars. The leaves of a derivation tree for a formal grammar G are the terminal symbols of that grammar, and the internal nodes the nonterminal or variable symbols. One can read off the corresponding terminal string by performing an ordered tree traversal and recording the terminal symbols in the order they are encountered. The resulting sequence of terminals is a string of the language L(G) generated by the grammar G. Formal languages
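A minimal sketch of how the terminal yield can be computed programmatically from a derivation tree is given below; the nested-tuple tree representation is an assumption made for this example, not a standard from the article.

```python
# Minimal sketch of computing the terminal yield (fringe) of an ordered
# tree: a left-to-right walk that records the leaf labels in order.
# Nodes are represented as (label, children) pairs; this representation
# is an assumption made for the example.

def terminal_yield(node):
    """Return the list of leaf labels encountered in an ordered walk."""
    label, children = node
    if not children:                 # a leaf contributes its own symbol
        return [label]
    result = []
    for child in children:          # visit children left to right
        result.extend(terminal_yield(child))
    return result

# Derivation tree for the string "aab" in a toy grammar with the rules
# S -> A S b, A -> a, S -> a (a hypothetical grammar for illustration).
tree = ("S", [("A", [("a", [])]),
              ("S", [("a", [])]),
              ("b", [])])

print("".join(terminal_yield(tree)))   # -> "aab"
```

Joining the leaves in the order they are visited recovers exactly the terminal string generated by the derivation, which is the sense in which the fringe of the tree is a string of the language L(G).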
Terminal yield
[ "Mathematics" ]
143
[ "Formal languages", "Mathematical logic" ]
1,503,224
https://en.wikipedia.org/wiki/Local%20boundedness
In mathematics, a function is locally bounded if it is bounded around every point. A family of functions is locally bounded if for any point in their domain all the functions are bounded around that point and by the same number. Locally bounded function A real-valued or complex-valued function defined on some topological space is called a locally bounded function if for any point of the space there exists a neighborhood of that point on which the function is bounded. That is, for some number the values of the function on that neighborhood do not exceed that number in absolute value. In other words, for each point one can find a constant, depending on the point, which is larger than all the values of the function in some neighborhood of the point. Compare this with a bounded function, for which the constant does not depend on the point. Obviously, if a function is bounded then it is locally bounded. The converse is not true in general (see below). This definition can be extended to the case when the function takes values in some metric space. Then the absolute value in the inequality above needs to be replaced with the distance to some fixed point of the metric space. The choice of that point does not affect the definition; choosing a different point will at most increase the constant for which the inequality is true. Examples The function defined by is bounded, because its values stay within fixed bounds for all arguments. Therefore, it is also locally bounded. The function defined by is not bounded, as it becomes arbitrarily large. However, it is locally bounded, because around each point its values are bounded by a constant depending on that point and its neighborhood. The function defined by is neither bounded nor locally bounded. In any neighborhood of 0 this function takes values of arbitrarily large magnitude. Any continuous function is locally bounded. Here is a proof for functions of a real variable. Let the function be continuous on a subset of the real line, and fix a point of that set; we will show that the function is locally bounded at that point. Taking ε = 1 in the definition of continuity, there exists a δ > 0 such that the values of the function differ from its value at the chosen point by less than 1 for all arguments within δ of it. By the triangle inequality, the function is then bounded on that δ-neighborhood by the absolute value of its value at the chosen point plus 1, which means that it is locally bounded there. This argument generalizes easily to the case when the domain of the function is any topological space. The converse of the above result is not true, however; that is, a discontinuous function may be locally bounded. For example, consider a function that takes one value at 0 and a different constant value everywhere else. Then the function is discontinuous at 0 but locally bounded; it is locally constant apart from at zero, where a suitable bound and neighborhood can be chosen directly. Locally bounded family A set (also called a family) U of real-valued or complex-valued functions defined on some topological space is called locally bounded if for any point there exists a neighborhood of that point and a positive number that bounds the absolute values of all the functions in the family throughout that neighborhood. In other words, all the functions in the family must be locally bounded, and around each point they need to be bounded by the same constant. This definition can also be extended to the case when the functions in the family U take values in some metric space, by again replacing the absolute value with the distance function. Examples The first family of functions is locally bounded. Indeed, if a real number is given, one can choose the neighborhood to be a bounded interval around it; then for all points in this interval and for all members of the family the values share a common bound. Moreover, the family is uniformly bounded, because neither the neighborhood nor the constant depends on the index. The second family of functions is locally bounded if the parameter is greater than zero. For any point one can choose the neighborhood to be the whole space itself; the values of all members of the family then share a common bound. Note that the value of this bound does not depend on the choice of x0 or its neighborhood. This family is then not only locally bounded, it is also uniformly bounded. The third family of functions is not locally bounded: for any point, the values cannot be bounded as the index tends toward infinity. 
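Because the inline formulas in this article did not survive extraction, the following restates the two definitions above in standard notation; the symbol choices (f, x0, U, M) and the concrete example functions are conventional illustrations, not necessarily the notation or examples used in the original article.

```latex
% Standard notation for the definitions above; the symbols and the
% concrete example functions are conventional choices, not necessarily
% those of the original article.
A function $f\colon X \to \mathbb{R}$ (or $\mathbb{C}$) on a topological space $X$
is \emph{locally bounded} if for every $x_0 \in X$ there are a neighborhood
$U \ni x_0$ and a constant $M > 0$ with
\[
  |f(x)| \le M \qquad \text{for all } x \in U .
\]
A family $(f_i)_{i \in I}$ of such functions is \emph{locally bounded} if the
neighborhood $U$ and the constant $M$ can be chosen to work for every member
of the family at once:
\[
  |f_i(x)| \le M \qquad \text{for all } x \in U \text{ and all } i \in I .
\]
Typical illustrations: $f(x) = \tfrac{1}{x^{2}+1}$ is bounded (hence locally
bounded); $f(x) = 2x + 3$ is locally bounded but not bounded; and $f(x) = 1/x$
for $x \neq 0$ with $f(0) = 0$ is not even locally bounded near $0$.
```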
Topological vector spaces Local boundedness may also refer to a property of topological vector spaces, or of functions from a topological space into a topological vector space (TVS). Locally bounded topological vector spaces A subset of a topological vector space (TVS) is called bounded if for each neighborhood of the origin there exists a real number such that the subset is contained in the neighborhood scaled by that number. A locally bounded TVS is a TVS that possesses a bounded neighborhood of the origin. By Kolmogorov's normability criterion, this is true of a locally convex space if and only if the topology of the TVS is induced by some seminorm. In particular, every locally bounded TVS is pseudometrizable. Locally bounded functions A function between topological vector spaces is said to be locally bounded if every point of its domain has a neighborhood whose image under the function is bounded. The following theorem relates local boundedness of functions with the local boundedness of topological vector spaces: Theorem. A topological vector space is locally bounded if and only if the identity map is locally bounded. See also External links PlanetMath entry for Locally Bounded nLab entry for Locally Bounded Category Theory of continuous functions Functional analysis Mathematical analysis
Local boundedness
[ "Mathematics" ]
922
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Theory of continuous functions", "Mathematical objects", "Topology", "Mathematical relations" ]