Dataset schema (column · type · observed range):
id · int64 · 39 – 79M
url · string · lengths 32 – 168
text · string · lengths 7 – 145k
source · string · lengths 2 – 105
categories · list · lengths 1 – 6
token_count · int64 · 3 – 32.2k
subcategories · list · lengths 0 – 27
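The columns above describe each record in the dump that follows: a numeric id, the source URL, the full article text, a short title in the source field, category and subcategory lists, and a token count. As a minimal sketch of how rows with this schema might be consumed, the snippet below defines the record shape and applies a small filter; the JSON-lines storage format and the file name articles.jsonl are assumptions for illustration, not details taken from this document.

```python
import json
from dataclasses import dataclass
from typing import Iterator, List


@dataclass
class ArticleRow:
    # Field names mirror the schema above; types are the closest Python
    # equivalents (int64 -> int, string -> str, list -> List[str]).
    id: int
    url: str
    text: str
    source: str
    categories: List[str]
    token_count: int
    subcategories: List[str]


def iter_rows(path: str) -> Iterator[ArticleRow]:
    """Yield one ArticleRow per line of a JSON-lines file (assumed storage format)."""
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            yield ArticleRow(**json.loads(line))


if __name__ == "__main__":
    # Example: titles of Chemistry-tagged rows shorter than 1,000 tokens.
    short_chemistry = [
        row.source
        for row in iter_rows("articles.jsonl")  # hypothetical file name
        if "Chemistry" in row.categories and row.token_count < 1000
    ]
    print(short_chemistry)
```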
1,300,614
https://en.wikipedia.org/wiki/Tanespimycin
Tanespimycin (17-N-allylamino-17-demethoxygeldanamycin, 17-AAG) is a derivative of the antibiotic geldanamycin that is being studied in the treatment of cancer, specifically in younger patients with certain types of leukemia or solid tumors, especially kidney tumors. It works by inhibiting Hsp90, which is expressed in those tumors. It belongs to the family of drugs called antitumor antibiotics. Clinical trials Bristol-Myers Squibb conducted Phase 1 and Phase 2 clinical trials. However, in 2010 the company halted development of tanespimycin during late-stage clinical trials as a potential treatment for multiple myeloma. While no definitive explanation was given, it has been suggested that Bristol-Myers Squibb halted development over concerns about the financial feasibility of tanespimycin development, given the 2014 expiry of the patent on the compound and the relative expense of manufacture. References External links National Cancer Institute Bulletin on Phase 2 trials against Von Hippel-Lindau disease Safety sheet for 17AAG Antibiotics Experimental cancer drugs 1,4-Benzoquinones Carbamates Lactams Ethers Secondary alcohols Ansamycins
Tanespimycin
[ "Chemistry", "Biology" ]
254
[ "Biotechnology products", "Functional groups", "Organic compounds", "Antibiotics", "Ethers", "Biocides" ]
1,300,633
https://en.wikipedia.org/wiki/2-Methoxyestradiol
2-Methoxyestradiol (2-ME2, 2-MeO-E2) is a natural metabolite of estradiol and 2-hydroxyestradiol (2-OHE2). It is specifically the 2-methyl ether of 2-hydroxyestradiol. 2-Methoxyestradiol prevents the formation of new blood vessels that tumors need in order to grow (angiogenesis), hence it is an angiogenesis inhibitor. It also acts as a vasodilator and induces apoptosis in some cancer cell lines. 2-Methoxyestradiol is derived from estradiol, although it interacts poorly with the estrogen receptors (2,000-fold lower activational potency relative to estradiol). However, it retains activity as a high-affinity agonist of the G protein-coupled estrogen receptor (GPER) (10 nM, relative to 3–6 nM for estradiol). Clinical development 2-Methoxyestradiol was being developed as an experimental drug candidate with the tentative brand name Panzem. It has undergone Phase I clinical trials against breast cancer. A Phase II trial of 18 advanced ovarian cancer patients reported encouraging results in October 2007. Preclinical models also suggest that 2-methoxyestradiol could be effective against inflammatory diseases such as rheumatoid arthritis. Several studies have shown that 2-methoxyestradiol is a microtubule inhibitor and inhibits prostate cancer in rodents. All clinical development of 2-methoxyestradiol has since been suspended or discontinued, largely because of the molecule's very poor oral bioavailability and its extensive metabolism. Analogues have been developed in an attempt to overcome these problems; an example is 2-methoxyestradiol disulfamate (STX-140), the C3 and C17β disulfamate ester of 2-methoxyestradiol. Clinical effects 2-Methoxyestradiol was found to increase sex hormone-binding globulin (SHBG) levels in men by 2.5-fold at a dose of 400 mg/day and by 4-fold at a dose of 1,200 mg/day. However, it did not appear to suppress testosterone levels. See also 2-Methoxyestriol 2-Methoxyestrone 4-Methoxyestradiol 4-Methoxyestrone MP-2001 References Abandoned drugs Angiogenesis inhibitors Antineoplastic drugs Estranes Ethers GPER agonists Human metabolites Hypolipidemic agents Microtubule inhibitors Mitotic inhibitors Hydroxyarenes
2-Methoxyestradiol
[ "Chemistry", "Biology" ]
580
[ "Mitotic inhibitors", "Angiogenesis", "Harmful chemical substances", "Functional groups", "Drug safety", "Organic compounds", "Angiogenesis inhibitors", "Ethers", "Abandoned drugs" ]
1,300,709
https://en.wikipedia.org/wiki/Middlesex%20Canal
The Middlesex Canal was a 27-mile (44-kilometer) barge canal connecting the Merrimack River with the port of Boston. When operational it was 30 feet (9.1 m) wide and 3 feet (0.9 m) deep, with 20 locks, each 80 feet (24 m) long and between 10 and 11 feet (3.0 and 3.4 m) wide. It also had eight aqueducts. Built from 1793 to 1803, the canal was one of the first civil engineering projects of its type in the United States, and it was studied by engineers working on other major canal projects such as the Erie Canal. A number of innovations made the canal possible, including hydraulic cement, which was used to mortar its locks, and an ingenious floating towpath to span the Concord River. The canal operated until 1851, when more efficient means of transporting bulk goods, chiefly railroads, made it uncompetitive. In 1967, the canal was designated a National Historic Civil Engineering Landmark by the American Society of Civil Engineers. Remnants of the canal still survive and were the subject of a 1972 listing on the National Register of Historic Places, while the entire route, including parts that have been overbuilt, was the subject of a second listing in 2009. History Conception By 1790, England had thirty years of its own canal building, as well as all of continental Europe's many canals, to draw on for experience. In the years after the American Revolutionary War, the young United States began a period of economic expansion away from the coast. American men of influence had always kept an eye on news from Europe, especially from Great Britain, so when the British Parliament passed eighty-one canal and navigation acts between 1790 and 1794, American leaders were paying attention. Because of extremely poor roads, the cost of bringing goods such as lumber, ashes, grain, and fur to the coast could be quite high if water transport was unavailable, and most American rivers were rendered unnavigable by rapids and waterfalls. Up and down the Atlantic coast, companies were formed to build canals as cheaper ways to move goods between the interior of the country and the coast. Well aware that the nation needed to grow strong and develop industries to stay independent, American leaders took the news from Europe as cause to revive a number of previously dropped canal and navigation projects and to begin discussions that led, over the following decades, to many others. The year 1790 is credited as the start of the American Canal Age. In Massachusetts, several ideas were proposed for bringing goods to the principal port, Boston, and connecting it to the interior (Roberts, Chapters I and II). For about three years there were plans to connect the upper reaches of the Connecticut River, above the falls at Enfield, Connecticut, to Boston through a canal to the Charles. The Connecticut was believed to lie at elevations similar to the Merrimack River's, which could be reached by a string of streams, ponds, lakes, and manmade canals, if those canals were built. In the first two years, rough surveys sought the best route up to the Connecticut Valley, but no route was obviously best, and nobody championed a specific one. A few true believers, lacking social standing of their own, needed a champion and pestered the Secretary of War, Henry Knox, to ignite the project.
After the collapse of stocks in early 1793 put paid to a scheme to join the Charles River with the Connecticut, championed by Henry Knox, a group of leading Massachusetts businessmen and politicians led by state Attorney General James Sullivan proposed a connection from the Merrimack River to Boston Harbor in 1793. This became the Middlesex (County) Canal system. The Middlesex Canal Corporation was chartered on June 22, 1793, signed by Governor John Hancock, who purchased shares along with other political figures including John Adams, John Quincy Adams, James Sullivan, and Christopher Gore. The incorporators were James Sullivan; Oliver Prescott; James Winthrop; Loammi Baldwin; Benjamin Hall; Jonathan Porter; Andrew Hall; Ebenezer Hall; Samuel Tufts Jr.; Willis Hall; Samuel Swan Jr.; and Ebenezer Hall Jr. Sullivan was made the company's president; its vice president and eventually chief engineer was Loammi Baldwin, a native of Woburn, who had attended science lectures at Harvard College and was a friend of physicist Benjamin Thompson. Construction The route of the canal was first surveyed in August 1793. Local lore holds that it was on this expedition that Baldwin was introduced to the apple variety that now bears his name. The route survey, however, was sufficiently uncertain that a second survey was made in October. Because of discrepancies in their results, Baldwin was authorized by the proprietors to travel to Philadelphia in an effort to secure the services of William Weston, a British engineer working on several canal and turnpike projects in Pennsylvania under contract to the Schuylkill and Susquehanna Navigation Company. Baldwin's application to the Navigation company was successful, and Weston was authorized to travel to Massachusetts. In July and August 1794, Weston, accompanied by Baldwin and several of the latter's sons, surveyed and identified two possible routes for the proposed canal. The proprietors then secured contracts to acquire the land for the canal, some of which was donated by its owners; in sixteen cases the proprietors used eminent domain proceedings to take the land. The basic plan was for the canal's principal water source to be the Concord River at its highest point in North Billerica, with additional water to be drawn as needed from Horn Pond in Woburn. The site where the canal met the Concord River had held a grist mill since the 17th century, which the proprietors purchased along with all of its water rights. From this point, the canal descended six miles to the Merrimack River in East Chelmsford (now western Lowell) and 22 miles to the Charles River in Charlestown. In late September 1794, ground was broken in North Billerica. Work on the canal was performed by a number of contractors. In some instances, local workers were contracted to dig sections, while in other areas contract labor was brought in from Massachusetts and New Hampshire for the construction work. A variety of engineering challenges were overcome, leading to innovations in construction materials and equipment. A form of hydraulic cement (made in part from volcanic materials imported at great expense from Sint Eustatius in the West Indies) was used to make the stone locks watertight. Because of its cost and the cost of working in stone, a number of the locks were made of wood instead.
An innovation in earth-moving equipment was the development of a precursor of the dump truck, in which one side of the carrier was hinged to allow the rapid dumping of material at the desired location. Water was diverted into the canal in December 1800, and by 1803 the canal was filled to Charlestown. The first boat operated on part of the canal on April 22, 1802. Merrimack canals A variety of enterprises formed by all or some of "the proprietors of the Middlesex Canal" (the corporation's principal stockholders and its board) joined with other third parties, or in a few cases acted as a combined whole, to fund the development of other stretches of navigation up the Merrimack above Chelmsford. By the time construction between Medford and Chelmsford was complete, several extensions that had been envisioned from the start were also nearing completion. The whole system was complete in 1814 and, with just a few exceptions, came to operate under prices and regulations set by the proprietors of the (main) Middlesex Canal. These extensions opened up the balance of the Merrimack River to the New Hampshire capital, Concord, which acted as a staging point for riverine traffic deep into the river system penetrating the White and Green Mountains of New Hampshire and Vermont, respectively. Operation By 1808 the completed canal had reached Merrimack, New Hampshire, from the Charles (the downstream terminus) and was carrying two-thirds of the down freight and one-third of the up freight to western New Hampshire and eastern Vermont. In the other direction, the canal ran from 'Middlesex Village', or East Chelmsford, Massachusetts. (The Town of Chelmsford was later divided, and East Chelmsford was renamed Lowell, Massachusetts, now the fourth most populous city in Massachusetts, primarily because of the Lowell textile industry spawned by the transportation infrastructure and water power along the Middlesex Canal and the Nashua and Merrimack Rivers.) From there the canal passed through several more sparsely settled outlying Middlesex County towns such as Billerica and Tewksbury, then through closer-in suburban towns, its lower course running toward Boston generally along watercourses nearly paralleling the route of MA 38 from Wilmington. It then ran in Woburn along the Aberjona River from Horn Pond, through Winchester (or Waterfield) into the Mystic Lakes, and down the Mystic River, with Arlington and Somerville on the west bank and Medford along the east (left) bank, until the river and canal ran into the Boston Harbor tidewater in the Charlestown basin. At first the canal terminated in Medford, but it was later extended to Charlestown, Massachusetts, with a branch near Medford Center to the Mystic River. Over time, the canal was joined by a series of other canal companies, many of them owned in part by the canal proprietors, which constructed spurs upstream through New Hampshire along the Nashua and Merrimack Rivers, enabling freight to be transported as far inland as Concord, New Hampshire. Within two years of commencing operations, regular boat traffic operated by independent companies was reaching upriver over . Three main companies operated on the canal system. The water source for the canal was the Concord River at North Billerica. This was also the highest point of the canal and is the present location of the Middlesex Canal Association's museum.
Freight boats required 18 hours from Boston up to Lowell and 12 hours down, thus averaging 2.5 miles per hour; passenger boats were faster, at 12 and 8 hours, respectively (4 miles per hour). As on later American canals, use was not restricted to freight and transit: people from the city would ride passenger boats on daylong tourism excursions to the countryside and take vacations in luxuriously fitted-out canal boats, whole families spending a week or two lazing along the waterways in the heat of summer. Freight statistics compiled over twenty years and cited in the Harvard economic study by Roberts indicate that downriver trips from Concord to Boston took four days, while the reverse trip upriver took five days on average. A round trip between Boston and Concord, New Hampshire, usually took 7–10 days. These speed limits were set and maintained by the board of proprietors to prevent wakes from damaging the canal sides. Roberts noted that they were unlikely to need enforcement, since generating a shore-damaging wake would require sails or animals to drive a canal boat in excess of , which would require dangerously stiff breezes in the correct direction. The canal was one of the main thoroughfares in eastern New England until a few decades after the advent of the railroad. The Boston and Lowell Railroad (now a part of the MBTA Commuter Rail system) was built using the plans from the original surveys for the canal. Portions of the line follow the canal route closely, and the canal was used to transport construction materials and also an engine for the railroad. The canal was no longer economically viable after the introduction of railroad competition, and the company collected its last tolls in 1851. The Middlesex Turnpike, incorporated in 1805, also contributed to its downfall. Investors who held their shares in the company lost money: shareholders invested a total of $740 per share but reaped only $559.50 in dividends. Those who sold their shares at an appropriate time made money: shares valued at $25 in 1794 reached a value of $500 in 1804 and were worth $300 in 1816. Before the corporation was dissolved, the proprietors proposed to convert the canal into an aqueduct to bring drinking water to Boston, but this effort was unsuccessful. After the canal ceased operation its infrastructure quickly fell into disrepair. In 1852 the company ordered dilapidated bridges over the canal torn down and the canal underneath filled in. Permission was given for the company to liquidate and pay the proceeds to the stockholders, and its 1793 charter was revoked in 1860. The company's records were given over to the state for preservation. The canal corporation's land and dam in North Billerica, as well as the water rights on the Concord River, were sold to Charles and Thomas Talbot, who erected the Talbot Mills complex that now stands in the Billerica Mills Historic District. Parts of the canal bed were covered by roads in the 20th century, including parts of the Mystic Valley Parkway in Medford and Winchester, and parts of Boston Avenue in Somerville and Medford. Boston Avenue crosses the Mystic River where the canal did. Parts of the canal in eastern Somerville were filled in by the leveling of Ploughed Hill in the late 19th century. Ploughed Hill was the site of notorious anti-Catholic riots in 1834 and had subsequently been abandoned.
Impact The opening of the canal diminished the commercial viability of the port of Newburyport, Massachusetts, the outlet of the Merrimack River, since all trade from the Merrimack Valley in New Hampshire now went via the canal to Boston rather than down the sometimes difficult-to-navigate river. The canal also played a prominent role in the eventual growth of Lowell as a major industrial center. Its opening brought on a decline in business at the Pawtucket Canal, a transit canal opened in the 1790s which bypassed the Pawtucket Falls just downstream from the Middlesex Canal's northern end. Its owners converted the Pawtucket Canal for use as a power provider, leading to the growth of the mill businesses on its banks beginning in the 1820s. The Middlesex Canal was used for the transport of raw materials, finished goods, and personnel to and from Lowell. The canal's use of the Concord River had significant long-term environmental consequences. The raising of the dam height at North Billerica was believed to cause flooding of seasonal hay meadows upstream and prompted numerous lawsuits against the canal proprietors. These were all ultimately unsuccessful, due in part to the uncertainty of the science, and also in part to the political power of the proprietors. As the canal was in decline in its later years, the state legislature finally ordered the dam height to be lowered, but then repealed the order before it was executed. Analysis done in the 20th century suggests that the dam, which still stands (although no longer at its greatest height), probably had a flooding effect on hay meadows as far as 25 miles up the watershed. Many of these meadows had to be abandoned, and some now form portions of the Great Meadows National Wildlife Refuge; they are classified as wetlands. The canal featured a number of innovations and was referred to as an example for later engineering projects. The use of hydraulic cement to mortar the locks is the first known use of the material in North America. The route was surveyed using a Wye level (an early version of a dumpy level), again the first recorded use in America. At North Billerica, where the canal met the Concord River at the millpond, a floating towpath was devised to accommodate the crossing traffic. Today Though significant portions of the Middlesex Canal are still visible, urban and suburban sprawl is quickly overtaking many of the remains. The Middlesex Canal Association, founded in 1962, has erected markers along portions of the canal's path. Prominent portions of the canal that are still visible include water-filled sections in Wilmington, Billerica, and near the Baldwin House in Woburn. Dry walkable sections can be found in Winchester, most notably a section at the Mystic Lakes where an aqueduct was situated, and in Wilmington, where aqueduct remnants are also visible in the town park off Route 38. Most of the canal south of Winchester has been overbuilt by roads and residential construction, although traces may still be discerned in a few places. In 1967 the canal was designated a Historic Civil Engineering Landmark (one of the first such designations made) by the American Society of Civil Engineers. The surviving elements of the canal were the subject of a 1972 listing on the National Register of Historic Places, while the entire route, including parts that have been overbuilt, was the subject of a second listing in 2009. The Middlesex Canal Association maintains a museum in North Billerica, Massachusetts, at the Faulkner Mills.
Directions and additional information are available on the Middlesex Canal Association website. Gallery Notes References Sources Further reading External links Middlesex Canal Association Paintings: Middlesex Canal by Joseph Payro 1930s Billerica, Massachusetts Canal museums in the United States Canals in Middlesex County, Massachusetts Canals on the National Register of Historic Places in Massachusetts Canals opened in 1803 Chelmsford, Massachusetts Historic Civil Engineering Landmarks Historic districts on the National Register of Historic Places in Massachusetts History of Massachusetts Merrimack River Mystic River National Register of Historic Places in Lowell, Massachusetts National Register of Historic Places in Medford, Massachusetts National Register of Historic Places in Middlesex County, Massachusetts National Register of Historic Places in Winchester, Massachusetts Transportation in Lowell, Massachusetts Transportation in Medford, Massachusetts Transportation in Somerville, Massachusetts Wilmington, Massachusetts Winchester, Massachusetts Buildings and structures in Woburn, Massachusetts 1803 establishments in Massachusetts Archaeological sites on the National Register of Historic Places in Massachusetts
Middlesex Canal
[ "Engineering" ]
3,542
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
1,300,740
https://en.wikipedia.org/wiki/90Y-DOTA-biotin
90Y-DOTA-biotin consists of a radioactive substance (yttrium-90) complexed by a chelating agent (DOTA), which in turn is attached to the vitamin biotin via a chemical linker. It is used experimentally in pretargeted radioimmunotherapy. Animal studies have been conducted as well as clinical studies in humans. In pretargeted radioimmunotherapy, two or three medications are applied in succession. First, an antibody conjugate is administered, which consists of a monoclonal antibody designed to target the tumour and a chemical marker, which in the case of DOTA-biotin therapy is one of the proteins avidin or streptavidin. After typically one or two days, to let the antibody accumulate in the tumour, a clearing agent may be given to eliminate antibody still circulating in the bloodstream; this is done especially in humans. After a further waiting time, the radioactive agent (90Y-DOTA-biotin) is administered. Due to the high affinity of biotin for avidin and streptavidin, the radioactive agent accumulates where the antibody is, namely in the tumour, where it delivers its radioactivity. References Radiopharmaceuticals DOTA (chelator) derivatives
90Y-DOTA-biotin
[ "Chemistry" ]
300
[ "Chemicals in medicine", "Radiopharmaceuticals", "Medicinal radiochemistry" ]
1,300,874
https://en.wikipedia.org/wiki/Actinic%20keratosis
Actinic keratosis (AK), sometimes called solar keratosis or senile keratosis, is a pre-cancerous area of thick, scaly, or crusty skin. Actinic keratosis is a disorder (-osis) of epidermal keratinocytes that is induced by ultraviolet (UV) light exposure (actin-). These growths are more common in fair-skinned people and those who are frequently in the sun. They are believed to form when skin gets damaged by UV radiation from the sun or indoor tanning beds, usually over the course of decades. Given their pre-cancerous nature, if left untreated, they may turn into a type of skin cancer called squamous cell carcinoma. Untreated lesions have up to a 20% risk of progression to squamous cell carcinoma, so treatment by a dermatologist is recommended. Actinic keratoses characteristically appear as thick, scaly, or crusty areas that often feel dry or rough. Size commonly ranges between 2 and 6 millimeters, but they can grow to be several centimeters in diameter. AKs are often felt before they are seen, and the texture is sometimes compared to sandpaper. They may be dark, light, tan, pink, red, a combination of all these, or have the same color as the surrounding skin. Given the causal relationship between sun exposure and AK growth, they often appear on a background of sun-damaged skin and in areas that are commonly sun-exposed, such as the face, ears, neck, scalp, chest, backs of hands, forearms, or lips. Because sun exposure is rarely limited to a small area, most people who have an AK have more than one. If clinical examination findings are not typical of AK and the possibility of in situ or invasive squamous cell carcinoma (SCC) cannot be excluded based on clinical examination alone, a biopsy or excision can be considered for definitive diagnosis by histologic examination of the lesional tissue. Multiple treatment options for AK are available. Photodynamic therapy (PDT) is one option for the treatment of numerous AK lesions in a region of the skin, termed field cancerization. It involves the application of a photosensitizer to the skin followed by illumination with a strong light source. Topical creams, such as 5-fluorouracil or imiquimod, may require daily application to affected skin areas over a typical time course of weeks. Cryotherapy is frequently used for few and well-defined lesions, but undesired skin lightening, or hypopigmentation, may occur at the treatment site. With regular follow-up from a dermatologist, AKs can be treated before they progress to skin cancer. If cancer does develop from an AK lesion, it can be caught early with close monitoring, at a time when treatment is likely to have a high cure rate. Signs and symptoms Actinic keratoses (AKs) most commonly present as a white, scaly plaque of variable thickness with surrounding redness; they have a sandpaper-like texture when felt with a gloved hand. Skin near the lesion often shows evidence of solar damage, characterized by pigmentary alterations, being yellow or pale in color with areas of hyperpigmentation; deep wrinkles, coarse texture, purpura and ecchymoses, dry skin, and scattered telangiectasias are also characteristic. Photoaging leads to an accumulation of oncogenic changes, resulting in a proliferation of mutated keratinocytes that can manifest as AKs or other neoplastic growths. With years of sun damage, it is possible to develop multiple AKs in a single area on the skin. This condition is termed field cancerization.
The lesions are usually asymptomatic, but can be tender, itch, bleed, or produce a stinging or burning sensation. AKs are typically graded in accordance with their clinical presentation: Grade I (easily visible, slightly palpable), Grade II (easily visible, palpable), and Grade III (frankly visible and hyperkeratotic). Variants Actinic keratoses can have various clinical presentations, often characterized as follows: Classic (or common): Classic AKs present as white, scaly macules, papules or plaques of various thickness, often with surrounding erythema. They are usually 2–6 mm in diameter but can sometimes reach several centimeters in diameter. Hypertrophic (or hyperkeratotic): Hypertrophic AKs (HAKs) appear as a thicker scale or rough papule or plaque, often adherent to an erythematous base. Classic AKs can progress to become HAKs, and HAKs themselves can be difficult to distinguish from malignant lesions. Atrophic: Atrophic AKs lack an overlying scale, and therefore appear as a nonpalpable change in color (or macule). They are often smooth and red and are less than 10 mm in diameter. AK with cutaneous horn: A cutaneous horn is a keratotic projection with its height at least one-half of its diameter, often conical in shape. They can be seen in the setting of actinic keratosis as a progression of an HAK, but are also present in other skin conditions. 38–40% of cutaneous horns represent AKs. Pigmented AK: Pigmented AKs are rare variants that often present as macules or plaques that are tan to brown in color. They can be difficult to distinguish from a solar lentigo or lentigo maligna. Actinic cheilitis: When an AK forms on the lip, it is called actinic cheilitis. This usually presents as a rough, scaly patch on the lip, often accompanied by the sensation of dry mouth and symptomatic splitting of the lips. Bowenoid AK: Usually presents as a solitary, erythematous, scaly patch or plaque with well-defined borders. Bowenoid AKs are differentiated from Bowen's disease by the degree of epithelial involvement as seen on histology. The presence of ulceration, nodularity, or bleeding should raise concern for malignancy. Specifically, clinical findings suggesting an increased risk of progression to squamous cell carcinoma can be recognized by the mnemonic "IDRBEU": I (induration/inflammation), D (diameter > 1 cm), R (rapid enlargement), B (bleeding), E (erythema), and U (ulceration). AKs are usually diagnosed clinically, but because they are difficult to clinically differentiate from squamous cell carcinoma, any concerning features warrant biopsy for diagnostic confirmation. Causes The most important cause of AK formation is solar radiation, acting through a variety of mechanisms. Mutation of the p53 tumor suppressor gene, induced by UV radiation, has been identified as a crucial step in AK formation. This tumor suppressor gene, located on chromosome 17p13, allows for cell cycle arrest when DNA or RNA is damaged. Dysregulation of the p53 pathway can thus result in unchecked replication of dysplastic keratinocytes, thereby serving as a source of neoplastic growth and the development of AK, as well as possible progression from AK to skin cancer. Other molecular markers that have been associated with the development of AK include the expression of p16ink4, p14, the CD95 ligand, TNF-related apoptosis-inducing ligand (TRAIL) and TRAIL receptors, and loss of heterozygosity. Evidence also suggests that the human papillomavirus (HPV) plays a role in the development of AKs.
The HPV virus has been detected in AKs, with measurable HPV viral loads (one HPV-DNA copy per less than 50 cells) measured in 40% of AKs. Similar to UV radiation, higher levels of HPV found in AKs reflect enhanced viral DNA replication. This is suspected to be related to the abnormal keratinocyte proliferation and differentiation in AKs, which facilitate an environment for HPV replication. This in turn may further stimulate the abnormal proliferation that contributes to the development of AKs and carcinogenesis. Ultraviolet radiation It is thought that ultraviolet (UV) radiation induces mutations in the keratinocytes of the epidermis, promoting the survival and proliferation of these atypical cells. Both UV-A and UV-B radiation have been implicated as causes of AKs. UV-A radiation (wavelength 320–400 nm) reaches more deeply into the skin and can lead to the generation of reactive oxygen species, which in turn can damage cell membranes, signaling proteins, and nucleic acids. UV-B radiation (wavelength 290–320 nm) causes thymidine dimer formation in DNA and RNA, leading to significant cellular mutations. In particular, mutations in the p53 tumor suppressor gene have been found in 30–50% of AK lesion skin samples. UV radiation has also been shown to cause elevated inflammatory markers such as arachidonic acid, as well as other molecules associated with inflammation. Eventually, over time these changes lead to the formation of AKs. Several predictors for increased AK risk from UV radiation have been identified: Extent of sun exposure: Cumulative sun exposure leads to an increased risk for development of AKs. In one U.S. study, AKs were found in 55% of fair-skinned men with high cumulative sun exposure, and in only 19% of fair-skinned men with low cumulative sun exposure in an age-matched cohort (the percentages for women in the same study were 37% and 12%, respectively). Furthermore, the use of sunscreen (SPF 17 or higher) has been found to significantly reduce the development of AK lesions, and also promotes the regression of existing lesions. History of sunburn: Studies show that even a single episode of painful sunburn as a child can increase an individual's risk of developing AK as an adult. Six or more painful sunburns over the course of a lifetime were found to be significantly associated with the likelihood of developing AK. Skin pigmentation Melanin is a pigment in the epidermis that functions to protect keratinocytes from the damage caused by UV radiation; it is found in higher concentrations in the epidermis of darker-skinned individuals, affording them protection against the development of AKs. Fair-skinned individuals have a significantly increased risk of developing AKs when compared to olive-skinned individuals (odds ratios of 14.1 and 6.5, respectively), and AKs are uncommon in dark-skinned people of African descent. Other phenotypic features seen in fair-skinned individuals that are associated with an increased propensity to develop AKs include: Freckling Light hair and eye color Propensity to sunburn Inability to tan Other risk factors Immunosuppression: People with a compromised immune system from medical conditions (such as AIDS) or immunosuppressive therapy (such as chronic immunosuppression after organ transplantation, or chemotherapy for cancer) are at increased risk for developing AKs. They may develop AK at an earlier age or have an increased number of AK lesions compared to immunocompetent people.
Human papillomavirus (HPV): The role of HPV in the development of AK remains unclear, but evidence suggests that infection with the betapapillomavirus type of HPV may be associated with an increased likelihood of AK. Genodermatoses: Certain genetic disorders interfere with DNA repair after sun exposure, thereby putting these individuals at higher risk for the development of AKs. Examples of such genetic disorders include xeroderma pigmentosum and Bloom syndrome. Balding: AKs are commonly found on the scalps of balding men. The degree of baldness seems to be a risk factor for lesion development, as men with severe baldness were found to be seven times more likely to have 10 or more AKs when compared to men with minimal or no baldness. This observation can be explained by an absence of hair causing a larger proportion of the scalp to be exposed to UV radiation if other sun protection measures are not taken. Diagnosis Physicians usually diagnose actinic keratosis by doing a thorough physical examination, through a combination of visual observation and touch. However, a biopsy may be necessary when the keratosis is large in diameter, thick, or bleeding, in order to make sure that the lesion is not a skin cancer. Actinic keratosis may progress to invasive squamous cell carcinoma (SCC), but both diseases can present similarly upon physical exam and can be difficult to distinguish clinically. Histological examination of the lesion from a biopsy or excision may be necessary to definitively distinguish AK from in situ or invasive SCC. In addition to SCCs, AKs can be mistaken for other cutaneous lesions including seborrheic keratoses, basal cell carcinoma, lichenoid keratosis, porokeratosis, viral warts, erosive pustular dermatosis of the scalp, pemphigus foliaceus, inflammatory dermatoses like psoriasis, or melanoma. Biopsy A lesion biopsy is performed if the diagnosis remains uncertain after a clinical physical exam, or if there is suspicion that the AK might have progressed to squamous cell carcinoma. The most common tissue sampling techniques include shave or punch biopsy. When only a portion of the lesion can be removed due to its size or location, the biopsy should sample tissue from the thickest area of the lesion, as SCCs are most likely to be detected in that area. If a shave biopsy is performed, it should extend through to the level of the dermis in order to provide sufficient tissue for diagnosis; ideally, it would extend to the mid-reticular dermis. Punch biopsy usually extends to the subcutaneous fat when the entire length of the punch blade is utilized. Histopathology On histologic examination, actinic keratoses usually show a collection of atypical keratinocytes with hyperpigmented or pleomorphic nuclei, extending to the basal layer of the epidermis. A "flag sign" is often described, referring to alternating areas of orthokeratosis and parakeratosis. Epidermal thickening and surrounding areas of sun-damaged skin are often seen. The normal ordered maturation of the keratinocytes is disordered to varying degrees: there may be widening of the intercellular spaces, cytologic atypia such as abnormally large nuclei, and a mild chronic inflammatory infiltrate. Specific findings depend on the clinical variant and particular lesion characteristics.
The seven major histopathologic variants are all characterized by atypical keratinocytic proliferation beginning in the basal layer and confined to the epidermis; they include: Hypertrophic: Notable for marked hyperkeratosis, often with evident parakeratosis. Keratinocytes in the stratum malpighii may show a loss of polarity, pleomorphism, and anaplasia. Some irregular downward proliferation into the uppermost dermis may be observed, but does not represent frank invasion. Atrophic: With slight hyperkeratosis and overall atrophic changes to the epidermis; the basal layer shows cells with large, hyperchromatic nuclei in close proximity to each other. These cells have been observed to proliferate into the dermis as buds and duct-like structures. Lichenoid: Demonstrate a band-like lymphocytic infiltrate in the papillary dermis, directly beneath the dermal-epidermal junction. Acantholytic: Intercellular clefts or lacunae in the lowermost epidermal layer that result from anaplastic changes; these produce dyskeratotic cells with disrupted intercellular bridges. Bowenoid: This term is controversial and usually refers to full-thickness atypia, microscopically indistinguishable from Bowen's disease. However, most dermatologists and pathologists will use it in reference to tissue samples that are notable for small foci of atypia that involve the full thickness of the epidermis, in the background of a lesion that is otherwise consistent with an AK. Epidermolytic: With granular degeneration. Pigmented: Show pigmentation in the basal layer of the epidermis, similar to a solar lentigo. Dermoscopy Dermoscopy is a noninvasive technique utilizing a handheld magnifying device coupled with a transilluminating light. It is often used in the evaluation of cutaneous lesions but lacks the definitive diagnostic ability of biopsy-based tissue diagnosis. Histopathologic exam remains the gold standard. Polarized contact dermoscopy of AKs occasionally reveals a "rosette sign," described as four white points arranged in a clover pattern, often localized to within a follicular opening. It is hypothesized that the "rosette sign" corresponds histologically to the changes of orthokeratosis and parakeratosis known as the "flag sign." Non-pigmented AKs: linear or wavy vascular patterning, or a "strawberry pattern," described as unfocused vessels between hair follicles, with white-haloed follicular openings. Pigmented AKs: gray to brown dots or globules surrounding follicular openings, and annular-granular rhomboidal structures; often difficult to differentiate from lentigo maligna. Prevention Ultraviolet radiation is believed to contribute to the development of actinic keratoses by inducing mutations in epidermal keratinocytes, leading to proliferation of atypical cells. Therefore, preventive measures for AKs are targeted at limiting exposure to solar radiation, including: Limiting extent of sun exposure Avoid sun exposure during noontime hours between 10:00 AM and 2:00 PM, when UV light is most powerful Minimize all time in the sun, since UV exposure occurs even in the winter and on cloudy days Using sun protection Applying sunscreens with SPF ratings 30 or greater that also block both UVA and UVB light, at least every 2 hours and after swimming or sweating Applying sunscreen at least 15 minutes before going outside, as this allows time for the sunscreen to be absorbed appropriately by the skin Wearing sun protective clothing such as hats, sunglasses, long-sleeved shirts, long skirts, or trousers.
People who always cover up outdoors may not get enough vitamin D from sunlight, and may wish to consider taking 10 micrograms of vitamin D a day. Recent research implicating human papillomavirus (HPV) in the development of AKs suggests that HPV prevention might in turn help prevent development of AKs, as UV-induced mutations and oncogenic transformation are likely facilitated in cases of active HPV infection. A key component of HPV prevention includes vaccination, and the CDC currently recommends routine vaccination in all children at age 11 or 12. There are some data suggesting that, in individuals with a history of non-melanoma skin cancer, a low-fat diet can serve as a preventive measure against future actinic keratoses. Management There are a variety of treatment options for AK depending on the patient and the clinical characteristics of the lesion. AKs show a wide range of features, which guide decision-making in choosing treatment. As there are multiple effective treatments, patient preference and lifestyle are also factors that physicians consider when determining the management plan for actinic keratosis. Regular follow-up is advisable after any treatment to make sure no new lesions have developed and that old ones are not progressing. Adding topical treatment after a procedure may improve outcomes. Medication Topical medications are often recommended for areas where multiple or ill-defined AKs are present, as the medication can easily be used to treat a relatively large area. Fluorouracil cream Topical fluorouracil (5-FU) destroys AKs by blocking methylation of thymidylate synthetase, thereby interrupting DNA and RNA synthesis. This in turn prevents the proliferation of dysplastic cells in AK. Topical 5-FU is the most utilized treatment for AK, and often results in effective removal of the lesion. Overall, about 50% of patients treated with topical 5-FU achieve complete (100%) clearance of AKs. 5-FU may be up to 90% effective in treating non-hyperkeratotic lesions. While topical 5-FU is a widely used and cost-effective treatment for AKs and is generally well tolerated, its potential side effects can include pain, crusting, redness, and local swelling. These adverse effects can be mitigated or minimized by reducing the frequency of application or taking breaks between uses. The most commonly used application regimen consists of applying a layer of topical cream to the lesion twice a day after washing; duration of treatment is typically 2–4 weeks for thinner skin such as the cheeks and up to 8 weeks for the arms; treatment of up to 8 weeks has demonstrated a higher cure rate. Imiquimod cream Imiquimod is a topical immune-enhancing agent licensed for the treatment of genital warts. Imiquimod stimulates the immune system through the release and up-regulation of cytokines. Treatment with imiquimod cream applied 2–3 times per week for 12 to 16 weeks was found to result in complete resolution of AKs in 50% of people, compared to 5% of controls. The imiquimod 3.75% cream has been validated in a treatment regimen consisting of daily application to the entire face and scalp for two 2-week treatment cycles, with a complete clearance rate of 36%.
While the clearance rate observed with the imiquimod 3.75% cream was lower than that observed with the 5% cream (36 and 50 percent, respectively), there are lower reported rates of adverse reactions with the 3.75% cream: 19% of individuals using imiquimod 3.75% cream reported adverse reactions including local erythema, scabbing, and flaking at the application site, while nearly a third of individuals using the 5% cream reported the same types of reactions with imiquimod treatment. However, it is ultimately difficult to compare the efficacy of the different strength creams directly, as current study data vary in methodology (e.g. duration and frequency of treatment, and amount of skin surface area covered). Ingenol mebutate gel Ingenol mebutate is a newer treatment for AK used in Europe and the United States. It works in two ways, first by disrupting cell membranes and mitochondria, resulting in cell death, and then by inducing antibody-dependent cellular cytotoxicity to eliminate remaining tumor cells. A 3-day treatment course with the 0.015% gel is recommended for the scalp and face, while a 2-day treatment course with the 0.05% gel is recommended for the trunk and extremities. Treatment with the 0.015% gel was found to completely clear 57% of AK, while the 0.05% gel had a 34% clearance rate. Advantages of ingenol mebutate treatment include the short duration of therapy and a low recurrence rate. Local skin reactions including pain, itching and redness can be expected during treatment with ingenol mebutate. This treatment was derived from the petty spurge, Euphorbia peplus, which has been used as a traditional remedy for keratosis. Diclofenac sodium gel Topical diclofenac sodium gel is a nonsteroidal anti-inflammatory drug that is thought to work in the treatment of AK through its inhibition of the arachidonic acid pathway, thereby limiting the production of prostaglandins, which are thought to be involved in the development of UVB-induced skin cancers. Recommended duration of therapy is 60 to 90 days with twice daily application. Treatment of facial AK with diclofenac gel led to complete lesion resolution in 40% of cases. Common side effects include dryness, itching, redness, and rash at the site of application. Retinoids Topical retinoids have been studied in the treatment of AK with modest results, and the American Academy of Dermatology does not currently recommend this as first-line therapy. Treatment with adapalene gel daily for 4 weeks, and then twice daily thereafter for a total of nine months, led to a significant but modest reduction in the number of AKs compared to placebo; it demonstrated the additional advantage of improving the appearance of photodamaged skin. Topical tretinoin is ineffective as treatment for reducing the number of AKs. For secondary prevention of AK, systemic, low-dose acitretin was found to be safe, well tolerated and moderately effective in chemoprophylaxis for skin cancers in kidney transplant patients. Acitretin is a viable treatment option for organ transplant patients according to expert opinion. Tirbanibulin Tirbanibulin (Klisyri) was approved for medical use in the United States in December 2020, for the treatment of actinic keratosis on the face or scalp. Procedures Cryotherapy Liquid nitrogen (−195.8 °C) is the most commonly used destructive therapy for the treatment of AK in the United States. It is a well-tolerated office procedure that does not require anesthesia.
Cryotherapy is particularly indicated for cases where there are fewer than 15 thin, well-demarcated lesions. Caution is encouraged for thicker, more hyperkeratotic lesions, as dysplastic cells may evade treatment. Treatment with both cryotherapy and field treatment can be considered for these more advanced lesions. Cryotherapy is generally performed using an open-spray technique, wherein the AK is sprayed for several seconds. The process can be repeated multiple times in one office visit, as tolerated. Cure rates from 67 to 99 percent have been reported, depending on freeze time and lesion characteristics. Disadvantages include discomfort during and after the procedure; blistering, scarring and redness; hypo- or hyperpigmentation; and destruction of healthy tissue. Photodynamic therapy AKs are one of the most common dermatologic lesions for which photodynamic therapy, including topical methyl aminolevulinate (MAL) or 5-aminolevulinic acid (5-ALA), is indicated. Treatment begins with preparation of the lesion, which includes scraping away scales and crusts using a dermal curette. A thick layer of topical MAL or 5-ALA cream is applied to the lesion and a small area surrounding the lesion, which is then covered with an occlusive dressing and left for a period of time. During this time the photosensitizer accumulates in the target cells within the AK lesion. The dressings are then removed and the lesion is treated with light at a specified wavelength. Multiple treatment regimens using different photosensitizers, incubation times, light sources, and pretreatment regimens have been studied and suggest that longer incubation times lead to higher rates of lesion clearance. Photodynamic therapy is gaining in popularity. It has been found to have a 14% higher likelihood of achieving complete lesion clearance at 3 months compared to cryotherapy, and seems to result in superior cosmetic outcomes when compared to cryotherapy or 5-FU treatment. Photodynamic therapy can be particularly effective in treating areas with multiple AK lesions. Surgical techniques Surgical excision: Excision should be reserved for cases when the AK is a thick, horny papule, or when deeper invasion is suspected and histopathologic diagnosis is necessary. It is a rarely utilized technique for AK treatment. Shave excision and curettage (sometimes followed by electrodesiccation when deemed appropriate by the physician): This technique is often used for treatment of AKs, and particularly for lesions appearing more similar to squamous cell carcinoma, or those that are unresponsive to other treatments. The surface of the lesion can be scraped away using a scalpel, or the base can be removed with a curette. Tissue can be evaluated histopathologically under the microscope, but specimens acquired using this technique are not often adequate to determine whether a lesion is invasive or intraepidermal. Dermabrasion: Dermabrasion is useful in the treatment of large areas with multiple AK lesions. The process involves using a hand-held instrument to "sand" the skin, removing the stratum corneum layer of the epidermis. Diamond fraises or wire brushes revolving at high speeds are used. The procedure can be quite painful and requires procedural sedation and anesthetic, necessitating a hospital stay. One-year clearance rates with dermabrasion treatment are as high as 96%, but diminish drastically to 54% at five years. 
Laser therapy Laser therapy using carbon dioxide (CO2) or erbium:yttrium aluminum garnet (Er:YAG) lasers is a treatment approach being utilized with increasing frequency, sometimes in conjunction with computer scanning technology. Laser therapy has not been extensively studied, but current evidence suggests it may be effective in cases involving multiple AKs refractory to medical therapy, or AKs located in cosmetically sensitive locations such as the face. The CO2 laser has been recommended for extensive actinic cheilitis that has not responded to 5-FU. Chemical peels A chemical peel is a topically applied agent that wounds the outermost layer of the skin, promoting organized repair, exfoliation, and eventually the development of smooth and rejuvenated skin. Multiple therapies have been studied. A medium-depth peel may effectively treat multiple non-hyperkeratotic AKs. It can be achieved with 35% to 50% trichloroacetic acid (TCA) alone or at 35% in combination with Jessner's solution in a once-daily application for a minimum of 3 weeks; 70% glycolic acid (α-hydroxy acid); or solid carbon dioxide. When compared to treatment with 5-FU, chemical peels have demonstrated similar efficacy and increased ease of use with similar morbidity. Chemical peels must be performed in a controlled clinic environment and are recommended only for individuals who are able to comply with follow-up precautions, including avoidance of sun exposure. Furthermore, they should be avoided in individuals with a history of HSV infection or keloids, and in those who are immunosuppressed or who are taking photosensitizing medications. Prognosis Untreated AKs follow one of three paths: they can persist as AKs, regress, or progress to invasive skin cancer, as AK lesions are considered to be on the same continuum with squamous cell carcinoma (SCC). AK lesions that regress also have the potential to recur. Progression: The overall risk of an AK turning into invasive cancer is low. In average-risk individuals, the likelihood of an AK lesion progressing to SCC is less than 1% per year. Despite this low rate of progression, studies suggest that a full 60% of SCCs arise from pre-existing AKs, reinforcing the idea that these lesions are closely related. Regression: Reported regression rates for single AK lesions have ranged between 15 and 63% after one year. Recurrence: Recurrence rates after 1 year for single AK lesions that have regressed range between 15 and 53%. Clinical course Given these differing clinical outcomes, it is difficult to predict the clinical course of any given actinic keratosis. AK lesions may also come and go, in a cycle of appearing on the skin, remaining for months, and then disappearing. Often they will reappear in a few weeks or months, particularly after unprotected sun exposure. Left untreated, there is a chance that a lesion will advance to become invasive. Although it is difficult to predict whether an AK will advance to become squamous cell carcinoma, it has been noted that squamous cell carcinomas originate in lesions formerly diagnosed as AKs with frequencies reported between 65 and 97%. Epidemiology Actinic keratosis is very common, with an estimated 14% of dermatology visits related to AKs. It is seen more often in fair-skinned individuals, and rates vary with geographical location and age. Other factors such as exposure to ultraviolet (UV) radiation, certain phenotypic features, and immunosuppression can also contribute to the development of AKs.
Men are more likely to develop AK than women, and the risk of developing AK lesions increases with age. These findings have been observed in multiple studies, with numbers from one study suggesting that approximately 5% of women ages 20–29 develop AK compared to 68% of women ages 60–69, and 10% of men ages 20–29 develop AK compared to 79% of men ages 60–69. Geography seems to play a role in the sense that individuals living in locations where they are exposed to more UV radiation throughout their lifetime have a significantly higher risk of developing AK. Much of the literature on AK comes from Australia, where the prevalence of AK is estimated at 40–50% in adults over 40, as compared to the United States and Europe, where prevalence is estimated at 11–38% in adults. One study found that those who immigrated to Australia after age 20 had fewer AKs than native Australians in all age groups. Research Diagnostically, researchers are investigating the role of novel biomarkers to assist in determining which AKs are more likely to develop into cutaneous or metastatic SCC. Upregulation of matrix metalloproteinases (MMPs) is seen in many different types of cancers, and the expression and production of MMP-7 in particular has been found to be elevated in SCC specifically. The role of serine peptidase inhibitors (serpins) is also being investigated. SerpinA1 was found to be elevated in the keratinocytes of SCC cell lines, and SerpinA1 upregulation was correlated with SCC tumor progression in vivo. Further investigation into specific biomarkers could help providers better assess prognosis and determine the best treatment approaches for particular lesions. In terms of treatment, a number of medications are being studied. Resiquimod is a TLR 7/8 agonist that works similarly to imiquimod, but is 10 to 100 times more potent; when used to treat AK lesions, complete response rates have ranged from 40 to 74%. Afamelanotide is a drug that induces the production of melanin by melanocytes to act as a protective factor against UVB radiation. It is being studied to determine its efficacy in preventing AKs in organ transplant patients who are on immunosuppressive therapy. Epidermal growth factor receptor (EGFR) inhibitors such as gefitinib, and anti-EGFR antibodies such as cetuximab, are used in the treatment of various types of cancers, and are currently being investigated for potential use in the treatment and prevention of AKs. References Dermatology
Actinic keratosis
[ "Chemistry" ]
7,379
[ "Sun tanning", "Ultraviolet radiation" ]
1,300,939
https://en.wikipedia.org/wiki/Infinite-dimensional%20optimization
In certain optimization problems the unknown optimal solution might not be a number or a vector, but rather a continuous quantity, for example a function or the shape of a body. Such a problem is an infinite-dimensional optimization problem, because a continuous quantity cannot be determined by a finite number of degrees of freedom. Examples Find the shortest path between two points in a plane. The variables in this problem are the curves connecting the two points. The optimal solution is of course the line segment joining the points, if the metric defined on the plane is the Euclidean metric. Given two cities in a country with many hills and valleys, find the shortest road going from one city to the other. This problem is a generalization of the above, and the solution is not as obvious. Given two circles which will serve as top and bottom for a cup of given height, find the shape of the side wall of the cup so that the side wall has minimal area. Intuition would suggest that the cup must have a conical or cylindrical shape, which is false. The actual minimum surface is the catenoid. Find the shape of a bridge capable of sustaining a given amount of traffic using the smallest amount of material. Find the shape of an airplane which bounces away most of the radio waves from an enemy radar. Infinite-dimensional optimization problems can be more challenging than finite-dimensional ones. Typically one needs to employ methods from partial differential equations to solve such problems. Several disciplines which study infinite-dimensional optimization problems are calculus of variations, optimal control and shape optimization. See also Semi-infinite programming References David Luenberger (1997). Optimization by Vector Space Methods. John Wiley & Sons. Edward J. Anderson and Peter Nash, Linear Programming in Infinite-Dimensional Spaces, Wiley, 1987. M. A. Goberna and M. A. López, Linear Semi-Infinite Optimization, Wiley, 1998. Cassel, Kevin W.: Variational Methods with Applications in Science and Engineering, Cambridge University Press, 2013. Functional analysis Optimization in vector spaces
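As a concrete illustration of the cup example, the problem can be written as minimisation of a functional over a space of functions, which is where the infinitely many degrees of freedom come from. The following is a standard calculus-of-variations formulation; the notation (r, a, b, h, z0, c) is ours and is not taken from the article.

```latex
% Side-wall profile r(z) joining the two given circles, r(0)=a and r(h)=b.
% The unknown is the whole function r, not a finite list of numbers.
A[r] \;=\; 2\pi \int_{0}^{h} r(z)\,\sqrt{1 + r'(z)^{2}}\;\mathrm{d}z
\;\longrightarrow\; \min_{\,r(0)=a,\; r(h)=b}
% When a smooth minimiser exists, the Euler--Lagrange equation of this
% functional is solved by r(z) = c \cosh((z - z_0)/c), a catenary, whose
% surface of revolution is the catenoid mentioned above.
```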
Infinite-dimensional optimization
[ "Mathematics" ]
411
[ "Functional analysis", "Mathematical objects", "Functions and mappings", "Mathematical relations" ]
1,301,022
https://en.wikipedia.org/wiki/Cofferdam
A cofferdam is an enclosure built within a body of water to allow the enclosed area to be pumped out or drained. This pumping creates a dry working environment so that the work can be carried out safely. Cofferdams are commonly used for construction or repair of permanent dams, oil platforms, bridge piers, etc., built within water. They also form an integral part of naval architecture. These cofferdams are usually welded steel structures, with components consisting of sheet piles, wales, and cross braces. Such structures are usually dismantled after the construction work is completed. The origin of the word comes from coffer (originally from Latin meaning 'basket') and dam (from Proto-Germanic, meaning 'barrier across a stream of water to obstruct its flow and raise its level'). Uses For dam construction, two cofferdams are usually built, one upstream and one downstream of the proposed dam, after an alternative diversion tunnel or channel has been provided for the river flow to bypass the foundation area of the dam. These cofferdams are typically a conventional embankment dam of both earth- and rock-fill, but concrete or some sheet piling also may be used. Usually, upon completion of the dam and associated structures, the downstream coffer is removed and the upstream coffer is flooded as the diversion is closed and the reservoir begins to fill. Depending on the geography of a dam site, in some applications, a U-shaped cofferdam is used in the construction of one half of a dam. When complete, the cofferdam is removed and a similar one is created on the opposite side of the river for the construction of the dam's other half. Cofferdams are used in ship husbandry to allow dry access to underwater equipment and to close underwater openings while work is done on the fittings inside the ship. This is more common in naval vessels where a cofferdam may fit several vessels of a class. The cofferdam is also used on occasion in the shipbuilding and ship repair industry, when it is not practical to put a ship in drydock for repair work or modernization. An example of such an application is the lengthening of ships. In some cases a ship is actually cut in two while still in the water, and a new section of ship is floated in to lengthen the ship. The cutting of the hull is done inside a cofferdam attached directly to the hull of the ship; the cofferdam is then detached before the hull sections are floated apart. The cofferdam is later replaced while the hull sections are welded together again. As expensive as this may be to accomplish, the use of a drydock might be even more expensive. Cofferdams are also used in some marine salvage operations. Cofferdams have been used to recover aircraft from water as well, as in the case of Avro Lancaster ED603, which was recovered from the IJsselmeer in 2023 using a cofferdam, allowing for close examination of the wreckage, as well as to locate and repatriate the remains of its crew. Examples A 100-ton open caisson that was lowered more than a mile to the sea floor in attempts to stop the flow of oil in the Deepwater Horizon oil spill has been called a cofferdam. A cofferdam over 1 mile long was built to permit the construction of the Livingstone Channel in the Detroit River. See main article at Stony Island. The museum battleships USS Alabama (BB-60) and USS North Carolina (BB-55) have had cofferdams since 2003 and 2018, respectively.
This saves much money compared to towing and dry docking the ships, and it also provides additional security, reducing the chance of them sinking and becoming impossible to repair. Types Several types of structure performing this function can be distinguished, depending on how they are constructed and how they are used. Civil and coastal engineering In civil and coastal engineering applications cofferdams are usually made from interlocking steel sheet piles which are driven deep into the bed of the water source in order to create a temporary dam behind which the engineering contractors can carry out their works. After the construction project is complete the sheet piles can then be removed and the area behind them rewetted. Naval architecture A cofferdam is a space between two watertight bulkheads or decks within a ship. It is usually a void (empty) space intended to ensure that the contents of nearly adjacent tanks cannot leak directly from one to the other, which would result in contamination of the contents of one or both of the compartments. The cofferdam would be kept empty at all times and the ship may have sensors within it to warn if it has begun to fill with liquid. If two different cargoes that react dangerously with each other are carried on the same vessel, one or more cofferdams are usually required between the cargo spaces. Marine salvage When all or part of the main deck of a sunken ship is submerged, flooded spaces cannot be dewatered until all openings are sealed or the effective freeboard is extended above the high water level. One method of doing this is to build a temporary watertight extension of the entire hull of the ship, or the space to be dewatered, to the surface. This watertight extension is a cofferdam. Although they are temporary structures, cofferdams for this purpose have to be strongly built, adequately stiffened, and reinforced to withstand the hydrostatic and other loads that they will be subjected to. Large cofferdams are normally restricted to harbor operations. Complete cofferdams cover most or all of the sunken vessel and are equivalent to extensions of the ship's sides to above the water surface. Partial cofferdams are constructed around moderate-sized openings or areas such as a cargo hatch or small deckhouse. They can often be prefabricated and installed as a unit, or prefabricated panels can be joined during erection. When partial cofferdams are used, it may be necessary to compensate for hydrostatic pressure on the deck by shoring the decks. With both complete and partial cofferdams, there is usually a large free surface in the spaces being pumped. Sometimes this can be limited by dewatering one compartment at a time, or in groups, taking into account the beam strength loads on the ship induced by the load distribution. Small cofferdams are used for pumping or to allow salvors access to spaces that are covered by water at some stage of the tide. They are usually prefabricated and fitted around minor openings. Diving work on cofferdams often involves clearing obstructions, fitting, and fastening, including underwater welding, and where necessary, caulking, bracing and shoring the adjacent structure. Ship husbandry There are two common types of dry chambers used in underwater ship husbandry. Open bottom cofferdams allow divers direct access to the enclosed hull area, system, or opening. The flange sides of the chamber secure and seal against the hull, acting as an airtight boundary.
Open bottom cofferdams are typically used as diver work space for rigging or welding and ventilation for welding or epoxy cure, where there is no opening to the interior of the vessel, or the interior is pressurised in this area. The air space is at the pressure of the water surface at the bottom of the chamber. Open top cofferdams allow surface access to the work area below the waterline, and are at atmospheric pressure. Openings through the hull to the interior of the ship are possible. Portable cofferdams Portable cofferdams can be inflatable or frame and fabric cofferdams that can be reused. Inflatable cofferdams are stretched across the site, then inflated with water from the prospected dry area. Frame and fabric cofferdams are erected in the water and covered with watertight fabric. Once the area is dry, water still remaining from the dry area can be siphoned over to the wet area. See also Caisson (engineering) Causeway Dental dam References External links Dams by type Naval architecture
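The hydrostatic loads mentioned above grow quickly with depth, which is why sheet-pile and salvage cofferdams need substantial stiffening and bracing. The following is a minimal sketch of the usual first estimate, assuming still fresh water, a vertical wall and a fully dewatered interior; the function names and the example depth are illustrative and not taken from the article.

```python
# Rough hydrostatic loading on a vertical cofferdam wall (per metre run of wall).
# Simplified textbook case: still fresh water outside, dewatered interior.

RHO_WATER = 1000.0   # kg/m^3, fresh water
G = 9.81             # m/s^2

def pressure_at_depth(depth_m: float) -> float:
    """Gauge hydrostatic pressure (Pa) at a given depth below the water surface."""
    return RHO_WATER * G * depth_m

def resultant_force_per_metre(water_depth_m: float) -> float:
    """Resultant horizontal force (N per metre of wall) from the triangular
    pressure distribution: 0.5 * rho * g * h^2."""
    return 0.5 * RHO_WATER * G * water_depth_m ** 2

if __name__ == "__main__":
    h = 6.0  # illustrative retained water depth in metres
    print(f"pressure at the base: {pressure_at_depth(h) / 1000:.1f} kPa")
    print(f"force per metre of wall: {resultant_force_per_metre(h) / 1000:.1f} kN/m")
    # The resultant acts at h/3 above the base, which is why wales and struts
    # are concentrated toward the bottom of deep cofferdams.
```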
Cofferdam
[ "Engineering" ]
1,665
[ "Naval architecture", "Marine engineering" ]
1,301,093
https://en.wikipedia.org/wiki/Two-port%20network
In electronics, a two-port network (a kind of four-terminal network or quadripole) is an electrical network (i.e. a circuit) or device with two pairs of terminals to connect to external circuits. Two terminals constitute a port if the currents applied to them satisfy the essential requirement known as the port condition: the current entering one terminal must equal the current emerging from the other terminal on the same port. The ports constitute interfaces where the network connects to other networks, the points where signals are applied or outputs are taken. In a two-port network, often port 1 is considered the input port and port 2 is considered the output port. It is commonly used in mathematical circuit analysis. Application The two-port network model is used in mathematical circuit analysis techniques to isolate portions of larger circuits. A two-port network is regarded as a "black box" with its properties specified by a matrix of numbers. This allows the response of the network to signals applied to the ports to be calculated easily, without solving for all the internal voltages and currents in the network. It also allows similar circuits or devices to be compared easily. For example, transistors are often regarded as two-ports, characterized by their -parameters (see below) which are listed by the manufacturer. Any linear circuit with four terminals can be regarded as a two-port network provided that it does not contain an independent source and satisfies the port conditions. Examples of circuits analyzed as two-ports are filters, matching networks, transmission lines, transformers, and small-signal models for transistors (such as the hybrid-pi model). The analysis of passive two-port networks is an outgrowth of reciprocity theorems first derived by Lorentz. In two-port mathematical models, the network is described by a 2 by 2 square matrix of complex numbers. The common models that are used are referred to as -parameters, -parameters, -parameters, -parameters, and -parameters, each described individually below. These are all limited to linear networks since an underlying assumption of their derivation is that any given circuit condition is a linear superposition of various short-circuit and open circuit conditions. They are usually expressed in matrix notation, and they establish relations between the variables , voltage across port 1 , current into port 1 , voltage across port 2 , current into port 2 which are shown in figure 1. The difference between the various models lies in which of these variables are regarded as the independent variables. These current and voltage variables are most useful at low-to-moderate frequencies. At high frequencies (e.g., microwave frequencies), the use of power and energy variables is more appropriate, and the two-port current–voltage approach is replaced by an approach based upon scattering parameters. General properties There are certain properties of two-ports that frequently occur in practical networks and can be used to greatly simplify the analysis. These include: Reciprocal networks A network is said to be reciprocal if the voltage appearing at port 2 due to a current applied at port 1 is the same as the voltage appearing at port 1 when the same current is applied to port 2. Exchanging voltage and current results in an equivalent definition of reciprocity. 
A network that consists entirely of linear passive components (that is, resistors, capacitors and inductors) is usually reciprocal, a notable exception being passive circulators and isolators that contain magnetized materials. In general, it will not be reciprocal if it contains active components such as generators or transistors. Symmetrical networks A network is symmetrical if its input impedance is equal to its output impedance. Most often, but not necessarily, symmetrical networks are also physically symmetrical. Sometimes also antimetrical networks are of interest. These are networks where the input and output impedances are the duals of each other. Lossless network A lossless network is one which contains no resistors or other dissipative elements. Impedance parameters (z-parameters) where All the -parameters have dimensions of ohms. For reciprocal networks . For symmetrical networks . For reciprocal lossless networks all the are purely imaginary. Example: bipolar current mirror with emitter degeneration Figure 3 shows a bipolar current mirror with emitter resistors to increase its output resistance. Transistor is diode connected, which is to say its collector-base voltage is zero. Figure 4 shows the small-signal circuit equivalent to Figure 3. Transistor is represented by its emitter resistance : a simplification made possible because the dependent current source in the hybrid-pi model for draws the same current as a resistor connected across . The second transistor is represented by its hybrid-pi model. Table 1 below shows the z-parameter expressions that make the z-equivalent circuit of Figure 2 electrically equivalent to the small-signal circuit of Figure 4. The negative feedback introduced by resistors can be seen in these parameters. For example, when used as an active load in a differential amplifier, , making the output impedance of the mirror approximately compared to only without feedback (that is with = 0Ω). At the same time, the impedance on the reference side of the mirror is approximately only a moderate value, but still larger than with no feedback. In the differential amplifier application, a large output resistance increases the difference-mode gain, a good thing, and a small mirror input resistance is desirable to avoid Miller effect. Admittance parameters (y-parameters) where All the Y-parameters have dimensions of siemens. For reciprocal networks . For symmetrical networks . For reciprocal lossless networks all the are purely imaginary. Hybrid parameters (h-parameters) where This circuit is often selected when a current amplifier is desired at the output. The resistors shown in the diagram can be general impedances instead. Off-diagonal -parameters are dimensionless, while diagonal members have dimensions the reciprocal of one another. For reciprocal networks . For symmetrical networks . For reciprocal lossless networks and are real, while and are purely imaginary. Example: common-base amplifier Note: Tabulated formulas in Table 2 make the -equivalent circuit of the transistor from Figure 6 agree with its small-signal low-frequency hybrid-pi model in Figure 7. Notation: is base resistance of transistor, is output resistance, and is mutual transconductance. The negative sign for reflects the convention that are positive when directed into the two-port. A non-zero value for means the output voltage affects the input voltage, that is, this amplifier is bilateral. If , the amplifier is unilateral. 
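The parameter definitions and the reciprocity and symmetry conditions discussed in this section can be made concrete with a small numerical sketch. The standard textbook relations V1 = z11·I1 + z12·I2 and V2 = z21·I1 + z22·I2 are assumed here (the formulas themselves do not survive in this copy of the text), and the resistive T-network and its component values are illustrative rather than taken from the article.

```python
# z-parameters of a resistive T-network: series arms Ra (port 1 side) and
# Rb (port 2 side) with a shunt arm Rc between them. For this network
#   z11 = Ra + Rc,  z22 = Rb + Rc,  z12 = z21 = Rc.
# Values below are illustrative.

def t_network_z(Ra: float, Rb: float, Rc: float):
    z11 = Ra + Rc
    z22 = Rb + Rc
    z12 = z21 = Rc
    return [[z11, z12], [z21, z22]]

Z = t_network_z(Ra=10.0, Rb=20.0, Rc=50.0)
print("z-parameters (ohms):", Z)
print("reciprocal (z12 == z21):", Z[0][1] == Z[1][0])   # true for any passive T-network
print("symmetrical (z11 == z22):", Z[0][0] == Z[1][1])  # only true when Ra == Rb
```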
History The -parameters were initially called series-parallel parameters. The term hybrid to describe these parameters was coined by D. A. Alsberg in 1953 in "Transistor metrology". In 1954 a joint committee of the IRE and the AIEE adopted the term -parameters and recommended that these become the standard method of testing and characterising transistors because they were "peculiarly adaptable to the physical characteristics of transistors". In 1956, the recommendation became an issued standard; 56 IRE 28.S2. Following the merge of these two organisations as the IEEE, the standard became Std 218-1956 and was reaffirmed in 1980, but has now been withdrawn. Inverse hybrid parameters (g-parameters) where Often this circuit is selected when a voltage amplifier is wanted at the output. Off-diagonal g-parameters are dimensionless, while diagonal members have dimensions the reciprocal of one another. The resistors shown in the diagram can be general impedances instead. Example: common-base amplifier Note: Tabulated formulas in Table 3 make the -equivalent circuit of the transistor from Figure 8 agree with its small-signal low-frequency hybrid-pi model in Figure 9. Notation: is base resistance of transistor, is output resistance, and is mutual transconductance. The negative sign for reflects the convention that are positive when directed into the two-port. A non-zero value for means the output current affects the input current, that is, this amplifier is bilateral. If , the amplifier is unilateral. ABCD-parameters The -parameters are known variously as chain, cascade, or transmission parameters. There are a number of definitions given for parameters, the most common is, Note: Some authors chose to reverse the indicated direction of I2 and suppress the negative sign on I2. where For reciprocal networks . For symmetrical networks . For networks which are reciprocal and lossless, and are purely real while and are purely imaginary. This representation is preferred because when the parameters are used to represent a cascade of two-ports, the matrices are written in the same order that a network diagram would be drawn, that is, left to right. However, a variant definition is also in use, where The negative sign of arises to make the output current of one cascaded stage (as it appears in the matrix) equal to the input current of the next. Without the minus sign the two currents would have opposite senses because the positive direction of current, by convention, is taken as the current entering the port. Consequently, the input voltage/current matrix vector can be directly replaced with the matrix equation of the preceding cascaded stage to form a combined matrix. The terminology of representing the parameters as a matrix of elements designated etc. as adopted by some authors and the inverse parameters as a matrix of elements designated etc. is used here for both brevity and to avoid confusion with circuit elements. Table of transmission parameters The table below lists and inverse parameters for some simple network elements. Scattering parameters (S-parameters) The previous parameters are all defined in terms of voltages and currents at ports. -parameters are different, and are defined in terms of incident and reflected waves at ports. -parameters are used primarily at UHF and microwave frequencies where it becomes difficult to measure voltages and currents directly. On the other hand, incident and reflected power are easy to measure using directional couplers. 
The definition is, where the are the incident waves and the are the reflected waves at port . It is conventional to define the and in terms of the square root of power. Consequently, there is a relationship with the wave voltages (see main article for details). For reciprocal networks . For symmetrical networks . For antimetrical networks . For lossless reciprocal networks and Scattering transfer parameters (T-parameters) Scattering transfer parameters, like scattering parameters, are defined in terms of incident and reflected waves. The difference is that -parameters relate the waves at port 1 to the waves at port 2 whereas -parameters relate the reflected waves to the incident waves. In this respect -parameters fill the same role as parameters and allow the -parameters of cascaded networks to be calculated by matrix multiplication of the component networks. -parameters, like parameters, can also be called transmission parameters. The definition is, -parameters are not as easy to measure directly as -parameters. However, -parameters are easily converted to -parameters, see main article for details. Combinations of two-port networks When two or more two-port networks are connected, the two-port parameters of the combined network can be found by performing matrix algebra on the matrices of parameters for the component two-ports. The matrix operation can be made particularly simple with an appropriate choice of two-port parameters to match the form of connection of the two-ports. For instance, the -parameters are best for series connected ports. The combination rules need to be applied with care. Some connections (when dissimilar potentials are joined) result in the port condition being invalidated and the combination rule will no longer apply. A Brune test can be used to check the permissibility of the combination. This difficulty can be overcome by placing 1:1 ideal transformers on the outputs of the problem two-ports. This does not change the parameters of the two-ports, but does ensure that they will continue to meet the port condition when interconnected. An example of this problem is shown for series-series connections in figures 11 and 12 below. Series-series connection When two-ports are connected in a series-series configuration as shown in figure 10, the best choice of two-port parameter is the -parameters. The -parameters of the combined network are found by matrix addition of the two individual -parameter matrices. As mentioned above, there are some networks which will not yield directly to this analysis. A simple example is a two-port consisting of a -network of resistors and . The -parameters for this network are; Figure 11 shows two identical such networks connected in series-series. The total -parameters predicted by matrix addition are; However, direct analysis of the combined circuit shows that, The discrepancy is explained by observing that of the lower two-port has been by-passed by the short-circuit between two terminals of the output ports. This results in no current flowing through one terminal in each of the input ports of the two individual networks. Consequently, the port condition is broken for both the input ports of the original networks since current is still able to flow into the other terminal. This problem can be resolved by inserting an ideal transformer in the output port of at least one of the two-port networks. 
While this is a common text-book approach to presenting the theory of two-ports, the practicality of using transformers is a matter to be decided for each individual design. Parallel-parallel connection When two-ports are connected in a parallel-parallel configuration as shown in figure 13, the best choice of two-port parameter is the -parameters. The -parameters of the combined network are found by matrix addition of the two individual -parameter matrices. Series-parallel connection When two-ports are connected in a series-parallel configuration as shown in figure 14, the best choice of two-port parameter is the -parameters. The -parameters of the combined network are found by matrix addition of the two individual -parameter matrices. Parallel-series connection When two-ports are connected in a parallel-series configuration as shown in figure 15, the best choice of two-port parameter is the -parameters. The -parameters of the combined network are found by matrix addition of the two individual -parameter matrices. Cascade connection When two-ports are connected with the output port of the first connected to the input port of the second (a cascade connection) as shown in figure 16, the best choice of two-port parameter is the -parameters. The -parameters of the combined network are found by matrix multiplication of the two individual -parameter matrices. A chain of two-ports may be combined by matrix multiplication of the matrices. To combine a cascade of -parameter matrices, they are again multiplied, but the multiplication must be carried out in reverse order, so that; Example Suppose we have a two-port network consisting of a series resistor followed by a shunt capacitor . We can model the entire network as a cascade of two simpler networks: The transmission matrix for the entire network is simply the matrix multiplication of the transmission matrices for the two network elements: Thus: Interrelation of parameters Where is the determinant of . Certain pairs of matrices have a particularly simple relationship. The admittance parameters are the matrix inverse of the impedance parameters, the inverse hybrid parameters are the matrix inverse of the hybrid parameters, and the form of the -parameters is the matrix inverse of the form. That is, Networks with more than two ports While two port networks are very common (e.g., amplifiers and filters), other electrical networks such as directional couplers and circulators have more than 2 ports. The following representations are also applicable to networks with an arbitrary number of ports: Admittance () parameters Impedance () parameters Scattering () parameters For example, three-port impedance parameters result in the following relationship: However the following representations are necessarily limited to two-port devices: Hybrid () parameters Inverse hybrid () parameters Transmission () parameters Scattering transfer () parameters Collapsing a two-port to a one port A two-port network has four variables with two of them being independent. If one of the ports is terminated by a load with no independent sources, then the load enforces a relationship between the voltage and current of that port. A degree of freedom is lost. The circuit now has only one independent parameter. The two-port becomes a one-port impedance to the remaining independent variable. 
For example, consider impedance parameters Connecting a load, onto port 2 effectively adds the constraint The negative sign is because the positive direction for is directed into the two-port instead of into the load. The augmented equations become The second equation can be easily solved for as a function of and that expression can replace in the first equation leaving ( and and ) as functions of So, in effect, sees an input impedance and the two-port's effect on the input circuit has been effectively collapsed down to a one-port; i.e., a simple two terminal impedance. See also Admittance parameters Impedance parameters Scattering parameters Transfer-matrix method (optics) for reflection/transmission calculation of light waves in transparent layers Ray transfer matrix for calculation of paraxial propagation of a light ray Notes References Bibliography Carlin, HJ, Civalleri, PP, Wideband circuit design, CRC Press, 1998. . William F. Egan, Practical RF system design, Wiley-IEEE, 2003 . Farago, PS, An Introduction to Linear Network Analysis, The English Universities Press Ltd, 1961. Ghosh, Smarajit, Network Theory: Analysis and Synthesis, Prentice Hall of India . Matthaei, Young, Jones, Microwave Filters, Impedance-Matching Networks, and Coupling Structures, McGraw-Hill, 1964. Mahmood Nahvi, Joseph Edminister, Schaum's outline of theory and problems of electric circuits, McGraw-Hill Professional, 2002 . Dragica Vasileska, Stephen Marshall Goodnick, Computational electronics, Morgan & Claypool Publishers, 2006 . Clayton R. Paul, Analysis of Multiconductor Transmission Lines, John Wiley & Sons, 2008 , 9780470131541. h-parameters history D. A. Alsberg, "Transistor metrology", IRE Convention Record, part 9, pp. 39–44, 1953. also published as "Transistor metrology", Transactions of the IRE Professional Group on Electron Devices, vol. ED-1, iss. 3, pp. 12–17, August 1954. AIEE-IRE joint committee, "Proposed methods of testing transistors", Transactions of the American Institute of Electrical Engineers: Communications and Electronics, pp. 725–740, January 1955. "IRE Standards on solid-state devices: methods of testing transistors, 1956", Proceedings of the IRE, vol. 44, iss. 11, pp. 1542–1561, November, 1956. IEEE Standard Methods of Testing Transistors, IEEE Std 218-1956. Transfer functions
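A short numerical sketch of the two operations just described, cascading by matrix multiplication and collapsing to a one-port input impedance, using the series-resistor and shunt-capacitor example from the cascade section. The elementary ABCD matrices and the input-impedance expression below are the standard textbook forms (they are not reproduced in this copy of the text), and the component values, frequency and load are illustrative.

```python
# Cascade of two elementary two-ports in ABCD form:
#   series impedance Z:  [[1, Z], [0, 1]]
#   shunt admittance Y:  [[1, 0], [Y, 1]]
# The overall matrix is the left-to-right product, and a load ZL on port 2
# collapses the two-port to the input impedance (A*ZL + B) / (C*ZL + D).
import cmath

def series_impedance(Z: complex):
    return [[1, Z], [0, 1]]

def shunt_admittance(Y: complex):
    return [[1, 0], [Y, 1]]

def cascade(T1, T2):
    """2x2 matrix product T1 @ T2."""
    return [[T1[0][0]*T2[0][0] + T1[0][1]*T2[1][0], T1[0][0]*T2[0][1] + T1[0][1]*T2[1][1]],
            [T1[1][0]*T2[0][0] + T1[1][1]*T2[1][0], T1[1][0]*T2[0][1] + T1[1][1]*T2[1][1]]]

def input_impedance(T, ZL: complex) -> complex:
    (A, B), (C, D) = T
    return (A * ZL + B) / (C * ZL + D)

R, C_farads, f = 100.0, 1e-6, 1e3           # illustrative values
Y_C = 1j * 2 * cmath.pi * f * C_farads      # admittance of the shunt capacitor
T = cascade(series_impedance(R), shunt_admittance(Y_C))
print("ABCD matrix:", T)
print("Zin with a 50-ohm load on port 2:", input_impedance(T, 50.0))
```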
Two-port network
[ "Engineering" ]
3,880
[ "Two-port networks", "Electronic engineering" ]
1,301,559
https://en.wikipedia.org/wiki/Leibniz%20formula%20for%20%CF%80
In mathematics, the Leibniz formula for π, named after Gottfried Wilhelm Leibniz, states that π/4 = 1 − 1/3 + 1/5 − 1/7 + 1/9 − ⋯, an alternating series. It is sometimes called the Madhava–Leibniz series as it was first discovered by the Indian mathematician Madhava of Sangamagrama or his followers in the 14th–15th century (see Madhava series), and was later independently rediscovered by James Gregory in 1671 and Leibniz in 1673. The Taylor series for the inverse tangent function, often called Gregory's series, is arctan x = x − x³/3 + x⁵/5 − x⁷/7 + ⋯. The Leibniz formula is the special case x = 1. It is also the Dirichlet L-series of the non-principal Dirichlet character of modulus 4 evaluated at s = 1, and therefore the value β(1) of the Dirichlet beta function. Proofs Proof 1 Considering only the integral in the last term, we have: Therefore, by the squeeze theorem, as , we are left with the Leibniz series: Proof 2 Let , when , the series converges uniformly, then Therefore, if approaches so that it is continuous and converges uniformly, the proof is complete, where, the series to be converges by the Leibniz's test, and also, approaches from within the Stolz angle, so from Abel's theorem this is correct. Convergence Leibniz's formula converges extremely slowly: it exhibits sublinear convergence. Calculating π to 10 correct decimal places using direct summation of the series requires precisely five billion terms (one needs to apply the Calabrese error bound). To get 4 correct decimal places (error of 0.00005) one needs 5000 terms. Even better bounds than the Calabrese or Johnsonbaugh error bounds are available. However, the Leibniz formula can be used to calculate π to high precision (hundreds of digits or more) using various convergence acceleration techniques. For example, the Shanks transformation, Euler transform or Van Wijngaarden transformation, which are general methods for alternating series, can be applied effectively to the partial sums of the Leibniz series. Further, combining terms pairwise gives the non-alternating series π/4 = 2/(1·3) + 2/(5·7) + 2/(9·11) + ⋯, which can be evaluated to high precision from a small number of terms using Richardson extrapolation or the Euler–Maclaurin formula. This series can also be transformed into an integral by means of the Abel–Plana formula and evaluated using techniques for numerical integration. Unusual behaviour If the series is truncated at the right time, the decimal expansion of the approximation will agree with that of π for many more digits, except for isolated digits or digit groups. For example, taking five million terms yields an expansion that agrees with π apart from a few such isolated wrong digits. The errors can in fact be predicted; they are generated by the Euler numbers according to an asymptotic formula whose parameter is an integer divisible by 4. If it is chosen to be a power of ten, each term in the right sum becomes a finite decimal fraction. The formula is a special case of the Euler–Boole summation formula for alternating series, providing yet another example of a convergence acceleration technique that can be applied to the Leibniz series. In 1992, Jonathan Borwein and Mark Limber used the first thousand Euler numbers to calculate π to 5,263 decimal places with the Leibniz formula. Euler product The Leibniz formula can be interpreted as a Dirichlet series using the unique non-principal Dirichlet character modulo 4. As with other Dirichlet series, this allows the infinite sum to be converted to an infinite product with one term for each prime number. Such a product is called an Euler product.
It is: π/4 = 3/4 · 5/4 · 7/8 · 11/12 · 13/12 · 17/16 · 19/20 · 23/24 · ⋯ In this product, each term is a superparticular ratio, each numerator is an odd prime number, and each denominator is the nearest multiple of 4 to the numerator. The product is conditionally convergent; its terms must be taken in order of increasing p. See also List of formulae involving π References Pi algorithms Articles containing proofs Gottfried Wilhelm Leibniz Mathematical series
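A small numerical sketch of the slow convergence and of a one-step acceleration; the term counts are arbitrary, and the averaging trick shown is only the simplest instance of the Euler-type transformations mentioned above.

```python
# Slow convergence of the Leibniz series, and a one-step acceleration:
# averaging two consecutive partial sums (the first step of the Euler transform).
import math

def partial_sum(n_terms: int) -> float:
    """n-term partial sum of 4*(1 - 1/3 + 1/5 - ...)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

for n in (10, 1_000, 100_000):
    direct = partial_sum(n)
    averaged = 0.5 * (partial_sum(n) + partial_sum(n + 1))
    print(f"{n:>7} terms  direct error {abs(direct - math.pi):.2e}"
          f"  averaged error {abs(averaged - math.pi):.2e}")
# The direct error shrinks only like 1/n (sublinear convergence); even this
# crude averaging improves it to roughly O(1/n^2), hinting at why Shanks- and
# Euler-type transformations can recover many digits from few terms.
```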
Leibniz formula for π
[ "Mathematics" ]
818
[ "Sequences and series", "Mathematical structures", "Series (mathematics)", "Pi algorithms", "Calculus", "Articles containing proofs", "Pi" ]
1,301,665
https://en.wikipedia.org/wiki/Registration%2C%20Evaluation%2C%20Authorisation%20and%20Restriction%20of%20Chemicals
Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) is a European Union regulation dating from 18 December 2006, amended on 16 December 2008 by Regulation (EC) No 1272/2008. REACH addresses the production and use of chemical substances, and their potential impacts on both human health and the environment. Its 849 pages took seven years to pass, and it has been described as the most complex legislation in the Union's history and the most important in 20 years. It is the strictest law to date regulating chemical substances and will affect industries throughout the world. REACH entered into force on 1 June 2007, with a phased implementation over the next decade. The regulation also established the European Chemicals Agency, which manages the technical, scientific and administrative aspects of REACH. Overview When REACH is fully in force, it will require all companies manufacturing or importing chemical substances into the European Union in quantities of one tonne or more per year to register these substances with a new European Chemicals Agency (ECHA) in Helsinki, Finland. Since REACH applies to some substances that are contained in objects (articles in REACH terminology), any company importing goods into Europe could be affected. The European Chemicals Agency has set three major deadlines for registration of chemicals. In general these are determined by tonnage manufactured or imported, with 1000 tonnes/a. being required to be registered by 1 December 2010, 100 tonnes/a. by 1 June 2013 and 1 tonne/a. by 1 June 2018. In addition, chemicals of higher concern or toxicity also have to meet the 2010 deadline. About 143,000 chemical substances marketed in the European Union were pre-registered by the 1 December 2008 deadline. Although pre-registering was not mandatory, it allows potential registrants much more time before they have to fully register. Supply of substances to the European market which have not been pre-registered or registered is illegal (known in REACH as "no data, no market"). REACH also addresses the continued use of chemical substances of very high concern (SVHC) because of their potential negative impacts on human health or the environment. From 1 June 2011, the European Chemicals Agency must be notified of the presence of SVHCs in articles if the total quantity used is more than one tonne per year and the SVHC is present at more than 0.1% of the mass of the object. Some uses of SVHCs may be subject to prior authorisation from the European Chemicals Agency, and applicants for authorisation will have to include plans to replace the use of the SVHC with a safer alternative (or, if no safer alternative exists, the applicant must work to find one) – known as substitution. There were 219 SVHCs on the candidate list for authorization. REACH applies to all chemicals imported or produced in the EU. The European Chemicals Agency will manage the technical, scientific and administrative aspects of the REACH system. To somewhat simplify the registration of the 143,000 substances and to limit vertebrate animal testing as far as possible, substance information exchange forums (SIEFs) are formed amongst legal entities (such as manufacturers, importers, and data holders) who are dealing with the same substance. This allows them to join forces and finances to create a single registration dossier.
However, this creates a series of new problems as a SIEF is the cooperation between sometimes a thousand legal entities that did not know each other at all before but suddenly must: find each other and start communicating openly and honestly start sharing data start sharing costs in a fair and transparent way democratically and in full consensus take the most complex decisions in order to complete a several thousand end points dossier in a limited time. The European Commission supports businesses affected by REACH by handing out – free of charge – a software application (IUCLID) that simplifies capturing, managing, and submitting data on chemical properties and effects. Such submission is a mandatory part of the registration process. Under certain circumstances the performance of a chemical safety assessment (CSA) is mandatory and a chemical safety report (CSR) assuring the safe use of the substance has to be submitted with the dossier. Dossier submission is done using the web-based software REACH-IT. The aim of REACH is to improve the protection of human health and the environment by identification of the intrinsic properties of chemical substances. At the same time, innovative capability and competitiveness of the EU chemicals industry should be enhanced. Background The European Commission's (EC) White Paper of 2001 on a 'future chemical strategy' proposed a system that requires chemicals manufactured in quantities of greater than 1 tonne to be 'registered', those manufactured in quantities greater than 100 tonnes to be 'evaluated', and certain substances of high concern (for example carcinogenic, mutagenic and toxic to reproduction – CMRs) to be 'authorised'. The EC adopted its proposal for a new scheme to manage the manufacture, importation and supply of chemicals in Europe on in October 2003. This proposal eventually became law once the European Parliament officially approved its final text of REACH. It came into force on 1 June 2007. Requirements One of the major elements of the REACH regulation is the requirement to communicate information on chemicals up and down the supply chain. This ensures that manufacturers, importers, and also their customers are aware of information relating to health and safety of the products supplied. For many retailers the obligation to provide information about substances in their products within 45 days of receipt of a request from a consumer is particularly challenging. Having detailed information on the substances present in their products will allow retailers to work with the manufacturing base to substitute or remove potentially harmful substances from products. The list of harmful substances is continuously growing and requires organizations to constantly monitor any announcements and additions to the REACH scope. This can be done on the European Chemicals Agency's website. Registration A requirement is to collect, collate and submit data to the European Chemicals Agency (ECHA) on the hazardous properties of all substances (except Polymers and non-isolated intermediates) manufactured or imported into the EU in quantities above 1 tonne per year. Certain substances of high concern, such as carcinogenic, mutagenic and reproductive toxic substances (CMRs) will have to be authorised. 
Chemicals will be registered in three phases according to the tonnage of the substance evaluation: More than 1000 tonnes a year, or substances of highest concern, must be registered in the first 3 years; 100–1000 tonnes a year must be registered in the first 6 years; 1–100 tonnes a year must be registered in the first 11 years. In addition, industry should prepare risk assessments and provide controls measures for using the substance safely to downstream users. Evaluation Evaluation provides a means for the authorities to require registrants, and in very limited cases downstream users, to provide further information. There are two types of evaluation: dossier evaluation and substance evaluation: Dossier evaluation is conducted by authorities to examine proposals for testing to ensure that unnecessary animal tests and costs are avoided, and to check the compliance of registration dossier with the registration requirements. Chemical companies failed to provide "important safety information" in nearly three quarters (74% or 211 of 286) of cases checked by authorities, according to the European Chemicals Agency's 2018 annual progress report. “The numbers show a similar picture to previous years," it said. Industry group Cefic acknowledged the problem. Substance evaluation is performed by the relevant authorities when there is a reason to suspect that a substance presents a risk to human health or the environment (e.g. because of its structural similarity to another substance). Therefore, all registration dossiers submitted for a substance are examined together and any other available information is taken into account. Substance evaluation is carried out under a programme known as the Community Rolling Action Plan (CoRAP). An independent review of progress by national officials published in late 2018 found that 352 substances have so far been prioritised for substance evaluation with 94 completed. For almost half the 94, officials concluded that existing commercial use of the substance is unsafe for human health and/or the environment. Risk management has been initiated for twelve substances since REACH came into force. For 74% of substances (34 out of 46), concerns were demonstrated, but no actual regulatory follow-up has yet been initiated. In addition, national officials concluded that 64% of the substances under evaluation (126 out of 196) lacked the information needed to demonstrate the safety of the chemicals marketed in Europe due to inadequate industry data. Authorisation REACH allows restricted substances of very high concern to continue being used, subject to authorisation. This authorisation requirement attempts to ensure that risks from the use of such substances are either adequately controlled or justified by socio-economic grounds, having taken into account the available information on alternative substances or processes. The Regulation enables restrictions of use to be introduced across the European Community where this is shown to be necessary. Member States or the Commission may prepare such proposals. By March 2019, authorisation had been granted 185 times, with no eligible request ever having been rejected. NGOs have complained that authorisations have been granted despite safer alternatives existing and that this was hindering substitution. In March 2019, the European Court of Justice revoked an authorisation in a ruling that criticised the European Chemicals Agency for failing to identify a safer alternative. 
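The phased registration described above amounts to a simple mapping from tonnage band to deadline. The sketch below only summarises the dates quoted earlier in the article and is not legal guidance; substances of higher concern may fall under the earliest deadline regardless of tonnage, and the function name is illustrative.

```python
# Phase-in REACH registration deadlines by tonnage band, as described above
# (>= 1000 t/a by 1 Dec 2010, >= 100 t/a by 1 June 2013, >= 1 t/a by 1 June 2018).
from datetime import date
from typing import Optional

def registration_deadline(tonnes_per_year: float) -> Optional[date]:
    """Return the phase-in deadline for a given annual tonnage, or None if
    the substance falls below the 1 tonne/year registration threshold."""
    if tonnes_per_year >= 1000:
        return date(2010, 12, 1)
    if tonnes_per_year >= 100:
        return date(2013, 6, 1)
    if tonnes_per_year >= 1:
        return date(2018, 6, 1)
    return None

print(registration_deadline(2500))   # 2010-12-01
print(registration_deadline(12))     # 2018-06-01
print(registration_deadline(0.5))    # None (below the 1 t/a threshold)
```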
Information exchange Manufacturers and importers should develop risk reduction measures for all known uses of the chemical including downstream uses. Downstream users such as plastic pipe producers should provide details of their uses to their suppliers. In cases where downstream users decide not to disclose this information, they need to have their own CSR. History REACH is the product of a wide-ranging overhaul of EU chemical policy. It passed the first reading in the European Parliament on 17 November 2005, and the Council of Ministers reached a political agreement for a common position on 13 December 2005. The European Parliament approved REACH on 13 December 2006 and the Council of Ministers formally adopted it on 18 December 2006. Weighing up expenditure versus profit has always been a significant issue, with the estimated cost of compliance being around €5 billion over 11 years, and the assumed health benefits of billions of euros saved in healthcare costs. However, there have been different studies on the estimated cost which vary considerably in their outcomes. It came into force on 20 January 2009, and will be fully implemented by 2015. A separate regulation – the CLP Regulation (for "Classification, Labelling, Packaging") – implements the United Nations Globally Harmonized System of Classification and Labelling of Chemicals (GHS) and will steadily replace the previous Dangerous Substances Directive and Dangerous Preparations Directive. The REACH regulation was amended in April 2018 to include specific information requirements for nanomaterials. In the European Green Deal of 2020, a commitment was made to update the REACH regulation to ban between 7,000 and 12,000 toxic substances in all consumer products, except where truly essential. The goal was among the priorities of the European Commission, but is in danger of being radically revised due to lobbying by the EU chemical industry and the positions taken by the European People's Party. Rationale The legislation was proposed under dual reasoning: protection of human health and protection of the environment. Using potentially toxic substances (such as phthalates or brominated flame retardants) is deemed undesirable and REACH will force the use of certain substances to be phased out. Using potentially toxic substances in products other than those ingested by humans (such as electronic devices) may seem to be safe, but there are several ways in which chemicals can enter the human body and the environment. Substances can leave particles during consumer use, for example into the air where they can be inhaled or ingested. Even where they might not do direct harm to humans, they can contaminate the air or water, and can enter the food chain through plants, fish or other animals. According to the European Commission, little safety information exists for 99 percent of the tens of thousands of chemicals placed on the market before 1981. There were 100,106 chemicals in use in the EU in 1981, when the last survey was performed. Of these only 3,000 have been tested and over 800 are known to be carcinogenic, mutagenic or toxic to reproduction. These are listed in the Annex 1 of the Dangerous Substances Directive (now Annex VI of the CLP Regulation). Continued use of many toxic chemicals is sometimes justified because "at very low levels they are not a concern to health". However, many of these substances may bioaccumulate in the human body, thus reaching dangerous concentrations.
They may also chemically react with one another, producing new substances with new risks. In non-EU countries A number of countries outside of the European Union have started to implement REACH regulations or are in the process of adopting such a regulatory framework to approach a more globalized system of chemicals registration under the Globally Harmonized System of Classification and Labelling of Chemicals (GHS). Balkan countries such as Croatia and Serbia are in the process of adopting the EU REACH system under the auspices of the EU IPA programme. Switzerland has moved towards implementation of REACH through partial revision of the Swiss Chemical Ordinance on February 1, 2009. The new Chemicals Management Regulation in Turkey is paving the way for the planned adoption of REACH in 2013. China has moved towards a more efficient and coherent system for the control of chemicals in compliance with GHS. In the UK, the government announced a "UK REACH" that the UK's Chemical Industry Association described as a "hugely expensive duplication" of the EU's safety data. The new regulations were to be enforced from October 2021 but deferred to October 2023, and then to October 2025. Following industry representations, the responsible Minister announced "that officials would now explore 'a new model' for UK REACH registrations that would look to 'reduce the need for replicating EU Reach data packages'". In March 2021, a group of more than 20 leading UK organisations, including the CHEM Trust and Breast Cancer UK, "rejected industry proposals to streamline UK Reach as a 'major weakening' of the envisaged post-Brexit regime". Controversy Over a decade after REACH came into force, progress has been slow. Of the 100,000 chemicals used in Europe today, “only a small fraction has been thoroughly evaluated by authorities regarding their health and environmental properties and impacts, and even fewer are actually regulated,” according to a report for the European Commission. Apart from the potential costs to industry and the complexity of the new law, REACH has also attracted concern because of animal testing. Animal tests on vertebrates are now required but allowed only once per each new substance and if suitable alternatives cannot be used. If a company pays for such tests, it must sell the rights of the results for a "reasonable" price, which is not defined. There are additional concerns that access to the necessary information may prove very costly for potential registrants needing to purchase it. On 8 June 2006, the REACH proposal was criticized by non-EU countries, including the United States, India and Brazil, which stated that the bill would hamper global trade. The cosmetics company Lush were critical of the legislation when it was first proposed in 2006, as they believed it would increase animal testing. The cosmetics company wrote to its European customers and also ran an in-store marketing campaign, asking for postcards objecting to the legislation be sent to MEPs, a move which resulted in 80,000 Lush customers sending postcards. In December 2006, Lush protested outside the European Parliament in Strasbourg, by dumping horse manure outside the building. An opinion in Nature in 2009 by Thomas Hartung and Constanza Rovida estimated that 54 million vertebrate animals would be used under REACH and that the costs would amount to €9.5 billion, set against the annual European industry annual turnover of €507 billion. 
Hartung is the former head of European Centre for the Validation of Alternative Methods (ECVAM). In a news release, ECHA criticised assumptions made by Hartung and Rovida; ECHA's alternative assumptions reduced sixfold the number of animals. Only representative services Only representatives are EU-based entities that must comply with REACH (Article 8) and should operate standard, transparent working practices. The Only Representative assumes responsibility and liability for fulfilling obligations of importers in accordance with REACH for substances being brought into the EU by a non-EU manufacturer. Non-EU consultancies offer "only representative" services, though according to REACH it is not possible to register a substance if your "only representative" consultancy company is not based in the EU, unless it is subcontracted to an EU-based registrant. The SIEFs will bring new challenges. An article in the business news service Chemical Watch described how some "pre-registrants" may simply be consultants hoping for work ("gold diggers") while others may be aiming to charge exorbitant rates for the data they have to offer ("jackals"). Example of chemical inventories in various countries/regions Source: Regulation (EC) Nr. 1907/2006 (REACH) AICS – Australian Inventory of Chemical Substances DSL – Canadian Domestic Substances List NDSL – Canadian Non-Domestic Substances List KECL (Korean ECL) – Korean Existing Chemicals List ENCS (MITI) – Japanese Existing and New Chemical Substances PICCS – Philippine Inventory of Chemicals and Chemical Substances TSCA – US Toxic Substances Control Act Giftliste 1 (Swiss list of toxic substances, repealed in 2005) Authorisation List The European Chemical Agency (ECHA) has published the REACH Authorisation List, in an effort to tighten the use of Substances of Very High Concern (SVHCs). The list is an official recommendation from the ECHA to the European Commission. The list is also regularly updated and expanded. Currently the Candidate List for Authorisation comprises a total of 233 SVHCs (see ECHA list at https://echa.europa.eu/candidate-list-table), some of which are already active on the Authorization List. To sell or use these substances, manufacturers, importers, and retailers in the European Union (EU) must apply for authorization from the ECHA. The applicant is to submit a chemical safety report on the risks entailed by the substance, as well as an analysis of possible alternative substances or technologies including present and future research and development processed. See also Chemicals Strategy for Sustainability Towards a Toxic-Free Environment Consumer protection Environmental health International Material Data System Kashinhou – Japanese law Pesticides in the European Union Porter hypothesis Quality of life Toxic Substances Control Act of 1976 – US law References External links European Chemicals Agency - The organization responsible for implementing REACH "What is REACH" European Commission REACH & GHS overview - for enterprise and industry Database of REACH consortia - Chemical Watch Development of a mechanistic model for the Advanced REACH Tool, TNO Report ECHA Guidance on REACH implementation Advanced REACH Tool (ART) Global Chemical Inventories REACH resources and tools - Institute of Occupational Medicine fact sheets REACH and nanomaterials Occupational safety and health law Toxicology European Union regulations Evaluation 2006 in European Union law Chemical safety Regulation of chemicals in the European Union
Registration, Evaluation, Authorisation and Restriction of Chemicals
[ "Chemistry", "Environmental_science" ]
3,895
[ "Chemical accident", "Regulation of chemicals in the European Union", "Toxicology", "Regulation of chemicals", "nan", "Chemical safety" ]
21,003,747
https://en.wikipedia.org/wiki/High-resolution%20melting%20analysis
High Resolution Melt (HRM) analysis is a powerful technique in molecular biology for the detection of mutations, polymorphisms and epigenetic differences in double-stranded DNA samples. It was discovered and developed by Idaho Technology and the University of Utah. It has advantages over other genotyping technologies, namely: It is cost-effective vs. other genotyping technologies such as sequencing and TaqMan SNP typing. This makes it ideal for large scale genotyping projects. It is fast and powerful thus able to accurately genotype many samples rapidly. It is simple. With a good quality HRM assay, powerful genotyping can be performed by non-geneticists in any laboratory with access to an HRM capable real-time PCR machine. Method HRM analysis is performed on double stranded DNA samples. Typically the user will use polymerase chain reaction (PCR) prior to HRM analysis to amplify the DNA region in which their mutation of interest lies. In the sample tube there are now many copies of the DNA region of interest. This region that is amplified is known as the amplicon. After the PCR process the HRM analysis begins. The process is simply a precise warming of the amplicon DNA from around 50 ˚C up to around 95 ˚C. At some point during this process, the melting temperature of the amplicon is reached and the two strands of DNA separate or "melt" apart. The key to HRM is to monitor this separation of strands in real-time. This is achieved by using a fluorescent dye. The dyes that are used for HRM are known as intercalating dyes and have a unique property. They bind specifically to double-stranded DNA and when they are bound they fluoresce brightly. In the absence of double stranded DNA they have nothing to bind to and they only fluoresce at a low level. At the beginning of the HRM analysis there is a high level of fluorescence in the sample because of the billions of copies of the amplicon. But as the sample is heated up and the two strands of the DNA melt apart, presence of double stranded DNA decreases and thus fluorescence is reduced. The HRM machine has a camera that watches this process by measuring the fluorescence. The machine then simply plots this data as a graph known as a melt curve, showing the level of fluorescence vs the temperature: Comparison of melt curves The melting temperature of the amplicon at which the two DNA strands come apart is entirely predictable. It is dependent on the sequence of the DNA bases. If you are comparing two samples from two different people, they should give exactly the same shaped melt curve. However, if one person has a mutation in the DNA region you have amplified, then this will alter the temperature at which the DNA strands melt apart. So now the two melt curves appear different. The difference may only be tiny, perhaps a fraction of a degree, but because the HRM machine has the ability to monitor this process in "high resolution", it is possible to accurately document these changes and therefore identify if a mutation is present or not. Wild type, heterozygote or homozygote? Things become slightly more complicated than this because organisms contain two (or more) copies of each gene, known as the two alleles. So, if a sample is taken from a patient and amplified using PCR both copies of the region of DNA (alleles) of interest are amplified. So if we are looking for mutation there are now three possibilities: Neither allele contains a mutation One or other allele contains a mutation Both alleles contain a mutation. 
These three scenarios are known as "Wild-type", "Heterozygote" or "Homozygote" respectively. Each gives a melt curve that is slightly different. With a high quality HRM assay it is possible to distinguish between all three of these scenarios. Homozygous allelic variants may be characterised by a temperature shift on the resulting melt curve produced by HRM analysis. In comparison, heterozygotes are characterised by changes in melt curve shape. This is due to base-pair mismatching generated as a result of destabilised heteroduplex annealing between wild-type and variant strands. These differences can be easily seen on the resulting melt curve, and the melt profile differences between the different genotypes can be amplified visually by generating a difference curve. Applications SNP typing/Point mutation detection Conventional SNP typing methods are typically time-consuming and expensive, requiring several probe-based assays to be multiplexed together or the use of DNA microarrays. HRM is more cost-effective and reduces the need to design multiple pairs of primers and the need to purchase expensive probes. The HRM method has been successfully used to detect a single G to A substitution in the gene Vssc (Voltage Sensitive Sodium Channel) which confers resistance to the acaricide permethrin in the scabies mite. This mutation results in a coding change in the protein (G1535D). The analysis of scabies mites collected from suspected permethrin-susceptible and permethrin-tolerant populations by HRM showed distinct melting profiles. The amplicons from the sensitive mites were observed to have a higher melting temperature relative to the tolerant mites, as expected from the higher thermostability of the GC base pair. In a field more relevant to clinical diagnostics, HRM has been shown to be suitable in principle for the detection of mutations in the breast cancer susceptibility genes BRCA1 and BRCA2. More than 400 mutations have been identified in these genes. The sequencing of genes is the gold standard for identifying mutations. Sequencing is time-consuming and labour-intensive and is often preceded by techniques used to identify heteroduplex DNA, which adds further to the time and labour required. HRM offers a faster and more convenient closed-tube method of assessing the presence of mutations and gives a result which can be further investigated if it is of interest. In a study carried out by Scott et al. in 2006, 3 cell lines harbouring different BRCA mutations were used to assess the HRM methodology. It was found that the melting profiles of the resulting PCR products could be used to distinguish the presence or absence of a mutation in the amplicon. Similarly, in 2007 Krypuy et al. showed that the careful design of HRM assays (with regard to primer placement) could be successfully employed to detect mutations in the TP53 gene, which encodes the tumour suppressor protein p53, in clinical samples of breast and ovarian cancer. Both these studies highlighted the fact that changes in the melting profile can be in the form of a shift in the melting temperature or an obvious difference in the shape of the melt curve. Both of these parameters are a function of the amplicon sequence. The consensus is that HRM is a cost-efficient method that can be employed as an initial screen for samples suspected of harbouring polymorphisms or mutations. This would reduce the number of samples which need to be investigated further using more conventional methods. 
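The shift-versus-shape distinction described above can be illustrated with a toy calculation. The sketch below assumes a simple two-state melting model and entirely hypothetical melting temperatures; it is not the output of any particular HRM instrument or assay, only an illustration of how a difference curve separates genotypes.

```python
import numpy as np

def melt_curve(temps, tm, width=0.5):
    """Two-state model: fraction of amplicon still double-stranded at each temperature."""
    return 1.0 / (1.0 + np.exp((temps - tm) / width))

temps = np.arange(75.0, 90.0, 0.1)  # degrees C, a typical HRM window for a short amplicon

# Hypothetical melting temperatures (illustrative values only)
wild_type  = melt_curve(temps, tm=82.0)
homozygote = melt_curve(temps, tm=81.6)  # homozygous variant: whole curve shifts
# Heterozygote: mixture of two homoduplexes plus less stable heteroduplexes,
# which changes the shape of the curve rather than simply shifting it
heterozygote = (0.25 * melt_curve(temps, tm=82.0)
                + 0.25 * melt_curve(temps, tm=81.6)
                + 0.50 * melt_curve(temps, tm=80.9))

# Difference curves relative to the wild-type reference, as HRM software plots them
diff_homo = homozygote - wild_type
diff_het  = heterozygote - wild_type
print("max |difference|, homozygous variant: %.3f" % np.max(np.abs(diff_homo)))
print("max |difference|, heterozygote:       %.3f" % np.max(np.abs(diff_het)))
```

In this toy model the homozygous variant produces a difference curve caused purely by the temperature shift, while the heterozygote produces a broader, differently shaped one, mirroring the shift-versus-shape behaviour described above.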
Zygosity testing Currently, there are many methods used to determine the zygosity status of a gene at a particular locus. These methods include the use of PCR with specifically designed probes to detect the variants of the genes (SNP typing is the simplest case). In cases where longer stretches of variation are implicated, post-PCR analysis of the amplicons may be required. Changes in enzyme restriction, electrophoretic and chromatographic profiles can be measured. These methods are usually more time-consuming and increase the risk of amplicon contamination in the laboratory, due to the need to work with high concentrations of amplicons in the lab post-PCR. The use of HRM reduces the time required for analysis and the risk of contamination. HRM is a more cost-effective solution, and the high resolution element not only allows the determination of homo- and heterozygosity, it also resolves information about the type of homo- and heterozygosity, with different gene variants giving rise to differing melt curve shapes. A study by Gundry et al. in 2003 showed that fluorescent labelling of one primer (of the pair) is favourable over using an intercalating dye such as SYBR Green I. However, progress has been made in the development and use of improved intercalating dyes which reduce the issue of PCR inhibition and concerns over non-saturating intercalation of the dye. Epigenetics The HRM methodology has also been exploited to provide a reliable analysis of the methylation status of DNA. This is of significance since changes to the methylation status of tumour suppressor genes, genes that regulate apoptosis and DNA repair, are characteristics of cancers and also have implications for responses to chemotherapy. For example, cancer patients can be more sensitive to treatment with DNA alkylating agents if the promoter of the DNA repair gene MGMT of the patient is methylated. In a study which tested the methylation status of the MGMT promoter in 19 colorectal samples, 8 samples were found to be methylated. Another study compared the predictive power of MGMT promoter methylation in 83 high-grade glioma patients obtained by MSP, pyrosequencing, and HRM. The HRM method was found to be at least equivalent to pyrosequencing in quantifying the methylation level. DNA can be treated by bisulphite modification, which converts unmethylated cytosines to uracil while leaving methylated cytosines unchanged. Therefore, PCR products resulting from a template that was originally unmethylated will have a lower melting point than those derived from a methylated template. HRM also offers the possibility of determining the proportion of methylation in a given sample, by comparing it to a standard curve which is generated by mixing different ratios of methylated and non-methylated DNA together. This can offer information regarding the degree of methylation that a tumour may have and thus give an indication of the character of the tumour and how far it deviates from what is "normal". HRM is also practically advantageous for use in diagnostics, due to its capacity to be adapted to high-throughput screening, and again it minimises the possibility of amplicon spread and contamination within a laboratory, owing to its closed-tube format. Intercalating dyes To follow the transition of dsDNA (double-stranded) to ssDNA (single-stranded), intercalating dyes are employed. These dyes show differential fluorescence emission dependent on their association with double-stranded or single-stranded DNA. 
SYBR Green I is a first generation dye for HRM. It fluoresces when intercalated into dsDNA and not ssDNA. Because it may inhibit PCR at high concentrations, it is used at sub-saturating concentrations. Recently, some researchers have discouraged the use of SYBR Green I for HRM, claiming that substantial protocol modifications are required. This is because it is suggested that the lack of accuracy may result from "dye jumping", where dye from a melted duplex may get reincorporated into regions of dsDNA which had not yet melted. New saturating dyes such as LC Green and LC Green Plus, ResoLight, EvaGreen, Chromofy and SYTO 9 are available on the market and have been used successfully for HRM. However, some groups have successfully used SYBR Green I for HRM with the Corbett Rotorgene instruments and advocate the use of SYBR Green I for HRM applications. Design of high-resolution melting experiments High resolution melting assays typically involve qPCR amplification followed by a melting curve collected using a fluorescent dye. Due to the sensitivity of high-resolution melting analysis, it is necessary to carefully consider PCR cycling conditions, template DNA quality, and melting curve parameters. For accurate and repeatable results, PCR thermal cycling conditions must be optimized to ensure that the desired DNA region is amplified with high specificity and minimal bias between sequence variants. The melting curve is typically performed across a broad range of temperatures in small (~0.3 °C) increments that are long enough (~10 seconds) for the DNA to reach equilibrium at each temperature step. In addition to typical primer design considerations, the design of primers for high-resolution melting assays involves maximizing the thermodynamic differences between PCR products belonging to different genotypes. Smaller amplicons generally yield greater melting temperature variation than longer amplicons, but the variability cannot be predicted by eye. For this reason, it is critical to accurately predict the melting curve of PCR products when designing primers that will distinguish sequence variants. Specialty software, such as uMelt and DesignSignatures, are available to help design primers that will maximize melting curve variability specifically for high-resolution melting assays. See also Melting curve analysis References External links HRM Technology Molecular biology DNA Biotechnology
High-resolution melting analysis
[ "Chemistry", "Biology" ]
2,700
[ "Biochemistry", "nan", "Biotechnology", "Molecular biology" ]
21,003,830
https://en.wikipedia.org/wiki/Template%20modeling%20score
In bioinformatics, the template modeling score or TM-score is a measure of similarity between two protein structures. The TM-score is intended as a more accurate measure of the global similarity of full-length protein structures than the often used RMSD measure. The TM-score indicates the similarity between two structures by a score in the interval (0, 1], where 1 indicates a perfect match between two structures (thus the higher the better). Generally, scores below 0.20 correspond to randomly chosen unrelated proteins whereas structures with a score higher than 0.5 assume roughly the same fold. A quantitative study shows that proteins of TM-score = 0.5 have a posterior probability of 37% in the same CATH topology family and of 13% in the same SCOP fold family. The probabilities increase rapidly when TM-score > 0.5. The TM-score is designed to be independent of protein lengths. The TM-score equation TM-score between two protein structures (e.g., a template structure and a target structure) is defined by

\[ \text{TM-score} = \max\left[ \frac{1}{L_{\text{target}}} \sum_{i=1}^{L_{\text{common}}} \frac{1}{1 + \left( d_i / d_0(L_{\text{target}}) \right)^{2}} \right] \]

where \( L_{\text{target}} \) is the length of the amino acid sequence of the target protein, and \( L_{\text{common}} \) is the number of residues that appear in both the template and target structures. \( d_i \) is the distance between the \( i \)-th pair of residues in the template and target structures, and \( d_0(L_{\text{target}}) = 1.24 \sqrt[3]{L_{\text{target}} - 15} - 1.8 \) is a distance scale that normalizes distances. The maximum is taken over all possible structure superpositions of the model and template (or some sample thereof). When comparing two protein structures that have the same residue order, the residue pairing used for \( d_i \) is read from the C-alpha order number of the structure files (i.e., Column 23-26 in Protein Data Bank (file format)). When comparing two protein structures that have different sequences and/or different residue orders, a structural alignment is usually performed first, and TM-score is then calculated on the commonly aligned residues from the structural alignment. Other measures An often used structural similarity measure is root-mean-square deviation (RMSD). Because RMSD is calculated as an average of the distance errors \( d_i \) with equal weight over all residue pairs, a large local error on a few residue pairs can result in a quite large RMSD. On the other hand, by putting \( d_i \) in the denominator, TM-score naturally weights smaller distance errors more strongly than larger distance errors. Therefore, the TM-score is more sensitive to the global structural similarity rather than to the local structural errors, compared to RMSD. Another advantage of TM-score is the introduction of the scale \( d_0 \), which makes the magnitude of TM-score length-independent for random structure pairs, while RMSD and most other measures are length-dependent metrics. The Global Distance Test (GDT) algorithm, and its GDT TS score to represent "total score", is another measure of similarity between two protein structures with known amino acid correspondences (e.g. identical amino acid sequences) but different tertiary structures. GDT score has the same length-dependence issue as RMSD, because the average GDT score for random structure pairs has a power-law dependence on the protein size. 
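The following is a minimal sketch of the scoring function defined above, assuming that a superposition has already been chosen and that the Cα distances d_i for the aligned residue pairs are supplied; a complete implementation (such as the TM-score program) would also perform the maximization over superpositions. The 0.5 Å floor on d_0 for very short targets is a common safeguard used in practice, not part of the formula as written above.

```python
def d0(l_target: int) -> float:
    """Distance scale d_0(L_target) = 1.24 * cbrt(L_target - 15) - 1.8."""
    if l_target <= 21:
        return 0.5  # floor for very short targets (common safeguard)
    return 1.24 * (l_target - 15) ** (1.0 / 3.0) - 1.8

def tm_score(distances, l_target: int) -> float:
    """TM-score for one fixed superposition.

    distances -- Calpha distances d_i (angstroms) for the L_common aligned residue pairs
    l_target  -- number of residues in the target structure
    """
    scale = d0(l_target)
    return sum(1.0 / (1.0 + (d / scale) ** 2) for d in distances) / l_target

# Toy example: 90 of 100 target residues aligned, most of them within a few angstroms
dists = [1.0] * 70 + [2.5] * 15 + [8.0] * 5
print(round(tm_score(dists, l_target=100), 3))  # about 0.76
```

Note how the five poorly matched residues (8 Å apart) contribute almost nothing to the score, while they would dominate an RMSD of the same alignment.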
See also RMSD — a different structure comparison measure GDT — a different structure comparison measure Longest continuous segment (LCS) — A different structure comparison measure Global distance calculation (GDC_sc, GDC_all) — Structure comparison measures that use full-model information (not just α-carbon) to assess similarity Local global alignment (LGA) — Protein structure alignment program and structure comparison measure References External links TM-score webserver — by the Yang Zhang research group. Calculates TM-score and supplies source code. GDT and LGA description services and documentation on structure comparison and similarity measures. Bioinformatics Computational chemistry
Template modeling score
[ "Chemistry", "Engineering", "Biology" ]
765
[ "Bioinformatics", "Theoretical chemistry", "Computational chemistry", "Biological engineering" ]
26,868,709
https://en.wikipedia.org/wiki/Polder%20tensor
The Polder tensor is a tensor introduced by Dirk Polder for the description of magnetic permeability of ferrites. The tensor notation needs to be used because ferrimagnetic material becomes anisotropic in the presence of a magnetizing field. For a bias field applied along the z axis, the tensor is described mathematically as:

\[ [\mu] = \begin{bmatrix} \mu & j\kappa & 0 \\ -j\kappa & \mu & 0 \\ 0 & 0 & \mu_0 \end{bmatrix} \]

Neglecting the effects of damping, the components of the tensor are given by

\[ \mu = \mu_0 \left( 1 + \frac{\omega_0 \omega_m}{\omega_0^2 - \omega^2} \right), \qquad \kappa = \mu_0 \, \frac{\omega \, \omega_m}{\omega_0^2 - \omega^2}, \qquad \omega_0 = \gamma H_0, \qquad \omega_m = \gamma M_s \]

where \( \gamma \approx 2.21 \times 10^5 \, (g/2) \) (rad / s) / (A / m) is the effective gyromagnetic ratio and \( g \), the so-called effective g-factor (physics), is a ferrite material constant typically in the range of 1.5 - 2.6, depending on the particular ferrite material. \( \omega \) is the frequency of the RF/microwave signal propagating through the ferrite, \( H_0 \) is the internal magnetic bias field, \( M_s \) is the magnetization of the ferrite material and \( \mu_0 \) is the magnetic permeability of free space. To simplify computations, the radian frequencies \( \omega_0 \) and \( \omega_m \) can be replaced with frequencies (Hz) in the equations for \( \mu \) and \( \kappa \) because the factor \( 2\pi \) cancels. In this case, \( \gamma \approx 3.52 \times 10^4 \, (g/2) \) Hz / (A / m) \( \approx 2.8 \, (g/2) \) MHz / Oe. If CGS units are used, computations can be further simplified because the factor \( \mu_0 \) can be dropped. References Ferrites Tensor physical quantities Ferromagnetic materials Magnetic ordering
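As a numerical illustration of the lossless components given above, the sketch below evaluates μ and κ for a bias field and magnetization loosely representative of a YIG-like ferrite; the gyromagnetic ratio is taken for g ≈ 2, and all numbers are illustrative assumptions rather than datasheet values.

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
GAMMA = 2.21e5             # effective gyromagnetic ratio, (rad/s)/(A/m), for g ~ 2

def polder_components(f_hz, h0, ms):
    """Lossless Polder tensor components mu and kappa (bias field along z).

    f_hz -- signal frequency in Hz
    h0   -- internal magnetic bias field in A/m
    ms   -- saturation magnetization in A/m
    """
    w = 2 * math.pi * f_hz
    w0 = GAMMA * h0            # precession (resonance) frequency
    wm = GAMMA * ms
    den = w0 ** 2 - w ** 2
    mu = MU0 * (1 + w0 * wm / den)
    kappa = MU0 * w * wm / den
    return mu, kappa

# Illustrative operating point below resonance: 3 GHz signal,
# H0 ~ 2.4e5 A/m (about 3000 Oe), Ms ~ 1.4e5 A/m (YIG-like)
mu, kappa = polder_components(3e9, 2.4e5, 1.4e5)
print("mu/mu0 = %.2f, kappa/mu0 = %.2f" % (mu / MU0, kappa / MU0))
```

The off-diagonal term κ is what couples the two transverse field components and gives rise to nonreciprocal effects such as Faraday rotation in ferrite devices.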
Polder tensor
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
272
[ "Tensors", "Physical quantities", "Quantity", "Ferromagnetic materials", "Tensor physical quantities", "Electric and magnetic fields in matter", "Materials science", "Magnetic ordering", "Materials", "Condensed matter physics", "Matter" ]
26,877,140
https://en.wikipedia.org/wiki/Electrostatic%20particle%20accelerator
An electrostatic particle accelerator is a particle accelerator in which charged particles are accelerated to a high energy by a static high voltage potential. This contrasts with the other major category of particle accelerator, oscillating field particle accelerators, in which the particles are accelerated by oscillating electric fields. Owing to their simpler design, electrostatic types were the first particle accelerators. The two most common types are the Van de Graaff generator invented by Robert Van de Graaff in 1929, and the Cockcroft-Walton accelerator invented by John Cockcroft and Ernest Walton in 1932. The maximum particle energy produced by electrostatic accelerators is limited by the maximum voltage which can be achieved by the machine. This is in turn limited by insulation breakdown to a few megavolts. Oscillating accelerators do not have this limitation, so they can achieve higher particle energies than electrostatic machines. The advantages of electrostatic accelerators over oscillating field machines include lower cost, the ability to produce continuous beams, and higher beam currents that make them useful to industry. As such, they are by far the most widely used particle accelerators, with industrial applications such as plastic shrink wrap production, high power X-ray machines, radiation therapy in medicine, radioisotope production, ion implanters in semiconductor production, and sterilization. Many universities worldwide have electrostatic accelerators for research purposes. High energy oscillating field accelerators usually incorporate an electrostatic machine as their first stage, to accelerate particles to a high enough velocity to inject into the main accelerator. Applications Electrostatic accelerators have a wide array of applications in science and industry. In the realm of fundamental research, they are used to provide beams of atomic nuclei for research at energies up to several hundreds of MeV. In industry and materials science they are used to produce ion beams for materials modification, including ion implantation and ion beam mixing. There are also a number of materials analysis techniques based on electrostatic acceleration of heavy ions, including Rutherford backscattering spectrometry (RBS), particle-induced X-ray emission (PIXE), accelerator mass spectrometry (AMS), elastic recoil detection (ERD), and others. Although these machines primarily accelerate atomic nuclei, there are a number of compact machines used to accelerate electrons for industrial purposes including sterilization of medical instruments, X-ray production, and silicon wafer production. A special application of electrostatic particle accelerators is the dust accelerator, in which nanometer- to micrometer-sized electrically charged dust particles are accelerated to speeds up to 100 km/s. Dust accelerators are used for impact cratering studies, calibration of impact ionization dust detectors, and meteor studies. Single-ended machines Using a high voltage terminal kept at a static potential on the order of millions of volts, charged particles can be accelerated. In simple language, an electrostatic generator is basically a giant capacitor (although lacking plates). The high voltage is achieved either using the methods of Cockcroft & Walton or Van de Graaff, with the accelerators often being named after these inventors. 
Van de Graaff's original design places electrons on an insulating sheet, or belt, with a metal comb, and then the sheet physically transports the immobilized electrons to the terminal. Although at high voltage, the terminal is a conductor, and there is a corresponding comb inside the conductor which can pick up the electrons off the sheet; owing to Gauss's law, there is no electric field inside a conductor, so the electrons are not repelled by the platform once they are inside. The belt is similar in style to a conventional conveyor belt, with one major exception: it is seamless. Thus, if the belt is broken, the accelerator must be disassembled to some degree in order to replace the belt, which, owing to its constant rotation and being typically made of rubber, is not a particularly uncommon occurrence. The practical difficulty with belts led to a different medium for physically transporting the charges: a chain of pellets. Unlike a normal chain, this one is non-conducting from one end to the other, as both insulators and conductors are used in its construction. These types of accelerators are usually called Pelletrons. Once the platform can be electrically charged by one of the above means, some source of positive ions is placed on the platform at the end of the beam line, which is why it's called the terminal. However, as the ion source is kept at a high potential, one cannot access the ion source for control or maintenance directly. Thus, methods such as plastic rods connected to various levers inside the terminal can branch out and be toggled remotely. Omitting practical problems, if the platform is positively charged, it will repel the ions of the same electric polarity, accelerating them. As E=qV, where E is the emerging energy, q is the ionic charge, and V is the terminal voltage, the maximum energy of particles accelerated in this manner is practically limited by the discharge limit of the high voltage platform, about 12 MV under ambient atmospheric conditions. This limit can be increased, for example, by keeping the HV platform in a tank of an insulating gas with a higher dielectric strength than air, such as SF6, which has a dielectric strength roughly 2.5 times that of air. However, even in a tank of SF6 the maximum attainable voltage is around 30 MV. There could be other gases with even better insulating powers, but SF6 is also chemically inert and non-toxic. To increase the maximum acceleration energy further, the tandem concept was invented to use the same high voltage twice. Tandem accelerators Conventionally, positively charged ions are accelerated because this is the polarity of the atomic nucleus. However, if one wants to use the same static electric potential twice to accelerate ions, then the polarity of the ions' charge must change from anions to cations or vice versa while they are inside the conductor where they will feel no electric force. It turns out to be simple to remove, or strip, electrons from an energetic ion. One of the properties of ion interaction with matter is the exchange of electrons, which is a way the ion can lose energy by depositing it within the matter, something we should intuitively expect of a projectile shot at a solid. However, as the target becomes thinner or the projectile becomes more energetic, the amount of energy deposited in the foil becomes less and less. 
Tandems locate the ion source outside the terminal, which means that accessing the ion source while the terminal is at high voltage is significantly less difficult, especially if the terminal is inside a gas tank. So then an anion beam from a sputtering ion source is injected from a relatively lower voltage platform towards the high voltage terminal. Inside the terminal, the beam impinges on a thin foil (on the order of micrograms per square centimeter), often carbon or beryllium, stripping electrons from the ion beam so that they become cations. As it is difficult to make anions of more than -1 charge state, then the energy of particles emerging from a tandem is E=(q+1)V, where we have added the second acceleration potential from that anion to the positive charge state q emerging from the stripper foil; we are adding these different charge signs together because we are increasing the energy of the nucleus in each phase. In this sense, we can see clearly that a tandem can double the maximum energy of a proton beam, whose maximum charge state is merely +1, but the advantage gained by a tandem has diminishing returns as we go to higher mass, as, for example, one might easily get a 6+ charge state of a silicon beam. It is not possible to make every element into an anion easily, so it is very rare for tandems to accelerate any noble gases heavier than helium, although KrF− and XeF− have been successfully produced and accelerated with a tandem. It is not uncommon to make compounds in order to get anions, however, and TiH2 might be extracted as TiH− and used to produce a proton beam, because these simple, and often weakly bound chemicals, will be broken apart at the terminal stripper foil. Anion ion beam production was a major subject of study for tandem accelerator application, and one can find recipes and yields for most elements in the Negative Ion Cookbook. Tandems can also be operated in terminal mode, where they function like a single-ended electrostatic accelerator, which is a more common and practical way to make beams of noble gases. The name 'tandem' originates from this dual-use of the same high voltage, although tandems may also be named in the same style of conventional electrostatic accelerators based on the method of charging the terminal. The MP Tandem van de Graaff is a type of Tandem accelerator. Ten of these were installed in the 20th century; six in North America and four in Europe. Geometry One trick which has to be considered with electrostatic accelerators is that usually vacuum beam lines are made of steel. However, one cannot very well connect a conducting pipe of steel from the high voltage terminal to the ground. Thus, many rings of a strong glass, like Pyrex, are assembled together in such a manner that their interface is a vacuum seal, like a copper gasket; a single long glass tube could implode under vacuum or fracture supporting its own weight. Importantly for the physics, these inter-spaced conducting rings help to make a more uniform electric field along the accelerating column. This beam line of glass rings is simply supported by compression at either end of the terminal. As the glass is non-conducting, it could be supported from the ground, but such supports near the terminal could induce a discharge of the terminal, depending on the design. Sometimes the compression is not sufficient, and the entire beam line may collapse and shatter. 
This idea is especially important to the design of tandems, because they naturally have longer beam lines, and the beam line must run through the terminal. Most often electrostatic accelerators are arranged in a horizontal line. However, some tandems may have a "U" shape, and in principle the beam can be turned to any direction with a magnetic dipole at the terminal. Some electrostatic accelerators are arranged vertically, where either the ion source or, in the case of a U-shaped vertical tandem, the terminal, is at the top of a tower. A tower arrangement can be a way to save space, and also the beam line connecting to the terminal made of glass rings can take some advantage of gravity as a natural source of compression. Particle energy In a single-ended electrostatic accelerator the charged particle is accelerated through a single potential difference between two electrodes, so the output particle energy is equal to the charge on the particle multiplied by the accelerating voltage, E = qV. In a tandem accelerator the particle is accelerated twice by the same voltage, so the output energy is E = (q + 1)V, as the anion form is singly charged. If the charge is in conventional units of coulombs and the potential is in volts, the particle energy will be given in joules. However, because the charge on elementary particles is so small (the charge on the electron is 1.6x10−19 coulombs), the energy in joules is a very small number. Since all elementary particles have charges which are multiples of the elementary charge on the electron, 1.6x10−19 coulombs, particle physicists use a different unit to express particle energies, the electron volt (eV), which makes it easier to calculate. The electronvolt is equal to the energy a particle with a charge of 1e gains passing through a potential difference of one volt. In the above equations, if q is measured in elementary charges e and V is in volts, the particle energy is given in eV. For example, if an alpha particle which has a charge of 2e is accelerated through a voltage difference of one million volts (1 MV), it will have an energy of two million electron volts, abbreviated 2 MeV. The accelerating voltage on electrostatic machines is in the range 0.1 to 25 MV and the charge on particles is a few elementary charges, so the particle energy is in the low MeV range. More powerful accelerators can produce energies in the giga electron volt (GeV) range. References External links IAEA database of electrostatic accelerators Accelerator physics Nuclear physics
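The two energy relations above reduce to simple arithmetic. The sketch below evaluates them for the alpha-particle example in the text and for a hypothetical tandem run with a silicon beam stripped to 6+; the 10 MV terminal voltage is an assumed illustrative value within the 0.1–25 MV range quoted above.

```python
def single_ended_energy_mev(charge_state: int, terminal_mv: float) -> float:
    """E = qV for a single-ended machine; charge_state in units of e, terminal in MV."""
    return charge_state * terminal_mv

def tandem_energy_mev(stripped_charge_state: int, terminal_mv: float) -> float:
    """E = (q + 1)V: one traversal as a -1 anion, one as a +q cation after stripping."""
    return (stripped_charge_state + 1) * terminal_mv

# Alpha particle (q = 2e) in a 1 MV single-ended machine -> 2 MeV (example from the text)
print(single_ended_energy_mev(2, 1.0))
# Silicon stripped to 6+ in an assumed 10 MV tandem -> 70 MeV
print(tandem_energy_mev(6, 10.0))
```

The second result illustrates why tandems gain most for heavy ions: the stripped charge state, not just the terminal voltage, sets the final energy.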
Electrostatic particle accelerator
[ "Physics" ]
2,564
[ "Applied and interdisciplinary physics", "Accelerator physics", "Experimental physics", "Nuclear physics" ]
26,877,288
https://en.wikipedia.org/wiki/Pulse%20width
The pulse width is a measure of the elapsed time between the leading and trailing edges of a single pulse of energy. The measure is typically used with electrical signals and is widely used in the fields of radar and power supplies. There are two closely related measures. The pulse repetition interval measures the time between the leading edges of two pulses but is normally expressed as the pulse repetition frequency (PRF), the number of pulses in a given time, typically a second. The duty cycle expresses the pulse width as a fraction or percentage of one complete cycle. Pulse width is an important measure in radar systems. Radars transmit pulses of radio frequency energy out of an antenna and then listen for their reflection off of target objects. The amount of energy that is returned to the radar receiver is a function of the peak energy of the pulse, the pulse width, and the pulse repetition frequency. Increasing the pulse width increases the amount of energy reflected off the target and thereby increases the range at which an object can be detected. Radars measure range based on the time between transmission and reception, and the resolution of that measurement is a function of the length of the received pulse. This leads to the basic outcome that increasing the pulse width allows the radar to detect objects at longer range but at the cost of decreasing the accuracy of that range measurement. This can be addressed by encoding the pulse with additional information, as is the case in pulse compression systems. In modern switched-mode power supplies, the voltage of the output electrical power is controlled by rapidly switching a fixed-voltage source on and off and then smoothing the resulting stepped waveform. Increasing the pulse width increases the output power. This allows complex output waveforms to be constructed by rapidly changing the pulse width to produce the desired signal, a concept known as pulse-width modulation. References Signal processing Radar theory
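As a small worked illustration of the quantities defined above, the sketch below computes the duty cycle from a pulse width and pulse repetition frequency, and the coarse range resolution of a simple uncoded pulse (c·τ/2); the 1 µs pulse and 1 kHz PRF are assumed for illustration only.

```python
C = 3.0e8  # speed of light, m/s

def duty_cycle(pulse_width_s: float, prf_hz: float) -> float:
    """Fraction of each pulse repetition interval that the transmitter is on."""
    return pulse_width_s * prf_hz

def range_resolution_m(pulse_width_s: float) -> float:
    """Coarse range resolution of an uncoded pulse: c * tau / 2."""
    return C * pulse_width_s / 2.0

# Illustrative radar: 1 microsecond pulses at a PRF of 1 kHz
tau, prf = 1e-6, 1e3
print("duty cycle: %.4f" % duty_cycle(tau, prf))             # 0.0010
print("range resolution: %.0f m" % range_resolution_m(tau))  # 150 m
```

Doubling the pulse width in this sketch doubles the transmitted energy per pulse (for the same peak power) but also doubles the 150 m resolution cell, which is the trade-off the text describes and the reason pulse compression is used.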
Pulse width
[ "Technology", "Engineering" ]
369
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
24,011,507
https://en.wikipedia.org/wiki/Effective%20diffusion%20coefficient
The effective diffusion coefficient of a diffusant in atomic diffusion of solid polycrystalline materials like metal alloys is often represented as a weighted average of the grain boundary diffusion coefficient and the lattice diffusion coefficient. Diffusion along both the grain boundary and in the lattice may be modeled with an Arrhenius equation. The ratio of the grain boundary diffusion activation energy over the lattice diffusion activation energy is usually 0.4–0.6, so as temperature is lowered, the grain boundary diffusion component increases. Increasing temperature often allows for increased grain size, and the lattice diffusion component increases with increasing temperature, so often at 0.8 Tmelt (of an alloy), the grain boundary component can be neglected. Modeling The effective diffusion coefficient can be modeled using Hart's equation when lattice diffusion is dominant (type A kinetics):

\[ D_{\text{eff}} = f D_{gb} + (1 - f) D_{l}, \qquad f = \frac{q \delta}{d} \]

where \( D_{\text{eff}} \) is the effective diffusion coefficient, \( D_{gb} \) is the grain boundary diffusion coefficient, \( D_{l} \) is the lattice diffusion coefficient, \( q \) is a value based on grain shape, 1 for parallel grains, 3 for square grains, \( d \) is the average grain size, and \( \delta \) is the grain boundary width, often assumed to be 0.5 nm. Grain boundary diffusion is significant in face-centered cubic metals below about 0.8 Tmelt (Absolute). Line dislocations and other crystalline defects can become significant below ~0.4 Tmelt in FCC metals. See also Kirkendall effect Mass diffusivity References Diffusion
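A minimal sketch of the weighted average described above, with each component given an Arrhenius temperature dependence. All numerical parameters (pre-exponential factors, activation energies, grain size) are illustrative placeholders chosen only so that the grain boundary activation energy is about half the lattice value, as discussed above; they do not describe any particular alloy.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius(d_pre, q_act, temperature):
    """D = D0 * exp(-Q / (R T))"""
    return d_pre * math.exp(-q_act / (R * temperature))

def effective_diffusivity(d_lattice, d_gb, grain_size, q_shape=3.0, delta=0.5e-9):
    """Hart's equation: D_eff = f*D_gb + (1 - f)*D_l with f = q*delta/d."""
    f = q_shape * delta / grain_size
    return f * d_gb + (1.0 - f) * d_lattice

# Illustrative Arrhenius parameters (placeholders, not data for a specific alloy):
# grain-boundary activation energy about half the lattice value, as noted above.
T = 600.0  # K
d_l  = arrhenius(d_pre=1e-5, q_act=200e3, temperature=T)   # lattice
d_gb = arrhenius(d_pre=1e-5, q_act=100e3, temperature=T)   # grain boundary
print(effective_diffusivity(d_l, d_gb, grain_size=10e-6))   # m^2/s for 10 micrometre grains
```

Even though the grain boundaries occupy only a tiny volume fraction (f is of order 1e-4 here), at this low temperature the grain boundary term dominates the result, which is the behaviour the text describes.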
Effective diffusion coefficient
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
268
[ "Transport phenomena", "Materials science stubs", "Physical phenomena", "Diffusion", "Materials science" ]
24,012,524
https://en.wikipedia.org/wiki/Kopp%E2%80%93Etchells%20effect
The Kopp–Etchells effect is a sparkling ring or disk that is sometimes produced by rotary-wing aircraft when operating in sandy conditions, particularly near the ground at night. The name was coined by photographer Michael Yon to honor two soldiers who were killed in combat; Benjamin Kopp, a US Army Ranger, and Joseph Etchells, a British soldier. Both were killed in combat in Sangin, Afghanistan in July 2009. Other names that have been used to describe this phenomenon include scintillation, halo effect, pixie dust, and corona effect. Explanation Helicopter rotors are fitted with abrasion shields along their leading edges to protect the blades. These abrasion strips are often made of titanium, stainless steel, or nickel alloys, which are very hard, but not as hard as sand. When a helicopter flies low to the ground in sandy environments, sand can strike the metal abrasion strip and cause erosion, which produces a visible corona or halo around the rotor blades. The effect is caused by the pyrophoric oxidation of the ablated metal particles. In this way, the Kopp–Etchells effect is similar to the sparks made by a grinder, which are also due to pyrophoricity. When a speck of metal is chipped off the rotor, it is heated by rapid oxidation. This occurs because its freshly exposed surface reacts with oxygen to produce heat. If the particle is sufficiently small, then its mass is small compared to its surface area, and so heat is generated faster than it can be dissipated. This causes the particle to become so hot that it reaches its ignition temperature. At that point, the metal continues to burn freely. Abrasion strips made of titanium produce the brightest sparks, and the intensity increases with the size and concentration of sand grains in the air. Sand particles are more likely to hit the rotor when the rotorcraft is near the ground. This occurs because sand is blown into the air by the downwash and then carried to the top of the rotor disk by a vortex of air. This process is called recirculation and can lead to a complete brownout in severe situations. The Kopp–Etchells effect is not necessarily associated with takeoff and landing operations. It has been observed without night vision goggles at altitudes as high as . Other theories The effect is often and incorrectly believed to be an electrical phenomenon, either as a result of static electricity as in St. Elmo's Fire, or due to the interaction of sand with the rotor (triboelectric effect), or a piezoelectric property of quartz sand. Mechanical action has been considered, whereby impact with the sand particles may cause photoluminescence. Additionally, mechanisms relating to triboluminescence, chemiluminescence, and electroluminescence have been suggested. Yet another incorrect theory is that the extreme speed of the helicopter blades pushes sand particles out of the way so fast that they burn up like meteors in the atmosphere due to adiabatic heating. Groundcrew have mistaken the phenomenon for fire or other malfunctions. Consequences The erosion associated with the Kopp–Etchells effect presents costly maintenance and logistics problems, and is an example of foreign object damage (FOD). Sand hitting the moving rotor blades represents a security risk because of the highly visible ring it produces, which places military operations at a tactical disadvantage when trying to remain concealed in darkness. The light from the Kopp–Etchells effect can interfere with the pilot's ability to see, especially when using night vision equipment. 
This may cause difficulty with landing safely, and produce spatial disorientation. See also Index of aviation articles Corona discharge Wingtip vortices References Materials science Military aviation Night flying
Kopp–Etchells effect
[ "Physics", "Materials_science", "Engineering" ]
769
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
24,015,974
https://en.wikipedia.org/wiki/Macromolecular%20Chemistry%20and%20Physics
Macromolecular Chemistry and Physics is a biweekly peer-reviewed scientific journal covering polymer science. It publishes full papers, talents, trends, and highlights in all areas of polymer science, from chemistry to physical chemistry, physics, and materials science. History Macromolecular Chemistry and Physics was established in 1947 as Die Makromolekulare Chemie/Macromolecular Chemistry by Hermann Staudinger and obtained its current title in 1994. According to the Journal Citation Reports, the journal has a 2021 impact factor of 2.996. See also Macromolecular Rapid Communications, 1979 Macromolecular Theory and Simulations, 1992 Macromolecular Materials and Engineering, 2000 Macromolecular Bioscience, 2001 Macromolecular Reaction Engineering, 2007 References External links Physical chemistry journals Materials science journals Academic journals established in 1947 Biweekly journals Wiley-VCH academic journals English-language journals
Macromolecular Chemistry and Physics
[ "Chemistry", "Materials_science", "Engineering" ]
188
[ "Materials science stubs", "Materials science journals", "Materials science journal stubs", "Materials science", "Physical chemistry journals", "Physical chemistry stubs" ]
24,017,146
https://en.wikipedia.org/wiki/C11H16N2
The molecular formula C11H16N2 (molar mass: 176.263 g/mol) may refer to: Benzylpiperazine (BZP) ortho-Methylphenylpiperazine (oMPP) Molecular formulas
C11H16N2
[ "Physics", "Chemistry" ]
69
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,017,391
https://en.wikipedia.org/wiki/National%20Institute%20for%20Mathematical%20and%20Biological%20Synthesis
The National Institute for Mathematical and Biological Synthesis is a research institute focused on the science of mathematics and biology, located on the University of Tennessee, Knoxville, campus. Known by its acronym NIMBioS (pronounced NIM-bus), the Institute is a National Science Foundation (NSF) Synthesis Center supported through NSF's Biological Sciences Directorate via a Cooperative Agreement with UT-Knoxville, totaling more than $35 million over ten years. Background The Institute opened in September 2008, with additional support from the U.S. Department of Homeland Security and the U.S. Department of Agriculture. Since March 2009, when NIMBioS programs officially began, more than 5,000 individuals from more than 50 countries and every U.S. state have participated in various research and educational activities. Goals Primary goals of NIMBioS are: to address key biological questions using cross-disciplinary approaches in mathematical biology to foster the development of a cadre of researchers who are capable of conceiving and engaging in creative and collaborative connections across disciplines. To achieve its goals, NIMBioS advances a wide variety of research and outreach/education activities designed to facilitate interaction between mathematicians and biologists to arrive at innovative solutions to environmental problems. Two primary mechanisms for research are Working Groups and Investigative Workshops. Working Groups are composed of 10-15 invited participants focusing on specific questions related to mathematical biology. Each group typically meets at the Institute two to three times over the course of two years. Investigative workshops may include 30-40 participants, with some invited by organizers and others accepted through an open application process. Workshops are more general in focus and may lead to working group formation. NIMBioS also provides support for post-doctoral and sabbatical fellows, short-term visitors, graduate research assistants, and faculty collaborators at UT. Function One area of particular emphasis at NIMBioS has been modeling animal infectious diseases, such as white-nose syndrome in bats, pseudo-rabies virus in feral swine, Toxoplasma gondii in cats, and malaria from mosquitoes. As a leading international center for animal infectious disease modeling, NIMBioS has contributed significantly to global needs in analyzing the potential spread, impact and control of diseases that can move from animals to humans, such as West Nile virus, anthrax, swine flu and mad cow disease. NIMBioS also collaborates with the Great Smoky Mountains National Park to develop methods of particular interest for natural area management that are transferable to numerous U.S. locations. NIMBioS encourages multidisciplinary participation in all its activities. Participants at NIMBioS have included behavioral biologists, ecologists, evolutionary biologists, computational scientists, anthropologists, geneticists, psychologists, bioinformaticians, mathematicians, statisticians, veterinarians, epidemiologists, and wildlife biologists. NIMBioS has an active Education and Outreach program geared toward learners of all ages, from elementary school students through college and graduate school and the general public. NIMBioS organizes a Summer Research Experience for Undergraduates and Teachers program for eight weeks each summer. 
Participants live on campus and conduct research in teams with UT professors, NIMBioS researchers, and collaborators on projects at the interface of math and biology. NIMBioS also hosts the annual Undergraduate Research Conference at the Interface of Biology and Mathematics each fall, featuring student talks and posters as well as panel discussions. Programs for graduate students include the Visiting Graduate Student Fellowship, offering training and research visits for up to several months by graduate students interested in pursuing research with NIMBioS senior personnel, postdoctoral fellows or working group participants. NIMBioS provides varying levels of tutorial workshops designed to enlighten biologists about key quantitative methods, such as optimal control and optimization or high performance computing methods for analyzing biological problems involving large data sets, spatial information, and dynamics. Director NIMBioS’ director is Nina H. Fefferman, Professor of Ecology & Evolutionary Biology and Mathematics at UT. Louis J. Gross, Professor of Ecology & Evolutionary Biology and Mathematics at UT, served as the previous Director. The NIMBioS leadership team also includes associate directors and a deputy director. NIMBioS has an external Board of Advisors from academic institutions from around the world. In addition, NIMBioS has a group of senior personnel consisting of UT faculty and Oak Ridge National Laboratory (ORNL) scientists, as well as a group of additional associated faculty and staff collaborators from UT and ORNL. Research The need for the Institute arose out of the significant growth of the field of mathematical biology over the last decade with research becoming more closely linked to observation and experiment. Rather than starting from mathematical abstractions, it is now common for researchers to: Begin with observations; Use those to suggest promising methods, tools and models; and Proceed to analysis, simulation, evaluation and application. Across the spectrum of the life sciences in which mathematics has been contributing new insights, data are increasingly used to focus conceptual models as the first step in problem formulation. Website description The NIMBioS website includes descriptions of working groups, investigative workshops, post-doctoral fellowships, sabbaticals, short-term visits, graduate assistantships, and faculty positions, and information on how to submit requests for support. The website also describes education and outreach opportunities for undergraduates, teachers, and K-12 students. The website also has an extensive video library including interviews with visiting scientists, full-length seminars, tutorials, and workshops, and short narrative science features. Sponsorship From 2010 to 2012, NIMBioS, in conjunction with UT's James R. Cox Endowment Fund, sponsored a Songwriter-in-Residence Program to encourage the creation and production of songs involving ideas of modern biology and the lives of scientists who pursue research in biology. A total of five songwriters were supported. Each songwriter created and produced a minimum of two songs as a result of his or her four-week residency. References External links National Institute for Mathematical and Biological Synthesis: www.nimbios.org official website Bioinformatics organizations Biostatistics Biotechnology organizations Organizations based in Tennessee
National Institute for Mathematical and Biological Synthesis
[ "Mathematics", "Engineering", "Biology" ]
1,230
[ "Bioinformatics organizations", "Mathematical and theoretical biology", "Applied mathematics", "Bioinformatics", "Biotechnology organizations" ]
25,387,069
https://en.wikipedia.org/wiki/Enhanced%20weathering
Enhanced weathering, also termed ocean alkalinity enhancement when proposed for carbon credit systems, is a process that aims to accelerate natural weathering by spreading finely ground silicate rock, such as basalt, onto surfaces, which speeds up chemical reactions between rocks, water, and air. It also removes carbon dioxide (CO2) from the atmosphere, permanently storing it in solid carbonate minerals or ocean alkalinity. The latter also slows ocean acidification. Enhanced weathering is a chemical approach to remove carbon dioxide involving land-based or ocean-based techniques. One example of a land-based enhanced weathering technique is in-situ carbonation of silicates. Ultramafic rock, for example, has the potential to store hundreds to thousands of years' worth of CO2 emissions, according to estimates. Ocean-based techniques involve alkalinity enhancement, such as grinding, dispersing, and dissolving olivine, limestone, silicates, or calcium hydroxide to address ocean acidification and CO2 sequestration. Although existing mine tailings or alkaline industrial silicate minerals (such as steel slags, construction & demolition waste, or ash from biomass incineration) may be used at first, mining more basalt might eventually be required to limit climate change. History Enhanced weathering has been proposed for both terrestrial and ocean-based carbon sequestration. Ocean methods are being tested by the non-profit organization Project Vesta to see if they are environmentally and economically viable. In July 2020, a group of scientists assessed that the geo-engineering technique of enhanced rock weathering – i.e., spreading finely crushed basalt on fields – has potential use for carbon dioxide removal by nations, identifying costs, opportunities, and engineering challenges. Natural mineral weathering and ocean acidification Weathering is the natural process of rocks and minerals dissolving due to the action of water, ice, acids, salts, plants, animals, and temperature changes. It is mechanical (breaking up rock—also called physical weathering or disaggregation) and chemical (changing the chemical compounds in the rocks). Biological weathering is a form of weathering (mechanical or chemical) by plants, fungi, or other living organisms. Chemical weathering can happen by different mechanisms, depending mainly on the nature of the minerals involved. This includes solution, hydration, hydrolysis, and oxidation weathering. Carbonation weathering is a particular type of solution weathering. Carbonate and silicate minerals are examples of minerals affected by carbonation weathering. When silicate or carbonate minerals are exposed to rainwater or groundwater, they slowly dissolve due to carbonation weathering: that is, the water (H2O) and carbon dioxide (CO2) present in the atmosphere form carbonic acid (H2CO3) by the reaction: H2O + CO2 → H2CO3 This carbonic acid then attacks the mineral to form carbonate ions in solution with the unreacted water. As a result of these two chemical reactions (carbonation and dissolution), minerals, water, and carbon dioxide combine, which alters the chemical composition of minerals and removes CO2 from the atmosphere. Of course, these are reversible reactions, so if the carbonate encounters H ions from acids, such as in soils, they will react to form water and release CO2 back to the atmosphere. Applying limestone (a calcium carbonate) to acid soils neutralizes the H ions but releases CO2 from the limestone. 
In particular, forsterite (a silicate mineral) is dissolved through the reaction: Mg2SiO4(s) + 4H2CO3(aq) → 2Mg2+(aq) + 4HCO3−(aq) + H4SiO4(aq), where "(s)" indicates a substance in a solid state and "(aq)" indicates a substance in an aqueous solution. Calcite (a carbonate mineral) is instead dissolved through the reaction: CaCO3(s) + H2CO3(aq) → Ca2+(aq) + 2HCO3−(aq). Although some of the dissolved bicarbonate may react with soil acids during the passage through the soil profile to groundwater, water with dissolved bicarbonate ions (HCO3−) eventually ends up in the ocean, where the bicarbonate ions are biomineralized to carbonate minerals for shells and skeletons through the reaction: Ca2+ + 2HCO3− → CaCO3 + CO2 + H2O. The carbonate minerals then eventually sink from the ocean surface to the ocean floor. Most of the carbonate is redissolved in the deep ocean as it sinks. Over geological time periods these processes are thought to stabilize the Earth's climate. The ratio of carbon dioxide in the atmosphere as a gas (CO2) to the quantity of carbon dioxide converted into carbonate is regulated by a chemical equilibrium: in case of a change of this equilibrium state, it takes theoretically (if no other alteration is happening during this time) thousands of years to establish a new equilibrium state. For silicate weathering, the theoretical net effect of dissolution and precipitation is 1 mol of CO2 sequestered for every mol of Ca2+ or Mg2+ weathered out of the mineral. Given that some of the dissolved cations react with existing alkalinity in the solution to form CO32− ions, the ratio is not exactly 1:1 in natural systems but is a function of temperature and CO2 partial pressure. The net CO2 sequestration of the carbonate weathering reaction and the carbonate precipitation reaction is zero. Weathering and biological carbonate precipitation are thought to be only loosely coupled on short time periods (<1000 years). Therefore, an increase in both carbonate and silicate weathering with respect to carbonate precipitation will result in a buildup of alkalinity in the ocean. Terrestrial enhanced weathering Enhanced weathering was initially used to refer specifically to the spreading of crushed silicate minerals on the land surface. Biological activity in soils has been shown to promote the dissolution of silicate minerals, but there is still uncertainty surrounding how quickly this may happen. Because weathering rate is a function of saturation of the dissolving mineral in solution (decreasing to zero in fully saturated solutions), some have suggested that lack of rainfall may limit terrestrial enhanced weathering, although others suggest that secondary mineral formation or biological uptake may suppress saturation and promote weathering. The amount of energy that is required for comminution depends on the rate at which the minerals dissolve (less comminution is required for rapid mineral dissolution). A 2012 study suggested a large range in potential cost of enhanced weathering largely due to the uncertainty surrounding mineral dissolution rates. Oceanic enhanced weathering To overcome the limitations of solution saturation and to use natural comminution of sand particles from wave energy, silicate minerals may be applied to coastal environments, although the higher pH of seawater may substantially decrease the rate of dissolution, and it is unclear how much comminution is possible from wave action. Alternatively, the direct application of carbonate minerals to the upwelling regions of the ocean has been investigated. 
Carbonate minerals are supersaturated in the surface ocean but are undersaturated in the deep ocean. In areas of upwelling, this undersaturated water is brought to the surface. While this technology will likely be cheap, the maximum annual CO2 sequestration potential is limited. Transforming the carbonate minerals into oxides and spreading this material in the open ocean ('Ocean Liming') has been proposed as an alternative technology. Here the carbonate mineral (CaCO3) is transformed into lime (CaO) through calcination. The energy requirements for this technology are substantial. Mineral carbonation The enhanced dissolution and carbonation of silicates ('mineral carbonation') was first proposed by Seifritz in 1990, and developed initially by Lackner et al. and further by the Albany Research Center. This early research investigated the carbonation of extracted and crushed silicates at elevated temperatures (~180 °C) and partial pressures of CO2 (~15 MPa) inside controlled reactors ("ex-situ mineral carbonation"). Some research explores the potential of "in-situ mineral carbonation" in which the CO2 is injected into silicate rock formations to promote carbonate formation underground (see: CarbFix). Mineral carbonation research has largely focused on the sequestration of CO2 from flue gas. It could be used for geoengineering if the source of CO2 was derived from the atmosphere, e.g. through direct air capture or biomass-CCS. Soil Remineralization contributes to the enhanced weathering process. Mixing the soil with crushed rock such as silicate benefits not only plants' health, but also carbon sequestration when calcium or magnesium is present. Remineralize The Earth is a non-profit organization that promotes rock dust applications as natural fertilizers in agriculture fields to restore soils with minerals, improve the quality of vegetation and increase carbon sequestration. Electrolytic dissolution of silicate minerals Where abundant surplus electricity is available, the electrolytic dissolution of silicate minerals has been proposed and experimentally shown. The process resembles the weathering of some minerals. In addition, the hydrogen produced would be carbon-negative. Cost In a 2020 techno-economic analysis, the cost of utilizing this method on cropland was estimated at US$80–180 per tonne of CO2. This is comparable with other methods of removing carbon dioxide from the atmosphere currently available, such as bio-energy with carbon capture and storage (BECCS, US$100–200 per tonne of CO2) and direct air capture and storage at large-scale deployment and low-cost energy inputs (US$100–300 per tonne of CO2). In contrast, the cost of reforestation was estimated to be lower than US$100 per tonne of CO2. Example projects UNDO, a UK-based Enhanced Weathering company, spreads crushed silicate rock, such as basalt and wollastonite, on agricultural land in the United Kingdom, Canada and Australia. They claim to have spread more than 200,000 tonnes of crushed rock to date, which will capture over 40,000 tonnes of CO2 as their rock weathers. In March 2024, they published a peer-reviewed paper in partnership with Newcastle University in the journal PLOS ONE concerning the agronomic co-benefits of crushed basalt in a temperate climate. They are one of 20 XPRIZE Carbon Removal finalists, a $100 million competition hosted by the Musk Foundation. An Irish company named Silicate has run trials in Ireland and in 2023 is running trials in the USA near Chicago. 
Concrete crushed down to dust is scattered on farmland at a ratio of 500 tonnes per 50 hectares, aiming to capture 100 tonnes of CO2 per annum from that area. Claiming it improves soil quality and crop productivity, the company sells carbon removal credits to fund the costs. The initial pilot funding comes from prize money awarded to the startup by the THRIVE/Shell Climate-Smart Agriculture Challenge. See also Olivine#Uses References External links Enhanced Weathering Conference 2022 Carbon dioxide removal Climate engineering Weathering Enhanced weathering
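A back-of-envelope check of the carbon arithmetic implied above, assuming pure forsterite and the idealized net ratio of about one mole of CO2 sequestered per mole of divalent cation; real-world figures are lower because rocks are impure, dissolution is incomplete, and some CO2 can be re-released.

```python
# Molar masses in g/mol
M_FORSTERITE = 140.69   # Mg2SiO4
M_CO2 = 44.01

# From the stoichiometry above: forsterite releases 2 Mg2+ per formula unit,
# and the net effect of silicate weathering is ~1 mol CO2 sequestered per mol
# of divalent cation, i.e. ~2 mol CO2 per mol of forsterite.
mol_co2_per_mol_rock = 2

tonnes_co2_per_tonne_rock = mol_co2_per_mol_rock * M_CO2 / M_FORSTERITE
print(round(tonnes_co2_per_tonne_rock, 2))   # ~0.63 t CO2 per t of pure forsterite

# The same arithmetic for the trial figures quoted above
# (500 t of crushed material aiming at 100 t CO2 per year):
print(100 / 500)                             # ~0.2 t CO2 per t of material per year
```

The gap between the ideal stoichiometric ceiling and the claimed project figures is one reason cost estimates for enhanced weathering span such a wide range.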
Enhanced weathering
[ "Engineering" ]
2,306
[ "Planetary engineering", "Geoengineering" ]
25,388,263
https://en.wikipedia.org/wiki/Abrasion%20%28mechanical%29
Abrasion is the process of scuffing, scratching, wearing down, marring, or rubbing away. It can be intentionally imposed in a controlled process using an abrasive. Abrasion can be an undesirable effect of exposure to normal use or exposure to the elements. In stone shaping Ancient artists, working in stone, used abrasion to create sculptures. The artist selected dense stones like carbonite and emery and rubbed them consistently against comparatively softer stones like limestone and granite. The artist used different sizes and shapes of abrasives, or turned them in various ways as they rubbed, to create effects on the softer stone's surface. Water was continuously poured over the surface to carry away particles. Abrasive technique in stone shaping was a long, tedious process that, with patience, resulted in eternal works of art in stone. Models The Archard equation is a simple model used to describe sliding wear and is based on the theory of asperity contact: Q = KWL/H, where: Q is the total volume of wear debris produced, K is the wear coefficient, W is the total normal load, L is the sliding distance, and H is the hardness of the softest contacting surface. K is obtained from experimental results and depends on several parameters. Among them are surface quality, chemical affinity between the materials of the two surfaces, surface hardening process, heat transfer between two surfaces and others. Abrasion resistance The resistance of materials and structures to abrasion can be measured by a variety of test methods. These often use a specified abrasive or other controlled means of abrasion. Under the conditions of the test, the results can be reported or can be compared with items subjected to similar tests. Such standardized measurements can produce two quantities: abrasion rate and normalized abrasion rate (also called abrasion resistance index). The former is the amount of mass lost per 1000 cycles of abrasion. The latter is the ratio of the former to the known abrasion rate for some specific reference material. One type of instrument used to get the abrasion rate and normalized abrasion rate is the abrasion scrub tester, which is made up of a mechanical arm, liquid pump, and programmable electronics. The machine draws the mechanical arm with attached brush (or sandpaper, sponge, etc.) over the surface of the material that is being tested. The operator sets a pre-programmed number of passes for a repeatable and controlled result. The liquid pump can provide detergent or other liquids to the mechanical arm during testing to simulate washing and other normal uses. The use of proper lubricants can help control abrasion in some instances. Some items can be covered with an abrasion-resistant material. Controlling the cause of abrasion is sometimes an option. 
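A minimal sketch evaluating the Archard relation given above in SI units; the wear coefficient, load, sliding distance and hardness used here are illustrative placeholders rather than measured data for any material pair.

```python
def archard_wear_volume(k_wear, load_n, sliding_distance_m, hardness_pa):
    """Archard equation: Q = K * W * L / H, returning the wear volume in m^3."""
    return k_wear * load_n * sliding_distance_m / hardness_pa

# Illustrative values (placeholders, not measured data):
# dimensionless wear coefficient 1e-4, 50 N load, 1 km of sliding, 1 GPa hardness
q = archard_wear_volume(k_wear=1e-4, load_n=50.0, sliding_distance_m=1000.0, hardness_pa=1e9)
print("%.2e m^3 (%.2f mm^3)" % (q, q * 1e9))  # 5.00e-09 m^3 (5.00 mm^3)
```

Because the wear volume scales linearly with K, W and L and inversely with H, halving the load or doubling the hardness of the softer surface in this sketch halves the predicted wear, which is why K must be determined experimentally for each material pairing and surface condition.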
Standards ASTM ASTM B611 Test Method for Abrasive Wear Resistance of Cemented Carbides ASTM C131 Standard Test Method for Resistance to Degradation of Small-Size Coarse Aggregate by Abrasion and Impact in the Los Angeles Machine ASTM C448 Standard Test Methods for Abrasion Resistance of Porcelain Enamels ASTM C535 Standard Test Method for Resistance to Degradation of Large-Size Coarse Aggregate by Abrasion and Impact in the Los Angeles Machine ASTM C944 Standard Test Method for Abrasion Resistance of Concrete or Mortar Surfaces by the Rotating-Cutter Method ASTM C1027 Standard Test Method for Determining Visible Abrasion Resistance of Glazed Ceramic Tile ASTM C1353 Standard Test Method for Abrasion Resistance of Dimension Stone Subjected to Foot Traffic Using a Rotary Platform, Double-Head Abraser ASTM D968 Standard Test Methods for Abrasion Resistance of Organic Coatings by Falling Abrasive ASTM D1630 Standard Test Method for Rubber Property — Abrasion Resistance (Footwear Abrader) ASTM D2228 Standard Test Method for Rubber Property - Relative Abrasion Resistance by the Pico Abrader Method ASTM D3389 Standard Test Method for Coated Fabrics Abrasion Resistance (Rotary Platform Abrader) ASTM D4060 Standard Test Method for Abrasion Resistance of Organic Coatings by the Taber Abraser ASTM D4158 Standard Guide for Abrasion Resistance of Textile Fabrics], ASTM D4966 Standard Test Method for Abrasion Resistance of Textile Fabrics ASTM D5181 Standard Test Method for Abrasion Resistance of Printed Matter by the GA-CAT Comprehensive Abrasion Tester ASTM D5264 Standard Practice for Abrasion Resistance of Printed Materials by the Sutherland Rub Tester ASTM D5963 Standard Test Method for Rubber Property—Abrasion Resistance (Rotary Drum Abrader) ASTM D6279 Standard Test Method for Rub Abrasion Mar Resistance of High Gloss Coatings ASTM D7428 Standard Test Method for Resistance of Fine Aggregate to Degradation by Abrasion in the Micro-Deval Apparatus ASTM F1486 Standard Practice for Determination of Abrasion and Smudge Resistance of Images Produced from Office Products ASTM F1978 Standard Test Method for Measuring Abrasion Resistance of Metallic Thermal Spray Coatings by Using the Taber Abraser ASTM G56 Standard Test Method for Abrasiveness of Ink-Impregnated Fabric Printer Ribbons and Other Web Materials ASTM G65 Standard Test Method for Measuring Abrasion Using the Dry Sand/Rubber Wheel Apparatus ASTM G75 Standard Test Method for Determination of Slurry Abrasivity (Miller Number) and Slurry Abrasion Response of Materials (SAR Number) ASTM G81 Standard Test Method for Jaw Crusher Gouging Abrasion Test ASTM G99 Standard Test Method for Wear Testing with a Pin-on-Disk Apparatus ASTM G105 Standard Test Method for Conducting Wet Sand/Rubber Wheel Abrasion Tests ASTM G132 Standard Test Method for Pin Abrasion Testing ASTM G171 Standard Test Method for Scratch Hardness of Materials Using a Diamond Stylus ASTM G174 Standard Test Method for Measuring Abrasion Resistance of Materials by Abrasive Loop Contact DIN DIN 53516 Testing of Rubber and Elastomers; Determination of Abrasion Resistance ISO ISO 4649 Rubber, vulcanized or thermoplastic -- Determination of abrasion resistance using a rotating cylindrical drum device ISO 9352 Plastics -- Determination of resistance to wear by abrasive wheels ISO 28080 Hardmetals -- Abrasion tests for hardmetals ISO 23794 Rubber, vulcanized or thermoplastic -- Abrasion testing -- Guidance ISO 21988:2006 Abrasion-resistant cast irons. Classification ISO 28080:2011 Hardmetals. 
Abrasion tests for hardmetals ISO 16282:2008 Methods of test for dense shaped refractory products. Determination of resistance to abrasion at ambient temperature JSA JIS A 1121 Method of test for resistance to abrasion of coarse aggregate by use of the Los Angeles machine JIS A 1452 Method of abrasion test for building materials and part of building construction (falling sand method) JIS A 1453 Method of abrasion test for building materials and part of building construction (abrasive-paper method) JIS A 1509-5 Test methods for ceramic tiles -- Part 5: Determination of resistance to deep abrasion for unglazed floor tiles JIS A 1509-6 Test methods for ceramic tiles -- Part 6: Determination of resistance to surface abrasion for glazed floor tiles JIS C 60068-2 Environmental testing -- Part 2: Tests -- Test Xb: Abrasion of markings and letterings caused by rubbing of fingers and hands JIS H 8682-1 Test methods for abrasion resistance of anodic oxide coatings on aluminium and aluminium alloys -- Part 1: Wheel wear test JIS H 8682-2 Test methods for abrasion resistance of anodic oxide coatings on aluminium and aluminium alloys -- Part 2: Abrasive jet test JIS H 8682-3 Test methods for abrasion resistance of anodic oxide coatings on aluminium and aluminium alloys -- Part 3: Sand-falling abrasion resistance test JIS K 5600-5-8 Testing methods for paints -- Part 5: Mechanical property of film -- Section 8: Abrasion resistance (Rotating abrasive-paper-covered wheel method) JIS K 7204 Plastics -- Determination of resistance to wear by abrasive wheels See also References Further reading “Wear Processes in Manufacturing”, Badahur and Magee, ASTM STP 1362, 1999 Materials degradation Tribology
Abrasion (mechanical)
[ "Chemistry", "Materials_science", "Engineering" ]
1,794
[ "Tribology", "Materials science", "Surface science", "Mechanical engineering", "Materials degradation" ]
25,389,339
https://en.wikipedia.org/wiki/DNA-dependent%20ATPase
DNA-dependent ATPase, abbreviated Dda and also known as Dda helicase and Dda DNA helicase, is the 439-amino acid, 49,897-atomic mass unit protein encoded by the Dda gene of bacteriophage T4, a virus that infects enterobacteria. Biochemistry Dda is a molecular motor, specifically a helicase that moves in the 5' to 3' direction along a nucleic acid phosphodiester backbone, separating two annealed nucleic acid strands, using the free energy released by the hydrolysis of adenosine triphosphate. The National Center for Biotechnology Information (NCBI) Reference Sequence accession number is NP_049632. Molecular biology Dda is involved in the initiation of T4 DNA replication and DNA recombination. Genetics The Dda gene is 31,219 base pairs long. The GenBank accession number is AAD42555. The coding strand (see also: sense strand) begins at base number 9,410 and ends at base number 10,729. Cellular biology Dda is toxic to cells at elevated levels. See also Enzymes Motor proteins Transcription (genetics) References External links National Center for Biotechnology – Biomedical and genomic information via the National Library of Medicine (NLM) at the National Institutes of Health (NIH). Molecular biology
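As a quick consistency sketch (not from the article), the coding-strand coordinates quoted above can be turned into the protein length, assuming standard three-nucleotide codons and one stop codon.

```python
# Arithmetic sketch relating the coding-strand coordinates quoted above to the protein
# length. Coordinates come from the text; the single 3-nt stop codon is a standard assumption.

start, end = 9_410, 10_729           # coding strand boundaries (bases, inclusive)
coding_length = end - start + 1      # 1,320 bp

codons = coding_length // 3          # 440 codons
amino_acids = codons - 1             # 439 residues, the last codon being the stop codon

print(f"Coding region: {coding_length} bp -> {amino_acids} amino acids")
```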
DNA-dependent ATPase
[ "Chemistry", "Biology" ]
288
[ "Biochemistry stubs", "Biochemistry", "Protein stubs", "Molecular biology" ]
25,390,580
https://en.wikipedia.org/wiki/Hotspot%20Ecosystems%20Research%20on%20the%20Margins%20of%20European%20Seas
Hotspot Ecosystems Research on the Margins of European Seas, or HERMES, was an international multidisciplinary project, from April 2005 to March 2009, that studied deep-sea ecosystems along Europe's deep-ocean margin. The HERMES project was funded by the European Commission's Sixth Framework Programme, and was the predecessor to the HERMIONE project, which started in April 2009. References Hydrology Oceanography Climate change and the environment
Hotspot Ecosystems Research on the Margins of European Seas
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
91
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics", "Environmental engineering" ]
25,392,304
https://en.wikipedia.org/wiki/Orgel%20diagram
Orgel diagrams are correlation diagrams which show the relative energies of electronic terms in transition metal complexes, much like Tanabe–Sugano diagrams. They are named after their creator, Leslie Orgel. Orgel diagrams are restricted to only show weak field (i.e. high spin) cases, and offer no information about strong field (low spin) cases. Because Orgel diagrams are qualitative, no energy calculations can be performed from these diagrams; also, Orgel diagrams only show the symmetry states of the highest spin multiplicity instead of all possible terms, unlike a Tanabe–Sugano diagram. Orgel diagrams will, however, show the number of spin allowed transitions, along with their respective symmetry designations. In an Orgel diagram, the parent term (P, D, or F) in the presence of no ligand field is located in the center of the diagram, with the terms due to that electronic configuration in a ligand field at each side. There are two Orgel diagrams, one for d1, d4, d6, and d9 configurations and the other with d2, d3, d7, and d8 configurations. In an Orgel diagram, lines with the same Russell–Saunders terms will diverge due to the non-crossing rule, but all other lines will be linear. Also, for the D Orgel diagram, the left side contains d1 and d6 tetrahedral and d4 and d9 octahedral complexes. The right side contains d4 and d9 tetrahedral and d1 and d6 octahedral complexes. For the F Orgel diagram, the left side contains d2 and d7 tetrahedral and d3 and d8 octahedral complexes. The right side contains d3 and d8 tetrahedral and d2 and high spin d7 octahedral complexes. References External links Applying Electronic Spectra Calculations Using Orgel Diagrams Calculations Using Tanabe-Sugano Diagrams Spectroscopy of First Row Transition Metal Complexes – F Ground States Coordination chemistry Spectroscopy Inorganic chemistry Transition metals Eponymous diagrams of chemistry
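The qualitative rules in the last paragraph can be captured in a small lookup, shown below as a sketch. The function name and structure are mine; the mapping of configurations and geometries to the D and F diagrams and their left/right sides follows the text above, and only the weak-field (high-spin) cases that Orgel diagrams cover are handled.

```python
# Sketch encoding the qualitative rules stated above: which Orgel diagram (D or F)
# and which side of it applies for a given d-electron count and geometry.

def orgel_diagram(d_electrons: int, geometry: str) -> tuple[str, str]:
    geometry = geometry.lower()
    if geometry not in ("octahedral", "tetrahedral"):
        raise ValueError("geometry must be 'octahedral' or 'tetrahedral'")

    if d_electrons in (1, 4, 6, 9):
        diagram = "D"
        left = {("octahedral", 4), ("octahedral", 9),
                ("tetrahedral", 1), ("tetrahedral", 6)}
    elif d_electrons in (2, 3, 7, 8):
        diagram = "F"
        left = {("octahedral", 3), ("octahedral", 8),
                ("tetrahedral", 2), ("tetrahedral", 7)}
    else:
        raise ValueError("d0, high-spin d5 and d10 configurations have no Orgel diagram")

    side = "left" if (geometry, d_electrons) in left else "right"
    return diagram, side

print(orgel_diagram(3, "octahedral"))   # ('F', 'left')
print(orgel_diagram(7, "octahedral"))   # ('F', 'right'), i.e. high-spin d7 octahedral
```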
Orgel diagram
[ "Physics", "Chemistry", "Astronomy" ]
423
[ "Spectroscopy stubs", "Molecular physics", "Inorganic compounds", "Spectrum (physical sciences)", "Instrumental analysis", "Coordination chemistry", "Astronomy stubs", "Inorganic compound stubs", "nan", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
25,393,279
https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20July%2024%2C%202055
A total solar eclipse will occur at the Moon's ascending node of orbit on Saturday, July 24, 2055, with a magnitude of 1.0359. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A total solar eclipse occurs when the Moon's apparent diameter is larger than the Sun's, blocking all direct sunlight, turning day into darkness. Totality occurs in a narrow path across Earth's surface, with the partial solar eclipse visible over a surrounding region thousands of kilometres wide. Occurring about 2.9 days before perigee (on July 27, 2055, at 6:00 UTC), the Moon's apparent diameter will be larger. The path of totality will be visible from parts of South Africa. A partial solar eclipse will also be visible for parts of southern and central Africa. Eclipse details Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the moon's penumbra or umbra attains the specific parameter, and the second table describes various other parameters pertaining to this eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. Related eclipses Eclipses in 2055 A partial solar eclipse on January 27. A total lunar eclipse on February 11. A total solar eclipse on July 24. A partial lunar eclipse on August 7. Metonic Preceded by: Solar eclipse of October 4, 2051 Followed by: Solar eclipse of May 11, 2059 Tzolkinex Preceded by: Solar eclipse of June 11, 2048 Followed by: Solar eclipse of September 3, 2062 Half-Saros Preceded by: Lunar eclipse of July 18, 2046 Followed by: Lunar eclipse of July 28, 2064 Tritos Preceded by: Solar eclipse of August 23, 2044 Followed by: Solar eclipse of June 22, 2066 Solar Saros 127 Preceded by: Solar eclipse of July 13, 2037 Followed by: Solar eclipse of August 3, 2073 Inex Preceded by: Solar eclipse of August 12, 2026 Followed by: Solar eclipse of July 3, 2084 Triad Preceded by: Solar eclipse of September 22, 1968 Followed by: Solar eclipse of May 25, 2142 Solar eclipses of 2054–2058 Saros 127 Metonic series Tritos series Inex series References External links NASA graphics 2055 07 24 2055 in science 2055 07 24 2055 07 24
Solar eclipse of July 24, 2055
[ "Astronomy" ]
588
[ "Future astronomical events", "Future solar eclipses" ]
25,393,646
https://en.wikipedia.org/wiki/Anti-apoptotic%20Ras%20signalling%20cascade
The Anti-apoptotic Ras signaling cascade is an intracellular signal transduction cascade that involves the Ras protein and inhibits apoptosis. It is the target of the cancer drug gefitinib. It may refer to the PI3K/AKT pathway. It may refer to the MAPK/ERK pathway which involves ras and can affect apoptosis. The anti-apoptotic STAT pathway does not involve Ras. See also Ras subfamily References External links Diagram of ras onwards. Signal transduction Cell cycle Apoptosis
Anti-apoptotic Ras signalling cascade
[ "Chemistry", "Biology" ]
113
[ "Signal transduction", "Cellular processes", "Apoptosis", "Biochemistry", "Neurochemistry", "Cell cycle" ]
25,394,605
https://en.wikipedia.org/wiki/Genetically%20modified%20virus
A genetically modified virus is a virus that has been altered or generated using biotechnology methods, and remains capable of infection. Genetic modification involves the directed insertion, deletion, artificial synthesis or change of nucleotide bases in viral genomes. Genetically modified viruses are mostly generated by the insertion of foreign genes into viral genomes for biomedical, agricultural, bio-control, or technological purposes. The terms genetically modified virus and genetically engineered virus are used synonymously. General usage Genetically modified viruses are generated through genetic modification, which involves the directed insertion, deletion, artificial synthesis, or change of nucleotide sequences in viral genomes using biotechnological methods. While most dsDNA viruses have single monopartite genomes and many RNA viruses have multipartite genomes, it is not necessary for all parts of a viral genome to be genetically modified for the virus to be considered a genetically modified virus. Viruses capable of infection that are generated through artificial gene synthesis of all, or part, of their genomes (for example based on inferred historical sequences) may also be considered genetically modified viruses. Viruses that are changed solely through the action of spontaneous mutations, recombination or reassortment events (even in experimental settings) are not generally considered to be genetically modified viruses. Viruses are generally modified so they can be used as vectors for inserting new genetic information into a host organism or altering its preexisting genetic material. This can be achieved through at least three processes: Integration of all, or parts, of a viral genome into the host's genome (e.g. into its chromosomes). When the whole genetically modified viral genome is integrated it is then referred to as a genetically modified provirus. Where DNA or RNA that has been packaged as part of a virus particle, but may not necessarily contain any viral genes, becomes integrated into a host's genome, this process is known as transduction. Maintenance of the viral genome within host cells but not as an integrated part of the host's genome. Where genes necessary for genome editing have been placed into the viral genome using biotechnology methods, editing of the host's genome is possible. This process does not require the integration of viral genomes into the host's genome. None of these three processes are mutually exclusive. Where only process 2 occurs and it results in the expression of a genetically modified gene, this will often be referred to as a transient expression approach. The capacity to infect host cells or tissues is a necessary requirement for all applied uses of genetically modified viruses. However, a capacity for viral transmission (the transfer of infections between host individuals) is either not required or is considered undesirable for most applications. Only in a small minority of proposed uses is viral transmission considered necessary or desirable; an example is transmissible vaccines. This is because transmissibility considerably complicates efforts to monitor, control, or contain the spread of viruses. History In 1972, the earliest report of the insertion of a foreign sequence into a viral genome was published, when Paul Berg used the EcoRI restriction enzyme and DNA ligases to create the first ever recombinant DNA molecules.
This was achieved by joining DNA from the monkey SV40 virus with that of the lambda virus. However, it was not established that either of the two viruses was capable of infection or replication. In 1974, the first report of a genetically modified virus that could also replicate and infect was submitted for publication by Noreen Murray and Kenneth Murray. Just two months later, in August 1974, Marjorie Thomas, John Cameron and Ronald W. Davis submitted a report of a similar achievement for publication. Collectively, these experiments represented the very start of the development of what would eventually become known as biotechnology or recombinant DNA methods. Health applications Gene therapy Gene therapy uses genetically modified viruses to deliver genes that can cure diseases in human cells. These viruses can deliver DNA or RNA genetic material to the targeted cells. Gene therapy also uses viruses to inactivate mutated genes that cause disease. Viruses that have been used for gene therapy include adenovirus, lentivirus, retrovirus and the herpes simplex virus. The most common viruses used for gene delivery are adenoviruses, as they can carry up to 7.5 kb of foreign DNA and infect a relatively broad range of host cells, although they have been known to elicit immune responses in the host and only provide short-term expression. Other common vectors are adeno-associated viruses, which have lower toxicity and longer-term expression, but can only carry about 4 kb of DNA. The herpes simplex virus is a promising vector with a carrying capacity of over 30 kb and long-term expression, although it is less efficient at gene delivery than other vectors. The best vectors for long-term integration of the gene into the host genome are retroviruses, but their propensity for random integration is problematic. Lentiviruses are part of the same family as retroviruses, with the advantage of infecting both dividing and non-dividing cells, whereas retroviruses only target dividing cells. Other viruses that have been used as vectors include alphaviruses, flaviviruses, measles viruses, rhabdoviruses, Newcastle disease virus, poxviruses, and picornaviruses. Although primarily still at trial stages, gene therapy has had some successes. It has been used to treat inherited genetic disorders such as severe combined immunodeficiency arising from adenosine deaminase deficiency (ADA-SCID), although the development of leukemia in some ADA-SCID patients, along with the death of Jesse Gelsinger in another trial, set back the development of this approach for many years. In 2009 another breakthrough was achieved when an eight-year-old boy with Leber's congenital amaurosis regained normal eyesight, and in 2016 GlaxoSmithKline gained approval to commercialise a gene therapy treatment for ADA-SCID. As of 2018, a substantial number of clinical trials were underway, including treatments for hemophilia, glioblastoma, chronic granulomatous disease, cystic fibrosis and various cancers. Despite some successes, gene therapy is still considered a risky technique and studies are ongoing to ensure its safety and effectiveness. Cancer treatment Another potential use of genetically modified viruses is to alter them so they can directly treat diseases. This can be through expression of protective proteins or by directly targeting infected cells. In 2004, researchers reported that a genetically modified virus that exploits the selfish behaviour of cancer cells might offer an alternative way of killing tumours.
Since then, several researchers have developed genetically modified oncolytic viruses that show promise as treatments for various types of cancer. Vaccines Most vaccines consist of viruses that have been attenuated, disabled, weakened or killed in some way so that their virulent properties are no longer effective. Genetic engineering could theoretically be used to create viruses with the virulent genes removed. In 2001, it was reported that genetically modified viruses could possibly be used to develop vaccines against diseases such as AIDS, herpes, dengue fever and viral hepatitis, by using a proven safe vaccine virus, such as adenovirus, and modifying its genome to carry genes that code for immunogenic proteins that can boost the immune system's response so it is then able to fight the virus. Genetically engineered viruses should not have reduced infectivity, should invoke a natural immune response, and there is no chance that they will regain their virulence function, which can occur with some other vaccines. As such they are generally considered safer and more efficient than conventional vaccines, although concerns remain over non-target infection, potential side effects and horizontal gene transfer to other viruses. Another approach is to use vectors to create novel vaccines for diseases that have no vaccine available or for which the available vaccines do not work effectively, such as AIDS, malaria, and tuberculosis. Vector-based vaccines have already been approved and many more are being developed. Heart pacemaker In 2012, US researchers reported that they had injected a genetically modified virus into the hearts of pigs. This virus inserted into the heart muscle a gene called Tbx18, which enabled heartbeats. The researchers forecast that one day this technique could be used to restore the heartbeat in humans who would otherwise need electronic pacemakers. Genetically modified viruses intended for use in the environment Animals In Spain and Portugal, by 2005 rabbits had declined by as much as 95% over 50 years due to diseases such as myxomatosis and rabbit haemorrhagic disease, among other causes. This in turn caused declines in predators like the Iberian lynx, a critically endangered species. In 2000 Spanish researchers investigated a genetically modified virus which might have protected rabbits in the wild against myxomatosis and rabbit haemorrhagic disease. However, there was concern that such a virus might make its way into wild populations in areas such as Australia and create a population boom. Rabbits in Australia are considered to be such a pest that land owners are legally obliged to control them. Genetically modified viruses that make the target animals infertile through immunocontraception have been created, as well as others that target the developmental stage of the animal. There are concerns over virus containment and cross-species infection. Trees Since 2009 genetically modified viruses expressing spinach defensin proteins have been field-trialed in Florida (USA). The virus infection of orange trees aims to combat citrus greening disease, which had reduced orange production in Florida by 70% since 2005. A permit application has been pending since February 13, 2017 (USDA 17-044-101r) to extend the experimental use permit to an area of 513,500 acres; this would make it the largest permit of its kind ever issued by the USDA Biotechnology Regulatory Services. Insect Allies program In 2016 DARPA, an agency of the U.S.
Department of Defense, announced a tender for contracts to develop genetically modified plant viruses for an approach involving their dispersion into the environment using insects. The work plan stated: "Plant viruses hold significant promise as carriers of gene editing circuitry and are a natural partner for an insect-transmitted delivery platform." The motivation provided for the program is to ensure food stability by protecting the agricultural food supply and commodity crops: "By leveraging the natural ability of insect vectors to deliver viruses with high host plant specificity, and combining this capability with advances in gene editing, rapid enhancement of mature plants in the field can be achieved over large areas and without the need for industrial infrastructure." Despite its name, the "Insect Allies" program is to a large extent a viral program, developing viruses that would essentially perform gene editing of crops in already-planted fields. The genetically modified viruses described in the work plan and other public documents are of a class of genetically modified viruses subsequently termed HEGAAs (horizontal environmental gene alteration agents). The Insect Allies program is scheduled to run from 2017 to 2021, with contracts being executed by three consortia. There are no plans to release the genetically modified viruses into the environment, with testing of the full insect-dispersed system occurring in greenhouses (Biosafety Level 3 facilities have been mentioned). Concerns have been expressed about how this program and any data it generates will impact biological weapon control and agricultural coexistence, though there has also been support for its stated objectives. Technological applications Lithium-ion batteries In 2009, MIT scientists created a genetically modified virus that has been used to construct a more environmentally friendly lithium-ion battery. The battery was constructed by genetically engineering different viruses, such as the E4 bacteriophage and the M13 bacteriophage, to be used as a cathode. This was done by editing the genes of the virus that code for the protein coat. The protein coat is edited to coat itself in iron phosphate so that it can adhere to highly conductive carbon nanotubes. The viruses that have been modified to have a multifunctional protein coat can be used as a nano-structured cathode, which causes ionic interactions with cations, allowing the virus to be used as a small battery. Angela Belcher, the scientist who led the MIT research team on the project, says that the battery is powerful enough to be used as a rechargeable battery and to power hybrid electric cars and a number of personal electronics. While both the E4 and M13 viruses can infect and replicate within their bacterial host, it is unclear whether they retain this capacity after being part of a battery. Safety concerns and regulation Bio-hazard research limitations The National Institutes of Health declared a research funding moratorium on select gain-of-function virus research in January 2015. In January 2017, the U.S. Government released final policy guidance for the review and oversight of research anticipated to create, transfer, or use enhanced potential pandemic pathogens (PPP). Questions about a potential escape of a modified virus from a biosafety lab and the utility of dual-use technology and dual-use research of concern (DURC) prompted the NIH funding policy revision. GMO lentivirus incident A scientist claims she was infected by a genetically modified virus while working for Pfizer.
In her federal lawsuit she says she has been intermittently paralyzed by the Pfizer-designed virus. "McClain, of Deep River, suspects she was inadvertently exposed, through work by a former Pfizer colleague in 2002 or 2003, to an engineered form of the lentivirus, a virus similar to the one that can lead to acquired immune deficiency syndrome, or AIDS." The court found that McClain failed to demonstrate that her illness was caused by exposure to the lentivirus, but also that Pfizer violated whistleblower protection laws. References Viruses Genetically modified organisms
Genetically modified virus
[ "Engineering", "Biology" ]
2,808
[ "Viruses", "Tree of life (biology)", "Genetically modified organisms", "Genetic engineering", "Microorganisms" ]
25,394,665
https://en.wikipedia.org/wiki/Eco-costs%20value%20ratio
The EVR model is a life cycle assessment based method to analyse consumption patterns, business strategies and design options in terms of eco-efficient value creation. In addition, it is used to compare products and service systems (e.g. benchmarking). The eco-costs/value ratio (EVR) is an indicator to reveal sustainable and unsustainable consumption patterns of people. The eco-costs are an indicator of the environmental pollution of the products people buy; the value is the price they pay for them in our free market economy. Example: when somebody spends 1000 euro per month on housing (in Europe: EVR approx. 0.3) it is less harmful for the environment than when 1000 euro is spent on diesel (in Europe: EVR approx. 1.0). See section 3.1. The EVR is also relevant for business strategies, because companies are facing the slow but inevitable internalization of environmental costs. At the moment the costs of products do not take into account the environmental damage caused by these products. This "pollution is for free" mentality is less and less accepted by communities. The EVR makes companies aware of the relative importance of the environmental pollution of their products, and the relative risk they run that future production costs will increase because of this internalization of environmental costs. By using the EVR, companies can make decisions for their product portfolio: abandon products with low value and high environmental costs, and stimulate products with high value and low environmental costs. See sections 2.3 and 3.2. Background information The EVR model was introduced in 1998 and published in 2000–2004 in the International Journal of LCA and in the Journal of Cleaner Production. The concept of EVR is based on eco-costs. In 2007, 2012 and 2017, the eco-costs system was updated. General databases of eco-costs are provided (open source) at www.ecocostsvalue.com of Delft University of Technology (the Netherlands). In 2010 a book named "LCA-based assessment of sustainability: the Eco-costs/Value Ratio (EVR)" was published, containing the most important articles about the EVR. Working principle The model EVR = eco-costs/value. The basic idea of the EVR model is to link the 'value chain' to the ecological product chain. In the value chain, the added value (in terms of money) and the added costs are determined for each step of the product 'from cradle to grave'. Similarly, the ecological impact of each step in the product chain is expressed in terms of money, the so-called 'eco-costs'. See Figure 1. Note that there is also a Porter chain from right to left in Figure 1, starting with waste and adding value by recycling. In this way the Porter chain becomes circular. Eco-costs Eco-costs express the environmental burden of a product on the basis of prevention of that burden. They are the marginal prevention costs (in money) which should be made to reduce the environmental pollution and materials depletion in our world to a level which is in line with the carrying capacity of our earth. As such, the eco-costs are virtual costs, since they are not yet integrated in the real-life costs of current production chains (life cycle costs). The eco-costs should be regarded as hidden obligations. For example: for each 1000 kg of CO2 emission, one should invest €135 in offshore windmill parks (or other reduction systems at that price or less). When this is done consistently, the total CO2 emissions in the world will be reduced by 65% compared to the emissions in 2008.
As a result, global warming will stabilise. In short: "the eco-costs of 1000 kg of CO2 are €135". Similar calculations can be made on the environmental burden of acidification, eutrophication, summer smog, fine dust, eco-toxicity, and the use of metals, fossil fuels and land (nature). Eco-costs are used in life cycle assessment (LCA) to assess the environmental performance of different materials, processes and end-of-life methods. Of products The EVR combines eco-costs and value to see whether a product will be successful. The product should have a low environmental impact over its life cycle (low eco-costs) and an attractive value for consumers. The value here is the market value (perceived customer value, also called the fair price). Figure 2 depicts the three dimensions of a product: the value, the costs and the eco-costs. It is a trend in society that heavy pollution by industry is no longer accepted by the inhabitants of a country. This results in stricter regulations by countries (e.g. tradable emission rights, enforcement of best available technologies, eco-taxes, etc.). Eco-costs will then become part of the internal production costs. This internalizing of eco-costs might be a threat to a company, but it might also be an opportunity: "When my product has less eco-burden than that of my competitor, my product can withstand stricter regulations of the government. So this characteristic of low eco-costs of my product is a competitive edge." To analyse the short-term and the long-term market prospects of a product or a product-service combination (Product Service System, PSS), each product or PSS can be positioned in the portfolio matrix of Figure 3. The basic idea of the product portfolio matrix is the notion that a product, service or PSS is characterized by: (1) its short-term market potential: a high value/costs ratio; (2) its long-term market requirement: low eco-costs. In terms of product strategy, the matrix results in three strategic directions: (1) enhance the value/costs ratio of a green design to create a bigger market; (2) lower the eco-costs of currently successful products to make them fit for future markets; (3) abandon products with a low value/costs ratio (not much profit, small market) and high eco-costs. For many 'green designs', the usual problem is that they have a low current value/costs ratio. In most cases the production costs are higher than the production costs of the classic solution; in some cases even the (perceived) quality is poor. There are two ways to do something about this: (a) enhance the (perceived) quality of the product; (b) attach a service to the product (create a PSS) in such a way that the value of the bundle of the product and the service is more than the value of its components. For a product which has a good present value/costs ratio but high eco-costs, the product and the production process have to be redesigned to lower the eco-costs. This road towards sustainability is often far more promising than the strategy of enhancing the value/costs ratio of a green design. The reason is that the economies of scale for production and distribution are available, and that the new product is marketed to an existing client base which is used to the brand name, the quality standards, the service system, etc. Note: the most common fear of business managers is that their new green products end up with a deteriorated value/costs ratio, and hence will have a cumbersome position in the market. The stability of governmental policy plays an important role here.
When governmental regulations which level the playing field are postponed or even abandoned, proactive companies with sound product strategies are harmed. This can cause severe damage to the transition process and may lead to reluctance of players to move proactively in the future. The most successful design options are depicted in Figure 4. The best design strategy is to increase value where value is high, and to decrease the eco-costs where the eco-costs are high. Use De-linking In economics, de-linking (also known as decoupling) is often used in the context of economic production and environmental quality. In this context, it refers to the ability of an economy to grow without corresponding increases in environmental pressure. In many economies increasing production (GDP) would involve increased pressure on the environment. An economy that is able to sustain GDP growth without also experiencing a worsening of environmental conditions is said to be de-linked. There is also a consumers' side to the de-linking of economy and ecology. Under the assumption that most households spend in their lifetime what they earn in their lifetime, the total EVR of household spending is the key towards sustainability. Only when this total EVR of spending gets lower can the eco-costs related to total spending be reduced, even at a higher level of spending. There are two ways of achieving this: at the production side, the improvement of eco-efficiency ('lowering EVR') of products and services by industry; at the consumer's side, the change of lifestyle of customers in the direction of 'low EVR' products. At the production side, society is heading in the right direction: gradually, industrial production is achieving higher levels of the value/costs ratio and is at the same time becoming cleaner. At the consumer's side, however, society is suffering from the fact that consumers' preferences are heading in the wrong direction: towards products and services with an unfavourable EVR (like driving SUVs, driving more kilometres, and taking intercontinental flights for holidays). These unfavourable preferences can be concluded from Figure 5. Figure 5 shows that people in the Netherlands (and probably in the other EC countries as well) spend relatively more money on cars and holidays when they have more money available. Other studies show that people tend to take intercontinental holidays the moment they can afford it. This shift in consumer spending will become a big problem in the near future, since the EVR of e.g. housing and health care is much lower than the EVR of transport and (inter)continental holidays by plane. Figure 6 shows the EVR (= eco-costs/price) on the Y-axis as a function of the cumulative expenditures on all products and services of all citizens in the EU25 on the X-axis. The data is from the EIPRO study of the European Commission (EIPRO = environmental impact of products). The area underneath the curve is proportional to the total eco-costs of the EU25. Basically there are two strategies to reduce the area under the curve: ask industry to reduce the eco-costs of their products (this will shift the curve downward), or try to reduce the expenditures of consumers at the high end of the curve and let them spend this money at the low end of the curve (this will shift the middle part of the curve to the right). The question is now how designers and engineers can contribute to this required shift towards sustainability, and what this means for the product portfolio strategies of companies.
The solution is Eco-efficient Value Creation. Eco-efficient value creation The way towards sustainability requires a double aim in product innovation (see Figure 7): lower eco-costs, and at the same time higher value (a higher market price). We call this eco-efficient value creation. The reason we need value creation for eco-efficient products is threefold: (1) the higher price in the market is required to cover the higher production cost of green products (note that a higher price is only accepted by the consumer when the perceived value is higher, otherwise the consumer will not buy the product); (2) the higher price prevents the rebound effect; (3) lowering the EVR appears to be the key to sustainable development at the level of countries (Figure 6). Below, an example of eco-efficient value creation is given, which is the introduction of the Lexus RX 400h in the USA: the customer value has increased, by emphasising its combined power and comfort (from the advertisement in the US: "……While it may have a V6 engine under the hood, the extra boost from the electric-drive motor gives the vehicle the acceleration power of a V8……. and the noise levels in Lexus hybrid vehicles have been reduced even more"), and the eco-costs of driving are lower, because of its excellent overall fuel economy. Note that the acceleration of a car is an interesting issue in terms of value. High acceleration is associated with expensive sports cars (Porsche, Ferrari). But people who buy these fast cars hardly ever use it. For these people acceleration is more part of the image of the product than it is part of the product qualities they use on a daily basis. So reducing the acceleration is the wrong strategy: it eliminates the extra value, and it hardly reduces the overall eco-costs in practice. Environmental benchmarking in LCA Life cycle assessment (LCA) is the generally accepted method to compare two (or more) alternative products or services. A prerequisite for such a comparison is that the functionality ('functional unit') and the quality of the alternatives are the same (you cannot compare apples and oranges in a classical LCA). In cases of product design and architecture, however, this prerequisite seems to be a fundamental flaw in the application of LCA: the designer or architect is aiming at a better quality (in the broad sense of the word, including intangible aspects like beauty and image), so the new design never has the same quality. In some cases the functionality of the design is not the same, since the design solution is limited by a maximum budget; in some cases the functionality is the same, but the higher quality results in a higher price. In all these cases a single indicator in LCA (like the eco-costs) is not suitable for environmental benchmarking. In these cases, however, it does make sense to compare the design alternatives on the basis of the eco-costs/value ratio (EVR), where the value is the perceived customer value (the fair price). See section 3.1 on De-linking. Example 1. Different types of armchairs differ in terms of comfort, aesthetics, etc. rather than in terms of functionality. A classical LCA (with a single indicator like eco-costs, carbon footprint, etc.) does not make sense here. Selection on the basis of EVR, however, is the key to a sustainable consumption pattern. The chair with the lowest EVR is the best solution in terms of sustainability. Example 2.
In LCA, the comparison of a new building and a renovated building is in the majority of cases not possible, since, in practice, both solutions differ in almost all quality aspects (tangible as well as intangible). However, the solution with lowest EVR is the best in terms of sustainable consumption. Note that the renovated building is the best solution in most of the cases, because it has the lowest EVR in the production phase. However, in some cases the renovated building is not the best solution, because of unfavourable energy consumption (high EVR) in the use phase. References Environmental impact assessment Research Industrial ecology
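A minimal sketch of the EVR bookkeeping used throughout the article is given below. The housing and diesel EVRs are the European figures quoted at the start of the article; the monthly budget split is an assumed illustration, and the function names are mine.

```python
# Minimal sketch of EVR = eco-costs / value and of the total eco-costs implied by a
# spending pattern (euro spent per category times that category's EVR).

def evr(eco_costs: float, value: float) -> float:
    """Eco-costs/value ratio of a product or service (both in the same currency)."""
    return eco_costs / value

def eco_costs_of_spending(spending_by_category: dict, evr_by_category: dict) -> float:
    """Total eco-costs implied by a spending pattern."""
    return sum(amount * evr_by_category[cat]
               for cat, amount in spending_by_category.items())

evr_by_category = {"housing": 0.3, "diesel": 1.0}   # EVRs quoted in the article

# Two ways of spending the same 1000 euro per month (assumed illustration):
print(eco_costs_of_spending({"housing": 1000}, evr_by_category))   # 300.0 euro eco-costs
print(eco_costs_of_spending({"diesel": 1000}, evr_by_category))    # 1000.0 euro eco-costs
```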
Eco-costs value ratio
[ "Chemistry", "Engineering" ]
3,042
[ "Industrial ecology", "Industrial engineering", "Environmental engineering" ]
25,394,740
https://en.wikipedia.org/wiki/Spaghetti%20bridge
A spaghetti bridge is an architectural model of a bridge, made of uncooked spaghetti or other hard, dry, straight noodles. Bridges are constructed for both educational experiments and competitions. The aim is usually to construct a bridge with a specific quantity of materials over a specific span, that can sustain a load. In competitions, the bridge that can hold the greatest load for a short period of time wins the contest. There are many contests around the world, usually held by schools and colleges. Okanagan College contest The original Spaghetti Bridge competition has run at Okanagan College in British Columbia since 1983, and is open to international entrants who are full-time secondary or post-secondary students. The winners of the 2009 competition were Norbert Pozsonyi and Aliz Totivan of the Szechenyi Istvan University of Győr in Hungary. They won $1,500 with a bridge that weighed 982 grams and held 443.58 kg. Second place went to Brendon Syryda and Tyler Pearson of Okanagan College with a bridge that weighed 982 grams and held 98.71 kg. Contests Spaghetti bridge building contests around the world include: Abbotsford School District Australian Maritime College Budapest Technical University Camosun College Coonabarabran High School Delft University of Technology Ferris State University George Brown College Institute of Machine Design and Security Technology Instituto GayLussac - Ensino Fundamental e Médio Universidade da Coruña, Escola Politécnica de Enxeñaría Italy High School James Cook University Johns Hopkins University McGill University Monash University Nathan Hale High School Okanagan College Riga Technical University Rowan University Technical University of Madrid Universidad del Valle de Guatemala Universidade Federal do Rio Grande do Sul University of Architecture, Civil Engineering and Geodesy University of British Columbia University of Maribor University of Salento University of South Australia University of Southern California University of Technology Sydney University of Tehran University of the Andes Vilnius Gediminas Technical University Winston Science Woodside Elementary School Bezalel Academy of Arts and Design Universidad de Buenos Aires - Facultad de Arquitectura Diseño y Urbanismo Instituto Federal de Educação, Ciência e Tecnologia de Santa Catarina - Joinville, Brazil See also Architectural engineering Balsa wood bridge Civil engineering Physics Problem-based learning Statics Truss References Winston Science http://www.winston-school.org/?PageName=LatestNews&Section=Highlights&ItemID=106650&ISrc=School&Itype=Highlights&SchoolID=4831 Further reading - Estimating the weight and the failure load of a spaghetti bridge: a deep learning approach DOI:10.1080/0952813X.2019.1694590 External links Resources Bridges Scale modeling
Spaghetti bridge
[ "Physics", "Engineering" ]
571
[ "Structural engineering", "Scale modeling", "Bridges" ]
8,390,819
https://en.wikipedia.org/wiki/Specific%20surface%20area
Specific surface area (SSA) is a property of solids defined as the total surface area (SA) of a material per unit mass, (with units of m2/kg or m2/g). Alternatively, it may be defined as SA per solid or bulk volume (units of m2/m3 or m−1). It is a physical value that can be used to determine the type and properties of a material (e.g. soil or snow). It has a particular importance for adsorption, heterogeneous catalysis, and reactions on surfaces. Measurement Values obtained for specific surface area depend on the method of measurement. In adsorption based methods, the size of the adsorbate molecule (the probe molecule), the exposed crystallographic planes at the surface and measurement temperature all affect the obtained specific surface area. For this reason, in addition to the most commonly used Brunauer–Emmett–Teller (N2-BET) adsorption method, several techniques have been developed to measure the specific surface area of particulate materials at ambient temperatures and at controllable scales, including methylene blue (MB) staining, ethylene glycol monoethyl ether (EGME) adsorption, electrokinetic analysis of complex-ion adsorption and a Protein Retention (PR) method. A number of international standards exist for the measurement of specific surface area, including ISO standard 9277. Calculation The SSA can be simply calculated from a particle size distribution, making some assumption about the particle shape. This method, however, fails to account for surface associated with the surface texture of the particles. Adsorption The SSA can be measured by adsorption using the BET isotherm. This has the advantage of measuring the surface of fine structures and deep texture on the particles. However, the results can differ markedly depending on the substance adsorbed. The BET theory has inherent limitations but has the advantage to be simple and to yield adequate relative answers when the solids are chemically similar. In relatively rare cases, more complicated models based on thermodynamic approaches, or even quantum chemistry, may be applied to improve the consistency of the results, but at the cost of much more complex calculations requiring advanced knowledge and a good understanding from the operator. Gas permeability This depends upon a relationship between the specific surface area and the resistance to gas-flow of a porous bed of powder. The method is simple and quick, and yields a result that often correlates well with the chemical reactivity of a powder. However, it fails to measure much of the deep surface texture. See also Surface-area-to-volume ratio References Porous media Surface science A
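The "calculation from a particle size distribution" route mentioned in the Calculation section can be sketched as below for the idealized case of smooth, dense spheres, for which SSA = 6/(ρ·d) per size class; as the text notes, this ignores surface texture. The density and size classes are assumed values, not data from the article.

```python
# Sketch: mass-weighted specific surface area of a powder from its size distribution,
# assuming smooth spheres of uniform density. All sample numbers are assumptions.

def ssa_spheres(diameters_m, mass_fractions, density_kg_m3):
    """Mass-weighted specific surface area (m^2/kg) for spherical particles."""
    assert abs(sum(mass_fractions) - 1.0) < 1e-9, "mass fractions must sum to 1"
    return sum(f * 6.0 / (density_kg_m3 * d)
               for d, f in zip(diameters_m, mass_fractions))

# Example: a quartz-like powder (density ~2650 kg/m^3) with three size classes.
diameters = [1e-6, 10e-6, 100e-6]       # 1, 10 and 100 micrometres
fractions = [0.2, 0.5, 0.3]
print(f"SSA ~ {ssa_spheres(diameters, fractions, 2650.0):.0f} m^2/kg")  # ~570 m^2/kg
```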
Specific surface area
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
556
[ "Physical quantities", "Porous media", "Quantity", "Intensive quantities", "Mass", "Materials science", "Surface science", "Condensed matter physics", "Mass-specific quantities", "Matter" ]
8,391,998
https://en.wikipedia.org/wiki/Laser%20voltage%20prober
The laser voltage probe (LVP) is a laser-based voltage and timing waveform acquisition system which is used to perform failure analysis on flip-chip integrated circuits. The device to be analyzed is de-encapsulated in order to expose the silicon surface. The silicon substrate is thinned mechanically using a back side mechanical thinning tool. The thinned device is then mounted on a movable stage and connected to an electrical stimulus source. Signal measurements are performed through the back side of the device after substrate thinning has been performed. The device being probed must be electrically stimulated using a repeating test pattern, with a trigger pulse provided to the LVP as reference. The operation of the LVP is similar to that of a sampling oscilloscope. Theory of operation The LVP instrument measures voltage waveform signals in the device diffusion regions. Device imaging is accomplished through the use of a laser scanning microscope (LSM). The LVP uses dual infrared (IR) lasers to perform both device imaging and waveform acquisition. One laser is used to acquire images or waveforms from the device, while the second laser provides a reference which may be used to subtract unwanted noise from the signal data being acquired. On an electrically active device, the instrument monitors the changes in the phase of the electromagnetic field surrounding a signal being applied to a junction. The instrument obtains voltage waveform and timing information by monitoring the interaction of laser light with the changes in the electric field across a p-n junction. As the laser reaches the silicon surface, a certain amount of that light is reflected back. The amount of reflected laser light from the junction is sampled at various points in time. The changing electromagnetic field at the junction affects the amount of laser light that is reflected back. By plotting the variations in reflected laser light versus time, it is possible to construct a timing waveform of the signal at the junction. As the test pattern continues to loop, additional measurements are acquired and averaged into the previous measurements. Over a period of time, this averaging of measurements produces a more refined waveform. The end result is a waveform that is representative of the electrical signal present at the junction. References Reliability engineering Semiconductor analysis Diffraction
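The loop-averaging behaviour described in the Theory of operation section can be illustrated with a small simulation, sketched below. The waveform shape, noise level and number of loops are arbitrary assumptions; the point is only that averaging repeated passes of the test pattern refines the acquired waveform.

```python
# Sketch of the loop averaging described above: each loop of the test pattern yields a
# noisy sample of the reflected-light waveform, and averaging many loops refines it.
import numpy as np

rng = np.random.default_rng(0)

n_points = 200                                   # samples per test-pattern loop
t = np.linspace(0.0, 1.0, n_points)
true_waveform = 0.5 * (np.sign(np.sin(2 * np.pi * 3 * t)) + 1)   # toy logic signal

def acquire_pass(noise_rms: float = 2.0) -> np.ndarray:
    """One noisy acquisition of the waveform during one loop of the test pattern."""
    return true_waveform + rng.normal(0.0, noise_rms, n_points)

n_loops = 10_000
averaged = sum(acquire_pass() for _ in range(n_loops)) / n_loops

residual_rms = np.sqrt(np.mean((averaged - true_waveform) ** 2))
print(f"Residual noise after {n_loops} loops: {residual_rms:.3f} (single pass: ~2.0)")
# Noise falls roughly as 1/sqrt(number of loops), which is why continued averaging
# produces the "more refined waveform" described above.
```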
Laser voltage prober
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
455
[ "Systems engineering", "Spectrum (physical sciences)", "Reliability engineering", "Crystallography", "Diffraction", "Spectroscopy" ]
8,394,040
https://en.wikipedia.org/wiki/Steel%20design
Steel Design, or more specifically, Structural Steel Design, is an area of structural engineering used to design steel structures. These structures include schools, houses, bridges, commercial centers, tall buildings, warehouses, aircraft, ships and stadiums. The design and use of steel frames are commonly employed in the design of steel structures. More advanced structures include steel plates and shells. In structural engineering, a structure is a body or combination of rigid bodies in space that forms a system fit for supporting loads and resisting moments. The effects of loads and moments on structures are determined through structural analysis. A steel structure is composed of structural members that are made of steel, usually with standard cross-sectional profiles and standards of chemical composition and mechanical properties. The depth of steel beams used in the construction of bridges is usually governed by the maximum moment, and the cross-section is then verified for shear strength near supports and lateral torsional buckling (by determining the distance between transverse members connecting adjacent beams). Steel column members must be verified as adequate to prevent buckling after axial and moment requirements are met. There are currently two common methods of steel design: the first is the Allowable Strength Design (ASD) method; the second is the Load and Resistance Factor Design (LRFD) method. Both use a strength, or ultimate level, design approach. Load combination equations Allowable Strength Design For ASD, the required strength, Ra, is determined from the following load combinations (according to the AISC SCM, 13th ed.): (1) D + F; (2) D + H + F + L + T; (3) D + H + F + (Lr or S or R); (4) D + H + F + 0.75(L + T) + 0.75(Lr or S or R); (5) D + H + F ± (0.6W or 0.7E); (6) D + H + F + (0.75W or 0.7E) + 0.75L + 0.75(Lr or S or R); (7) 0.6D + 0.6W; (8) 0.6D ± 0.7E, where: D = dead load, Di = weight of ice, E = earthquake load, F = load due to fluids with well-defined pressures and maximum heights, Fa = flood load, H = load due to lateral earth pressure, ground water pressure, or pressure of bulk materials, L = live load due to occupancy, Lr = roof live load, S = snow load, R = nominal load due to initial rainwater or ice, exclusive of the ponding contribution, T = self-straining load, W = wind load, Wi = wind on ice. Special provisions exist for accounting for flood loads and atmospheric ice loads, i.e. Fa, Di and Wi. Note that Allowable Strength Design is NOT equivalent to Allowable Stress Design, as governed by the AISC 9th Edition. Allowable Strength Design still uses a strength, or ultimate level, design approach. Load and Resistance Factor Design For LRFD, the required strength, Ru, is determined from the following factored load combinations: (1) 1.4(D + F); (2) 1.2(D + F + T) + 1.6(L + H) + 0.5(Lr or S or R); (3) 1.2D + 1.6(Lr or S or R) + (L or 0.8W); (4) 1.2D + 1.0W + L + 0.5(Lr or S or R); (5) 1.2D ± 1.0E + L + 0.2S; (6) 0.9D + 1.6W + 1.6H; (7) 0.9D + 1.6H ± (1.6W or 1.0E), where the letters for the loads are the same as for ASD. AISC Steel Construction Manual The American Institute of Steel Construction (AISC), Inc. publishes the Steel Construction Manual (SCM), which is currently in its 16th edition. Structural engineers use this manual in analyzing and designing various steel structures. Some of the chapters of the book are as follows. Dimensions and properties of various types of steel sections available on the market (W, S, C, WT, HSS, etc.)
General Design Considerations Design of Flexural Members Design of Compression Members Design of Tension members Design of Members Subject to Combined Loading Design Consideration for Bolts Design Considerations for Welds Design of Connecting Elements Design of Simple Shear Connections Design of Flexure Moment Connections Design of Fully Restrained (FR) Moment Connections Design of Bracing Connections and Truss Connections Design of Beam Bearing Plates, Column Base Plates, Anchor Rods, and Column Splices Design of Hanger Connections, Bracket Plates, and Crane-Rail Connections General Nomenclature Specification and Commentary for Structural Steel Buildings RCSC Specification and Commentary for Structural Joints Using High-Strength Bolts Code of Standard Practice and Commentary for Structural Steel Buildings and Bridges Miscellaneous Data and Mathematical Information CISC Handbook of Steel Construction Canadian Institute of Steel Construction publishes the "CISC Handbook of steel Construction". CISC is a national industry organization representing the structural steel, open-web steel joist and steel plate fabrication industries in Canada. It serves the same purpose as the AISC manual, but conforms with Canadian standards. See also Structural steel References Structural engineering Structural steel
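As a sketch of how the required strength Ra is obtained from the ASD load combinations listed earlier, the snippet below evaluates a reduced set of those combinations and takes the governing (largest) one. Only some combinations are encoded, the F, H and T terms are omitted for brevity, and the example loads are assumed; the full set and exact factors should be taken from the governing specification.

```python
# Sketch: required strength Ra as the governing value over a reduced set of the
# ASD load combinations listed above (F, H and T terms omitted; values assumed).

def required_strength_asd(loads: dict) -> float:
    D, L, Lr, S, R, W, E = (loads.get(k, 0.0) for k in "D L Lr S R W E".split())
    roof = max(Lr, S, R)
    combos = {
        "D":                              D,
        "D + L":                          D + L,
        "D + (Lr or S or R)":             D + roof,
        "D + 0.75L + 0.75(Lr or S or R)": D + 0.75 * L + 0.75 * roof,
        "D + 0.6W":                       D + 0.6 * W,
        "D + 0.7E":                       D + 0.7 * E,
        "0.6D + 0.6W":                    0.6 * D + 0.6 * W,
    }
    name, Ra = max(combos.items(), key=lambda kv: kv[1])
    print(f"Governing combination: {name}  ->  Ra = {Ra:.1f}")
    return Ra

# Example gravity, snow and wind loads on a member, in kN (assumed values):
required_strength_asd({"D": 120.0, "L": 80.0, "S": 30.0, "W": 40.0})
```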
Steel design
[ "Engineering" ]
1,086
[ "Structural engineering", "Civil engineering", "Structural steel", "Construction" ]
18,757,799
https://en.wikipedia.org/wiki/Basset%20force
In a body submerged in a fluid, unsteady forces due to acceleration of that body with respect to the fluid can be divided into two parts: the virtual mass effect and the Basset force. The Basset force term describes the force due to the lagging boundary layer development with changing relative velocity (acceleration) of bodies moving through a fluid. The Basset term accounts for viscous effects and addresses the temporal delay in boundary layer development as the relative velocity changes with time. It is also known as the "history" term. The Basset force is difficult to implement and is commonly neglected for practical reasons; however, it can be substantially large when the body is accelerated at a high rate. This force in an accelerating Stokes flow was proposed by Joseph Valentin Boussinesq in 1885 and Alfred Barnard Basset in 1888. Consequently, it is also referred to as the Boussinesq–Basset force.

Acceleration of a flat plate

Consider an infinitely large plate started impulsively with a step change in velocity—from 0 to u0—in the direction of the plate–fluid interface plane. The equation of motion for the fluid—Stokes flow at low Reynolds number—is

∂u/∂t = νc ∂²u/∂y²,

where u(y,t) is the velocity of the fluid, at some time t, parallel to the plate, at a distance y from the plate, and νc is the kinematic viscosity of the fluid (c ~ continuous phase). The solution to this equation is

u(y,t) = u0 [1 − erf( y / (2√(νc t)) )] = u0 erfc( y / (2√(νc t)) ),

where erf and erfc denote the error function and the complementary error function, respectively. Assuming that an acceleration of the plate can be broken up into a series of such step changes in the velocity, it can be shown that the cumulative effect on the shear stress on the plate is

τ(t) = √(ρc μc / π) ∫₀ᵗ [dup/dτ] / √(t − τ) dτ,

where up(t) is the velocity of the plate, ρc is the mass density of the fluid, and μc is the viscosity of the fluid.

Acceleration of a spherical particle

Boussinesq (1885) and Basset (1888) found that the force F on an accelerating spherical particle in a viscous fluid is

F = (3/2) D² √(π ρc μc) ∫₀ᵗ [d(u − v)/dτ] / √(t − τ) dτ,

where D is the particle diameter, and u and v are the fluid and particle velocity vectors, respectively.

See also Basset–Boussinesq–Oseen equation Stokes boundary layer References Fluid dynamics
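Because the history term involves a weakly singular convolution integral, it is normally evaluated numerically. The sketch below approximates the spherical-particle expression above for an assumed relative-velocity history; the fluid and particle properties and the simple quadrature are illustrative choices only.

```python
import numpy as np

# Illustrative sketch: numerically evaluate the Basset history force
#   F(t) = (3/2) D^2 sqrt(pi rho_c mu_c) * integral_0^t [d(u - v)/dtau] / sqrt(t - tau) dtau
# for a prescribed relative-velocity history. All property values are assumed.

rho_c = 1000.0    # fluid density, kg/m^3 (assumed: water)
mu_c = 1.0e-3     # fluid dynamic viscosity, Pa s (assumed: water)
D = 100e-6        # particle diameter, m (assumed)

t = np.linspace(0.0, 0.1, 2001)            # time grid, s
w = 0.01 * (1.0 - np.exp(-t / 0.02))       # assumed relative velocity u - v, m/s
dw_dt = np.gradient(w, t)                  # d(u - v)/dt

def basset_force(i):
    """History integral up to t[i], excluding the final node to avoid the
    integrable 1/sqrt singularity at tau = t (simple trapezoidal rule)."""
    tau = t[:i]
    kernel = dw_dt[:i] / np.sqrt(t[i] - tau)
    integral = np.sum(0.5 * (kernel[1:] + kernel[:-1]) * np.diff(tau))
    return 1.5 * D**2 * np.sqrt(np.pi * rho_c * mu_c) * integral

print(f"Basset force at t = {t[-1]:.2f} s: {basset_force(len(t) - 1):.3e} N")
```

The last time node is excluded from the quadrature to sidestep the square-root singularity; finer grids or dedicated singular quadratures improve accuracy.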
Basset force
[ "Chemistry", "Engineering" ]
465
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
22,507,026
https://en.wikipedia.org/wiki/Photothermal%20optical%20microscopy
Photothermal optical microscopy / "photothermal single particle microscopy" is a technique that is based on detection of non-fluorescent labels. It relies on the absorption properties of labels (gold nanoparticles, semiconductor nanocrystals, etc.), and can be realized on a conventional microscope using a resonant modulated heating beam, a non-resonant probe beam and lock-in detection of photothermal signals from a single nanoparticle. It is the extension of macroscopic photothermal spectroscopy to the nanoscopic domain. The high sensitivity and selectivity of photothermal microscopy allows even the detection of single molecules by their absorption. Similar to Fluorescence Correlation Spectroscopy (FCS), the photothermal signal may be recorded with respect to time to study the diffusion and advection characteristics of absorbing nanoparticles in a solution. This technique is called photothermal correlation spectroscopy (PhoCS).

Forward detection scheme

In this detection scheme a conventional scanning sample or laser-scanning transmission microscope is employed. Both the heating and the probing laser beam are coaxially aligned and superimposed using a dichroic mirror. Both beams are focused onto a sample, typically via a high-NA illumination microscope objective, and recollected using a detection microscope objective. The thereby collimated transmitted beam is then imaged onto a photodiode after filtering out the heating beam. The photothermal signal is then the change in the transmitted probe beam power due to the heating laser. To increase the signal-to-noise ratio a lock-in technique may be used. To this end, the heating laser beam is modulated at a high frequency of the order of MHz and the detected probe beam power is then demodulated at the same frequency. For quantitative measurements, the photothermal signal ΔPd may be normalized to the background detected power Pd (which is typically much larger than the change ΔPd), thereby defining the relative photothermal signal Φ = ΔPd/Pd.

Detection mechanism

The physical basis for the photothermal signal in the transmission detection scheme is the lensing action of the refractive index profile that is created upon the absorption of the heating laser power by the nanoparticle. The signal is homodyne in the sense that a steady state difference signal accounts for the mechanism and the forward scattered field's self-interference with the transmitted beam corresponds to an energy redistribution as expected for a simple lens. The lens is a Gradient Refractive INdex (GRIN) particle determined by the 1/r refractive index profile established due to the point-source temperature profile around the nanoparticle. For a nanoparticle of radius R embedded in a homogeneous medium of refractive index nm with a thermorefractive coefficient dn/dT, the refractive index profile reads

n(r) = nm + (dn/dT) ΔT(r), with ΔT(r) = σabs I / (4π κ r) for r ≥ R,

in which the contrast of the thermal lens is determined by the nanoparticle absorption cross-section σabs at the heating beam wavelength, the heating beam intensity I at the point of the particle and the embedding medium's thermal conductivity κ via Δn(r) = (dn/dT) σabs I / (4π κ r). Although the signal can be well explained in a scattering framework, the most intuitive description is an analogy to the Coulomb scattering of wave packets in particle physics.

Backwards detection scheme

In this detection scheme a conventional scanning sample or laser-scanning transmission microscope is employed. Both the heating and the probing laser beam are coaxially aligned and superimposed using a dichroic mirror.
Both beams are focused onto a sample, typically via a high-NA illumination microscope objective. Alternatively, the probe beam may be laterally displaced with respect to the heating beam. The retroreflected probe-beam power is then imaged onto a photodiode, and the change ΔPd induced by the heating beam provides the photothermal signal.

Detection mechanism

The detection is heterodyne in the sense that the field of the probe beam scattered by the thermal lens interferes in the backwards direction with a well-defined retroreflected part of the incident probing beam.

References Microscopy
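The size of the thermal lens described above can be estimated directly from the point-source temperature profile. The following sketch evaluates the assumed profile n(r) = nm + (dn/dT)·ΔT(r) with ΔT(r) = σabs·I/(4πκr) for a particle in water; the cross-section, intensity, and material constants are rough illustrative values, not measured data.

```python
import numpy as np

# Illustrative sketch: refractive index profile n(r) = n_m + (dn/dT) * dT(r)
# around a heated nanoparticle, with dT(r) = sigma_abs * I / (4 pi kappa r)
# (steady-state point-source profile). All numerical values are assumptions.

n_m = 1.33            # refractive index of water
dn_dT = -1.0e-4       # thermorefractive coefficient of water, 1/K (approximate)
kappa = 0.6           # thermal conductivity of water, W/(m K)
sigma_abs = 8.0e-15   # absorption cross-section, m^2 (assumed small gold particle)
I_heat = 1.0e9        # heating intensity at the particle, W/m^2 (assumed)

r = np.logspace(-8, -6, 5)                            # radii from 10 nm to 1 um
dT = sigma_abs * I_heat / (4.0 * np.pi * kappa * r)   # temperature rise, K
n = n_m + dn_dT * dT                                  # refractive index profile

for ri, dTi, ni in zip(r, dT, n):
    print(f"r = {ri*1e9:7.1f} nm   dT = {dTi:7.3f} K   n(r) = {ni:.6f}")
```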
Photothermal optical microscopy
[ "Chemistry" ]
799
[ "Microscopy" ]
22,508,443
https://en.wikipedia.org/wiki/SECS/GEM
SECS/GEM is the semiconductor industry's equipment interface protocol for equipment-to-host data communications. It is the messaging standard that facilitates communication between process equipment made by disparate manufacturers (etch, deposition, polish, clean, and more) and the factory host. In an automated fab, the interface can start and stop equipment processing, collect measurement data, change variables, and select recipes for products. The SECS (SEMI Equipment Communications Standard)/GEM (Generic Equipment Model) standards do all this in a defined way. Developed by the SEMI (Semiconductor Equipment and Materials International) organization, the standards define a common set of equipment behaviour and communications capabilities. The Generic Model for Communications and Control Of Manufacturing Equipment (GEM) standard is maintained and published by the non-profit organization Semiconductor Equipment and Materials International (SEMI). Generally speaking, the SECS/GEM standard defines messages, state machines and scenarios to enable factory software to control and monitor manufacturing equipment. The GEM standard is formally designated and referred to as SEMI standard E30, but frequently simply referred to as the GEM or SECS/GEM standard. GEM intends "to produce economic benefits for both device manufacturers and equipment suppliers..." by defining "... a common set of equipment behavior and communications capabilities that provide the functionality and flexibility to support the manufacturing automation programs of semiconductor device manufacturers" [SEMI E30, 1.3]. GEM is an implementation of the SECS-II standard, SEMI standard E5. Many equipment manufacturers in the industries of semiconductors (front end and back end), surface mount technology, electronics assembly, photovoltaics, flat panel displays and others provide a SECS/GEM interface so that the factory host software can communicate with the machine for monitoring and/or controlling purposes. Because the GEM standard was written with very few semiconductor-specific features, it can be applied to virtually any automated manufacturing equipment in any industry. All GEM compliant manufacturing equipment share a consistent interface and certain consistent behavior. GEM equipment can communicate with a GEM capable host using either TCP/IP (using the HSMS standard, SEMI E37) or RS-232 based protocol (using the SECS-I standard, SEMI E4). Often both protocols are supported. Each equipment can be monitored and controlled using a common set of SECS-II messages specified by GEM. There are many additional SEMI standards and factory specifications that reference the GEM standard and its features. These additional standards are either industry-specific or equipment-type specific. Following are a few examples. Semiconductor Front-End The semiconductor front-end industry defined a series of standards known as the GEM300 standards that includes SEMI standards E40, E87, E90, E94 and E116 and references the E39 standard. Each standard provides additional features to the GEM interface and builds upon the features in the GEM E30 standard. 300 mm factories worldwide use the underlying GEM standard's data collection features in order to monitor specific equipment activity such as wafer movement and process job execution. The SECS/GEM standard and the additional GEM300 standards are required on nearly each and every 300mm wafer manufacturing tool in order to implement full factory automation. 
This industry has been the strongest supporter of the GEM and related SEMI standards.

Semiconductor Back-End

Numerous pieces of equipment in the semiconductor back-end industry implement the GEM standard. Additional standards have been implemented, such as SEMI E122, Standard for Tester Specific Equipment Model, and SEMI E123, Standard for Handler Equipment Specific Equipment Model.

Surface Mount Technology

Much of the equipment in the surface mount technology industry supports the GEM standard, including chip placement, solder paste, oven and inspection equipment. The GEM standard has been used on such equipment for over 15 years, although it is often passed over in favor of other widely used Industry 4.0 options.

Photovoltaic

In 2008, the photovoltaic industry officially decided to adopt the SECS/GEM standard and submitted a proposal for a new SEMI standard, ballot 4557, as a new PV industry standard. Even prior to adopting the GEM standard, several photovoltaic equipment suppliers were already capable of supporting it. The new standard is called PV02, "Guide for PV Equipment Communication Interfaces (PVECI)", and defines a framework that utilizes the SEMI E37 (HSMS) and SEMI E5 (SECS-II) standards, as well as subsets of SEMI E30 (GEM), SEMI E148 (NTP based time synchronization) and SEMI E10 (Definition and Measurement of Equipment Reliability and Availability).

References SEMI International Standards External links SEMI - Semiconductor Equipment and Materials International Semiconductor device fabrication Technology trade associations
SECS/GEM
[ "Materials_science" ]
957
[ "Semiconductor device fabrication", "Microtechnology" ]
22,508,780
https://en.wikipedia.org/wiki/Alternative%20natural%20materials
Alternative natural materials are natural materials like rock or adobe that are not as commonly used as materials such as wood or iron. Alternative natural materials have many practical uses in areas such as sustainable architecture and engineering. The main purpose of using such materials is to minimize the negative effects that built environments can have on the planet, while increasing the efficiency and adaptability of the structures. History Alternative natural materials have existed for quite some time but often in very basic forms, or only as ingredients to a particular material. For example, earth used as a building material for walls of houses has existed for thousands of years. Much more recently, in the 1920s, the United States government promoted rammed earth as a fireproof construction method for building farmhouses. Another more common example is adobe. Adobe homes are prominent in the southwestern U.S. and several Spanish-speaking countries. Straw bale construction is a more modern concept, but there exists evidence that straw was used to make homes in African prairies as far back as the Paleolithic times. Alternative natural materials, specifically their applications, have only recently made their way into more common use. The modern problems of global warming and climate change shifted more of a focus onto the materials and methods used to build our cityscape and homes. As environmentally conscious decisions became commonplace, the use of alternative natural materials instead of typical natural materials or man-made materials that rely heavily on natural resources became prominent. Structural materials Rock Rocks have two characteristics: good thermal mass and thermal insulation. The temperature in a house built from rock stays relatively constant, thus requiring less air conditioning and other cooling systems. Types of rocks that can be employed are reject stone (pieces of stone that are not able to be used for another task), limestone, and flagstone. Bamboo In Asian countries, bamboo is used for structures like bridges and homes. Bamboo is surprisingly strong and flexible and grows incredibly fast, making it an abundant material. Although it can be difficult to join corners together, bamboo's material strength makes up for the hardships that can be encountered while building with it. Rammed earth Rammed earth is a very abundant material that can be used in place of concrete and brick. Soil is packed tightly into wall molds where it is rammed together and hardened to form a durable wall packing made of nothing more than dirt, stones, and sticks. Rammed earth also provides thermal mass, resulting in energy savings. In addition, it is very weatherproof and durable enough that it was used in the Great Wall of China. Earth-sheltered Earth sheltering is a unique building technique in which buildings are completely constructed by some form of earth on at least one side, whether it be a grass roof, clay walls, or both. This unique system usually includes plenty of windows because of the difficulty involved with using too much electricity in such a house. This adds to the energy efficiency of the house by reducing lighting costs. Insulation materials Straw Straw bales can be used as a basis for walls instead of drywall. Straw provides excellent insulation and fire resistance in a traditional post-and-beam structure, where a wood frame supports the house. 
These straw walls are about 75% more energy efficient than standard drywall and because no oxygen can get through the walls, fire cannot spread and there is no chance of combustion. Cordwood Cordwood is a combination of small remnants of firewood and other lumber that would otherwise go to waste. These small blocks of wood can be put together easily to make a structure that, like stone, has insulation as well as thermal mass. Cordwood provides the rustic look of log cabins without the use of tons of lumber. An entire building can be constructed with just cordwood, or stones can be used to fill in the walls. Cork Cork is suitable as thermal insulation, as it is characterized by lightness, elasticity, impermeability, and fire resistance. In construction, cork can be applied in various construction elements like floors, walls, roofs, and lofts to reduce the need for heating or cooling and to enhance energy efficiency. Adobe Adobe is an age-old technique that is cheap, easy to obtain, and ideal for hot environments. A mixture of sand, clay, and water is poured into a mold and left in the sun to dry. When dried, it is exceptionally strong and heat-resistant. Adobe does not let much heat through to the inside of the structure, thus providing excellent insulation during the summer to reduce energy costs. Although this clay mixture provides excellent insulation from heat, it is not very waterproof and can be dangerous in earthquake prone areas due to its tendency to crack easily. Sawdust Sawdust can be combined with clay or cement mixtures and used for walls. Such walls are very sturdy and the method effectively recycles any trees needing excavation from the building area. Depending what type of sawdust is used (hardwood is best) the wood chips in the walls absorb moisture and help prevent cracking during freeze and thaw cycles. Sawdust may be combined with water and frozen to produce a material commonly known as pykrete, which is strong, and less prone to melting than regular ice. Papercrete Papercrete is a new material that serves as a good substitute for concrete. Papercrete is shredded paper, sand, and cement mixed together to form a very durable brick-like material. Buildings utilizing papercrete are well-insulated and resistant to termites and fire. Papercrete is very cheap as it usually only costs about $0.35 per square foot. Hempcrete Hempcrete, also known as hemplime, is a sustainable biocomposite composed of hemp hurds mixed with lime, sand, or pozzolans material used in construction and insulation. The material offers advantages such as ease of use, insulation, and moisture regulation without the brittleness of traditional concrete. However, it exhibits low mechanical performance and is not suitable for load-bearing structures. It has good thermal and acoustic insulation properties, making it suitable for (non-load bearing) walls, finishing plaster, and insulation. It also acts as a carbon sink. Hempcrete gained popularity in France since the 1990s, and is used in Canada for various construction purposes, such as indoor temperature control, prefabricated panels, and diverse insulation needs with different density mixtures. Examples Although alternative building materials are a newer concept, some buildings have already employed these materials, as well as other tactics, in pursuit of greater sustainability. One such example is the School of Art, Media, and Design located in Singapore. This school has a roof made completely of grass (an example of Earth-sheltering). 
This allows the use of less concrete and other materials for the roof, and the building also includes many windows to utilize natural lighting. See also Green building Green building and wood Green roof Hemp as a building material Natural building Sustainable architecture Sustainable landscaping Sustainable landscape architecture Sustainable gardening References Natural Sustainable architecture
Alternative natural materials
[ "Physics", "Engineering", "Environmental_science" ]
1,403
[ "Sustainable architecture", "Natural materials", "Building engineering", "Construction", "Materials", "Building materials", "Environmental social science", "Matter", "Architecture" ]
22,509,369
https://en.wikipedia.org/wiki/Iron%20phosphide
Iron phosphide is a chemical compound of iron and phosphorus, with a formula of FeP. Its physical appearance is grey needles. Manufacturing of iron phosphide takes place at elevated temperatures, where the elements combine directly. Iron phosphide reacts with moisture and acids, producing phosphine (PH3), a toxic and pyrophoric gas. Iron phosphide is a good electrical and heat conductor. Below a Néel temperature of about 119 K, FeP takes on a helimagnetic structure. References Phosphorus compounds Iron compounds Phosphides Semiconductor materials
Iron phosphide
[ "Chemistry" ]
124
[ "Semiconductor materials" ]
22,510,359
https://en.wikipedia.org/wiki/Symmetric%20function
In mathematics, a function of n variables is symmetric if its value is the same no matter the order of its arguments. For example, a function f(x1, x2) of two arguments is a symmetric function if and only if f(x1, x2) = f(x2, x1) for all x1 and x2 such that (x1, x2) and (x2, x1) are in the domain of f. The most commonly encountered symmetric functions are polynomial functions, which are given by the symmetric polynomials. A related notion is alternating polynomials, which change sign under an interchange of variables. Aside from polynomial functions, tensors that act as functions of several vectors can be symmetric, and in fact the space of symmetric k-tensors on a vector space V is isomorphic to the space of homogeneous polynomials of degree k on V. Symmetric functions should not be confused with even and odd functions, which have a different sort of symmetry.

Symmetrization

Given any function f in n variables with values in an abelian group, a symmetric function can be constructed by summing values of f over all permutations of the arguments. Similarly, an anti-symmetric function can be constructed by summing over even permutations and subtracting the sum over odd permutations. These operations are of course not invertible, and could well result in a function that is identically zero for nontrivial functions f. The only general case where f can be recovered if both its symmetrization and antisymmetrization are known is when n = 2 and the abelian group admits a division by 2 (inverse of doubling); then f is equal to half the sum of its symmetrization and its antisymmetrization.

Examples

Consider the real function f(x1, x2, x3). By definition, a symmetric function with n variables has the property that f(x1, x2, ..., xn) = f(x2, x1, ..., xn) = f(x3, x1, x2, ..., xn), and so on. In general, the function remains the same for every permutation of its variables. This means that, in this case, f(x1, x2, x3) = f(x2, x1, x3) = f(x3, x2, x1), and so on, for all permutations of x1, x2, x3.

Consider the function f(x, y) = x² + y². If x and y are interchanged the function becomes f(y, x) = y² + x², which yields exactly the same results as the original f(x, y).

Consider now the function f(x, y) = x − y. If x and y are interchanged, the function becomes f(y, x) = y − x. This function is not the same as the original if x ≠ y, which makes it non-symmetric.

Applications

U-statistics

In statistics, an n-sample statistic (a function in n variables) that is obtained by bootstrapping symmetrization of a k-sample statistic, yielding a symmetric function in n variables, is called a U-statistic. Examples include the sample mean and sample variance.

See also

References
F. N. David, M. G. Kendall & D. E. Barton (1966) Symmetric Function and Allied Tables, Cambridge University Press.
Joseph P. S. Kung, Gian-Carlo Rota, & Catherine H. Yan (2009) Combinatorics: The Rota Way, §5.1 Symmetric functions, pp 222–5, Cambridge University Press.

Combinatorics Properties of binary operations
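The symmetrization and antisymmetrization constructions described above can be carried out mechanically by summing over permutations. A small sketch for real-valued functions (where division by 2 is available, so a two-variable function can be recovered from its two parts):

```python
from itertools import permutations

# Illustrative sketch: symmetrize and antisymmetrize a function of n real
# variables by summing its values over all permutations of the arguments.

def parity(perm):
    """Sign of a permutation given as a tuple of indices (+1 even, -1 odd)."""
    perm = list(perm)
    sign = 1
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            sign = -sign
    return sign

def symmetrize(f, n):
    return lambda *x: sum(f(*(x[i] for i in p)) for p in permutations(range(n)))

def antisymmetrize(f, n):
    return lambda *x: sum(parity(p) * f(*(x[i] for i in p)) for p in permutations(range(n)))

# Example: f(x, y) = x**2 * y is neither symmetric nor antisymmetric.
f = lambda x, y: x**2 * y
f_sym = symmetrize(f, 2)
f_anti = antisymmetrize(f, 2)

x, y = 3.0, 5.0
print(f_sym(x, y), f_sym(y, x))                       # equal: symmetric part
print(f_anti(x, y), -f_anti(y, x))                    # equal: sign flips under swap
print(f(x, y), 0.5 * (f_sym(x, y) + f_anti(x, y)))    # identical: f recovered
```

These are the summed (not averaged) versions, matching the construction above, so for n = 2 the original function is recovered as half the sum of the two parts.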
Symmetric function
[ "Physics", "Mathematics" ]
556
[ "Symmetry", "Discrete mathematics", "Combinatorics", "Symmetric functions", "Algebra" ]
22,513,078
https://en.wikipedia.org/wiki/Snf3
Snf3 is a protein which regulates glucose uptake in yeast. It senses glucose in the environment with high affinity. Introduction Glucose sensing and signaling in budding yeast is similar to the mammalian system in many ways. However, there are also significant differences. Mammalian cells regulate their glucose uptake via hormones (i.e. insulin and glucagon) or intermediary metabolites. In contrast, yeast as a unicellular organism does not depend on hormones but on nutrients in the medium. The presence of glucose induces a conformational change in the membrane proteins Snf3/Rgt2 or Gpr1, and regulates expression of genes involved in glucose metabolism. Homology and function Snf3 is homologous to multiple sugar transporters, it shares high similarity to the glucose transporters of rat brain cells and human HepG2 hepatoma cells, as well as to the arabinose and xylose transporters (AraE and XylE) of Escherichia coli. Based on this homology and on genetic studies, Snf3 was initially thought to be a high affinity glucose transporter. Later, it was found that Snf3 is not a glucose transporter, but rather a high affinity glucose sensor. It senses glucose at low concentrations and regulates transcription of the HXT genes, which encode for glucose transporters. If glucose is absent Snf3 is quiescent and transcription of the HXT genes is inhibited by a repressing complex. The complex consisting of several subunits such as Rgt1, Mth1/Std1, Cyc8 and Tup1 binds to the promoters of the HXT genes, thereby blocking their transcription. Snf3 is able to bind even low amounts of glucose due to its high affinity. The induction of Snf3 by glucose leads to the activation of YckI, a yeast casein kinase. This is followed by the recruitment of Mth1 and Std1 to the C-terminus of Snf3 which facilitates the phosphorylation of the two proteins by YckI. Phosphorylated Mth1 and Std1 are subsequently tagged for proteasome dependent degradation by SCFGrrl, an E3 ubiquitin ligase. Therefore, the inhibitory complex misses two of its key components and cannot be assembled. Thus, repression of the HXT genes is abolished, leading to the expression of the glucose transporters and subsequently glucose import. Structure Snf3 is a plasma membrane protein in yeasts that consists of 12 (2x6) transmembrane domains, like the homologous glucose transporters. Its structure is distinct from the homologous transporters in particular by a long C-terminal tail which is predicted to reside in the cytoplasm. The long C-terminal tail plays an important role in glucose signaling and is probably the signaling domain itself. A soluble version of the C-terminal tail alone is sufficient to induce glucose transport. All glucose transporters including Snf3 contain an arginine residue situated in a cytoplasmic loop preceding the fifth transmembrane domain. If this position is mutated, Snf3 adopts a state of constant glucose induction irrespective of whether there are nutrients present or not; this suggests an involvement in the glucose sensing process. Regulation The regulation of Snf3 in S. cerevisiae and its downstream events are still poorly understood, but it seems clear that a second glucose sensor Rgt2 influences Snf3 and vice versa. Furthermore, it is unclear whether these two proteins sense the glucose concentration on the outside or inside the cell. Snf3 and Rgt2 influence directly or indirectly several Hxt-transporters which are responsible for the glucose uptake. 
Low extracellular glucose concentrations are sensed by the Snf3 protein, which probably leads to the expression of genes for high-affinity glucose transporters such as Hxt2, while Rgt2 senses high glucose concentrations and leads to the expression of low-affinity glucose transporters, such as Hxt1. Although the downstream pathway is poorly understood, it seems that Snf3 and Rgt2 transmit a signal directly or indirectly to Grr1, the DNA binding protein Rgt1, and the two cofactors Ssn6 and Tup1. Also needed for the transcription are the two nuclear proteins Mth1 and Std1.

References
Gancedo MJ (2008). The early steps of glucose signaling in yeast. FEMS Microbiol Rev 32, 673-704.
Kruckeberg AL, Walsh MC, Van Dam K (1998). How do yeast cells sense glucose? BioEssays 20, 972-976.
Schneper L, Düvel K, Broach JR (2004). Sense and sensibility: nutritional response and signal integration in yeast. Current Opinion in Microbiology 7, 624-630.

Proteins Metabolism Receptors Membrane proteins
Snf3
[ "Chemistry", "Biology" ]
1,032
[ "Biomolecules by chemical classification", "Protein classification", "Signal transduction", "Receptors", "Membrane proteins", "Cellular processes", "Molecular biology", "Biochemistry", "Proteins", "Metabolism" ]
22,515,062
https://en.wikipedia.org/wiki/Railway%20stamp
In philately a railway stamp is a stamp issued to pay the cost of the conveyance of a letter or parcel by rail. A wide variety of railway stamps have been issued by different countries and by private and state railways. Railway stamps of an unofficial or semi-official type are considered cinderella stamps. The first railway stamp was issued in England in 1846 for parcels and Belgium has issued railway stamps since 1879. From 1891 British mainline railway companies issued railway letter stamps for the conveyance of letters by rail, although that service has now ceased apart from on some small tourist lines. Railway stamps of Denmark One of the countries that issued a lot of different railway stamps was Denmark. They were not only issued by Danske Statsbaner (Danish State Railways), but also by many local railway companies like Gribskovbanen (GDS), Hads-Ning Herreders Jernbane (HHJ) and Odsherreds Jernbane (OHJ). See also Parcel stamp Turner Collection of Railway Letter Stamps References Further reading (Reprint, 1983 by Tim Clutterbuck & Co.) External links Australia and New Zealand Railway Parcel Stamps. Talyllyn Railway, a Welsh private railway letter service. Cleator & Workington Junction Railway Postage Stamp. The Railway Philatelic Group Railway Newspaper and Parcel Stamps (1899) Archived here. Philatelic terminology Cinderella stamps Rail freight transport Topical postage stamps Transport in culture
Railway stamp
[ "Physics" ]
292
[ "Physical systems", "Transport", "Transport in culture" ]
1,924,637
https://en.wikipedia.org/wiki/Shielding%20gas
Shielding gases are inert or semi-inert gases that are commonly used in several welding processes, most notably gas metal arc welding and gas tungsten arc welding (GMAW and GTAW, more popularly known as MIG (Metal Inert Gas) and TIG (Tungsten Inert Gas), respectively). Their purpose is to protect the weld area from oxygen, and water vapour. Depending on the materials being welded, these atmospheric gases can reduce the quality of the weld or make the welding more difficult. Other arc welding processes use alternative methods of protecting the weld from the atmosphere as well – shielded metal arc welding, for example, uses an electrode covered in a flux that produces carbon dioxide when consumed, a semi-inert gas that is an acceptable shielding gas for welding steel. Improper choice of a welding gas can lead to a porous and weak weld, or to excessive spatter; the latter, while not affecting the weld itself, causes loss of productivity due to the labor needed to remove the scattered drops. If used carelessly, shielding gasses can displace oxygen, causing hypoxia and potentially death. Common shielding gases Shielding gases fall into two categories—inert or semi-inert. Only two of the noble gases, helium and argon, are cost effective enough to be used in welding. These inert gases are used in gas tungsten arc welding, and also in gas metal arc welding for the welding of non-ferrous metals. Semi-inert shielding gases, or active shield gases, include carbon dioxide, oxygen, nitrogen, and hydrogen. These active gases are used with GMAW on ferrous metals. Most of these gases, in large quantities, would damage the weld, but when used in small, controlled quantities, can improve weld characteristics. Properties The important properties of shielding gases are their thermal conductivity and heat transfer properties, their density relative to air, and the ease with which they undergo ionization. Gases heavier than air (e.g. argon) blanket the weld and require lower flow rates than gases lighter than air (e.g. helium). Heat transfer is important for heating the weld around the arc. Ionizability influences how easy the arc starts, and how high voltage is required. Shielding gases can be used pure, or as a blend of two or three gases. In laser welding, the shielding gas has an additional role, preventing formation of a cloud of plasma above the weld, absorbing significant fraction of the laser energy. This is important for CO2 lasers; Nd:YAG lasers show lower tendency to form such plasma. Helium plays this role best due to its high ionization potential; the gas can absorb high amount of energy before becoming ionized. Argon is the most common shielding gas, widely used as the base for the more specialized gas mixes. Carbon dioxide is the least expensive shielding gas, providing deep penetration, however it negatively affects the stability of the arc and enhances the molten metal's tendency to create droplets (spatter). Carbon dioxide in concentration of 1-2% is commonly used in the mix with argon to reduce the surface tension of the molten metal. Another common blend is 25% carbon dioxide and 75% argon for GMAW. Helium is lighter than air; larger flow rates are required. It is an inert gas, not reacting with the molten metals. Its thermal conductivity is high. It is not easy to ionize, requiring higher voltage to start the arc. Due to higher ionization potential it produces hotter arc at higher voltage, provides wide deep bead; this is an advantage for aluminium, magnesium, and copper alloys. 
Other gases are often added. Blends of helium with addition of 5–10% of argon and 2–5% of carbon dioxide ("tri-mix") can be used for welding of stainless steel. Used also for aluminium and other non-ferrous metals, especially for thicker welds. In comparison with argon, helium provides more energy-rich but less stable arc. Helium and carbon dioxide were the first shielding gases used, since the beginning of World War 2. Helium is used as a shield gas in laser welding for carbon dioxide lasers. Helium is more expensive than argon and requires higher flow rates, so despite its advantages it may not be a cost-effective choice for higher-volume production. Pure helium is not used for steel, as it causes an erratic arc and encourages spatter. Oxygen is used in small amounts as an addition to other gases; typically as 2–5% addition to argon. It enhances arc stability and reduces the surface tension of the molten metal, increasing wetting of the solid metal. It is used for spray transfer welding of mild carbon steels, low alloy and stainless steels. Its presence increases the amount of slag. Argon-oxygen (Ar-O2) blends are often being replaced with argon-carbon dioxide ones. Argon-carbon dioxide-oxygen blends are also used. Oxygen causes oxidation of the weld, so it is not suitable for welding aluminium, magnesium, copper, and some exotic metals. Increased oxygen makes the shielding gas oxidize the electrode, which can lead to porosity in the deposit if the electrode does not contain sufficient deoxidizers. Excessive oxygen, especially when used in application for which it is not prescribed, can lead to brittleness in the heat affected zone. Argon-oxygen blends with 1–2% oxygen are used for austenitic stainless steel where argon-CO2 can not be used due to required low content of carbon in the weld; the weld has a tough oxide coating and may require cleaning. Hydrogen is used for welding of nickel and some stainless steels, especially thicker pieces. It improves the molten metal fluidity, and enhances cleanness of the surface. It is added to argon in amounts typically under 10%. It can be added to argon-carbon dioxide blends to counteract the oxidizing effects of carbon dioxide. Its addition narrows the arc and increases the arc temperature, leading to better weld penetration. In higher concentrations (up to 25% hydrogen), it may be used for welding conductive materials such as copper. However, it should not be used on steel, aluminum or magnesium because it can cause porosity and hydrogen embrittlement; its application is usually limited only to some stainless steels. Nitric oxide addition serves to reduce production of ozone. It can also stabilize the arc when welding aluminium and high-alloyed stainless steel. Other gases can be used for special applications, pure or as blend additives; e.g. sulfur hexafluoride or dichlorodifluoromethane. Sulfur hexafluoride can be added to shield gas for aluminium welding to bind hydrogen in the weld area to reduce weld porosity. Dichlorodifluoromethane with argon can be used for protective atmosphere for melting of aluminium-lithium alloys. It reduces the content of hydrogen in the aluminium weld, preventing the associated porosity. This gas, however, is being used less because it has a strong ozone depletion potential. Common mixes Argon-carbon dioxide C-50 (50% argon/50% CO2) is used for short arc welding of pipes, C-40 (60% argon/40% CO2) is used for some flux-cored arc welding cases. Better weld penetration than C-25. 
C-25 (75% argon/25% CO2) is commonly used by hobbyists and in small-scale production. Limited to short circuit and globular transfer welding. Common for short-circuit gas metal arc welding of low carbon steel. C-20 (80% argon/20% CO2) is used for short-circuiting and spray transfer of carbon steel. C-15 (85% argon/15% CO2) is common in production environment for carbon and low alloy steels. Has lower spatter and good weld penetration, suitable for thicker plates and steel significantly covered with mill scale. Suitable for short circuit, globular, pulse and spray transfer welding. Maximum productivity for thin metals in short-circuiting mode; has lower tendency to burn through than higher-CO2 mixes and has suitably high deposition rates. C-10 (90% argon/10% CO2) is common in production environment. Has low spatter and good weld penetration, though lower than C-15; suitable for many steels. Same applications as 85/15 mix. Sufficient for ferritic stainless steels. C-5 (95% argon/5% CO2) is used for pulse spray transfer and short-circuiting of low alloy steel. Has better tolerance for mill scale and better puddle control than argon-oxygen, though less than C-10. Less heat than C-10. Sufficient for ferritic stainless steels. Similar performance to argon with 1% oxygen. O-5 (95% argon/5% oxygen) is the most common gas for general carbon steel welding. Higher oxygen content allows higher speed of welding. More than 5% oxygen makes the shielding gas oxidize the electrode, which can lead to porosity in the deposit if the electrode does not contain sufficient deoxidizers. O-2 (98% argon/2% oxygen) is used for spray arc on stainless steel, carbon steels, and low alloy steels. Better wetting than O-1. Weld is darker and more oxidized than with O-1. The addition of 2% oxygen encourages spray transfer, which is critical for spray-arc and pulsed spray-arc GMAW. O-1 (99% argon/1% oxygen) is used for stainless steels. Oxygen stabilizes the arc. Argon-helium A-25 (25% argon/75% helium) is used for nonferrous base when higher heat input and good weld appearance are needed. A-50 (50% argon/50% helium) is used for nonferrous metals thinner than 0.75 inch for high-speed mechanized welding. A-75 (75% argon/25% helium) is used for mechanized welding of thick aluminium. Reduces weld porosity in copper. Argon-hydrogen H-2 (98% argon/2% hydrogen) H-5 (95% argon/5% hydrogen) H-10 (80% argon/20% hydrogen) H-35 (65% argon/35% hydrogen) Others Argon with 25–35% helium and 1–2% CO2 provides high productivity and good welds on austenitic stainless steels. Can be used for joining stainless steel to carbon steel. Argon-CO2 with 1–2% hydrogen provides a reducing atmosphere that lowers amount of oxide on the weld surface, improves wetting and penetration. Good for austenitic stainless steels. Argon with 2–5% nitrogen and 2–5% CO2 in short-circuiting yields good weld shape and color and increases welding speed. For spray and pulsed spray transfer it is nearly equivalent to other mixes. When joining stainless to carbon steels in presence of nitrogen, care has to be taken to ensure the proper weld microstructure. Nitrogen increases arc stability and penetration and reduces distortion of the welded part. In duplex stainless steels assists in maintaining proper nitrogen content. 85–95% helium with 5–10% argon and 2–5% CO2 is an industry standard for short-circuit welding of carbon steel. 
Argon – carbon dioxide – oxygen Argon–helium–hydrogen Argon – helium – hydrogen – carbon dioxide Applications The applications of shielding gases are limited primarily by the cost of the gas, the cost of the equipment, and by the location of the welding. Some shielding gases, like argon, are expensive, limiting its use. The equipment used for the delivery of the gas is also an added cost, and as a result, processes like shielded metal arc welding, which require less expensive equipment, might be preferred in certain situations. Finally, because atmospheric movements can cause the dispersion of the shielding gas around the weld, welding processes that require shielding gases are often only done indoors, where the environment is stable and atmospheric gases can be effectively prevented from entering the weld area. The desirable rate of gas flow depends primarily on weld geometry, speed, current, the type of gas, and the metal transfer mode being utilized. Welding flat surfaces requires higher flow than welding grooved materials, since the gas is dispersed more quickly. Faster welding speeds, in general, mean that more gas needs to be supplied to provide adequate coverage. Additionally, higher current requires greater flow, and generally, more helium is required to provide adequate coverage than argon. Perhaps most importantly, the four primary variations of GMAW have differing shielding gas flow requirements—for the small weld pools of the short circuiting and pulsed spray modes, about 10 L/min (20 ft3/h) is generally suitable, while for globular transfer, around 15 L/min (30 ft3/h) is preferred. The spray transfer variation normally requires more because of its higher heat input and thus larger weld pool; along the lines of 20–25 L/min (40–50 ft3/h). See also Forming gas External links Shielding Gas Handbook References Arc welding Welding Industrial gases
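The ballpark flow rates quoted in the Applications section vary mainly with the metal transfer mode. The snippet below simply encodes those quoted figures as a lookup; the numbers are the rough guidance given above, not settings from any welding procedure specification.

```python
# Illustrative sketch: ballpark GMAW shielding-gas flow rates by transfer mode,
# using the figures quoted in the Applications section above. Real settings
# also depend on joint geometry, travel speed, current, and the gas used.

TYPICAL_FLOW_L_PER_MIN = {
    "short-circuit": 10.0,   # small weld pool
    "pulsed-spray": 10.0,    # small weld pool
    "globular": 15.0,
    "spray": 22.5,           # midpoint of the 20-25 L/min range
}

def suggested_flow(transfer_mode: str) -> float:
    """Return a starting-point flow rate in L/min for a GMAW transfer mode."""
    try:
        return TYPICAL_FLOW_L_PER_MIN[transfer_mode]
    except KeyError:
        raise ValueError(f"unknown transfer mode: {transfer_mode!r}") from None

for mode in TYPICAL_FLOW_L_PER_MIN:
    print(f"{mode:15s} ~{suggested_flow(mode):.0f} L/min")
```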
Shielding gas
[ "Chemistry", "Engineering" ]
2,786
[ "Chemical process engineering", "Industrial gases", "Mechanical engineering", "Welding" ]
1,924,894
https://en.wikipedia.org/wiki/AND%20gate
The AND gate is a basic digital logic gate that implements the logical conjunction (∧) from mathematical logic. AND gates behave according to their truth table. A HIGH output (1) results only if all the inputs to the AND gate are HIGH (1). If any of the inputs to the AND gate are not HIGH, a LOW output (0) results. The function can be extended to any number of inputs by chaining multiple gates together.

Symbols

There are three symbols for AND gates: the American (ANSI or 'military') symbol and the IEC ('European' or 'rectangular') symbol, as well as the deprecated DIN symbol. Additional inputs can be added as needed. For more information see the Logic gate symbols article. It can also be denoted by the symbol "^" or "&". The AND gate with inputs A and B and output C implements the logical expression C = A·B. This expression also may be denoted as C = AB or C = A ∧ B. As of Unicode 16.0.0, the AND gate is also encoded in the Symbols for Legacy Computing Supplement block.

Implementations

In logic families like TTL, NMOS, PMOS and CMOS, an AND gate is built from a NAND gate followed by an inverter. In the CMOS implementation, transistors T1–T4 realize the NAND gate and transistors T5 and T6 the inverter. The need for an inverter makes AND gates less efficient than NAND gates. AND gates can also be made from discrete components and are readily available as integrated circuits in several different logic families.

Analytical representation

f(a, b) = a·b is the analytical representation of the AND gate.

Alternatives

If no specific AND gates are available, one can be made from NAND or NOR gates, because NAND and NOR gates are "universal gates", meaning that they can be used to make all the others.

AND gates with multiple inputs

AND gates with multiple inputs are designated with the same symbol, with more lines leading in. While direct implementations with more than four inputs are possible in logic families like CMOS, these are inefficient. More efficient implementations use a cascade of NAND and NOR gates rather than a cascade of AND gates.

See also OR gate NOT gate NAND gate NOR gate XOR gate XNOR gate IMPLY gate Boolean algebra Logic gate References Logic gates
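In software the gate is simply the Boolean product of its inputs, which also gives the multi-input extension described above. A minimal sketch:

```python
# Illustrative sketch: an AND gate as the Boolean product of its inputs,
# extended to any number of inputs, plus its two-input truth table.

def and_gate(*inputs: int) -> int:
    """Return 1 only if every input is 1 (HIGH); otherwise return 0 (LOW)."""
    out = 1
    for bit in inputs:
        out &= bit
    return out

# Two-input truth table
print(" A B | A AND B")
for a in (0, 1):
    for b in (0, 1):
        print(f" {a} {b} |    {and_gate(a, b)}")

# The same function extends to more inputs, like chained 2-input gates:
assert and_gate(1, 1, 1, 0) == and_gate(and_gate(1, 1), and_gate(1, 0))
```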
AND gate
[ "Mathematics", "Engineering" ]
493
[ "Boolean algebra", "Digital electronics", "Mathematical logic", "Fields of abstract algebra", "Electronic engineering" ]
1,924,901
https://en.wikipedia.org/wiki/OR%20gate
The OR gate is a digital logic gate that implements logical disjunction. The OR gate outputs "true" if any of its inputs is "true"; otherwise it outputs "false". The input and output states are normally represented by different voltage levels. Description Any OR gate can be constructed with two or more inputs. It outputs a 1 if any of these inputs are 1, or outputs a 0 only if all inputs are 0. The inputs and outputs are binary digits ("bits") which have two possible logical states. In addition to 1 and 0, these states may be called true and false, high and low, active and inactive, or other such pairs of symbols. Thus it performs a logical disjunction (∨) from mathematical logic. The gate can be represented with the plus sign (+) because it can be used for logical addition. Equivalently, an OR gate finds the maximum between two binary digits, just as the AND gate finds the minimum. Together with the AND gate and the NOT gate, the OR gate is one of three basic logic gates from which any Boolean circuit may be constructed. All other logic gates may be made from these three gates; any function in binary mathematics may be implemented with them. It is sometimes called the inclusive OR gate to distinguish it from XOR, the exclusive OR gate. The behavior of OR is the same as XOR except in the case of a 1 for both inputs. In situations where this never arises (for example, in a full-adder) the two types of gates are interchangeable. This substitution is convenient when a circuit is being implemented using simple integrated circuit chips which contain only one gate type per chip. Symbols There are two logic gate symbols currently representing the OR gate: the American (ANSI or 'military') symbol and the IEC ('European' or 'rectangular') symbol. The DIN symbol is deprecated. The "≥1" on the IEC symbol indicates that the output is activated by at least one active input. As of Unicode 16.0.0, the OR gate is also encoded in the Symbols for Legacy Computing Supplement block as . Hardware description and pinout OR gates are basic logic gates, and are available in TTL and CMOS ICs logic families. The standard 4000 series CMOS IC is the 4071, which includes four independent two-input OR gates. The TTL device is the 7432. There are many offshoots of the original 7432 OR gate, all having the same pinout but different internal architecture, allowing them to operate in different voltage ranges and/or at higher speeds. In addition to the standard 2-input OR gate, 3- and 4-input OR gates are also available. In the CMOS series, these are: 4075: triple 3-input OR gate 4072: dual 4-input OR gate Variations include: 74LS32: quad 2-input OR gate (low power Schottky version) 74HC32: quad 2-input OR gate (high speed CMOS version) - has lower current consumption/wider voltage range 74AC32: quad 2-input OR gate (advanced CMOS version) - similar to 74HC32, but with significantly faster switching speeds and stronger drive 74LVC32: low voltage CMOS version of the same. Implementations Analytical representation is the analytical representation of OR gate: OR gates with many inputs OR gates with multiple inputs are designated with the same symbol, with more lines leading in. While direct implementations with more than three inputs are possible in logic families like CMOS, these are inefficient. More efficient implementations use a cascade of NOR and NAND gates, as shown in the picture below. 
Alternatives If no specific OR gates are available, one can be made from NAND or NOR gates in the configuration shown in the image below. Any logic gate can be made from a combination of NAND or NOR gates. Wired-OR With active low open collector logic outputs, as used for control signals in many circuits, an OR function can be produced by wiring together several outputs. This arrangement is called a wired OR. This implementation of an OR function typically is also found in integrated circuits of N or P-type only transistor processes. See also AND gate NOT gate NAND gate NOR gate XOR gate XNOR gate Boolean algebra Logic gate References Logic gates Boolean algebra Digital electronics
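A corresponding sketch for the OR gate, including the construction of OR from NAND gates alone that the "universal gate" remark above refers to:

```python
# Illustrative sketch: an OR gate as the maximum (Boolean sum) of its inputs,
# and the same function built only from NAND gates (NAND is universal).

def or_gate(*inputs: int) -> int:
    """Return 1 if any input is 1; otherwise 0."""
    return max(inputs)

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def or_from_nand(a: int, b: int) -> int:
    # OR(a, b) = NAND(NOT a, NOT b), with NOT x = NAND(x, x)
    return nand(nand(a, a), nand(b, b))

for a in (0, 1):
    for b in (0, 1):
        assert or_gate(a, b) == or_from_nand(a, b)
        print(f"{a} OR {b} = {or_gate(a, b)}")
```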
OR gate
[ "Mathematics", "Engineering" ]
897
[ "Boolean algebra", "Digital electronics", "Mathematical logic", "Fields of abstract algebra", "Electronic engineering" ]
1,925,126
https://en.wikipedia.org/wiki/Amine%20gas%20treating
Amine gas treating, also known as amine scrubbing, gas sweetening and acid gas removal, refers to a group of processes that use aqueous solutions of various alkylamines (commonly referred to simply as amines) to remove hydrogen sulfide (H2S) and carbon dioxide (CO2) from gases. It is a common unit process used in refineries, and is also used in petrochemical plants, natural gas processing plants and other industries. Processes within oil refineries or chemical processing plants that remove hydrogen sulfide are referred to as "sweetening" processes because the odor of the processed products is improved by the absence of "sour" hydrogen sulfide. An alternative to the use of amines involves membrane technology. However, membrane separation is less attractive due to the relatively high capital and operating costs as well as other technical factors.

Many different amines are used in gas treating:
Diethanolamine (DEA)
Monoethanolamine (MEA)
Methyldiethanolamine (MDEA)
Diisopropanolamine (DIPA)
Aminoethoxyethanol (Diglycolamine) (DGA)

The most commonly used amines in industrial plants are the alkanolamines DEA, MEA, and MDEA. These amines are also used in many oil refineries to remove sour gases from liquid hydrocarbons such as liquified petroleum gas (LPG).

Description of a typical amine treater

Gases containing H2S, CO2, or both are commonly referred to as sour gases or acid gases in the hydrocarbon processing industries. The chemistry involved in the amine treating of such gases varies somewhat with the particular amine being used. For one of the more common amines, monoethanolamine (MEA), denoted as RNH2, the acid-base reactions involving the protonation of the amine electron pair to form a positively charged ammonium group (RNH3+) can be expressed as:

RNH2 + H2S ⇌ RNH3+ + HS−
RNH2 + H2O + CO2 ⇌ RNH3+ + HCO3−

The resulting dissociated and ionized species, being more soluble in solution, are trapped, or scrubbed, by the amine solution and so easily removed from the gas phase. At the outlet of the amine scrubber, the sweetened gas is thus depleted in H2S and CO2.

A typical amine gas treating process (the Girbotol process, as shown in the flow diagram below) includes an absorber unit and a regenerator unit as well as accessory equipment. In the absorber, the downflowing amine solution absorbs H2S and CO2 from the upflowing sour gas to produce a sweetened gas stream (i.e., a gas free of hydrogen sulfide and carbon dioxide) as a product and an amine solution rich in the absorbed acid gases. The resultant "rich" amine is then routed into the regenerator (a stripper with a reboiler) to produce regenerated or "lean" amine that is recycled for reuse in the absorber. The stripped overhead gas from the regenerator is concentrated H2S and CO2.

Alternative processes

Alternative stripper configurations include matrix, internal exchange, flashing feed, and multi-pressure with split feed. Many of these configurations offer more energy efficiency for specific solvents or operating conditions. Vacuum operation favors solvents with low heats of absorption while operation at normal pressure favors solvents with high heats of absorption. Solvents with high heats of absorption require less energy for stripping from temperature swing at fixed capacity. The matrix stripper recovers 40% of the CO2 at a higher pressure and does not have the inefficiencies associated with a multi-pressure stripper. Energy and costs are reduced since the reboiler duty is slightly less than that of a normal pressure stripper.
An internal exchange stripper has a smaller ratio of water vapor to CO2 in the overhead stream, and therefore less steam is required. The multi-pressure configuration with split feed reduces the flow into the bottom section, which also reduces the equivalent work. Flashing feed requires less heat input because it uses the latent heat of water vapor to help strip some of the CO2 in the rich stream entering the stripper at the bottom of the column. The multi-pressure configuration is more attractive for solvents with higher heats of absorption.

Amines

The amine concentration in the absorbent aqueous solution is an important parameter in the design and operation of an amine gas treating process. Depending on which one of the following four amines the unit was designed to use and what gases it was designed to remove, these are some typical amine concentrations, expressed as weight percent of pure amine in the aqueous solution:
Monoethanolamine: About 20 % for removing H2S and CO2, and about 32 % for removing only CO2.
Diethanolamine: About 20 to 25 % for removing H2S and CO2
Methyldiethanolamine: About 30 to 55 % for removing H2S and CO2
Diglycolamine: About 50 % for removing H2S and CO2

The choice of amine concentration in the circulating aqueous solution depends upon several factors and may be quite arbitrary. It is usually made simply on the basis of experience. The factors involved include whether the amine unit is treating raw natural gas or petroleum refinery by-product gases that contain relatively low concentrations of both H2S and CO2, or whether the unit is treating gases with a high percentage of CO2 such as the offgas from the steam reforming process used in ammonia production or the flue gases from power plants. Both H2S and CO2 are acid gases and hence corrosive to carbon steel. However, in an amine treating unit, CO2 is the stronger acid of the two. H2S forms a film of iron sulfide on the surface of the steel that acts to protect the steel. When treating gases with a high percentage of CO2, corrosion inhibitors are often used and that permits the use of higher concentrations of amine in the circulating solution.

Another factor involved in choosing an amine concentration is the relative solubility of H2S and CO2 in the selected amine. The choice of the type of amine will affect the required circulation rate of amine solution, the energy consumption for the regeneration and the ability to selectively remove either H2S alone or CO2 alone if desired. For more information about selecting the amine concentration, the reader is referred to Kohl and Nielsen's book.

MEA and DEA

MEA and DEA are primary and secondary amines. They are very reactive and can effectively remove a high volume of gas due to a high reaction rate. However, due to stoichiometry, the loading capacity is limited to 0.5 mol CO2 per mole of amine. MEA and DEA also require a large amount of energy to strip the CO2 during regeneration, which can be up to 70% of total operating costs. They are also more corrosive and chemically unstable compared to other amines.
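The stoichiometric limit quoted above (about 0.5 mol CO2 per mole of MEA or DEA) bounds how much CO2 a given solvent circulation can carry. The rough sketch below turns an assumed capture duty and assumed rich/lean loadings into a minimum circulation estimate; every numerical input is an illustrative assumption rather than design data.

```python
# Illustrative sketch: rough minimum MEA circulation rate for a given CO2
# absorption duty, using an assumed working loading below the ~0.5 mol CO2
# per mol amine stoichiometric limit noted above. All numbers are assumptions.

MW_CO2 = 44.01      # g/mol
MW_MEA = 61.08      # g/mol

co2_captured_kg_per_h = 1000.0        # CO2 to absorb (assumed duty)
rich_loading = 0.45                   # mol CO2 / mol MEA leaving absorber (assumed)
lean_loading = 0.20                   # mol CO2 / mol MEA entering absorber (assumed)
mea_wt_fraction = 0.30                # 30 wt% MEA solution (within the range above)
solution_density_kg_per_m3 = 1010.0   # assumed

co2_mol_per_h = co2_captured_kg_per_h * 1000.0 / MW_CO2
delta_loading = rich_loading - lean_loading            # mol CO2 picked up per mol MEA
mea_mol_per_h = co2_mol_per_h / delta_loading
solution_kg_per_h = mea_mol_per_h * MW_MEA / 1000.0 / mea_wt_fraction
solution_m3_per_h = solution_kg_per_h / solution_density_kg_per_m3

print(f"CO2 duty:             {co2_mol_per_h:10.0f} mol/h")
print(f"MEA required:         {mea_mol_per_h:10.0f} mol/h")
print(f"Solution circulation: {solution_m3_per_h:10.1f} m^3/h")
```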
Another sulfur-removing process is the WSA process which recovers sulfur in any form as concentrated sulfuric acid. In some plants, more than one amine absorber unit may share a common regenerator unit. The current emphasis on removing CO2 from the flue gases emitted by fossil fuel power plants has led to much interest in using amines for removing CO2 (see also: carbon capture and storage and conventional coal-fired power plant). In the specific case of the industrial synthesis of ammonia, for the steam reforming process of hydrocarbons to produce gaseous hydrogen, amine treating is one of the commonly used processes for removing excess carbon dioxide in the final purification of the gaseous hydrogen. In the biogas production it is sometimes necessary to remove carbon dioxide from the biogas to make it comparable with natural gas. The removal of the sometimes high content of hydrogen sulfide is necessary to prevent corrosion of metallic parts after burning the bio gas. Carbon capture and storage Amines are used to remove CO2 in various areas ranging from natural gas production to the food and beverage industry, and have been since 1930. There are multiple classifications of amines, each of which has different characteristics relevant to CO2 capture. For example, monoethanolamine (MEA) reacts strongly with acid gases like CO2 and has a fast reaction time and an ability to remove high percentages of CO2, even at the low CO2 concentrations. Typically, monoethanolamine (MEA) can capture 85% to 90% of the CO2 from the flue gas of a coal-fired plant, which is one of the most effective solvent to capture CO2. Challenges of carbon capture using amine include: Low pressure gas increases difficulty of transferring CO2 from the gas into amine Oxygen content of the gas can cause amine degradation and acid formation CO2 degradation of primary (and secondary) amines High energy consumption Very large facilities Finding a suitable location (enhanced oil recovery, deep saline aquifers, basaltic rocks...) to dispose of the removed CO2 The partial pressure is the driving force to transfer CO2 into the liquid phase. Under low pressure, this transfer is hard to achieve without increasing the reboilers' heat duty, which will result in higher costs. Primary and secondary amines, for example, MEA and DEA, will react with CO2 and form degradation products. O2 from the inlet gas will cause degradation as well. The degraded amine is no longer able to capture CO2, which decreases the overall carbon capture efficiency. Currently, a variety of amine mixtures are being synthesized and tested to achieve a more desirable set of overall properties for use in CO2 capture systems. One major focus is on lowering the energy required for solvent regeneration, which has a major impact on process costs. However, there are trade-offs to consider. For example, the energy required for regeneration is typically related to the driving forces for achieving high capture capacities. Thus, reducing the regeneration energy can lower the driving force and thereby increase the amount of solvent and size of absorber needed to capture a given amount of CO2, thus, increasing the capital cost. See also Ammonia production Hydrodesulfurization WSA Process Claus process Selexol Rectisol Amine Ionic liquids in carbon capture Solid sorbents for carbon capture References External links Description of Gas Sweetening Equipment and Operating Conditions Selecting Amines for Sweetening Units, Polasek, J. (Bryan Research & Engineering) and Bullin, J.A. 
(Texas A&M University), Gas Processors Association Regional Meeting, Sept. 1994. Natural Gas Supply Association Scroll down to Sulfur and Carbon Dioxide Removal Description of the classic book on gas treating by Acid gas control Biogas technology Carbon capture and storage Chemical processes Gas technologies Natural gas technology Oil refining
Amine gas treating
[ "Chemistry", "Engineering", "Biology" ]
2,315
[ "Biofuels technology", "Geoengineering", "Petroleum technology", "Chemical processes", "Natural gas technology", "Oil refining", "nan", "Chemical process engineering", "Carbon capture and storage", "Biogas technology" ]
1,926,015
https://en.wikipedia.org/wiki/Electron%20paramagnetic%20resonance
Electron paramagnetic resonance (EPR) or electron spin resonance (ESR) spectroscopy is a method for studying materials that have unpaired electrons. The basic concepts of EPR are analogous to those of nuclear magnetic resonance (NMR), but the spins excited are those of the electrons instead of the atomic nuclei. EPR spectroscopy is particularly useful for studying metal complexes and organic radicals. EPR was first observed at Kazan State University by Soviet physicist Yevgeny Zavoisky in 1944, and was developed independently at the same time by Brebis Bleaney at the University of Oxford. Theory Origin of an EPR signal Every electron has a magnetic moment and spin quantum number s = 1/2, with magnetic components ms = +1/2 or ms = −1/2. In the presence of an external magnetic field with strength B0, the electron's magnetic moment aligns itself either antiparallel (ms = +1/2) or parallel (ms = −1/2) to the field, each alignment having a specific energy due to the Zeeman effect: E = ms ge μB B0, where ge is the electron's so-called g-factor (see also the Landé g-factor), ge ≈ 2.0023 for the free electron, and μB is the Bohr magneton. Therefore, the separation between the lower and the upper state is ΔE = ge μB B0 for unpaired free electrons. This equation implies (since both ge and μB are constant) that the splitting of the energy levels is directly proportional to the magnetic field's strength, as shown in the diagram below. An unpaired electron can change its electron spin by either absorbing or emitting a photon of energy hν such that the resonance condition, hν = ΔE, is obeyed. This leads to the fundamental equation of EPR spectroscopy: hν = ge μB B0. Experimentally, this equation permits a large combination of frequency and magnetic field values, but the great majority of EPR measurements are made with microwaves in the 9000–10000 MHz (9–10 GHz) region, with fields corresponding to about 3500 G (0.35 T). Furthermore, EPR spectra can be generated by either varying the photon frequency incident on a sample while holding the magnetic field constant or doing the reverse. In practice, it is usually the frequency that is kept fixed. A collection of paramagnetic centers, such as free radicals, is exposed to microwaves at a fixed frequency. By increasing an external magnetic field, the gap between the ms = +1/2 and ms = −1/2 energy states is widened until it matches the energy of the microwaves, as represented by the double arrow in the diagram above. At this point the unpaired electrons can move between their two spin states. Since there typically are more electrons in the lower state, due to the Maxwell–Boltzmann distribution (see below), there is a net absorption of energy, and it is this absorption that is monitored and converted into a spectrum. The upper spectrum below is the simulated absorption for a system of free electrons in a varying magnetic field. The lower spectrum is the first derivative of the absorption spectrum. The latter is the most common way to record and publish continuous wave EPR spectra. For the microwave frequency of 9388.2 MHz, the predicted resonance occurs at a magnetic field of about B0 = hν / (ge μB) = 0.3350 T = 3350 G. Because of electron-nuclear mass differences, the magnetic moment of an electron is substantially larger than the corresponding quantity for any nucleus, so that a much higher electromagnetic frequency is needed to bring about a spin resonance with an electron than with a nucleus, at identical magnetic field strengths. For example, for the field of 3350 G shown above, spin resonance occurs near 9388.2 MHz for an electron compared to only about 14.3 MHz for 1H nuclei. 
(For NMR spectroscopy, the corresponding resonance equation is where and depend on the nucleus under study.) Field modulation As previously mentioned an EPR spectrum is usually directly measured as the first derivative of the absorption. This is accomplished by using field modulation. A small additional oscillating magnetic field is applied to the external magnetic field at a typical frequency of 100 kHz. By detecting the peak to peak amplitude the first derivative of the absorption is measured. By using phase sensitive detection only signals with the same modulation (100 kHz) are detected. This results in higher signal to noise ratios. Note field modulation is unique to continuous wave EPR measurements and spectra resulting from pulsed experiments are presented as absorption profiles. The same idea underlies the Pound-Drever-Hall technique for frequency locking of lasers to a high-finesse optical cavity. Maxwell–Boltzmann distribution In practice, EPR samples consist of collections of many paramagnetic species, and not single isolated paramagnetic centers. If the population of radicals is in thermodynamic equilibrium, its statistical distribution is described by the Boltzmann distribution: where is the number of paramagnetic centers occupying the upper energy state, is the Boltzmann constant, and is the thermodynamic temperature. At 298 K, X-band microwave frequencies ( ≈ 9.75 GHz) give ≈ 0.998, meaning that the upper energy level has a slightly smaller population than the lower one. Therefore, transitions from the lower to the higher level are more probable than the reverse, which is why there is a net absorption of energy. The sensitivity of the EPR method (i.e., the minimal number of detectable spins ) depends on the photon frequency according to where is a constant, is the sample's volume, is the unloaded quality factor of the microwave cavity (sample chamber), is the cavity filling coefficient, and is the microwave power in the spectrometer cavity. With and being constants, ~ , i.e., ~ , where ≈ 1.5. In practice, can change varying from 0.5 to 4.5 depending on spectrometer characteristics, resonance conditions, and sample size. A great sensitivity is therefore obtained with a low detection limit and a large number of spins. Therefore, the required parameters are: A high spectrometer frequency to minimize the Eq. 2. Common frequencies are discussed below A low temperature to decrease the number of spin at the high level of energy as shown in Eq. 1. This condition explains why spectra are often recorded on sample at the boiling point of liquid nitrogen or liquid helium. Spectral parameters In real systems, electrons are normally not solitary, but are associated with one or more atoms. There are several important consequences of this: An unpaired electron can gain or lose angular momentum, which can change the value of its g-factor, causing it to differ from . This is especially significant for chemical systems with transition-metal ions. Systems with multiple unpaired electrons experience electron–electron interactions that give rise to "fine" structure. This is realized as zero field splitting and exchange coupling, and can be large in magnitude. The magnetic moment of a nucleus with a non-zero nuclear spin will affect any unpaired electrons associated with that atom. This leads to the phenomenon of hyperfine coupling, analogous to J-coupling in NMR, splitting the EPR resonance signal into doublets, triplets and so forth. 
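As a quick numerical illustration of the two relations above (the resonance condition hν = ge μB B0 and the Boltzmann population ratio n_upper/n_lower = exp(−hν/kBT)), the following Python sketch reproduces the order-of-magnitude figures quoted in the text; it is only a check on the formulas, not part of the original article.

```python
# Minimal numerical check of the EPR resonance condition h*nu = g_e*mu_B*B0
# and the Boltzmann population ratio n_upper/n_lower = exp(-h*nu/(k_B*T)).
# Values follow the examples in the text (X-band frequency, room temperature).
import math

h = 6.62607015e-34       # Planck constant, J*s
mu_B = 9.2740100783e-24  # Bohr magneton, J/T
k_B = 1.380649e-23       # Boltzmann constant, J/K
g_e = 2.0023             # free-electron g-factor

nu = 9.3882e9            # microwave frequency, Hz (X-band example in the text)
T = 298.0                # temperature, K

# Resonance field from h*nu = g_e * mu_B * B0
B0 = h * nu / (g_e * mu_B)
print(f"Resonance field B0 = {B0:.4f} T = {B0 * 1e4:.0f} G")   # ~0.335 T (3350 G)

# Ratio of upper- to lower-state populations at thermal equilibrium
ratio = math.exp(-h * nu / (k_B * T))
print(f"n_upper/n_lower at {T:.0f} K = {ratio:.4f}")           # ~0.998
```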
Additional smaller splittings from nearby nuclei is sometimes termed "superhyperfine" coupling. Interactions of an unpaired electron with its environment influence the shape of an EPR spectral line. Line shapes can yield information about, for example, rates of chemical reactions. These effects (g-factor, hyperfine coupling, zero field splitting, exchange coupling) in an atom or molecule may not be the same for all orientations of an unpaired electron in an external magnetic field. This anisotropy depends upon the electronic structure of the atom or molecule (e.g., free radical) in question, and so can provide information about the atomic or molecular orbital containing the unpaired electron. The g factor Knowledge of the g-factor can give information about a paramagnetic center's electronic structure. An unpaired electron responds not only to a spectrometer's applied magnetic field but also to any local magnetic fields of atoms or molecules. The effective field experienced by an electron is thus written where includes the effects of local fields ( can be positive or negative). Therefore, the resonance condition (above) is rewritten as follows: The quantity is denoted and called simply the g-factor, so that the final resonance equation becomes This last equation is used to determine in an EPR experiment by measuring the field and the frequency at which resonance occurs. If does not equal , the implication is that the ratio of the unpaired electron's spin magnetic moment to its angular momentum differs from the free-electron value. Since an electron's spin magnetic moment is constant (approximately the Bohr magneton), then the electron must have gained or lost angular momentum through spin–orbit coupling. Because the mechanisms of spin–orbit coupling are well understood, the magnitude of the change gives information about the nature of the atomic or molecular orbital containing the unpaired electron. In general, the g factor is not a number but a 3×3 matrix. The principal axes of this tensor are determined by the local fields, for example, by the local atomic arrangement around the unpaired spin in a solid or in a molecule. Choosing an appropriate coordinate system (say, x,y,z) allows one to "diagonalize" this tensor, thereby reducing the maximal number of its components from 9 to 3: gxx, gyy and gzz. For a single spin experiencing only Zeeman interaction with an external magnetic field, the position of the EPR resonance is given by the expression gxxBx + gyyBy + gzzBz. Here Bx, By and Bz are the components of the magnetic field vector in the coordinate system (x,y,z); their magnitudes change as the field is rotated, so does the frequency of the resonance. For a large ensemble of randomly oriented spins (as in a fluid solution), the EPR spectrum consists of three peaks of characteristic shape at frequencies gxxB0, gyyB0 and gzzB0. In first-derivative spectrum, the low-frequency peak is positive, the high-frequency peak is negative, and the central peak is bipolar. Such situations are commonly observed in powders, and the spectra are therefore called "powder-pattern spectra". In crystals, the number of EPR lines is determined by the number of crystallographically equivalent orientations of the EPR spin (called "EPR center"). At higher temperatures, the three peaks coalesce to a singlet, corresponding to giso, for isotropic. The relationship between giso and the components is: One elementary step in analyzing an EPR spectrum is to compare giso with the g-factor for the free electron, ge. 
Metal-based radicals giso is typically well above ge whereas organic radicals, giso ~ ge. The determination of the absolute value of the g factor is challenging due to the lack of a precise estimate of the local magnetic field at the sample location. Therefore, typically so-called g factor standards are measured together with the sample of interest. In the common spectrum, the spectral line of the g factor standard is then used as a reference point to determine the g factor of the sample. For the initial calibration of g factor standards, Herb et al. introduced a precise procedure by using double resonance techniques based on the Overhauser shift. Hyperfine coupling Since the source of an EPR spectrum is a change in an electron's spin state, the EPR spectrum for a radical (S = 1/2 system) would consist of one line. Greater complexity arises because the spin couples with nearby nuclear spins. The magnitude of the coupling is proportional to the magnetic moment of the coupled nuclei and depends on the mechanism of the coupling. Coupling is mediated by two processes, dipolar (through space) and isotropic (through bond). This coupling introduces additional energy states and, in turn, multi-lined spectra. In such cases, the spacing between the EPR spectral lines indicates the degree of interaction between the unpaired electron and the perturbing nuclei. The hyperfine coupling constant of a nucleus is directly related to the spectral line spacing and, in the simplest cases, is essentially the spacing itself. Two common mechanisms by which electrons and nuclei interact are the Fermi contact interaction and by dipolar interaction. The former applies largely to the case of isotropic interactions (independent of sample orientation in a magnetic field) and the latter to the case of anisotropic interactions (spectra dependent on sample orientation in a magnetic field). Spin polarization is a third mechanism for interactions between an unpaired electron and a nuclear spin, being especially important for -electron organic radicals, such as the benzene radical anion. The symbols "a" or "A" are used for isotropic hyperfine coupling constants, while "B" is usually employed for anisotropic hyperfine coupling constants. In many cases, the isotropic hyperfine splitting pattern for a radical freely tumbling in a solution (isotropic system) can be predicted. Multiplicity For a radical having M equivalent nuclei, each with a spin of I, the number of EPR lines expected is 2MI + 1. As an example, the methyl radical, CH3, has three 1H nuclei, each with I = 1/2, and so the number of lines expected is 2MI + 1 = 2(3)(1/2) + 1 = 4, which is as observed. For a radical having M1 equivalent nuclei, each with a spin of I1, and a group of M2 equivalent nuclei, each with a spin of I2, the number of lines expected is (2M1I1 + 1) (2M2I2 + 1). As an example, the methoxymethyl radical, has two equivalent 1H nuclei, each with I = 1/2 and three equivalent 1H nuclei each with I = 1/2, and so the number of lines expected is (2M1I1 + 1) (2M2I2 + 1) = [2(2)(1/2) + 1] [2(3)(1/2) + 1] = 3×4 = 12, again as observed. The above can be extended to predict the number of lines for any number of nuclei. While it is easy to predict the number of lines, the reverse problem, unraveling a complex multi-line EPR spectrum and assigning the various spacings to specific nuclei, is more difficult. 
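The multiplicity rule just described is simple enough to verify numerically. The Python sketch below counts hyperfine lines as the product of (2MiIi + 1) over groups of equivalent nuclei and reproduces the methyl (4 lines) and methoxymethyl (12 lines) examples; it is an illustration of the counting rule, not code from any EPR analysis package.

```python
# Sketch of the multiplicity rule quoted above: for groups of equivalent
# nuclei, the number of hyperfine lines is the product of (2*M_i*I_i + 1)
# over the groups, where group i has M_i equivalent nuclei of spin I_i.
from fractions import Fraction

def n_epr_lines(groups):
    """groups: list of (M, I) tuples; M equivalent nuclei of spin I (I as a string like '1/2')."""
    n = 1
    for M, I in groups:
        n *= int(2 * M * Fraction(I) + 1)
    return n

# Methyl radical CH3: three equivalent 1H (I = 1/2) -> 4 lines
print(n_epr_lines([(3, "1/2")]))              # 4

# Methoxymethyl radical H3COCH2: 2 H and 3 H, all I = 1/2 -> 3 x 4 = 12 lines
print(n_epr_lines([(2, "1/2"), (3, "1/2")]))  # 12
```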
In the often encountered case of I = 1/2 nuclei (e.g., 1H, 19F, 31P), the line intensities produced by a population of radicals, each possessing M equivalent nuclei, will follow Pascal's triangle. For example, the spectrum at the right shows that the three 1H nuclei of the CH3 radical give rise to 2MI + 1 = 2(3)(1/2) + 1 = 4 lines with a 1:3:3:1 ratio. The line spacing gives a hyperfine coupling constant of aH = 23 G for each of the three 1H nuclei. Note again that the lines in this spectrum are first derivatives of absorptions. As a second example, the methoxymethyl radical, H3COCH2. the OCH2 center will give an overall 1:2:1 EPR pattern, each component of which is further split by the three methoxy hydrogens into a 1:3:3:1 pattern to give a total of 3×4 = 12 lines, a triplet of quartets. A simulation of the observed EPR spectrum is shown and agrees with the 12-line prediction and the expected line intensities. Note that the smaller coupling constant (smaller line spacing) is due to the three methoxy hydrogens, while the larger coupling constant (line spacing) is from the two hydrogens bonded directly to the carbon atom bearing the unpaired electron. It is often the case that coupling constants decrease in size with distance from a radical's unpaired electron, but there are some notable exceptions, such as the ethyl radical (CH2CH3). Resonance linewidth definition Resonance linewidths are defined in terms of the magnetic induction B and its corresponding units, and are measured along the x axis of an EPR spectrum, from a line's center to a chosen reference point of the line. These defined widths are called halfwidths and possess some advantages: for asymmetric lines, values of left and right halfwidth can be given. The halfwidth is the distance measured from the line's center to the point in which absorption value has half of maximal absorption value in the center of resonance line. First inclination width is a distance from center of the line to the point of maximal absorption curve inclination. In practice, a full definition of linewidth is used. For symmetric lines, halfwidth , and full inclination width . Applications EPR/ESR spectroscopy is used in various branches of science, such as biology, chemistry and physics, for the detection and identification of free radicals in the solid, liquid, or gaseous state, and in paramagnetic centers such as F-centers. Chemical reactions EPR is a sensitive, specific method for studying both radicals formed in chemical reactions and the reactions themselves. For example, when ice (solid H2O) is decomposed by exposure to high-energy radiation, radicals such as H, OH, and HO2 are produced. Such radicals can be identified and studied by EPR. Organic and inorganic radicals can be detected in electrochemical systems and in materials exposed to UV light. In many cases, the reactions to make the radicals and the subsequent reactions of the radicals are of interest, while in other cases EPR is used to provide information on a radical's geometry and the orbital of the unpaired electron. EPR is useful in homogeneous catalysis research for characterization of paramagnetic complexes and reactive intermediates. EPR spectroscopy is a particularly useful tool to investigate their electronic structures, which is fundamental to understand their reactivity. EPR/ESR spectroscopy can be applied only to systems in which the balance between radical decay and radical formation keeps the free radicals concentration above the detection limit of the spectrometer used. 
This can be a particularly severe problem in studying reactions in liquids. An alternative approach is to slow down reactions by studying samples held at cryogenic temperatures, such as 77 K (liquid nitrogen) or 4.2 K (liquid helium). An example of this work is the study of radical reactions in single crystals of amino acids exposed to x-rays, work that sometimes leads to activation energies and rate constants for radical reactions. Medical and biological Medical and biological applications of EPR also exist. Although radicals are very reactive, so they do not normally occur in high concentrations in biology, special reagents have been developed to attach "spin labels", also called "spin probes", to molecules of interest. Specially-designed nonreactive radical molecules can attach to specific sites in a biological cell, and EPR spectra then give information on the environment of the spin labels. Spin-labeled fatty acids have been extensively used to study dynamic organisation of lipids in biological membranes, lipid-protein interactions and temperature of transition of gel to liquid crystalline phases. Injection of spin-labeled molecules allows for electron resonance imaging of living organisms. A type of dosimetry system has been designed for reference standards and routine use in medicine, based on EPR signals of radicals from irradiated polycrystalline α-alanine (the alanine deamination radical, the hydrogen abstraction radical, and the radical). This method is suitable for measuring gamma and X-rays, electrons, protons, and high-linear energy transfer (LET) radiation of doses in the 1 Gy to 100 kGy range. EPR can be used to measure microviscosity and micropolarity within drug delivery systems as well as the characterization of colloidal drug carriers. The study of radiation-induced free radicals in biological substances (for cancer research) poses the additional problem that tissue contains water, and water (due to its electric dipole moment) has a strong absorption band in the microwave region used in EPR spectrometers. Material characterization EPR/ESR spectroscopy is used in geology and archaeology as a dating tool. It can be applied to a wide range of materials such as organic shales, carbonates, sulfates, phosphates, silica or other silicates. When applied to shales, the EPR data correlates to the maturity of the kerogen in the shale. EPR spectroscopy has been used to measure properties of crude oil, such as determination of asphaltene and vanadium content. The free-radical component of the EPR signal is proportional to the amount of asphaltene in the oil regardless of any solvents, or precipitants that may be present in that oil. When the oil is subject to a precipitant such as hexane, heptane, pyridine however, then much of the asphaltene can be subsequently extracted from the oil by gravimetric techniques. The EPR measurement of that extract will then be function of the polarity of the precipitant that was used. Consequently, it is preferable to apply the EPR measurement directly to the crude. In the case that the measurement is made upstream of a separator (oil production), then it may also be necessary determine the oil fraction within the crude (e.g., if a certain crude contains 80% oil and 20% water, then the EPR signature will be 80% of the signature of downstream of the separator). EPR has been used by archaeologists for the dating of teeth. 
Radiation damage over long periods of time creates free radicals in tooth enamel, which can then be examined by EPR and, after proper calibration, dated. Similarly, material extracted from the teeth of people during dental procedures can be used to quantify their cumulative exposure to ionizing radiation. People (and other mammals) exposed to radiation from the atomic bombs, from the Chernobyl disaster, and from the Fukushima accident have been examined by this method. Radiation-sterilized foods have been examined with EPR spectroscopy, aiming to develop methods to determine whether a food sample has been irradiated and to what dose. Electrochemistry Applications EPR is a very important technique in the electrochemical field because it operates to detect paramagnetic species and unpaired electrons. The technique has a long history of being coupled to the field, starting with a report in 1958 using EPR to detect free radicals generated via electrochemistry. In an experiment performed by Austen, Given, Ingram, and Peover, solutions of aromatics were electrolyzed and placed into an EPR instrument, resulting in a broad signal response. While this result could not be used for any specific identification, the presence of an EPR signal validated the theory that free radical species were involved in electron transfer reactions as an intermediate state. Soon after, other groups discovered the possibility of coupling in situ electrolysis with EPR, producing the first resolved spectra of the nitrobenzene anion radical from a mercury electrode sealed within the instrument cavity. Since then, the impact of EPR on the field of electrochemistry has only expanded, serving as a way to monitor free radicals produced by other electrolysis reactions. In more recent years, EPR has also been used within the context of electrochemistry to study redox-flow reactions and batteries. Because of the in situ possibilities, it is possible to construct an electrochemical cell inside the EPR instrument and capture the short-lived intermediates involved at lower concentrations than necessitated for NMR. Often, NMR and EPR experiments are coupled to get a full picture of the electrochemical reaction over time. It is also possible to determine the concentration of a specific radical species via EPR, as it is proportional to the double integral of the EPR signal as referenced to a calibration standard. A specific application example can be seen in Lithium ion batteries, specifically studying Li-S battery sulfate ion formation or in Li-O2 battery oxygen radical formation via the 4-oxo-TEMP to 4-oxo-TEMPO conversion. Other electrochemical applications to EPR can be found in the context of water purification reactions and oxygen reduction reactions. In water purification reactions, reactive radical species such as singlet oxygen and hydroxyl, oxygen, and hydrogen radicals are consistently present, generated electrochemically in the breakdown of water pollutants. These intermediates are highly reactive and unstable, thus necessitating a technique such as EPR that can identify radical species specifically. Other applications In the field of quantum computing, pulsed EPR is used to control the state of electron spin qubits in materials such as diamond, silicon and gallium arsenide. High-field high-frequency measurements High-field high-frequency EPR measurements are sometimes needed to detect subtle spectroscopic details. 
However, for many years the use of electromagnets to produce the needed fields above 1.5 T was impossible, due principally to limitations of traditional magnet materials. The first multifunctional millimeter EPR spectrometer with a superconducting solenoid was described in the early 1970s by Y. S. Lebedev's group (Russian Institute of Chemical Physics, Moscow) in collaboration with L. G. Oranski's group (Ukrainian Physics and Technics Institute, Donetsk), which began working in the Institute of Problems of Chemical Physics, Chernogolovka around 1975. Two decades later, a W-band EPR spectrometer was produced as a small commercial line by the German Bruker Company, initiating the expansion of W-band EPR techniques into medium-sized academic laboratories. The EPR waveband is stipulated by the frequency or wavelength of a spectrometer's microwave source (see Table). EPR experiments often are conducted at X and, less commonly, Q bands, mainly due to the ready availability of the necessary microwave components (which originally were developed for radar applications). A second reason for widespread X and Q band measurements is that electromagnets can reliably generate fields up to about 1 tesla. However, the low spectral resolution over g-factor at these wavebands limits the study of paramagnetic centers with comparatively low anisotropic magnetic parameters. Measurements at > 40 GHz, in the millimeter wavelength region, offer the following advantages: EPR spectra are simplified due to the reduction of second-order effects at high fields. Increase in orientation selectivity and sensitivity in the investigation of disordered systems. The informativity and precision of pulse methods, e.g., ENDOR also increase at high magnetic fields. Accessibility of spin systems with larger zero-field splitting due to the larger microwave quantum energy h. The higher spectral resolution over g-factor, which increases with irradiation frequency and external magnetic field B0. This is used to investigate the structure, polarity, and dynamics of radical microenvironments in spin-modified organic and biological systems through the spin label and probe method. The figure shows how spectral resolution improves with increasing frequency. Saturation of paramagnetic centers occurs at a comparatively low microwave polarizing field B1, due to the exponential dependence of the number of excited spins on the radiation frequency . This effect can be successfully used to study the relaxation and dynamics of paramagnetic centers as well as of superslow motion in the systems under study. The cross-relaxation of paramagnetic centers decreases dramatically at high magnetic fields, making it easier to obtain more-precise and more-complete information about the system under study. This was demonstrated experimentally in the study of various biological, polymeric and model systems at D-band EPR. Hardware components Microwave bridge The microwave bridge contains both the microwave source and the detector. Older spectrometers used a vacuum tube called a klystron to generate microwaves, but modern spectrometers use a Gunn diode. Immediately after the microwave source there is an isolator which serves to attenuate any reflections back to the source which would result in fluctuations in the microwave frequency. The microwave power from the source is then passed through a directional coupler which splits the microwave power into two paths, one directed towards the cavity and the other the reference arm. 
Along both paths there is a variable attenuator that facilitates the precise control of the flow of microwave power. This in turn allows for accurate control over the intensity of the microwaves subjected to the sample. On the reference arm, after the variable attenuator there is a phase shifter that sets a defined phase relationship between the reference and reflected signal which permits phase sensitive detection. Most EPR spectrometers are reflection spectrometers, meaning that the detector should only be exposed to microwave radiation coming back from the cavity. This is achieved by the use of a device known as the circulator which directs the microwave radiation (from the branch that is heading towards the cavity) into the cavity. Reflected microwave radiation (after absorption by the sample) is then passed through the circulator towards the detector, ensuring it does not go back to the microwave source. The reference signal and reflected signal are combined and passed to the detector diode which converts the microwave power into an electrical current. Reference arm At low energies (less than 1 μW) the diode current is proportional to the microwave power and the detector is referred to as a square-law detector. At higher power levels (greater than 1 mW) the diode current is proportional to the square root of the microwave power and the detector is called a linear detector. In order to obtain optimal sensitivity as well as quantitative information the diode should be operating within the linear region. To ensure the detector is operating at that level the reference arm serves to provide a "bias". Magnet In an EPR spectrometer the magnetic assembly includes the magnet with a dedicated power supply as well as a field sensor or regulator such as a Hall probe. EPR spectrometers use one of two types of magnet which is determined by the operating microwave frequency (which determine the range of magnetic field strengths required). The first is an electromagnet which are generally capable of generating field strengths of up to 1.5 T making them suitable for measurements using the Q-band frequency. In order to generate field strengths appropriate for W-band and higher frequency operation superconducting magnets are employed. The magnetic field is homogeneous across the sample volume and has a high stability at static field. Microwave resonator (cavity) The microwave resonator is designed to enhance the microwave magnetic field at the sample in order to induce EPR transitions. It is a metal box with a rectangular or cylindrical shape that resonates with microwaves (like an organ pipe with sound waves). At the resonance frequency of the cavity microwaves remain inside the cavity and are not reflected back. Resonance means the cavity stores microwave energy and its ability to do this is given by the quality factor , defined by the following equation: The higher the value of the higher the sensitivity of the spectrometer. The energy dissipated is the energy lost in one microwave period. Energy may be lost to the side walls of the cavity as microwaves may generate currents which in turn generate heat. A consequence of resonance is the creation of a standing wave inside the cavity. Electromagnetic standing waves have their electric and magnetic field components exactly out of phase. This provides an advantage as the electric field provides non-resonant absorption of the microwaves, which in turn increases the dissipated energy and reduces . 
To achieve the largest signals and hence sensitivity the sample is positioned such that it lies within the magnetic field maximum and the electric field minimum. When the magnetic field strength is such that an absorption event occurs, the value of will be reduced due to the extra energy loss. This results in a change of impedance which serves to stop the cavity from being critically coupled. This means microwaves will now be reflected back to the detector (in the microwave bridge) where an EPR signal is detected. Pulsed electron paramagnetic resonance The dynamics of electron spins are best studied with pulsed measurements. Microwave pulses typically 10–100 ns long are used to control the spins in the Bloch sphere. The spin–lattice relaxation time can be measured with an inversion recovery experiment. As with pulsed NMR, the Hahn echo is central to many pulsed EPR experiments. A Hahn echo decay experiment can be used to measure the dephasing time, as shown in the animation below. The size of the echo is recorded for different spacings of the two pulses. This reveals the decoherence, which is not refocused by the pulse. In simple cases, an exponential decay is measured, which is described by the time. Pulsed electron paramagnetic resonance could be advanced into electron nuclear double resonance spectroscopy (ENDOR), which utilizes waves in the radio frequencies. Since different nuclei with unpaired electrons respond to different wavelengths, radio frequencies are required at times. Since the results of the ENDOR gives the coupling resonance between the nuclei and the unpaired electron, the relationship between them can be determined. See also Dynamic nuclear polarisation EDMR Electric dipole spin resonance Electron resonance imaging Ferromagnetic resonance Optically detected magnetic resonance Site-directed spin labeling Spin label Spin trapping Albumin transport function analysis by EPR spectroscopy References External links Electron Magnetic Resonance Program National High Magnetic Field Laboratory Electron Paramagnetic Resonance (Specialist Periodical Reports) Published by the Royal Society of Chemistry Using ESR to measure free radicals in used engine oil Scientific techniques Magnetism Russian inventions Neuroimaging Soviet inventions
Electron paramagnetic resonance
[ "Physics", "Chemistry" ]
6,969
[ "Electron paramagnetic resonance", "Spectroscopy", "Spectrum (physical sciences)" ]
1,926,158
https://en.wikipedia.org/wiki/Radical%20anion
In organic chemistry, a radical anion is a free radical species that carries a negative charge. Radical anions are encountered in organic chemistry as reduced derivatives of polycyclic aromatic compounds, e.g. sodium naphthenide. An example of a non-carbon radical anion is the superoxide anion, formed by transfer of one electron to an oxygen molecule. Radical anions are typically indicated by M•−. Polycyclic radical anions Many aromatic compounds can undergo one-electron reduction by alkali metals. The electron is transferred from the alkali metal ion to an unoccupied antibonding p-p π* orbital of the aromatic molecule. This transfer is usually only energetically favorable if the aprotic solvent efficiently solvates the alkali metal ion. Effective solvents are those that bind to the alkali metal cation: diethyl ether < THF < 1,2-dimethoxyethane < HMPA. In principle any unsaturated molecule can form a radical anion, but the antibonding orbitals are only energetically accessible in more extensive conjugated systems. Ease of formation is in the order benzene < naphthalene < anthracene < pyrene, etc. Salts of the radical anions are often not isolated as solids but used in situ. They are usually deeply colored. Naphthalene in the form of lithium naphthalene is obtained from the reaction of naphthalene with lithium. Sodium naphthalene is obtained from the reaction of naphthalene with sodium. Sodium 1-methylnaphthalene and 1-methylnaphthalene are more soluble than sodium naphthalene and naphthalene, respectively. Biphenyl as its lithium salt. Acenaphthylene is a milder reductant than the naphthalene anion. Anthracene in the form of its alkali metal salts. Pyrene as its sodium salt. Perylene in the form of its alkali metal (M = Li, Na, Cs) etherates. Other examples Cyclooctatetraene is reduced by elemental potassium to the dianion. The resulting dianion is a 10-pi-electron system, which conforms to the Hückel rule for aromaticity. Quinone is reduced to a semiquinone radical anion. Semidiones are derived from the reduction of dicarbonyl compounds. Reactions Redox The pi-radical anions are used as reducing agents in specialized syntheses. Being soluble in at least some solvents, these salts act faster than the alkali metals themselves. The disadvantage is that the polycyclic hydrocarbon must be removed afterwards. The reduction potential of alkali metal naphthalene salts is about −3.1 V (vs Fc+/0). The reduction potentials of the larger systems are lower in magnitude; for example, that of acenaphthylene is −2.45 V. Many radical anions are susceptible to further reduction to dianions. Protonation Addition of a proton source (even water) to a radical anion results in protonation, i.e. the sequence of reduction followed by protonation is equivalent to hydrogenation. For instance, the anthracene radical anion forms mainly (but not exclusively) 9,10-dihydroanthracene. Radical anions and their protonation are central to the Birch reduction. Coordination to metal ions Radical anions of polycyclic aromatic compounds function as ligands in organometallic chemistry. Radical cations Cationic radical species are much less common than the anions. Denoted M•+, they appear prominently in mass spectrometry. When a gas-phase molecule is subjected to electron ionization one electron is abstracted by an electron in the electron beam to create a radical cation M•+. This species represents the molecular ion or parent ion. 
A typical mass spectrum shows multiple signals because the molecular ion fragments into a complex mixture of ions and uncharged radical species. For example, the methanol radical cation fragments into a methenium cation and a hydroxyl radical. In naphthalene the unfragmented radical cation is by far the most prominent peak in the mass spectrum. Secondary species are generated from proton gain (M+1) and proton loss (M-1). Some compounds containing the dioxygenyl cation can be prepared in bulk. Organic conductors Radical cations figure prominently in the chemistry and properties of conducting polymers. Such polymers are formed by the oxidation of heterocycles to give radical cations, which condense with the parent heterocycle. For example, polypyrrole is prepared by oxidation of pyrrole using ferric chloride in methanol: Once formed, these polymers become conductive upon oxidation. Polarons and bipolarons are radical cations encountered in doped conducting polymers. References Reactive intermediates Mass spectrometry
Radical anion
[ "Physics", "Chemistry" ]
1,021
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Organic compounds", "Mass spectrometry", "Physical organic chemistry", "Reactive intermediates", "Matter" ]
1,926,607
https://en.wikipedia.org/wiki/Goldman%20equation
The Goldman–Hodgkin–Katz voltage equation, sometimes called the Goldman equation, is used in cell membrane physiology to determine the resting potential across a cell's membrane, taking into account all of the ions that are permeant through that membrane. The discoverers of this are David E. Goldman of Columbia University, and the Medicine Nobel laureates Alan Lloyd Hodgkin and Bernard Katz. Equation for monovalent ions The GHK voltage equation for monovalent positive ionic species M+ and negative ionic species A− is Em = (RT/F) ln[(Σi PMi[Mi+]out + Σj PAj[Aj−]in) / (Σi PMi[Mi+]in + Σj PAj[Aj−]out)]. This results in the following if we consider a membrane separating two solutions containing Na+, K+ and Cl−: Em = (RT/F) ln[(PNa[Na+]out + PK[K+]out + PCl[Cl−]in) / (PNa[Na+]in + PK[K+]in + PCl[Cl−]out)]. It is "Nernst-like" but has a term for each permeant ion: Em = the membrane potential (in volts, equivalent to joules per coulomb), Pion = the selectivity (permeability) for that ion (in meters per second), [ion]out = the extracellular concentration of that ion (in moles per cubic meter, to match the other SI units), [ion]in = the intracellular concentration of that ion (in moles per cubic meter), R = the ideal gas constant (joules per kelvin per mole), T = the temperature in kelvins, F = Faraday's constant (coulombs per mole). RT/F is approximately 26.7 mV at human body temperature (37 °C); when factoring in the change-of-base formula between the natural logarithm, ln, and the logarithm with base 10, it becomes 26.7 mV × 2.303 ≈ 61.5 mV, a value often used in neuroscience. The ionic charge determines the sign of the membrane potential contribution. During an action potential, although the membrane potential changes about 100 mV, the concentrations of ions inside and outside the cell do not change significantly. They are always very close to their respective concentrations when the membrane is at its resting potential. Calculating the first term Using R ≈ 8.314 J·K−1·mol−1, T ≈ 310 K and F ≈ 96485 C·mol−1 (assuming body temperature), and the fact that one volt is equal to one joule of energy per coulomb of charge, the first term RT/F can be reduced to about 26.7 mV; with only one permeant ion, the GHK equation itself reduces to Em = (RT/F) ln([ion]out/[ion]in), which is the Nernst equation. Derivation Goldman's equation seeks to determine the voltage Em across a membrane. A Cartesian coordinate system is used to describe the system, with the z direction being perpendicular to the membrane. Assuming that the system is symmetrical in the x and y directions (around and along the axon, respectively), only the z direction need be considered; thus, the voltage Em is the integral of the z component of the electric field across the membrane. According to Goldman's model, only two factors influence the motion of ions across a permeable membrane: the average electric field and the difference in ionic concentration from one side of the membrane to the other. The electric field is assumed to be constant across the membrane, so that it can be set equal to Em/L, where L is the thickness of the membrane. For a given ion denoted A with valence nA, its flux jA—in other words, the number of ions crossing per time and per area of the membrane—is given by the sum of a diffusion term and a drift term. The first term corresponds to Fick's law of diffusion, which gives the flux due to diffusion down the concentration gradient, i.e., from high to low concentration. The constant DA is the diffusion constant of the ion A. The second term reflects the flux due to the electric field, which increases linearly with the electric field; formally, it is [A] multiplied by the drift velocity of the ions, with the drift velocity expressed using the Stokes–Einstein relation applied to electrophoretic mobility. The constants here are the charge valence nA of the ion A (e.g., +1 for K+, +2 for Ca2+ and −1 for Cl−), the temperature T (in kelvins), the molar gas constant R, and the faraday F, which is the total charge of a mole of electrons. 
This is a first-order ODE of the form y' = ay + b, with y = [A] and y''' = d[A]/dz; integrating both sides from z=0 to z=L with the boundary conditions [A](0) = [A]in and [A](L) = [A]out, one gets the solution where μ is a dimensionless number and PA is the ionic permeability, defined here as The electric current density JA equals the charge qA of the ion multiplied by the flux jA Current density has units of (Amperes/m2). Molar flux has units of (mol/(s m2)). Thus, to get current density from molar flux one needs to multiply by Faraday's constant F (Coulombs/mol). F will then cancel from the equation below. Since the valence has already been accounted for above, the charge qA of each ion in the equation above, therefore, should be interpreted as +1 or -1 depending on the polarity of the ion. There is such a current associated with every type of ion that can cross the membrane; this is because each type of ion would require a distinct membrane potential to balance diffusion, but there can only be one membrane potential. By assumption, at the Goldman voltage Em, the total current density is zero (Although the current for each ion type considered here is nonzero, there are other pumps in the membrane, e.g. Na+/K+-ATPase, not considered here which serve to balance each individual ion's current, so that the ion concentrations on either side of the membrane do not change over time in equilibrium.) If all the ions are monovalent—that is, if all the nA equal either +1 or -1—this equation can be written whose solution is the Goldman equation where If divalent ions such as calcium are considered, terms such as e2μ appear, which is the square of e''μ; in this case, the formula for the Goldman equation can be solved using the quadratic formula. See also Bioelectronics Cable theory GHK current equation Hindmarsh–Rose model Hodgkin–Huxley model Morris–Lecar model Nernst equation Saltatory conduction References External links Subthreshold membrane phenomena Includes a well-explained derivation of the Goldman-Hodgkin-Katz equation Nernst/Goldman Equation Simulator Goldman-Hodgkin-Katz Equation Calculator Nernst/Goldman interactive Java applet The membrane voltage is calculated interactively as the number of ions are changed between the inside and outside of the cell. Potential, Impedance, and Rectification in Membranes by Goldman (1943) Physical chemistry Electrochemical equations
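The GHK voltage expression is straightforward to evaluate numerically. The Python sketch below does so for the three ions usually considered (K+, Na+, Cl−); the relative permeabilities and concentrations are generic textbook-style example values chosen only for illustration, not data taken from this article.

```python
# Illustrative evaluation of the GHK voltage equation for K+, Na+ and Cl-.
# The permeability ratios and concentrations are assumed example values
# resembling a resting mammalian neuron, not figures from the article.
import math

R = 8.314     # J / (K mol)
T = 310.0     # K, about 37 degrees C
F = 96485.0   # C / mol

# relative permeabilities (only the ratios matter here)
P_K, P_Na, P_Cl = 1.0, 0.05, 0.45          # assumed example values

# concentrations in mM (the units cancel in the ratio)
K_out, K_in = 5.0, 140.0
Na_out, Na_in = 145.0, 10.0
Cl_out, Cl_in = 110.0, 10.0

# note the anion (Cl-) swaps intracellular and extracellular terms
numerator = P_K * K_out + P_Na * Na_out + P_Cl * Cl_in
denominator = P_K * K_in + P_Na * Na_in + P_Cl * Cl_out

E_m = (R * T / F) * math.log(numerator / denominator)
print(f"E_m = {E_m * 1000:.1f} mV")   # about -65 mV with these example values
```

With these assumed numbers the result comes out on the order of −65 mV, in the range typically quoted for neuronal resting potentials.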
Goldman equation
[ "Physics", "Chemistry", "Mathematics" ]
1,360
[ "Applied and interdisciplinary physics", "Mathematical objects", "Equations", "Electrochemistry", "nan", "Physical chemistry", "Electrochemical equations" ]
1,928,465
https://en.wikipedia.org/wiki/Flavour%20%28particle%20physics%29
In particle physics, flavour or flavor refers to the species of an elementary particle. The Standard Model counts six flavours of quarks and six flavours of leptons. They are conventionally parameterized with flavour quantum numbers that are assigned to all subatomic particles. They can also be described by some of the family symmetries proposed for the quark-lepton generations. Quantum numbers In classical mechanics, a force acting on a point-like particle can only alter the particle's dynamical state, i.e., its momentum, angular momentum, etc. Quantum field theory, however, allows interactions that can alter other facets of a particle's nature described by non-dynamical, discrete quantum numbers. In particular, the action of the weak force is such that it allows the conversion of quantum numbers describing mass and electric charge of both quarks and leptons from one discrete type to another. This is known as a flavour change, or flavour transmutation. Due to their quantum description, flavour states may also undergo quantum superposition. In atomic physics the principal quantum number of an electron specifies the electron shell in which it resides, which determines the energy level of the whole atom. Analogously, the five flavour quantum numbers (isospin, strangeness, charm, bottomness or topness) can characterize the quantum state of quarks, by the degree to which it exhibits six distinct flavours (u, d, c, s, t, b). Composite particles can be created from multiple quarks, forming hadrons, such as mesons and baryons, each possessing unique aggregate characteristics, such as different masses, electric charges, and decay modes. A hadron's overall flavour quantum numbers depend on the numbers of constituent quarks of each particular flavour. Conservation laws All of the various charges discussed above are conserved by the fact that the corresponding charge operators can be understood as generators of symmetries that commute with the Hamiltonian. Thus, the eigenvalues of the various charge operators are conserved. Absolutely conserved quantum numbers in the Standard Model are: electric charge () weak isospin () baryon number () lepton number () In some theories, such as the grand unified theory, the individual baryon and lepton number conservation can be violated, if the difference between them () is conserved (see Chiral anomaly). Strong interactions conserve all flavours, but all flavour quantum numbers are violated (changed, non-conserved) by electroweak interactions. Flavour symmetry If there are two or more particles which have identical interactions, then they may be interchanged without affecting the physics. All (complex) linear combinations of these two particles give the same physics, as long as the combinations are orthogonal, or perpendicular, to each other. In other words, the theory possesses symmetry transformations such as , where and are the two fields (representing the various generations of leptons and quarks, see below), and is any unitary matrix with a unit determinant. Such matrices form a Lie group called SU(2) (see special unitary group). This is an example of flavour symmetry. In quantum chromodynamics, flavour is a conserved global symmetry. In the electroweak theory, on the other hand, this symmetry is broken, and flavour changing processes exist, such as quark decay or neutrino oscillations. Flavour quantum numbers Leptons All leptons carry a lepton number . In addition, leptons carry weak isospin, , which is − for the three charged leptons (i.e. 
electron, muon and tau) and + for the three associated neutrinos. Each doublet of a charged lepton and a neutrino consisting of opposite are said to constitute one generation of leptons. In addition, one defines a quantum number called weak hypercharge, , which is −1 for all left-handed leptons. Weak isospin and weak hypercharge are gauged in the Standard Model. Leptons may be assigned the six flavour quantum numbers: electron number, muon number, tau number, and corresponding numbers for the neutrinos (electron neutrino, muon neutrino and tau neutrino). These are conserved in strong and electromagnetic interactions, but violated by weak interactions. Therefore, such flavour quantum numbers are not of great use. A separate quantum number for each generation is more useful: electronic lepton number (+1 for electrons and electron neutrinos), muonic lepton number (+1 for muons and muon neutrinos), and tauonic lepton number (+1 for tau leptons and tau neutrinos). However, even these numbers are not absolutely conserved, as neutrinos of different generations can mix; that is, a neutrino of one flavour can transform into another flavour. The strength of such mixings is specified by a matrix called the Pontecorvo–Maki–Nakagawa–Sakata matrix (PMNS matrix). Quarks All quarks carry a baryon number and all anti-quarks have They also all carry weak isospin, The positively charged quarks (up, charm, and top quarks) are called up-type quarks and have the negatively charged quarks (down, strange, and bottom quarks) are called down-type quarks and have Each doublet of up and down type quarks constitutes one generation of quarks. For all the quark flavour quantum numbers listed below, the convention is that the flavour charge and the electric charge of a quark have the same sign. Thus any flavour carried by a charged meson has the same sign as its charge. Quarks have the following flavour quantum numbers: The third component of isospin (usually just "isospin") (), which has value for the up quark and for the down quark. Strangeness (): Defined as where represents the number of strange quarks () and represents the number of strange antiquarks (). This quantum number was introduced by Murray Gell-Mann. This definition gives the strange quark a strangeness of −1 for the above-mentioned reason. Charm (): Defined as where represents the number of charm quarks () and represents the number of charm antiquarks. The charm quark's value is +1. Bottomness (or beauty) (): Defined as where represents the number of bottom quarks () and represents the number of bottom antiquarks. Topness (or truth) (): Defined as where represents the number of top quarks () and represents the number of top antiquarks. However, because of the extremely short half-life of the top quark (predicted lifetime of only ), by the time it can interact strongly it has already decayed to another flavour of quark (usually to a bottom quark). For that reason the top quark doesn't hadronize, that is it never forms any meson or baryon. These five quantum numbers, together with baryon number (which is not a flavour quantum number), completely specify numbers of all 6 quark flavours separately (as i.e. an antiquark is counted with the minus sign). They are conserved by both the electromagnetic and strong interactions (but not the weak interaction). 
From them can be built the derived quantum numbers: Hypercharge (): Electric charge (): (see Gell-Mann–Nishijima formula) The terms "strange" and "strangeness" predate the discovery of the quark, but continued to be used after its discovery for the sake of continuity (i.e. the strangeness of each type of hadron remained the same); strangeness of anti-particles being referred to as +1, and particles as −1 as per the original definition. Strangeness was introduced to explain the rate of decay of newly discovered particles, such as the kaon, and was used in the Eightfold Way classification of hadrons and in subsequent quark models. These quantum numbers are preserved under strong and electromagnetic interactions, but not under weak interactions. For first-order weak decays, that is processes involving only one quark decay, these quantum numbers (e.g. charm) can only vary by 1, that is, for a decay involving a charmed quark or antiquark either as the incident particle or as a decay byproduct, likewise, for a decay involving a bottom quark or antiquark Since first-order processes are more common than second-order processes (involving two quark decays), this can be used as an approximate "selection rule" for weak decays. A special mixture of quark flavours is an eigenstate of the weak interaction part of the Hamiltonian, so will interact in a particularly simple way with the W bosons (charged weak interactions violate flavour). On the other hand, a fermion of a fixed mass (an eigenstate of the kinetic and strong interaction parts of the Hamiltonian) is an eigenstate of flavour. The transformation from the former basis to the flavour-eigenstate/mass-eigenstate basis for quarks underlies the Cabibbo–Kobayashi–Maskawa matrix (CKM matrix). This matrix is analogous to the PMNS matrix for neutrinos, and quantifies flavour changes under charged weak interactions of quarks. The CKM matrix allows for CP violation if there are at least three generations. Antiparticles and hadrons Flavour quantum numbers are additive. Hence antiparticles have flavour equal in magnitude to the particle but opposite in sign. Hadrons inherit their flavour quantum number from their valence quarks: this is the basis of the classification in the quark model. The relations between the hypercharge, electric charge and other flavour quantum numbers hold for hadrons as well as quarks. Flavour problem The flavour problem (also known as the flavour puzzle) is the inability of current Standard Model flavour physics to explain why the free parameters of particles in the Standard Model have the values they have, and why there are specified values for mixing angles in the PMNS and CKM matrices. These free parameters - the fermion masses and their mixing angles - appear to be specifically tuned. Understanding the reason for such tuning would be the solution to the flavor puzzle. There are very fundamental questions involved in this puzzle such as why there are three generations of quarks (up-down, charm-strange, and top-bottom quarks) and leptons (electron, muon and tau neutrino), as well as how and why the mass and mixing hierarchy arises among different flavours of these fermions. Quantum chromodynamics Quantum chromodynamics (QCD) contains six flavours of quarks. However, their masses differ and as a result they are not strictly interchangeable with each other. The up and down flavours are close to having equal masses, and the theory of these two quarks possesses an approximate SU(2) symmetry (isospin symmetry). 
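Since the derived quantum numbers mentioned above follow from simple additive relations (hypercharge Y = B + S + C + B′ + T and the Gell-Mann–Nishijima formula Q = I3 + Y/2), they are easy to check numerically. The following Python sketch applies these relations to a few standard example hadrons, using the sign convention stated in the text (flavour charge has the same sign as electric charge); it is only an illustrative check of the bookkeeping, not code from any physics library.

```python
# Numerical check of the relations referenced above:
#   Y = B + S + C + B' + T       (hypercharge)
#   Q = I3 + Y / 2               (Gell-Mann-Nishijima formula)
# with S = -1 per strange quark, C = +1 per charm quark, etc.
from fractions import Fraction as Fr

def charge(B, I3, S=0, C=0, Bp=0, T=0):
    Y = Fr(B) + S + C + Bp + T   # hypercharge
    return Fr(I3) + Y / 2        # electric charge

print(charge(B=1, I3=Fr(1, 2)))            # proton (uud):     +1
print(charge(B=1, I3=Fr(-1, 2)))           # neutron (udd):     0
print(charge(B=0, I3=Fr(1, 2), S=+1))      # K+ (u, anti-s):   +1
print(charge(B=1, I3=0, S=-3))             # Omega- (sss):     -1
```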
Chiral symmetry description Under some circumstances (for instance when the quark masses are much smaller than the chiral symmetry breaking scale of 250 MeV), the masses of quarks do not substantially contribute to the system's behavior, and to zeroth approximation the masses of the lightest quarks can be ignored for most purposes, as if they had zero mass. The simplified behavior of flavour transformations can then be successfully modeled as acting independently on the left- and right-handed parts of each quark field. This approximate description of the flavour symmetry is described by a chiral group . Vector symmetry description If all quarks had non-zero but equal masses, then this chiral symmetry is broken to the vector symmetry of the "diagonal flavour group" , which applies the same transformation to both helicities of the quarks. This reduction of symmetry is a form of explicit symmetry breaking. The strength of explicit symmetry breaking is controlled by the current quark masses in QCD. Even if quarks are massless, chiral flavour symmetry can be spontaneously broken if the vacuum of the theory contains a chiral condensate (as it does in low-energy QCD). This gives rise to an effective mass for the quarks, often identified with the valence quark mass in QCD. Symmetries of QCD Analysis of experiments indicate that the current quark masses of the lighter flavours of quarks are much smaller than the QCD scale, ΛQCD, hence chiral flavour symmetry is a good approximation to QCD for the up, down and strange quarks. The success of chiral perturbation theory and the even more naive chiral models spring from this fact. The valence quark masses extracted from the quark model are much larger than the current quark mass. This indicates that QCD has spontaneous chiral symmetry breaking with the formation of a chiral condensate. Other phases of QCD may break the chiral flavour symmetries in other ways. History Isospin Isospin, strangeness and hypercharge predate the quark model. The first of those quantum numbers, Isospin, was introduced as a concept in 1932 by Werner Heisenberg, to explain symmetries of the then newly discovered neutron (symbol n): The mass of the neutron and the proton (symbol ) are almost identical: They are nearly degenerate, and both are thus often referred to as “nucleons”, a term that ignores their differences. Although the proton has a positive electric charge, and the neutron is neutral, they are almost identical in all other aspects, and their nuclear binding-force interactions (old name for the residual color force) are so strong compared to the electrical force between some, that there is very little point in paying much attention to their differences. The strength of the strong interaction between any pair of nucleons is the same, independent of whether they are interacting as protons or as neutrons. Protons and neutrons were grouped together as nucleons and treated as different states of the same particle, because they both have nearly the same mass and interact in nearly the same way, if the (much weaker) electromagnetic interaction is neglected. Heisenberg noted that the mathematical formulation of this symmetry was in certain respects similar to the mathematical formulation of non-relativistic spin, whence the name "isospin" derives. The neutron and the proton are assigned to the doublet (the spin-, 2, or fundamental representation) of SU(2), with the proton and neutron being then associated with different isospin projections and respectively. 
The pions are assigned to the triplet (the spin-1, 3, or adjoint representation) of SU(2). Though there is a difference from the theory of spin: The group action does not preserve flavor (in fact, the group action is specifically an exchange of flavour). When constructing a physical theory of nuclear forces, one could simply assume that it does not depend on isospin, although the total isospin should be conserved. The concept of isospin proved useful in classifying hadrons discovered in the 1950s and 1960s (see particle zoo), where particles with similar mass are assigned an SU(2) isospin multiplet. Strangeness and hypercharge The discovery of strange particles like the kaon led to a new quantum number that was conserved by the strong interaction: strangeness (or equivalently hypercharge). The Gell-Mann–Nishijima formula was identified in 1953, which relates strangeness and hypercharge with isospin and electric charge. The eightfold way and quark model Once the kaons and their property of strangeness became better understood, it started to become clear that these, too, seemed to be a part of an enlarged symmetry that contained isospin as a subgroup. The larger symmetry was named the Eightfold Way by Murray Gell-Mann, and was promptly recognized to correspond to the adjoint representation of SU(3). To better understand the origin of this symmetry, Gell-Mann proposed the existence of up, down and strange quarks which would belong to the fundamental representation of the SU(3) flavor symmetry. GIM-Mechanism and charm To explain the observed absence of flavor-changing neutral currents, the GIM mechanism was proposed in 1970, which introduced the charm quark and predicted the J/psi meson. The J/psi meson was indeed found in 1974, which confirmed the existence of charm quarks. This discovery is known as the November Revolution. The flavor quantum number associated with the charm quark became known as charm. Bottomness and topness The bottom and top quarks were predicted in 1973 in order to explain CP violation, which also implied two new flavor quantum numbers: bottomness and topness. See also Standard Model (mathematical formulation) Cabibbo–Kobayashi–Maskawa matrix Strong CP problem and chirality (physics) Chiral symmetry breaking and quark matter Quark flavour tagging, such as B-tagging, is an example of particle identification in experimental particle physics. References Further reading Lessons in Particle Physics Luis Anchordoqui and Francis Halzen, University of Wisconsin, 18th Dec. 2009 External links The particle data group. Physical quantities Standard Model Quantum chromodynamics Quark matter Conservation laws
Flavour (particle physics)
[ "Physics", "Mathematics" ]
3,656
[ "Standard Model", "Physical phenomena", "Physical quantities", "Equations of physics", "Quark matter", "Conservation laws", "Quantity", "Astrophysics", "Particle physics", "Nuclear physics", "Physical properties", "Symmetry", "Physics theorems" ]
1,928,503
https://en.wikipedia.org/wiki/Ernst%20Gehrcke
Ernst J. L. Gehrcke (1 July 1878 in Berlin – 25 January 1960 in Hohen-Neuendorf) was a German experimental physicist. He was director of the optical department at the Reich Physical and Technical Institute. Concurrently, he was a professor at the University of Berlin. He developed the Lummer–Gehrcke method in interferometry and the multiplex interferometric spectroscope for precision resolution of spectral-line structures. As an anti-relativist, he was a speaker at an event organized in 1920 by the Working Society of German Scientists. He sat on the board of trustees of the Potsdam Astrophysical Observatory. After World War II, he worked at Carl Zeiss Jena, and he helped to develop and become the director of the Institute for Physiological Optics at the University of Jena. In 1949, he began work at the German Office for Materials and Product Testing. In 1953, he became the director of the optical department of the German Office for Weights and Measures. Education Gehrcke studied at the Friedrich-Wilhelms-Universität (today, the Humboldt-Universität zu Berlin) from 1897 to 1901. He received his doctorate under Emil Warburg in 1901. Career In 1901, Gehrcke joined the Physikalisch-Technische Reichsanstalt (PTR, Reich Physical and Technical Institute, after 1945 renamed the Physikalisch-Technische Bundesanstalt). In 1926, he became the director of the optical department, a position he held until 1946. Concurrent with his position at the PTR, he was a Privatdozent at the Friedrich-Wilhelms-Universität from 1904 to 1921 and an außerordentlicher Professor (extraordinarius professor) from 1921 to 1946. After the close of World War II, the University was in the Russian sector of Berlin. In 1946, Gehrcke worked at Carl Zeiss AG in Jena, and he helped to develop and become the director of the Institute for Physiological Optics at the Friedrich-Schiller-Universität Jena. In 1949, he went to East Berlin to the Deutsches Amt für Materialprüfung (German Office for Materials and Product Testing). In 1953, he became the director of the optical department of the Deutsches Amt für Maß und Gewicht (DAMG, German Office for Weights and Measures) in East Berlin, the East German equivalent to the West German Physikalisch-Technische Bundesanstalt (Federal Physical and Technical Institute). Gehrcke contributed to the experimental techniques of interference spectroscopy (interferometry), physiological optics, and the physics of electrical discharges in gases. In 1903, with Otto Lummer, he developed the Lummer–Gehrcke method in interferometry. In 1927, with Ernst Gustav Lau, he developed the multiplex interferometric spectroscope for precision resolution of spectral-line structures. Like a number of other prominent physicists of the time (including the leading Dutch theoretician H. A. Lorentz) Gehrcke, an experimentalist, was not prepared to give up the concept of the luminiferous aether, and for this and various other reasons had been highly critical of Einstein's theories of relativity at least since 1911. This led to an invitation to an event organized in 1920 by Paul Weyland. Weyland, a radical political activist, professional agitator, small-time criminal, and editor of the vehemently anti-Semitic periodical Völkische Monatshefte, believed that Einstein's theories had been excessively promoted in the Berlin press, which he imagined was dominated by Jews who were sympathetic to Einstein's cause for other than scientific reasons. 
In response, Weyland organized the Arbeitsgemeinschaft deutscher Naturforscher zur Erhaltung reiner Wissenschaft (Working Group of German Natural Scientists for the Preservation of Pure Science), which was never officially registered. Weyland tried to enlist the support of some prominent conservative scientists, such as the Nobel Laureate Philipp Lenard, to build support for the Society (although Lenard declined to participate in Weyland's meetings). The Society held its first and only event on 24 August 1920, featuring lectures against Albert Einstein’s theory of relativity. Weyland gave the first presentation in which he accused Einstein of being a plagiarizer. Gehrcke gave the second and last talks, in which he presented detailed criticisms of Einstein's theories. Einstein attended the event with Walther Nernst. Max von Laue, Walther Nernst, and Heinrich Rubens published a brief and dignified response to the event, in the leading Berlin daily Tägliche Rundschau, on 26 August. Einstein published his own somewhat lengthy reply on 27 August, which he later came to regret. Rising anti-Semitism and antipathy to recent trends in theoretical physics (especially with respect to the theory of relativity and quantum mechanics) were key motivational factors for the Deutsche Physik movement. Under advice from some of his closest associates, Einstein later publicly challenged his critics to debate him in a more professional environment, and several of his scientific adversaries, including Gehrcke and Lenard, accepted. The ensuing debate took place at the 86th meeting of the German Society of Scientists and Physicians in Bad Nauheim on 20 September, chaired by Friedrich von Müller, with Hendrik Lorentz, Max Planck, and Hermann Weyl present. In this meeting Gehrcke pressed his criticism that Einstein's general theory of relativity now admitted superluminal velocities in rotating frames of reference, which the special theory of relativity had ruled out (see Criticism of the theory of relativity). The physics Nobel Laureate Philipp Lenard suggested Gehrcke for the Nobel Prize in Physics in 1921. From 1922 to 1925, Gehrcke was also a member of the Kuratorium (board of trustees) of the Potsdam Astrophysical Observatory. On 9 February 1922, Max Planck nominated Gehrcke, Max von Laue, G. Müller, Walther Nernst to sit on the Kuratorium, and they were installed by the Preußische Akademie der Wissenschaften (Prussian Academy of Sciences). Gehrcke represented the Physikalisch-Technische Reichsanstalt. During their appointment, they sat four times with Albert Einstein present. This was a surprising collaboration in view of what had happened just 18 months earlier at the gathering under the auspices of the Arbeitsgemeinschaft deutscher Naturforscher and the responses in the press by Einstein, Laue, and Nernst. Memberships Gehrcke was a member of professional organizations, which included: Deutsche Physikalische Gesellschaft (German Physical Society) Berlin Society of Anthropology, Ethnology and Prehistory Literature by Gehrcke Ernst Gehrcke and Rudolf Seeliger Über das Leuchten der Gase unter dem Einfluss von Kathodenstrahlen, Verh. D. Deutsch. Phys. Ges. (2) 15, 534–539 (1912), cited in Mehra, Volume 1, Part 2, p. 776. 
Gehrcke, Ernst Die gegen die Relativitätstheorie erhobenen Einwände, Die Naturwissenschaften Volume 1, 62–66 (1913) Gehrcke, Ernst Zur Kritik und Geschichte der neueren Gravitationstheorien, Annalen der Physik Volume 51, Number 4, 119 – 124 (1916) Gehrcke, Ernst Berichtigung zum Dialog über die Relativitätstheorie, Die Naturwissenschaften Volume 7, 147 – 148 (1919) Gehrcke, Ernst Zur Diskussion über den Äther, Zeitschrift der Physik Volume 2, 67 – 68 (1920) Gehrcke, Ernst Wie die Energieverteilung der schwarzen Strahlung in Wirklichkeit gefunden wurde, Physikalische Zeitschrift Volume 37, 439 – 440 (1936) Books by Gehrcke Gehrcke, Ernst (editor) Handbuch der physikalischen Optik. In zwei Bänden (Barth, 1927–1928) Bibliography Beyerchen, Alan D. Scientists Under Hitler: Politics and the Physics Community in the Third Reich (Yale, 1977) Einstein, Albert Meine Antwort. Über die anti-relativitätstheoretische G.M.b.H., Berliner Tageblatt Volume 49, Number 402, Morning Edition A, p. 1 (27 August 1920), translated and published as Document #1, Albert Einstein: My Reply. On the Anti-Relativity Theoretical Co., Ltd. [August 27, 1920] in Klaus Hentschel (editor) and Ann M. Hentschel (editorial assistant and translator) Physics and National Socialism: An Anthology of Primary Sources (Birkhäuser, 1996) pp. 1 – 5. Clark, Ronald W. Einstein: The Life and Times (World, 1971) Goenner, Hubert The Reaction to Relativity Theory I: The Anti-Einstein Campaign in Germany in 1920 pp. 107–136 in Mara Beller (editor), Robert S. Cohen (editor), and Jürgen Renn Einstein in Context (Cambridge, 1993) (paperback) Heilbron, J. L. The Dilemmas of an Upright Man: Max Planck and the Fortunes of German Science (Harvard, 2000) Hentschel, Klaus (Editor) and Ann M. Hentschel (Editorial Assistant and Translator) Physics and National Socialism: An Anthology of Primary Sources (Birkhäuser, 1996) Mehra, Jagdish, and Helmut Rechenberg The Historical Development of Quantum Theory. Volume 1 Part 2 The Quantum Theory of Planck, Einstein, Bohr and Sommerfeld 1900 – 1925: Its Foundation and the Rise of Its Difficulties. (Springer, 2001) van Dongen, Jeroen Reactionaries and Einstein’s Fame: “German Scientists for the Preservation of Pure Science,” Relativity, and the Bad Nauheim Meeting, Physics in Perspective Volume 9, Number 2, 212–230 (June, 2007). Institutional affiliations of the author: (1) Einstein Papers Project, California Institute of Technology, Pasadena, CA 91125, USA, and (2) Institute for History and Foundations of Science, Utrecht University, P.O. Box 80.000, 3508 TA Utrecht, The Netherlands. References 1878 births 1960 deaths 20th-century German physicists Relativity critics
Ernst Gehrcke
[ "Physics" ]
2,238
[ "Relativity critics", "Theory of relativity" ]
1,928,832
https://en.wikipedia.org/wiki/SO%2810%29
In particle physics, SO(10) refers to a grand unified theory (GUT) based on the spin group Spin(10). The shortened name SO(10) is conventional among physicists, and derives from the Lie algebra or less precisely the Lie group of SO(10), which is a special orthogonal group that is double covered by Spin(10). SO(10) subsumes the Georgi–Glashow and Pati–Salam models, and unifies all fermions in a generation into a single field. This requires 12 new gauge bosons, in addition to the 12 of SU(5) and 9 of SU(4)×SU(2)×SU(2). History Before the SU(5) theory behind the Georgi–Glashow model, Harald Fritzsch and Peter Minkowski, and independently Howard Georgi, found that all the matter contents are incorporated into a single representation, spinorial 16 of SO(10). However, Georgi found the SO(10) theory just a few hours before finding SU(5) at the end of 1973. Important subgroups It has the branching rules to [SU(5)×U(1)χ]/Z5. If the hypercharge is contained within SU(5), this is the conventional Georgi–Glashow model, with the 16 as the matter fields, the 10 as the electroweak Higgs field and the 24 within the 45 as the GUT Higgs field. The superpotential may then include renormalizable terms of the form Tr(45 ⋅ 45); Tr(45 ⋅ 45 ⋅ 45); 10 ⋅ 45 ⋅ 10, 10 ⋅ 16* ⋅ 16 and 16* ⋅ 16. The first three are responsible to the gauge symmetry breaking at low energies and give the Higgs mass, and the latter two give the matter particles masses and their Yukawa couplings to the Higgs. There is another possible branching, under which the hypercharge is a linear combination of an SU(5) generator and χ. This is known as flipped SU(5). Another important subgroup is either [SU(4) × SU(2)L × SU(2)R]/Z2 or Z2 ⋊ [SU(4) × SU(2)L × SU(2)R]/Z2 depending upon whether or not the left-right symmetry is broken, yielding the Pati–Salam model, whose branching rule is Spontaneous symmetry breaking The symmetry breaking of SO(10) is usually done with a combination of (( a 45H OR a 54H) AND ((a 16H AND a ) OR (a 126H AND a )) ). Let's say we choose a 54H. When this Higgs field acquires a GUT scale VEV, we have a symmetry breaking to Z2 ⋊ [SU(4) × SU(2)L × SU(2)R]/Z2, i.e. the Pati–Salam model with a Z2 left-right symmetry. If we have a 45H instead, this Higgs field can acquire any VEV in a two dimensional subspace without breaking the standard model. Depending on the direction of this linear combination, we can break the symmetry to SU(5)×U(1), the Georgi–Glashow model with a U(1) (diag(1,1,1,1,1,-1,-1,-1,-1,-1)), flipped SU(5) (diag(1,1,1,-1,-1,-1,-1,-1,1,1)), SU(4)×SU(2)×U(1) (diag(0,0,0,1,1,0,0,0,-1,-1)), the minimal left-right model (diag(1,1,1,0,0,-1,-1,-1,0,0)) or SU(3)×SU(2)×U(1)×U(1) for any other nonzero VEV. The choice diag(1,1,1,0,0,-1,-1,-1,0,0) is called the Dimopoulos-Wilczek mechanism aka the "missing VEV mechanism" and it is proportional to B−L. The choice of a 16H and a breaks the gauge group down to the Georgi–Glashow SU(5). The same comment applies to the choice of a 126H and a . It is the combination of BOTH a 45/54 and a 16/ or 126/ which breaks SO(10) down to the Standard Model. The electroweak Higgs and the doublet-triplet splitting problem The electroweak Higgs doublets come from an SO(10) 10H. Unfortunately, this same 10 also contains triplets. The masses of the doublets have to be stabilized at the electroweak scale, which is many orders of magnitude smaller than the GUT scale whereas the triplets have to be really heavy in order to prevent triplet-mediated proton decays. See doublet-triplet splitting problem. 
Among the solutions for it is the Dimopoulos-Wilczek mechanism, or the choice of diag(1,1,1,0,0,-1,-1,-1,0,0) of <45>. Unfortunately, this is not stable once the 16/ or 126/ sector interacts with the 45 sector. Content Matter The matter representations come in three copies (generations) of the 16 representation. The Yukawa coupling is 10H 16f 16f. This includes a right-handed neutrino. One may either include three copies of singlet representations and a Yukawa coupling (the "double seesaw mechanism"); or else, add the Yukawa interaction or add the nonrenormalizable coupling . See seesaw mechanism. The 16f field branches to [SU(5)×U(1)χ]/Z5 and SU(4) × SU(2)L × SU(2)R as Gauge fields The 45 field branches to [SU(5)×U(1)χ]/Z5 and SU(4) × SU(2)L × SU(2)R as and to the standard model [SU(3)C × SU(2)L × U(1)Y]/Z6 as The four lines are the SU(3)C, SU(2)L, and U(1)B−L bosons; the SU(5) leptoquarks which don't mutate X charge; the Pati-Salam leptoquarks and SU(2)R bosons; and the new SO(10) leptoquarks. (The standard electroweak U(1)Y is a linear combination of the bosons.) Proton decay Note that SO(10) contains both the Georgi–Glashow SU(5) and flipped SU(5). Anomaly free from local and global anomalies It has been long known that the SO(10) model is free from all perturbative local anomalies, computable by Feynman diagrams. However, it only became clear in 2018 that the SO(10) model is also free from all nonperturbative global anomalies on non-spin manifolds --- an important rule for confirming the consistency of SO(10) grand unified theory, with a Spin(10) gauge group and chiral fermions in the 16-dimensional spinor representations, defined on non-spin manifolds. See also Flipped SO(10) Notes Grand Unified Theory
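As a quick consistency check on the SU(5) branchings mentioned above, the following sketch verifies that the standard decompositions of the spinorial 16 and the adjoint 45 fill out the correct dimensions; the decompositions used are the textbook ones (16 → 10 ⊕ 5̄ ⊕ 1, 45 → 24 ⊕ 10 ⊕ 10̄ ⊕ 1), with the U(1)χ charges omitted.

```python
# Dimension bookkeeping for the SU(5) decompositions of two SO(10) irreps
# (a sketch; the U(1)_chi charges carried by each piece are omitted).
branchings = {
    "16 (spinor)":  (16, [10, 5, 1]),        # 16 -> 10 + 5bar + 1
    "45 (adjoint)": (45, [24, 10, 10, 1]),   # 45 -> 24 + 10 + 10bar + 1
}
for name, (dim, parts) in branchings.items():
    total = sum(parts)
    assert total == dim, (name, total)
    print(f"{name}: {' + '.join(map(str, parts))} = {total}")
```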
SO(10)
[ "Physics" ]
1,638
[ "Unsolved problems in physics", "Physics beyond the Standard Model", "Grand Unified Theory" ]
1,929,416
https://en.wikipedia.org/wiki/Doctor%20Light%20%28Kimiyo%20Hoshi%29
Doctor Light is a superhero appearing in comic books published by DC Comics. Kimiyo Hoshi is a distinct character from the villain of the same name. She has, however, crossed paths with the villainous Doctor Light on several occasions. Doctor Light appeared in the sixth season of the television series The Flash, portrayed by Emmie Nagata. Publication history Doctor Light first appeared in Crisis on Infinite Earths #4 and was created by Marv Wolfman and George Pérez. Fictional character biography Kimiyo Tazu Hoshi, a brilliant but overly-driven scientist, was the supervising astronomer at an observatory in Japan during Crisis on Infinite Earths. The Monitor gives her light-based powers to help battle the Anti-Monitor. Doctor Light has joined the Justice League a few times over the years, most notably as a member of Justice League Europe during the latter half of its incarnation. She also joins the Doom Patrol for a time and enters a relationship with Global Guardians member Rising Sun. Infinite Crisis and after In Green Arrow (vol. 3) #54 (November 2005), following his recovery from the mind-wipe he suffered at the hands of the Justice League, Arthur Light, the villainous male Doctor Light attacked Doctor Hoshi and temporarily depowers her. One Year Later and 52 An article discussing the destruction of Star City (and, by extension, Kimiyo's loss of power) appeared at the 52 website, which is designed to complement the weekly comic series. The article places a date on the city's destruction, which was depicted in the final 2 Pre-OYL Green Arrow arcs, specifying that the event took place on May 15. Problematically, this dating places the story after the events depicted in Infinite Crisis. Given this dating, Kimiyo's loss of power took place during the events of 52 - Week 2 which, given Kimiyo and Green Arrow's appearances at the end of 52 - Week 1, would appear to make sense, although it in turn makes nonsense of information contained in Green Arrow vol. 3, #54, where it is revealed that Kimiyo has not used her powers for two years. The story arc also concludes with Green Arrow experiencing a strange multiplying effect that places the story during Infinite Crisis, not two weeks after the event's conclusion (several other characters in the DCU experienced this effect in the issue of their titles that immediately preceded the OYL jump). Kimiyo Hoshi appeared in costume in 52 Week 35, alongside various other heroes. All are assisting the injured victims of Lex Luthor, who had caused a rain of 'supermen' by deactivating their powers. She is also shown in 52 Week 50, in the climactic battle of World War III. Dr. Light appears in World War III: United We Stand, the fourth issue of the World War III mini-series that coincided with 52 Week 50. She is one of the first wave of heroes who confront (and are taken down by) Black Adam. He grasps her neck with such force that she instantly blacks out; he throws her aside. Geoff Johns has revealed on his message board that he was working on storylines involving Doctor Light. Oracle invites Kimiyo to join the Birds of Prey (issue #100), but she was not selected to take part in the first mission. She does, however, appear in Birds of Prey #113 (January 2008), assisting Oracle by scanning the electromagnetic spectrum for any evidence that might lead her to the parties responsible for an influx of hi-tech weaponry being smuggled into Metropolis. She is unable to locate any such evidence. 
Doctor Light is only occasionally active in the superhero community because she is a single mother with two children: Imako, her daughter, and Yasu, her son. Gail Simone confirmed in an interview that Kimiyo's children have not been retconned out of existence by changes to DC continuity. Doctor Light works in S.T.A.R. Labs and has an interior monologue about the erratic fluctuations in her powers that lead to her retirement from being a superhero. Upon returning home from work, she is ambushed by the Shadow Cabinet. After briefly talking with the heroes, she becomes enraged and attacks them after coming to believe they have harmed her children, only to be quickly neutralized and captured. This is later revealed to have been orchestrated by Superman and Icon so that the League and Cabinet could gain information on each other. Hardware uses Arthur Light's powers to restore those of Kimiyo, allowing her to quickly defeat Shadow Thief and Starbreaker. Kimiyo has been confirmed to be a member of the newest incarnation of the Justice League. In Blackest Night, Kimiyo is attacked by Arthur Light's Black Lantern form and destroys him with a burst of light. Afterward, Donna Troy, Cyborg, Dick Grayson, and Starfire join the Justice League. With the costume given to her by Hardware destroyed, Kimiyo designs a new one and travels to Metropolis to recruit Mon-El and Guardian. Kimiyo briefly appears during the War of the Supermen, where she and the rest of the JLA attempt to repel General Zod's invasion. After just three issues together, the new JLA team loses most of its members, with Kimiyo temporarily leaving the team to be with her children. Back in Metropolis, Kimiyo helps Supergirl rescue Lana Lang after the Insect Queen possesses her. A short time later, Kimiyo and Gangbuster battle Supergirl's Bizarro counterpart, a refugee from Bizarro World. Despite resigning from active duty, Doctor Light remains with the Justice League as a reserve member. DC Rebirth Kimiyo appears in the Rebirth storyline Heroes in Crisis and is among many superheroes that are interviewed in the Sanctuary therapy center. Furthermore, she and Arthur Light were formerly married, during which they had three children: Tommy, Emma, and Sakura. Powers and abilities Doctor Light is a metahuman who can generate and manipulate light energy. This enables her to generate illusions, become invisible, fly at light-speeds, and teleport. Hoshi is also a skilled scientist and astronomer. Other versions An alternate universe variant of Doctor Light appears in JLA/The 99. An alternate universe variant of Doctor Light who is a member of H.I.V.E. appears in Flashpoint. An alternate universe variant of Kimiyo Hoshi appears in DC Comics Bombshells. This version is a scientist for Amanda Waller's eponymous Bombshells project and is in a relationship with Big Barda. In other media Television Doctor Light appears in Justice League Unlimited, voiced by Lauren Tom. This version is a member of the Justice League. Two incarnations of Doctor Light appear in The Flash: The first incarnation appears in the second season, portrayed by Malese Jow. This version is the Earth-2 counterpart of Linda Park and a criminal working for Zoom. Kimiyo Hoshi / Doctor Light appears in the sixth season, portrayed by Emmie Nagata. This version is a metahuman assassin armed with a UV gun who initially works for the organization Black Hole before defecting to work for Eva McCulloch. Doctor Light makes non-speaking cameo appearances in Justice League Action. 
Film An alternate universe variant of Kimiyo Hoshi appears in Justice League: Gods and Monsters. This version was a member of Lex Luthor's "Project Fair Play", a contingency program meant to destroy their universe's Justice League if necessary, until the Metal Men kill her and the other scientists involved. Doctor Light makes a minor non-speaking appearance in DC Super Hero Girls: Hero of the Year. Doctor Light appears in Justice League: Crisis on Infinite Earths, voiced by Erika Ishii. Video games Doctor Light appears in DC Universe Online. Doctor Light appears as a character summon in Scribblenauts Unmasked: A DC Comics Adventure. Miscellaneous An alternate universe incarnation of Doctor Light makes a cameo appearance in Teen Titans Go! #48. This version is the heroic counterpart of Arthur Light and a member of the Brotherhood of Justice. Doctor Light makes non-speaking background appearances in DC Super Hero Girls as a student of Super Hero High. References External links DCU Guide: Doctor Light (Kimiyo Hoshi) Incandescent: Losing the Light DCAU: Doctor Light (Kimiyo Hoshi) ComicVine: Doctor Light (Kimiyo Hoshi) Buddhist superheroes Characters created by George Pérez Characters created by Marv Wolfman Comics characters introduced in 1985 DC Comics female superheroes DC Comics metahumans DC Comics scientists Fictional astronomers Fictional characters who can manipulate light Fictional physicians Japanese superheroes
Doctor Light (Kimiyo Hoshi)
[ "Astronomy" ]
1,775
[ "Astronomers", "Fictional astronomers" ]
13,041,514
https://en.wikipedia.org/wiki/Neocatastrophism
Neocatastrophism is the hypothesis that life-exterminating events such as gamma-ray bursts have acted as a galactic regulation mechanism in the Milky Way upon the emergence of complex life in its habitable zone. It is one of several proposed solutions to the Fermi paradox since it provides a mechanism which would have delayed the advent of intelligent beings in local galaxies near Earth. The problem It is estimated that Earth-like planets in the Milky Way started forming 9 billion years ago, and that their median age is 6.4 ± 0.7 Ga. Moreover, 75% of stars in the galactic habitable zone are older than the Sun. This makes it more likely than not that a planet hosting evolved intelligent life is older than the Earth (4.54 Ga). This creates an observational dilemma, since even slower-than-light interstellar travel could in theory allow the galaxy to be colonized in only 5 to 50 million years. This leads to a conundrum first posed in 1950 by the physicist Enrico Fermi in his namesake paradox: "Why are no aliens or their artifacts physically here?" The neocatastrophism resolution The hypothesis posits that astrobiological evolution is subject to regulation mechanisms that arrest or postpone the advent of complex creatures with technology capable of interstellar communication and travel. These regulation mechanisms act to temporarily sterilize planets of biology in the galactic habitable zone. The main proposed regulation mechanism is gamma-ray bursts. Part of the neocatastrophism hypothesis is that stellar evolution produces a decreasing frequency of such catastrophic events, increasing the length of the "window" in which intelligent life might arise as galaxies age. According to modeling, this creates the possibility of a phase transition at which point a galaxy turns from a place that is essentially dead (with a few pockets of simple life) to one that is crowded with complex life forms. See also Anthropic principle Drake equation Goldilocks Principle Great Filter Mediocrity principle Planetary habitability Rare Earth hypothesis References Astrobiology Origin of life Fermi paradox Biological hypotheses Disasters Astronomical controversies Astronomical hypotheses Gamma-ray bursts
Neocatastrophism
[ "Physics", "Astronomy", "Biology" ]
435
[ "Astronomical hypotheses", "Physical phenomena", "Origin of life", "History of astronomy", "Speculative evolution", "Astronomical events", "Astrobiology", "Gamma-ray bursts", "Astronomical controversies", "Fermi paradox", "Biological hypotheses", "Stellar phenomena", "Astronomical sub-discip...
13,047,079
https://en.wikipedia.org/wiki/Slope%20deflection%20method
The slope deflection method is a structural analysis method for beams and frames introduced in 1914 by George A. Maney. The slope deflection method was widely used for more than a decade until the moment distribution method was developed. In the book, "The Theory and Practice of Modern Framed Structures", written by J.B. Johnson, C.W. Bryan and F.E. Turneaure, it is stated that this method was first developed "by Professor Otto Mohr in Germany, and later developed independently by Professor G.A. Maney". According to this book, Professor Otto Mohr introduced this method for the first time in his book, "Evaluation of Trusses with Rigid Node Connections" or "Die Berechnung der Fachwerke mit Starren Knotenverbindungen". Introduction By forming slope deflection equations and applying joint and shear equilibrium conditions, the rotation angles (or the slope angles) are calculated. Substituting them back into the slope deflection equations, the member end moments are readily determined. The deformation of a member is assumed to be due to the bending moment only. Slope deflection equations For a member AB of length L and flexural rigidity EI, the slope deflection equations relate the member end moments to the end rotations θA and θB, the chord rotation ψ, and the fixed end moments: MAB = (2EI/L)(2θA + θB − 3ψ) + FEMAB and MBA = (2EI/L)(2θB + θA − 3ψ) + FEMBA. The slope deflection equations can also be written using the stiffness factor K = I/L and the chord rotation ψ: MAB = 2EK(2θA + θB − 3ψ) + FEMAB. Derivation of slope deflection equations When a simple beam of length L and flexural rigidity EI is loaded at each end with clockwise moments MAB and MBA, member end rotations occur in the same direction. These rotation angles can be calculated using the unit force method or the moment-area theorem. Rearranging these equations, the slope deflection equations are derived. Equilibrium conditions Joint equilibrium Joint equilibrium conditions imply that each joint with a degree of freedom should have no unbalanced moments, i.e. be in equilibrium. Therefore, at each joint the member end moments (expressed through the slope deflection equations, which contain the fixed end moments) must balance the external moments directly applied at the joint. Shear equilibrium When there are chord rotations in a frame, additional equilibrium conditions, namely the shear equilibrium conditions, need to be taken into account. Example The statically indeterminate beam shown in the figure is to be analysed. Members AB, BC, and CD have the same length, and their flexural rigidities are EI, 2EI, and EI respectively. A concentrated load acts on AB at a given distance from support A, a uniform load acts along BC, and member CD carries a concentrated load at its midspan. In the following calculations, clockwise moments and rotations are positive. Degrees of freedom The rotation angles θA, θB, θC of joints A, B, C, respectively, are taken as the unknowns. There are no chord rotations due to other causes, including support settlement. Fixed end moments The fixed end moments are determined from the applied member loads. Slope deflection equations The slope deflection equations are constructed for each member end in terms of θA, θB, θC and the fixed end moments. Joint equilibrium equations Joints A, B, C must satisfy the equilibrium condition, yielding three simultaneous equations. Rotation angles The rotation angles are calculated from the simultaneous equations above. Member end moments Substitution of these values back into the slope deflection equations yields the member end moments (in kNm). See also Beam theory Notes References Structural analysis
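To make the procedure concrete, here is a minimal sketch that assembles and solves the joint equilibrium equations for a three-span beam of the kind analysed above; the span length, rigidities, support conditions, and fixed end moments are hypothetical placeholders rather than the values of the original worked example.

```python
# Sketch of the slope-deflection procedure for a three-span beam A-B-C-D.
# Placeholder data; clockwise moments and rotations are taken positive,
# joint D is assumed fixed (theta_D = 0) and joint A free of external moment.
import numpy as np

L, EI = 10.0, 1.0                       # hypothetical span length and rigidity
k = {"AB": EI, "BC": 2 * EI, "CD": EI}  # member flexural rigidities

# Hypothetical fixed end moments (kNm) produced by the member loads.
FEM = {("A", "B"): -9.0, ("B", "A"): +9.0,
       ("B", "C"): -12.0, ("C", "B"): +12.0,
       ("C", "D"): -6.25, ("D", "C"): +6.25}

def member_of(n, f):
    return n + f if n + f in k else f + n

def end_moment(n, f, th_n, th_f):
    """Slope-deflection equation, no chord rotation: M_nf = (2EI/L)(2*th_n + th_f) + FEM_nf."""
    return 2 * k[member_of(n, f)] / L * (2 * th_n + th_f) + FEM[(n, f)]

# Unknowns x = [theta_A, theta_B, theta_C].
# Joint equilibrium: M_AB = 0, M_BA + M_BC = 0, M_CB + M_CD = 0.
A = (2.0 / L) * np.array([
    [2 * k["AB"],               k["AB"],                      0.0],
    [    k["AB"], 2 * k["AB"] + 2 * k["BC"],              k["BC"]],
    [        0.0,               k["BC"], 2 * k["BC"] + 2 * k["CD"]],
])
rhs = -np.array([FEM[("A", "B")],
                 FEM[("B", "A")] + FEM[("B", "C")],
                 FEM[("C", "B")] + FEM[("C", "D")]])
thA, thB, thC = np.linalg.solve(A, rhs)
theta = {"A": thA, "B": thB, "C": thC, "D": 0.0}

for n, f in FEM:
    print(f"M_{n}{f} = {end_moment(n, f, theta[n], theta[f]):+7.2f} kNm")
```

Back-substituting the solved rotations into the slope deflection equations then gives all member end moments, mirroring the steps of the example.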
Slope deflection method
[ "Engineering" ]
630
[ "Structural engineering", "Structural analysis", "Mechanical engineering", "Aerospace engineering" ]
13,048,500
https://en.wikipedia.org/wiki/Vantieghems%20theorem
In number theory, Vantieghem's theorem is a primality criterion. It states that a natural number n ≥ 3 is prime if and only if ∏_{k=1}^{n−1} (2^k − 1) ≡ n (mod 2^n − 1). Similarly, n is prime if and only if an analogous congruence holds for polynomials in X (the criterion can be stated in two equivalent polynomial forms). Example Let n = 7, forming the product 1*3*7*15*31*63 = 615195. Since 2^7 − 1 = 127 and 615195 ≡ 7 (mod 127), 7 is prime. Let n = 9, forming the product 1*3*7*15*31*63*127*255 = 19923090075. Since 2^9 − 1 = 511 and 19923090075 ≡ 301 (mod 511), with 301 ≠ 9, 9 is composite. References An article with proof and generalizations. Factorial and binomial topics Modular arithmetic Theorems about prime numbers
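The criterion translates directly into a few lines of code; the sketch below keeps the running product reduced modulo 2^n − 1 so the intermediate numbers stay small, and reproduces the two worked examples.

```python
# Direct implementation of the criterion stated above (a sketch):
# n >= 3 is prime iff  prod_{k=1}^{n-1} (2^k - 1)  is congruent to  n  (mod 2^n - 1).
def is_prime_vantieghem(n: int) -> bool:
    if n < 3:
        raise ValueError("the criterion applies to n >= 3")
    modulus = (1 << n) - 1          # 2^n - 1
    product = 1
    for k in range(1, n):
        product = product * ((1 << k) - 1) % modulus   # keep the product reduced
    return product == n % modulus

print(is_prime_vantieghem(7))   # True  (615195 leaves remainder 7 mod 127)
print(is_prime_vantieghem(9))   # False (19923090075 leaves remainder 301 mod 511)
print([m for m in range(3, 40) if is_prime_vantieghem(m)])
```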
Vantieghems theorem
[ "Mathematics" ]
162
[ "Factorial and binomial topics", "Combinatorics", "Theorems about prime numbers", "Theorems in number theory", "Arithmetic", "Modular arithmetic", "Number theory" ]
13,052,519
https://en.wikipedia.org/wiki/Naphthalocyanine
Naphthalocyanine is a cross-shaped organic molecule consisting of 48 carbon, 8 nitrogen and 26 hydrogen atoms. It is a derivative of phthalocyanine, differing by having 4 extra carbon rings, one on each "arm." IBM Research labs used it for developing single-molecule logic switches and visualizing charge distribution in a single molecule. Naphthalocyanine derivatives have a potential use in photodynamic cancer treatment. References External links Timmer, J. (2007) Storing data in molecules: shifting atoms and flipping bits, ars technica online [accessed 8 September 2007] Phthalocyanines Molecular electronics
Naphthalocyanine
[ "Chemistry", "Materials_science" ]
133
[ "Nanotechnology", "Molecular physics", "Molecular electronics" ]
13,053,232
https://en.wikipedia.org/wiki/Hexafluoroacetylacetone
Hexafluoroacetylacetone is the chemical compound with the nominal formula CF3C(O)CH2C(O)CF3 (often abbreviated as hfacH). This colourless liquid is a ligand precursor and a reagent used in MOCVD. The compound exists exclusively as the enol CF3C(OH)=CHC(O)CF3. For comparison under the same conditions, acetylacetone is 85% enol. Metal complexes of the conjugate base exhibit enhanced volatility and Lewis acidity relative to analogous complexes derived from acetylacetone. The visible spectra of bis(hexafluoroacetylacetonato)copper(II) and its dehydrate have been reported in carbon tetrachloride. Compounds of the type bis(hexafluoroacetylacetonato)copper(II):Bn , where :B are Lewis bases such as N,N-dimethylacetamide, dimethyl sulfoxide, or pyridine and n = 1 or 2, have been prepared. Since bis(hexafluoroacetylacetonato)copper(II) is soluble in carbon tetrachloride, its Lewis acid properties have been studied for 1:1 adducts using a variety of Lewis bases. This organofluorine compound was first prepared by the condensation of ethyl ester of trifluoroacetic acid and 1,1,1-trifluoroacetone. It has been investigated as an etchant for copper and its complexes, such as Cu(Hfac)(trimethylvinylsilane) have been employed as precursors in microelectronics. Being highly electrophilic, hexafluoroacetylacetone is hydrated in water to give the tetraol. References Chelating agents Diketones Trifluoromethyl compounds Enols 3-Hydroxypropenals
Hexafluoroacetylacetone
[ "Chemistry" ]
418
[ "Enols", "Chelating agents", "Functional groups", "Process chemicals" ]
19,901,803
https://en.wikipedia.org/wiki/Budgeted%20cost%20of%20work%20performed
Budgeted cost of work performed (BCWP) also called earned value (EV), is the budgeted cost of work that has actually been performed in carrying out a scheduled task during a specific time period. The BCWP is the sum of the budgets for completed work packages and completed portions of open work packages, plus the applicable portion of the budgets for level of effort and apportioned effort. (The items identified in the Work breakdown structure plus overhead costs, plus costs related in proportion to the planning and performance.) According to the PMBOK (7th edition) by the Project Management Institute (PMI), Earned Value (EV) is defined as the "measure of work performed expressed in terms of the budget authorized for that work." BCWP is a term in Earned value management approach to Project management. BCWP is contrasted to Budgeted Cost of Work Scheduled (BCWS) also called Planned Value (PV). BCWS is the sum of the budget items for all work packages, planning packages, and overhead which was scheduled for the period, rather than the cost of the work actually performed. BCWP is also contrasted to Actual Cost of Work Performed (ACWP) which measures the actual amount spent rather than the budgeted estimates. Example To illustrate the difference between the three terms, assume that a schedule contains a task "Test hardware" estimated to run from 1 January to 10 January and to cost $1000, and that this is a simple effort with no overhead or allocated costs. However on 5 January, halfway through the time allowed, the work is 30% complete and has spent $250. BCWP is $1000 (budgeted cost) times 30% (work performed), or $300 BCWS is $1000 (budgeted cost) times 50% (scheduled amount), or $500 ACWP is $250 The comparison in Earned value management would view this as behind schedule and costing less overall than expected. The detailed calculation should multiply % complete of each task (completed or in progress) by its planned value See also Glossary of project management Earned value management Citations References Cost engineering Cost of work scheduled
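The example can be reproduced with a few lines of arithmetic; the sketch below also computes the schedule and cost variances, which are standard earned-value quantities rather than part of the example itself.

```python
# Minimal sketch of the earned-value quantities from the example above.
budget_at_completion = 1000.0   # budgeted cost of the "Test hardware" task ($)
percent_complete = 0.30         # work actually performed by 5 January
percent_scheduled = 0.50        # work that was planned to be done by 5 January
actual_cost = 250.0             # money actually spent ($)

bcwp = budget_at_completion * percent_complete    # earned value (EV)  -> 300.0
bcws = budget_at_completion * percent_scheduled   # planned value (PV) -> 500.0
acwp = actual_cost                                # actual cost (AC)   -> 250.0

schedule_variance = bcwp - bcws   # negative -> behind schedule
cost_variance = bcwp - acwp       # positive -> under budget so far
print(bcwp, bcws, acwp, schedule_variance, cost_variance)
```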
Budgeted cost of work performed
[ "Engineering" ]
440
[ "Cost engineering" ]
19,903,176
https://en.wikipedia.org/wiki/Closed%20geodesic
In differential geometry and dynamical systems, a closed geodesic on a Riemannian manifold is a geodesic that returns to its starting point with the same tangent direction. It may be formalized as the projection of a closed orbit of the geodesic flow on the tangent space of the manifold. Definition In a Riemannian manifold (M,g), a closed geodesic is a curve γ : R → M that is a geodesic for the metric g and is periodic. Closed geodesics can be characterized by means of a variational principle. Denoting by ΛM the space of smooth 1-periodic curves on M, closed geodesics of period 1 are precisely the critical points of the energy function E : ΛM → R, defined by E(γ) = (1/2) ∫_0^1 g_{γ(t)}(γ′(t), γ′(t)) dt. If γ is a closed geodesic of period p, the reparametrized curve t ↦ γ(pt) is a closed geodesic of period 1, and therefore it is a critical point of E. If γ is a critical point of E, so are the reparametrized curves γ_m, for each positive integer m, defined by γ_m(t) = γ(mt). Thus every closed geodesic on M gives rise to an infinite sequence of critical points of the energy E. Examples On the n-dimensional unit sphere with the standard metric, every geodesic – a great circle – is closed. On a smooth surface topologically equivalent to the sphere, this may not be true, but there are always at least three simple closed geodesics; this is the theorem of the three geodesics. Manifolds all of whose geodesics are closed have been thoroughly investigated in the mathematical literature. On a compact hyperbolic surface, whose fundamental group has no torsion, closed geodesics are in one-to-one correspondence with non-trivial conjugacy classes of elements in the Fuchsian group of the surface. See also Lyusternik–Fet theorem Theorem of the three geodesics Curve-shortening flow Selberg trace formula Selberg zeta function Zoll surface References Besse, A.: "Manifolds all of whose geodesics are closed", Ergebnisse Grenzgeb. Math., no. 93, Springer, Berlin, 1978. Differential geometry Dynamical systems Geodesic (mathematics)
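A numerical illustration of the energy functional may help: the sketch below approximates E for two closed curves on the round 2-sphere, a great circle (a closed geodesic) and a wobbling, non-geodesic closed curve; the finite-difference discretization is an assumption of the sketch, not part of the definition.

```python
# Numerical illustration of the energy functional E on closed curves in the
# round 2-sphere (embedded in R^3), via a simple finite-difference approximation.
import numpy as np

def energy(curve):
    """Discrete E(gamma) = 1/2 * integral over one period of |gamma'(t)|^2."""
    velocities = (np.roll(curve, -1, axis=0) - curve) * len(curve)  # dt = 1/N
    return 0.5 * np.mean(np.sum(velocities**2, axis=1))

N = 1000
t = np.linspace(0.0, 1.0, N, endpoint=False)

# A great circle (a closed geodesic) traversed once...
great_circle = np.stack([np.cos(2*np.pi*t), np.sin(2*np.pi*t), np.zeros(N)], axis=1)
# ...and a nearby non-geodesic closed curve on the sphere (oscillating latitude).
phi = 0.3 * np.sin(4*np.pi*t)
wobble = np.stack([np.cos(2*np.pi*t)*np.cos(phi),
                   np.sin(2*np.pi*t)*np.cos(phi),
                   np.sin(phi)], axis=1)

print("E(great circle) ~", energy(great_circle))  # approximately 2*pi^2
print("E(wobbly curve) ~", energy(wobble))        # comes out larger
```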
Closed geodesic
[ "Physics", "Mathematics" ]
448
[ "Mechanics", "Dynamical systems" ]
19,905,510
https://en.wikipedia.org/wiki/Vapor%E2%80%93liquid%E2%80%93solid%20method
The vapor–liquid–solid method (VLS) is a mechanism for the growth of one-dimensional structures, such as nanowires, from chemical vapor deposition. The growth of a crystal through direct adsorption of a gas phase on to a solid surface is generally very slow. The VLS mechanism circumvents this by introducing a catalytic liquid alloy phase which can rapidly adsorb a vapor to supersaturation levels, and from which crystal growth can subsequently occur from nucleated seeds at the liquid–solid interface. The physical characteristics of nanowires grown in this manner depend, in a controllable way, upon the size and physical properties of the liquid alloy. Historical background The VLS mechanism was proposed in 1964 as an explanation for silicon whisker growth from the gas phase in the presence of a liquid gold droplet placed upon a silicon substrate. The explanation was motivated by the absence of axial screw dislocations in the whiskers (which in themselves are a growth mechanism), the requirement of the gold droplet for growth, and the presence of the droplet at the tip of the whisker during the entire growth process. Introduction The VLS mechanism is typically described in three stages: Preparation of a liquid alloy droplet upon the substrate from which a wire is to be grown Introduction of the substance to be grown as a vapor, which adsorbs on to the liquid surface, and diffuses into the droplet Supersaturation and nucleation at the liquid/solid interface leading to axial crystal growth Experimental technique The VLS process takes place as follows: A thin (~1–10 nm) Au film is deposited onto a silicon (Si) wafer substrate by sputter deposition or thermal evaporation. The wafer is annealed at temperatures higher than the Au-Si eutectic point, creating Au-Si alloy droplets on the wafer surface (the thicker the Au film, the larger the droplets). Mixing Au with Si greatly reduces the melting temperature of the alloy as compared to the alloy constituents. The melting temperature of the Au:Si alloy reaches a minimum (~363 °C) when the ratio of its constituents is 4:1 Au:Si, also known as the Au:Si eutectic point. Lithography techniques can also be used to controllably manipulate the diameter and position of the droplets (and as you will see below, the resultant nanowires). One-dimensional crystalline nanowires are then grown by a liquid metal-alloy droplet-catalyzed chemical or physical vapor deposition process, which takes place in a vacuum deposition system. Au-Si droplets on the surface of the substrate act to lower the activation energy of normal vapor-solid growth. For example, Si can be deposited by means of a SiCl4:H2 gaseous mixture reaction (chemical vapor deposition), only at temperatures above 800 °C, in normal vapor-solid growth. Moreover, below this temperature almost no Si is deposited on the growth surface. However, Au particles can form Au-Si eutectic droplets at temperatures above 363 °C and adsorb Si from the vapor state (because Au can form a solid-solution with all Si concentrations up to 100%) until reaching a supersaturated state of Si in Au. Furthermore, nanosized Au-Si droplets have much lower melting points (ref) because the surface area-to-volume ratio is increasing, becoming energetically unfavorable, and nanometer-sized particles act to minimize their surface energy by forming droplets (spheres or half-spheres). 
Si has a much higher melting point (~1414 °C) than that of the eutectic alloy, therefore Si atoms precipitate out of the supersaturated liquid-alloy droplet at the liquid-alloy/solid-Si interface, and the droplet rises from the surface. This process is illustrated in figure 1. Typical features of the VLS method Greatly lowered reaction energy compared to normal vapor-solid growth. Wires grow only in the areas activated by the metal catalysts and the size and position of the wires are determined by that of the metal catalysts. This growth mechanism can also produce highly anisotropic nanowire arrays from a variety of material. Requirements for catalyst particles The requirements for catalysts are: It must form a liquid solution with the crystalline material to be grown at the nanowire growth temperature. The solid solubility of the catalyzing agent is low in the solid and liquid phases of the substrate material. The equilibrium vapor pressure of the catalyst over the liquid alloy must be small so that the droplet does not vaporize, shrink in volume (and therefore radius), and decrease the radius of the growing wire until, ultimately, growth is terminated. The catalyst must be inert (non-reacting) to the reaction products (during CVD nanowire growth). The vapor–solid, vapor–liquid, and liquid–solid interfacial energies play a key role in the shape of the droplets and therefore must be examined before choosing a suitable catalyst; small contact angles between the droplet and solid are more suitable for large area growth, while large contact angles result in the formation of smaller (decreased radius) whiskers. The solid-liquid interface must be well-defined crystallographically in order to produce highly directional growth of nanowires. It is also important to point out that the solid-liquid interface cannot, however, be completely smooth. Furthermore, if the solid liquid interface was atomically smooth, atoms near the interface trying to attach to the solid would have no place to attach to until a new island nucleates (atoms attach at step ledges), leading to an extremely slow growth process. Therefore, “rough” solid surfaces, or surfaces containing a large number of surface atomic steps (ideally 1 atom wide, for large growth rates) are needed for deposited atoms to attach and nanowire growth to proceed. Growth mechanism Catalyst droplet formation The materials system used, as well as the cleanliness of the vacuum system and therefore the amount of contamination and/or the presence of oxide layers at the droplet and wafer surface during the experiment, both greatly influence the absolute magnitude of the forces present at the droplet/surface interface and, in turn, determine the shape of the droplets. The shape of the droplet, i.e. the contact angle (β0, see Figure 4) can, be modeled mathematically, however, the actual forces present during growth are extremely difficult to measure experimentally. Nevertheless, the shape of a catalyst particle at the surface of a crystalline substrate is determined by a balance of the forces of surface tension and the liquid–solid interface tension. The radius of the droplet varies with the contact angle as: where r0 is the radius of the contact area and β0 is defined by a modified Young’s equation: , It is dependent on the surface (σs) and liquid–solid interface (σls) tensions, as well as an additional line tension (τ) which comes into effect when the initial radius of the droplet is small (nanosized). 
As a nanowire begins to grow, its height increases by an amount dh and the radius of the contact area decreases by an amount dr (see Figure 4). As the growth continues, the inclination angle at the base of the nanowires (α, set as zero before whisker growth) increases, as does β0. The line tension therefore greatly influences the catalyst contact area. The most important result from this conclusion is that different line tensions will result in different growth modes. If the line tensions are too large, nanohillock growth will result and thus stop the growth. Nanowhisker diameter The diameter of the nanowire which is grown depends upon the properties of the alloy droplet. The growth of nano-sized wires requires nano-size droplets to be prepared on the substrate. In an equilibrium situation this is not possible, as the minimum radius of a metal droplet is given by rmin = 2Vlσlv/(RT ln s), where Vl is the molar volume of the droplet, σlv the liquid-vapor surface energy, and s is the degree of supersaturation of the vapor. This equation restricts the minimum diameter of the droplet, and of any crystals which can be grown from it, under typical conditions to well above the nanometer level. Several techniques to generate smaller droplets have been developed, including the use of monodispersed nanoparticles spread in low dilution on the substrate, and the laser ablation of a substrate-catalyst mixture so as to form a plasma which allows well-separated nanoclusters of the catalyst to form as the system cools. Whisker growth kinetics During VLS whisker growth, the rate at which whiskers grow is dependent on the whisker diameter: the larger the whisker diameter, the faster the nanowire grows axially. This is because the supersaturation of the metal-alloy catalyst (Δμ) is the main driving force for nanowhisker growth and decreases with decreasing whisker diameter (also known as the Gibbs-Thomson effect): Δμ = Δμ0 − 4Ωα/d, where d is the whisker diameter. Again, Δμ is the main driving force for nanowhisker growth (the supersaturation of the metal droplet). More specifically, Δμ0 is the difference between the chemical potential of the depositing species (Si in the above example) in the vapor and solid whisker phase. Δμ0 is the initial difference preceding whisker growth (when d → ∞), while Ω is the atomic volume of Si and α the specific free energy of the wire surface. Examination of the above equation indeed reveals that small diameters (100 nm) exhibit small driving forces for whisker growth while large wire diameters exhibit large driving forces.
The high intensity of the laser pulse incident at the target allows the deposition of high melting point materials, without having to try to evaporate the material using extremely high temperature resistive or electron bombardment heating. Furthermore, targets can simply be made from a mixture of materials or even a liquid. Finally, the plasma formed during the laser absorption process allows for the deposition of charged particles as well as a catalytic means to lower the activation barrier of reactions between target constituents. Thermal evaporation Some very interesting nanowires microstructures can be obtained by simply thermally evaporating solid materials. This technique can be carried out in a relatively simple setup composed of a dual-zone vacuum furnace. The hot end of the furnace contains the evaporating source material, while the evaporated particles are carrier downstream, (by way of a carrier gas) to the colder end of the furnace where they can absorb, nucleate, and grow on a desired substrate. Metal-catalyzed molecular beam epitaxy Molecular beam epitaxy (MBE) has been used since 2000 to create high-quality semiconductor wires based on the VLS growth mechanism. However, in metal-catalyzed MBE the metal particles do not catalyze a reaction between precursors but rather adsorb vapor phase particles. This is because the chemical potential of the vapor can be drastically lowered by entering the liquid phase. MBE is carried out under ultra-high vacuum (UHV) conditions where the mean-free-path (distance between collisions) of source atoms or molecules is on the order of meters. Therefore, evaporated source atoms (from, say, an effusion cell) act as a beam of particles directed towards the substrate. The growth rate of the process is very slow, the deposition conditions are very clean, and as a result four superior capabilities arise, when compared to other deposition methods: UHV conditions minimize the amount of oxidation/contamination of the growing structures Relatively low growth temperatures prevent interdiffusion (mixing) of nano-sized heterostructures Very thin-film analysis techniques can be used in-situ (during growth), such as reflection high energy electron diffraction (RHEED) to monitor the microstructure at the surface of the substrate as well as the chemical composition, using Auger electron spectroscopy. References External links Growing Crystals in the Lab Lieber Research Group Home Page – Harvard University Nanomaterials Chemical processes da:Nanotråd#Fremstilling
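To give a rough feel for the equilibrium minimum-radius argument in the Nanowhisker diameter section, the sketch below evaluates rmin = 2Vlσlv/(RT ln s) for a liquid Au droplet; the temperature, molar volume, and surface-energy values are assumed, order-of-magnitude figures, not numbers taken from the article.

```python
# Rough evaluation of the minimum equilibrium droplet radius
#     r_min = 2 * V_l * sigma_lv / (R * T * ln(s))
# using assumed, order-of-magnitude values for a liquid Au droplet.
import math

R = 8.314            # gas constant, J/(mol K)
T = 800.0 + 273.15   # assumed growth temperature, K
V_l = 11.3e-6        # assumed molar volume of liquid Au, m^3/mol
sigma_lv = 1.1       # assumed liquid-vapor surface energy, J/m^2

for s in (1.01, 1.1, 2.0, 10.0):   # degree of supersaturation of the vapor
    r_min = 2 * V_l * sigma_lv / (R * T * math.log(s))
    print(f"s = {s:5.2f}:  r_min ~ {r_min * 1e9:7.2f} nm")
```

At low supersaturations the equilibrium droplet comes out far larger than a nanometre, which is consistent with the text's point that techniques such as pre-formed monodispersed nanoparticles or laser ablation are needed to obtain nanoscale droplets.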
Vapor–liquid–solid method
[ "Chemistry", "Materials_science" ]
2,692
[ "Chemical processes", "nan", "Chemical process engineering", "Nanotechnology", "Nanomaterials" ]
6,425,322
https://en.wikipedia.org/wiki/DNA%20clamp
A DNA clamp, also known as a sliding clamp, is a protein complex that serves as a processivity-promoting factor in DNA replication. As a critical component of the DNA polymerase III holoenzyme, the clamp protein binds DNA polymerase and prevents this enzyme from dissociating from the template DNA strand. The clamp-polymerase protein–protein interactions are stronger and more specific than the direct interactions between the polymerase and the template DNA strand; because one of the rate-limiting steps in the DNA synthesis reaction is the association of the polymerase with the DNA template, the presence of the sliding clamp dramatically increases the number of nucleotides that the polymerase can add to the growing strand per association event. The presence of the DNA clamp can increase the rate of DNA synthesis up to 1,000-fold compared with a nonprocessive polymerase. Structure The DNA clamp is an α+β protein that assembles into a multimeric, six-domain ring structure that completely encircles the DNA double helix as the polymerase adds nucleotides to the growing strand. Each domain is in turn made of two β-α-β-β-β structural repeats. The DNA clamp assembles on the DNA at the replication fork and "slides" along the DNA with the advancing polymerase, aided by a layer of water molecules in the central pore of the clamp between the DNA and the protein surface. Because of the toroidal shape of the assembled multimer, the clamp cannot dissociate from the template strand without also dissociating into monomers. The DNA clamp fold is found in bacteria, archaea, eukaryotes and some viruses. In bacteria, the sliding clamp is a homodimer composed of two identical beta subunits of DNA polymerase III and hence is referred to as the beta clamp. In archaea and eukaryotes, it is a trimer composed of three molecules of PCNA. The T4 bacteriophage also uses a sliding clamp, called gp45 that is a trimer similar in structure to PCNA but lacks sequence homology to either PCNA or the bacterial beta clamp. Bacterial The beta clamp is a specific DNA clamp and a subunit of the DNA polymerase III holoenzyme found in bacteria. Two beta subunits are assembled around the DNA by the gamma subunit and ATP hydrolysis; this assembly is called the pre-initiation complex. After assembly around the DNA, the beta subunits' affinity for the gamma subunit is replaced by an affinity for the alpha and epsilon subunits, which together create the complete holoenzyme. DNA polymerase III is the primary enzyme complex involved in prokaryotic DNA replication. The gamma complex of DNA polymerase III, composed of γδδ'χψ subunits, catalyzes ATP to chaperone two beta subunits to bind to DNA. Once bound to DNA, the beta subunits can freely slide along double stranded DNA. The beta subunits in turn bind the αε polymerase complex. The α subunit possesses DNA polymerase activity and the ε subunit is a 3’-5’ exonuclease. The beta chain of bacterial DNA polymerase III is composed of three topologically equivalent domains (N-terminal, central, and C-terminal). Two beta chain molecules are tightly associated to form a closed ring encircling duplex DNA. As a drug target Certain NSAIDs (carprofen, bromfenac, and vedaprofen) exhibit some suppression of bacterial DNA replication by inhibiting bacterial DNA clamp. Eukaryotic and archaeal The sliding clamp in eukaryotes is assembled from a specific subunit of DNA polymerase delta called the proliferating cell nuclear antigen (PCNA). 
The N-terminal and C-terminal domains of PCNA are topologically identical. Three PCNA molecules are tightly associated to form a closed ring encircling duplex DNA. The sequence of PCNA is well conserved between plants, animals and fungi, indicating a strong selective pressure for structure conservation, and suggesting that this type of DNA replication mechanism is conserved throughout eukaryotes. In eukaryotes, a homologous, heterotrimeric "9-1-1 clamp" made up of RAD9-RAD1-HUS1 (911) is responsible for DNA damage checkpoint control. This 9-1-1 clamp mounts onto DNA in the opposite direction. Archaea, probable evolutionary precursor of eukaryotes, also universally have at least one PCNA gene. This PCNA ring works with PolD, the single eukaryotic-like DNA polymerase in archaea responsible for multiple functions from replication to repair. Some unusual species have two or even three PCNA genes, forming heterotrimers or distinct specialized homotrimers. Archaeons also share with eukaryotes the PIP (PCNA-interacting protein) motif, but a wider variety of such proteins performing different functions are found. PCNA is also appropriated by some viruses. The giant virus genus Chlorovirus, with PBCV-1 as a representative, carries in its genome two PCNA genes (, ) and a eukaryotic-type DNA polymerase. Members of Baculoviridae also encode a PCNA homolog (). Caudoviral The viral gp45 sliding clamp subunit protein contains two domains. Each domain consists of two alpha helices and two beta sheets – the fold is duplicated and has internal pseudo two-fold symmetry. Three gp45 molecules are tightly associated to form a closed ring encircling duplex DNA. Herpesviral Some members of Herpesviridae encode a protein that has a DNA clamp fold but does not associate into a ring clamp. The two-domain protein does, however, associate with the viral DNA polymerase and also acts to increase processivity. As it does not form a ring, it does not need a clamp loader to be attached to DNA. Assembly Sliding clamps are loaded onto their associated DNA template strands by specialized proteins known as "sliding clamp loaders", which also disassemble the clamps after replication has completed. The binding sites for these initiator proteins overlap with the binding sites for the DNA polymerase, so the clamp cannot simultaneously associate with a clamp loader and with a polymerase. Thus the clamp will not be actively disassembled while the polymerase remains bound. DNA clamps also associate with other factors involved in DNA and genome homeostasis, such as nucleosome assembly factors, Okazaki fragment ligases, and DNA repair proteins. All of these proteins also share a binding site on the DNA clamp that overlaps with the clamp loader site, ensuring that the clamp will not be removed while any enzyme is still working on the DNA. The activity of the clamp loader requires ATP hydrolysis to "close" the clamp around the DNA. References Further reading Clamping down on pathogenic bacteria– how to shut down a key DNA polymerase complex. Quips at PDBe External links SCOP DNA clamp fold CATH box architecture Biotechnology Protein folds DNA replication
DNA clamp
[ "Biology" ]
1,510
[ "Genetics techniques", "Biotechnology", "DNA replication", "Molecular genetics", "nan" ]
6,426,156
https://en.wikipedia.org/wiki/Hexachlorophosphazene
Hexachlorophosphazene is an inorganic compound with the chemical formula . The molecule has a cyclic, unsaturated backbone consisting of alternating phosphorus and nitrogen atoms, and can be viewed as a trimer of the hypothetical compound (phosphazyl dichloride). Its classification as a phosphazene highlights its relationship to benzene. There is large academic interest in the compound relating to the phosphorus-nitrogen bonding and phosphorus reactivity. Occasionally, commercial or suggested practical applications have been reported, too, utilising hexachlorophosphazene as a precursor chemical. Derivatives of noted interest include the hexalkoxyphosphazene lubricants obtained from nucleophilic substitution of hexachlorophosphazene with alkoxides, or chemically resistant inorganic polymers with desirable thermal and mechanical properties known as polyphosphazenes produced from the polymerisation of hexachlorophosphazene. Structure and characterisation Bond lengths and conformation Hexachlorophosphazene is a cyclic molecule, containing a core with alternating nitrogen and phosphorus atoms, and two additional chlorine atoms bonded to each phosphorus atom. Hexachlorophosphazene molecule contains six equivalent P–N bonds, for which the adjacent P–N distances are 157 pm. This is characteristically shorter than the ca. 177 pm P–N bonds in the valence saturated phosphazane analogues. The molecule possesses D3h symmetry, and each phosphorus center is tetrahedral with a Cl–P–Cl angle of 101°. The ring in hexachlorophosphazene deviates from planarity and is slightly ruffled (see chair conformation). By contrast, the ring in the related hexafluorophosphazene species is completely planar. Characterisation methods 31P-NMR spectroscopy is the usual method for assaying hexachlorophosphazene and its reactions. Hexachlorophosphazene exhibits a single resonance at 20.6 ppm as all P environments are chemically equivalent. In it IR spectrum, the 1370 and 1218 cm−1 vibrational bands are assigned to νP–N stretches. Other bands are found at 860 and 500–600 cm−1, respectively assigned to ring and νP–Cl. Hexachlorophosphazene and many of its derivatives have been characterized by single crystal X-ray crystallography. Bonding Early analyses Cyclophosphazenes such as hexachlorophosphazene are distinguished by notable stability and equal P–N bond lengths which, in many such cyclic molecules, would imply delocalization or even aromaticity. To account for these features, early bonding models starting from the mid-1950s invoked a delocalised π system arising from the overlap of N 2p and P 3d orbitals. Modern bonding models Starting from the late 1980s, more modern calculations and the lack of spectroscopic evidence reveal that the P 3d contribution is negligible, invalidating the earlier hypothesis. Instead, a charge separated model is generally accepted. According to this description, the P–N bond is viewed as a very polarised one (between notional and ), with sufficient ionic character to account for most of the bond strength. The rest (~15%) of the bond strength may be attributed to a negative hyperconjugation interaction: the N lone pairs can donate some electron density into π-accepting σ* molecular orbitals on the P. Synthesis The synthesis of hexachlorophosphazene was first reported by von Liebig in 1834. In that report he describes experiments conducted with Wöhler. 
They found that phosphorus pentachloride (PCl5) and ammonia (NH3) react exothermically to yield a new substance that could be washed with cold water to remove the ammonium chloride (NH4Cl) coproduct. The new compound contained P, N, and Cl, on the basis of elemental analysis. It was sensitive toward hydrolysis by hot water. Modern syntheses are based on the developments by Schenk and Römer, who used ammonium chloride in place of ammonia and inert chlorinated solvents. Replacing ammonia with ammonium chloride allows the reaction to proceed without the strong exotherm associated with the PCl5/NH3 reaction. Typical chlorocarbon solvents are 1,1,2,2-tetrachloroethane or chlorobenzene, which tolerate the hydrogen chloride (HCl) side product. Since ammonium chloride is insoluble in chlorinated solvents, workup is facilitated. For the reaction under such conditions, the following stoichiometry applies: n PCl5 + n NH4Cl → (NPCl2)n + 4n HCl, where n usually takes values of 3 (the trimer hexachlorotriphosphazene) and 4 (the tetramer octachlorotetraphosphazene), along with smaller amounts of higher oligomers. Purification by sublimation gives mainly the trimer and tetramer. Slow vacuum sublimation at approximately 60 °C affords the pure trimer free of the tetramer. Reaction conditions such as temperature may also be tuned to maximise the yield of the trimer at the expense of the other possible products; nonetheless, commercial samples of hexachlorophosphazene usually contain appreciable amounts of octachlorotetraphosphazene, even up to 40%. Formation mechanism The mechanism of the above reaction has not been resolved, but it has been suggested that PCl5 is found in its ionic form [PCl4]+[PCl6]− (tetrachlorophosphonium hexachlorophosphate(V)) and the reaction proceeds via nucleophilic attack of [PCl4]+ (tetrachlorophosphonium) by NH3 (from NH4Cl dissociation). Elimination of HCl (the major side product) creates a reactive nucleophilic intermediate which, through further attack and subsequent HCl elimination, creates a growing acyclic intermediate, etc., until an eventual intramolecular attack leads to the formation of one of the cyclic oligomers. Reactions Substitution at P Hexachlorophosphazene reacts readily with alkali metal alkoxides and amides. The nucleophilic polysubstitution of chloride by alkoxide proceeds via displacement of chloride at separate phosphorus centers. The observed regioselectivity is due to the combined steric effects and oxygen lone pair π-backdonation (which deactivates already substituted P atoms). Ring-opening polymerisation Heating hexachlorophosphazene to ca. 250 °C induces polymerisation. The tetramer also polymerises in this manner, although more slowly. The conversion is a type of ring-opening polymerisation (ROP). The ROP mechanism is found to be catalysed by Lewis acids, but is overall not very well understood. Prolonged heating of the polymer at higher temperatures (ca. 350 °C) will cause depolymerisation. The structure of the inorganic chloropolymer product (poly(dichlorophosphazene)) comprises a linear [N=PCl2]n chain, where n ~ 15000. It was first observed in the late 19th century and its form after chain cross-linking has been called "inorganic rubber" due to its elastomeric behaviour. This polydichlorophosphazene product is the starting material for a wide class of polymeric compounds, collectively known as polyphosphazenes. Substitution of the chloride groups by other nucleophilic groups, especially alkoxides as laid out above, yields numerous characterised derivatives. 
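For a rough sense of the chain length quoted above (n ~ 15000), the following back-of-envelope calculation estimates the molar mass of the repeat unit and of a single poly(dichlorophosphazene) chain. It is an illustrative sketch using standard atomic masses, not a figure taken from the article or the literature.

```python
# Illustrative estimate of poly(dichlorophosphazene) chain mass for n ~ 15000.
# Standard atomic masses in g/mol; the chain length is the value quoted above.
M_N, M_P, M_Cl = 14.007, 30.974, 35.453

repeat_unit = M_N + M_P + 2 * M_Cl          # one [N=PCl2] repeat unit
n = 15_000                                  # approximate degree of polymerisation
chain_mass = n * repeat_unit

print(f"repeat unit [NPCl2]: {repeat_unit:.1f} g/mol")
print(f"chain (n = {n}):   {chain_mass / 1e6:.2f} million g/mol")
```

A chain of that length corresponds to a molar mass on the order of 10^6 g/mol, consistent with describing the product as a high polymer.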
Lewis basicity The nitrogen centres of hexachlorophosphazene are weakly basic, and this Lewis base behaviour has been suggested to play a role in the polymerisation mechanism. Specifically, hexachlorophosphazene has been reported to form adducts of various stoichiometries with Lewis acids , , , , , , but no isolable product with . Among these, the best structurally characterised are the 1:1 adducts with aluminium trichloride or with gallium trichloride; they are found with the Al/Ga atom bound to a N and assume a more prominently distorted chair conformation compared to the free hexachlorophosphazene. The adducts also exhibit fluxional behaviour in solution for temperatures down to −60 °C, which can be monitored with 15N and 31P-NMR. Coupling reagent Hexachlorophosphazene has also found applications in research by enabling aromatic coupling reactions between pyridine and either N,N-dialkylanilines or indole, resulting in 4,4'-substituted phenylpyridine derivatives, postulated to go through a cyclophosphazene pyridinium salt intermediate. The compound may also be used as a peptide coupling reagent for the synthesis of oligopeptides in chloroform, though for this application the tetramer octachlorotetraphosphazene usually proves more effective. Photochemical degradation Both the trimer and tetramer in hydrocarbon solutions photochemically react forming clear liquids identified as alkyl-substituted derivatives , where n = 3, 4. Such reactions proceed under prolonged UVC (mercury arc) illumination without affecting the rings. Solid films of the trimer and tetramer will not undergo any chemical change under such irradiation conditions. Applications The hexalkoxyphosphazenes (especially the aryloxy species), resulting from the nucleophilic hexasubstitution of the hexachlorophosphazene P atoms, have attracted interest for their high thermal and chemical stability as well as their low glass transition temperature. Certain hexalkoxyphosphazenes (such as the hexa-phenoxy derivative) have been put to commercial use as fireproof materials and high temperature lubricants. Polyphosphazenes obtained from polymerised hexachlorophosphazene (poly(dichlorophosphazene)) have garnered attention within the field of inorganic polymers. The elastomeric and thermoplastic properties have been investigated. Some of them appear promising for future applications as fibre- or membrane-forming high performance materials, since they combine transparency, backbone flexibility, tunable hydrophilicity or hydrophobicity, and various other desirable properties. Polyphosphazene-based components have been used in O-rings, fuel lines and shock absorbers, where the polyphosphazenes confer fire resistance, imperviousness to oils, and flexibility even at very low temperatures. Further reading Discovery of cyclophosphazenes: Liebig-Wöhler, Briefwechsel vol. 1, 63; Ann. Chem. (Liebig), vol. 11 (1834), 146. First reports on their polymerisation: H. N. Stokes (1895), On the chloronitrides of phosphorus. American Chemical Journal, vol. 17, p. 275.H. N. Stokes (1896), On Trimetaphosphimic acid and its decomposition products. American Chemical Journal, vol. 18 issue 8, p. 629. Example of hexalkoxyphosphazene synthesis from hexachlorophosphazene and structure description: Novel hexalkoxyphosphazene synthesis not starting from hexachlorophosphazene: References Chlorine compounds Nitrogen heterocycles Inorganic compounds Nitrides Phosphorus heterocycles Six-membered rings Phosphazenes Phosphorus-nitrogen compounds
Hexachlorophosphazene
[ "Chemistry" ]
2,476
[ "Inorganic compounds" ]
6,432,722
https://en.wikipedia.org/wiki/Photon%20polarization
Photon polarization is the quantum mechanical description of the classical polarized sinusoidal plane electromagnetic wave. An individual photon can be described as having right or left circular polarization, or a superposition of the two. Equivalently, a photon can be described as having horizontal or vertical linear polarization, or a superposition of the two. The description of photon polarization contains many of the physical concepts and much of the mathematical machinery of more involved quantum descriptions, such as the quantum mechanics of an electron in a potential well. Polarization is an example of a qubit degree of freedom, which forms a fundamental basis for an understanding of more complicated quantum phenomena. Much of the mathematical machinery of quantum mechanics, such as state vectors, probability amplitudes, unitary operators, and Hermitian operators, emerge naturally from the classical Maxwell's equations in the description. The quantum polarization state vector for the photon, for instance, is identical with the Jones vector, usually used to describe the polarization of a classical wave. Unitary operators emerge from the classical requirement of the conservation of energy of a classical wave propagating through lossless media that alter the polarization state of the wave. Hermitian operators then follow for infinitesimal transformations of a classical polarization state. Many of the implications of the mathematical machinery are easily verified experimentally. In fact, many of the experiments can be performed with polaroid sunglass lenses. The connection with quantum mechanics is made through the identification of a minimum packet size, called a photon, for energy in the electromagnetic field. The identification is based on the theories of Planck and the interpretation of those theories by Einstein. The correspondence principle then allows the identification of momentum and angular momentum (called spin), as well as energy, with the photon. Polarization of classical electromagnetic waves Polarization states Linear polarization The wave is linearly polarized (or plane polarized) when the phase angles are equal, This represents a wave with phase polarized at an angle with respect to the x axis. In this case the Jones vector can be written with a single phase: The state vectors for linear polarization in x or y are special cases of this state vector. If unit vectors are defined such that and then the linearly polarized polarization state can be written in the "x–y basis" as Circular polarization If the phase angles and differ by exactly and the x amplitude equals the y amplitude the wave is circularly polarized. The Jones vector then becomes where the plus sign indicates left circular polarization and the minus sign indicates right circular polarization. In the case of circular polarization, the electric field vector of constant magnitude rotates in the x–y plane. If unit vectors are defined such that and then an arbitrary polarization state can be written in the "R–L basis" as where and We can see that Elliptical polarization The general case in which the electric field rotates in the x–y plane and has variable magnitude is called elliptical polarization. 
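To make the basis relations above concrete, the following NumPy sketch builds the x–y and R–L Jones vectors and expresses a linearly polarized state in the circular basis. The sign convention chosen for right and left circular polarization and the 30° angle are assumptions for illustration (conventions differ between texts); the elliptical case is taken up again below.

```python
import numpy as np

# Linear basis |x>, |y> and circular basis |R>, |L> (one common sign convention).
x = np.array([1, 0], dtype=complex)
y = np.array([0, 1], dtype=complex)
R = (x - 1j * y) / np.sqrt(2)      # right circular
L = (x + 1j * y) / np.sqrt(2)      # left circular

def amplitudes(state, basis):
    """Probability amplitudes of `state` in an orthonormal basis."""
    return [np.vdot(b, state) for b in basis]

# A wave linearly polarized at 30 degrees to the x axis.
theta = np.deg2rad(30)
psi = np.cos(theta) * x + np.sin(theta) * y

a_R, a_L = amplitudes(psi, [R, L])
print("R, L amplitudes:", np.round(a_R, 3), np.round(a_L, 3))
print("normalisation preserved:", np.isclose(abs(a_R)**2 + abs(a_L)**2, 1.0))
```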
The state vector is given by Geometric visualization of an arbitrary polarization state To get an understanding of what a polarization state looks like, one can observe the orbit that is made if the polarization state is multiplied by a phase factor of and then having the real parts of its components interpreted as x and y coordinates respectively. That is: If only the traced out shape and the direction of the rotation of is considered when interpreting the polarization state, i.e. only (where and are defined as above) and whether it is overall more right circularly or left circularly polarized (i.e. whether or vice versa), it can be seen that the physical interpretation will be the same even if the state is multiplied by an arbitrary phase factor, since and the direction of rotation will remain the same. In other words, there is no physical difference between two polarization states and , between which only a phase factor differs. It can be seen that for a linearly polarized state, M will be a line in the xy plane, with length 2 and its middle in the origin, and whose slope equals to . For a circularly polarized state, M will be a circle with radius and with the middle in the origin. Energy, momentum, and angular momentum of a classical electromagnetic wave Energy density of classical electromagnetic waves Energy in a plane wave The energy per unit volume in classical electromagnetic fields is (cgs units) and also Planck units: For a plane wave, this becomes: where the energy has been averaged over a wavelength of the wave. Fraction of energy in each component The fraction of energy in the x component of the plane wave is with a similar expression for the y component resulting in . The fraction in both components is Momentum density of classical electromagnetic waves The momentum density is given by the Poynting vector For a sinusoidal plane wave traveling in the z direction, the momentum is in the z direction and is related to the energy density: The momentum density has been averaged over a wavelength. Angular momentum density of classical electromagnetic waves Electromagnetic waves can have both orbital and spin angular momentum. The total angular momentum density is For a sinusoidal plane wave propagating along axis the orbital angular momentum density vanishes. The spin angular momentum density is in the direction and is given by where again the density is averaged over a wavelength. Optical filters and crystals Passage of a classical wave through a polaroid filter A linear filter transmits one component of a plane wave and absorbs the perpendicular component. In that case, if the filter is polarized in the x direction, the fraction of energy passing through the filter is Example of energy conservation: Passage of a classical wave through a birefringent crystal An ideal birefringent crystal transforms the polarization state of an electromagnetic wave without loss of wave energy. Birefringent crystals therefore provide an ideal test bed for examining the conservative transformation of polarization states. Even though this treatment is still purely classical, standard quantum tools such as unitary and Hermitian operators that evolve the state in time naturally emerge. Initial and final states A birefringent crystal is a material that has an optic axis with the property that the light has a different index of refraction for light polarized parallel to the axis than it has for light polarized perpendicular to the axis. 
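Returning briefly to the polaroid filter described above, the transmitted energy fraction can be checked numerically: it is the squared projection of the polarization state onto the filter axis, giving the familiar cos² dependence. The angles below are arbitrary example values; the birefringent-crystal discussion continues afterwards.

```python
import numpy as np

x_axis = np.array([1, 0], dtype=complex)

def linear_state(theta):
    """Jones vector for light linearly polarized at angle theta to the x axis."""
    return np.array([np.cos(theta), np.sin(theta)], dtype=complex)

def transmitted_fraction(state, axis=x_axis):
    """Fraction of the wave energy passed by an ideal linear filter whose
    transmission axis is `axis`: the squared projection onto that axis."""
    return abs(np.vdot(axis, state)) ** 2

for deg in (0, 30, 45, 60, 90):
    th = np.deg2rad(deg)
    frac = transmitted_fraction(linear_state(th))
    print(f"{deg:3d} deg: transmitted {frac:.3f}  (cos^2 = {np.cos(th)**2:.3f})")
```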
Light polarized parallel to the axis are called "extraordinary rays" or "extraordinary photons", while light polarized perpendicular to the axis are called "ordinary rays" or "ordinary photons". If a linearly polarized wave impinges on the crystal, the extraordinary component of the wave will emerge from the crystal with a different phase than the ordinary component. In mathematical language, if the incident wave is linearly polarized at an angle with respect to the optic axis, the incident state vector can be written and the state vector for the emerging wave can be written While the initial state was linearly polarized, the final state is elliptically polarized. The birefringent crystal alters the character of the polarization. Dual of the final state The initial polarization state is transformed into the final state with the operator U. The dual of the final state is given by where is the adjoint of U, the complex conjugate transpose of the matrix. Unitary operators and energy conservation The fraction of energy that emerges from the crystal is In this ideal case, all the energy impinging on the crystal emerges from the crystal. An operator U with the property that where I is the identity operator and U is called a unitary operator. The unitary property is necessary to ensure energy conservation in state transformations. Hermitian operators and energy conservation If the crystal is very thin, the final state will be only slightly different from the initial state. The unitary operator will be close to the identity operator. We can define the operator H by and the adjoint by Energy conservation then requires This requires that Operators like this that are equal to their adjoints are called Hermitian or self-adjoint. The infinitesimal transition of the polarization state is Thus, energy conservation requires that infinitesimal transformations of a polarization state occur through the action of a Hermitian operator. Photons: connection to quantum mechanics Energy, momentum, and angular momentum of photons Energy The treatment to this point has been classical. It is a testament, however, to the generality of Maxwell's equations for electrodynamics that the treatment can be made quantum mechanical with only a reinterpretation of classical quantities. The reinterpretation is based on the theories of Max Planck and the interpretation by Albert Einstein of those theories and of other experiments. Einstein's conclusion from early experiments on the photoelectric effect is that electromagnetic radiation is composed of irreducible packets of energy, known as photons. The energy of each packet is related to the angular frequency of the wave by the relation where is an experimentally determined quantity known as the reduced Planck constant. If there are photons in a box of volume , the energy in the electromagnetic field is and the energy density is The photon energy can be related to classical fields through the correspondence principle that states that for a large number of photons, the quantum and classical treatments must agree. Thus, for very large , the quantum energy density must be the same as the classical energy density The number of photons in the box is then Momentum The correspondence principle also determines the momentum and angular momentum of the photon. For momentum where is the wave number. This implies that the momentum of a photon is Angular momentum and spin Similarly for the spin angular momentum where is field strength. 
This implies that the spin angular momentum of the photon is the quantum interpretation of this expression is that the photon has a probability of of having a spin angular momentum of and a probability of of having a spin angular momentum of . We can therefore think of the spin angular momentum of the photon being quantized as well as the energy. The angular momentum of classical light has been verified. A photon that is linearly polarized (plane polarized) is in a superposition of equal amounts of the left-handed and right-handed states. Spin operator The spin of the photon is defined as the coefficient of in the spin angular momentum calculation. A photon has spin 1 if it is in the state and −1 if it is in the state. The spin operator is defined as the outer product The eigenvectors of the spin operator are and with eigenvalues 1 and −1, respectively. The expected value of a spin measurement on a photon is then An operator S has been associated with an observable quantity, the spin angular momentum. The eigenvalues of the operator are the allowed observable values. This has been demonstrated for spin angular momentum, but it is in general true for any observable quantity. Spin states We can write the circularly polarized states as where s = 1 for and s = −1 for . An arbitrary state can be written where and are phase angles, θ is the angle by which the frame of reference is rotated, and Spin and angular momentum operators in differential form When the state is written in spin notation, the spin operator can be written The eigenvectors of the differential spin operator are To see this note The spin angular momentum operator is Nature of probability in quantum mechanics Probability for a single photon There are two ways in which probability can be applied to the behavior of photons; probability can be used to calculate the probable number of photons in a particular state, or probability can be used to calculate the likelihood of a single photon to be in a particular state. The former interpretation violates energy conservation. The latter interpretation is the viable, if nonintuitive, option. Dirac explains this in the context of the double-slit experiment: Some time before the discovery of quantum mechanics people realized that the connection between light waves and photons must be of a statistical character. What they did not clearly realize, however, was that the wave function gives information about the probability of one photon being in a particular place and not the probable number of photons in that place. The importance of the distinction can be made clear in the following way. Suppose we have a beam of light consisting of a large number of photons split up into two components of equal intensity. On the assumption that the beam is connected with the probable number of photons in it, we should have half the total number going into each component. If the two components are now made to interfere, we should require a photon in one component to be able to interfere with one in the other. Sometimes these two photons would have to annihilate one another and other times they would have to produce four photons. This would contradict the conservation of energy. The new theory, which connects the wave function with probabilities for one photon gets over the difficulty by making each photon go partly into each of the two components. Each photon then interferes only with itself. 
Interference between two different photons never occurs.—Paul Dirac, The Principles of Quantum Mechanics, 1930, Chapter 1 Probability amplitudes The probability for a photon to be in a particular polarization state depends on the fields as calculated by the classical Maxwell's equations. The polarization state of the photon is proportional to the field. The probability itself is quadratic in the fields and consequently is also quadratic in the quantum state of polarization. In quantum mechanics, therefore, the state or probability amplitude contains the basic probability information. In general, the rules for combining probability amplitudes look very much like the classical rules for composition of probabilities: [The following quote is from Baym, Chapter 1] The probability amplitude for two successive probabilities is the product of amplitudes for the individual possibilities. For example, the amplitude for the x polarized photon to be right circularly polarized and for the right circularly polarized photon to pass through the y-polaroid is the product of the individual amplitudes. The amplitude for a process that can take place in one of several indistinguishable ways is the sum of amplitudes for each of the individual ways. For example, the total amplitude for the x polarized photon to pass through the y-polaroid is the sum of the amplitudes for it to pass as a right circularly polarized photon, plus the amplitude for it to pass as a left circularly polarized photon, The total probability for the process to occur is the absolute value squared of the total amplitude calculated by 1 and 2. Uncertainty principle Mathematical preparation For any legal operators the following inequality, a consequence of the Cauchy–Schwarz inequality, is true. If B A ψ and A B ψ are defined, then by subtracting the means and re-inserting in the above formula, we deduce where is the operator mean of observable X in the system state ψ and Here is called the commutator of A and B. This is a purely mathematical result. No reference has been made to any physical quantity or principle. It simply states that the uncertainty of one operator times the uncertainty of another operator has a lower bound. Application to angular momentum The connection to physics can be made if we identify the operators with physical operators such as the angular momentum and the polarization angle. We have then which means that angular momentum and the polarization angle cannot be measured simultaneously with infinite accuracy. (The polarization angle can be measured by checking whether the photon can pass through a polarizing filter oriented at a particular angle, or a polarizing beam splitter. This results in a yes/no answer that, if the photon was plane-polarized at some other angle, depends on the difference between the two angles.) States, probability amplitudes, unitary and Hermitian operators, and eigenvectors Much of the mathematical apparatus of quantum mechanics appears in the classical description of a polarized sinusoidal electromagnetic wave. The Jones vector for a classical wave, for instance, is identical with the quantum polarization state vector for a photon. The right and left circular components of the Jones vector can be interpreted as probability amplitudes of spin states of the photon. Energy conservation requires that the states be transformed with a unitary operation. This implies that infinitesimal transformations are transformed with a Hermitian operator. 
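The two amplitude-composition rules and the unitary/Hermitian requirements summarised above can be checked numerically. The sketch below first reproduces the amplitude for an x-polarized photon to pass a y-polaroid by summing the two circular-polarization paths, then verifies that a lossless birefringent element is unitary and that its thin-slice generator is Hermitian. The circular-basis sign convention, the particular crystal phases, and the U = I + iεH form of the thin-crystal expansion are assumptions chosen for illustration.

```python
import numpy as np

x = np.array([1, 0], dtype=complex)
y = np.array([0, 1], dtype=complex)
R = (x - 1j * y) / np.sqrt(2)     # assumed sign convention
L = (x + 1j * y) / np.sqrt(2)

amp = lambda final, initial: np.vdot(final, initial)   # <final|initial>

# Rule 1 (multiply successive amplitudes) and rule 2 (add indistinguishable paths):
via_R = amp(y, R) * amp(R, x)     # x -> R -> y-polaroid
via_L = amp(y, L) * amp(L, x)     # x -> L -> y-polaroid
print("x -> y-polaroid amplitude, via R plus via L:", np.round(via_R + via_L, 6))
print("direct <y|x>:                               ", np.round(amp(y, x), 6))

# Unitary evolution through an ideal (lossless) birefringent element,
# with assumed phases for the two axes:
U = np.diag([np.exp(0.7j), np.exp(0.2j)])
print("U is unitary:", np.allclose(U.conj().T @ U, np.eye(2)))

# Thin-crystal limit: U = I + i*eps*H with H Hermitian (one common convention).
eps = 1e-6
U_thin = np.diag([np.exp(1j * eps), 1.0])
H = (U_thin - np.eye(2)) / (1j * eps)
print("generator Hermitian:", np.allclose(H, H.conj().T, atol=1e-5))
```

The vanishing total amplitude for crossed polarizers and the preserved normalisation are exactly the behaviour the composition rules and unitarity arguments above describe.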
These conclusions are a natural consequence of the structure of Maxwell's equations for classical waves. Quantum mechanics enters the picture when observed quantities are measured and found to be discrete rather than continuous. The allowed observable values are determined by the eigenvalues of the operators associated with the observable. In the case angular momentum, for instance, the allowed observable values are the eigenvalues of the spin operator. These concepts have emerged naturally from Maxwell's equations and Planck's and Einstein's theories. They have been found to be true for many other physical systems. In fact, the typical program is to assume the concepts of this section and then to infer the unknown dynamics of a physical system. This was done, for instance, with the dynamics of electrons. In that case, working back from the principles in this section, the quantum dynamics of particles were inferred, leading to Schrödinger's equation, a departure from Newtonian mechanics. The solution of this equation for atoms led to the explanation of the Balmer series for atomic spectra and consequently formed a basis for all of atomic physics and chemistry. This is not the only occasion in which Maxwell's equations have forced a restructuring of Newtonian mechanics. Maxwell's equations are relativistically consistent. Special relativity resulted from attempts to make classical mechanics consistent with Maxwell's equations (see, for example, Moving magnet and conductor problem). See also Angular momentum of light Spin angular momentum of light Orbital angular momentum of light Quantum decoherence Stern–Gerlach experiment Wave–particle duality Double-slit experiment Spin polarization References Further reading Quantum mechanics Physical phenomena Polarization (waves)
Photon polarization
[ "Physics" ]
3,652
[ "Physical phenomena", "Theoretical physics", "Quantum mechanics", "Astrophysics", "Polarization (waves)" ]
6,434,629
https://en.wikipedia.org/wiki/Thermal%20spraying
Thermal spraying techniques are coating processes in which melted (or heated) materials are sprayed onto a surface. The "feedstock" (coating precursor) is heated by electrical (plasma or arc) or chemical means (combustion flame). Thermal spraying can provide thick coatings (approx. thickness range is 20 microns to several mm, depending on the process and feedstock), over a large area at high deposition rate as compared to other coating processes such as electroplating, physical and chemical vapor deposition. Coating materials available for thermal spraying include metals, alloys, ceramics, plastics and composites. They are fed in powder or wire form, heated to a molten or semimolten state and accelerated towards substrates in the form of micrometer-size particles. Combustion or electrical arc discharge is usually used as the source of energy for thermal spraying. Resulting coatings are made by the accumulation of numerous sprayed particles. The surface may not heat up significantly, allowing the coating of flammable substances. Coating quality is usually assessed by measuring its porosity, oxide content, macro and micro-hardness, bond strength and surface roughness. Generally, the coating quality increases with increasing particle velocities Variations Several variations of thermal spraying are distinguished: Plasma spraying Detonation spraying Wire arc spraying Flame spraying High velocity oxy-fuel coating spraying (HVOF) High velocity air fuel (HVAF) Warm spraying Cold spraying Spray and Fuse In classical (developed between 1910 and 1920) but still widely used processes such as flame spraying and wire arc spraying, the particle velocities are generally low (< 150 m/s), and raw materials must be molten to be deposited. Plasma spraying, developed in the 1970s, uses a high-temperature plasma jet generated by arc discharge with typical temperatures >15,000 K, which makes it possible to spray refractory materials such as oxides, molybdenum, etc. System overview A typical thermal spray system consists of the following: Spray torch (or spray gun) – the core device performing the melting and acceleration of the particles to be deposited Feeder – for supplying the powder, wire or liquid to the torch through tubes. Media supply – gases or liquids for the generation of the flame or plasma jet, gases for carrying the powder, etc. Robot/Labour – for manipulating the torch or the substrates to be coated Power supply – often standalone for the torch Control console(s) – either integrated or individual for all of the above Detonation thermal spraying process The detonation gun consists of a long water-cooled barrel with inlet valves for gases and powder. Oxygen and fuel (acetylene most common) are fed into the barrel along with a charge of powder. A spark is used to ignite the gas mixture, and the resulting detonation heats and accelerates the powder to supersonic velocity through the barrel. A pulse of nitrogen is used to purge the barrel after each detonation. This process is repeated many times a second. The high kinetic energy of the hot powder particles on impact with the substrate results in a buildup of a very dense and strong coating. The coating adheres through a mechanical bond resulting from the deformation of the base substrate wrapping around the sprayed particles after the high speed impact. 
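Because coating density is tied to particle impact energy, a rough comparison of single-particle kinetic energies across typical velocity ranges can be illustrative: the low-velocity processes mentioned above (below about 150 m/s) versus the higher-velocity processes described later (around 800 m/s for HVOF). The particle diameter and density below are assumed example values, not figures from the article.

```python
import math

def particle_kinetic_energy(diameter_m, density_kg_m3, velocity_m_s):
    """Kinetic energy (J) of a single spherical particle."""
    volume = math.pi / 6 * diameter_m ** 3
    mass = density_kg_m3 * volume
    return 0.5 * mass * velocity_m_s ** 2

d = 30e-6      # assumed particle diameter, 30 micrometres
rho = 8_000    # assumed density, roughly that of a steel-like alloy (kg/m^3)

for label, v in [("flame/arc spray (~150 m/s)", 150),
                 ("HVOF (~800 m/s)", 800)]:
    e = particle_kinetic_energy(d, rho, v)
    print(f"{label:28s} -> {e * 1e6:.2f} microjoules per particle")
```

Velocity enters squared, so the higher-velocity processes deliver roughly thirty times the impact energy per particle in this example, one reason they tend to give denser coatings.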
Plasma spraying In plasma spraying process, the material to be deposited (feedstock) — typically as a powder, sometimes as a liquid, suspension or wire — is introduced into the plasma jet, emanating from a plasma torch. In the jet, where the temperature is on the order of 10,000 K, the material is melted and propelled towards a substrate. There, the molten droplets flatten, rapidly solidify and form a deposit. Commonly, the deposits remain adherent to the substrate as coatings; free-standing parts can also be produced by removing the substrate. There are a large number of technological parameters that influence the interaction of the particles with the plasma jet and the substrate and therefore the deposit properties. These parameters include feedstock type, plasma gas composition and flow rate, energy input, torch offset distance, substrate cooling, etc. Deposit properties The deposits consist of a multitude of pancake-like 'splats' called lamellae, formed by flattening of the liquid droplets. As the feedstock powders typically have sizes from micrometers to above 100 micrometers, the lamellae have thickness in the micrometer range and lateral dimension from several to hundreds of micrometers. Between these lamellae, there are small voids, such as pores, cracks and regions of incomplete bonding. As a result of this unique structure, the deposits can have properties significantly different from bulk materials. These are generally mechanical properties, such as lower strength and modulus, higher strain tolerance, and lower thermal and electrical conductivity. Also, due to the rapid solidification, metastable phases can be present in the deposits. Applications This technique is mostly used to produce coatings on structural materials. Such coatings provide protection against high temperatures (for example thermal barrier coatings for exhaust heat management), corrosion, erosion, wear; they can also change the appearance, electrical or tribological properties of the surface, replace worn material, etc. When sprayed on substrates of various shapes and removed, free-standing parts in the form of plates, tubes, shells, etc. can be produced. It can also be used for powder processing (spheroidization, homogenization, modification of chemistry, etc.). In this case, the substrate for deposition is absent and the particles solidify during flight or in a controlled environment (e.g., water). This technique with variation may also be used to create porous structures, suitable for bone ingrowth, as a coating for medical implants. A polymer dispersion aerosol can be injected into the plasma discharge in order to create a grafting of this polymer on to a substrate surface. This application is mainly used to modify the surface chemistry of polymers. Variations Plasma spraying systems can be categorized by several criteria. 
Plasma jet generation: direct current (DC plasma), where the energy is transferred to the plasma jet by a direct current, high-power electric arc induction plasma or RF plasma, where the energy is transferred by induction from a coil around the plasma jet, through which an alternating, radio-frequency current passes Plasma-forming medium: gas-stabilized plasma (GSP), where the plasma forms from a gas; typically argon, hydrogen, helium or their mixtures water-stabilized plasma (WSP), where plasma forms from water (through evaporation, dissociation and ionization) or other suitable liquid hybrid plasma – with combined gas and liquid stabilization, typically argon and water Spraying environment: atmospheric plasma spraying (APS), performed in ambient air controlled atmosphere plasma spraying (CAPS), usually performed in a closed chamber, either filled with inert gas or evacuated variations of CAPS: high-pressure plasma spraying (HPPS), low-pressure plasma spraying (LPPS), the extreme case of which is vacuum plasma spraying (VPS, see below) underwater plasma spraying Another variation consists of having a liquid feedstock instead of a solid powder for melting, this technique is known as Solution precursor plasma spray Vacuum plasma spraying Vacuum plasma spraying (VPS) is a technology for etching and surface modification to create porous layers with high reproducibility and for cleaning and surface engineering of plastics, rubbers and natural fibers as well as for replacing CFCs for cleaning metal components. This surface engineering can improve properties such as frictional behavior, heat resistance, surface electrical conductivity, lubricity, cohesive strength of films, or dielectric constant, or it can make materials hydrophilic or hydrophobic. The process typically operates at 39–120 °C to avoid thermal damage. It can induce non-thermally activated surface reactions, causing surface changes which cannot occur with molecular chemistries at atmospheric pressure. Plasma processing is done in a controlled environment inside a sealed chamber at a medium vacuum, around 13–65 Pa. The gas or mixture of gases is energized by an electrical field from DC to microwave frequencies, typically 1–500 W at 50 V. The treated components are usually electrically isolated. The volatile plasma by-products are evacuated from the chamber by the vacuum pump, and if necessary can be neutralized in an exhaust scrubber. In contrast to molecular chemistry, plasmas employ: Molecular, atomic, metastable and free radical species for chemical effects. Positive ions and electrons for kinetic effects. Plasma also generates electromagnetic radiation in the form of vacuum UV photons to penetrate bulk polymers to a depth of about 10 μm. This can cause chain scissions and cross-linking. Plasmas affect materials at an atomic level. Techniques like X-ray photoelectron spectroscopy and scanning electron microscopy are used for surface analysis to identify the processes required and to judge their effects. As a simple indication of surface energy, and hence adhesion or wettability, often a water droplet contact angle test is used. The lower the contact angle, the higher the surface energy and more hydrophilic the material is. Changing effects with plasma At higher energies ionization tends to occur more than chemical dissociations. In a typical reactive gas, 1 in 100 molecules form free radicals whereas only 1 in 106 ionizes. The predominant effect here is the forming of free radicals. 
Ionic effects can predominate with selection of process parameters and if necessary the use of noble gases. Wire arc spray Wire arc spray is a form of thermal spraying where two consumable metal wires are fed independently into the spray gun. These wires are then charged and an arc is generated between them. The heat from this arc melts the incoming wire, which is then entrained in an air jet from the gun. This entrained molten feedstock is then deposited onto a substrate with the help of compressed air. This process is commonly used for metallic, heavy coatings. Plasma transferred wire arc Plasma transferred wire arc (PTWA) is another form of wire arc spray which deposits a coating on the internal surface of a cylinder, or on the external surface of a part of any geometry. It is predominantly known for its use in coating the cylinder bores of an engine, enabling the use of Aluminum engine blocks without the need for heavy cast iron sleeves. A single conductive wire is used as "feedstock" for the system. A supersonic plasma jet melts the wire, atomizes it and propels it onto the substrate. The plasma jet is formed by a transferred arc between a non-consumable cathode and the type of a wire. After atomization, forced air transports the stream of molten droplets onto the bore wall. The particles flatten when they impinge on the surface of the substrate, due to the high kinetic energy. The particles rapidly solidify upon contact. The stacked particles make up a high wear resistant coating. The PTWA thermal spray process utilizes a single wire as the feedstock material. All conductive wires up to and including 0.0625" (1.6mm) can be used as feedstock material, including "cored" wires. PTWA can be used to apply a coating to the wear surface of engine or transmission components to replace a bushing or bearing. For example, using PTWA to coat the bearing surface of a connecting rod offers a number of benefits including reductions in weight, cost, friction potential, and stress in the connecting rod. High velocity oxygen fuel spraying (HVOF) During the 1980s, a class of thermal spray processes called high velocity oxy-fuel spraying was developed. A mixture of gaseous or liquid fuel and oxygen is fed into a combustion chamber, where they are ignited and combusted continuously. The resultant hot gas at a pressure close to 1 MPa emanates through a converging–diverging nozzle and travels through a straight section. The fuels can be gases (hydrogen, methane, propane, propylene, acetylene, natural gas, etc.) or liquids (kerosene, etc.). The jet velocity at the exit of the barrel (>1000 m/s) exceeds the speed of sound. A powder feed stock is injected into the gas stream, which accelerates the powder up to 800 m/s. The stream of hot gas and powder is directed towards the surface to be coated. The powder partially melts in the stream, and deposits upon the substrate. The resulting coating has low porosity and high bond strength. HVOF coatings may be as thick as 12 mm (1/2"). It is typically used to deposit wear and corrosion resistant coatings on materials, such as ceramic and metallic layers. Common powders include WC-Co, chromium carbide, MCrAlY, and alumina. The process has been most successful for depositing cermet materials (WC–Co, etc.) and other corrosion-resistant alloys (stainless steels, nickel-based alloys, aluminium, hydroxyapatite for medical implants, etc.). High Velocity Air Fuel (HVAF) HVAF coating technology is the combustion of propane in a compressed air stream. 
Like HVOF, this produces a uniform high velocity jet. HVAF differs by including a heat baffle to further stabilize the thermal spray mechanisms. Material is injected into the air-fuel stream and coating particles are propelled toward the part. HVAF has a maximum flame temperature of 3,560° to 3,650 °F and an average particle velocity of 3,300 ft/sec. Since the maximum flame temperature is relatively close to the melting point of most spray materials, HVAF results in a more uniform, ductile coating. This also allows for a typical coating thickness of 0.002-0.050". HVAF coatings also have a mechanical bond strength of greater than 12,000 psi. Common HVAF coating materials include, but are not limited to: tungsten carbide, chrome carbide, stainless steel, Hastelloy, and Inconel. Due to their ductile nature, HVAF coatings can help resist cavitation damage. Spray and Fuse Spray and fuse uses high heat to increase the bond between the thermal spray coating and the substrate of the part. Unlike other types of thermal spray, spray and fuse creates a metallurgical bond between the coating and the surface. This means that instead of relying on friction for coating adhesion, it melds the surface and coating material into one material. Spray and fuse comes down to the difference between adhesion and cohesion. This process usually involves spraying a powdered material onto the component and then following with an acetylene torch. The torch melts the coating material and the top layer of the component material, fusing them together. Due to the high heat of spray and fuse, some heat distortion may occur, and care must be taken to determine if a component is a good candidate. These high temperatures are akin to those used in welding. This metallurgical bond creates an extremely wear and abrasion resistant coating. Spray and fuse delivers the benefits of hardface welding with the ease of thermal spray. Cold spraying Cold spraying (or gas dynamic cold spraying) was introduced to the market in the 1990s. The method was originally developed in the Soviet Union – while experimenting with the erosion of the target substrate, which was exposed to a two-phase high-velocity flow of fine powder in a wind tunnel, scientists observed accidental rapid formation of coatings. In cold spraying, particles are accelerated to very high speeds by the carrier gas forced through a converging–diverging de Laval type nozzle. Upon impact, solid particles with sufficient kinetic energy deform plastically and bond mechanically to the substrate to form a coating. The critical velocity needed to form bonding depends on the material's properties, powder size and temperature. Metals, polymers, ceramics, composite materials and nanocrystalline powders can be deposited using cold spraying. Soft metals such as Cu and Al are best suited for cold spraying, but coating of other materials (W, Ta, Ti, MCrAlY, WC–Co, etc.) by cold spraying has been reported. The deposition efficiency is typically low for alloy powders, and the window of process parameters and suitable powder sizes is narrow. To accelerate powders to higher velocity, finer powders (<20 micrometers) are used. It is possible to accelerate powder particles to much higher velocity using a processing gas having high speed of sound (helium instead of nitrogen). However, helium is costly and its flow rate, and thus consumption, is higher. To improve acceleration capability, nitrogen gas is heated up to about 900 °C. As a result, deposition efficiency and tensile strength of deposits increase. 
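The reasoning behind the helium-versus-nitrogen and gas-preheating choices above can be illustrated with the ideal-gas speed of sound, a = sqrt(γRT/M). This is a standard textbook relation rather than a formula from the article, and the temperatures used are example values.

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def speed_of_sound(gamma, molar_mass_kg, temperature_k):
    """Ideal-gas speed of sound a = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R_GAS * temperature_k / molar_mass_kg)

cases = [
    ("nitrogen, 20 C",  1.40, 0.028, 293),
    ("nitrogen, 900 C", 1.40, 0.028, 1173),
    ("helium, 20 C",    1.66, 0.004, 293),
]
for name, gamma, M, T in cases:
    print(f"{name:16s} -> {speed_of_sound(gamma, M, T):4.0f} m/s")
```

Helium's low molar mass gives it roughly three times the sound speed of room-temperature nitrogen, and preheating nitrogen to about 900 °C roughly doubles its sound speed, consistent with the acceleration strategies described above.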
Warm spraying Warm spraying is a novel modification of high velocity oxy-fuel spraying, in which the temperature of combustion gas is lowered by mixing nitrogen with the combustion gas, thus bringing the process closer to the cold spraying. The resulting gas contains much water vapor, unreacted hydrocarbons and oxygen, and thus is dirtier than the cold spraying. However, the coating efficiency is higher. On the other hand, lower temperatures of warm spraying reduce melting and chemical reactions of the feed powder, as compared to HVOF. These advantages are especially important for such coating materials as Ti, plastics, and metallic glasses, which rapidly oxidize or deteriorate at high temperatures. Applications Crankshaft reconditioning or conditioning Corrosion protection Fouling protection Altering thermal conductivity or electrical conductivity Wear control: either hardfacing (wear-resistant) or abradable coating Repairing damaged surfaces Temperature/oxidation protection (thermal barrier coatings) Medical implants coatings (by using polymer derived ceramics) Production of functionally graded materials (for any of the above applications) Limitations Thermal spraying is a line of sight process and the bond mechanism is primarily mechanical. Thermal spray application is not compatible with the substrate if the area to which it is applied is complex or blocked by other bodies. Safety Thermal spraying need not be a dangerous process if the equipment is treated with care and correct spraying practices are followed. As with any industrial process, there are a number of hazards of which the operator should be aware and against which specific precautions should be taken. Ideally, equipment should be operated automatically in enclosures specially designed to extract fumes, reduce noise levels, and prevent direct viewing of the spraying head. Such techniques will also produce coatings that are more consistent. There are occasions when the type of components being treated, or their low production levels, require manual equipment operation. Under these conditions, a number of hazards peculiar to thermal spraying are experienced in addition to those commonly encountered in production or processing industries. Noise Metal spraying equipment uses compressed gases which create noise. Sound levels vary with the type of spraying equipment, the material being sprayed, and the operating parameters. Typical sound pressure levels are measured at 1 meter behind the arc. UV light Combustion spraying equipment produces an intense flame, which may have a peak temperature more than 3,100 °C and is very bright. Electric arc spraying produces ultra-violet light which may damage delicate body tissues. Plasma also generates quite a lot of UV radiation, easily burning exposed skin and can also cause "flash burn" to the eyes. Spray booths and enclosures should be fitted with ultra-violet absorbent dark glass. Where this is not possible, operators, and others in the vicinity should wear protective goggles containing BS grade 6 green glass. Opaque screens should be placed around spraying areas. The nozzle of an arc pistol should never be viewed directly unless it is certain that no power is available to the equipment. Dust and fumes The atomization of molten materials produces a large amount of dust and fumes made up of very fine particles (ca. 80–95% of the particles by number <100 nm). 
Proper extraction facilities are vital not only for personal safety, but to minimize entrapment of re-frozen particles in the sprayed coatings. The use of respirators fitted with suitable filters is strongly recommended where equipment cannot be isolated. Certain materials offer specific known hazards: Finely divided metal particles are potentially pyrophoric and harmful when accumulated in the body. Certain materials e.g. aluminum, zinc and other base metals may react with water to evolve hydrogen. This is potentially explosive and special precautions are necessary in fume extraction equipment. Fumes of certain materials, notably zinc and copper alloys, have a disagreeable odour and may cause a fever-type reaction in certain individuals (known as metal fume fever). This may occur some time after spraying and usually subsides rapidly. If it does not, medical advice must be sought. Fumes of reactive compounds can dissociate and create harmful gasses. Respirators should be worn in these areas and gas meters should be used to monitor the air before respirators are removed. Heat Combustion spraying guns use oxygen and fuel gases. The fuel gases are potentially explosive. In particular, acetylene may only be used under approved conditions. Oxygen, while not explosive, will sustain combustion and many materials will spontaneously ignite if excessive oxygen levels are present. Care must be taken to avoid leakage and to isolate oxygen and fuel gas supplies when not in use. Shock hazards Electric arc guns operate at low voltages (below 45 V dc), but at relatively high currents. They may be safely hand-held. The power supply units are connected to 440 V AC sources, and must be treated with caution. See also List of coating techniques Thin film References Coatings Materials science Chemical processes Thin film deposition Metallurgical processes
Thermal spraying
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
4,439
[ "Applied and interdisciplinary physics", "Thin film deposition", "Metallurgical processes", "Metallurgy", "Coatings", "Thin films", "Materials science", "Chemical processes", "nan", "Chemical process engineering", "Planes (geometry)", "Solid state engineering" ]
26,883,627
https://en.wikipedia.org/wiki/Via%20fence
A via fence, also called a picket fence, is a structure used in planar electronic circuit technologies to improve isolation between components that would otherwise be coupled by electromagnetic fields. It consists of a row of via holes which, if spaced close enough together, form a barrier to electromagnetic wave propagation of slab modes in the substrate. Additionally if radiation in the air above the board is also to be suppressed, then a strip pad with via fence allows a shielding can to be electrically attached to the top side, but electrically behave as if it continued through the PCB. Modern electronics have components and sub-units at high densities to achieve small size. Typically, many functions are integrated on to the same board or die. If these are not properly shielded from each other, many problems can result including poor frequency response, noise performance, and distortion. Via fences are used to shield microstrip and stripline transmission lines, guard edges of printed circuit boards, shield functional circuit units from each other, and to form the walls of waveguides integrated into a planar format. Via fences are cheap and easy to implement, but use up board space and are not as effective as solid metal walls. Purpose Planar technologies are used at microwave frequencies and make use of printed circuit tracks as transmission lines. As well as interconnections, these lines can be used to form components of functional units such as filters and couplers. Planar lines readily couple to each other when in close proximity, an effect called parasitic coupling. The coupling is due to fringing fields spreading from the edges of the line and intersecting adjacent lines or components. This is a desirable feature within the unit where it is made use of as part of the design. It is not desirable, however, that the fields couple to adjacent units. Modern electronic devices are usually required to be small. That, and the drive to keep down costs, leads to a high degree of integration and circuit units in less than desirable proximity. Via fences are one method that can be used to reduce parasitic coupling between such units. Among the many problems that can be caused by parasitic coupling are reducing bandwidth, degrading passband flatness, reducing amplifier output power, increasing reflections, worsening noise figure, causing amplifier instability, and providing undesirable feedback paths. In stripline, via fences running parallel to the line on either side serve to tie together the groundplanes, so preventing the propagation of parallel-plate modes. A similar arrangement is used to suppress unwanted modes in metal-backed coplanar waveguide. Structure A via fence consists of a row of via holes, that is, holes that pass through the substrate and are metallised on the inside to connect to pads on the top and bottom of the substrate. In a stripline format both the top and bottom of the dielectric sheet are covered with a metal ground plane so any via holes are automatically grounded at both ends. In other planar formats such as microstrip there is a ground plane only at the bottom of the substrate. In these formats it is the usual practice to connect the top pads of the via fence with a metal track (see figure 2). This still does not completely fence off the field as can be done in stripline. In stripline the field can only propagate between the ground planes, but in microstrip it is able to leak over the top of the via fence. 
Nevertheless, connecting the top pads improves isolation by . In some technologies it is more convenient to form the fence from conducting posts rather than vias. Isolation can be further improved by placing a metal wall on top of the via fence. These walls commonly form part of the device enclosure. The large holes in the via fences seen in figures 1 and 5 are screw holes for clamping these walls in place. The wall casting belonging to this circuit is shown in figure 3. The design of the fence needs to consider the size and spacing of the vias. Ideally, vias should act as short circuits, but they are not ideal and a via equivalent circuit can be modelled as a shunt inductance. Sometimes, a more complex model is required such as the equivalent circuit shown in figure 4. L1 is due to the inductance of the pads and C is the capacitance between them. R and L2 are, respectively, the resistance and inductance of the via hole metallisation. Resonances must be considered, in particular the parallel resonance of C and L2 will allow electromagnetic waves to pass at the resonant frequency. This resonance needs to be placed outside the operating frequencies of the equipment concerned. Spacing of the fences needs to be small in comparison to a wavelength (λ) in the substrate dielectric so as to make the fence appear solid to impinging waves. If too large, waves will be able to pass through the gaps. A common rule of thumb is to make the spacing less than λ/20 at the maximum operating frequency. Applications Via fences are used primarily at RF and microwave frequencies wherever planar formats are being applied. They are used in printed circuit technologies such as microstrip, ceramic technologies such as low temperature co-fired ceramic, monolithic microwave integrated circuits, and system-in-a-package technology. They are especially important in isolating circuit units operating at different frequencies. Also called via stitching, via fences can be used around the edge of a printed circuit board, an example can be seen in figure 5. This may be done to prevent electromagnetic interference with other equipment, or even to block radiation re-entering from elsewhere on the same circuit. Via fences are also used in post-wall waveguide, also known as laminated waveguide (LWG). In LWG, two parallel via fences form the sidewalls of a waveguide. Between them, and the upper and lower groundplanes of the substrate, is an electromagnetically isolated space. There is no electrical conductor within this space, but electromagnetic waves can exist within the enclosed dielectric material of the substrate and their direction of propagation is guided by the LWG. This technology is typically used at millimetre band frequencies and consequently dimensions are quite small. Furthermore, good isolation requires that the vias are closely spaced. Typically, isolation is required between guides, that is per fence. A typical W band () fence specification meeting this requirement in LWG is vias spaced between centres. This can be challenging to manufacture, and a higher density of vias is sometimes achieved by constructing the fence from two staggered rows of vias. Advantages and disadvantages Via fences are cheap and convenient. When used on planar formats they require no additional processes to manufacture. On a printed circuit for instance, they are made in the same process that creates the track patterns. However, via fences are not able to approach the isolation achievable with unbroken metal walls. 
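The two design calculations implied above, the λ/20 spacing rule in the substrate dielectric and the parallel resonance of the via model's C and L2, can be sketched as follows. The component values and the 10 GHz, εr = 4.4 example are assumptions chosen for illustration, not figures from the article.

```python
import math

C_LIGHT = 299_792_458.0  # speed of light in vacuum, m/s

def max_via_spacing(f_max_hz, eps_r, fraction=1 / 20):
    """Rule-of-thumb maximum via spacing: a fraction (here lambda/20) of the
    wavelength in the substrate dielectric at the highest operating frequency."""
    wavelength = C_LIGHT / (f_max_hz * math.sqrt(eps_r))
    return fraction * wavelength

def parallel_resonance_hz(l2_henry, c_farad):
    """Resonant frequency of the C / L2 parallel branch of the via equivalent
    circuit; the fence becomes transparent near this frequency, so it should
    lie outside the operating band."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l2_henry * c_farad))

# Example: a 10 GHz design on an eps_r = 4.4 substrate (assumed values).
print(f"via spacing    <= {max_via_spacing(10e9, 4.4) * 1e3:.2f} mm")
print(f"C/L2 resonance ~  {parallel_resonance_hz(0.5e-9, 0.2e-12) / 1e9:.1f} GHz")
```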
Via fences use up a lot of valuable substrate real estate and so will increase the overall size of the assembly. Via fences too close to the line being guarded can degrade the isolation otherwise achievable. In stripline, a rule of thumb is to place the fences at least four times the trace to groundplane distance away from the line being guarded. References Bibliography Archambeault, Bruce, PCB Design for Real-World EMI Control, Springer, 2002 . Bahl, Inder, Lumped Elements for RF and Microwave Circuits, Artech House, 2003 . Harper, Charles A., High Performance Printed Circuit Boards, McGraw Hill Professional, 2000 . Joffe, Elya B.; Lock, Kai-Song, Grounds for Grounding: A Circuit to System Handbook, John Wiley & Sons, 2010 . Pao, Hseuh-Yuan; Aguirre, Jerry, "Phased array", in Duixian Liu; Pfeiffer, Ulrich; Grzyb, Janusz; Gaucher, Brian; Advanced Millimeter-wave Technologies: Antennas, Packaging and Circuits, John Wiley & Sons, 2009 . Ponchak, G.E.; Tentzeris, E.M.; Papapolymerou, J., "Coupling between microstrip lines embedded in polyimide layers for 3D-MMICs on Si", IEE Proceedings - Microwaves, Antennas and Propagation, volume 150, issue 5, pages 344–350, October 2003. Microwave technology Distributed element circuits Electronic design Electronics manufacturing
Via fence
[ "Engineering" ]
1,713
[ "Electronic design", "Electronic engineering", "Distributed element circuits", "Electronics manufacturing", "Design" ]
4,912,409
https://en.wikipedia.org/wiki/Bayonet%20mount
A bayonet mount (mainly as a method of mechanical attachment, such as fitting a lens to a camera using a matching lens mount) or bayonet connector (for electrical use) is a fastening mechanism consisting of a cylindrical male side with one or more radial pegs, and a female receptor with matching L-shaped slot(s) and with spring(s) to keep the two parts locked together. The slots are shaped like a capital letter L with serif (a short upward segment at the end of the horizontal arm); the peg slides into the vertical arm of the L, rotates across the horizontal arm, then is pushed slightly upwards into the short vertical "serif" by the spring; the connector is no longer free to rotate unless pushed down against the spring until the peg is out of the "serif". The bayonet mount is the standard light bulb fitting in the United Kingdom and in many countries that were members of the British Empire including Australia, Hong Kong, Fiji, India, Pakistan, Sri Lanka, Ireland and New Zealand, parts of the Middle East and Africa and, historically, in France and Greece. Design To couple the two parts, the pin(s) on the male are aligned with the slot(s) on the female and the two pushed together. Once the pins reach the bottom of the slot, one or both parts are rotated so that the pin slides along the horizontal arm of the L until it reaches the "serif". The spring then pushes the male connector up into the "serif" to keep the pin locked into place. A practised user can connect them quickly and, unlike screw connectors, they are not subject to cross-threading. To disconnect, the two parts are pushed together to move the pin out of the "serif" while twisting in the opposite direction than for connecting, and then pulling apart. The strength of the joint comes from the strength of the pins and the L slots, and the spring. To disengage unintentionally, the pins must break, the sleeve into which the connector slides must be distorted or torn enough to free the pins, or the spring must fail and allow the connector to be pushed down and rotate—for example due to vibration. It is possible to push down the connector and rotate it, but not far enough to engage and lock; it will stay in place temporarily, but accidental disconnection is very likely. Bayonet electrical connectors are used in the same applications where other connectors are used, to transmit either power or signals. Bayonet connections can be made faster than screw connections, and more securely than push-fit connections; they are more resistant to vibration than both these types. They may be used to connect two cables, or to connect a cable to a connector on the panel of a piece of equipment. The coupling system is usually made of two bayonet ramps machined on the external side of the receptacle connector and 2 stainless steel studs mounted inside the plug connector’s coupling nut. Several classes of electrical cable connectors, including audio, video, and data cables use bayonet connectors. Examples include BNC, C, and ST connectors. (The BNC connector is not exactly as described in this article, as the male, not female, connector has the slots and spring.) The GU-10 light fittings in common use for both halogen and LED miniature spotlight lamps have a similar means of connection but the retaining pins are fitted to the end of the lamp and also double as the electrical contacts. The pins are cylindrical but the ends have a larger diameter, resembling a T when viewed from the side. 
The receptacle has two slots resembling curved keyholes which have holes at one end sized to accept the pin ends. The lamp is inserted into the receptacle by placing the pins in the holes and rotating in a clockwise direction. Note that, unlike the traditional bayonet fitting, the retaining springs act laterally on the pins so no inward pressure is required to lock the lamp in the fitting. GU-10 fittings are available in heat-resistant form for use with halogen lamps which generate heat. History The first documented use of this type of fitting (without the name "bayonet") may be by al-Jazari in the 13th century, who used it to mount candles into his candle-clocks. This type of fitting was later used for soldiers who needed to quickly mount bayonets to the ends of their rifles, hence the name. Light bulbs The bayonet light bulb mount is the standard fitting in many former members of the British Empire including the United Kingdom, Australia, India, Ireland, and New Zealand, Hong Kong, as well as parts of the Middle East and Africa (although not Canada, which primarily uses Edison screw sockets along with the United States and Mexico). The standard size is B22d-2, often referred to in the context of lighting as simply BC or B22. Older installations in some other countries, including France and Greece use this base. First developed by St. George Lane Fox-Pitt in the UK and improved upon by the Brush Electric Company from the late 1870s onward, standard bulbs have two pins on opposite sides of the cap; however, some specialized bulbs have three pins (cap designation B22d-3) to prevent use in domestic light fittings. Examples of three-pin bulbs are found in mercury street lamps and fireglow bulbs in some older models of electric radiative heater. Older railway carriages in the UK also made use of a 3 pin bulb base to discourage theft. Bayonet cap bulbs are also very common worldwide in applications where vibration may loosen screw-mount bulbs, such as automotive lighting and other small indicators, and in many flashlights. In many other countries the Edison screw (E) base is used for lighting. Some bulbs may have slightly offset lugs to ensure they can be only inserted in one orientation, for example the 1157 automobile tail-light which has two different filaments to act as both a tail light and a brake light. In this bulb each filament has a different brightness and is connected to a separate contact on the bottom of the base; the two contacts are symmetrically positioned about the axis of the base, but the pins are offset so that the bulb can only be fitted in the correct orientation. Newer bulbs use a wedge base which can be inserted either way without complication. Some special-purpose bulbs, such as infra-red, have 3 pins 120 degrees apart to prevent them being used in any but the intended socket. Bayonet bases or caps are often abbreviated to BA, often with a number after. The number refers to the diameter of the base (e.g., BA22 is a 22 mm diameter bayonet cap lamp). BA15, a 15 mm base, can also be referred to as SBC standing for small bayonet cap. The lower-case letter s or d specifies whether the bulb has single or double contacts. The entries from the table below pertain to IEC 60061 "Lamp caps and holders together with gauges for the control of interchangeability and safety" and to DIN 49xxx. These are the available sizes in the UK: Of these, only the BC (BA22d, often abbreviated as B22) is widely used in homes. 
Formerly, some linear fluorescent lamps in the UK used BA22d end caps, owing to material shortages arising from the Second World War, which prevented the development of the bi-pin cap design that was becoming commonplace elsewhere in the world; notably, in the United States. Production of these lamps continued until the early 1980s, although manufacturers had produced adaptors that permitted bi-pin lamps being used in older luminaires (equipped with bayonet lamp holders) since the 1960s. The BA20d (sometimes called a Bosch fitting) was once a common automotive (twin filament) headlamp fitting but has largely been superseded by more modern, higher-rated H-series sockets and is only used for some lower-powered applications such as combined automotive tail and stop lamps. In Japan, the JIS C 8310 “hook ceiling” bayonet mount is quite common. It is designed to both provide power and carry the weight of a lamp. A similar concept existed in BS 7001 as the slide-in “luminaire-supporting coupler” (LSC), but its prominence is unknown. Other uses Many cameras with interchangeable lenses use a bayonet lens mount to allow lenses to be changed rapidly and locked accurately in position. Camera lens mounts usually employ stronger flattened tabs rather than pins, though their function is the same. A bayonet mount is often used to mate a cylinder with a base in cylindrical packaging such as that for CD spindles. See also Bi-pin lamp base Storz Arri bayonet Joseph Swan BNC connector Edison screw References Further reading IEC 61184: Bayonet lampholders, International Electrotechnical Commission, 1997. (also: BS EN 61184). Specifies requirements and tests for the B15 and B22 bayonet holders for light bulbs used in some Commonwealth countries External links Line-voltage Socket Design Competition (GU24) Types of lamp Fasteners Electrical connectors Mechanical standards
Bayonet mount
[ "Engineering" ]
1,916
[ "Construction", "Mechanical standards", "Fasteners", "Mechanical engineering" ]
4,913,827
https://en.wikipedia.org/wiki/Uranium%20mining
Uranium mining is the process of extraction of uranium ore from the ground. Over 50,000 tons of uranium were produced in 2019. Kazakhstan, Canada, and Australia were the top three uranium producers and together accounted for 68% of world production. Other countries producing more than 1,000 tons per year included Namibia, Niger, Russia, Uzbekistan and China. Nearly all of the world's mined uranium is used to power nuclear power plants. Historically, uranium was also used in applications such as uranium glass or ferrouranium, but those applications have declined owing to the radioactivity and toxicity of uranium, and such demand is nowadays mostly met from the plentiful, cheap supply of depleted uranium, which is also used in uranium ammunition. In addition to being cheaper, depleted uranium is also less radioactive than natural uranium because of its lower content of the short-lived isotopes uranium-234 and uranium-235. Uranium is mined by in-situ leaching (57% of world production) or by conventional underground or open-pit mining of ores (43% of production). During in-situ mining, a leaching solution is pumped down drill holes into the uranium ore deposit, where it dissolves the ore minerals. The uranium-rich fluid is then pumped back to the surface and processed to extract the uranium compounds from solution. In conventional mining, ores are processed by grinding the ore materials to a uniform particle size and then treating the ore to extract the uranium by chemical leaching. The milling process commonly yields a dry powder consisting of natural uranium, "yellowcake", which is nowadays commonly sold on the uranium market as U3O8. While some nuclear power plants – most notably heavy water reactors like the CANDU – can operate with natural uranium (usually in the form of uranium dioxide), the vast majority of commercial nuclear power plants and many research reactors require uranium enrichment, which raises the content of uranium-235 from the natural 0.72% to 3–5% (for use in light water reactors) or even higher, depending on the application. Enrichment requires conversion of the yellowcake into uranium hexafluoride and production of the fuel (again usually uranium dioxide, but sometimes uranium carbide, uranium hydride or uranium nitride) from that feedstock. History Early uranium mining Before 1789, when Martin Heinrich Klaproth discovered the element, uranium compounds produced included nitrate, sulfate, phosphate, acetate and potassium- and sodium-diuranate. Klaproth detected the element in pitchblende from the George Wagsfort mine, Ore Mountains, and established commercial use as glass coloring. Pitchblende from these mountains was mentioned as early as 1565, and 110 t of uranium was produced from 1825 until 1898. In 1852, the uranium mineral autunite from the Massif Central was identified. Around 1850, uranium mining began in Joachimsthal, Bohemia, where more than 620 t of uranium metal (tU) was produced between 1850 and 1898, with 10,000 tU produced before closure in 1968. In 1871, uranium ore mining began in Central City, Colorado, where 50 t were mined before 1895. In 1873, uranium mining began at the South Terras mine, St Stephen-in-Brannel, Cornwall, producing most of the 300 tU from that area in the 19th century. In 1898, carnotite was first mined in the Uravan Mineral Belt, yielding 10 tU annually. In 1898, Pierre Curie and Marie Skłodowska-Curie took delivery of 1 t of pitchblende from St. Joachimsthal, from which Marie identified the element radium. 
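The natural-uranium feed and the separative work needed to reach a given enrichment level follow from standard mass-balance formulas. The sketch below is a minimal illustration of that arithmetic; the 4.5% product assay and 0.25% tails assay are assumed example values, not figures taken from this article.

```python
import math

def value_fn(x: float) -> float:
    """Separation potential V(x) used in ideal-cascade SWU accounting."""
    return (1 - 2 * x) * math.log((1 - x) / x)

def feed_and_swu(product_kg: float, xp: float, xf: float = 0.0072, xw: float = 0.0025):
    """Natural-uranium feed (kg) and separative work (SWU) to produce
    `product_kg` of uranium at assay `xp` from feed assay `xf` with tails assay `xw`."""
    feed = product_kg * (xp - xw) / (xf - xw)
    tails = feed - product_kg
    swu = product_kg * value_fn(xp) + tails * value_fn(xw) - feed * value_fn(xf)
    return feed, swu

# Example: 1 kg of uranium enriched to 4.5% for a light water reactor.
feed, swu = feed_and_swu(1.0, 0.045)
print(f"Requires about {feed:.1f} kg of natural uranium and {swu:.1f} SWU")  # ~9 kg, ~7 SWU
```

Under these assumptions roughly nine kilograms of natural uranium are consumed per kilogram of low-enriched product, which is why mined tonnages are much larger than the tonnage of fuel actually loaded into reactors.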
Pierre advocated its usage as a cancer cure, which fostered a spa business for that town. In 1913, the Shinkolobwe deposit in Katanga Province was discovered. In 1931, the Port Radium deposit was discovered. Other significant discoveries included Beira Province, Tyuya Muyun, and Radium Hill. Atomic age In 1922, Union Minière du Haut Katanga started producing medicinal radium from the Shinkolobwe mine, but closed down in the late 1930s as the radium market diminished. In May 1940, the Nazis invaded Belgium and seized Union Minière's uranium ore stored there. On 18 September 1942, 1250 t of Shinkolobwe uranium ore for the Manhattan Project was purchased from Union Minière's Edgar Sengier, who had stockpiled the ore in an Archer Daniels Midland warehouse near the Bayonne Bridge, Staten Island. In 1943, Sengier reopened the Shinkolobwe mine with U.S. Army Corps of Engineers resources and a $13 million investment from the United States. Sengier reported that uranium ore had been extracted from the mine down to a depth of 79 meters, but that another 101 meters of ore was available for extraction. This amounted to 10,000 tons of up to 60% triuranium octoxide. The project also acquired most of the production from the Eldorado Mine (Northwest Territories). According to Richard Rhodes, referring to German uranium research, "Auer, the thorium specialists ... delivered the first ton of pure uranium oxide processed from Joachimsthal ores to the War Office in January 1940. In June 1940 ... Auer ordered sixty tons of refined uranium oxide from the Union Miniére in occupied Belgium." While the Soviet Republics of Kazakhstan and the RSFSR would later become some of the leading uranium producers in the world, immediately after the end of World War II the availability of large uranium deposits within the USSR was not yet known, so the Soviets developed immense mining operations in their satellite states East Germany and Czechoslovakia, which had known uranium deposits in the Ore Mountains. The deliberately opaquely named SDAG Wismut (the German word "Wismut", meaning bismuth, was chosen to give the impression of prospecting for a metal the Soviets were not actually after) became the biggest employer in the Saxon Ore Mountains, and remote mining towns like Johanngeorgenstadt swelled to ten times their population in a few years. The mining cost immense amounts of money, and miners were on the one hand subject to heavier repression and surveillance, but on the other hand were allowed a more generous supply of consumer goods than other East Germans. While production was never able to compete with global uranium market prices, the dual-use nature of the mined material, together with the possibility of paying miners in soft currency while selling the uranium for hard currency (or substituting for imports that would otherwise have had to be paid for in hard currency), tipped the scales in favor of continuing mining operations throughout the Cold War. After German reunification, mining was wound down and the arduous task of rehabilitating the land impacted by mining was begun. The seventeen towns and mines under Wismut's control contributed 50 percent of the uranium used in the Soviets' first atomic bomb, Joe-1, and 80 percent of the uranium used in the Soviet nuclear program. Of the 150,000 laborers, 1281 were killed in accidents and 20,000 suffered injuries. After Stalin's death in 1953, the Red Army turned over control of production to SDAG, and prison laborers were released, reducing the population of laborers to 45,000. At its peak in 1953, the St. 
Joachimsthal mines had 16,100 inmates, half of whom were Soviet political prisoners. By 1975, 75% of global uranium ore production came from quartz-pebble conglomerates and sandstones located in the Elliot Lake area of Canada, Witwatersrand, and the Colorado Plateau. In 1990, 55% of world production came from underground mines, but this shrank to 33% by 1999. From 2000, new Canadian mines again increased the proportion of underground mining, and with Olympic Dam it is now 37%. In situ leach (ISL, or ISR) mining has been steadily increasing its share of the total, mainly due to Kazakhstan. In 2009, top producing mines included the McArthur River uranium mine at 7400 tU, the Ranger Uranium Mine at 4423 tU, the Rössing uranium mine at 3574 tU, the Moiynkum Desert mines at 3250 tU, the Streltsovsk mine at 3003 tU, the Olympic Dam mine at 2981 tU, the Arlit mine at 1808 tU, the Rabbit Lake mine at 1400 tU, the Akouta mine at 1435 tU, and the McClean Lake mine at 1400 tU. The world's largest deposits include the Olympic Dam mine at 295,000 tU, the Imouraren mine at 183,520 tU, the McArthur River mine at 128,900 tU, the Streltsovsk mine at 118,341 tU, the Novokonstantinovka mines at 93,630, the Cigar Lake Mine at 80,500 tU, Uzbekistan mines at 76,000 tU, the Elkon mine at 71,300 tU, the Brazilian Itataia complex at 67,240 tU, the Marenica project at 62,856 tU, the Langer Heinrich Mine at 60,830 tU, the Dominion mine at 55,753 tU, the Inkai Uranium Project at 51,808 tU, the Kiggavik project at 51,574 tU, the Rössing mine at 50,657 tU, the Australian Yeleerie project at 44,077, and the Trekkopje mine at 42,243 tU. Deposit types Many different types of uranium deposits have been discovered and mined. There are mainly three types of uranium deposits including unconformity-type deposits, namely paleoplacer deposits and sandstone-type, also known as roll front type deposits. Uranium deposits are classified into 15 categories according to their geological setting and the type of rock in which they are found. This geological classification system is determined by the International Atomic Energy Agency (IAEA). Uranium is also contained in seawater but at present prices on the uranium market, costs would have to be lowered by a factor of 3–6 to make its recovery economical. Sedimentary Uranium deposits in sedimentary rocks include those in sandstone (in Canada and the western US), Precambrian unconformities (in Canada), phosphate, Precambrian quartz-pebble conglomerate, collapse breccia pipes (see Arizona breccia pipe uranium mineralization), and calcrete. Sandstone uranium deposits are generally of two types. Roll-front type deposits occur at the boundary between the up dip and oxidized part of a sandstone body and the deeper down dip reduced part of a sandstone body. Peneconcordant sandstone uranium deposits, also called Colorado Plateau–type deposits, most often occur within generally oxidized sandstone bodies, often in localized reduced zones, such as in association with carbonized wood in the sandstone. Precambrian quartz-pebble conglomerate-type uranium deposits occur only in rocks older than two billion years old. The conglomerates also contain pyrite. These deposits have been mined in the Blind River–Elliot Lake district of Ontario, Canada, and from the gold-bearing Witwatersrand conglomerates of South Africa. Unconformity-type deposits make up about 33% of the World Outside Centrally Planned Economies Areas' (WOCA) uranium deposits. 
Igneous or hydrothermal Hydrothermal uranium deposits encompass the vein-type uranium ores. Vein-type hydrothermal uranium deposits represent epigenetic concentrations of uranium minerals that typically fill breccias, fractures, and shear zones. Many studies have sought to identify the source of uranium in hydrothermal vein-type deposits; the potential sources remain uncertain, but are thought to include preexisting rocks that have been broken down by weathering, together with material derived from areas of long-term sediment build-up. The South China Block is an example of a region that has relied on vein-type hydrothermal uranium deposits to meet demand for the past half century. Igneous deposits include nepheline syenite intrusives at Ilimaussaq, Greenland; the disseminated uranium deposit at Rossing, Namibia; uranium-bearing pegmatites, and the Aurora crater lake deposit of the McDermitt Caldera in Oregon. Disseminated deposits are also found in the states of Washington and Alaska in the US. Breccia Breccia uranium deposits are found in rocks that have been broken by tectonic fracturing or weathering. Breccia uranium deposits are most common in India, Australia and the United States. A large mass of breccia is called a breccia pipe or chimney and is composed of broken rock forming an irregular, almost cylindrical shape. The origin of breccia pipes is uncertain, but they are thought to form at the intersections of faults. Where these formations are found within solid host rock together with finely crushed material known as rock flour, they are often sites for copper or uranium mining. Copper Creek, Arizona, is home to approximately 500 mineralized breccia pipes, and Cripple Creek, Colorado, also contains breccia pipe ore deposits associated with a volcanic pipe. Olympic Dam mine, the world's largest uranium deposit, was discovered by Western Mining Corporation in 1975 and is owned by BHP. Exploration Uranium prospecting is similar to other forms of mineral exploration with the exception of some specialized instruments for detecting the presence of radioactive isotopes. The Geiger counter was the original radiation detector, recording the total count rate from all energy levels of radiation. Ionization chambers and Geiger counters were first adapted for field use in the 1930s. The first transportable Geiger–Müller counter (weighing 25 kg) was constructed at the University of British Columbia in 1932. H.V. Ellsworth of the GSC built a lighter weight, more practical unit in 1934. Subsequent models were the principal instruments used for uranium prospecting for many years, until Geiger counters were replaced by scintillation counters. The use of airborne detectors to prospect for radioactive minerals was first proposed by G. C. Ridland, a geophysicist working at Port Radium in 1943. In 1947, the earliest recorded trial of airborne radiation detectors (ionization chambers and Geiger counters) was conducted by Eldorado Mining and Refining Limited (a Canadian Crown corporation since sold to become Cameco Corporation). The first patent for a portable gamma-ray spectrometer was filed by Professors Pringle, Roulston & Brownell of the University of Manitoba in 1949, the same year that they tested the first portable scintillation counter on the ground and in the air in northern Saskatchewan. Airborne gamma-ray spectrometry is now the accepted leading technique for uranium prospecting, with worldwide applications for geological mapping, mineral exploration and environmental monitoring. 
Airborne gamma-ray spectrometry used specifically for uranium measurement and prospecting must account for a number of factors like the distance between the source and the detector and the scattering of radiation through the minerals, surrounding earth and even in the air. In Australia, a Weathering Intensity Index has been developed to help prospectors based on the Shuttle Radar Topography Mission (SRTM) elevation and airborne gamma-ray spectrometry images. A deposit of uranium, discovered by geophysical techniques, is evaluated and sampled to determine the amounts of uranium materials that are extractable at specified costs from the deposit. Uranium reserves are the amounts of ore that are estimated to be recoverable at stated costs. As prices rise or technology allows for lower cost of recovery of known, previously uneconomic, deposits, reserves increase. For uranium this effect is particularly pronounced as the biggest currently uneconomic reserve – uranium extraction from seawater – is bigger than all known land based resources of uranium combined. From 2008 through at least 2024, the only four countries that have reported non-domestic uranium exploration and development expenses are: China, Japan, France, and Russia. The U.S. is investigating whether China is circumventing a ban on Russian uranium imports by exporting its uranium to the U.S. while importing enriched uranium from Russia. This inquiry follows a spike in Chinese uranium exports to the U.S. after the December 2023 ban, which aimed to cut off funding for Russia's war in Ukraine. Mining techniques As with other types of hard rock mining there are several methods of extraction. In 2016, the percentage of the mined uranium produced by each mining method was: in-situ leach (49.7 percent), underground mining (30.8 percent), open pit (12.9 percent), heap leaching (0.4 percent), co-product/by-product (6.1%). The remaining 0.1% was derived as miscellaneous recovery. Open pit In open pit mining, overburden is removed by drilling and blasting to expose the ore body, which is then mined by blasting and excavation using loaders and dump trucks. Workers spend much time in enclosed cabins thus limiting exposure to radiation. Water is extensively used to suppress airborne dust levels. Groundwater is an issue in all types of mining, but in open pit mining, the usual way of dealing with it – i.e. when the target mineral is found below the natural water table – is to lower the water table by pumping off the water. The ground may settle considerably when groundwater is removed and may again move unpredictably when groundwater is allowed to rise again after mining is concluded. Land reclamation after mining takes different routes, depending on the amount of material removed. Due to the high energy density of uranium, it is often sufficient to fill in the former mine with the overburden, but in case of a mass deficit exceeding the height difference between the previous surface level and the natural water table, artificial lakes develop when groundwater removal ceases. If sulfites, sulfides or sulfates are present in the now-exposed rocks acid mine drainage can be a concern for those newly developing bodies of water. Mining companies are now required by law to establish a fund for future reclamation while mining is ongoing and those funds are usually deposited in such a way as to be unaffected by bankruptcy of the mining company. 
Underground If the uranium is too far below the surface for open pit mining, an underground mine might be used with tunnels and shafts dug to access and remove uranium ore. Underground uranium mining is in principle no different from any other hard rock mining and other ores are often mined in association (e.g., copper, gold, silver). Once the ore body has been identified a shaft is sunk in the vicinity of the ore veins, and crosscuts are driven horizontally to the veins at various levels, usually every 100 to 150 metres. Similar tunnels, known as drifts, are driven along the ore veins from the crosscut. To extract the ore, the next step is to drive tunnels, known as raises when driven upwards and winzes when driven downward, through the deposit from level to level. Raises are subsequently used to develop the stopes where the ore is mined from the veins. The stope, which is the workshop of the mine, is the excavation from which the ore is extracted. Three methods of stope mining are commonly used. In the "cut and fill" or "open stoping" method, the space remaining following removal of ore after blasting is filled with waste rock and cement. In the "shrinkage" method, only sufficient broken ore is removed via the chutes below to allow miners working from the top of the pile to drill and blast the next layer to be broken off, eventually leaving a large hole. The method known as "room and pillar" is used for thinner, flatter ore bodies. In this method the ore body is first divided into blocks by intersecting drives, removing ore while so doing, and then systematically removing the blocks, leaving enough ore for roof support. The health effects discovered from radon exposure in unventilated uranium mining prompted the switch away from uranium mining via tunnel mining towards open cut and in-situ leaching technology, a method of extraction that does not produce the same occupational hazards, or mine tailings, as conventional mining. With regulations in place to ensure the use of high volume ventilation technology if any confined space uranium mining is occurring, occupational exposure and mining deaths can be largely eliminated. The Olympic Dam and Canadian underground mines are ventilated with powerful fans with radon levels being kept at a very low to practically "safe level" in uranium mines. Naturally occurring radon in other, non-uranium mines, also may need control by ventilation. Heap leaching Heap leaching is an extraction process by which chemicals (usually sulfuric acid) are used to extract the economic element from ore which has been mined and placed in piles on the surface. Heap leaching is generally economically feasible only for oxide ore deposits. Oxidation of sulfide deposits occurs during the geological process called weathering. Therefore, oxide ore deposits are typically found close to the surface. If there are no other economic elements within the ore a mine might choose to extract the uranium using a leaching agent, usually a low molar sulfuric acid. If the economic and geological conditions are right, the mining company will level large areas of land with a small gradient, layering it with thick plastic (usually HDPE or LLDPE), sometimes with clay, silt or sand beneath the plastic liner. The extracted ore will typically be run through a crusher and placed in heaps atop the plastic. The leaching agent will then be sprayed on the ore for 30–90 days. As the leaching agent filters through the heap, the uranium will break its bonds with the oxide rock and enter the solution. 
The solution then filters along the gradient into collecting pools, from which it is pumped to on-site plants for further processing. Only some of the uranium (commonly about 70%) is actually extracted. The uranium concentrations within the solution are very important for the efficient separation of pure uranium from the acid. As different heaps will yield different concentrations, the solution is pumped to a mixing plant that is carefully monitored. The properly balanced solution is then pumped into a processing plant where the uranium is separated from the sulfuric acid. Heap leach is significantly cheaper than traditional milling processes. The low costs can make lower grade ore economically feasible (given that it is the right type of ore body). US environmental law requires that the surrounding ground water be continually monitored for possible contamination. The site must also continue to be monitored even after the mine has shut down. In the past, mining companies would sometimes go bankrupt, leaving the responsibility of mine reclamation to the public. 21st century additions to US mining law require that companies set aside the money for reclamation before the beginning of the project. The money is held by the public to ensure adherence to environmental standards if the company were ever to go bankrupt. In-situ leaching In-situ leaching (ISL), also known as solution mining, or in-situ recovery (ISR) in North America, involves leaving the ore where it is in the ground, and recovering the minerals from it by dissolving them and pumping the pregnant solution to the surface where the minerals can be recovered. Consequently, there is little surface disturbance and no tailings or waste rock generated. However, the orebody needs to be permeable to the liquids used, and located so that they do not contaminate ground water away from the orebody. Uranium ISL uses the native groundwater in the orebody, which is fortified with a complexing agent and in most cases an oxidant. It is then pumped through the underground orebody to recover the minerals in it by leaching. Once the pregnant solution is returned to the surface, the uranium is recovered in much the same way as in any other uranium plant (mill). In Australian ISL mines (Beverley, Four Mile and Honeymoon Mine) the oxidant used is hydrogen peroxide and the complexing agent sulfuric acid. Kazakh ISL mines generally do not employ an oxidant but use much higher acid concentrations in the circulating solutions. ISL mines in the USA use an alkali leach due to the presence of significant quantities of acid-consuming minerals such as gypsum and limestone in the host aquifers. Any more than a few percent carbonate minerals means that alkali leach must be used in preference to the more efficient acid leach. The Australian government has published a best practice guide for in situ leach mining of uranium, which is being revised to take account of international differences. Seawater recovery The uranium concentration in sea water is low, approximately 3.3 parts per billion or 3.3 micrograms per liter of seawater. But the quantity of this resource is gigantic, and some scientists believe it is practically limitless with respect to world-wide demand. That is to say, if even a portion of the uranium in seawater could be used, the entire world's nuclear power generation fuel could be provided over a long time period. Some critics claim this statistic is exaggerated. 
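The scale implied by that concentration can be illustrated with a quick back-of-the-envelope calculation. The total ocean mass used below is an assumed textbook value, not a figure given in this article.

```python
# Rough scale of the seawater uranium resource implied by the 3.3 ppb figure above.
ocean_mass_kg = 1.4e21            # assumed approximate total mass of the oceans, kg
uranium_concentration = 3.3e-9    # 3.3 parts per billion by mass
uranium_tonnes = ocean_mass_kg * uranium_concentration / 1_000
print(f"Roughly {uranium_tonnes:.1e} tonnes of uranium dissolved in the oceans")  # ~4.6e9 tonnes
```

That is several hundred times the known conventional resources discussed later in the article, which is why the resource is described as gigantic even though the concentration is tiny.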
Although research and development for recovery of this low-concentration element by inorganic adsorbents such as titanium oxide compounds has occurred since the 1960s in the United Kingdom, France, Germany, and Japan, this research was halted due to low recovery efficiency. At the Takasaki Radiation Chemistry Research Establishment of the Japan Atomic Energy Research Institute (JAERI Takasaki Research Establishment), research and development has continued culminating in the production of adsorbent by irradiation of polymer fiber. Adsorbents have been synthesized that have a functional group (amidoxime group) that selectively adsorbs heavy metals, and the performance of such adsorbents has been improved. Uranium adsorption capacity of the polymer fiber adsorbent is high, approximately tenfold greater in comparison to the conventional titanium oxide adsorbent. One method of extracting uranium from seawater is using a uranium-specific nonwoven fabric as an adsorbent. The total amount of uranium recovered from three collection boxes containing 350 kg of fabric was >1 kg of yellowcake after 240 days of submersion in the ocean. The experiment by Seko et al. was repeated by Tamada et al. in 2006. They found that the cost varied from ¥15,000 to ¥88,000 depending on assumptions and "The lowest cost attainable now is ¥25,000 with 4g-U/kg-adsorbent used in the sea area of Okinawa, with 18 repetitionuses." With the May, 2008 exchange rate, this was about $240/kg-U. In 2012, ORNL researchers announced the successful development of a new adsorbent material dubbed "HiCap", which vastly outperforms previous best adsorbents, which perform surface retention of solid or gas molecules, atoms or ions. "We have shown that our adsorbents can extract five to seven times more uranium at uptake rates seven times faster than the world's best adsorbents," said Chris Janke, one of the inventors and a member of ORNL's Materials Science and Technology Division. HiCap also effectively removes toxic metals from water, according to results verified by researchers at Pacific Northwest National Laboratory. In 2012 it was estimated that this fuel source could be extracted at 10 times the current price of uranium. In 2014, with the advances made in the efficiency of seawater uranium extraction, it was suggested that it would be economically competitive to produce fuel for light water reactors from seawater if the process was implemented at large scale. Uranium extracted on an industrial scale from seawater would constantly be replenished by both river erosion of rocks and the natural process of uranium dissolved from the surface area of the ocean floor, both of which maintain the solubility equilibria of seawater concentration at a stable level. Some commentators have argued that this strengthens the case for nuclear power to be considered a renewable energy. Co-product/by-product Uranium can be recovered as a by-product along with other co-products such as molybdenum, vanadium, nickel, zinc and petroleum products. Uranium is also often found in phosphate minerals, where it has to be removed because phosphate is mostly used for fertilizers. Phosphogypsum is a waste product from phosphate mining that can contain significant amounts of uranium and radium. Coal fly ash also contains significant amounts of uranium and has been suggested as a source for uranium extraction. Resources Uranium occurs naturally in many rocks, and even in seawater. 
However, like other metals, it is seldom sufficiently concentrated to be economically recoverable. Like any resource, uranium cannot be mined at any desired concentration. No matter the technology, at some point it is too costly to mine lower grade ores. Mining companies usually consider concentrations greater than 0.075% (750 ppm) as ore, or rock economical to mine at current uranium market prices. There are around 40 trillion tons of uranium in Earth's crust, but most is distributed at trace concentration over its mass. Estimates of the amount concentrated into ores affordable to extract for under $130 per kg can be less than a millionth of that total. Uranium-235, the fissile isotope of uranium used in nuclear reactors, makes up about 0.7% of uranium from ore. It is the only naturally occurring isotope capable of directly generating nuclear power. While uranium-235 can be "bred" from , a natural decay product of present at 55 ppm in all natural uranium samples, uranium-235 is ultimately a finite non-renewable resource. Due to the currently low price of uranium, the majority of commercial light water reactors operate on a "once through fuel cycle" which leaves virtually all the energy contained in the original , which makes up over 99% of natural uranium, unused. Nuclear reprocessing can recover part of that energy by producing MOX fuel or Remix Fuel for use in conventional power generating light water reactors. This technology is currently used at industrial scale in France, Russia and Japan. However, at current uranium prices, this is widely deemed uneconomical if only the "input" side is considered. Breeder reactor technology could allow the current reserves of uranium to provide power for humanity for billions of years, thus making nuclear power a sustainable energy. Reserves Reserves are the most readily available resources. About 96% of the global uranium reserves are found in these ten countries: Australia, Canada, Kazakhstan, South Africa, Brazil, Namibia, Uzbekistan, the United States, Niger, and Russia. The known uranium resources represent a higher level of assured resources than is normal for most minerals. Further exploration and higher prices will certainly, on the basis of present geological knowledge, yield further resources as present ones are used up. There was very little uranium exploration between 1985 and 2005, so the significant increase in exploration effort that we are now seeing could readily double the known economic resources. On the basis of analogies with other metal minerals, a doubling of price from price levels in 2007 could be expected to create about a tenfold increase in measured resources, over time. Known conventional resources Known conventional resources are resources that are known to exist and easy to mine. In 2006, there were about 4 million tons of conventional resources. In 2011, this increased to 7 million tonnes. Exploration for uranium has increased: from 1981 to 2007, annual exploration expenditures grew modestly, from US$4 million to US$7 million. This increased to US$11 million in 2011. The world's largest deposits of uranium are found in three countries. Australia has just over 30% of the world's reasonably assured resources and inferred resources of uranium – about . Kazakhstan has about 12% of the world's reserves, or about . Canada has of uranium, representing about 9%. Undiscovered conventional resources Undiscovered conventional resources are resources that are thought to exist but have not been mined. 
It will take a significant exploration and development effort to locate the remaining deposits and begin mining them. However, since the entire earth's geography has not been explored for uranium at this time, there is still the potential to discover exploitable resources. The OECD Redbook cites areas still open to exploration throughout the world. Many countries are conducting complete aeromagnetic gradiometer radiometric surveys to estimate the size of their undiscovered mineral resources. Combined with a gamma-ray survey, these methods can locate undiscovered uranium and thorium deposits. The U.S. Department of Energy conducted the first and only national uranium assessment in 1980 – the National Uranium Resource Evaluation (NURE) program. Secondary resources Secondary uranium resources are recovered from other sources such as nuclear weapons, inventories, reprocessing and re-enrichment. Since secondary resources have exceedingly low discovery costs and very low production costs, they have displaced a significant portion of primary production. In 2017, about 7% of uranium demand was met from secondary resources. Due to the reduction in nuclear weapons stockpiles, a large amount of former weapons uranium was released for use in civilian nuclear reactors. As a result, starting in 1990, a significant portion of nuclear power uranium requirements was supplied by former weapons uranium, rather than newly mined uranium. In 2002, mined uranium supplied only 54 percent of nuclear power requirements. But as the supply of former weapons uranium has been used up, mining has increased, so that in 2012, mining provided 95 percent of reactor requirements, and the OECD Nuclear Energy Agency and the International Atomic Energy Agency projected that the gap in supply would be completely erased in 2013. Inventories Inventories are kept by a variety of organizations – government, commercial and others. The US DOE keeps inventories for security of supply to cover for emergencies where uranium is not available at any price. Decommissioning nuclear weapons Both the US and Russia have committed to recycle their nuclear weapons into fuel for electricity production. This program is known as the Megatons to Megawatts Program. Down-blending of Russian weapons highly enriched uranium (HEU) will result in about of low enriched uranium (LEU) over 20 years. This is equivalent to about of natural U, or just over twice annual world demand. Since 2000, of military HEU is displacing about of uranium oxide mine production per year, which represents some 13% of world reactor requirements. The Megatons to Megawatts program came to an end in 2013. Plutonium recovered from nuclear weapons or other sources can be blended with uranium fuel to produce a mixed-oxide fuel. In June 2000, the US and Russia agreed to dispose of each of weapons-grade plutonium by 2014. The US undertook to pursue a self-funded dual track program (immobilization and MOX). The G-7 nations provided US$1 billion to set up Russia's program. The latter was initially MOX specifically designed for VVER reactors, the Russian version of the Pressurized Water Reactor (PWR), the high cost arising because this was not part of Russia's fuel cycle policy. This MOX fuel for both countries is equivalent to about of natural uranium. The U.S. also has commitments to dispose of of non-waste HEU. Reprocessing and recycling Nuclear reprocessing (or recycling) can increase the supply of uranium by separating the uranium from spent nuclear fuel. 
Spent nuclear fuel is primarily composed of uranium, with a typical concentration of around 96% by mass. The composition of reprocessed uranium depends on the time the fuel has been in the reactor, but it is mostly uranium-238, with about 1% uranium-235, 1% uranium-236 and smaller amounts of other isotopes including uranium-232. Currently, there are eleven reprocessing plants in the world. Of these, two are large-scale commercially operated plants for the reprocessing of spent fuel elements from light water reactors, with throughputs of more than of uranium per year. These are La Hague, France, with a capacity of per year, and Sellafield, England, at uranium per year. The rest are small experimental plants. The two large-scale commercial reprocessing plants together can reprocess 2,800 tonnes of uranium waste annually. The United States had reprocessing plants in the past but banned reprocessing in the late 1970s due to the high costs and the risk of nuclear proliferation via plutonium. The main problem with uranium reprocessing is its cost compared with that of freshly mined uranium. At present, reprocessing and the use of plutonium as reactor fuel are far more expensive than using uranium fuel and disposing of the spent fuel directly – even if the fuel is only reprocessed once. Reprocessing is most useful as part of a nuclear fuel cycle using fast-neutron reactors, since reprocessed uranium and reactor-grade plutonium both have isotopic compositions that are not optimal for use in today's thermal-neutron reactors. Unconventional resources Unconventional resources are occurrences that require novel technologies for their exploitation and/or use. Unconventional resources often occur at low concentrations. The exploitation of unconventional uranium requires additional research and development efforts for which there is no imminent economic need, given the large conventional resource base and the option of reprocessing spent fuel. Phosphates, seawater, uraniferous coal ash, and some types of oil shales are examples of unconventional uranium resources. Phosphates Uranium occurs at concentrations of 50 to 200 parts per million (ppm) in phosphate-laden earth or phosphate rock. As uranium prices have increased, there has been interest in the extraction of uranium from phosphate rock, which is normally used as the basis of phosphate fertilizers. There are 22 million tons of uranium in phosphate deposits. Recovery of uranium from phosphates is a mature technology; it has been used in Belgium and the United States, but high recovery costs limit the use of these resources, with estimated production costs in the range of US$60–100/kgU including capital investment, according to a 2003 OECD report for a new 100 tU/year project. Historical operating costs for uranium recovery from phosphoric acid range from $48 to $119/kg U3O8. In 2011, the average price paid for U3O8 in the United States was $122.66/kg. Worldwide, approximately 400 wet-process phosphoric acid plants were in operation. Assuming an average recoverable content of 100 ppm of uranium, and that uranium prices do not increase so that the main use of the phosphates is for fertilizers, this scenario would result in a maximum theoretical annual output of U3O8. Seawater Unconventional uranium resources include up to of uranium contained in sea water. Several technologies to extract uranium from sea water have been demonstrated at the laboratory scale. According to the OECD, uranium may be extracted from seawater for about US$300/kgU. 
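For rough context, that US$300/kgU estimate can be set against an assumed range of conventional uranium market prices; the US$50–100/kgU range below is illustrative only and not taken from this article. The result is consistent with the statement under Deposit types that seawater recovery costs would need to fall by a factor of about 3–6 to become economical.

```python
seawater_cost = 300.0   # US$/kgU, the OECD estimate quoted above
for market_price in (50.0, 100.0):   # assumed illustrative conventional prices, US$/kgU
    ratio = seawater_cost / market_price
    print(f"At US${market_price:.0f}/kgU, seawater extraction costs about {ratio:.0f}x the market price")
# Prints factors of roughly 6 and 3, bracketing the 3-6 range mentioned earlier.
```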
In 2012, ORNL researchers announced the successful development of a new absorbent material dubbed HiCap, which vastly outperforms previous best adsorbents, which perform surface retention of solid or gas molecules, atoms or ions. "We have shown that our adsorbents can extract five to seven times more uranium at uptake rates seven times faster than the world's best adsorbents", said Chris Janke, one of the inventors and a member of ORNL's Materials Science and Technology Division. HiCap also effectively removes toxic metals from water, according to results verified by researchers at Pacific Northwest National Laboratory. Uraniferous coal ash According to a study by Oak Ridge National Laboratory, the theoretical maximum energy potential (when used in breeder reactors) of trace uranium and thorium in coal actually exceeds the energy released by burning the coal itself. This is despite very low concentration of uranium in coal of only several parts per million average before combustion. From 1965 to 1967 Union Carbide operated a mill in North Dakota, United States, burning uraniferous lignite and extracting uranium from the ash. The plant produced about 150 metric tons of U3O8 before shutting down. An international consortium has set out to explore the commercial extraction of uranium from uraniferous coal ash from coal power stations located in Yunnan province, China. The first laboratory scale amount of yellowcake uranium recovered from uraniferous coal ash was announced in 2007. The three coal power stations at Xiaolongtang, Dalongtang and Kaiyuan have piled up their waste ash. Initial tests from the Xiaolongtang ash pile indicate that the material contains (160–180 parts per million uranium), suggesting a total of U3O8 could be recovered from that ash pile alone. Oil shales Some oil shales contain uranium, which may be recovered as a byproduct. Between 1946 and 1952, a marine type of Dictyonema shale was used for uranium production in Sillamäe, Estonia, and between 1950 and 1989 alum shale was used in Sweden for the same purpose. Breeding A breeder reactor produces more nuclear fuel than it consumes and thus can extend the uranium supply. It typically turns the dominant isotope in natural uranium, uranium-238, into fissile plutonium-239. This results in a hundredfold increase in the amount of energy to be produced per mass unit of uranium, because uranium-238, which comprises 99.3% of natural uranium, is not used in conventional reactors, which instead use uranium-235 (comprising 0.7% of natural uranium). In 1983, physicist Bernard Cohen proposed that the world supply of uranium is effectively inexhaustible, and could therefore be considered a form of renewable energy. He claims that fast breeder reactors, fueled by naturally-replenished uranium-238 extracted from seawater, could supply energy at least as long as the sun's expected remaining lifespan of five billion years. There are two types of breeders: fast breeders and thermal breeders. Efforts at commercializing breeder reactors have been largely unsuccessful, due to higher costs and complexity compared to LWR, as well as political opposition. A few commercial breeder reactors exist. In 2016, the Russian BN-800 fast-neutron breeder reactor started producing commercially at full power (800 MWe), joining the previous BN-600. , the Chinese CFR-600 is under construction after the success of the China Experimental Fast Reactor, based on the BN-800. 
These reactors are currently generating mostly electricity rather than new fuel because the abundance and low price of mined and reprocessed uranium oxide makes breeding uneconomical, but they can switch to breed new fuel and close the cycle as needed. The CANDU reactor, which was designed to be fueled with natural uranium, is capable of using spent fuel from Light Water Reactors as fuel, since it contains more fissile material than natural uranium. Research into "DUPIC" – direct use of PWR spent fuel in CANDU type reactors – is ongoing and could increase the usability of fuel without the need for reprocessing. Fast breeder A fast breeder, in addition to consuming uranium-235, converts fertile uranium-238 into plutonium-239, a fissile fuel. Fast breeder reactors are more expensive to build and operate, including the reprocessing, and could only be justified economically if uranium prices were to rise to pre-1980 values in real terms. In addition to considerably extending the exploitable fuel supply, these reactors have an advantage in that they produce less long-lived transuranic wastes, and can consume nuclear waste from current light water reactors, generating energy in the process. Uranium turned out to be far more plentiful than anticipated, and the price of uranium declined rapidly (with an upward blip in the 1970s). This is why the United States halted their use in 1977, and the UK abandoned the idea in 1994. Significant technical and materials problems were encountered with FBRs, and geological exploration showed that scarcity of uranium was not going to be a concern for some time. By the 1980s, due to both factors, it was clear that FBRs would not be commercially competitive with existing light water reactors. The economics of FBRs still depend on the value of the plutonium fuel which is bred, relative to the cost of fresh uranium. At higher uranium prices breeder reactors may be economically justified. Many nations have ongoing breeder research programs. China, India, and Japan plan large scale use of breeder reactors during the coming decades. 300 reactor-years experience has been gained in operating them. Thermal breeder Fissile uranium can be produced from thorium in thermal breeder reactors. Thorium is three times more plentiful than uranium. Thorium-232 is in itself not fissile, but it can be made into fissile uranium-233 in a breeder reactor. In turn, the uranium-233 can be fissioned, with the advantage that smaller amounts of transuranics are produced by neutron capture, compared to uranium-235 and especially compared to plutonium-239. Despite the thorium fuel cycle having a number of attractive features, development on a large scale can run into difficulties, mainly due to the complexity of fuel separation and reprocessing. Advocates for liquid core and molten salt reactors such as LFTR claim that these technologies negate the above-mentioned thorium's disadvantages present in solid-fueled reactors. The first successful commercial reactor at the Indian Point Energy Center in Buchanan, New York, (Indian Point Unit 1) ran on thorium. The first core did not live up to expectations. Production Uranium production is highly concentrated. The world's top uranium producers in 2017 were Kazakhstan (39% of world production), Canada (22%) and Australia (10%). Other major producers include Namibia (6.7%), Niger (6%), and Russia (5%). Uranium production in 2017 was 59,462 tonnes, 93% of the demand. 
The balance came from inventories held by utilities and other fuel cycle companies, inventories held by governments, used reactor fuel that has been reprocessed, recycled materials from military nuclear programs and uranium in depleted uranium stockpiles. Demand World annual commercial reactor-related uranium requirements amounted to around 60,100 tonnes as of January 2021. As some countries are not able to supply their own uranium needs economically, they have resorted to importing uranium ore from elsewhere. For example, owners of U.S. nuclear power reactors bought of natural uranium in 2006. Out of that, 84%, or , was imported from foreign suppliers, according to the Energy Department. Because of improvements in gas centrifuge technology in the 2000s, which replaced the former gaseous diffusion plants, cheaper separative work units have enabled the economic production of more enriched uranium from a given amount of natural uranium by re-enriching tails, ultimately leaving a depleted uranium tail of lower enrichment. This has somewhat lowered the demand for natural uranium. Demand forecasts According to Cameco Corporation, the demand for uranium is directly linked to the amount of electricity generated by nuclear power plants. Reactor capacity is growing slowly, and reactors are being run more productively, with higher capacity factors and reactor power levels. Improved reactor performance translates into greater uranium consumption. Nuclear power stations of 1000 megawatt electrical generation capacity require around of natural uranium per year. For example, the United States has 103 operating reactors with an average generation capacity of 950 MWe, which demanded over of natural uranium in 2005. As the number of nuclear power plants increases, so does the demand for uranium. As nuclear power plants take a long time to build and refuelling is undertaken at infrequent but predictable intervals, uranium demand is rather predictable in the short term. It is also less dependent on short-term economic boom–bust cycles, as nuclear power has one of the highest ratios of fixed costs to variable costs (i.e. the marginal cost of running, rather than leaving idle, an already constructed power plant is very low compared with the capital cost of construction), and it is thus nearly never advisable to leave a nuclear power plant idle for economic reasons. However, nuclear policy can lead to short term fluctuations in demand, as evidenced by the German nuclear phaseout, which was decided upon by the government of Gerhard Schröder (1998–2005), reversed during the second Merkel cabinet (2009–2013), and then reinstated as a consequence of the Fukushima nuclear accident, which also led to the temporary shutdown of several German nuclear power plants. Prices Generally speaking, in the case of nuclear energy the cost of fuel has the lowest share in total energy costs of all fuel-consuming energy forms (i.e. fossil fuels, biomass and nuclear). Furthermore, given the immense energy density of nuclear fuel (particularly in the form of enriched uranium or high grade plutonium), it is easy to stockpile amounts of fuel material to last several years at constant consumption. Power plants that do not have online refuelling capabilities, as is the case for the vast majority of commercial power plants in operation, will refuel as seldom as possible to avoid costly downtime, and usually plan refuelling shutdowns long in advance so as to allow maintenance and inspection to use the scheduled downtime as well. 
As such power plant operators tend to have long-term contracts with fuel suppliers that are – if at all – only minorly affected by the fluctuations of uranium prices. The effect on electricity price for end consumers is negligible even in countries like France, which derive a majority of their electric energy from nuclear power. Nonetheless, short term price developments like the 2007 uranium bubble, can have drastic effects on mining companies, prospection and the economic calculations as to whether a certain deposit is worthwhile for commercial purposes. Since 1981 uranium prices and quantities in the US are reported by the Department of Energy. The import price dropped from 32.90 US$/lb-U3O8 in 1981 down to 12.55 in 1990 and to below 10 US$/lb-U3O8 in the year 2000. Prices paid for uranium during the 1970s were higher, 43 US$/lb-U3O8 is reported as the selling price for Australian uranium in 1978 by the Nuclear Information Centre. Uranium prices reached an all-time low in 2001, costing US$7/lb, but in April 2007 the price of Uranium on the spot market rose to US$113.00/lb, a high point of the uranium bubble of 2007. This was very close to the all time high (adjusted for inflation) in 1977. Following the 2011 Fukushima nuclear disaster, the global uranium sector remained depressed with the uranium price falling more than 50%, declining share values, and reduced profitability of uranium producers since March 2011 and into 2014. As a result, uranium companies worldwide are reducing costs, and limiting operations. As an example, Westwater Resources (previously Uranium Resources), has had to cease all uranium operations due to unfavorable prices. Since then, Westwater has tried branching out into other markets, namely lithium and graphite. As of July 2014, the price of uranium concentrate remained near a five-year low, the uranium price having fallen more than 50% from the peak spot price in January 2011, reflecting the loss of Japanese demand following the 2011 Fukushima nuclear disaster. As a result of continued low prices, in February 2014 mining company Cameco deferred plans to expand production from existing Canadian mines, although it continued work to open a new mine at Cigar Lake. Also in February 2014, Paladin energy suspended operations at its mine in Malawi, saying that the high-cost operation was losing money at current prices. Effect of price on mining and nuclear power plants In general short term fluctuations in the price of uranium are of more concern to operators and owners of mines and potentially lucrative deposits than to power plant operators. Due to its high energy density, uranium is easy to stockpile in the form of strategic reserves and thus a short term increase in prices can be compensated by accessing those reserves. Furthermore, many countries have de facto reserves in the form of reprocessed uranium or depleted uranium which still contain a share of fissile material that can make re-enrichment worthwhile if market conditions call for it. Nuclear reprocessing of spent fuel is – as of the 2020s – done commercially primarily to use the fissile material still contained in spent fuel. The commonly employed PUREX process recovers uranium and plutonium which can then be converted into MOX-fuel for use in the same light water reactors that produced the spent fuel. Whether reprocessing is economical is subject to much debate and depends in part on assumptions as to the price of uranium and the cost of disposal via deep geological repository or nuclear transmutation. 
Reactors that can run on natural uranium consume less mined uranium per unit of power produced but can have higher capital costs to build due to the need for heavy water as moderator. Furthermore, they need to be capable of online refueling because the burnup achievable with natural uranium is lower than that achievable with enriched uranium – having to shut down the entire reactor for every refueling would quickly make such a reactor uneconomic. Breeder reactors also become more economical as uranium prices rise; it was, among other things, the decline in uranium prices in the 1970s that led to a decline in interest in breeder reactor technology. The thorium fuel cycle is a further alternative if and when uranium prices remain at a sustained high level, and consequently interest in this alternative to current "mainstream" light water reactor technology depends in no small part on uranium prices. Legality Uranium mining is illegal in a number of jurisdictions. As uranium is often mined incidentally to other minerals, a ban in practice typically means that uranium is buried again at the mine after initial extraction. Politics In March 1951, the United States Atomic Energy Commission (AEC) set a high price for uranium ore. The resultant uranium rush attracted many prospectors to the Southwest. Charles Steen made a significant discovery near Moab, Utah, while Paddy Martinez made another near Grants, New Mexico. However, by the 1960s, the United States, USSR, France and China were reducing their acquisitions of uranium. The United States started enriching only uranium mined within its country, but by 1965, production had dropped by 40 percent. By 1971, in an attempt to stop further reductions in prices, mining executives from UCAN, Nufcor, Rio Tinto, and government representatives agreed to share the market, with Canadians getting 33.5 percent, South Africa 23.75 percent, France 21.75 percent, Australia 17 percent, and Rio Tinto Zinc 4 percent. By 1974, this market share agreement ended as uranium prices rose in concert with energy prices due to OPEC boycotts, and as the United States ended its trade ban on foreign uranium. In Europe a mixed situation exists. Considerable nuclear power capacities have been developed, notably in Belgium, Finland, France, Germany, Spain, Sweden, Switzerland, and the UK. In many countries development of nuclear power has been stopped and phased out by legal actions. In Italy the use of nuclear power was barred by a referendum in 1987; this is now under revision. Ireland in 2008 also had no plans to change its non-nuclear stance. The years 1976 and 1977 saw uranium mining become a major political issue in Australia, with the Ranger Inquiry (Fox) report opening up a public debate about uranium mining. The Movement Against Uranium Mining group was formed in 1976, and many protests and demonstrations against uranium mining were held. Concerns relate to the health risks and environmental damage from uranium mining. Notable Australian anti-uranium activists have included Kevin Buzzacott, Jacqui Katona, Yvonne Margarula, and Jillian Marsh. The World Uranium Hearing was held in Salzburg, Austria, in September 1992. Anti-nuclear speakers from all continents, including indigenous speakers and scientists, testified to the health and environmental problems of uranium mining and processing, nuclear power, nuclear weapons, nuclear tests, and radioactive waste disposal. 
People who spoke at the 1992 hearing include: Thomas Banyacya, Katsumi Furitsu, Manuel Pino and Floyd Red Crow Westerman. They highlighted the threat of radioactive contamination to all peoples, especially indigenous communities, and said that their survival requires self-determination and emphasis on spiritual and cultural values. Increased renewable energy commercialization was advocated. The Kingdom of Saudi Arabia with the help of China has built an extraction facility to obtain uranium yellowcake from uranium ore. According to Western officials with information regarding the extraction site, the process is conducted by the oil-rich kingdom to champion nuclear technology. However, Saudi Energy Minister denied having built a uranium ore facility and claimed that the extraction of minerals is a fundamental part of the kingdom's strategy to diversify its economy. Despite sanctions on Russia some countries still buy its uranium in 2022, and some argue the EU should stop. S&P Global say non-Russian miners await more certainty before deciding whether to invest in new mines. Health risks Uranium ore emits radon gas. The health effects of high exposure to radon are a particular problem in the mining of uranium; significant excess lung cancer deaths have been identified in epidemiological studies of uranium miners employed in the 1940s and 1950s. The first major studies with radon and health occurred in the context of uranium mining, first in the Joachimsthal region of Bohemia and then in the Southwestern United States during the early Cold War. Because radon is a product of the radioactive decay of uranium, underground uranium mines may have high concentrations of radon. Many uranium miners in the Four Corners region contracted lung cancer and other pathologies as a result of high levels of exposure to radon in the mid-1950s. The increased incidence of lung cancer was particularly pronounced among Navajo and Mormon (who generally have low rates of lung cancer) miners. This is in part due to the religious prohibition on smoking in Mormonism. Safety standards requiring expensive ventilation were not widely implemented or policed during this period. While radon exposure is the main source of lung cancer in non-smokers who aren't exposed to asbestos, there is evidence that the combination of smoking and radon exposure increases the risk above the combined risks of either harmful substance. In studies of uranium miners, workers exposed to radon levels of 50 to 150 picocuries of radon per liter of air (2000–6000 Bq/m3) for about 10 years have shown an increased frequency of lung cancer. Statistically significant excesses in lung cancer deaths were present after cumulative exposures of less than 50 WLM. There is unexplained heterogeneity in these results (whose confidence intervals do not always overlap). The size of the radon-related increase in lung cancer risk varied by more than an order of magnitude between the different studies. Since that time, ventilation and other measures have been used to reduce radon levels in most affected mines that continue to operate. In recent years, the average annual exposure of uranium miners has fallen to levels similar to the concentrations inhaled in some homes. This has reduced the risk of occupationally induced cancer from radon, although it still remains an issue both for those who are currently employed in affected mines and for those who have been employed in the past. 
The power to detect any excess risks in miners nowadays is likely to be small, exposures being much smaller than in the early years of mining. Coal mining in addition to other health risks can also expose miners to radon as uranium (and its decay product radon) are often found in and near coal deposits and can accumulate underground as radon is denser than air. In the USA, the Radiation Exposure Compensation Act provides compensation to sufferers of various health problems linked to radiation exposure, or to their surviving relatives. Uranium miners, uranium mill workers and uranium transport workers have been compensated under the scheme. United States clean-up efforts Despite efforts made in cleaning up uranium sites, significant problems stemming from the legacy of uranium development still exist today on the territory of the Navajo Nation and in the states of Utah, Colorado, New Mexico, and Arizona. Hundreds of abandoned mines have not been cleaned up and present environmental and health risks in many communities. At the request of the U.S. House Committee on Oversight and Government Reform in October 2007, and in consultation with the Navajo Nation, the Environmental Protection Agency (EPA), along with the Bureau of Indian Affairs (BIA), the Nuclear Regulatory Commission (NRC), the Department of Energy (DOE), and the Indian Health Service (IHS), developed a coordinated Five-Year Plan to address uranium contamination. Similar interagency coordination efforts are beginning in the State of New Mexico as well. In 1978, Congress passed the Uranium Mill Tailings Radiation Control Act (UMTRCA), a measure designed to assist in the cleanup of 22 inactive ore-processing sites throughout the southwest. This also included constructing 19 disposal sites for the tailings, which contain a total of 40 million cubic yards of low-level radioactive material. The Environmental Protection Agency estimates that there are 4000 mines with documented uranium production, and another 15,000 locations with uranium occurrences in 14 western states, most found in the Four Corners area and Wyoming. The Uranium Mill Tailings Radiation Control Act is a United States environmental law that amended the Atomic Energy Act of 1954 and gave the Environmental Protection Agency the authority to establish health and environmental standards for the stabilization, restoration, and disposal of uranium mill tailings. Title 1 of the Act required the EPA to set environmental protection standards consistent with the Resource Conservation and Recovery Act, including groundwater protection limits; the Department of Energy to implement EPA standards and provide perpetual care for some sites; and the Nuclear Regulatory Commission to review cleanups and license sites to states or the DOE for perpetual care. Title 1 established a uranium mill remedial action program jointly funded by the federal government and the state. Title 1 of the Act also designated 22 inactive uranium mill sites for remediation, resulting in the containment of 40 million cubic yards of low-level radioactive material in UMTRCA Title 1 holding cells. Peak uranium Peak uranium is the point in time that the maximum global uranium production rate is reached. Predictions of peak uranium differ greatly. Pessimistic predictions of future high-grade uranium production operate on the thesis that either the peak has already occurred in the 1980s or that a second peak may occur sometime around 2035. 
Optimistic predictions claim that the supply is far more than demand and do not predict peak uranium. As of 2017, identified uranium reserves recoverable at US$130/kg were 6.14 million tons (compared to 5.72 million tons in 2015). At the rate of consumption in 2017, these reserves are sufficient for slightly over 130 years of supply. The identified reserves as of 2017 recoverable at US$260/kg are 7.99 million tons (compared to 7.64 million tons in 2015). The expected amount of usable uranium for nuclear power that is recoverable depends greatly on how it is used. The main factor is the nuclear technology: light-water reactors, which comprise the great majority of reactors today, only consume about 0.5% of their uranium fuel, leaving over 99% of it as spent fuel waste. Fast breeder reactors instead consume closer to 99% of uranium fuel. Another factor is the ability to extract uranium from seawater. About 4.5 billion tons of uranium are available from seawater at about 10 times the current price of uranium with current extraction technology, which is about a thousand times the known uranium reserves. The Earth's crust contains approximately 65 trillion tons of uranium, of which about 32 thousand tons flow into oceans per year via rivers, which are themselves fed via geological cycles of erosion, subduction and uplift. The ability to extract uranium from seawater economically would therefore make uranium a renewable resource in practice. Uranium can also be bred from thorium (which is itself 3–4 times as abundant as uranium) in certain breeder reactors, although there are currently no commercially practical thorium reactors in the world and their development would require substantial financial investment which is not justified given the current low prices of natural uranium. Thirteen countries have hit peak and exhausted their economically recoverable uranium resources at current prices according to the Energy Watch Group. In a similar manner to every other natural metal resource, for every tenfold increase in the cost per kilogram of uranium, there is a three-hundredfold increase in available lower quality ores that would then become economical. The theory could be observed in practice during the uranium bubble of 2007, when an unprecedented price hike led to investments in the development of uranium mining of lower quality deposits, which mostly became stranded assets after uranium prices returned to a lower level. Uranium supply There are around 40 trillion tons of uranium in Earth's crust, but most is distributed at low parts per million trace concentration over its mass. Estimates of the amount concentrated into ores affordable to extract for under $130/kg can be less than a millionth of that total. One highly criticized life cycle study by Jan Willem Storm van Leeuwen suggested that below 0.01–0.02% (100–200 ppm) in ore, the energy required to extract and process the ore to supply the fuel, operate reactors and dispose properly comes close to the energy gained by using the uranium as a fissile material in the reactor. Researchers at the Paul Scherrer Institute who analyzed the Jan Willem Storm van Leeuwen paper, however, have detailed a number of incorrect assumptions of Jan Willem Storm van Leeuwen that led them to this evaluation, including their assumption that all the energy used in the mining of Olympic Dam is energy used in the mining of uranium, when that mine is predominantly a copper mine and uranium is produced only as a co-product, along with gold and other metals. 
The report by Jan Willem Storm van Leeuwen also assumes that all enrichment is done in the older and more energy intensive gaseous diffusion technology, whereas the less energy intensive gas centrifuge technology has produced the majority of the world's enriched uranium now for a number of decades. In the early days of the nuclear industry, uranium was thought to be very scarce, so a closed fuel cycle would be needed. Fast breeder reactors would be needed to create nuclear fuel for other power producing reactors. In the 1960s, new discoveries of reserves and new uranium enrichment techniques allayed these concerns. An appraisal of nuclear power by a team at MIT in 2003, and updated in 2009, stated that: Production According to Robert Vance of the OECD's Nuclear Energy Agency, the world production rate of uranium has already reached its peak in 1980, amounting to of UO from 22 countries. However, this is not due to lack of production capacity. Historically, uranium mines and mills around the world have operated at about 76% of total production capacity, varying within a range of 57% and 89%. The low production rates have been largely attributable to excess capacity. Slower growth of nuclear power and competition from secondary supply significantly reduced demand for freshly mined uranium until very recently. Secondary supplies include military and commercial inventories, enriched uranium tails, reprocessed uranium and mixed oxide fuel. According to data from the International Atomic Energy Agency, world production of mined uranium has peaked twice in the past: once, circa 1960 in response to stockpiling for military use, and again in 1980, in response to stockpiling for use in commercial nuclear power. Up until about 1990, the mined uranium production was in excess of consumption by power plants. But since 1990, consumption by power plants has outstripped the uranium being mined; the deficit being made up by liquidation of the military (through decommissioning of nuclear weapons) and civilian stockpiles. Uranium mining has increased since the mid-1990s, but is still less than the consumption by power plants. Primary sources Various agencies have tried to estimate how long uranium primary resources will last, assuming a once-through cycle. The European Commission said in 2001 that at the current level of uranium consumption, known uranium resources would last 42 years. When added to military and secondary sources, the resources could be stretched to 72 years. Yet this rate of usage assumes that nuclear power continues to provide only a fraction of the world's energy supply. If electric capacity were increased six-fold, then the 72-year supply would last just 12 years. The world's present measured resources of uranium, economically recoverable at a price of US$130/kg according to the industry groups Organisation for Economic Co-operation and Development (OECD), Nuclear Energy Agency (NEA) and International Atomic Energy Agency (IAEA), are enough to last for "at least a century" at current consumption rates. According to the World Nuclear Association, yet another industry group, assuming the world's current rate of consumption at 66,500 tonnes of uranium per year and the world's present measured resources of uranium (4.7–5.5 Mt) are enough to last for some 70–80 years. Predictions There have been numerous predictions of peak uranium in the past. In 1943, Alvin M. Weinberg et al. believed that there were serious limitations on nuclear energy if only U were used as a nuclear power plant fuel. 
They concluded that breeding was required to usher in the age of nearly endless energy. In 1956, M. King Hubbert declared world fissionable reserves adequate for at least the next few centuries, assuming breeding and reprocessing would be developed into economical processes. In 1975 the US Department of the Interior, Geological Survey, distributed the press release "Known US Uranium Reserves Won't Meet Demand". It was recommended that the US not depend on foreign imports of uranium. Pessimistic predictions Many analysts predicted a uranium peak and exhaustion of uranium reserves in the past or the near future. Edward Steidle, Dean of the School of Mineral Industries at Pennsylvania State College, predicted in 1952 that supplies of fissionable elements were too small to support commercial-scale energy production. Michael Meacher, the former environment minister of the UK (1997–2003), reports that peak uranium happened in 1981. He also predicts a major shortage of uranium sooner than 2013 accompanied with hoarding and its value pushed up to the levels of precious metals. M. C. Day projected in 1975 that uranium reserves could run out as soon as 1989, but, more optimistically, would be exhausted by 2015. Jan Willem Storm van Leeuwen, an independent analyst with Ceedata Consulting, contends that supplies of the high-grade uranium ore required to fuel nuclear power generation will, at current levels of consumption, last to about 2034. Afterwards, he expects the cost of energy to extract the uranium will exceed the price the electric power provided. The Energy Watch Group has calculated that, even with steep uranium prices, uranium production will have reached its peak by 2035 and that it will only be possible to satisfy the fuel demand of nuclear plants until then. Various agencies have tried to estimate how long these resources will last. The European Commission said in 2001 that at the current level of uranium consumption, known uranium resources would last 42 years. When added to military and secondary sources, the resources could be stretched to 72 years. Yet this rate of usage assumes that nuclear power continues to provide only a fraction of the world's energy supply. If electric capacity were increased six-fold, then the 72-year supply would last just 12 years. According to the industry groups OECD, NEA and IAEA, the world's present measured resources of uranium, economically recoverable at a price of US$130/kg, are enough to last for 100 years at current consumption. According to the Australian Uranium Association, another industry group, assuming the world's current rate of consumption at 66,500 tonnes of uranium per year and the world's present measured resources of uranium (4.7 Mt) are enough to last for 70 years. Optimistic predictions All the following references claim that the supply is far more than demand. Therefore, they do not predict peak uranium. In his 1956 paper, M. King Hubbert wrote that nuclear energy would last for the "foreseeable future". Hubbert's study assumed that breeder reactors would replace light water reactors and that uranium would be bred into plutonium (and possibly thorium would be bred into uranium). He also assumed that economic means of reprocessing would be discovered. For political, economic and nuclear proliferation reasons, the plutonium economy never materialized. Without it, uranium is used up in a once-through process and will peak and run out much sooner. 
However, at present, it is generally found to be cheaper to mine new uranium out of the ground than to use reprocessed uranium, and therefore the use of reprocessed uranium is limited to only a few nations. The OECD estimates that with the world nuclear electricity generating rates of 2002, with LWR, once-through fuel cycle, there are enough conventional resources to last 85 years using known resources and 270 years using known and as yet undiscovered resources. With breeders, this is extended to 8,500 years. If one is willing to pay $300/kg for uranium, there is a vast quantity available in the ocean. It is worth noting that since fuel cost only amounts to a small fraction of nuclear energy total cost per kWh, and raw uranium price also constitutes a small fraction of total fuel costs, such an increase on uranium prices would not involve a very significant increase in the total cost per kWh produced. In 1983, physicist Bernard Cohen proposed that uranium is effectively inexhaustible, and could therefore be considered a renewable source of energy. He claims that fast breeder reactors, fueled by naturally replenished uranium extracted from seawater, could supply energy at least as long as the sun's expected remaining lifespan of five billion years. While uranium is a finite mineral resource within the earth, the hydrogen in the sun is finite too – thus, if the resource of nuclear fuel can last over such time scales, as Cohen contends, then nuclear energy is every bit as sustainable as solar power or any other source of energy, in terms of sustainability over the time scale of life surviving on this planet. His paper assumes extraction of uranium from seawater at the rate of per year of uranium. The current demand for uranium is near per year; however, the use of breeder reactors means that uranium would be used at least 60 times more efficiently than today. James Hopf, a nuclear engineer writing for American Energy Independence in 2004, believes that there is several hundred years' supply of recoverable uranium even for standard reactors. For breeder reactors, "it is essentially infinite". The IAEA estimates that using only known reserves at the current rate of demand and assuming a once-through nuclear cycle that there is enough uranium for at least 100 years. However, if all primary known reserves, secondary reserves, undiscovered and unconventional sources of uranium are used, uranium will be depleted in 47,000 years. Kenneth S. Deffeyes estimates that if one can accept ore one tenth as rich then the supply of available uranium increases 300 times. His paper shows that uranium concentration in ores is log-normal distributed. There is relatively little high-grade uranium and a large supply of very low grade uranium. Ernest Moniz, a professor at the Massachusetts Institute of Technology and the former United States Secretary of Energy, testified in 2009 that an abundance of uranium had put into question plans to reprocess spent nuclear fuel. The reprocessing plans dated from decades previous, when uranium was thought to be scarce. But now, "roughly speaking, we've got uranium coming out of our ears, for a long, long time". Possible effects and consequences As uranium production declines, uranium prices would be expected to increase. However, the price of uranium makes up only 9% of the cost of running a nuclear power plant, much lower than the cost of coal in a coal-fired power plant (77%), or the cost of natural gas in a gas-fired power plant (93%). 
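To make the fuel-price sensitivity concrete, the following minimal C++ sketch (not taken from the article; the function name and the assumed doubling of the fuel price are illustrative) shows how a change in fuel price propagates into total generating cost when fuel accounts for a fixed share of that cost, using the shares quoted above.

#include <cstdio>

// If fuel accounts for a share s of total generating cost, multiplying the fuel
// price by m raises the total cost by a factor of (1 - s) + s*m, i.e. by s*(m - 1).
double costIncreaseFraction(double fuelShare, double fuelPriceMultiplier) {
    return fuelShare * (fuelPriceMultiplier - 1.0);
}

int main() {
    const double multiplier = 2.0;  // assume the fuel price doubles
    std::printf("Uranium (9%% of cost): total cost rises by about %.0f%%\n",
                100.0 * costIncreaseFraction(0.09, multiplier));
    std::printf("Coal (77%% of cost): total cost rises by about %.0f%%\n",
                100.0 * costIncreaseFraction(0.77, multiplier));
    std::printf("Gas (93%% of cost): total cost rises by about %.0f%%\n",
                100.0 * costIncreaseFraction(0.93, multiplier));
    return 0;
}

Under these simplifying assumptions, a doubling of the uranium price raises nuclear generating costs by roughly 9%, whereas the same doubling raises coal and gas generating costs by roughly 77% and 93% respectively.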
Uranium is different from conventional energy resources, such as oil and coal, in several key aspects. Those differences limit the effects of short-term uranium shortages, but most have no bearing on the eventual depletion. Some key features are: The uranium market is diverse, and no country has a monopoly influence on its prices. Thanks to the extremely high energy density of uranium, stockpiling of several years' worth of fuel is feasible. Significant secondary supplies of already mined uranium exist, including decommissioned nuclear weapons, depleted uranium tails suitable for reenrichment, and existing stockpiles. Vast amounts of uranium, roughly 800 times the known reserves of mined uranium, are contained in extremely dilute concentrations in seawater. Introduction of fast neutron reactors would increase the uranium use efficiency by about 100 times. Substitutes An alternative to uranium is thorium, which is three times more common than uranium. Fast breeder reactors are not needed. Compared to conventional uranium reactors, thorium reactors using the thorium fuel cycle may produce about 40 times the amount of energy per unit of mass. However, creating the technology, infrastructure and know-how needed for a thorium-fuel economy is uneconomical at current and predicted uranium prices. See also Botanical prospecting for uranium Energy development Energy security Isotopes of uranium List of uranium projects Nuclear fuel cycle Uranium metallurgy Uranium mining in France Uranium tile Uranium in the environment Uranium mining debate World energy supply and consumption References Further reading Books Herring, J.: Uranium and Thorium Resource Assessment, Encyclopedia of Energy, Boston University, Boston, 2004, . Articles Power Struggle for Uranium of Nepal: A Travel Note (2024). External links Health Impacts for Uranium Mine and Mill Residents – Science Issues. Uranium mining left a legacy of death. World Uranium Mining (giving production statistics) , World Nuclear Association, July 2006 In Situ Leaching Method at Uranium SA Website (South Australian Chamber of Mines and Energy) Evaluation of Cost of Seawater Uranium Recovery and Technical Problems toward Implementation Watch Uranium, a 1990 documentary on the risks of uranium mining World Supply of Uranium — World Nuclear Association, March 2007 The Guardian (22 Jan. 2008): Awards shine spotlight on big business green record Extracting a disaster The Guardian, 2008 Uranium glows ever hotter (Investors Chronicle, UK) Nuclear energy Nuclear power
Uranium mining
[ "Physics", "Chemistry" ]
16,029
[ "Nuclear power", "Physical quantities", "Power (physics)", "Nuclear energy", "Nuclear physics", "Radioactivity" ]
4,914,004
https://en.wikipedia.org/wiki/Rachitrema
Rachitrema is a poorly known genus of ichthyosaur from the Triassic of France. Its remains were found in France by two independent collectors, towards the end of the nineteenth century. They were only isolated bone fragments. Classification The type species is R. pellati, described by Sauvage in 1883. When first described, Sauvage classified it as a dinosaur. Later, Franz Nopcsa referred the genus to Anchisauridae, while Karl Alfred von Zittel referred it to either Zanclodontidae or Megalosauridae. The ichthyosaur nature of Rachitrema was recognized by Friedrich von Huene, who synonymized it with Shastasaurus. Sauvage conceded that Rachitrema was non-dinosaurian, and the ichthyosaur classification of the genus became universally accepted by several authors. McGowan and Motani (2003) considered Rachitrema dinosaurian without comment. However, recent re-examination of the type material of Rachitrema reaffirms the ichthyosaurian classification of the genus, with most of the original remains referable to Ichthyosauria, and the rest being indeterminate beyond Reptilia. References External links Dinosaur Mailing List entry, which discusses the genus Nomina dubia Fossils of France Fossil taxa described in 1883
Rachitrema
[ "Biology" ]
280
[ "Biological hypotheses", "Nomina dubia", "Controversial taxa" ]
4,916,038
https://en.wikipedia.org/wiki/Mechanical%20biological%20treatment
A mechanical biological treatment (MBT) system is a type of waste processing facility that combines a sorting facility with a form of biological treatment such as composting or anaerobic digestion. MBT plants are designed to process mixed household waste as well as commercial and industrial wastes. Process The terms mechanical biological treatment or mechanical biological pre-treatment relate to a group of solid waste treatment systems. These systems enable the recovery of materials contained within the mixed waste and facilitate the stabilisation of the biodegradable component of the material. Twenty two facilities in the UK have implemented MBT/BMT treatment processes. The sorting component of the plants typically resemble a materials recovery facility. This component is either configured to recover the individual elements of the waste or produce a refuse-derived fuel that can be used for the generation of power. The components of the mixed waste stream that can be recovered include: Ferrous metal Non-ferrous metal Plastic Glass Terminology MBT is also sometimes termed biological mechanical treatment (BMT), however this simply refers to the order of processing (i.e., the biological phase of the system precedes the mechanical sorting). MBT should not be confused with mechanical heat treatment (MHT). Mechanical sorting The "mechanical" element is usually an automated mechanical sorting stage. This either removes recyclable elements from a mixed waste stream (such as metals, plastics, glass, and paper) or processes them. It typically involves factory style conveyors, industrial magnets, eddy current separators, trommels, shredders, and other tailor made systems, or the sorting is done manually at hand picking stations. The mechanical element has a number of similarities to a materials recovery facility (MRF). Some systems integrate a wet MRF to separate by density and flotation and to recover and wash the recyclable elements of the waste in a form that can be sent for recycling. MBT can alternatively process the waste to produce a high calorific fuel termed refuse derived fuel (RDF). RDF can be used in cement kilns or thermal combustion power plants and is generally made up from plastics and biodegradable organic waste. Systems which are configured to produce RDF include the Herhof and Ecodeco processes. It is a common misconception that all MBT processes produce RDF; this is not the case, and depends strictly on system configuration and suitable local markets for MBT outputs. Biological processing The "biological" element refers to either: Anaerobic digestion Composting Biodrying Anaerobic digestion harnesses anaerobic microorganisms to break down the biodegradable component of the waste to produce biogas and soil improver. The biogas can be used to generate electricity and heat. Biological can also refer to a composting stage. Here the organic component is broken down by naturally occurring aerobic microorganisms. They break down the waste into carbon dioxide and compost. There is no green energy produced by systems employing only composting treatment for the biodegradable waste. In the case of biodrying, the waste material undergoes a period of rapid heating through the action of aerobic microbes. During this partial composting stage the heat generated by the microbes result in rapid drying of the waste. These systems are often configured to produce a refuse-derived fuel where a dry, light material is advantageous for later transport and combustion. 
Some systems incorporate both anaerobic digestion and composting. This may either take the form of a full anaerobic digestion phase, followed by the maturation (composting) of the digestate. Alternatively a partial anaerobic digestion phase can be induced on water that is percolated through the raw waste, dissolving the readily available sugars, with the remaining material being sent to a windrow composting facility. By processing the biodegradable waste either by anaerobic digestion or by composting MBT technologies help to reduce the contribution of greenhouse gases to global warming. Usable wastes for this system: Municipal solid waste Commercial and industrial waste Sewage sludge Possible products of this system: Renewable fuel (biogas) leading to renewable power Recovered recyclable materials such as metals, paper, plastics, glass etc. Digestate - an organic fertiliser and soil improver Carbon credits – additional revenues High calorific fraction refuse derived fuel - renewable fuel content dependent upon biological component Residual unusable materials prepared for their final safe treatment (e.g., incineration or gasification) and/or landfill Further advantages: Small fraction of inert residual waste Reduction of the waste volume to be deposited to at least a half (density > 1.3 t/m3), thus the lifetime of the landfill is at least twice as long as usual Utilisation of the leachate in the process Landfill gas not problematic as biological component of waste has been stabilised Daily covering of landfill not necessary Consideration of applications MBT systems can form an integral part of a region's waste treatment infrastructure. These systems are typically integrated with kerbside collection schemes. In the event that a refuse-derived fuel is produced as a by-product then a combustion facility would be required. This could either be an incineration facility or a gasifier. Alternatively MBT solutions can diminish the need for home separation and kerbside collection of recyclable elements of waste. This gives the ability of local authorities, municipalities and councils to reduce the use of waste vehicles on the roads and keep recycling rates high. Position of environmental groups Friends of the Earth suggests that the best environmental route for residual waste is to firstly maximise removal of remaining recyclable materials from the waste stream (such as metals, plastics and paper). The amount of waste remaining should be composted or anaerobically digested and disposed of to landfill, unless sufficiently clean to be used as compost. A report by Eunomia undertook a detailed analysis of the climate impacts of different residual waste technologies. It found that an MBT process that extracts both the metals and plastics prior to landfilling is one of the best options for dealing with our residual waste, and has a lower impact than either MBT processes producing RDF for incineration or incineration of waste without MBT. Friends of the Earth does not support MBT plants that produce refuse derived fuel (RDF), and believes MBT processes should occur in small, localised treatment plants. 
See also Anaerobic digestion Composting List of solid waste treatment technologies Materials recovery facility Plastic pollution Renewable energy Waste Waste management References External links Waste-to-Resources World's largest conference on mechanical biological treatment (MBT) of municipal solid waste (MSW) and material recovery facilities (MRF) Environment Agency Waste Technology Data Centre An independent UK government review of advanced waste treatment technologies. Kuehle-Weidemeier et al. (2007) Plants for Mechanical-Biological Waste Treatment Summary of the evaluation of all German MBT plants in the introduction phase 2005–2006. By order of the German EPA (Umweltbundesamt) Juniper MBT report An independent study of MBT technologies commissioned with the use of UK landfill tax credits. SEPA MBT Planning Information Sheet Fact Sheet for Scottish Planning Considerations Compostinfo An independent comprehensive bibliography and review web site focusing on "mixed waste" sources GTZ (2003) Sector project mechanical-biological waste treatment. Final report Mechanical-biological waste treatment concept of FABER-AMBRA - Scientific results and videos Biodegradable waste management Environmental engineering Industrial composting Bioenergy Sewerage
Mechanical biological treatment
[ "Chemistry", "Engineering", "Environmental_science" ]
1,581
[ "Biodegradable waste management", "Chemical engineering", "Water pollution", "Biodegradation", "Sewerage", "Civil engineering", "Environmental engineering" ]
4,916,402
https://en.wikipedia.org/wiki/Pizzino
Pizzino (; plural as pizzini) is an Italian language word derived from the Sicilian language equivalent pizzinu meaning "small piece of paper". The word has been widely used to refer to small slips of paper that the Sicilian Mafia uses for high-level communications. Sicilian Mafia boss Bernardo Provenzano is among those best known for using pizzini, most notably in his instruction that Matteo Messina Denaro become his successor. The pizzini of other mafiosi have significantly aided police investigations. Provenzano case Provenzano used a version of the Caesar cipher, used by Julius Caesar in wartime communications. The Caesar code involves shifting each letter of the alphabet forward three places; Provenzano's pizzini code did the same, then replaced letters with numbers indicating their position in the alphabet. For example, one reported note by Provenzano read "I met 512151522 191212154 and we agreed that we will see each other after the holidays...". This name was decoded as "Binnu Riina". Discovery Channel News quotes cryptography expert Bruce Schneier saying "Looks like kindergarten cryptography to me. It will keep your kid sister out, but it won't keep the police out. But what do you expect from someone who is computer illiterate?". References Pizzino Cryptography Organized crime terminology
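A minimal sketch of the substitution scheme described above: each letter is shifted forward three places and written as its numeric position. The use of the 21-letter Italian alphabet (which omits J, K, W, X and Y) is an assumption of this sketch, but it reproduces the quoted "Binnu Riina" example; the grouping of the digits follows the reported reading, since an unseparated digit string is ambiguous on its own, and the function names are hypothetical.

#include <iostream>
#include <string>
#include <vector>

// 21-letter Italian alphabet, 1-based positions A=1 ... Z=21 (assumed).
const std::string kAlphabet = "ABCDEFGHILMNOPQRSTUVZ";

// Encode one letter: the alphabet position of the letter, shifted forward by three.
int encodeLetter(char c) {
    return static_cast<int>(kAlphabet.find(c)) + 1 + 3;
}

// Decode one number back to a letter by undoing the shift.
char decodeNumber(int n) {
    return kAlphabet[n - 3 - 1];
}

int main() {
    // The number groups reported for Provenzano's note, already split per letter.
    const std::vector<std::vector<int>> words = {{5, 12, 15, 15, 22}, {19, 12, 12, 15, 4}};
    for (const auto& word : words) {
        for (int n : word) std::cout << decodeNumber(n);
        std::cout << ' ';
    }
    std::cout << '\n';  // prints: BINNU RIINA
    return 0;
}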
Pizzino
[ "Mathematics", "Engineering" ]
280
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
4,916,481
https://en.wikipedia.org/wiki/Inverse%20scattering%20transform
In mathematics, the inverse scattering transform is a method that solves the initial value problem for a nonlinear partial differential equation using mathematical methods related to wave scattering. The direct scattering transform describes how a function scatters waves or generates bound-states. The inverse scattering transform uses wave scattering data to construct the function responsible for wave scattering. The direct and inverse scattering transforms are analogous to the direct and inverse Fourier transforms which are used to solve linear partial differential equations. Using a pair of differential operators, a 3-step algorithm may solve nonlinear differential equations; the initial solution is transformed to scattering data (direct scattering transform), the scattering data evolves forward in time (time evolution), and the scattering data reconstructs the solution forward in time (inverse scattering transform). This algorithm simplifies solving a nonlinear partial differential equation to solving 2 linear ordinary differential equations and an ordinary integral equation, a method ultimately leading to analytic solutions for many otherwise difficult to solve nonlinear partial differential equations. The inverse scattering problem is equivalent to a Riemann–Hilbert factorization problem, at least in the case of equations of one space dimension. This formulation can be generalized to differential operators of order greater than two and also to periodic problems. In higher space dimensions one has instead a "nonlocal" Riemann–Hilbert factorization problem (with convolution instead of multiplication) or a d-bar problem. History The inverse scattering transform arose from studying solitary waves. J.S. Russell described a "wave of translation" or "solitary wave" occurring in shallow water. First J.V. Boussinesq and later D. Korteweg and G. deVries discovered the Korteweg-deVries (KdV) equation, a nonlinear partial differential equation describing these waves. Later, N. Zabusky and M. Kruskal, using numerical methods for investigating the Fermi–Pasta–Ulam–Tsingou problem, found that solitary waves had the elastic properties of colliding particles; the waves' initial and ultimate amplitudes and velocities remained unchanged after wave collisions. These particle-like waves are called solitons and arise in nonlinear equations because of a weak balance between dispersive and nonlinear effects. Gardner, Greene, Kruskal and Miura introduced the inverse scattering transform for solving the Korteweg–de Vries equation. Lax, Ablowitz, Kaup, Newell, and Segur generalized this approach which led to solving other nonlinear equations including the nonlinear Schrödinger equation, sine-Gordon equation, modified Korteweg–De Vries equation, Kadomtsev–Petviashvili equation, the Ishimori equation, Toda lattice equation, and the Dym equation. This approach has also been applied to different types of nonlinear equations including differential-difference, partial difference, multidimensional equations and fractional integrable nonlinear systems. Description Nonlinear partial differential equation The independent variables are a spatial variable and a time variable . Subscripts or differential operators () indicate differentiation. The function is a solution of a nonlinear partial differential equation, , with initial condition (value) . 
Requirements The differential equation's solution meets the integrability and Fadeev conditions: Integrability condition: Fadeev condition: Differential operator pair The Lax differential operators, and , are linear ordinary differential operators with coefficients that may contain the function or its derivatives. The self-adjoint operator has a time derivative and generates a eigenvalue (spectral) equation with eigenfunctions and time-constant eigenvalues (spectral parameters) . and The operator describes how the eigenfunctions evolve over time, and generates a new eigenfunction of operator from eigenfunction of . The Lax operators combine to form a multiplicative operator, not a differential operator, of the eigenfuctions . The Lax operators are chosen to make the multiplicative operator equal to the nonlinear differential equation. The AKNS differential operators, developed by Ablowitz, Kaup, Newell, and Segur, are an alternative to the Lax differential operators and achieve a similar result. Direct scattering transform The direct scattering transform generates initial scattering data; this may include the reflection coefficients, transmission coefficient, eigenvalue data, and normalization constants of the eigenfunction solutions for this differential equation. Scattering data time evolution The equations describing how scattering data evolves over time occur as solutions to a 1st order linear ordinary differential equation with respect to time. Using varying approaches, this first order linear differential equation may arise from the linear differential operators (Lax pair, AKNS pair), a combination of the linear differential operators and the nonlinear differential equation, or through additional substitution, integration or differentiation operations. Spatially asymptotic equations () simplify solving these differential equations. Inverse scattering transform The Marchenko equation combines the scattering data into a linear Fredholm integral equation. The solution to this integral equation leads to the solution, u(x,t), of the nonlinear differential equation. Example: Korteweg–De Vries equation The nonlinear differential Korteweg–De Vries equation is Lax operators The Lax operators are: and The multiplicative operator is: Direct scattering transform The solutions to this differential equation may include scattering solutions with a continuous range of eigenvalues (continuous spectrum) and bound-state solutions with discrete eigenvalues (discrete spectrum). The scattering data includes transmission coefficients , left reflection coefficient , right reflection coefficient , discrete eigenvalues , and left and right bound-state normalization (norming) constants. Scattering data time evolution The spatially asymptotic left and right Jost functions simplify this step. The dependency constants relate the right and left Jost functions and right and left normalization constants. The Lax differential operator generates an eigenfunction which can be expressed as a time-dependent linear combination of other eigenfunctions. The solutions to these differential equations, determined using scattering and bound-state spatially asymptotic Jost functions, indicate a time-constant transmission coefficient , but time-dependent reflection coefficients and normalization coefficients. Inverse scattering transform The Marchenko kernel is . The Marchenko integral equation is a linear integral equation solved for . 
The solution to the Marchenko equation, , generates the solution to the nonlinear partial differential equation. Examples of integrable equations Korteweg–de Vries equation nonlinear Schrödinger equation Camassa-Holm equation Sine-Gordon equation Toda lattice Ishimori equation Dym equation See also Quantum inverse scattering method Integrable system Citations References Further reading External links   Inverse Scattering Transform and the Theory of Solitons Scattering theory Exactly solvable models Partial differential equations Transforms Integrable systems
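For reference, here is a hedged sketch of the formulas for the Korteweg–De Vries example, written in one common sign convention (other sources differ by signs and scalings, so this is a sketch rather than the article's own notation):

KdV equation: u_t - 6 u u_x + u_{xxx} = 0
Lax operators: L = -\partial_x^2 + u(x,t) and M = -4\partial_x^3 + 6u\,\partial_x + 3u_x
Eigenvalue (spectral) problem: L\psi = \lambda\psi, with \lambda_t = 0
Time evolution of eigenfunctions: \psi_t = M\psi
Lax (compatibility) equation: L_t = ML - LM = 6 u u_x - u_{xxx}, which reproduces the KdV equation above.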
Inverse scattering transform
[ "Physics", "Chemistry", "Mathematics" ]
1,389
[ "Functions and mappings", "Scattering theory", "Integrable systems", "Theoretical physics", "Mathematical objects", "Scattering", "Mathematical relations", "Transforms" ]
4,917,686
https://en.wikipedia.org/wiki/N-body%20simulation
In physics and astronomy, an N-body simulation is a simulation of a dynamical system of particles, usually under the influence of physical forces, such as gravity (see n-body problem for other applications). N-body simulations are widely used tools in astrophysics, from investigating the dynamics of few-body systems like the Earth-Moon-Sun system to understanding the evolution of the large-scale structure of the universe. In physical cosmology, N-body simulations are used to study processes of non-linear structure formation such as galaxy filaments and galaxy halos from the influence of dark matter. Direct N-body simulations are used to study the dynamical evolution of star clusters. Nature of the particles The 'particles' treated by the simulation may or may not correspond to physical objects which are particulate in nature. For example, an N-body simulation of a star cluster might have a particle per star, so each particle has some physical significance. On the other hand, a simulation of a gas cloud cannot afford to have a particle for each atom or molecule of gas, as this would require on the order of 10^23 particles for each mole of material (see Avogadro constant), so a single 'particle' would represent some much larger quantity of gas (often implemented using Smoothed Particle Hydrodynamics). This quantity need not have any physical significance, but must be chosen as a compromise between accuracy and manageable computer requirements. Dark matter simulation Dark matter plays an important role in the formation of galaxies. The time evolution of the density f (in phase space) of dark matter particles can be described by the collisionless Boltzmann equation ∂f/∂t + v·∇f - ∇Φ·∂f/∂v = 0. In the equation, v is the velocity, and Φ is the gravitational potential given by Poisson's Equation. These two coupled equations are solved in an expanding background Universe, which is governed by the Friedmann equations, after determining the initial conditions of dark matter particles. The conventional method employed for initializing positions and velocities of dark matter particles involves moving particles within a uniform Cartesian lattice or a glass-like particle configuration. This is done by using a linear theory approximation or a low-order perturbation theory. Direct gravitational N-body simulations In direct gravitational N-body simulations, the equations of motion of a system of N particles under the influence of their mutual gravitational forces are integrated numerically without any simplifying approximations. These calculations are used in situations where interactions between individual objects, such as stars or planets, are important to the evolution of the system. The first direct gravitational N-body simulations were carried out by Erik Holmberg at the Lund Observatory in 1941, determining the forces between stars in encountering galaxies via the mathematical equivalence between light propagation and gravitational interaction: putting light bulbs at the positions of the stars and measuring the directional light fluxes at the positions of the stars by a photo cell, the equations of motion can be integrated with effort. The first purely calculational simulations were then done by Sebastian von Hoerner at the Astronomisches Rechen-Institut in Heidelberg, Germany. 
Sverre Aarseth at the University of Cambridge (UK) has dedicated his entire scientific life to the development of a series of highly efficient N-body codes for astrophysical applications which use adaptive (hierarchical) time steps, an Ahmad-Cohen neighbour scheme and regularization of close encounters. Regularization is a mathematical trick to remove the singularity in the Newtonian law of gravitation for two particles which approach each other arbitrarily close. Sverre Aarseth's codes are used to study the dynamics of star clusters, planetary systems and galactic nuclei. General relativity simulations Many simulations are large enough that the effects of general relativity in establishing a Friedmann-Lemaitre-Robertson-Walker cosmology are significant. This is incorporated in the simulation as an evolving measure of distance (or scale factor) in a comoving coordinate system, which causes the particles to slow in comoving coordinates (as well as due to the redshifting of their physical energy). However, the contributions of general relativity and the finite speed of gravity can otherwise be ignored, as typical dynamical timescales are long compared to the light crossing time for the simulation, and the space-time curvature induced by the particles and the particle velocities are small. The boundary conditions of these cosmological simulations are usually periodic (or toroidal), so that one edge of the simulation volume matches up with the opposite edge. Calculation optimizations N-body simulations are simple in principle, because they involve merely integrating the 6N ordinary differential equations defining the particle motions in Newtonian gravity. In practice, the number N of particles involved is usually very large (typical simulations include many millions; the Millennium simulation included ten billion) and the number of particle-particle interactions needing to be computed increases on the order of N², and so direct integration of the differential equations can be prohibitively computationally expensive. Therefore, a number of refinements are commonly used. Numerical integration is usually performed over small timesteps using a method such as leapfrog integration. However, all numerical integration leads to errors. Smaller steps give lower errors but run more slowly. Leapfrog integration is roughly second order in the timestep; other integrators, such as Runge–Kutta methods, can have fourth-order accuracy or much higher. One of the simplest refinements is that each particle carries with it its own timestep variable, so that particles with widely different dynamical times don't all have to be evolved forward at the rate of that with the shortest time. There are two basic approximation schemes to decrease the computational time for such simulations. These can reduce the computational complexity to O(N log N) or better, at the loss of accuracy. Tree methods In tree methods, such as a Barnes–Hut simulation, an octree is usually used to divide the volume into cubic cells and only interactions between particles from nearby cells need to be treated individually; particles in distant cells can be treated collectively as a single large particle centered at the distant cell's center of mass (or as a low-order multipole expansion). This can dramatically reduce the number of particle pair interactions that must be computed. 
To prevent the simulation from becoming swamped by computing particle-particle interactions, the cells must be refined to smaller cells in denser parts of the simulation which contain many particles per cell. For simulations where particles are not evenly distributed, the well-separated pair decomposition methods of Callahan and Kosaraju yield optimal O(n log n) time per iteration with fixed dimension. Particle mesh method Another possibility is the particle mesh method in which space is discretised on a mesh and, for the purposes of computing the gravitational potential, particles are assumed to be divided between the surrounding 2x2 vertices of the mesh. The potential energy Φ can be found with the Poisson equation ∇²Φ = 4πGρ, where G is Newton's constant and ρ is the density (number of particles at the mesh points). The fast Fourier transform can solve this efficiently by going to the frequency domain, where the Poisson equation has the simple form Φ̂(k) = -4πGρ̂(k)/k², where k is the comoving wavenumber and the hats denote Fourier transforms. Since g = -∇Φ, the gravitational field can now be found by multiplying by -ik and computing the inverse Fourier transform (or computing the inverse transform and then using some other method). Since this method is limited by the mesh size, in practice a smaller mesh or some other technique (such as combining with a tree or simple particle-particle algorithm) is used to compute the small-scale forces. Sometimes an adaptive mesh is used, in which the mesh cells are much smaller in the denser regions of the simulation. Special-case optimizations Several different gravitational perturbation algorithms are used to get fairly accurate estimates of the path of objects in the Solar System. People often decide to put a satellite in a frozen orbit. The path of a satellite closely orbiting the Earth can be accurately modeled starting from the 2-body elliptical orbit around the center of the Earth, and adding small corrections due to the oblateness of the Earth, gravitational attraction of the Sun and Moon, atmospheric drag, etc. It is possible to find a frozen orbit without calculating the actual path of the satellite. The path of a small planet, comet, or long-range spacecraft can often be accurately modeled starting from the 2-body elliptical orbit around the Sun, and adding small corrections from the gravitational attraction of the larger planets in their known orbits. Some characteristics of the long-term paths of a system of particles can be calculated directly. The actual path of any particular particle does not need to be calculated as an intermediate step. Such characteristics include Lyapunov stability, Lyapunov time, various measurements from ergodic theory, etc. Two-particle systems Although there are millions or billions of particles in typical simulations, they typically correspond to a real particle with a very large mass, typically 10^9 solar masses. This can introduce problems with short-range interactions between the particles such as the formation of two-particle binary systems. As the particles are meant to represent large numbers of dark matter particles or groups of stars, these binaries are unphysical. To prevent this, a softened Newtonian force law is used, which does not diverge as the inverse-square radius at short distances. Most simulations implement this quite naturally by running the simulations on cells of finite size. It is important to implement the discretization procedure in such a way that particles always exert a vanishing force on themselves. 
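As a minimal illustrative sketch (not taken from the article) of the softened direct-summation force law just described, the routine below assumes Plummer-type softening of the kind defined in the next section; the struct and function names are hypothetical. Because the j == i term has zero separation, it contributes nothing, so each particle exerts a vanishing force on itself, as required above.

#include <cmath>
#include <cstddef>
#include <vector>

struct Body {
    double x, y, z;  // position
    double mass;
};

// Acceleration on body i from all bodies, with gravitational constant G and softening eps.
void softenedAcceleration(const std::vector<Body>& bodies, std::size_t i,
                          double G, double eps, double acc[3]) {
    acc[0] = acc[1] = acc[2] = 0.0;
    for (std::size_t j = 0; j < bodies.size(); ++j) {
        const double dx = bodies[j].x - bodies[i].x;
        const double dy = bodies[j].y - bodies[i].y;
        const double dz = bodies[j].z - bodies[i].z;
        // Plummer softening: the squared separation never falls below eps^2.
        const double r2 = dx * dx + dy * dy + dz * dz + eps * eps;
        const double invR3 = 1.0 / (r2 * std::sqrt(r2));
        // For j == i the offsets are zero, so this term vanishes (no self-force).
        acc[0] += G * bodies[j].mass * dx * invR3;
        acc[1] += G * bodies[j].mass * dy * invR3;
        acc[2] += G * bodies[j].mass * dz * invR3;
    }
}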
Softening

Softening is a numerical trick used in N-body techniques to prevent numerical divergences when a particle comes too close to another (and the force goes to infinity). This is obtained by modifying the gravitational potential of each particle to the regularized form

1 / √(r² + ε²)

(rather than 1/r), where ε is the softening parameter. The value of the softening parameter should be set small enough to keep simulations realistic.

Results from N-body simulations

N-body simulations give findings on the large-scale dark matter distribution and the structure of dark matter halos. According to simulations of cold dark matter, the overall distribution of dark matter on a large scale is not entirely uniform. Instead, it displays a structure resembling a network, consisting of voids, walls, filaments, and halos. Also, simulations show that the relationship between the concentration of halos and factors such as mass, initial fluctuation spectrum, and cosmological parameters is linked to the actual formation time of the halos. In particular, halos with lower mass tend to form earlier, and as a result have higher concentrations due to the higher density of the Universe at the time of their formation. Shapes of halos are found to deviate from being perfectly spherical. Typically, halos are found to be elongated and become increasingly prolate towards their centers. However, interactions between dark matter and baryons would affect the internal structure of dark matter halos. Simulations that model both dark matter and baryons are needed to study small-scale structures.

Incorporating baryons, leptons and photons into simulations

Many simulations simulate only cold dark matter, and thus include only the gravitational force. Incorporating baryons, leptons and photons into the simulations dramatically increases their complexity and often radical simplifications of the underlying physics must be made. However, this is an extremely important area and many modern simulations are now trying to understand processes that occur during galaxy formation which could account for galaxy bias.

Computational complexity

Reif and Tate prove that the n-body reachability problem (given n bodies satisfying a fixed electrostatic potential law, determine whether a specified body reaches a destination ball within a given time bound, where poly(n) bits of accuracy are required and the target time is poly(n)) is in PSPACE. On the other hand, if the question is whether the body eventually reaches the destination ball, the problem is PSPACE-hard. These bounds are based on similar complexity bounds obtained for ray tracing.

Example simulations

Common boilerplate code

The simplest implementation of an N-body simulation is a naive propagation of orbiting bodies; naive implying that the only force acting on the orbiting bodies is the gravitational force which they exert on each other. In object-oriented programming languages, such as C++, some boilerplate code is useful for establishing the fundamental mathematical structures as well as data containers required for propagation; namely state vectors, and thus vectors, and some fundamental object containing this data, as well as the mass of an orbiting body. This method is applicable to other types of N-body simulations as well; a simulation of point masses with charges would use a similar method, however the force would be due to attraction or repulsion by interaction of electric fields.
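Before the boilerplate below, here is a minimal sketch of how the softened potential from the Softening section translates into a pairwise acceleration in C++; the function name and the choice of ε are illustrative assumptions rather than part of the article's listing.

#include <cmath>

// Returns the scalar factor that multiplies the separation vector r_vec = r1 - r2
// to give the softened gravitational acceleration on body 1: a_vec = -factor * r_vec.
double softened_accel_factor(double big_g, double m2, double r_mag, double eps)
{
    double r2 = r_mag * r_mag + eps * eps;     // softened squared distance
    return big_g * m2 / (r2 * std::sqrt(r2));  // G * m2 / (r^2 + eps^2)^(3/2)
}

As ε tends to zero this reduces to the usual inverse-square law; inside the propagation loop shown later it would replace the bare 1/r² term and keep the force finite during close encounters.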
Regardless, the acceleration of a particle is the result of the summed force vectors acting on it, divided by the mass of the particle:

a_i = F_i / m_i, where F_i is the sum of the force vectors acting on particle i.

An example of a programmatically stable and scalable method for containing kinematic data for a particle is the use of fixed length arrays, which in optimised code allows for easy memory allocation and prediction of consumed resources; as seen in the following C++ code:

struct Vector3
{
    double e[3] = { 0 };

    Vector3() {}
    ~Vector3() {}

    inline Vector3(double e0, double e1, double e2)
    {
        this->e[0] = e0;
        this->e[1] = e1;
        this->e[2] = e2;
    }
};

struct OrbitalEntity
{
    double e[7] = { 0 };

    OrbitalEntity() {}
    ~OrbitalEntity() {}

    inline OrbitalEntity(double e0, double e1, double e2, double e3, double e4, double e5, double e6)
    {
        this->e[0] = e0;
        this->e[1] = e1;
        this->e[2] = e2;
        this->e[3] = e3;
        this->e[4] = e4;
        this->e[5] = e5;
        this->e[6] = e6;
    }
};

Note that OrbitalEntity contains enough room for a state vector, where:
e[0], the projection of the object's position vector in Cartesian space along the x-axis
e[1], the projection of the object's position vector in Cartesian space along the y-axis
e[2], the projection of the object's position vector in Cartesian space along the z-axis
e[3], the projection of the object's velocity vector in Cartesian space along the x-axis
e[4], the projection of the object's velocity vector in Cartesian space along the y-axis
e[5], the projection of the object's velocity vector in Cartesian space along the z-axis
Additionally, e[6] contains the mass value.

Initialisation of simulation parameters

Commonly, N-body simulations will be systems based on some type of equations of motion; of these, most will be dependent on some initial configuration to "seed" the simulation. In systems such as those dependent on some gravitational or electric potential, the force on a simulation entity is independent of its velocity. Hence, to seed the forces of the simulation, merely initial positions are needed, but this will not allow propagation; initial velocities are required. Consider a planet orbiting a star: initially it has no motion, but it is subject to gravitational attraction to its host star. As time progresses, and time steps are added, it will gather velocity according to its acceleration. For a given instant in time, t, the resultant acceleration of a body due to its neighbouring masses is independent of its velocity; however, for the time step dt, the resulting change in position is significantly different due to the propagation's inherent dependency on velocity. In basic propagation mechanisms, such as the symplectic Euler method to be used below, the position of an object at t + dt is only dependent on its velocity at t + dt, as the shift in position is calculated via

r(t + dt) = r(t) + v(t + dt) * dt

Without acceleration, v(t + dt) is static; however, from the perspective of an observer seeing only position, it will take two time steps to see a change in velocity. A solar-system-like simulation can be accomplished by taking average distances of planet-equivalent point masses from a central star. To keep code simple, a non-rigorous approach based on semi-major axes and mean velocities will be used.
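The velocity-then-position ordering described above can be captured in a small helper. The sketch below is a minimal, illustrative symplectic Euler step for a single body given an already-computed acceleration; the function and parameter names are assumptions, not part of the article's own listing.

// One symplectic (semi-implicit) Euler step for a single OrbitalEntity:
// the velocity is updated first, and the new velocity is then used for the position.
void symplectic_euler_step(OrbitalEntity& body, const Vector3& acceleration, double dt)
{
    // v(t + dt) = v(t) + a(t) * dt
    body.e[3] += acceleration.e[0] * dt;
    body.e[4] += acceleration.e[1] * dt;
    body.e[5] += acceleration.e[2] * dt;

    // r(t + dt) = r(t) + v(t + dt) * dt
    body.e[0] += body.e[3] * dt;
    body.e[1] += body.e[4] * dt;
    body.e[2] += body.e[5] * dt;
}

Updating the position with the old velocity instead would give the explicit (non-symplectic) Euler method, which drifts in energy much faster over long integrations.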
Memory space for these bodies must be reserved before the bodies are configured; to allow for scalability, a malloc command may be used:

OrbitalEntity* orbital_entities = (OrbitalEntity*)malloc(sizeof(OrbitalEntity) * (9 + N_ASTEROIDS)); // note the explicit cast required in C++

orbital_entities[0] = { 0.0,0.0,0.0, 0.0,0.0,0.0, 1.989e30 };               // a star similar to the sun
orbital_entities[1] = { 57.909e9,0.0,0.0, 0.0,47.36e3,0.0, 0.33011e24 };    // a planet similar to mercury
orbital_entities[2] = { 108.209e9,0.0,0.0, 0.0,35.02e3,0.0, 4.8675e24 };    // a planet similar to venus
orbital_entities[3] = { 149.596e9,0.0,0.0, 0.0,29.78e3,0.0, 5.9724e24 };    // a planet similar to earth
orbital_entities[4] = { 227.923e9,0.0,0.0, 0.0,24.07e3,0.0, 0.64171e24 };   // a planet similar to mars
orbital_entities[5] = { 778.570e9,0.0,0.0, 0.0,13e3,0.0, 1898.19e24 };      // a planet similar to jupiter
orbital_entities[6] = { 1433.529e9,0.0,0.0, 0.0,9.68e3,0.0, 568.34e24 };    // a planet similar to saturn
orbital_entities[7] = { 2872.463e9,0.0,0.0, 0.0,6.80e3,0.0, 86.813e24 };    // a planet similar to uranus
orbital_entities[8] = { 4495.060e9,0.0,0.0, 0.0,5.43e3,0.0, 102.413e24 };   // a planet similar to neptune

where N_ASTEROIDS is a variable which will remain at 0 temporarily, but allows for future inclusion of significant numbers of asteroids, at the user's discretion.

A critical step for the configuration of simulations is to establish the time range of the simulation, t_0 to t_end, as well as the incremental time step dt which will progress the simulation forward:

double t_0 = 0;
double t = t_0;
double dt = 86400;
double t_end = 86400 * 365 * 10;   // approximately a decade in seconds
double BIG_G = 6.67e-11;           // gravitational constant

The positions and velocities established above are interpreted to be correct for t = t_0. The extent of a simulation would logically be for the period where t_0 ≤ t < t_end.

Propagation

An entire simulation can consist of hundreds, thousands, millions, billions, or sometimes trillions of time steps.
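To get a feel for that count before writing the loop, the expected number of steps follows directly from the time range and step size chosen above; the small fragment below is illustrative only and reuses the variables defined in the configuration code.

// Number of fixed-size time steps implied by the configuration above:
// (86400 * 365 * 10 - 0) / 86400 = 3650 steps for the ten-year span.
size_t n_steps = (size_t)((t_end - t_0) / dt);

Shrinking dt improves accuracy but increases the step count in direct proportion.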
At the elementary level, each time step (for simulations with particles moving due to forces exerted on them) involves
calculating the forces on each body (F_i)
calculating the accelerations of each body (a_i = F_i / m_i)
calculating the velocities of each body (v(t + dt) = v(t) + a_i * dt)
calculating the new position of each body (r(t + dt) = r(t) + v(t + dt) * dt)

The above can be implemented quite simply with a while loop which continues while t exists in the aforementioned range:

while (t < t_end)
{
    for (size_t m1_idx = 0; m1_idx < 9 + N_ASTEROIDS; m1_idx++)
    {
        Vector3 a_g = { 0,0,0 };

        for (size_t m2_idx = 0; m2_idx < 9 + N_ASTEROIDS; m2_idx++)
        {
            if (m2_idx != m1_idx)
            {
                Vector3 r_vector;

                r_vector.e[0] = orbital_entities[m1_idx].e[0] - orbital_entities[m2_idx].e[0];
                r_vector.e[1] = orbital_entities[m1_idx].e[1] - orbital_entities[m2_idx].e[1];
                r_vector.e[2] = orbital_entities[m1_idx].e[2] - orbital_entities[m2_idx].e[2];

                double r_mag = sqrt(
                    r_vector.e[0] * r_vector.e[0] + r_vector.e[1] * r_vector.e[1] + r_vector.e[2] * r_vector.e[2]);

                double acceleration = -1.0 * BIG_G * (orbital_entities[m2_idx].e[6]) / pow(r_mag, 2.0);
                Vector3 r_unit_vector = { r_vector.e[0] / r_mag, r_vector.e[1] / r_mag, r_vector.e[2] / r_mag };

                a_g.e[0] += acceleration * r_unit_vector.e[0];
                a_g.e[1] += acceleration * r_unit_vector.e[1];
                a_g.e[2] += acceleration * r_unit_vector.e[2];
            }
        }

        orbital_entities[m1_idx].e[3] += a_g.e[0] * dt;
        orbital_entities[m1_idx].e[4] += a_g.e[1] * dt;
        orbital_entities[m1_idx].e[5] += a_g.e[2] * dt;
    }

    for (size_t entity_idx = 0; entity_idx < 9 + N_ASTEROIDS; entity_idx++)
    {
        orbital_entities[entity_idx].e[0] += orbital_entities[entity_idx].e[3] * dt;
        orbital_entities[entity_idx].e[1] += orbital_entities[entity_idx].e[4] * dt;
        orbital_entities[entity_idx].e[2] += orbital_entities[entity_idx].e[5] * dt;
    }

    t += dt;
}

Focusing on the inner four rocky planets in the simulation, the trajectories resulting from the above propagation are shown in the figure accompanying the original article (not reproduced here).

See also References Further reading Physical cosmology Gravity Simulation Cosmological simulation Articles containing video clips Computational physics Particles
N-body simulation
[ "Physics", "Astronomy" ]
4,797
[ "Astronomical sub-disciplines", "Theoretical physics", "Astrophysics", "Computational physics", "Physical cosmology", "Physical objects", "Particles", "Cosmological simulation", "Matter" ]
21,009,545
https://en.wikipedia.org/wiki/Algebra%20Universalis
Algebra Universalis is an international scientific journal focused on universal algebra and lattice theory. The journal, founded in 1971 by George Grätzer, is currently published by Springer-Verlag. Honorary editors in chief of the journal included Alfred Tarski and Bjarni Jónsson. External links Algebra Universalis on Springer.com Algebra Universalis homepage, including instructions to authors Universal algebra Algebra journals Academic journals established in 1971 Springer Science+Business Media academic journals
Algebra Universalis
[ "Mathematics" ]
91
[ "Fields of abstract algebra", "Algebra journals", "Universal algebra", "Algebra" ]
21,009,897
https://en.wikipedia.org/wiki/Crossed%20molecular%20beam
In analytical chemistry, crossed molecular beam experiments involve two beams of atoms or molecules which are collided together to study the dynamics of the chemical reaction, and can detect individual reactive collisions. Technique In a crossed molecular beam apparatus, two collimated beams of gas-phase atoms or molecules, each dilute enough to ignore collisions within each beam, intersect in a vacuum chamber. The direction and velocity of the resulting product molecules are then measured, and are frequently coupled with mass spectrometric data. These data yield information about the partitioning of energy among translational, rotational, and vibrational modes of the product molecules. History The crossed molecular beam technique was developed by Dudley Herschbach and Yuan T. Lee, for which they were awarded the 1986 Nobel Prize in Chemistry. While the technique was demonstrated in 1953 by Taylor and Datz of Oak Ridge National Laboratory, Herschbach and Lee refined the apparatus and began probing gas-phase reactions in unprecedented detail. Early crossed beam experiments investigated alkali metals such as potassium, rubidium, and cesium. When the scattered alkali metal atoms collided with a hot metal filament, they ionized, creating a small electric current. Because this detection method is nearly perfectly efficient, the technique was quite sensitive. Unfortunately, this simple detection system only detects alkali metals. New techniques for detection were needed to analyze main group elements. Detecting scattered particles through a metal filament gave a good indication of angular distribution but has no sensitivity to kinetic energy. In order to gain insight into the kinetic energy distribution, early crossed molecular beam apparatuses used a pair of slotted disks placed between the collision center and the detector. By controlling the rotation speed of the disks, only particles with a certain known velocity could pass through and be detected. With information about the velocity, angular distribution, and identity of the scattered species, useful information about the dynamics of the system can be derived. Later improvements included the use of quadrupole mass filters to select only the products of interest, as well as time-of-flight mass spectrometers to allow easy measurement of kinetic energy. These improvements also allowed the detection of a vast array of compounds, marking the advent of the "universal" crossed molecular beam apparatus. The inclusion of supersonic nozzles to collimate the gases expanded the variety and scope of experiments, and the use of lasers to excite the beams (either before impact or at the point of reaction) further broadened the applicability of this technique. See also Molecularity References Physical chemistry American inventions Taiwanese inventions
Crossed molecular beam
[ "Physics", "Chemistry" ]
518
[ "Physical chemistry", "Applied and interdisciplinary physics", "nan" ]
21,013,211
https://en.wikipedia.org/wiki/Vanadium%28III%29%20iodide
Vanadium(III) iodide is the inorganic compound with the formula VI3. This paramagnetic solid is generated by the reaction of vanadium powder with iodine at around 500 °C. The black hygroscopic crystals dissolve in water to give green solutions, characteristic of V(III) ions. The purification of vanadium metal by the chemical transport reaction involving the reversible formation of vanadium(III) iodides in the presence of iodine and its subsequent decomposition to yield pure metal: 2 V + 3 I2 ⇌ 2 VI3 VI3 crystallizes in the motif adopted by bismuth(III) iodide: the iodides are hexagonal-closest packed and the vanadium centers occupy one third of the octahedral holes. When solid samples are heated, the gas contains VI4, which is probably the volatile vanadium component in the vapor transport method. Thermal decomposition of the triiodide leaves a residue of vanadium(II) iodide: 2 VI3 → VI2 + VI4 ΔH = ; ΔS = mol−1 K−1. References Iodides Metal halides Vanadium(III) compounds
Vanadium(III) iodide
[ "Chemistry" ]
255
[ "Inorganic compounds", "Metal halides", "Salts" ]
21,014,315
https://en.wikipedia.org/wiki/Circumstellar%20envelope
A circumstellar envelope (CSE) is a part of a star that has a roughly spherical shape and is not gravitationally bound to the star core. Usually circumstellar envelopes are formed from the dense stellar wind, or they are present before the formation of the star. Circumstellar envelopes of old stars (Mira variables and OH/IR stars) eventually evolve into protoplanetary nebulae, and circumstellar envelopes of young stellar objects evolve into circumstellar discs. Types of circumstellar envelopes Circumstellar envelopes of AGB stars Circumstellar envelopes around young stellar objects See also Circumstellar dust Common envelopes Stellar evolution References External links The Structure and Evolution of Envelopes and Disks in Young Stellar Systems Stellar evolution
Circumstellar envelope
[ "Physics", "Astronomy" ]
174
[ "Astronomy stubs", "Astrophysics", "Stellar evolution", "Stellar astronomy stubs", "Astrophysics stubs" ]
21,014,764
https://en.wikipedia.org/wiki/Kuratowski%20embedding
In mathematics, the Kuratowski embedding allows one to view any metric space as a subset of some Banach space. It is named after Kazimierz Kuratowski. The statement obviously holds for the empty space. If (X, d) is a metric space, x0 is a point in X, and Cb(X) denotes the Banach space of all bounded continuous real-valued functions on X with the supremum norm, then the map Φ : X → Cb(X) defined by Φ(x)(y) = d(x, y) − d(x0, y) is an isometry. The above construction can be seen as embedding a pointed metric space into a Banach space. The Kuratowski–Wojdysławski theorem states that every bounded metric space X is isometric to a closed subset of a convex subset of some Banach space. (N.B. the image of this embedding is closed in the convex subset, not necessarily in the Banach space.) Here we use the isometry Ψ : X → Cb(X) defined by Ψ(x)(y) = d(x, y). The convex set mentioned above is the convex hull of Ψ(X). In both of these embedding theorems, we may replace Cb(X) by the Banach space ℓ ∞(X) of all bounded functions X → R, again with the supremum norm, since Cb(X) is a closed linear subspace of ℓ ∞(X). These embedding results are useful because Banach spaces have a number of useful properties not shared by all metric spaces: they are vector spaces which allows one to add points and do elementary geometry involving lines and planes etc.; and they are complete. Given a function with codomain X, it is frequently desirable to extend this function to a larger domain, and this often requires simultaneously enlarging the codomain to a Banach space containing X. History Formally speaking, this embedding was first introduced by Kuratowski, but a very close variation of this embedding appears already in the papers of Fréchet. Those papers make use of the embedding respectively to exhibit ℓ ∞ as a "universal" separable metric space (it isn't itself separable, hence the scare quotes) and to construct a general metric on by pulling back the metric on a simple Jordan curve in . See also Tight span, an embedding of any metric space into an injective metric space defined similarly to the Kuratowski embedding References Functional analysis Metric geometry
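A short check of the isometry property of the map Φ defined above may help; this is the standard argument, written out here in LaTeX rather than quoted from a particular source.

% For \Phi(x)(y) = d(x,y) - d(x_0,y), the triangle inequality gives, for every y,
% |\Phi(x_1)(y) - \Phi(x_2)(y)| = |d(x_1,y) - d(x_2,y)| \le d(x_1,x_2),
% with equality attained at y = x_2 (or y = x_1), hence
\[
\|\Phi(x_1) - \Phi(x_2)\|_\infty
  = \sup_{y \in X} \bigl| d(x_1, y) - d(x_2, y) \bigr|
  = d(x_1, x_2),
\]
% so \Phi preserves distances, i.e. it is an isometric embedding of (X, d) into C_b(X).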
Kuratowski embedding
[ "Mathematics" ]
489
[ "Functional analysis", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
845,722
https://en.wikipedia.org/wiki/Tunnel%20magnetoresistance
Tunnel magnetoresistance (TMR) is a magnetoresistive effect that occurs in a magnetic tunnel junction (MTJ), which is a component consisting of two ferromagnets separated by a thin insulator. If the insulating layer is thin enough (typically a few nanometres), electrons can tunnel from one ferromagnet into the other. Since this process is forbidden in classical physics, the tunnel magnetoresistance is a strictly quantum mechanical phenomenon, and lies in the study of spintronics. Magnetic tunnel junctions are manufactured in thin film technology. On an industrial scale the film deposition is done by magnetron sputter deposition; on a laboratory scale molecular beam epitaxy, pulsed laser deposition and electron beam physical vapor deposition are also utilized. The junctions are prepared by photolithography. Phenomenological description The direction of the two magnetizations of the ferromagnetic films can be switched individually by an external magnetic field. If the magnetizations are in a parallel orientation it is more likely that electrons will tunnel through the insulating film than if they are in the oppositional (antiparallel) orientation. Consequently, such a junction can be switched between two states of electrical resistance, one with low and one with very high resistance. History The effect was originally discovered in 1975 by Michel Jullière (University of Rennes, France) in Fe/Ge-O/Co-junctions at 4.2 K. The relative change of resistance was around 14%, and did not attract much attention. In 1991 Terunobu Miyazaki (Tohoku University, Japan) found a change of 2.7% at room temperature. Later, in 1994, Miyazaki found 18% in junctions of iron separated by an amorphous aluminum oxide insulator and Jagadeesh Moodera found 11.8% in junctions with electrodes of CoFe and Co. The highest effects observed at this time with aluminum oxide insulators was around 70% at room temperature. Since the year 2000, tunnel barriers of crystalline magnesium oxide (MgO) have been under development. In 2001 Butler and Mathon independently made the theoretical prediction that using iron as the ferromagnet and MgO as the insulator, the tunnel magnetoresistance can reach several thousand percent. The same year, Bowen et al. were the first to report experiments showing a significant TMR in a MgO based magnetic tunnel junction [Fe/MgO/FeCo(001)]. In 2004, Parkin and Yuasa were able to make Fe/MgO/Fe junctions that reach over 200% TMR at room temperature. In 2008, effects of up to 604% at room temperature and more than 1100% at 4.2 K were observed in junctions of CoFeB/MgO/CoFeB by S. Ikeda, H. Ohno group of Tohoku University in Japan. Applications The read-heads of modern hard disk drives work on the basis of magnetic tunnel junctions. TMR, or more specifically the magnetic tunnel junction, is also the basis of MRAM, a new type of non-volatile memory. The 1st generation technologies relied on creating cross-point magnetic fields on each bit to write the data on it, although this approach has a scaling limit at around 90–130 nm. There are two 2nd generation techniques currently being developed: Thermal Assisted Switching (TAS) and Spin-transfer torque. Magnetic tunnel junctions are also used for sensing applications. Today they are commonly used for position sensors and current sensors in various automotive, industrial and consumer applications. These higher performance sensors are replacing Hall sensors in many applications due to their improved performance. 
Physical explanation The relative resistance change—or effect amplitude—is defined as where is the electrical resistance in the anti-parallel state, whereas is the resistance in the parallel state. The TMR effect was explained by Jullière with the spin polarizations of the ferromagnetic electrodes. The spin polarization P is calculated from the spin dependent density of states (DOS) at the Fermi energy: The spin-up electrons are those with spin orientation parallel to the external magnetic field, whereas the spin-down electrons have anti-parallel alignment with the external field. The relative resistance change is now given by the spin polarizations of the two ferromagnets, P1 and P2: If no voltage is applied to the junction, electrons tunnel in both directions with equal rates. With a bias voltage U, electrons tunnel preferentially to the positive electrode. With the assumption that spin is conserved during tunneling, the current can be described in a two-current model. The total current is split in two partial currents, one for the spin-up electrons and another for the spin-down electrons. These vary depending on the magnetic state of the junctions. There are two possibilities to obtain a defined anti-parallel state. First, one can use ferromagnets with different coercivities (by using different materials or different film thicknesses). And second, one of the ferromagnets can be coupled with an antiferromagnet (exchange bias). In this case the magnetization of the uncoupled electrode remains "free". The TMR becomes infinite if P1 and P2 equal 1, i.e. if both electrodes have 100% spin polarization. In this case the magnetic tunnel junction becomes a switch, that switches magnetically between low resistance and infinite resistance. Materials that come into consideration for this are called ferromagnetic half-metals. Their conduction electrons are fully spin-polarized. This property is theoretically predicted for a number of materials (e.g. CrO2, various Heusler alloys) but its experimental confirmation has been the subject of subtle debate. Nevertheless, if one considers only those electrons that enter into transport, measurements by Bowen et al. of up to 99.6% spin polarization at the interface between La0.7Sr0.3MnO3 and SrTiO3 pragmatically amount to experimental proof of this property. The TMR decreases with both increasing temperature and increasing bias voltage. Both can be understood in principle by magnon excitations and interactions with magnons, as well as due to tunnelling with respect to localized states induced by oxygen vacancies (see Symmetry Filtering section hereafter). Symmetry-filtering in tunnel barriers Prior to the introduction of epitaxial magnesium oxide (MgO), amorphous aluminum oxide was used as the tunnel barrier of the MTJ, and typical room temperature TMR was in the range of tens of percent. MgO barriers increased TMR to hundreds of percent. This large increase reflects a synergetic combination of electrode and barrier electronic structures, which in turn reflects the achievement of structurally ordered junctions. Indeed, MgO filters the tunneling transmission of electrons with a particular symmetry that are fully spin-polarized within the current flowing across body-centered cubic Fe-based electrodes. Thus, in the MTJ's parallel (P) state of electrode magnetization, electrons of this symmetry dominate the junction current. 
In contrast, in the MTJ's antiparallel (AP) state, this channel is blocked, such that electrons with the next most favorable symmetry to transmit dominate the junction current. Since those electrons tunnel with respect to a larger barrier height, this results in the sizeable TMR. Beyond these large values of TMR across MgO-based MTJs, this impact of the barrier's electronic structure on tunnelling spintronics has been indirectly confirmed by engineering the junction's potential landscape for electrons of a given symmetry. This was first achieved by examining how the electrons of a lanthanum strontium manganite half-metallic electrode with both full spin (P=+1 ) and symmetry polarization tunnel across an electrically biased SrTiO3 tunnel barrier. The conceptually simpler experiment of inserting an appropriate metal spacer at the junction interface during sample growth was also later demonstrated . While theory, first formulated in 2001, predicts large TMR values associated with a 4eV barrier height in the MTJ's P state and 12eV in the MTJ's AP state, experiments reveal barrier heights as low as 0.4eV. This contradiction is lifted if one takes into account the localized states of oxygen vacancies in the MgO tunnel barrier. Extensive solid-state tunnelling spectroscopy experiments across MgO MTJs revealed in 2014 that the electronic retention on the ground and excited states of an oxygen vacancy, which is temperature-dependent, determines the tunnelling barrier height for electrons of a given symmetry, and thus crafts the effective TMR ratio and its temperature dependence. This low barrier height in turn enables the high current densities required for spin-transfer torque, discussed hereafter. Spin-transfer torque in magnetic tunnel junctions (MTJs) The effect of spin-transfer torque has been studied and applied widely in MTJs, where there is a tunnelling barrier sandwiched between a set of two ferromagnetic electrodes such that there is (free) magnetization of the right electrode, while assuming that the left electrode (with fixed magnetization) acts as spin-polarizer. This may then be pinned to some selecting transistor in a magnetoresistive random-access memory device, or connected to a preamplifier in a hard disk drive application. The spin-transfer torque vector, driven by the linear response voltage, can be computed from the expectation value of the torque operator: where is the gauge-invariant nonequilibrium density matrix for the steady-state transport, in the zero-temperature limit, in the linear-response regime, and the torque operator is obtained from the time derivative of the spin operator: Using the general form of a 1D tight-binding Hamiltonian: where total magnetization (as macrospin) is along the unit vector and the Pauli matrices properties involving arbitrary classical vectors , given by it is then possible to first obtain an analytical expression for (which can be expressed in compact form using , and the vector of Pauli spin matrices ). The spin-transfer torque vector in general MTJs has two components: a parallel and perpendicular component: A parallel component: And a perpendicular component: In symmetric MTJs (made of electrodes with the same geometry and exchange splitting), the spin-transfer torque vector has only one active component, as the perpendicular component disappears: . Therefore, only vs. 
needs to be plotted at the site of the right electrode to characterise tunnelling in symmetric MTJs, making them appealing for production and characterisation at an industrial scale. Note: In these calculations the active region (for which it is necessary to calculate the retarded Green's function) should consist of the tunnel barrier + the right ferromagnetic layer of finite thickness (as in realistic devices). The active region is attached to the left ferromagnetic electrode (modeled as semi-infinite tight-binding chain with non-zero Zeeman splitting) and the right N electrode (semi-infinite tight-binding chain without any Zeeman splitting), as encoded by the corresponding self-energy terms. Discrepancy between theory and experiment Theoretical tunnelling magneto-resistance ratios of 10000% have been predicted. However, the largest that have been observed are only 604%. One suggestion is that grain boundaries could be affecting the insulating properties of the MgO barrier; however, the structure of films in buried stack structures is difficult to determine. The grain boundaries may act as short circuit conduction paths through the material, reducing the resistance of the device. Recently, using new scanning transmission electron microscopy techniques, the grain boundaries within FeCoB/MgO/FeCoB MTJs have been atomically resolved. This has allowed first principles density functional theory calculations to be performed on structural units that are present in real films. Such calculations have shown that the band gap can be reduced by as much as 45%. In addition to grain boundaries, point defects such as boron interstitial and oxygen vacancies could be significantly altering the tunnelling magneto-resistance. Recent theoretical calculations have revealed that boron interstitials introduce defect states in the band gap potentially reducing the TMR further These theoretical calculations have also been backed up by experimental evidence showing the nature of boron within the MgO layer between two different systems and how the TMR is different. See also Quantum tunneling Magnetoresistance Giant Magnetoresistance (GMR) Spin-transfer torque References Electric and magnetic fields in matter Spintronics Magnetoresistance
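The Jullière expressions referenced in the Physical explanation section above can be restated compactly; the LaTeX note below gives the standard form, and the numerical values in the comment are chosen purely for illustration.

\[
\mathrm{TMR} \;=\; \frac{R_{\mathrm{ap}} - R_{\mathrm{p}}}{R_{\mathrm{p}}}
\;=\; \frac{2 P_1 P_2}{1 - P_1 P_2},
\qquad
P_i \;=\; \frac{D_{i\uparrow}(E_F) - D_{i\downarrow}(E_F)}
               {D_{i\uparrow}(E_F) + D_{i\downarrow}(E_F)} .
\]
% Example: two identical electrodes with P_1 = P_2 = 0.5 give
% TMR = 2(0.25)/(1 - 0.25) ~ 0.67, i.e. about 67 %, while P_1 = P_2 = 1 makes the
% denominator vanish and the TMR diverge, consistent with the half-metal limit
% discussed in the article.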
Tunnel magnetoresistance
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,614
[ "Magnetoresistance", "Physical quantities", "Spintronics", "Electric and magnetic fields in matter", "Materials science", "Magnetic ordering", "Condensed matter physics", "Electrical resistance and conductance" ]
848,494
https://en.wikipedia.org/wiki/Shock%20tube
For the pyrotechnic initiator, see Shock tube detonator A shock tube is an instrument used to replicate and direct blast waves at a sensor or model in order to simulate explosions and their effects, usually on a smaller scale. Shock tubes (and related impulse facilities such as shock tunnels, expansion tubes, and expansion tunnels) can also be used to study aerodynamic flow under a wide range of temperatures and pressures that are difficult to obtain in other types of testing facilities. Shock tubes are also used to investigate compressible flow phenomena and gas phase combustion reactions. More recently, shock tubes have been used in biomedical research to study how biological specimens are affected by blast waves. A shock wave inside a shock tube may be generated by a small explosion (blast-driven) or by the buildup of high pressures which cause diaphragm(s) to burst and a shock wave to propagate down the shock tube (compressed-gas driven). History An early study of compression driven shock tubes was published in 1899 by French scientist Paul Vieille, though the apparatus was not called a shock tube until the 1940s. In the 1930s it was rediscovered by W. H. Payman and WCF Shepherd of English Safety in Mines Research Board in order to study underground methane explosions, but the term was not coined until Bleakney et al. publication of 1949. In the 1940s, interest revived and shock tubes were increasingly used to study the flow of fast moving gases over objects, the chemistry and physical dynamics of gas phase combustion reactions. The modern version of the shock tube was developed during WWII at Princeton University by a group led by Walker Bleakney, who published overviews of their studies in 1946 and 1949. In 1966, Duff and Blackwell described a type of shock tube driven by high explosives. These ranged in diameter from 0.6 to 2 m and in length from 3 m to 15 m. The tubes themselves were constructed of low-cost materials and produced shock waves with peak dynamic pressures of 7 MPa to 200 MPa and durations of a few hundred microseconds to several milliseconds. Both compression-driven and blast-driven shock tubes are currently used for scientific as well as military applications. Compressed-gas driven shock tubes are more easily obtained and maintained in laboratory conditions; however, the shape of the pressure wave is different from a blast wave in some important respects and may not be suitable for some applications. Blast-driven shock tubes generate pressure waves that are more realistic to free-field blast waves. However, they require facilities and expert personnel for handling high explosives. Also, in addition to the initial pressure wave, a jet effect caused by the expansion of compressed gases (compression-driven) or production of rapidly expanding gases (blast-driven) follows and may transfer momentum to a sample after the blast wave has passed. More recently, laboratory scale shock tubes driven by fuel-air mixtures have been developed that produce realistic blast waves and can be operated in more ordinary laboratory facilities. Because the molar volume of gas is much less, the jet effect is a fraction of that for compressed-gas driven shock tubes. To date, the smaller size and lower peak pressures generated by these shock tubes make them most useful for preliminary, nondestructive testing of materials, validation of measurement equipment such as high speed pressure transducers, and for biomedical research as well as military applications. 
Operation A simple shock tube is a tube, rectangular or circular in cross-section, usually constructed of metal, in which a gas at low pressure and a gas at high pressure are separated using some form of diaphragm. See, for instance, texts by Soloukhin, Gaydon and Hurle, and Bradley. The diaphragm suddenly bursts open under predetermined conditions to produce a wave propagating through the low pressure section. The shock that eventually forms increases the temperature and pressure of the test gas and induces a flow in the direction of the shock wave. Observations can be made in the flow behind the incident front or take advantage of the longer testing times and vastly enhanced pressures and temperatures behind the reflected wave. The low-pressure gas, referred to as the driven gas, is subjected to the shock wave. The high pressure gas is known as the driver gas. The corresponding sections of the tube are likewise called the driver and driven sections. The driver gas is usually chosen to have a low molecular weight, (e.g., helium or hydrogen) for safety reasons, with high speed of sound, but may be slightly diluted to 'tailor' interface conditions across the shock. To obtain the strongest shocks the pressure of the driven gas is well below atmospheric pressure (a partial vacuum is induced in the driven section before detonation). The test begins with the bursting of the diaphragm. Several methods are commonly used to burst the diaphragm. A mechanically-driven plunger is sometimes used to pierce it or an explosive charge may be used to burst it. Another method is to use diaphragms of plastic or metals to define specific bursting pressures. Plastics are used for the lowest burst pressures, aluminum and copper for somewhat higher levels and mild steel and stainless steel for the highest burst pressures. These diaphragms are frequently scored in a cross-shaped pattern to a calibrated depth to ensure that they rupture evenly, contouring the petals so that the full section of the tube remains open during the test time. Yet another method of rupturing the diaphragm utilizes a mixture of combustible gases, with an initiator designed to produce a detonation within it, producing a sudden and sharp increase in what may or may not be a pressurized driver. This blast wave increases the temperature and pressure of the driven gas and induces a flow in the direction of the shock wave but at lower velocity than the lead wave. The bursting diaphragm produces a series of pressure waves, each increasing the speed of sound behind them, so that they compress into a shock propagating through the driven gas. This shock wave increases the temperature and pressure of the driven gas and induces a flow in the direction of the shock wave but at lower velocity than the lead wave. Simultaneously, a rarefaction wave, often referred to as the Prandtl-Meyer wave, travels back in to the driver gas. The interface, across which a limited degree of mixing occurs, separates driven and driver gases is referred to as the contact surface and follows, at a lower velocity, the lead wave. A 'Chemical Shock Tube' involves separating driver and driven gases by a pair of diaphragms designed to fail after pre-determined delays with an end 'dump tank' of greatly increased cross-section. This allows an extreme rapid reduction (quench) in temperature of the heated gases. 
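To make the relationship between shock strength and the jump in driven-gas conditions concrete, the sketch below evaluates the standard ideal-gas normal-shock relations for the incident shock in C++; the function names and the example Mach number are illustrative, and a real shock-tube design calculation would also account for the driver gas properties and the diaphragm pressure ratio.

#include <cstdio>

// Pressure and temperature ratios across a normal shock of Mach number M
// in a calorically perfect gas with ratio of specific heats gamma.
double pressure_ratio(double M, double gamma)
{
    return 1.0 + 2.0 * gamma / (gamma + 1.0) * (M * M - 1.0);
}

double temperature_ratio(double M, double gamma)
{
    double p21   = pressure_ratio(M, gamma);
    double rho21 = ((gamma + 1.0) * M * M) / ((gamma - 1.0) * M * M + 2.0);
    return p21 / rho21;   // ideal gas: T2/T1 = (p2/p1) / (rho2/rho1)
}

int main()
{
    double M = 3.0, gamma = 1.4;   // an illustrative incident shock in an air-like gas
    std::printf("p2/p1 = %.2f, T2/T1 = %.2f\n",
                pressure_ratio(M, gamma), temperature_ratio(M, gamma));
    // For M = 3 and gamma = 1.4 this prints roughly p2/p1 = 10.33 and T2/T1 = 2.68,
    // illustrating how strongly the incident shock compresses and heats the driven gas.
    return 0;
}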
Uses In addition to measurements of rates of chemical kinetics shock tubes have been used to measure dissociation energies and molecular relaxation rates they have been used in aerodynamic tests. The fluid flow in the driven gas can be used much as a wind tunnel, allowing higher temperatures and pressures therein replicating conditions in the turbine sections of jet engines. However, test times are limited to a few milliseconds, either by the arrival of the contact surface or the reflected shock wave. They have been further developed into shock tunnels, with an added nozzle and dump tank. The resultant high temperature hypersonic flow can be used to simulate atmospheric re-entry of spacecraft or hypersonic craft, again with limited testing times. Shock tubes have been developed in a wide range of sizes. The size and method of producing the shock wave determine the peak and duration of the pressure wave it produces. Thus, shock tubes can be used as a tool used to both create and direct blast waves at a sensor or an object in order to imitate actual explosions and the damage that they cause on a smaller scale, provided that such explosions do not involve elevated temperatures and shrapnel or flying debris. Results from shock tube experiments can be used to develop and validate numerical model of the response of a material or object to an ambient blast wave without shrapnel or flying debris. Shock tubes can be used to experimentally determine which materials and designs would be best suited to the job of attenuating ambient blast waves without shrapnel or flying debris. The results can then be incorporated into designs to protect structures and people that might be exposed to an ambient blast wave without shrapnel or flying debris. Shock tubes are also used in biomedical research to find out how biological tissues are affected by blast waves. There are alternatives to the classical shock tube; for laboratory experiments at very high pressure, shock waves can also be created using high-intensity short-pulse lasers. See also Hypersonic wind tunnel Light-gas gun Ludwieg tube Expansion fan Shock tube detonator Shock wave Supersonic wind tunnel References External links RPI Shock Tube page Shock Tube Calculator Shock tube x-t Diagram High Pressure shock tube group Laboratory equipment Aerodynamics Engineering equipment Shock waves
Shock tube
[ "Physics", "Chemistry", "Engineering" ]
1,843
[ "Physical phenomena", "Shock waves", "Aerodynamics", "Waves", "nan", "Aerospace engineering", "Fluid dynamics" ]
848,629
https://en.wikipedia.org/wiki/Adiabatic%20theorem
The adiabatic theorem is a concept in quantum mechanics. Its original form, due to Max Born and Vladimir Fock (1928), was stated as follows: A physical system remains in its instantaneous eigenstate if a given perturbation is acting on it slowly enough and if there is a gap between the eigenvalue and the rest of the Hamiltonian's spectrum. In simpler terms, a quantum mechanical system subjected to gradually changing external conditions adapts its functional form, but when subjected to rapidly varying conditions there is insufficient time for the functional form to adapt, so the spatial probability density remains unchanged. Adiabatic pendulum At the 1911 Solvay conference, Einstein gave a lecture on the quantum hypothesis, which states that for atomic oscillators. After Einstein's lecture, Hendrik Lorentz commented that, classically, if a simple pendulum is shortened by holding the wire between two fingers and sliding down, it seems that its energy will change smoothly as the pendulum is shortened. This seems to show that the quantum hypothesis is invalid for macroscopic systems, and if macroscopic systems do not follow the quantum hypothesis, then as the macroscopic system becomes microscopic, it seems the quantum hypothesis would be invalidated. Einstein replied that although both the energy and the frequency would change, their ratio would still be conserved, thus saving the quantum hypothesis. Before the conference, Einstein had just read a paper by Paul Ehrenfest on the adiabatic hypothesis. We know that he had read it because he mentioned it in a letter to Michele Besso written before the conference. Diabatic vs. adiabatic processes At some initial time a quantum-mechanical system has an energy given by the Hamiltonian ; the system is in an eigenstate of labelled . Changing conditions modify the Hamiltonian in a continuous manner, resulting in a final Hamiltonian at some later time . The system will evolve according to the time-dependent Schrödinger equation, to reach a final state . The adiabatic theorem states that the modification to the system depends critically on the time during which the modification takes place. For a truly adiabatic process we require ; in this case the final state will be an eigenstate of the final Hamiltonian , with a modified configuration: The degree to which a given change approximates an adiabatic process depends on both the energy separation between and adjacent states, and the ratio of the interval to the characteristic timescale of the evolution of for a time-independent Hamiltonian, , where is the energy of . Conversely, in the limit we have infinitely rapid, or diabatic passage; the configuration of the state remains unchanged: The so-called "gap condition" included in Born and Fock's original definition given above refers to a requirement that the spectrum of is discrete and nondegenerate, such that there is no ambiguity in the ordering of the states (one can easily establish which eigenstate of corresponds to ). In 1999 J. E. Avron and A. Elgart reformulated the adiabatic theorem to adapt it to situations without a gap. Comparison with the adiabatic concept in thermodynamics The term "adiabatic" is traditionally used in thermodynamics to describe processes without the exchange of heat between system and environment (see adiabatic process), more precisely these processes are usually faster than the timescale of heat exchange. (For example, a pressure wave is adiabatic with respect to a heat wave, which is not adiabatic.) 
Adiabatic in the context of thermodynamics is often used as a synonym for fast process. The classical and quantum mechanics definition is instead closer to the thermodynamical concept of a quasistatic process, which are processes that are almost always at equilibrium (i.e. that are slower than the internal energy exchange interactions time scales, namely a "normal" atmospheric heat wave is quasi-static, and a pressure wave is not). Adiabatic in the context of mechanics is often used as a synonym for slow process. In the quantum world adiabatic means for example that the time scale of electrons and photon interactions is much faster or almost instantaneous with respect to the average time scale of electrons and photon propagation. Therefore, we can model the interactions as a piece of continuous propagation of electrons and photons (i.e. states at equilibrium) plus a quantum jump between states (i.e. instantaneous). The adiabatic theorem in this heuristic context tells essentially that quantum jumps are preferably avoided, and the system tries to conserve the state and the quantum numbers. The quantum mechanical concept of adiabatic is related to adiabatic invariant, it is often used in the old quantum theory and has no direct relation with heat exchange. Example systems Simple pendulum As an example, consider a pendulum oscillating in a vertical plane. If the support is moved, the mode of oscillation of the pendulum will change. If the support is moved sufficiently slowly, the motion of the pendulum relative to the support will remain unchanged. A gradual change in external conditions allows the system to adapt, such that it retains its initial character. The detailed classical example is available in the Adiabatic invariant page and here. Quantum harmonic oscillator The classical nature of a pendulum precludes a full description of the effects of the adiabatic theorem. As a further example consider a quantum harmonic oscillator as the spring constant is increased. Classically this is equivalent to increasing the stiffness of a spring; quantum-mechanically the effect is a narrowing of the potential energy curve in the system Hamiltonian. If is increased adiabatically then the system at time will be in an instantaneous eigenstate of the current Hamiltonian , corresponding to the initial eigenstate of . For the special case of a system like the quantum harmonic oscillator described by a single quantum number, this means the quantum number will remain unchanged. Figure 1 shows how a harmonic oscillator, initially in its ground state, , remains in the ground state as the potential energy curve is compressed; the functional form of the state adapting to the slowly varying conditions. For a rapidly increased spring constant, the system undergoes a diabatic process in which the system has no time to adapt its functional form to the changing conditions. While the final state must look identical to the initial state for a process occurring over a vanishing time period, there is no eigenstate of the new Hamiltonian, , that resembles the initial state. The final state is composed of a linear superposition of many different eigenstates of which sum to reproduce the form of the initial state. Avoided curve crossing For a more widely applicable example, consider a 2-level atom subjected to an external magnetic field. The states, labelled and using bra–ket notation, can be thought of as atomic angular-momentum states, each with a particular geometry. 
For reasons that will become clear these states will henceforth be referred to as the diabatic states. The system wavefunction can be represented as a linear combination of the diabatic states: With the field absent, the energetic separation of the diabatic states is equal to ; the energy of state increases with increasing magnetic field (a low-field-seeking state), while the energy of state decreases with increasing magnetic field (a high-field-seeking state). Assuming the magnetic-field dependence is linear, the Hamiltonian matrix for the system with the field applied can be written where is the magnetic moment of the atom, assumed to be the same for the two diabatic states, and is some time-independent coupling between the two states. The diagonal elements are the energies of the diabatic states ( and ), however, as is not a diagonal matrix, it is clear that these states are not eigenstates of due to the off-diagonal coupling constant. The eigenvectors of the matrix are the eigenstates of the system, which we will label and , with corresponding eigenvalues It is important to realise that the eigenvalues and are the only allowed outputs for any individual measurement of the system energy, whereas the diabatic energies and correspond to the expectation values for the energy of the system in the diabatic states and . Figure 2 shows the dependence of the diabatic and adiabatic energies on the value of the magnetic field; note that for non-zero coupling the eigenvalues of the Hamiltonian cannot be degenerate, and thus we have an avoided crossing. If an atom is initially in state in zero magnetic field (on the red curve, at the extreme left), an adiabatic increase in magnetic field will ensure the system remains in an eigenstate of the Hamiltonian throughout the process (follows the red curve). A diabatic increase in magnetic field will ensure the system follows the diabatic path (the dotted blue line), such that the system undergoes a transition to state . For finite magnetic field slew rates there will be a finite probability of finding the system in either of the two eigenstates. See below for approaches to calculating these probabilities. These results are extremely important in atomic and molecular physics for control of the energy-state distribution in a population of atoms or molecules. Mathematical statement Under a slowly changing Hamiltonian with instantaneous eigenstates and corresponding energies , a quantum system evolves from the initial state to the final state where the coefficients undergo the change of phase with the dynamical phase and geometric phase In particular, , so if the system begins in an eigenstate of , it remains in an eigenstate of during the evolution with a change of phase only. Proofs {| class="toccolours collapsible collapsed" width="80%" style="text-align:left" !Sakurai in Modern Quantum Mechanics |- | This proof is partly inspired by one given by Sakurai in Modern Quantum Mechanics. The instantaneous eigenstates and energies , by assumption, satisfy the time-independent Schrödinger equation at all times . Thus, they constitute a basis that can be used to expand the state at any time . The evolution of the system is governed by the time-dependent Schrödinger equation where (see ). 
Insert the expansion of , use , differentiate with the product rule, take the inner product with and use orthonormality of the eigenstates to obtain This coupled first-order differential equation is exact and expresses the time-evolution of the coefficients in terms of inner products between the eigenstates and the time-differentiated eigenstates. But it is possible to re-express the inner products for in terms of matrix elements of the time-differentiated Hamiltonian . To do so, differentiate both sides of the time-independent Schrödinger equation with respect to time using the product rule to get Again take the inner product with and use and orthonormality to find Insert this into the differential equation for the coefficients to obtain This differential equation describes the time-evolution of the coefficients, but now in terms of matrix elements of . To arrive at the adiabatic theorem, neglect the right hand side. This is valid if the rate of change of the Hamiltonian is small and there is a finite gap between the energies. This is known as the adiabatic approximation. Under the adiabatic approximation, which integrates precisely to the adiabatic theorem with the phases defined in the statement of the theorem. The dynamical phase is real because it involves an integral over a real energy. To see that the geometric phase is purely real, differentiate the normalization of the eigenstates and use the product rule to find that Thus, is purely imaginary, so the geometric phase is purely real. |} {| class="toccolours collapsible collapsed" width="80%" style="text-align:left" !Adiabatic approximation |- | Proof with the details of the adiabatic approximation We are going to formulate the statement of the theorem as follows: For a slowly varying Hamiltonian in the time range T the solution of the schroedinger equation with initial conditions where is the eigenvector of the instantaneous Schroedinger equation can be approximated as: where the adiabatic approximation is: and also called Berry phase And now we are going to prove the theorem. Consider the time-dependent Schrödinger equation with Hamiltonian We would like to know the relation between an initial state and its final state at in the adiabatic limit First redefine time as : At every point in time can be diagonalized with eigenvalues and eigenvectors . Since the eigenvectors form a complete basis at any time we can expand as: where The phase is called the dynamic phase factor. By substitution into the Schrödinger equation, another equation for the variation of the coefficients can be obtained: The term gives , and so the third term of left side cancels out with the right side, leaving Now taking the inner product with an arbitrary eigenfunction , the on the left gives , which is 1 only for m = n and otherwise vanishes. The remaining part gives For the will oscillate faster and faster and intuitively will eventually suppress nearly all terms on the right side. The only exceptions are when has a critical point, i.e. . This is trivially true for . Since the adiabatic theorem assumes a gap between the eigenenergies at any time this cannot hold for . Therefore, only the term will remain in the limit . In order to show this more rigorously we first need to remove the term. This can be done by defining We obtain: This equation can be integrated: or written in vector notation Here is a matrix and is basically a Fourier transform. It follows from the Riemann-Lebesgue lemma that as . 
As last step take the norm on both sides of the above equation: and apply Grönwall's inequality to obtain Since it follows for . This concludes the proof of the adiabatic theorem. In the adiabatic limit the eigenstates of the Hamiltonian evolve independently of each other. If the system is prepared in an eigenstate its time evolution is given by: So, for an adiabatic process, a system starting from nth eigenstate also remains in that nth eigenstate like it does for the time-independent processes, only picking up a couple of phase factors. The new phase factor can be canceled out by an appropriate choice of gauge for the eigenfunctions. However, if the adiabatic evolution is cyclic, then becomes a gauge-invariant physical quantity, known as the Berry phase. |} {| class="toccolours collapsible collapsed" width="80%" style="text-align:left" !Generic proof in parameter space |- | Let's start from a parametric Hamiltonian , where the parameters are slowly varying in time, the definition of slow here is defined essentially by the distance in energy by the eigenstates (through the uncertainty principle, we can define a timescale that shall be always much lower than the time scale considered). This way we clearly also identify that while slowly varying the eigenstates remains clearly separated in energy (e.g. also when we generalize this to the case of bands as in the TKNN formula the bands shall remain clearly separated). Given they do not intersect the states are ordered and in this sense this is also one of the meanings of the name topological order. We do have the instantaneous Schrödinger equation: And instantaneous eigenstates: The generic solution: plugging in the full Schrödinger equation and multiplying by a generic eigenvector: And if we introduce the adiabatic approximation: for each We have and where And C is the path in the parameter space, This is the same as the statement of the theorem but in terms of the coefficients of the total wave function and its initial state. Now this is slightly more general than the other proofs given we consider a generic set of parameters, and we see that the Berry phase acts as a local geometric quantity in the parameter space. Finally integrals of local geometric quantities can give topological invariants as in the case of the Gauss-Bonnet theorem. In fact if the path C is closed then the Berry phase persists to Gauge transformation and becomes a physical quantity. |} Example applications Often a solid crystal is modeled as a set of independent valence electrons moving in a mean perfectly periodic potential generated by a rigid lattice of ions. With the Adiabatic theorem we can also include instead the motion of the valence electrons across the crystal and the thermal motion of the ions as in the Born–Oppenheimer approximation. This does explain many phenomena in the scope of: thermodynamics: Temperature dependence of specific heat, thermal expansion, melting transport phenomena: the temperature dependence of electric resistivity of conductors, the temperature dependence of electric conductivity in insulators, Some properties of low temperature superconductivity optics: optic absorption in the infrared for ionic crystals, Brillouin scattering, Raman scattering Deriving conditions for diabatic vs adiabatic passage We will now pursue a more rigorous analysis. 
Making use of bra–ket notation, the state vector of the system at time can be written where the spatial wavefunction alluded to earlier is the projection of the state vector onto the eigenstates of the position operator It is instructive to examine the limiting cases, in which is very large (adiabatic, or gradual change) and very small (diabatic, or sudden change). Consider a system Hamiltonian undergoing continuous change from an initial value , at time , to a final value , at time , where . The evolution of the system can be described in the Schrödinger picture by the time-evolution operator, defined by the integral equation which is equivalent to the Schrödinger equation. along with the initial condition . Given knowledge of the system wave function at , the evolution of the system up to a later time can be obtained using The problem of determining the adiabaticity of a given process is equivalent to establishing the dependence of on . To determine the validity of the adiabatic approximation for a given process, one can calculate the probability of finding the system in a state other than that in which it started. Using bra–ket notation and using the definition , we have: We can expand In the perturbative limit we can take just the first two terms and substitute them into our equation for , recognizing that is the system Hamiltonian, averaged over the interval , we have: After expanding the products and making the appropriate cancellations, we are left with: giving where is the root mean square deviation of the system Hamiltonian averaged over the interval of interest. The sudden approximation is valid when (the probability of finding the system in a state other than that in which is started approaches zero), thus the validity condition is given by which is a statement of the time-energy form of the Heisenberg uncertainty principle. Diabatic passage In the limit we have infinitely rapid, or diabatic passage: The functional form of the system remains unchanged: This is sometimes referred to as the sudden approximation. The validity of the approximation for a given process can be characterized by the probability that the state of the system remains unchanged: Adiabatic passage In the limit we have infinitely slow, or adiabatic passage. The system evolves, adapting its form to the changing conditions, If the system is initially in an eigenstate of , after a period it will have passed into the corresponding eigenstate of . This is referred to as the adiabatic approximation. The validity of the approximation for a given process can be determined from the probability that the final state of the system is different from the initial state: Calculating adiabatic passage probabilities The Landau–Zener formula In 1932 an analytic solution to the problem of calculating adiabatic transition probabilities was published separately by Lev Landau and Clarence Zener, for the special case of a linearly changing perturbation in which the time-varying component does not couple the relevant states (hence the coupling in the diabatic Hamiltonian matrix is independent of time). The key figure of merit in this approach is the Landau–Zener velocity: where is the perturbation variable (electric or magnetic field, molecular bond-length, or any other perturbation to the system), and and are the energies of the two diabatic (crossing) states. A large results in a large diabatic transition probability and vice versa. 
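As a concrete illustration of how the Landau–Zener velocity enters, the short sketch below evaluates the standard closed-form Landau–Zener expression for the diabatic transition probability, exp(−2πa²/(ħ|v|)), where a is the constant off-diagonal coupling between the two diabatic states and v is the Landau–Zener velocity defined above. The function name and the numerical values are illustrative assumptions, not taken from the text.

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s

def landau_zener_diabatic_probability(coupling, lz_velocity):
    """Probability of a diabatic transition for a linear sweep.

    coupling     -- off-diagonal matrix element a between the two diabatic
                    states, in joules (assumed constant in time).
    lz_velocity  -- Landau-Zener velocity |d(E2 - E1)/dt|, in joules per
                    second, i.e. the sweep rate of the diabatic energy gap.
    """
    return math.exp(-2.0 * math.pi * coupling**2 / (HBAR * abs(lz_velocity)))

# Illustrative numbers: a coupling of h * 1 GHz swept across resonance over
# 0.1, 1 and 10 nanoseconds.  The slower the sweep (smaller lz_velocity),
# the smaller the diabatic probability, i.e. the more adiabatic the passage.
a = 6.626e-34 * 1e9  # h * 1 GHz, in joules
for sweep_ns in (0.1, 1.0, 10.0):
    v = 6.626e-34 * 1e9 / (sweep_ns * 1e-9)  # J/s
    print(sweep_ns, landau_zener_diabatic_probability(a, v))
```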
Using the Landau–Zener formula the probability, , of a diabatic transition is given by The numerical approach For a transition involving a nonlinear change in perturbation variable or time-dependent coupling between the diabatic states, the equations of motion for the system dynamics cannot be solved analytically. The diabatic transition probability can still be obtained using one of the wide varieties of numerical solution algorithms for ordinary differential equations. The equations to be solved can be obtained from the time-dependent Schrödinger equation: where is a vector containing the adiabatic state amplitudes, is the time-dependent adiabatic Hamiltonian, and the overdot represents a time derivative. Comparison of the initial conditions used with the values of the state amplitudes following the transition can yield the diabatic transition probability. In particular, for a two-state system: for a system that began with . See also Landau–Zener formula Berry phase Quantum stirring, ratchets, and pumping Adiabatic quantum motor Born–Oppenheimer approximation Eigenstate thermalization hypothesis Adiabatic process References Theorems in quantum mechanics
Adiabatic theorem
[ "Physics", "Mathematics" ]
4,519
[ "Theorems in quantum mechanics", "Equations of physics", "Quantum mechanics", "Theorems in mathematical physics", "Physics theorems" ]
849,376
https://en.wikipedia.org/wiki/Transition%20state
In chemistry, the transition state of a chemical reaction is a particular configuration along the reaction coordinate. It is defined as the state corresponding to the highest potential energy along this reaction coordinate. It is often marked with the double dagger (‡) symbol. As an example, the transition state shown below occurs during the SN2 reaction of bromoethane with a hydroxide anion: The activated complex of a reaction can refer to either the transition state or to other states along the reaction coordinate between reactants and products, especially those close to the transition state. According to the transition state theory, once the reactants have passed through the transition state configuration, they always continue to form products. History of concept The concept of a transition state has been important in many theories of the rates at which chemical reactions occur. This started with the transition state theory (also referred to as the activated complex theory), which was first developed around 1935 by Eyring, Evans and Polanyi, and introduced basic concepts in chemical kinetics that are still used today. Explanation A collision between reactant molecules may or may not result in a successful reaction. The outcome depends on factors such as the relative kinetic energy, relative orientation and internal energy of the molecules. Even if the collision partners form an activated complex they are not bound to go on and form products, and instead the complex may fall apart back to the reactants. Observing transition states Because the structure of the transition state is a first-order saddle point along a potential energy surface, the population of species in a reaction that are at the transition state is negligible. Since being at a saddle point along the potential energy surface means that a force is acting along the bonds to the molecule, there will always be a lower energy structure that the transition state can decompose into. This is sometimes expressed by stating that the transition state has a fleeting existence, with species only maintaining the transition state structure for the time-scale of vibrations of chemical bonds (femtoseconds). However, cleverly manipulated spectroscopic techniques can get us as close as the timescale of the technique allows. Femtochemical IR spectroscopy was developed for that reason, and it is possible to probe molecular structure extremely close to the transition point. Often, along the reaction coordinate, reactive intermediates are present not much lower in energy from a transition state making it difficult to distinguish between the two. Determining the geometry of a transition state Transition state structures can be determined by searching for first-order saddle points on the potential energy surface (PES) of the chemical species of interest. A first-order saddle point is a critical point of index one, that is, a position on the PES corresponding to a minimum in all directions except one. This is further described in the article geometry optimization. The Hammond–Leffler postulate The Hammond–Leffler postulate states that the structure of the transition state more closely resembles either the products or the starting material, depending on which is higher in enthalpy. A transition state that resembles the reactants more than the products is said to be early, while a transition state that resembles the products more than the reactants is said to be late. 
Thus, the Hammond–Leffler Postulate predicts a late transition state for an endothermic reaction and an early transition state for an exothermic reaction. A dimensionless reaction coordinate that quantifies the lateness of a transition state can be used to test the validity of the Hammond–Leffler postulate for a particular reaction. The structure–correlation principle The structure–correlation principle states that structural changes that occur along the reaction coordinate can reveal themselves in the ground state as deviations of bond distances and angles from normal values along the reaction coordinate. According to this theory if one particular bond length on reaching the transition state increases then this bond is already longer in its ground state compared to a compound not sharing this transition state. One demonstration of this principle is found in the two bicyclic compounds depicted below. The one on the left is a bicyclo[2.2.2]octene, which, at 200 °C, extrudes ethylene in a retro-Diels–Alder reaction. Compared to the compound on the right (which, lacking an alkene group, is unable to give this reaction) the bridgehead carbon-carbon bond length is expected to be shorter if the theory holds, because on approaching the transition state this bond gains double bond character. For these two compounds the prediction holds up based on X-ray crystallography. Implications for enzymatic catalysis One way that enzymatic catalysis proceeds is by stabilizing the transition state through electrostatics. By lowering the energy of the transition state, it allows a greater population of the starting material to attain the energy needed to overcome the transition energy and proceed to product. See also Transition state theory Transition state analogs, chemical compounds mimicking the substrate's transition state and act as enzyme inhibitors Reaction intermediate Reactive intermediate Activated complex References Chemical kinetics
Transition state
[ "Chemistry" ]
1,036
[ "Chemical kinetics", "Chemical reaction engineering" ]
2,669,197
https://en.wikipedia.org/wiki/Pulsed-field%20gel%20electrophoresis
Pulsed-field gel electrophoresis (PFGE) is a technique used for the separation of large DNA molecules by applying an electric field that periodically changes direction to a gel matrix. Unlike standard agarose gel electrophoresis, which can separate DNA fragments of up to 50 kb, PFGE resolves fragments up to 10 Mb. This allows for the direct analysis of genomic DNA. History In 1984, David C. Schwartz and Charles Cantor published the first successful application of alternating electric fields for the separation of large DNA molecules. This technique, which they named PFGE, resulted in the development of several variations, including Orthogonal Field Alternation Gel Electrophoresis (OFAGE), Transverse Alternating Field Electrophoresis (TAFE), Field-Inversion Gel Electrophoresis (FIGE), and Clamped Homogeneous Electric Fields (CHEF), among others. Procedure The procedure for PFGE is similar to that of standard agarose gel electrophoresis, with the main exception being the application of the electric current. Generally, in PFGE chambers, the voltage periodically switches between three directions: one along the central axis, and two at a 60-degree angle along each side. How the voltage is applied can change depending on the variation of PFGE used. Applications PFGE may be used for genotyping or genetic fingerprinting. It has commonly been considered a gold standard in epidemiological studies of pathogenic organisms for several decades. For instance, subtyping bacterial isolates with this method has made it easier to discriminate among strains of Listeria monocytogenes, Lactococcus garvieae and some clinical isolates of the Bacillus cereus group isolated from diseased aquatic organisms, and thus to link environmental or food isolates with clinical infections. It is now in the process of being superseded by next-generation sequencing methods. See also Gel electrophoresis Nonlinear frictiophoresis References External links Pulse field method Applied Maths BioNumerics PFGE typing Biological techniques and tools Electrophoresis Genetics techniques
Pulsed-field gel electrophoresis
[ "Chemistry", "Engineering", "Biology" ]
435
[ "Genetics techniques", "Instrumental analysis", "Genetic engineering", "Biochemical separation processes", "Molecular biology techniques", "nan", "Electrophoresis" ]
2,670,468
https://en.wikipedia.org/wiki/Dakin%E2%80%93West%20reaction
The Dakin–West reaction is a chemical reaction that transforms an amino-acid into a keto-amide using an acid anhydride and a base, typically pyridine. It is named for Henry Drysdale Dakin and Randolph West. In 2016 Schreiner and coworkers reported the first asymmetric variant of this reaction employing short oligopeptides as catalysts. With pyridine as a base and solvent, refluxing conditions are required. However, with the addition of 4-dimethylaminopyridine (DMAP) as a catalyst, the reaction can take place at room temperature. With some acids, this reaction can take place even in the absence of an α-amino group. This reaction should not be confused with the Dakin reaction. Reaction mechanism The reaction mechanism involves the acylation and activation of the acid 1 to the mixed anhydride 3. The amide will serve as a nucleophile for the cyclization forming the azlactone 4. Deprotonation and acylation of the azlactone forms the key carbon-carbon bond. Subsequent ring-opening of 6 and decarboxylation give the final keto-amide product. General ketone synthesis Modern variations on the Dakin–West reaction permit many enolizable carboxylic acids – not merely amino acids – to be converted to their corresponding methyl ketones. For example, β-aryl carboxylic acids can be efficiently converted to β-aryl ketones by treatment of an acetic anhydride solution of the acid with catalytic N-methylimidazole. This reactivity is attributed in part to generation of acetylimidazolium, a powerful cationic acetylating agent, in situ. See also Robinson–Gabriel synthesis - A process for converting the keto-amide products of this reaction into oxazoles References Carbon-carbon bond forming reactions Substitution reactions Name reactions
Dakin–West reaction
[ "Chemistry" ]
415
[ "Name reactions", "Carbon-carbon bond forming reactions", "Organic reactions" ]
2,670,939
https://en.wikipedia.org/wiki/Breaker%20%28hydraulic%29
A breaker is a powerful percussion hammer fitted to an excavator for demolishing hard (rock or concrete) structures. It is powered by an auxiliary hydraulic system from the excavator, which is fitted with a foot-operated valve for this purpose. Additionally, demolition crews employ the hoe ram for jobs too large for jackhammering or areas where blasting is not possible due to safety or environmental issues. Breakers are often referred to as "hammers", "peckers", "hoe rams" or "hoe rammers". These terms are popular and commonly used amongst construction/demolition workers. The first hydraulic breaker, Hydraulikhammer HM 400, was invented in 1967 by German company Krupp (today German company Atlas Copco) in Essen. Notable manufacturers See also Excavator Particulates References External links Info on hydraulic breaker repairs and maintenance Comprehensive Guide to Hydraulic Hammers Bibliography Hydraulic tools Construction equipment German inventions 1967 in science 1967 establishments in West Germany
Breaker (hydraulic)
[ "Physics", "Engineering" ]
204
[ "Construction equipment", "Physical systems", "Construction", "Hydraulics", "Hydraulic tools", "Industrial machinery" ]
2,671,210
https://en.wikipedia.org/wiki/Nitryl
Nitryl is the nitrogen dioxide (NO2) moiety when it occurs in a larger compound as a univalent fragment. Examples include nitryl fluoride (NO2F) and nitryl chloride (NO2Cl). Like nitrogen dioxide, the nitryl moiety contains a nitrogen atom with two bonds to the two oxygen atoms, and a third bond shared equally between the nitrogen and the two oxygen atoms. The nitrogen-centred radical is then free to form a bond with another univalent fragment (X) to produce an N−X bond, where X can be F, Cl, OH, etc. In organic nomenclature, the nitryl moiety is known as the nitro group. For instance, nitryl benzene is normally called nitrobenzene (PhNO2). See also Dinitrogen tetroxide Nitro compound Nitrosyl (R−N=O) Isocyanide (R−N≡C) Nitryl fluoride Nitrate References Inorganic nitrogen compounds Oxides Free radicals Nitrogen–oxygen compounds
Nitryl
[ "Chemistry", "Biology" ]
224
[ "Inorganic compounds", "Free radicals", "Oxides", "Inorganic nitrogen compounds", "Salts", "Senescence", "Biomolecules" ]
2,671,762
https://en.wikipedia.org/wiki/Helek
The helek, also spelled chelek (Hebrew חלק, meaning "portion", plural halakim חלקים), is a unit of time used in the calculation of the Molad. Other spellings used are chelak and chelek, both with plural chalakim. The hour is divided into 1080 halakim, so a helek is 3⅓ seconds, or 1/18 of a minute. The helek derives from a small Babylonian time period called a she, meaning "barleycorn", itself equal to 1/72 of a Babylonian time degree (1° of celestial rotation): 360 degrees × 72 shes per degree / 24 hours = 1080 shes per hour. The Hebrew calendar defines its mean month to be exactly 29 days, 12 hours, and 793 halakim, which is 29 days, 12 hours, 44 minutes, and 3⅓ seconds. It defines its mean year as exactly 235/19 times this amount, or 365 days, 5 hours, 55 minutes, and 25 and 25/57 seconds (approximately 365.2468222 days). Bibliography Hebrew calendar Units of time
Helek
[ "Physics", "Mathematics" ]
234
[ "Physical quantities", "Time", "Time stubs", "Units of time", "Quantity", "Spacetime", "Units of measurement" ]
2,672,374
https://en.wikipedia.org/wiki/Homeobox%20protein%20NANOG
Homeobox protein NANOG (hNanog) is a transcriptional factor that helps embryonic stem cells (ESCs) maintain pluripotency by suppressing cell determination factors. hNanog is encoded in humans by the NANOG gene. Several types of cancer are associated with NANOG. Etymology The name NANOG derives from Tír na nÓg (Irish for "Land of the Young"), a name given to the Celtic Otherworld in Irish and Scottish mythology. Structure The human hNanog protein coded by the NANOG gene, consists of 305 amino acids and possesses 3 functional domains: the N-terminal domain, the C- terminal domain, and the conserved homeodomain motif. The homeodomain region facilitates DNA binding. The NANOG is located on chromosome 12, and the mRNA contains a 915 bp open reading frame (ORF) with 4 exons and 3 introns. The N-terminal region of hNanog is rich in serine, threonine and proline residues, and the C-terminus contains a tryptophan-rich domain. The homeodomain in hNANOG ranges from residues 95 to 155. There are also additional NANOG genes (NANOG2, NANOG p8) which potentially affect ESCs' differentiation. Scientists have shown that NANOG is fundamental for self-renewal and pluripotency, and NANOG p8 is highly expressed in cancer cells. Function NANOG is a transcription factor in embryonic stem cells (ESCs) and is thought to be a key factor in maintaining pluripotency. NANOG is thought to function in concert with other factors such as POU5F1 (Oct-4) and SOX2 to establish ESC identity. These cells offer an important area of study because of their ability to maintain pluripotency. In other words, these cells have the ability to become virtually any cell of any of the three germ layers (endoderm, ectoderm, mesoderm). It is for this reason that understanding the mechanisms that maintain a cell's pluripotency is critical for researchers to understand how stem cells work, and may lead to future advances in treating degenerative diseases. NANOG has been described to be expressed in the posterior side of the epiblast at the onset of gastrulation. There, NANOG has been implicated in inhibiting embryonic hematopoiesis by repressing the expression of the transcription factor Tal1. In this embryonic stage, NANOG represses Pou3f1, a transcription factor crucial for the anterior-posterior axis formation. Analysis of arrested embryos demonstrated that embryos express pluripotency marker genes such as POU5F1, NANOG and Rex1. Derived human ESC lines also expressed specific pluripotency markers: TRA-1-60 TRA-1-81 SSEA4 alkaline phosphatase TERT Rex1 These markers allowed for the differentiation in vitro and in vivo conditions into derivatives of all three germ layers. POU5F1, TDGF1 (CRIPTO), SALL4, LECT1, and BUB1 are also related genes all responsible for self-renewal and pluripotent differentiation. The NANOG protein has been found to be a transcriptional activator for the Rex1 promoter, playing a key role in sustaining Rex1 expression. Knockdown of NANOG in embryonic stem cells results in a reduction of Rex1 expression, while forced expression of NANOG stimulates Rex1 expression. Besides the effects of NANOG in the embryonic stages of life, ectopic expression of NANOG in the adult stem cells can restore the proliferation and differentiation potential that is lost due to organismal aging or cellular senescence. Clinical significance Cancer NANOG is highly expressed in cancer stem cells and may thus function as an oncogene to promote carcinogenesis. High expression of NANOG correlates with poor survival in cancer patients. 
Recent research has shown that the localization of NANOG and other transcription factors has potential consequences for cellular function. Experimental evidence has shown that the level of NANOG p8 expression is elevated especially in cancer cells, which suggests that the NANOG p8 gene plays a critical role in cancer stem cells (CSCs), so knocking it down could reduce cancer malignancy. Diagnostics The NANOG p8 gene has been evaluated as a prognostic and predictive cancer biomarker. Cancer stem cells Nanog is a transcription factor that controls both self-renewal and pluripotency of embryonic stem cells. Similarly, the expression of Nanog family proteins is increased in many types of cancer and correlates with a worse prognosis. Evolution Humans and chimpanzees share ten NANOG pseudogenes (NanogP2–P11), all in the same genomic places: one duplication pseudogene and nine retropseudogenes. Two of them are located on the X chromosome, and they are characterized by their 5′ promoter sequences and the absence of introns, a result of mRNA retrotransposition. Of the nine shared NANOG retropseudogenes, two lack the poly-(A) tails characteristic of most retropseudogenes, indicating that copying errors occurred during their creation. Due to the high improbability that the same pseudogenes (copying errors included) would exist in the same places in two unrelated genomes, evolutionary biologists point to NANOG and its pseudogenes as providing evidence of common descent between humans and chimpanzees. See also Enhancer Histone Oct-4 Pribnow box Promoter RNA polymerase Brachyury Transcription factors Gene regulatory network Bioinformatics References Further reading External links Discovery reveals more about stem cells' immortality Gene expression Oncology Transcription factors
Homeobox protein NANOG
[ "Chemistry", "Biology" ]
1,207
[ "Gene expression", "Signal transduction", "Molecular genetics", "Induced stem cells", "Cellular processes", "Molecular biology", "Biochemistry", "Transcription factors" ]
2,672,856
https://en.wikipedia.org/wiki/Floer%20homology
In mathematics, Floer homology is a tool for studying symplectic geometry and low-dimensional topology. Floer homology is a novel invariant that arises as an infinite-dimensional analogue of finite-dimensional Morse homology. Andreas Floer introduced the first version of Floer homology, now called symplectic Floer homology, in his proof of the Arnold conjecture in symplectic geometry. Floer also developed a closely related theory for Lagrangian submanifolds of a symplectic manifold. A third construction, also due to Floer, associates homology groups to closed three-dimensional manifolds using the Yang–Mills functional. These constructions and their descendants play a fundamental role in current investigations into the topology of symplectic and contact manifolds as well as (smooth) three- and four-dimensional manifolds. Floer homology is typically defined by associating to the object of interest an infinite-dimensional manifold and a real valued function on it. In the symplectic version, this is the free loop space of a symplectic manifold with the symplectic action functional. For the (instanton) version for three-manifolds, it is the space of SU(2)-connections on a three-dimensional manifold with the Chern–Simons functional. Loosely speaking, Floer homology is the Morse homology of the function on the infinite-dimensional manifold. A Floer chain complex is formed from the abelian group spanned by the critical points of the function (or possibly certain collections of critical points). The differential of the chain complex is defined by counting the flow lines of the function's gradient vector field connecting fixed pairs of critical points (or collections thereof). Floer homology is the homology of this chain complex. The gradient flow line equation, in a situation where Floer's ideas can be successfully applied, is typically a geometrically meaningful and analytically tractable equation. For symplectic Floer homology, the gradient flow equation for a path in the loopspace is (a perturbed version of) the Cauchy–Riemann equation for a map of a cylinder (the total space of the path of loops) to the symplectic manifold of interest; solutions are known as pseudoholomorphic curves. The Gromov compactness theorem is then used to show that the counts of flow lines defining the differential are finite, so that the differential is well-defined and squares to zero. Thus the Floer homology is defined. For instanton Floer homology, the gradient flow equation is exactly the Yang–Mills equation on the three-manifold crossed with the real line. Symplectic Floer homology Symplectic Floer Homology (SFH) is a homology theory associated to a symplectic manifold and a nondegenerate symplectomorphism of it. If the symplectomorphism is Hamiltonian, the homology arises from studying the symplectic action functional on the (universal cover of the) free loop space of a symplectic manifold. SFH is invariant under Hamiltonian isotopy of the symplectomorphism. Here, nondegeneracy means that 1 is not an eigenvalue of the derivative of the symplectomorphism at any of its fixed points. This condition implies that the fixed points are isolated. SFH is the homology of the chain complex generated by the fixed points of such a symplectomorphism, where the differential counts certain pseudoholomorphic curves in the product of the real line and the mapping torus of the symplectomorphism. This itself is a symplectic manifold of dimension two greater than the original manifold. 
For an appropriate choice of almost complex structure, punctured holomorphic curves (of finite energy) in it have cylindrical ends asymptotic to the loops in the mapping torus corresponding to fixed points of the symplectomorphism. A relative index may be defined between pairs of fixed points, and the differential counts the number of holomorphic cylinders with relative index 1. The symplectic Floer homology of a Hamiltonian symplectomorphism of a compact manifold is isomorphic to the singular homology of the underlying manifold. Thus, the sum of the Betti numbers of that manifold yields the lower bound predicted by one version of the Arnold conjecture for the number of fixed points for a nondegenerate symplectomorphism. The SFH of a Hamiltonian symplectomorphism also has a pair of pants product that is a deformed cup product equivalent to quantum cohomology. A version of the product also exists for non-exact symplectomorphisms. For the cotangent bundle of a manifold M, the Floer homology depends on the choice of Hamiltonian due to its noncompactness. For Hamiltonians that are quadratic at infinity, the Floer homology is the singular homology of the free loop space of M (proofs of various versions of this statement are due to Viterbo, Salamon–Weber, Abbondandolo–Schwarz, and Cohen). There are more complicated operations on the Floer homology of a cotangent bundle that correspond to the string topology operations on the homology of the loop space of the underlying manifold. The symplectic version of Floer homology figures in a crucial way in the formulation of the homological mirror symmetry conjecture. PSS isomorphism In 1996 S. Piunikhin, D. Salamon and M. Schwarz summarized the results about the relation between Floer homology and quantum cohomology and formulated as the following. The Floer cohomology groups of the loop space of a semi-positive symplectic manifold (M,ω) are naturally isomorphic to the ordinary cohomology of M, tensored by a suitable Novikov ring associated the group of covering transformations. This isomorphism intertwines the quantum cup product structure on the cohomology of M with the pair-of-pants product on Floer homology. The above condition of semi-positive and the compactness of symplectic manifold M is required for us to obtain Novikov ring and for the definition of both Floer homology and quantum cohomology. The semi-positive condition means that one of the following holds (note that the three cases are not disjoint): for every A in π2(M) where λ≥0 (M is monotone). for every A in 2(M). The minimal Chern Number N ≥ 0 defined by is greater than or equal to n − 2. The quantum cohomology group of symplectic manifold M can be defined as the tensor products of the ordinary cohomology with Novikov ring Λ, i.e. This construction of Floer homology explains the independence on the choice of the almost complex structure on M and the isomorphism to Floer homology provided from the ideas of Morse theory and pseudoholomorphic curves, where we must recognize the Poincaré duality between homology and cohomology as the background. Floer homology of three-manifolds There are several equivalent Floer homologies associated to closed three-manifolds. Each yields three types of homology groups, which fit into an exact triangle. A knot in a three-manifold induces a filtration on the chain complex of each theory, whose chain homotopy type is a knot invariant. (Their homologies satisfy similar formal properties to the combinatorially-defined Khovanov homology.) 
These homologies are closely related to the Donaldson and Seiberg invariants of 4-manifolds, as well as to Taubes's Gromov invariant of symplectic 4-manifolds; the differentials of the corresponding three-manifold homologies to these theories are studied by considering solutions to the relevant differential equations (Yang–Mills, Seiberg–Witten, and Cauchy–Riemann, respectively) on the 3-manifold cross R. The 3-manifold Floer homologies should also be the targets of relative invariants for four-manifolds with boundary, related by gluing constructions to the invariants of a closed 4-manifold obtained by gluing together bounded 3-manifolds along their boundaries. (This is closely related to the notion of a topological quantum field theory.) For Heegaard Floer homology, the 3-manifold homology was defined first, and an invariant for closed 4-manifolds was later defined in terms of it. There are also extensions of the 3-manifold homologies to 3-manifolds with boundary: sutured Floer homology and bordered Floer homology . These are related to the invariants for closed 3-manifolds by gluing formulas for the Floer homology of a 3-manifold described as the union along the boundary of two 3-manifolds with boundary. The three-manifold Floer homologies also come equipped with a distinguished element of the homology if the three-manifold is equipped with a contact structure. Kronheimer and Mrowka first introduced the contact element in the Seiberg–Witten case. Ozsvath and Szabo constructed it for Heegaard Floer homology using Giroux's relation between contact manifolds and open book decompositions, and it comes for free, as the homology class of the empty set, in embedded contact homology. (Which, unlike the other three, requires a contact structure for its definition. For embedded contact homology see . These theories all come equipped with a priori relative gradings; these have been lifted to absolute gradings (by homotopy classes of oriented 2-plane fields) by Kronheimer and Mrowka (for SWF), Gripp and Huang (for HF), and Hutchings (for ECH). Cristofaro-Gardiner has shown that Taubes' isomorphism between ECH and Seiberg–Witten Floer cohomology preserves these absolute gradings. Instanton Floer homology This is a three-manifold invariant connected to Donaldson theory introduced by Floer himself. It is obtained using the Chern–Simons functional on the space of connections on a principal SU(2)-bundle over the three-manifold (more precisely, homology 3-spheres). Its critical points are flat connections and its flow lines are instantons, i.e. anti-self-dual connections on the three-manifold crossed with the real line. Instanton Floer homology may be viewed as a generalization of the Casson invariant because the Euler characteristic of the Floer homology agrees with the Casson invariant. Soon after Floer's introduction of Floer homology, Donaldson realized that cobordisms induce maps. This was the first instance of the structure that came to be known as a topological quantum field theory. Seiberg–Witten Floer homology Seiberg–Witten Floer homology or monopole Floer homology is a homology theory for smooth 3-manifolds (equipped with a spinc structure). It may be viewed as the Morse homology of the Chern–Simons–Dirac functional on U(1) connections on the three-manifold. The associated gradient flow equation corresponds to the Seiberg–Witten equations on the 3-manifold crossed with the real line. 
Equivalently, the generators of the chain complex are translation-invariant solutions to Seiberg–Witten equations (known as monopoles) on the product of a 3-manifold and the real line, and the differential counts solutions to the Seiberg–Witten equations on the product of a three-manifold and the real line, which are asymptotic to invariant solutions at infinity and negative infinity. One version of Seiberg–Witten–Floer homology was constructed rigorously in the monograph Monopoles and Three-manifolds by Peter Kronheimer and Tomasz Mrowka, where it is known as monopole Floer homology. Taubes has shown that it is isomorphic to embedded contact homology. Alternate constructions of SWF for rational homology 3-spheres have been given by and ; they are known to agree. Heegaard Floer homology Heegaard Floer homology is an invariant due to Peter Ozsváth and Zoltán Szabó of a closed 3-manifold equipped with a spinc structure. It is computed using a Heegaard diagram of the space via a construction analogous to Lagrangian Floer homology. announced a proof that Heegaard Floer homology is isomorphic to Seiberg–Witten Floer homology, and announced a proof that the plus-version of Heegaard Floer homology (with reverse orientation) is isomorphic to embedded contact homology. A knot in a three-manifold induces a filtration on the Heegaard Floer homology groups, and the filtered homotopy type is a powerful knot invariant, called knot Floer homology. It categorifies the Alexander polynomial. Knot Floer homology was defined by and independently by . It is known to detect knot genus. Using grid diagrams for the Heegaard splittings, knot Floer homology was given a combinatorial construction by . The Heegaard Floer homology of the double cover of S^3 branched over a knot is related by a spectral sequence to Khovanov homology . The "hat" version of Heegaard Floer homology was described combinatorially by . The "plus" and "minus" versions of Heegaard Floer homology, and the related Ozsváth–Szabó four-manifold invariants, can be described combinatorially as well . Embedded contact homology Embedded contact homology, due to Michael Hutchings, is an invariant of 3-manifolds (with a distinguished second homology class, corresponding to the choice of a spinc structure in Seiberg–Witten Floer homology) isomorphic (by work of Clifford Taubes) to Seiberg–Witten Floer cohomology and consequently (by work announced by and ) to the plus-version of Heegaard Floer homology (with reverse orientation). It may be seen as an extension of Taubes's Gromov invariant, known to be equivalent to the Seiberg–Witten invariant, from closed symplectic 4-manifolds to certain non-compact symplectic 4-manifolds (namely, a contact three-manifold cross R). Its construction is analogous to symplectic field theory, in that it is generated by certain collections of closed Reeb orbits and its differential counts certain holomorphic curves with ends at certain collections of Reeb orbits. It differs from SFT in technical conditions on the collections of Reeb orbits that generate it—and in not counting all holomorphic curves with Fredholm index 1 with given ends, but only those that also satisfy a topological condition given by the ECH index, which in particular implies that the curves considered are (mainly) embedded. 
The Weinstein conjecture that a contact 3-manifold has a closed Reeb orbit for any contact form holds on any manifold whose ECH is nontrivial, and was proved by Taubes using techniques closely related to ECH; extensions of this work yielded the isomorphism between ECH and SWF. Many constructions in ECH (including its well-definedness) rely upon this isomorphism . The contact element of ECH has a particularly nice form: it is the cycle associated to the empty collection of Reeb orbits. An analog of embedded contact homology may be defined for mapping tori of symplectomorphisms of a surface (possibly with boundary) and is known as periodic Floer homology, generalizing the symplectic Floer homology of surface symplectomorphisms. More generally, it may be defined with respect to any stable Hamiltonian structure on the 3-manifold; like contact structures, stable Hamiltonian structures define a nonvanishing vector field (the Reeb vector field), and Hutchings and Taubes have proven an analogue of the Weinstein conjecture for them, namely that they always have closed orbits (unless they are mapping tori of a 2-torus). Lagrangian intersection Floer homology The Lagrangian Floer homology of two transversely intersecting Lagrangian submanifolds of a symplectic manifold is the homology of a chain complex generated by the intersection points of the two submanifolds and whose differential counts pseudoholomorphic Whitney discs. Given three Lagrangian submanifolds L0, L1, and L2 of a symplectic manifold, there is a product structure on the Lagrangian Floer homology: which is defined by counting holomorphic triangles (that is, holomorphic maps of a triangle whose vertices and edges map to the appropriate intersection points and Lagrangian submanifolds). Papers on this subject are due to Fukaya, Oh, Ono, and Ohta; the recent work on "cluster homology" of Lalonde and Cornea offer a different approach to it. The Floer homology of a pair of Lagrangian submanifolds may not always exist; when it does, it provides an obstruction to isotoping one Lagrangian away from the other using a Hamiltonian isotopy. Several kinds of Floer homology are special cases of Lagrangian Floer homology. The symplectic Floer homology of a symplectomorphism of M can be thought of as a case of Lagrangian Floer homology in which the ambient manifold is M crossed with M and the Lagrangian submanifolds are the diagonal and the graph of the symplectomorphism. The construction of Heegaard Floer homology is based on a variant of Lagrangian Floer homology for totally real submanifolds defined using a Heegaard splitting of a three-manifold. Seidel–Smith and Manolescu constructed a link invariant as a certain case of Lagrangian Floer homology, which conjecturally agrees with Khovanov homology, a combinatorially-defined link invariant. Atiyah–Floer conjecture The Atiyah–Floer conjecture connects the instanton Floer homology with the Lagrangian intersection Floer homology. Consider a 3-manifold Y with a Heegaard splitting along a surface . Then the space of flat connections on modulo gauge equivalence is a symplectic manifold of dimension 6g − 6, where g is the genus of the surface . In the Heegaard splitting, bounds two different 3-manifolds; the space of flat connections modulo gauge equivalence on each 3-manifold with boundary embeds into as a Lagrangian submanifold. One can consider the Lagrangian intersection Floer homology. Alternately, we can consider the Instanton Floer homology of the 3-manifold Y. 
The Atiyah–Floer conjecture asserts that these two invariants are isomorphic. Relations to mirror symmetry The homological mirror symmetry conjecture of Maxim Kontsevich predicts an equality between the Lagrangian Floer homology of Lagrangians in a Calabi–Yau manifold and the Ext groups of coherent sheaves on the mirror Calabi–Yau manifold. In this situation, one should not focus on the Floer homology groups but on the Floer chain groups. Similar to the pair-of-pants product, one can construct multi-compositions using pseudo-holomorphic n-gons. These compositions satisfy the -relations making the category of all (unobstructed) Lagrangian submanifolds in a symplectic manifold into an -category, called the Fukaya category. To be more precise, one must add additional data to the Lagrangian – a grading and a spin structure. A Lagrangian with a choice of these structures is often called a brane in homage to the underlying physics. The Homological Mirror Symmetry conjecture states there is a type of derived Morita equivalence between the Fukaya category of the Calabi–Yau and a dg category underlying the bounded derived category of coherent sheaves of the mirror, and vice versa. Symplectic field theory (SFT) This is an invariant of contact manifolds and symplectic cobordisms between them, originally due to Yakov Eliashberg, Alexander Givental and Helmut Hofer. The symplectic field theory as well as its subcomplexes, rational symplectic field theory and contact homology, are defined as homologies of differential algebras, which are generated by closed orbits of the Reeb vector field of a chosen contact form. The differential counts certain holomorphic curves in the cylinder over the contact manifold, where the trivial examples are the branched coverings of (trivial) cylinders over closed Reeb orbits. It further includes a linear homology theory, called cylindrical or linearized contact homology (sometimes, by abuse of notation, just contact homology), whose chain groups are vector spaces generated by closed orbits and whose differentials count only holomorphic cylinders. However, cylindrical contact homology is not always defined due to the presence of holomorphic discs and a lack of regularity and transversality results. In situations where cylindrical contact homology makes sense, it may be seen as the (slightly modified) Morse homology of the action functional on the free loop space, which sends a loop to the integral of the contact form alpha over the loop. Reeb orbits are the critical points of this functional. SFT also associates a relative invariant of a Legendrian submanifold of a contact manifold known as relative contact homology. Its generators are Reeb chords, which are trajectories of the Reeb vector field beginning and ending on a Lagrangian, and its differential counts certain holomorphic strips in the symplectization of the contact manifold whose ends are asymptotic to given Reeb chords. In SFT the contact manifolds can be replaced by mapping tori of symplectic manifolds with symplectomorphisms. While the cylindrical contact homology is well-defined and given by the symplectic Floer homologies of powers of the symplectomorphism, (rational) symplectic field theory and contact homology can be considered as generalized symplectic Floer homologies. In the important case when the symplectomorphism is the time-one map of a time-dependent Hamiltonian, it was however shown that these higher invariants do not contain any further information. 
Floer homotopy One conceivable way to construct a Floer homology theory of some object would be to construct a related spectrum whose ordinary homology is the desired Floer homology. Applying other homology theories to such a spectrum could yield other interesting invariants. This strategy was proposed by Ralph Cohen, John Jones, and Graeme Segal, and carried out in certain cases for Seiberg–Witten–Floer homology by and for the symplectic Floer homology of cotangent bundles by Cohen. This approach was the basis of Manolescu's 2013 construction of Pin (2)-equivariant Seiberg–Witten Floer homology, with which he disproved the Triangulation Conjecture for manifolds of dimension 5 and higher. Analytic foundations Many of these Floer homologies have not been completely and rigorously constructed, and many conjectural equivalences have not been proved. Technical difficulties come up in the analysis involved, especially in constructing compactified moduli spaces of pseudoholomorphic curves. Hofer, in collaboration with Kris Wysocki and Eduard Zehnder, has developed new analytic foundations via their theory of polyfolds and a "general Fredholm theory". While the polyfold project is not yet fully completed, in some important cases transversality was shown using simpler methods. Computation Floer homologies are generally difficult to compute explicitly. For instance, the symplectic Floer homology for all surface symplectomorphisms was completed only in 2007. The Heegaard Floer homology has been a success story in this regard: researchers have exploited its algebraic structure to compute it for various classes of 3-manifolds and have found combinatorial algorithms for computation of much of the theory. It is also connected to existing invariants and structures and many insights into 3-manifold topology have resulted. References Footnotes Books and surveys Research articles Project Euclid External links Mathematical physics 3-manifolds Gauge theories Morse theory Homology theory Symplectic topology
Floer homology
[ "Physics", "Mathematics" ]
5,267
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
2,673,011
https://en.wikipedia.org/wiki/Low-dropout%20regulator
A low-dropout regulator (LDO regulator) is a type of a DC linear voltage regulator circuit that can operate even when the supply voltage is very close to the output voltage. The advantages of an LDO regulator over other DC-to-DC voltage regulators include: the absence of switching noise (in contrast to switching regulators); smaller device size (as neither large inductors nor transformers are needed); and greater design simplicity (usually consists of a reference, an amplifier, and a pass element). The disadvantage is that linear DC regulators must dissipate heat in order to operate. History The adjustable low-dropout regulator debuted on April 12, 1977 in an Electronic Design article entitled "Break Loose from Fixed IC Regulators". The article was written by Robert Dobkin, an IC designer then working for National Semiconductor. Because of this, National Semiconductor claims the title of "LDO inventor". Dobkin later left National Semiconductor in 1981 and founded Linear Technology where he was the chief technology officer. Components The main components are a power FET and a differential amplifier (error amplifier). One input of the differential amplifier monitors the fraction of the output determined by the resistor ratio of R1 and R2. The second input to the differential amplifier is from a stable voltage reference (bandgap reference). If the output voltage rises too high relative to the reference voltage, the drive to the power FET changes to maintain a constant output voltage. Regulation Low-dropout (LDO) regulators operate similarly to all linear voltage regulators. The main difference between LDO and non-LDO regulators is their schematic topology. Instead of an emitter follower topology, low-dropout regulators consist of an open collector or open drain topology, where the transistor may be easily driven into saturation with the voltages available to the regulator. This allows the voltage drop from the unregulated voltage to the regulated voltage to be as low as (limited to) the saturation voltage across the transistor. For the circuit given in the figure to the right, the output voltage is given as: If a bipolar transistor is used, as opposed to a field-effect transistor or JFET, significant additional power may be lost to control it, whereas non-LDO regulators take that power from voltage drop itself. For high voltages under very low In-Out difference there will be significant power loss in the control circuit. Because the power control element is an inverter, another inverting amplifier is required to control it, increasing schematic complexity compared to simple linear regulator. Power FETs may be preferable in order to reduce power consumption, yet this poses problems when the regulator is used for low input voltage, since FETs usually require 5 to 10 V to close completely. Power FETs may also increase the cost. Efficiency and heat dissipation The power dissipated in the pass element and internal circuitry () of a typical LDO is calculated as follows: where is the quiescent current required by the LDO for its internal circuitry. Therefore, one can calculate the efficiency as follows: where However, when the LDO is in full operation (i.e., supplying current to the load) generally: . This allows us to reduce to the following: which further reduces the efficiency equation to: It is important to keep thermal considerations in mind when using a low drop-out linear regulator. 
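The dissipation and efficiency equations referred to above have not survived in this copy of the article; the sketch below uses the standard forms P ≈ (V_in − V_out)·I_out + V_in·I_Q and η = V_out·I_out / (V_in·(I_out + I_Q)), and evaluates them for invented component values simply to show how quickly the dissipated power grows with the input–output differential. The function name and the example figures are assumptions made for illustration, not values from the text.

```python
def ldo_power_and_efficiency(v_in, v_out, i_out, i_q):
    """Return (dissipated power in W, efficiency in %) for an LDO.

    Uses the standard relations:
        P_diss     = (V_in - V_out) * I_out + V_in * I_q
        efficiency = (V_out * I_out) / (V_in * (I_out + I_q)) * 100
    """
    p_diss = (v_in - v_out) * i_out + v_in * i_q
    efficiency = 100.0 * (v_out * i_out) / (v_in * (i_out + i_q))
    return p_diss, efficiency

# Hypothetical example: 5 V in, 3.3 V out, 500 mA load, 50 uA quiescent.
p, eta = ldo_power_and_efficiency(v_in=5.0, v_out=3.3, i_out=0.5, i_q=50e-6)
print(f"dissipation = {p:.3f} W, efficiency = {eta:.1f} %")

# Widening the differential (e.g. 12 V in for the same 3.3 V output)
# multiplies the dissipated power and drags the efficiency down, which is
# why thermal design matters for LDOs.
p, eta = ldo_power_and_efficiency(v_in=12.0, v_out=3.3, i_out=0.5, i_q=50e-6)
print(f"dissipation = {p:.3f} W, efficiency = {eta:.1f} %")
```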
Having high current and/or a wide differential between input and output voltage could lead to large power dissipation. Additionally, efficiency will suffer as the differential widens. Depending on the package, excessive power dissipation could damage the LDO or cause it to go into thermal shutdown. Quiescent current Among other important characteristics of a linear regulator is the quiescent current, also known as ground current or supply current, which accounts for the difference, although small, between the input and output currents of the LDO, that is: Quiescent current is the current drawn by the LDO in order to control its internal circuitry for proper operation. The series pass element, topologies, and ambient temperature are the primary contributors to quiescent current. Many applications do not require an LDO to be in full operation all of the time (i.e. supplying current to the load). In this idle state the LDO still draws a small amount of quiescent current in order to keep the internal circuitry ready in case a load is presented. When no current is being supplied to the load, can be found as follows: Filtering In addition to regulating voltage, LDOs can also be used as filters. This is especially useful when a system uses switchers, which introduce a ripple in the output voltage at the switching frequency. Left alone, this ripple has the potential to adversely affect the performance of oscillators, data converters, and RF systems powered by the switcher. However, any power source, not just switchers, can contain AC components that may be undesirable in a design. Two specifications that should be considered when using an LDO as a filter are power supply rejection ratio (PSRR) and output noise. Specifications An LDO is characterized by its drop-out voltage, quiescent current, load regulation, line regulation, maximum current (which is decided by the size of the pass transistor), speed (how fast it can respond as the load varies), voltage variations in the output because of sudden transients in the load current, and its output capacitor and that capacitor's equivalent series resistance. Speed is indicated by the rise time of the current at the output as it varies from 0 mA load current (no load) to the maximum load current. This is basically decided by the bandwidth of the error amplifier. An LDO is also expected to provide a quiet and stable output in all circumstances (examples of possible perturbations are a sudden change of the input voltage or of the output current). Stability analysis puts performance metrics in place to guarantee such behaviour and involves placing poles and zeros appropriately. Most of the time there is a dominant pole that arises at low frequencies, while the other poles and zeros are pushed to high frequencies. Power supply rejection ratio PSRR refers to the LDO's ability to reject ripple it sees at its input. As part of its regulation, the error amplifier and bandgap attenuate any spikes in the input voltage that deviate from the internal reference to which it is compared. In an ideal LDO, the output voltage would consist solely of the DC component. However, the error amplifier is limited in its ability to attenuate small spikes at high frequencies. PSRR is expressed as follows: As an example, an LDO that has a PSRR of 55 dB at 1 MHz attenuates a 1 mV input ripple at this frequency to just 1.78 μV at the output. A 6 dB increase in PSRR roughly equates to an increase in attenuation by a factor of 2. 
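The PSRR arithmetic in that example is easy to reproduce. The sketch below assumes only the usual voltage-ratio definition PSRR(dB) = 20·log10(V_ripple,in / V_ripple,out); the function name is an illustrative assumption, and the figures are the ones quoted above.

```python
def output_ripple(v_ripple_in, psrr_db):
    """Ripple remaining at the LDO output for a given PSRR in dB.

    Assumes the usual voltage-ratio definition:
        PSRR(dB) = 20 * log10(V_ripple_in / V_ripple_out)
    """
    return v_ripple_in / (10.0 ** (psrr_db / 20.0))

# 55 dB of PSRR at 1 MHz turns a 1 mV input ripple into roughly 1.78 uV.
print(output_ripple(1e-3, 55.0))   # ~1.78e-06 V

# Each additional 6 dB of PSRR roughly halves the remaining ripple.
print(output_ripple(1e-3, 61.0))   # ~0.89e-06 V
```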
Most LDOs have relatively high PSRR at lower frequencies (10 Hz – 1 kHz). However, a Performance LDO is distinguished in having high PSRR over a broad frequency spectrum (10 Hz – 5 MHz). Having high PSRR over a wide band allows the LDO to reject high-frequency noise like that arising from a switcher. Similar to other specifications, PSRR fluctuates over frequency, temperature, current, output voltage, and the voltage differential. Output noise The noise from the LDO itself must also be considered in filter design. Like other electronic devices, LDOs are affected by thermal noise, bipolar shot noise, and flicker noise. Each of these phenomena contribute noise to the output voltage, mostly concentrated over the lower end of the frequency spectrum. In order to properly filter AC frequencies, an LDO must both reject ripple at the input while introducing minimal noise at the output. Efforts to attenuate ripple from the input voltage could be in vain if a noisy LDO just adds that noise back again at the output. Load regulation Load regulation is a measure of the circuit's ability to maintain the specified output voltage under varying load conditions. Load regulation is defined as: The worst case of the output voltage variations occurs as the load current transitions from zero to its maximum rated value or vice versa. Line regulation Line regulation is a measure of the circuit's ability to maintain the specified output voltage with varying input voltage. Line regulation is defined as: Like load regulation, line regulation is a steady state parameter—all frequency components are neglected. Increasing DC open-loop current gain improves the line regulation. Transient response The transient response is the maximum allowable output voltage variation for a load current step change. The transient response is a function of the output capacitor value (), the equivalent series resistance (ESR) of the output capacitor, the bypass capacitor () that is usually added to the output capacitor to improve the load transient response, and the maximum load-current (). The maximum transient voltage variation is defined as follows: Where corresponds to the closed-loop bandwidth of an LDO regulator. is the voltage variation resulting from the presence of the ESR () of the output capacitor. The application determines how low this value should be. Evolution and the future A competitor to the LDO, the IVR (integrated voltage regulator) appears to offer solutions to many of the issues with efficiency and performance that LDO regulators suffer. IVRs combine a switching voltage regulator with all necessary control circuitry into a single device which results in a 10× size reduction and 10–50% energy savings. See also Linear regulator Low-voltage detect (sometimes confused with LDO regulator) Voltage regulator Switched-mode power supply List of linear integrated circuits LM7805 References External links Understanding Low Dropout Regulators - Basics Understanding LDO Regulators - TI Understanding Noise and PSRR in LDOs - All About Circuits Understanding Noise in LDOs - TI Index of TI LDO Application Notes - TI Voltage regulation Linear integrated circuits
Low-dropout regulator
[ "Physics" ]
2,064
[ "Voltage", "Physical quantities", "Voltage regulation" ]
2,673,142
https://en.wikipedia.org/wiki/Terbium%28III%2CIV%29%20oxide
Terbium(III,IV) oxide, occasionally called tetraterbium heptaoxide, has the formula Tb4O7, though some texts refer to it as TbO1.75. There is some debate as to whether it is a discrete compound, or simply one phase in an interstitial oxide system. Tb4O7 is one of the main commercial terbium compounds, and the only such product containing at least some Tb(IV) (terbium in the +4 oxidation state), along with the more stable Tb(III). It is produced by heating the metal oxalate, and it is used in the preparation of other terbium compounds. It is also used in electronics and data storage, green energy technologies, medical imaging and diagnosis, and chemical processing. Terbium forms three other major oxides: Tb2O3, TbO2, and Tb6O11. Synthesis Tb4O7 is most often produced by ignition of the oxalate or the sulfate in air. The oxalate (at 1000 °C) is generally preferred, since the sulfate requires a higher temperature, and it produces an almost black product contaminated with Tb6O11 or other oxygen-rich oxides. Chemical properties Terbium(III,IV) oxide loses O2 when heated at high temperatures; at more moderate temperatures (ca. 350 °C) it reversibly loses oxygen, as shown by exchange with 18O2. This property, also seen in Pr6O11 and V2O5, allows it to work like V2O5 as a redox catalyst in reactions involving oxygen. It was found as early as 1916 that hot Tb4O7 catalyses the reaction of coal gas (CO + H2) with air, leading to incandescence and often ignition. Tb4O7 reacts with atomic oxygen to produce TbO2, but more convenient preparations are available, such as treatment with hydrochloric acid, which dissolves the Tb(III) fraction and leaves TbO2 behind: Tb4O7 (s) + 6 HCl (aq) → 2 TbO2 (s) + 2 TbCl3 (aq) + 3 H2O (l). Tb4O7 reacts with other hot concentrated acids to produce terbium(III) salts. For example, reaction with sulfuric acid gives terbium(III) sulfate. Terbium oxide reacts slowly with hydrochloric acid to form a terbium(III) chloride solution and elemental chlorine. At ambient temperature, complete dissolution might require a month; in a hot water bath, about a week. Anhydrous terbium(III) chloride can be produced by the ammonium chloride route. In the first step, terbium oxide is heated with ammonium chloride to produce the ammonium salt of the pentachloride: Tb4O7 + 22 NH4Cl → 4 (NH4)2TbCl5 + 7 H2O + 14 NH3 + Cl2 In the second step, the ammonium chloride salt is converted to the trichloride by heating in a vacuum at 350-400 °C: (NH4)2TbCl5 → TbCl3 + 2 HCl + 2 NH3 References Further reading Terbium compounds Mixed valence compounds Oxides
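As a simple arithmetic companion to the ammonium chloride route above, the sketch below (illustrative only; rounded molar masses and a hypothetical 10 g batch) estimates the NH4Cl required in the first step and the theoretical TbCl3 yield.

```python
# Approximate molar masses in g/mol (rounded standard atomic weights).
M_Tb, M_O, M_N, M_H, M_Cl = 158.93, 16.00, 14.01, 1.008, 35.45

M_Tb4O7 = 4 * M_Tb + 7 * M_O      # ~747.7 g/mol
M_NH4Cl = M_N + 4 * M_H + M_Cl    # ~53.5 g/mol
M_TbCl3 = M_Tb + 3 * M_Cl         # ~265.3 g/mol

grams_Tb4O7 = 10.0                # hypothetical batch size
mol_Tb4O7 = grams_Tb4O7 / M_Tb4O7

# First step consumes 22 mol NH4Cl per mol Tb4O7.
grams_NH4Cl = 22 * mol_Tb4O7 * M_NH4Cl
# Overall, each mole of Tb4O7 ultimately yields 4 moles of TbCl3.
grams_TbCl3 = 4 * mol_Tb4O7 * M_TbCl3

print(round(grams_NH4Cl, 1), round(grams_TbCl3, 1))   # ~15.7 g NH4Cl, ~14.2 g TbCl3
```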
Terbium(III,IV) oxide
[ "Chemistry" ]
654
[ "Oxides", "Mixed valence compounds", "Inorganic compounds", "Salts" ]
2,673,165
https://en.wikipedia.org/wiki/Ultraviolet%20fixed%20point
In a quantum field theory, one may calculate an effective or running coupling constant that defines the coupling of the theory measured at a given momentum scale. One example of such a coupling constant is the electric charge. In approximate calculations in several quantum field theories, notably quantum electrodynamics and theories of the Higgs particle, the running coupling appears to become infinite at a finite momentum scale. This is sometimes called the Landau pole problem. It is not known whether the appearance of these inconsistencies is an artifact of the approximation, or a real fundamental problem in the theory. However, the problem can be avoided if an ultraviolet or UV fixed point appears in the theory. A quantum field theory has a UV fixed point if its renormalization group flow approaches a fixed point in the ultraviolet (i.e. short length scale/large energy) limit. This is related to zeroes of the beta-function appearing in the Callan–Symanzik equation. The large length scale/small energy limit counterpart is the infrared fixed point. Specific cases and details Among other things, it means that a theory possessing a UV fixed point may not be an effective field theory, because it is well-defined at arbitrarily small distance scales. At the UV fixed point itself, the theory can behave as a conformal field theory. The converse statement, that any QFT which is valid at all distance scales (i.e. isn't an effective field theory) has a UV fixed point is false. See, for example, cascading gauge theory. Noncommutative quantum field theories have a UV cutoff even though they are not effective field theories. Physicists distinguish between trivial and nontrivial fixed points. If a UV fixed point is trivial (generally known as Gaussian fixed point), the theory is said to be asymptotically free. On the other hand, a scenario, where a non-Gaussian (i.e. nontrivial) fixed point is approached in the UV limit, is referred to as asymptotic safety. Asymptotically safe theories may be well defined at all scales despite being nonrenormalizable in perturbative sense (according to the classical scaling dimensions). Asymptotic safety scenario in quantum gravity Steven Weinberg has proposed that the problematic UV divergences appearing in quantum theories of gravity may be cured by means of a nontrivial UV fixed point. Such an asymptotically safe theory is renormalizable in a nonperturbative sense, and due to the fixed point physical quantities are free from divergences. As yet, a general proof for the existence of the fixed point is still lacking, but there is mounting evidence for this scenario. See also Ultraviolet divergence Landau pole Quantum triviality Asymptotic safety in quantum gravity Asymptotic freedom References Statistical mechanics Conformal field theory Renormalization group Fixed points (mathematics)
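The flow toward a fixed point can be illustrated numerically. The beta function below is a toy example invented for this sketch (it does not correspond to any theory mentioned above): it has a nontrivial zero at g* = 1 with negative slope, so the running coupling approaches g* as t = ln μ grows, which is the defining behaviour of a UV fixed point.

```python
def beta(g: float) -> float:
    # Toy beta function with zeros at g = 0 and g = g_star = 1.
    # beta'(g_star) < 0, so g_star is attractive as t = ln(mu) -> +infinity.
    return g * (1.0 - g)

def run_coupling(g0: float, t_max: float = 20.0, dt: float = 1e-3) -> float:
    """Integrate dg/dt = beta(g) by forward Euler, starting from g(0) = g0."""
    g = g0
    for _ in range(int(t_max / dt)):
        g += beta(g) * dt
    return g

for g0 in (0.05, 0.5, 1.8):
    print(g0, "->", round(run_coupling(g0), 4))   # every start flows to g* = 1.0
```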
Ultraviolet fixed point
[ "Physics", "Mathematics" ]
606
[ "Physical phenomena", "Mathematical analysis", "Fixed points (mathematics)", "Critical phenomena", "Quantum mechanics", "Renormalization group", "Topology", "Statistical mechanics", "Quantum physics stubs", "Dynamical systems" ]
3,625,483
https://en.wikipedia.org/wiki/Electrical%20resistivity%20tomography
Electrical resistivity tomography (ERT) or electrical resistivity imaging (ERI) is a geophysical technique for imaging sub-surface structures from electrical resistivity measurements made at the surface, or by electrodes in one or more boreholes. If the electrodes are suspended in the boreholes, deeper sections can be investigated. It is closely related to the medical imaging technique electrical impedance tomography (EIT), and mathematically is the same inverse problem. In contrast to medical EIT, however, ERT is essentially a direct current method. A related geophysical method, induced polarization (or spectral induced polarization), measures the transient response and aims to determine the subsurface chargeability properties. Electrical resistivity measurements can be used for identification and quantification of depth of groundwater, detection of clays, and measurement of groundwater conductivity. History The technique evolved from techniques of electrical prospecting that predate digital computers, where layers or anomalies were sought rather than images. Early work on the mathematical problem in the 1930s assumed a layered medium (see for example Langer, Slichter). Andrey Nikolayevich Tikhonov who is best known for his work on regularization of inverse problems also worked on this problem. He explains in detail how to solve the ERT problem in a simple case of 2-layered medium. During the 1940s, he collaborated with geophysicists and without the aid of computers they discovered large deposits of copper. As a result, they were awarded a State Prize of Soviet Union. When adequate computers became widely available, the inverse problem of ERT could be solved numerically. The work of Loke and Barker at Birmingham University was among the first such solution and their approach is still widely used. With the advancement in the field of Electrical Resistivity Tomography (ERT) from 1D to 2D and nowadays 3D, ERT has explored many fields. The applications of ERT include fault investigation, ground water table investigation, soil moisture content determination and many others. In industrial process imaging ERT can be used in a similar fashion to medical EIT, to image the distribution of conductivity in mixing vessels and pipes. In this context it is usually called Electrical Resistance Tomography, emphasising the quantity that is measured rather than imaged. Operating procedure Soil resistivity, measured in ohm-centimeters (Ω⋅cm), varies with moisture content and temperature changes. In general, an increase in soil moisture results in a reduction in soil resistivity. The pore fluid provides the only electrical path in sands, while both the pore fluid and the surface charged particles provide electrical paths in clays. Resistivities of wet fine-grained soils are generally much lower than those of wet coarse-grained soils. The difference in resistivity between a soil in a dry and in a saturated condition may be several orders of magnitude. The method of measuring subsurface resistivity involves placing four electrodes in the ground in a line at equal spacing, applying a measured AC current to the outer two electrodes, and measuring the AC voltage between the inner two electrodes. A measured resistance is calculated by dividing the measured voltage by the measured current. This resistance is then multiplied by a geometric factor that includes the spacing between each electrode to determine the apparent resistivity. 
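As a concrete illustration of the measurement just described, the sketch below (hypothetical numbers) computes apparent resistivity for the common Wenner configuration, where the geometric factor for four equally spaced surface electrodes is 2πa; it is offered as an aid to the text, not as a survey procedure.

```python
import math

def wenner_apparent_resistivity(spacing_m: float, voltage_v: float, current_a: float) -> float:
    """Apparent resistivity (ohm-m) for a Wenner array with electrode spacing a."""
    resistance = voltage_v / current_a            # voltage on inner pair / injected current
    geometric_factor = 2.0 * math.pi * spacing_m  # 2*pi*a for the Wenner geometry
    return geometric_factor * resistance

# Hypothetical field reading: a = 3 m spacing, 250 mV measured across the
# inner electrodes while 50 mA is injected through the outer pair.
print(wenner_apparent_resistivity(3.0, 0.25, 0.05))   # ~94.2 ohm-m
```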
Electrode spacings of 0.75, 1.5, 3.0, 6.0, and 12.0 m are typically used for shallow depths (<10 m) of investigations. Greater electrode spacings of 1.5, 3.0, 6.0, 15.0, 30.0, 100.0, and 150.0 m are typically used for deeper investigations. The depth of investigation is typically less than the maximum electrode spacing. Water is introduced to the electrode holes as the electrodes are driven into the ground to improve electrical contact. Applications ERT is used to create images of various subsurface conditions and structures. It has applications in numerous fields, including: Environmental Studies: Groundwater Exploration: ERT helps locate underground aquifers and assess water quality. Contaminant Mapping: ERT is used to monitor and delineate the spread of contaminants in soil and groundwater. Landfill Monitoring: ERT monitors landfill conditions, gas generation and migration, and leachate pathways. Geotechnical Engineering: Site Investigation: ERT is used to survey soil and rock properties and existing underground infrastructure in construction projects. Foundation Assessment: ERT can evaluate the condition of foundations, detect voids, and assess load-bearing capacity. Sinkhole Detection: ERT can identify subsurface voids that may lead to sinkholes. Archaeology and Cultural Heritage: Buried Archaeological Features: ERT can detect buried structures, artefacts, and archaeological sites. Structural Integrity of Monuments: ERT helps assess the condition of historic buildings and structures. Mining and Mineral Exploration: Mineral Deposits: ERT can delineate the boundaries and characteristics of ore bodies. Cave Detection: ERT is used to locate caves and karst features in mining areas. Hydrogeology: Aquifer Mapping: ERT is employed to create detailed maps of subsurface aquifers and their properties. Saltwater Intrusion Monitoring: ERT helps detect and monitor the encroachment of saltwater into freshwater aquifers. Engineering and Infrastructure: Tunnel and Dam Assessment: ERT assesses the structural integrity of tunnels and dams. Pipeline and Cable Route Surveys: It helps identify subsurface utilities and potential hazards. Landslide Hazard Assessment: ERT can detect subsurface slip planes and unstable slopes. Levee and Embankment Assessment: It assesses the structural integrity of levees and embankments. Building Health Inspections: ERT is used to examine the condition of foundations and other underground parts of buildings to guide upkeep and renovations. Oil and Gas Exploration: Reservoir Characterization: ERT assists in understanding subsurface reservoir properties. Monitoring Fluid Migration: ERT is used to track the movement of fluids in the subsurface during drilling and production. Agriculture: Soil Moisture Mapping: ERT helps assess soil moisture content for precision agriculture. Root Zone Imaging: ERT is used to visualize plant root structures and soil-root interactions. See also Electrical capacitance tomography Electrical impedance tomography Three-dimensional electrical capacitance tomography Magnetotellurics Seismo-electromagnetics Telluric current Vertical electrical sounding Geophysical Imaging Ground-penetrating radar References A.P. Calderón, On an inverse boundary value problem, in Seminar on Numerical Analysis and its Applications to Continuum Physics, Rio de Janeiro. 1980. Scanned copy of paper Geophysical imaging Inverse problems Impedance measurements Multidimensional signal processing
Electrical resistivity tomography
[ "Physics", "Mathematics" ]
1,367
[ "Physical quantities", "Applied mathematics", "Inverse problems", "Impedance measurements", "Electrical resistance and conductance" ]
3,626,600
https://en.wikipedia.org/wiki/Machine%20perfusion
Machine perfusion (MP) is an artificial perfusion technique often used for organ preservation to help facilitate organ transplantation. MP works by continuously pumping a specialized solution through donor organs, mimicking the body's natural blood flow while actively controlling temperature, oxygen levels, chemical composition, and mechanical stress within the organ. By maintaining organ viability outside the body for extended periods, machine perfusion addresses critical challenges in organ transplantation, such as limited preservation times. Machine perfusion has various forms and can be categorised according to the temperature of the perfusate: cold (4 °C) and warm (37 °C). Machine perfusion has been applied to renal transplantation, liver transplantation and lung transplantation. It is an alternative to static cold storage (SCS). Research and development A record-long preservation of a human transplant organ with machine perfusion was reported in 2022: a liver was kept for 3 days rather than the usual <12 hours. The approach could possibly be extended to 10 days and avoids the substantial cell damage caused by low-temperature preservation methods. Alternative approaches include novel cryoprotectant solvents. A novel organ perfusion system under development can restore multiple vital (pig) organs at the cellular level one hour after death (during which the body had undergone prolonged warm ischaemia), and a similar method has been used for reviving (pig) brains hours after death. The system for cellular recovery could be used to preserve donor organs or for revival treatments in medical emergencies. History of kidney preservation techniques An essential preliminary to the development of kidney storage and transplantation was the work of Alexis Carrel in developing methods for vascular anastomosis. Carrel went on to describe the first kidney transplants, which were performed in dogs in 1902; Ullman independently described similar experiments in the same year. In these experiments kidneys were transplanted without there being any attempt at storage. The crucial step in making in vitro storage of kidneys possible was the demonstration by Fuhrman in 1943 of a reversible effect of hypothermia on the metabolic processes of isolated tissues. Prior to this, kidneys had been stored at normal body temperatures using blood or diluted blood perfusates, but no successful reimplantations had been made. Fuhrman showed that slices of rat kidney cortex and brain withstood cooling to 0.2 °C for one hour, at which temperature their oxygen consumption was minimal. When the slices were rewarmed to 37 °C their oxygen consumption recovered to normal. The beneficial effect of hypothermia on ischaemic intact kidneys was demonstrated by Owens in 1955 when he showed that, if dogs were cooled to 23-26 °C and their thoracic aortas were occluded for 2 hours, their kidneys showed no apparent damage when the dogs were rewarmed. This protective effect of hypothermia on renal ischaemic damage was confirmed by Bogardus, who showed a protective effect from surface cooling of dog kidneys whose renal pedicles were clamped in situ for 2 hours. Moyer demonstrated the applicability of these dog experiments to the human by showing the same effect on dog and human kidney function from the same periods of hypothermic ischaemia. It was not until 1958 that it was shown that intact dog kidneys would survive ischaemia even better if they were cooled to lower temperatures. 
Stueber showed that kidneys would survive in situ clamping of the renal pedicle for 6 hours if the kidneys were cooled to 0-5 °C by being placed in a cooling jacket, and Schloerb showed that a similar technique with cooling of heparinised dog kidneys to 2-4 °C gave protection for 8 hours but not 12 hours. Schloerb also attempted in vitro storage and auto-transplantation of cooled kidneys, and had one long term survivor after 4 hours kidney storage followed by reimplantation and immediate contralateral nephrectomy. He also had a near survivor, after 24-hour kidney storage and delayed contralateral nephrectomy, in a dog that developed a late arterial thrombosis in the kidney. These methods of surface cooling were improved by the introduction of techniques in which the kidney's vascular system was flushed out with cold fluid prior to storage. This had the effect of increasing the speed of cooling of the kidney and removed red cells from the vascular system. Kiser used this technique to achieve successful 7 hours in vitro storage of a dog kidney, when the kidney had been flushed at 5 °C with a mixture of dextran and diluted blood prior to storage. In 1960 Lapchinsky confirmed that similar storage periods were possible, when he reported eight dogs surviving after their kidneys had been stored at 2-4 °C for 28 hours, followed by auto-transplantation and delayed contralateral nephrectomy. Although Lapchinsky gave no details in his paper, Humphries reported that these experiments had involved cooling the kidneys for 1 hour with cold blood, and then storage at 2-4 °C, followed by rewarming of the kidneys over 1 hour with warm blood at the time of reimplantation. The contralateral nephrectomies were delayed for two months. Humphries developed this storage technique by continuously perfusing the kidney throughout the period of storage. He used diluted plasma or serum as the perfusate and pointed out the necessity for low perfusate pressures to prevent kidney swelling, but admitted that the optimum values for such variables as perfusate temperature, Po2, and flow, remained unknown. His best results, at this time, were 2 dogs that survived after having their kidneys stored for 24 hours at 4-10 °C followed by auto-transplantation and delayed contralateral nephrectomy a few weeks later. Calne challenged the necessity of using continuous perfusion methods by demonstrating that successful 12-hour preservation could be achieved using much simpler techniques. Calne had one kidney supporting life even when the contralateral nephrectomy was performed at the same time as the reimplantation operation. Calne merely heparinised dog kidneys and then stored them in iced solution at 4 °C. Although 17-hour preservation was shown to be possible in one experiment when nephrectomy was delayed, no success was achieved with 24-hour storage. The next advance was made by Humphries in 1964, when he modified the perfusate used in his original continuous perfusion system, and had a dog kidney able to support life after 24-hour storage, even when an immediate contralateral nephrectomy was performed at the same time as the reimplantation. In these experiments autogenous blood, diluted 50% with Tis-U-Sol solution at 10 °C, was used as the perfusate. The perfusate pressure was 40 mm Hg and perfusate pH 7.11-7.35 (at 37 °C). A membrane lung was used for oxygenation to avoid damaging the blood. 
In attempting to improve on these results Manax investigated the effect of hyperbaric oxygen, and found that successful 48-hour storage of dog kidneys was possible at 2 °C without using continuous perfusion, when the kidneys were flushed with a dextran/Tis-U-Sol solution before storage at 7.9 atmospheres pressure, and if the contralateral nephrectomy was delayed till 2 to 4 weeks after reimplantation. Manax postulated that hyperbaric oxygen might work either by inhibiting metabolism or by aiding diffusion of oxygen into the kidney cells, but he reported no control experiments to determine whether other aspects of his model were more important than hyperbaria. A marked improvement in storage times was achieved by Belzer in 1967 when he reported successful 72-hour kidney storage after returning to the use of continuous perfusion using a canine plasma based perfusate at 8-12 °C. Belzer found that the crucial factor in permitting uncomplicated 72-hour perfusion was cryoprecipitation of the plasma used in the perfusate to reduce the amount of unstable lipo-proteins which otherwise precipitated out of solution and progressively obstructed the kidney's vascular system. A membrane oxygenator was also used in the system in a further attempt to prevent denaturation of the lipo-proteins because only 35% of the lipo-proteins were removed by cryo-precipitation. The perfusate comprised 1 litre of canine plasma, 4 mEq of magnesium sulphate, 250 mL of dextrose, 80 units of insulin, 200,000 units of penicillin and 100 mg of hydrocortisone. Besides being cryo-precipitated, the perfusate was pre-filtered through a 0.22 micron filter immediately prior to use. Belzer used a perfusate pH of 7.4-7.5, a Po2 of 150–190 mm Hg, and a perfusate pressure of 50–80 mm Hg systolic, in a machine that produced a pulsatile perfusate flow. Using this system Belzer had 6 dogs surviving after their kidneys had been stored for 72 hours and then reimplanted, with immediate contralateral nephrectomies being performed at the reimplantation operations. Belzer's use of hydrocortisone as an adjuvant to preservation had been suggested by Lotke's work with dog kidney slices, in which hydrocortisone improved the ability of slices to excrete PAH and oxygen after 30 hour storage at 2-4 °C; Lotke suggested that hydrocortisone might be acting as a lysosomal membrane stabiliser in these experiments. The other components of Belzer's model were arrived at empirically. The insulin and magnesium were used partially in an attempt to induce artificial hibernation, as Suomalainen found this regime to be effective in inducing hibernation in natural hibernators. The magnesium was also provided as a metabolic inhibitor following Kamiyama's demonstration that it was an effective agent in dog heart preservation. A further justification for the magnesium was that it was needed to replace calcium which had been bound by citrate in the plasma. Belzer demonstrated the applicability of his dog experiments to human kidney storage when he reported his experiences in human renal transplantation using the same storage techniques as he had used for dog kidneys. He was able to store kidneys for up to 50 hours with only 8% of patients requiring post operative dialysis when the donor had been well prepared. In 1968 Humphries reported 1 survivor out of 14 dogs following 5 day storage of their kidneys in a perfusion machine at 10 °C, using a diluted plasma medium containing extra fatty acids. 
However, delayed contralateral nephrectomy 4 weeks after reimplantation was necessary in these experiments to achieve success, and this indicated that the kidneys were severely injured during storage. In 1969 Collins reported an improvement in the results that could be achieved with simple non perfusion methods of hypothermic kidney storage. He based his technique on the observation by Keller that the loss of electrolytes from a kidney during storage could be prevented by the use of a storage fluid containing cations in quantities approaching those normally present in cells. In Collins' model, the dogs were well hydrated prior to nephrectomy, and were also given mannitol to induce a diuresis. Phenoxybenzamine, a vasodilator and lysozomal enzyme stabiliser, was injected into the renal artery before nephrectomy. The kidneys were immersed in saline immediately after removal, and perfused through the renal artery with 100-150 mL of a cold electrolyte solution from a height of 100 cm. The kidneys remained in iced saline for the rest of the storage period. The solution used for these successful cold perfusions imitated the electrolyte composition of intracellular fluids by containing large amounts of potassium and magnesium. The solution also contained glucose, heparin, procaine and phenoxybenzamine. The solution's pH was 7.0 at 25 °C. Collins was able to achieve successful 24-hour storage of 6 kidneys, and 30 hour storage of 3 kidneys, with the kidneys functioning immediately after reimplantation, despite immediate contralateral nephrectomies. Collins emphasised the poor results obtained with a Ringer's solution flush, in finding similar results with this management when compared with kidneys treated by surface cooling alone. Liu reported that Collins' solution could give successful 48-hour storage when the solution was modified by the inclusion of amino acids and vitamins. However, Liu performed no control experiments to show that these modifications were crucial. Difficulty was found by other workers in repeating Belzer's successful 72-hour perfusion storage experiments. Woods was able to achieve successful 48-hour storage of 3 out of 6 kidneys when he used the Belzer additives with cryoprecipitated plasma as the perfusate in a hypothermic perfusion system, but he was unable to extend the storage time to 72 hours as Belzer had done. However, Woods later achieved successful 3 and 7 days storage of dog kidneys. Woods had modified Belzer's perfusate by the addition of 250 mg of methyl prednisolone, increased the magnesium sulphate content to 16.2 mEq and the insulin to 320 units. Six of 6 kidneys produced life sustaining function when they were reimplanted after 72 hours storage despite immediate contralateral nephrectomies; 1 of 2 kidneys produced life sustaining function after 96 hours storage, 1 of 2 after 120 hours storage, and 1 of 2 after 168 hours storage. Perfusate pressure was 60 mm Hg with a perfusate pump rate of 70 beats per minute, and perfusate pH was automatically maintained at 7.4 by a CO2 titrator. Woods stressed the importance of hydration of the donor and recipient animals. Without the methyl prednisolone, Woods found vessel fragility to be a problem when storage times were longer than 48 hours. A major simplification to the techniques of hypothermic perfusion storage was made by Johnson and Claes in 1972 with the introduction of an albumin based perfusate. 
This perfusate eliminated the need for the manufacture of the cryoprecipitated and millipore filtered plasma used by Belzer. The preparation of this perfusate had been laborious and time-consuming, and there was the potential risk from hepatitis virus and cytotoxic antibodies. The absence of lipo-proteins from the perfusate meant that the membrane oxygenator could be eliminated from the perfusion circuit, as there was no need to avoid a perfusate/air interface to prevent precipitation of lipo-proteins. Both workers used the same additives as recommended by Belzer. The solution that Johnson used was prepared by the Blood Products Laboratory (Elstree: England) by extracting heat labile fibrinogen and gamma globulins from plasma to give a plasma protein fraction (PPF) solution. The solution was incubated at 60 °C for 10 hours to inactivate the agent of serum hepatitis. The result was a 45 g/L human albumin solution containing small amounts of gamma and beta globulins which was stable between 0 °C and 30 °C for 5 years. PPF contained 2.2 mmol/L of free fatty acids. Johnson's experiments were mainly concerned with the storage of kidneys that had been damaged by prolonged warm injury. However, in a control group of non-warm injured dog kidneys, Johnson showed that 24-hour preservation was easily achieved when using a PPF perfusate, and he described elsewhere a survivor after 72 hours perfusion and reimplantation with immediate contralateral nephrectomy. With warm injured kidneys, PPF perfusion gave better results than Collins' method, with 6 out of 6 dogs surviving after 40 minutes warm injury and 24-hour storage followed by reimplantation of the kidneys and immediate contralateral nephrectomy. Potassium, magnesium, insulin, glucose, hydrocortisone and ampicillin were added to the PPF solution to provide an energy source and to prevent leakage of intracellular potassium. Perfusate temperature was 6 °C, pressure 40–80 mm Hg, and Po2 200–400 mm Hg. The pH was maintained between 7.2 and 7.4. Claes used a perfusate based on human albumin (Kabi: Sweden) diluted with saline to a concentration of 45 g/L. Claes preserved 4 out of 5 dog kidneys for 96 hours with the kidneys functioning immediately after reimplantation despite immediate contralateral nephrectomies. Claes also compared this perfusate with Belzer's cryoprecipitated plasma in a control group and found no significant difference between the function of the reimplanted kidneys in the two groups. The only other group besides Woods' to report successful seven-day storage of kidneys was Liu and Humphries in 1973. They had three out of seven dogs surviving, after their kidneys had been stored for seven days followed by reimplantation and immediate contralateral nephrectomy. Their best dog had a peak post reimplantation creatinine of 50 mg/L (0.44 mmol/L). Liu used well hydrated dogs undergoing a mannitol diuresis and stored the kidneys at 9 °C – 10 °C using a perfusate derived from human PPF. The PPF was further fractionated by using a highly water-soluble polymer (Pluronic F-38), and sodium acetyl tryptophanate and sodium caprylate were added to the PPF as stabilisers to permit pasteurisation. To this solution were added human albumin, heparin, mannitol, glucose, magnesium sulphate, potassium chloride, insulin, methyl prednisolone, carbenicillin, and water to adjust the osmolality to 300-310 mosmol/kg. The perfusate was exchanged after 3.5 days storage. Perfusate pressure was 60 mm Hg or less, at a pump rate of 60 per minute. 
Perfusate pH was 7.12–7.32 (at 37 °C), Pco2 27–47 mm Hg, and Po2 173–219 mm Hg. In a further report on this study Humphries found that when the experiments were repeated with a new batch of PPF no survivors were obtained, and histology of the survivors from the original experiment showed glomerular hypercellularity which he attributed to a possible toxic effect of the Pluronic polymer. Joyce and Proctor reported the successful use of a simple dextran based perfusate for 72-hour storage of dog kidneys. 10 out of 17 kidneys were viable after reimplantation and immediate contralateral nephrectomy. Joyce used non pulsatile perfusion at 4 °C with a perfusate containing Dextran 70 (Pharmacia) 2.1%, with additional electrolytes, glucose (19.5 g/L), procaine and hydrocortisone. The perfusate contained no plasma or plasma components. Perfusate pressure was only 30 cm H2O, pH 7.34-7.40 and Po2 250–400 mm Hg. This work showed that, for 72-hour storage, no nutrients other than glucose were needed, and low perfusate pressures and flows were adequate. In 1973 Sacks showed that simple ice storage could be successfully used for 72-hour storage when a new flushing solution was used for the initial cooling and flush out of the kidney. Sacks removed kidneys from well hydrated dogs that were diuresing after a mannitol infusion, and flushed the kidneys with 200 mL of solution from a height of 100 cm. The kidneys were then simply kept at 2 °C for 72 hours without further perfusion. Reimplantation was followed by immediate contralateral nephrectomies. The flush solution was designed to imitate intracellular fluid composition and contained mannitol as an impermeable ion to further prevent cell swelling. The osmolality of the solution was 430 mosmol/kg and its pH was 7.0 at 2 °C. The additives that had been used by Collins (dextrose, phenoxybenzamine, procaine and heparin) were omitted by Sacks. These results have been equalled by Ross who also achieved successful 72-hour storage without using continuous perfusion, although he was unable to reproduce Collins' or Sacks' results using the original Collins' or Sacks' solutions. Ross's successful solution was similar in electrolyte composition to intracellular fluid with the addition of hypertonic citrate and mannitol. No phosphate, bicarbonate, chloride or glucose were present in the solution; the osmolality was 400 mosmol/kg and the pH 7.1. Five of 8 dogs survived reimplantation of their kidneys and immediate contralateral nephrectomy, when the kidneys had been stored for 72 hours after having been flushed with Ross's solution; but Ross was unable to achieve 7 day storage with this technique even when delayed contralateral nephrectomy was used. The requirements for successful 72-hour hypothermic perfusion storage have been further defined by Collins who showed that pulsatile perfusion was not needed if a perfusate pressure of 49 mm Hg was used, and that 7 °C was a better temperature for storage than 2 °C or 12 °C. He also compared various perfusate compositions and found that a phosphate buffered perfusate could be used successfully, so eliminating the need for a carbon dioxide supply. Grundmann has also shown that low perfusate pressure is adequate. He used a mean pulsatile pressure of 20 mm Hg in 72-hour perfusions and found that this gave better results than mean pressures of 15, 40, 50 or 60 mm Hg. 
Successful storage up to 8 days was reported by Cohen using various types of perfusate – with the best result being obtained when using a phosphate buffered perfusate at 8 °C. Inability to repeat these successful experiments was thought to be due to changes that had been made in the way that the PPF was manufactured with higher octanoic acid content being detrimental. Octanoic acid was shown to be able to stimulate metabolic activity during hypothermic perfusion and this might be detrimental. Nature of kidney preservation injury Structural injury The structural changes that occur during 72-hour hypothermic storage of previously uninjured kidneys have been described by Mackay who showed how there was progressive vacuolation of the cytoplasm of the cells which particularly affected the proximal tubules. On electron microscopy the mitochondria were seen to become swollen with early separation of the internal cristal membranes and later loss of all internal structure. Lysosomal integrity was well preserved until late, and the destruction of the cell did not appear to be caused by lytic enzymes because there was no more injury immediately adjacent to the lysosomes than in the rest of the cell. Woods and Liu – when describing successful 5 and 7 day kidney storage - described the light microscopic changes seen at the end of perfusion and at post mortem, but found few gross abnormalities apart from some infiltration with lymphocytes and occasional tubular atrophy. The changes during short perfusions of human kidneys prior to reimplantation have been described by Hill who also performed biopsies 1 hour after reimplantation. On electron microscopy Hill found endothelial damage which correlated with the severity of the fibrin deposition after reimplantation. The changes that Hill saw in the glomeruli on light microscopy were occasional fibrin thrombi and infiltration with polymorphs. Hill suspected that these changes were an immunologically induced lesion, but found that there was no correlation between the severity of the histological lesion and the presence or absence of immunoglobulin deposits. There are several reports of the analysis of urine produced by kidneys during perfusion storage. Kastagir analysed urine produced during 24-hour perfusion and found it to be an ultrafiltrate of the perfusate, Scott found a trace of protein in the urine during 24-hour storage, and Pederson found only a trace of protein after 36 hours perfusion storage. Pederson mentioned that he had found heavy proteinuria during earlier experiments. Woods noted protein casts in the tubules of viable kidneys after 5 day storage, but he did not analyse the urine produced during perfusion. In Cohen's study there was a progressive increase in urinary protein concentration during 8 day preservation until the protein content of the urine equalled that of the perfusate. This may have been related to the swelling of the glomerular basement membranes and the progressive fusion of epithelial cell foot processes that was also observed during the same period of perfusion storage. Mechanisms of injury The mechanisms that damage kidneys during hypothermic storage can be sub-divided as follows: Injury to the metabolic processes of the cell caused by: Cold Anoxia when the kidney is warm both before and after the period of hypothermic storage. Failure to supply the correct nutrients. Toxin accumulation in the perfusate. Toxic damage from the storage fluid. Washout of essential substrates from the kidney cells. Injury to nuclear DNA. 
Mechanical injury to the vascular system of the kidney during hypothermic perfusion. Post reimplantation injury. Metabolic injury Cold At normal temperatures pumping mechanisms in cell walls retain intracellular potassium at high levels and extrude sodium. If these pumps fail sodium is taken up by the cell and potassium lost. Water follows the sodium passively and results in swelling of the cells. The importance of this control of cell swelling was demonstrated by McLoughlin who found a significant correlation between canine renal cortical water content and the ability of kidneys to support life after 36-hour storage. The pumping mechanism is driven by the enzyme system known as Na+K+- activated ATPase and is inhibited by cold. Levy found that metabolic activity at 10 °C, as indicated by oxygen consumption measurements, was reduced to about 5% of normal and, because all enzyme systems are affected in a similar way by hypothermia, ATPase activity is markedly reduced at 10 °C. There are, however, tissue and species differences in the cold sensitivity of this ATPase which may account for the differences in the ability of tissues to withstand hypothermia. Martin has shown that in dog kidney cortical cells some ATPase activity is still present at 10 °C but not at 0 °C. In liver and heart cells activity was completely inhibited at 10 °C and this difference in the cold sensitivity of ATPase correlated with the greater difficulty in controlling cell swelling during hypothermic storage of liver and heart cells. A distinct ATPase is found in vessel walls, and this was shown by Belzer to be completely inhibited at 10 °C, when at this temperature kidney cortical cells ATPase is still active. These experiments were performed on aortic endothelium, but if the vascular endothelium of the kidney has the same properties, then vascular injury may be the limiting factor in prolonged kidney storage. Willis has shown how hibernators derive some of their ability to survive low temperatures by having a Na+K+-ATPase which is able to transport sodium and potassium actively across their cell membranes, at 5 °C, about six times faster than in non-hibernators; this transport rate is sufficient to prevent cell swelling. The rate of cooling of a tissue may also be significant in the production of injury to enzyme systems. Francavilla showed that when liver slices were rapidly cooled (immediate cooling to 12 °C in 6 minutes) anaerobic glycolysis, as measured on rewarming to 37 °C, was inhibited by about 67% of the activity that was demonstrated in slices that had been subjected to delayed cooling. However, dog kidney slices were less severely affected by the rapid cooling than were the liver slices. Anoxia All cells require ATP as an energy source for their metabolic activity. The kidney is damaged by anoxia when kidney cortical cells are unable to generate sufficient ATP under anaerobic conditions to meet the needs of the cells. When excising a kidney some anoxia is inevitable in the interval between dividing the renal artery and cooling the kidney. It has been shown by Bergstrom that 50% of a dog's kidney's cortical cells ATP content is lost within 1 minute of clamping the renal artery, and similar results were found by Warnick in whole mice kidneys, with a fall in cellular ATP by 50% after about 30 seconds of warm anoxia. Warnick and Bergstrom also showed that cooling the kidney immediately after removal markedly reduced any further ATP loss. 
When these non warm-injured kidneys were perfused with oxygenated hypothermic plasma, ATP levels were reduced by 50% after 24-hour storage and, after 48 hours, mean tissue ATP levels were a little higher than this indicating that synthesis of ATP had occurred. Pegg has shown that rabbit kidneys can resynthesize ATP after a period of perfusion storage following warm injury, but no resynthesis occurred in non warm-injured kidneys. Warm anoxia can also occur during reimplantation of the kidney after storage. Lannon showed, by measurements of succinate metabolism, how the kidney was more sensitive to a period of warm hypoxia occurring after storage than to the same period of warm hypoxia occurring immediately prior to storage. Lack of essential nutrients Active metabolism of glucose with production of bicarbonate has been demonstrated by Pettersson and Cohen. Pettersson studies were on the metabolism of glucose and fatty acids by kidneys during 6 day hypothermic perfusion storage and he found that the kidneys consumed glucose at 4.4 μmol/g/day and fatty acids at 5.8 μmol/g/day. In Cohen's study the best 8 day stored kidneys consumed glucose at the rate of 2.3 μmol/g/day and 4.9 μmol/g/day respectively which made it likely that they were using fatty acids at similar rates to Pettersson's dogs' kidneys. The constancy of both the glucose consumption rate and the rate of bicarbonate production implied that no injury was affecting the glycolytic enzyme or carbonic anhydrase enzyme systems. Lee showed that fatty acids were the preferred substrate of the rabbit's kidney cortex at normothermic temperatures, and glucose the preferred substrate for the medullary cells which normally metabolise anaerobically. Abodeely showed that both fatty acids and glucose could be utilised by the outer medulla of the rabbit's kidney but that glucose was used preferentially. At hypothermia the metabolic needs of the kidney are much reduced but measurable consumption of glucose, fatty acids and ketone bodies occurs. Horsburgh showed that lipid is utilised by hypothermic kidneys, with palmitate consumption being 0-15% of normal in the rat kidney cortex at 15 °C. Pettersson showed that, on a molar basis, glucose and fatty acids were metabolised by hypothermically perfused kidneys at about the same rates. The cortex of the hypothermic dog kidney was shown by Huang to lose lipid (35% loss of total lipid after 24 hours) unless oleate was added to the kidney perfusate. Huang commented that this loss could affect the structure of the cell and that the loss also suggested that the kidney was utilising fatty acid. In a later publication Huang showed that dog kidney cortex slices metabolised fatty acids, but not glucose, at 10 °C. Even if the correct nutrients are provided, they may be lost by absorption into the tubing of the preservation system. Lee demonstrated that silicone rubber (a material used extensively in kidney preservation systems) absorbed 46% of a perfusate's oleic acid after 4 hours of perfusion. Toxin accumulation Abouna showed that ammonia was released into the perfusate during 3 day kidney storage, and suggested that this might be toxic to the kidney cells unless removed by frequent replacement of the perfusate. Some support for the use of perfusate exchange during long perfusions was provided by Liu who used perfusate exchange in his successful 7 day storage experiments. Grundmann also found that 96-hour preservation quality was improved by the use of a double volume of perfusate or by perfusate exchange. 
However, Grundmann's conclusions were based on comparisons with a control group of only 3 dogs. Cohen was unable to demonstrate any production of ammonia during 8 days of perfusion and no benefit from perfusate exchange; the progressive alkalinity that occurred during perfusion was shown to be due to bicarbonate production. Toxic damage from the perfusate Certain perfusates have been shown to have toxic effects on kidneys as a result of the inadvertent inclusion of particular chemicals in their formulation. Collins showed that the procaine included in the formulation of his flush fluids could be toxic, and Pegg has commented how toxic materials, such as PVC plasticizers, may be washed out of perfusion circuit tubing. Dvorak showed that the methyl-prednisolone addition to the perfusate that was thought to be essential by Woods might in some circumstances be harmful. He showed that with over  g of methyl-prednisolone in 650 mL of perfusate (compared with 250 mg in 1 litre used by Woods) irreversible haemodynamic and structural changes were produced in the kidney after 20 hours of perfusion. There was necrosis of capillary loops, occlusion of Bowman's spaces, basement membrane thickening and endothelial cell damage. Washout of essential substrates The level of nucleotides remaining in the cell after storage was thought by Warnick to be important in determining whether the cell would be able to re-synthesize ATP and recover after rewarming. Frequent changing of the perfusate or the use of a large volume of perfusate has the theoretical disadvantage that broken down adenine nucleotides may be washed out of the cells and so not be available for re-synthesis into ATP when the kidney is rewarmed. Injury to nuclear DNA Nuclear DNA is injured during cold storage of kidneys. Lazarus showed that single stranded DNA breaks occurred within 16 hours in hypothermically stored mice kidneys, with the injury being inhibited a little by storage in Collins' or Sacks' solutions. This nuclear injury differed from that seen in warm injury when double stranded DNA breaks occurred. Mechanical injury to the vascular system Perfusion storage methods can mechanically injury the vascular endothelium of the kidney, which leads to arterial thrombosis or fibrin deposition after reimplantation. Hill noted that, in human kidneys, fibrin deposition in the glomerulus after reimplantation and postoperative function, correlated with the length of perfusion storage. He had taken biopsies at revascularisation from human kidneys preserved by perfusion or ice storage, and showed by electron microscopy that endothelial disruption only occurred in those kidneys that had been perfused. Biopsies taken one hour after revascularisation showed platelets and fibrin adherent to any areas of denuded vascular basement membrane. A different type of vascular damage was described by Sheil who showed how a jet lesion could be produced distal to the cannula tied into the renal artery, leading to arterial thrombosis approximately 1 cm distal to the cannula site. Post reimplantation injury There is evidence that immunological mechanisms may injure hypothermically perfused kidneys after reimplantation if the perfusate contained specific antibody. Cross described two pairs of human cadaver kidneys that were perfused simultaneously with cryoprecipitated plasma containing type specific HLA antibody to one of the pairs. Both these kidneys suffered early arterial thrombosis. 
Light described similar hyperacute rejection following perfusion storage and showed that the cryoprecipitated plasma used contained cytotoxic IgM antibody. This potential danger of using cryoprecipitated plasma was demonstrated experimentally by Filo who perfused dog kidneys for 24 hours with specifically sensitised cryoprecipitated dog plasma and found that he could induce glomerular and vascular lesions with capillary engorgement, endothelial swelling, infiltration by polymorphonuclear leucocytes and arterial thrombosis. Immunofluorescent microscopy demonstrated specific binding of IgG along endothelial surfaces, in glomeruli, and also in vessels. After reimplantation, complement fixation and tissue damage occurred in a similar pattern. There was some correlation between the severity of the histological damage and subsequent function of the kidneys. Many workers have attempted to prevent kidneys rewarming during reimplantation but only Cohen has described using a system of active cooling. Measurements of lysosomal enzyme release from kidneys subjected to sham anastomoses, when either in or out of the cooling system, demonstrated how sensitive kidneys were to rewarming after a period of cold storage, and confirmed the effectiveness of the cooling system in preventing enzyme release. A further factor in minimising injury at the reimplantation operations may have been that the kidneys were kept at 7 °C within the cooling coil, which was within a degree of the temperature used during perfusion storage, so that the kidneys were not subjected to the greater changes in temperature that would have occurred if ice cooling had been used. Dempster described using slow release of the vascular clamps at the end of kidney reimplantation operations to avoid injuring the kidney, but other workers have not mentioned whether or not they used this manoeuvre. After Cohen found vascular injury with intra renal bleeding after 3 days of perfusion storage, a technique of slow revascularisation was used for all subsequent experiments, with the aim of giving the intra- renal vessels time to recover their tone sufficiently to prevent full systolic pressure being applied to the fragile glomerular vessels. The absence of gross vascular injury in his later perfusions may be attributable to the use of this manoeuvre. See also Artificial organ Cryopreservation References Cryobiology Medical technology Transplantation medicine
Machine perfusion
[ "Physics", "Chemistry", "Biology" ]
8,166
[ "Physical phenomena", "Phase transitions", "Cryobiology", "Biochemistry", "Medical technology" ]
3,626,981
https://en.wikipedia.org/wiki/Hopf%20bifurcation
In the mathematical theory of bifurcations, a Hopf bifurcation is a critical point where, as a parameter changes, a system's stability switches and a periodic solution arises. More accurately, it is a local bifurcation in which a fixed point of a dynamical system loses stability, as a pair of complex conjugate eigenvalues—of the linearization around the fixed point—crosses the complex plane imaginary axis as a parameter crosses a threshold value. Under reasonably generic assumptions about the dynamical system, the fixed point becomes a small-amplitude limit cycle as the parameter changes. A Hopf bifurcation is also known as a Poincaré–Andronov–Hopf bifurcation, named after Henri Poincaré, Aleksandr Andronov and Eberhard Hopf. Overview Supercritical and subcritical Hopf bifurcations The limit cycle is orbitally stable if a specific quantity called the first Lyapunov coefficient is negative, and the bifurcation is supercritical. Otherwise it is unstable and the bifurcation is subcritical. The normal form of a Hopf bifurcation is the following time-dependent differential equation: where z, b are both complex and λ is a real parameter. Write: The number α is called the first Lyapunov coefficient. If α is negative then there is a stable limit cycle for λ > 0: where The bifurcation is then called supercritical. If α is positive then there is an unstable limit cycle for λ < 0. The bifurcation is called subcritical. Intuition The normal form of the supercritical Hopf bifurcation can be expressed intuitively in polar coordinates, where is the instantaneous amplitude of the oscillation and is its instantaneous angular position. The angular velocity is fixed. When , the differential equation for has an unstable fixed point at and a stable fixed point at . The system thus describes a stable circular limit cycle with radius and angular velocity . When then is the only fixed point and it is stable. In that case, the system describes a spiral that converges to the origin. Cartesian coordinates The polar coordinates can be transformed into Cartesian coordinates by writing and . Differentiating and with respect to time yields the differential equations, and Subcritical case The normal form of the subcritical Hopf is obtained by negating the sign of , which reverses the stability of the fixed points in . For the limit cycle is now unstable and the origin is stable. Example Hopf bifurcations occur in the Lotka–Volterra model of predator–prey interaction (known as paradox of enrichment), the Hodgkin–Huxley model for nerve membrane potential, the Selkov model of glycolysis, the Belousov–Zhabotinsky reaction, the Lorenz attractor, the Brusselator, and in classical electromagnetism. Hopf bifurcations have also been shown to occur in fission waves. The Selkov model is The figure shows a phase portrait illustrating the Hopf bifurcation in the Selkov model. In railway vehicle systems, Hopf bifurcation analysis is notably important. Conventionally a railway vehicle's stable motion at low speeds crosses over to unstable at high speeds. One aim of the nonlinear analysis of these systems is to perform an analytical investigation of bifurcation, nonlinear lateral stability and hunting behavior of rail vehicles on a tangent track, which uses the Bogoliubov method. Serial expansion method Consider a system defined by , where is smooth and is a parameter. After a linear transform of parameters, we can assume that as increases from below zero to above zero, the origin turns from a spiral sink to a spiral source. 
Now, for , we perform a perturbative expansion using two-timing: where is "slow-time" (thus "two-timing"), and are functions of . By an argument with harmonic balance (see for details), we can use . Then, plugging in to , and expanding up to the order, we would obtain three ordinary differential equations in . The first equation would be of form , which gives the solution , where are "slowly varying terms" of . Plugging it into the second equation, we can solve for . Then plugging into the third equation, we would have an equation of form , with the right-hand-side a sum of trigonometric terms. Of these terms, we must set the "resonance term" -- that is, -- to zero. This is the same idea as Poincaré–Lindstedt method. This then provides two ordinary differential equations for , allowing one to solve for the equilibrium value of , as well as its stability. Example Consider the system defined by and . The system has an equilibrium point at origin. When increases from negative to positive, the origin turns from a stable spiral point to an unstable spiral point. First, we eliminate from the equations:Now, perform the perturbative expansion as described above:with . Expanding up to order , we obtain:First equation has solution . Here are respectively the "slow-varying amplitude" and "slow-varying phase" of the simple oscillation. Second equation has solution , where are also slow-varying amplitude and phase. Now, since , we can merge the two terms as some . Thus, without loss of generality, we can assume . ThusPlug into the third equation, we obtainEliminating the resonance terms, we obtain The first equation shows that is a stable equilibrium. Thus we find that the Hopf bifurcation creates an attracting (rather than repelling) limit cycle. Plugging in , we have . We can repick the origin of time to make . Now solve for yieldingPlugging in back to the expressions for , we havePlugging them back to yields the serial expansion of as well, up to order . Letting for notational neatness, we have This provides us with a parametric equation for the limit cycle. This is plotted in the illustration on the right. Definition of a Hopf bifurcation The appearance or the disappearance of a periodic orbit through a local change in the stability properties of a fixed point is known as the Hopf bifurcation. The following theorem works for fixed points with one pair of conjugate nonzero purely imaginary eigenvalues. It tells the conditions under which this bifurcation phenomenon occurs. Theorem (see section 11.2 of ). Let be the Jacobian of a continuous parametric dynamical system evaluated at a steady point . Suppose that all eigenvalues of have negative real part except one conjugate nonzero purely imaginary pair . A Hopf bifurcation arises when these two eigenvalues cross the imaginary axis because of a variation of the system parameters. Routh–Hurwitz criterion Routh–Hurwitz criterion (section I.13 of ) gives necessary conditions so that a Hopf bifurcation occurs. Sturm series Let be Sturm series associated to a characteristic polynomial . They can be written in the form: The coefficients for in correspond to what is called Hurwitz determinants. Their definition is related to the associated Hurwitz matrix. Propositions Proposition 1. If all the Hurwitz determinants are positive, apart perhaps then the associated Jacobian has no pure imaginary eigenvalues. Proposition 2. 
If all Hurwitz determinants (for all in are positive, and then all the eigenvalues of the associated Jacobian have negative real parts except a purely imaginary conjugate pair. The conditions that we are looking for so that a Hopf bifurcation occurs (see theorem above) for a parametric continuous dynamical system are given by this last proposition. Example Consider the classical Van der Pol oscillator written with ordinary differential equations: The Jacobian matrix associated to this system follows: The characteristic polynomial (in ) of the linearization at (0,0) is equal to: The coefficients are: The associated Sturm series is: The Sturm polynomials can be written as (here ): The above proposition 2 tells that one must have: Because 1 > 0 and −1 < 0 are obvious, one can conclude that a Hopf bifurcation may occur for Van der Pol oscillator if . See also Reaction–diffusion systems References Further reading External links The Hopf Bifurcation Andronov–Hopf bifurcation page at Scholarpedia Bifurcation theory Circuit theorems
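As a numerical companion to the Van der Pol example, the sketch below assumes the standard first-order form x' = y, y' = μ(1 − x²)y − x (the sign convention may differ from the one intended above). The Jacobian at the origin is then [[0, 1], [−1, μ]], and the conjugate eigenvalue pair crosses the imaginary axis as μ passes through zero, which is exactly the Hopf condition picked out by the Routh–Hurwitz analysis.

```python
import numpy as np

def origin_jacobian(mu):
    # Jacobian of x' = y, y' = mu*(1 - x**2)*y - x, evaluated at the origin.
    return np.array([[0.0, 1.0],
                     [-1.0, mu]])

for mu in (-0.5, -0.1, 0.0, 0.1, 0.5):
    eig = np.linalg.eigvals(origin_jacobian(mu))
    print(f"mu = {mu:+.1f}: eigenvalues = {np.round(eig, 4)}, "
          f"largest real part = {eig.real.max():+.4f}")
```

The largest real part changes sign at μ = 0, i.e. the fixed point loses stability there and a Hopf bifurcation can occur.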
Hopf bifurcation
[ "Physics", "Mathematics" ]
1,755
[ "Bifurcation theory", "Equations of physics", "Circuit theorems", "Physics theorems", "Dynamical systems" ]
3,628,399
https://en.wikipedia.org/wiki/Scalar%E2%80%93tensor%20theory
In theoretical physics, a scalar–tensor theory is a field theory that includes both a scalar field and a tensor field to represent a certain interaction. For example, the Brans–Dicke theory of gravitation uses both a scalar field and a tensor field to mediate the gravitational interaction. Tensor fields and field theory Modern physics tries to derive all physical theories from as few principles as possible. In this way, Newtonian mechanics as well as quantum mechanics are derived from Hamilton's principle of least action. In this approach, the behavior of a system is not described via forces, but by functions which describe the energy of the system. Most important are the energetic quantities known as the Hamiltonian function and the Lagrangian function. Their derivatives in space are known as Hamiltonian density and the Lagrangian density. Going to these quantities leads to the field theories. Modern physics uses field theories to explain reality. These fields can be scalar, vectorial or tensorial. An example of a scalar field is the temperature field. An example of a vector field is the wind velocity field. An example of a tensor field is the stress tensor field in a stressed body, used in continuum mechanics. Gravity as field theory In physics, forces (as vectorial quantities) are given as the derivative (gradient) of scalar quantities named potentials. In classical physics before Einstein, gravitation was given in the same way, as consequence of a gravitational force (vectorial), given through a scalar potential field, dependent of the mass of the particles. Thus, Newtonian gravity is called a scalar theory. The gravitational force is dependent of the distance r of the massive objects to each other (more exactly, their centre of mass). Mass is a parameter and space and time are unchangeable. Einstein's theory of gravity, the General Relativity (GR) is of another nature. It unifies space and time in a 4-dimensional manifold called space-time. In GR there is no gravitational force, instead, the actions we ascribed to being a force are the consequence of the local curvature of space-time. That curvature is defined mathematically by the so-called metric, which is a function of the total energy, including mass, in the area. The derivative of the metric is a function that approximates the classical Newtonian force in most cases. The metric is a tensorial quantity of degree 2 (it can be given as a 4x4 matrix, an object carrying 2 indices). Another possibility to explain gravitation in this context is by using both tensor (of degree n>1) and scalar fields, i.e. so that gravitation is given neither solely through a scalar field nor solely through a metric. These are scalar–tensor theories of gravitation. The field theoretical start of General Relativity is given through the Lagrange density. It is a scalar and gauge invariant (look at gauge theories) quantity dependent on the curvature scalar R. This Lagrangian, following Hamilton's principle, leads to the field equations of Hilbert and Einstein. If in the Lagrangian the curvature (or a quantity related to it) is multiplied with a square scalar field, field theories of scalar–tensor theories of gravitation are obtained. In them, the gravitational constant of Newton is no longer a real constant but a quantity dependent of the scalar field. 
Mathematical formulation An action of such a gravitational scalar–tensor theory can be written as follows: where is the metric determinant, is the Ricci scalar constructed from the metric , is a coupling constant with the dimensions , is the scalar-field potential, is the material Lagrangian and represents the non-gravitational fields. Here, the Brans–Dicke parameter has been generalized to a function. Although is often written as being , one has to keep in mind that the fundamental constant there, is not the constant of gravitation that can be measured with, for instance, Cavendish type experiments. Indeed, the empirical gravitational constant is generally no longer a constant in scalar–tensor theories, but a function of the scalar field . The metric and scalar-field equations respectively write: and Also, the theory satisfies the following conservation equation, implying that test-particles follow space-time geodesics such as in general relativity: where is the stress-energy tensor defined as The Newtonian approximation of the theory Developing perturbatively the theory defined by the previous action around a Minkowskian background, and assuming non-relativistic gravitational sources, the first order gives the Newtonian approximation of the theory. In this approximation, and for a theory without potential, the metric writes with satisfying the following usual Poisson equation at the lowest order of the approximation: where is the density of the gravitational source and (the subscript indicates that the corresponding value is taken at present cosmological time and location). Therefore, the empirical gravitational constant is a function of the present value of the scalar-field background and therefore theoretically depends on time and location. However, no deviation from the constancy of the Newtonian gravitational constant has been measured, implying that the scalar-field background is pretty stable over time. Such a stability is not theoretically generally expected but can be theoretically explained by several mechanisms. The first post-Newtonian approximation of the theory Developing the theory at the next level leads to the so-called first post-Newtonian order. For a theory without potential and in a system of coordinates respecting the weak isotropy condition (i.e., ), the metric takes the following form: with where is a function depending on the coordinate gauge It corresponds to the remaining diffeomorphism degree of freedom that is not fixed by the weak isotropy condition. The sources are defined as the so-called post-Newtonian parameters are and finally the empirical gravitational constant is given by where is the (true) constant that appears in the coupling constant defined previously. Observational constraints on the theory Current observations indicate that , which means that . Although explaining such a value in the context of the original Brans–Dicke theory is impossible, Damour and Nordtvedt found that the field equations of the general theory often lead to an evolution of the function toward infinity during the evolution of the universe. Hence, according to them, the current high value of the function could be a simple consequence of the evolution of the universe. Seven years of data from the NASA MESSENGER mission constraints the post-Newtonian parameter for Mercury's perihelion shift to . 
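As a concrete point of reference for the formulation sketched in this section, one widely used convention (the Bergmann–Wagoner or Jordan-frame form, with the limiting expressions assuming a constant coupling ω₀ and negligible potential, as in Brans–Dicke theory) is written out below. Normalizations vary between authors, so the coupling constant here need not match the one intended above; this is a representative form, not the article's own.

```latex
% Representative scalar-tensor action (Bergmann-Wagoner / Jordan-frame form);
% normalizations vary between authors.
S \;=\; \frac{1}{16\pi}\int \mathrm{d}^4x\,\sqrt{-g}\,
   \Bigl[\Phi R \;-\; \frac{\omega(\Phi)}{\Phi}\, g^{\mu\nu}\,\partial_\mu\Phi\,\partial_\nu\Phi
   \;-\; V(\Phi)\Bigr] \;+\; S_m\bigl[g_{\mu\nu},\Psi_m\bigr]

% Constant-coupling, negligible-potential (Brans-Dicke) limit:
G_{\mathrm{eff}} \;=\; \frac{1}{\Phi_0}\,\frac{2\omega_0+4}{2\omega_0+3},
\qquad
\gamma \;=\; \frac{1+\omega_0}{2+\omega_0}
```

Here Φ₀ and ω₀ are the present-day background values of the scalar field and coupling; general relativity is recovered as ω₀ → ∞ (γ → 1), which is the limit probed by the measurements just described.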
Both constraints show that while the theory is still a potential candidate to replace general relativity, the scalar field must be very weakly coupled in order to explain current observations. Generalized scalar-tensor theories have also been proposed as explanation for the accelerated expansion of the universe but the measurement of the speed of gravity with the gravitational wave event GW170817 has ruled this out. Higher-dimensional relativity and scalar–tensor theories After the postulation of the General Relativity of Einstein and Hilbert, Theodor Kaluza and Oskar Klein proposed in 1917 a generalization in a 5-dimensional manifold: Kaluza–Klein theory. This theory possesses a 5-dimensional metric (with a compactified and constant 5th metric component, dependent on the gauge potential) and unifies gravitation and electromagnetism, i.e. there is a geometrization of electrodynamics. This theory was modified in 1955 by P. Jordan in his Projective Relativity theory, in which, following group-theoretical reasonings, Jordan took a functional 5th metric component that led to a variable gravitational constant G. In his original work, he introduced coupling parameters of the scalar field, to change energy conservation as well, according to the ideas of Dirac. Following the Conform Equivalence theory, multidimensional theories of gravity are conform equivalent to theories of usual General Relativity in 4 dimensions with an additional scalar field. One case of this is given by Jordan's theory, which, without breaking energy conservation (as it should be valid, following from microwave background radiation being of a black body), is equivalent to the theory of C. Brans and Robert H. Dicke of 1961, so that it is usually spoken about the Brans–Dicke theory. The Brans–Dicke theory follows the idea of modifying Hilbert-Einstein theory to be compatible with Mach's principle. For this, Newton's gravitational constant had to be variable, dependent of the mass distribution in the universe, as a function of a scalar variable, coupled as a field in the Lagrangian. It uses a scalar field of infinite length scale (i.e. long-ranged), so, in the language of Yukawa's theory of nuclear physics, this scalar field is a massless field. This theory becomes Einsteinian for high values for the parameter of the scalar field. In 1979, R. Wagoner proposed a generalization of scalar–tensor theories using more than one scalar field coupled to the scalar curvature. JBD theories although not changing the geodesic equation for test particles, change the motion of composite bodies to a more complex one. The coupling of a universal scalar field directly to the gravitational field gives rise to potentially observable effects for the motion of matter configurations to which gravitational energy contributes significantly. This is known as the "Dicke–Nordtvedt" effect, which leads to possible violations of the Strong as well as the Weak Equivalence Principle for extended masses. JBD-type theories with short-ranged scalar fields use, according to Yukawa's theory, massive scalar fields. The first of this theories was proposed by A. Zee in 1979. He proposed a Broken-Symmetric Theory of Gravitation, combining the idea of Brans and Dicke with the one of Symmetry Breakdown, which is essential within the Standard Model SM of elementary particles, where the so-called Symmetry Breakdown leads to mass generation (as a consequence of particles interacting with the Higgs field). 
Zee proposed the Higgs field of SM as scalar field and so the Higgs field to generate the gravitational constant. The interaction of the Higgs field with the particles that achieve mass through it is short-ranged (i.e. of Yukawa-type) and gravitational-like (one can get a Poisson equation from it), even within SM, so that Zee's idea was taken 1992 for a scalar–tensor theory with Higgs field as scalar field with Higgs mechanism. There, the massive scalar field couples to the masses, which are at the same time the source of the scalar Higgs field, which generates the mass of the elementary particles through Symmetry Breakdown. For vanishing scalar field, this theories usually go through to standard General Relativity and because of the nature of the massive field, it is possible for such theories that the parameter of the scalar field (the coupling constant) does not have to be as high as in standard JBD theories. Though, it is not clear yet which of these models explains better the phenomenology found in nature nor if such scalar fields are really given or necessary in nature. Nevertheless, JBD theories are used to explain inflation (for massless scalar fields then it is spoken of the inflaton field) after the Big Bang as well as the quintessence. Further, they are an option to explain dynamics usually given through the standard cold dark matter models, as well as MOND, Axions (from Breaking of a Symmetry, too), MACHOS,... Connection to string theory A generic prediction of all string theory models is that the spin-2 graviton has a spin-0 partner called the dilaton. Hence, string theory predicts that the actual theory of gravity is a scalar–tensor theory rather than general relativity. However, the precise form of such a theory is not currently known because one does not have the mathematical tools in order to address the corresponding non-perturbative calculations. Besides, the precise effective 4-dimensional form of the theory is also confronted to the so-called landscape issue. See also References P. Jordan, Schwerkraft und Weltall, Vieweg (Braunschweig) 1955: Projective Relativity. First paper on JBD theories. C.H. Brans and R.H. Dicke, Phys. Rev. 124: 925, 1061: Brans–Dicke theory starting from Mach's principle. R. Wagoner, Phys. Rev. D1(812): 3209, 2004: JBD theories with more than one scalar field. A. Zee, Phys. Rev. Lett. 42(7): 417, 1979: Broken-Symmetric scalar-tensor theory. H. Dehnen and H. Frommert, Int. J. Theor. Phys. 30(7): 985, 1991: Gravitative-like and short-ranged interaction of Higgs fields within the Standard Model or elementary particles. H. Dehnen et al., Int. J. Theor. Phys. 31(1): 109, 1992: Scalar-tensor-theory with Higgs field. C.H. Brans, June 2005: Roots of scalar-tensor theories. . Discusses the history of attempts to construct gravity theories with a scalar field and the relation to the equivalence principle and Mach's principle. Tensors Theories of gravity String theory Physical cosmology Particle physics Physics beyond the Standard Model Mathematical physics
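A small numeric aside on the long-ranged versus short-ranged distinction used above: in Yukawa-type language, a scalar of mass m mediates a potential suppressed by exp(−r/λ) with range λ = ħ/(mc), so a massless field (λ → ∞) acts at all distances while a massive one is exponentially cut off. The masses and distances in the sketch below are illustrative assumptions, not values from the text.

```python
import math

HBAR_C_EV_M = 1.97327e-7   # hbar*c expressed in eV*m

def yukawa_range_m(mass_ev):
    # Range of a Yukawa-type interaction mediated by a scalar of mass m: lambda = hbar*c / (m*c^2).
    return HBAR_C_EV_M / mass_ev

def suppression(r_m, mass_ev):
    # Exponential suppression factor exp(-r/lambda) relative to a massless (1/r) potential.
    return math.exp(-r_m / yukawa_range_m(mass_ev))

for label, mass_ev in (("milli-eV scalar", 1e-3), ("eV scalar", 1.0), ("125 GeV Higgs", 125e9)):
    lam = yukawa_range_m(mass_ev)
    print(f"{label:15s}: range ~ {lam:.3e} m, suppression at 1 mm = {suppression(1e-3, mass_ev):.3e}")
```

Only the lightest (longest-range) scalars produce effects at macroscopic distances, which is why massless-field JBD theories and massive-field (Higgs-like) variants behave so differently.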
Scalar–tensor theory
[ "Physics", "Astronomy", "Mathematics", "Engineering" ]
2,809
[ "Astronomical hypotheses", "Astronomical sub-disciplines", "Tensors", "Applied mathematics", "Theoretical physics", "Unsolved problems in physics", "Astrophysics", "Particle physics", "Theories of gravity", "String theory", "Mathematical physics", "Physics beyond the Standard Model", "Physic...
3,628,430
https://en.wikipedia.org/wiki/Single-crossing%20condition
In monotone comparative statics, the single-crossing condition or single-crossing property refers to a condition where the relationship between two or more functions is such that they will only cross once. For example, a mean-preserving spread will result in an altered probability distribution whose cumulative distribution function will intersect with the original's only once. The single-crossing condition was posited in Samuel Karlin's 1968 monograph 'Total Positivity'. It was later used by Peter Diamond, Joseph Stiglitz, and Susan Athey, in studying the economics of uncertainty. The single-crossing condition is also used in applications where there are a few agents or types of agents that have preferences over an ordered set. Such situations appear often in information economics, contract theory, social choice and political economics, among other fields. Example using cumulative distribution functions Cumulative distribution functions F and G satisfy the single-crossing condition if there exists a such that and ; that is, function crosses the x-axis at most once, in which case it does so from below. This property can be extended to two or more variables. Given x and t, for all x'>x, t'>t, and . This condition could be interpreted as saying that for x'>x, the function g(t)=F(x',t)-F(x,t) crosses the horizontal axis at most once, and from below. The condition is not symmetric in the variables (i.e., we cannot switch x and t in the definition; the necessary inequality in the first argument is weak, while the inequality in the second argument is strict). Use in social choice and mechanism design Social choice In social choice theory, the single-crossing condition is a condition on preferences. It is especially useful because utility functions are generally increasing (i.e. the assumption that an agent will prefer or at least consider equivalent two dollars to one dollar is unobjectionable). Specifically, a set of agents with some unidimensional characteristic and preferences over different policies q satisfy the single crossing property when the following is true: If and or if and , then where W is the indirect utility function. An important result extends the median voter theorem, which states that when voters have single peaked preferences, there is a majority-preferred candidate corresponding to the median voter's most preferred policy. With single-crossing preferences, the most preferred policy of the voter with the median value of is the Condorcet winner. In effect, this replaces the unidimensionality of policies with the unidimensionality of voter heterogeneity. In this context, the single-crossing condition is sometimes referred to as the Gans-Smart condition. Mechanism design In mechanism design, the single-crossing condition (often referred to as the Spence-Mirrlees property for Michael Spence and James Mirrlees, sometimes as the constant-sign assumption) refers to the requirement that the isoutility curves for agents of different types cross only once. This condition guarantees that the transfer in an incentive-compatible direct mechanism can be pinned down by the transfer of the lowest type. This condition is similar to another condition called strict increasing difference (SID). Formally, suppose the agent has a utility function , the SID says we have . The Spence-Mirrlees Property is characterized by . See also Brouwer fixed-point theorem Notes References Asymmetric information Fixed-point theorems Utility function types
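To make the cumulative-distribution version concrete, the sketch below compares two normal CDFs with the same mean and different variances (a mean-preserving spread) and counts how often their difference changes sign across a grid; the expected answer is exactly once. The distributions, grid, and tolerance are illustrative choices, not taken from the text.

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    # CDF of a normal distribution via the error function (standard library only).
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# F ~ Normal(0, 1), G ~ Normal(0, 2): same mean, G is a mean-preserving spread of F.
xs = [-5.0 + 0.01 * i for i in range(1001)]
diff = [normal_cdf(x, 0.0, 2.0) - normal_cdf(x, 0.0, 1.0) for x in xs]

# Count sign changes of G - F, ignoring grid points where the difference is essentially zero.
signs = [d for d in diff if abs(d) > 1e-12]
crossings = sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

print("sign changes of G - F on the grid:", crossings)                              # expected: 1
print("G - F at x = -2:", round(normal_cdf(-2, 0, 2) - normal_cdf(-2, 0, 1), 4))    # positive
print("G - F at x = +2:", round(normal_cdf(+2, 0, 2) - normal_cdf(+2, 0, 1), 4))    # negative
```

The difference is positive below the common mean and negative above it, so it crosses zero exactly once, which is the single-crossing behaviour described above.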
Single-crossing condition
[ "Physics", "Mathematics" ]
707
[ "Theorems in mathematical analysis", "Asymmetric information", "Fixed-point theorems", "Theorems in topology", "Asymmetry", "Symmetry" ]
3,628,901
https://en.wikipedia.org/wiki/Vortex%20lift
Vortex lift is that portion of lift due to the action of leading edge vortices. It is generated by wings with highly sweptback, sharp, leading edges (beyond 50 degrees of sweep) or highly-swept wing-root extensions added to a wing of moderate sweep. It is sometimes known as non-linear lift due to its rapid increase with angle of attack and controlled separation lift, to distinguish it from conventional lift which occurs with attached flow. How it works Vortex lift works by capturing vortices generated from the sharply swept leading edge of the wing. The vortex, formed roughly parallel to the leading edge of the wing, is trapped by the airflow and remains fixed to the upper surface of the wing. As the air flows around the leading edge, it flows over the trapped vortex and is pulled in and down to generate the lift. A straight, or moderate sweep, wing may experience, depending on its airfoil section, a leading-edge stall and loss of lift, as a result of flow separation at the leading edge and a non-lifting wake over the top of the wing. However, on a highly-swept wing leading-edge separation still occurs but instead creates a vortex sheet that rolls up above the wing producing spanwise flow beneath. Flow not entrained by the vortex passes over the top of the vortex and reattaches to the wing surface. The vortex generates a high negative pressure field on the top of the wing. Vortex lift increases with angle of attack (AOA) as seen on lift~AOA plots which show the vortex, or unattached flow, adding to the normal attached lift as an extra non-linear component of the overall lift. Vortex lift has a limiting AoA at which the vortex bursts or breaks down. Applications Four basic configurations which have used vortex lift are, in chronological order, the 60-degree delta wing; the ogive delta wing with its sharply-swept leading edge at the root; the moderately-swept wing with a leading-edge extension, which is known as a hybrid wing; and the sharp-edge forebody, or vortex-lift strake. Wings which generate vortex lift have been used on delta-winged research aircraft such as the Convair XF-92A and Fairey Delta 2. Early delta wing fighters such as the F-102, the F-106, and contemporaries such as Dassault's deltas had cambered leading edges that were blunt and did not generate significant vortexes. The Concorde supersonic airliner had sharp leading edges. Wings with vortex lift over the inboard section are the moderate-sweep wings with an easily identified LERX used on high-manoeuvrability combat aircraft, such as the Northrop F-5 and McDonnell Douglas F/A-18 Hornet. Vortex lift sharp forebody strakes are used on the General Dynamics F-16 Fighting Falcon. Benefits and shortcomings Vortex lift provides high lift with increasing AoA at landing speeds and in manoeuvring flight. A high AoA needed to meet landing requirements has, in the past, restricted pilot visibility and led to design complications to accommodate a drooping nose, as in the case of the Fairey Delta 2 and Concorde. For moderate swept wings the addition of a LERX reduces wave drag and improves turning performance and enables a far wider range of flying attitudes. The use of vortex lift is restricted by vortex breakdown or bursting and an inherent instability in yaw. There is considerable drag due to increased lift production and loss of leading edge suction that is part of normal attached flow round a leading edge. Among animals Animals such as hummingbirds, and bats that eat pollen and nectar, are able to hover. 
They produce vortex lift with the sharp leading edges of their wings and change their wing shape and curvature to keep that lift stable. See also Kármán vortex street Aerodynamics Crab claw sail References Aerodynamics Vortices
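One quantitative way to see the non-linear character of vortex lift described above is the Polhamus leading-edge suction analogy for sharp-edged slender delta wings, which splits the lift coefficient into an attached-flow term and a vortex term, C_L = K_p sin α cos²α + K_v cos α sin²α. The coefficient values in the sketch below are illustrative assumptions (they depend on planform and are not figures from this article); the point is simply that the vortex contribution grows roughly with sin²α, i.e. non-linearly with angle of attack.

```python
import math

K_P = 2.4         # attached ("potential") lift factor, assumed value, planform dependent
K_V = math.pi     # vortex lift factor, assumed value, planform dependent

print(" AoA(deg)  attached   vortex    total C_L")
for deg in range(0, 35, 5):
    a = math.radians(deg)
    attached = K_P * math.sin(a) * math.cos(a) ** 2
    vortex = K_V * math.cos(a) * math.sin(a) ** 2
    print(f"{deg:9d}  {attached:8.3f}  {vortex:7.3f}  {attached + vortex:9.3f}")
```

At small angles the attached term dominates and lift grows almost linearly; by 20 to 30 degrees the vortex term supplies a large extra share, which is the behaviour exploited in landing and manoeuvring flight.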
Vortex lift
[ "Chemistry", "Mathematics", "Engineering" ]
802
[ "Vortices", "Aerodynamics", "Aerospace engineering", "Fluid dynamics", "Dynamical systems" ]
3,629,797
https://en.wikipedia.org/wiki/Vacuum%20furnace
A vacuum furnace is a type of furnace in which the product in the furnace is surrounded by a vacuum during processing. The absence of air or other gases prevents oxidation, heat loss from the product through convection, and removes a source of contamination. This enables the furnace to heat materials (typically metals and ceramics) to temperatures as high as with select materials. Maximum furnace temperatures and vacuum levels depend on melting points and vapor pressures of heated materials. Vacuum furnaces are used to carry out processes such as annealing, brazing, sintering and heat treatment with high consistency and low contamination. Characteristics of a vacuum furnace are: Uniform temperatures in the range. Commercially available vacuum pumping systems can reach vacuum levels as low as Temperature can be controlled within a heated zone, typically surrounded by heat shielding or insulation. Low contamination of the product by carbon, oxygen and other gases. Vacuum pumping systems remove low temperature by-products from the process materials during heating, resulting in a higher purity end product. Quick cooling (quenching) of product can be used to shorten process cycle times. The process can be computer controlled to ensure repeatability. Heating metals to high temperatures in open to atmosphere normally causes rapid oxidation, which is undesirable. A vacuum furnace removes the oxygen and prevents this from happening. An inert gas, such as Argon, is often used to quickly cool the treated metals back to non-metallurgical levels (below ) after the desired process in the furnace. This inert gas can be pressurized to two times atmosphere or more, then circulated through the hot zone area to pick up heat before passing through a heat exchanger to remove heat. This process continues until the desired temperature is reached. Common uses Vacuum furnaces are used in a wide range of applications in both production industries and research laboratories. For example, a low-temperature vacuum oven can be used for drying biomass much more efficiently than drying alone. Similarly, microwave-vacuum drying has shown potential for drying foods like cranberries. At temperatures below 1200 °C, a vacuum furnace is commonly used for the heat treatment of steel alloys. Many general heat treating applications involve the hardening and tempering of a steel part to make it strong and tough through service. Hardening involves heating the steel to a predetermined temperature, then cooling it rapidly in water, oil or suitable medium. A further application for vacuum furnaces is Vacuum Carburizing also known as Low Pressure Carburizing or LPC. In this process, a gas (such as acetylene) is introduced as a partial pressure into the hot zone at temperatures typically between . The gas disassociates into its constituent elements (in this case carbon and hydrogen). The carbon is then diffused into the surface area of the part. This function is typically repeated, varying the duration of gas input and diffusion time. Once the workload is properly "cased", the metal is quenched using oil or high pressure gas (HPGQ). For HPGQ, nitrogen or, for faster quench helium, is commonly used. This process is also known as case hardening. Another low temperature application of vacuum furnaces is debinding, a process for the removal of binders. Heat is applied under a vacuum in a sealed chamber, melting or vaporizing the binder from the aggregate. The binder is evacuated by the pumping system and collected or purged downstream. 
The material with a higher melting point is left behind in a purified state and can be further processed. Vacuum furnaces capable of temperatures above 1200 °C are used in industry sectors such as electronics, medicine, crystal growth, energy and artificial gems. Processing high-temperature materials, both metals and nonmetals, in a vacuum environment allows annealing, brazing, purification, sintering and other processes to take place in a controlled manner. References Industrial furnaces Furnace
Vacuum furnace
[ "Physics", "Chemistry", "Engineering" ]
793
[ "Metallurgical processes", "Vacuum", "Industrial furnaces", "Vacuum systems", "Matter" ]
18,767,352
https://en.wikipedia.org/wiki/Shape%20theory%20%28mathematics%29
Shape theory is a branch of topology that provides a more global view of the topological spaces than homotopy theory. The two coincide on compacta dominated homotopically by finite polyhedra. Shape theory associates with the Čech homology theory while homotopy theory associates with the singular homology theory. Background Shape theory was invented and published by D. E. Christie in 1944; it was reinvented, further developed and promoted by the Polish mathematician Karol Borsuk in 1968. Actually, the name shape theory was coined by Borsuk. Warsaw circle Borsuk lived and worked in Warsaw, hence the name of one of the fundamental examples of the area, the Warsaw circle. It is a compact subset of the plane produced by "closing up" a topologist's sine curve (also called a Warsaw sine curve) with an arc. The homotopy groups of the Warsaw circle are all trivial, just like those of a point, and so any map between the Warsaw circle and a point induces a weak homotopy equivalence. However these two spaces are not homotopy equivalent. So by the Whitehead theorem, the Warsaw circle does not have the homotopy type of a CW complex. Historical development Borsuk's shape theory was generalized onto arbitrary (non-metric) compact spaces, and even onto general categories, by Włodzimierz Holsztyński in year 1968/1969, and published in Fund. Math. 70, 157–168, y. 1971 (see Jean-Marc Cordier, Tim Porter, (1989) below). This was done in a continuous style, characteristic for the Čech homology rendered by Samuel Eilenberg and Norman Steenrod in their monograph Foundations of Algebraic Topology. Due to the circumstance, Holsztyński's paper was hardly noticed, and instead a great popularity in the field was gained by a later paper by Sibe Mardešić and Jack Segal, Fund. Math. 72, 61–68, y.1971. Further developments are reflected by the references below, and by their contents. For some purposes, like dynamical systems, more sophisticated invariants were developed under the name strong shape. Generalizations to noncommutative geometry, e.g. the shape theory for operator algebras have been found. See also List of topologies References Jean-Marc Cordier and Tim Porter, (1989), Shape Theory: Categorical Methods of Approximation, Mathematics and its Applications, Ellis Horwood. Reprinted Dover (2008) Aristide Deleanu and Peter John Hilton, On the categorical shape of a functor, Fundamenta Mathematicae 97 (1977) 157–176. Aristide Deleanu and Peter John Hilton, Borsuk's shape and Grothendieck categories of pro-objects, Mathematical Proceedings of the Cambridge Philosophical Society 79 (1976) 473–482. Sibe Mardešić and Jack Segal, Shapes of compacta and ANR-systems, Fundamenta Mathematicae 72 (1971) 41–59 Karol Borsuk, Concerning homotopy properties of compacta, Fundamenta Mathematicae 62 (1968) 223–254 Karol Borsuk, Theory of Shape, Monografie Matematyczne Tom 59, Warszawa 1975. D. A. Edwards and H. M. Hastings, Čech Theory: its Past, Present, and Future, Rocky Mountain Journal of Mathematics, Volume 10, Number 3, Summer 1980 D. A. Edwards and H. M. Hastings, (1976), Čech and Steenrod homotopy theories with applications to geometric topology, Lecture Notes in Mathematics 542, Springer-Verlag. Tim Porter, Čech homotopy I, II, Journal of the London Mathematical Society, 1, 6, 1973, pp. 429–436; 2, 6, 1973, pp. 667–675. J.T. Lisica and Sibe Mardešić, Coherent prohomotopy and strong shape theory, Glasnik Matematički 19(39) (1984) 335–399. Michael Batanin, Categorical strong shape theory, Cahiers Topologie Géom. 
Différentielle Catég. 38 (1997), no. 1, 3–66, numdam Marius Dădărlat, Shape theory and asymptotic morphisms for C*-algebras, Duke Mathematical Journal, 73(3):687–711, 1994. Marius Dădărlat and Terry A. Loring, Deformations of topological spaces predicted by E-theory, In Algebraic methods in operator theory, p. 316–327. Birkhäuser 1994. Topology Homotopy theory
Shape theory (mathematics)
[ "Physics", "Mathematics" ]
989
[ "Spacetime", "Topology", "Space", "Geometry" ]
18,769,542
https://en.wikipedia.org/wiki/Directional%20Recoil%20Identification%20from%20Tracks
The Directional Recoil Identification from Tracks (DRIFT) detector is a low pressure negative ion time projection chamber (NITPC) designed to detect weakly interacting massive particles (WIMPs) - a prime dark matter candidate. There are currently two DRIFT detectors in operation. DRIFT-IId, which is located 1100m underground in the Boulby Underground Laboratory at the Boulby Mine in North Yorkshire, England, and DRIFT-IIe, which is located on the surface at Occidental College in Los Angeles. The DRIFT collaboration ultimately aims to develop and operate an underground array of DRIFT detectors for observing and reconstructing WIMP-induced nuclear recoil tracks with enough precision to provide a signature of the dark matter halo. WIMP detection There are numerous experiments worldwide attempting to detect the energy deposition that is expected to occur when a WIMP directly collides with an atom of ordinary matter. Ultra sensitive experiments are required to detect the low energy and extremely rare interaction that is predicted to occur between a WIMP and the nucleus of an atom in a target material. The DRIFT detectors vary from the majority of WIMP detectors in their use of a low pressure gas as a target material. The low pressure gas means that an interaction within the detector causes an ionisation track of measurable length compared to the point like interactions seen in detectors with solid or liquid target materials. Such ionisation tracks can be reconstructed in three dimensions to determine not only the type of particle that caused it, but from which direction the particle came. This directional sensitivity has the potential to prove the existence of WIMPs by their distinct directional signature. Detection technology The DRIFT detector's target material is a 1 m3 cubical drift chamber filled with a low pressure mixture of carbon disulfide (CS2) and carbon tetrafluoride (CF4) gases (, respectively). It is predicted that WIMPs will occasionally collide with the nucleus of a sulfur or carbon atom in the carbon disulfide gas causing the nucleus to recoil. An energetic recoiling nucleus will ionise gas particles creating a path of free electrons. These free electrons readily attach to the electronegative CS2 molecules creating a track of ions. The gas volume is divided in half by a cathode at , which produces a static electric field that causes these negative ions to drift, whilst maintaining the track structure, to the MWPC planes at two ends of the detector. Addition of of oxygen to the gas mixture has been the key to full fiducialisation of sensitive volume of the DRIFT detector. Results DRIFT-IId published Spin-dependent limits in 2012. References External links DRIFT web site DRIFT-I on display at the Science Museum, London Experiments for dark matter search Research institutes in North Yorkshire
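For a sense of the low-energy scale of the sought-after interactions, the sketch below evaluates the standard elastic-scattering kinematics: the maximum nuclear recoil energy is E_R,max = 2μ²v²/m_N, with μ the WIMP–nucleus reduced mass. The WIMP mass and speed used are common illustrative assumptions rather than parameters quoted by the article; the point is simply that recoils land in the tens-of-keV range for the carbon, fluorine, and sulfur nuclei in the DRIFT gas mixture.

```python
C_KM_S = 299_792.458      # speed of light in km/s
AMU_GEV = 0.9315          # one atomic mass unit in GeV/c^2

def max_recoil_keV(m_chi_gev, mass_number, v_km_s=230.0):
    # Head-on elastic collision: E_R,max = 2 * mu^2 * v^2 / m_N (non-relativistic, natural units).
    m_n = mass_number * AMU_GEV
    mu = m_chi_gev * m_n / (m_chi_gev + m_n)
    beta = v_km_s / C_KM_S
    return 2.0 * mu**2 * beta**2 / m_n * 1.0e6   # convert GeV to keV

for target, mass_number in (("carbon", 12), ("fluorine", 19), ("sulfur", 32)):
    e_r = max_recoil_keV(100.0, mass_number)     # assumed 100 GeV WIMP at a typical halo speed
    print(f"100 GeV WIMP on {target:8s}: E_R,max ~ {e_r:5.1f} keV")
```

Recoil energies of order 10 to 20 keV are what make the detection so demanding, and the low-pressure gas is what stretches such a recoil into a track long enough to reconstruct its direction.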
Directional Recoil Identification from Tracks
[ "Physics" ]
552
[ "Dark matter", "Experiments for dark matter search", "Unsolved problems in physics" ]