Dataset columns: id (int64, 580 to 79M); url (string, 31 to 175 characters); text (string, 9 to 245k characters); source (string, 1 to 109 characters); categories (string, 160 distinct values); token_count (int64, 3 to 51.8k).
4,931,186
https://en.wikipedia.org/wiki/Blood%20fractionation
Blood fractionation is the process of fractionating whole blood, or separating it into its component parts. This is typically done by centrifuging the blood. The resulting components are: a clear solution of blood plasma in the upper phase (which can be separated into its own fractions, see Blood plasma fractionation), the buffy coat, which is a thin layer of leukocytes (white blood cells) mixed with platelets in the middle, and erythrocytes (red blood cells) at the bottom of the centrifuge tube. Serum separation tubes (SSTs) are tubes used in phlebotomy containing a silicone gel; when centrifuged, the silicone gel forms a layer on top of the buffy coat, allowing the blood serum to be removed more effectively for testing and related purposes. As an alternative to energy-consuming centrifugation, more energy-efficient technologies have been studied, such as ultrasonic fractionation. Plasma protein fractionation Plasma proteins are separated by exploiting the inherent differences between the individual proteins. Fractionation involves changing the conditions of the pooled plasma (e.g., the temperature or the acidity) so that proteins that are normally dissolved in the plasma fluid become insoluble, forming large clumps called precipitate. The insoluble protein can be collected by centrifugation. One of the most effective ways of carrying out this process is the addition of alcohol to the plasma pool while simultaneously cooling the pool. This process is sometimes called cold alcohol fractionation or ethanol fractionation. It was first described by, and bears the eponym of, Dr Edwin J. Cohn. This procedure is carried out in a series of steps so that a single pool of plasma yields several different protein products, such as albumin and immune globulin. Human serum albumin prepared by this process is used in some vaccines, for treating burn victims, and in other medical applications. See also Blood plasma fractionation References Blood Fractionation Medical technology
Blood fractionation
Chemistry,Biology
411
14,874,773
https://en.wikipedia.org/wiki/P2RX4
P2X purinoceptor 4 is a protein that in humans is encoded by the P2RX4 gene. P2X purinoceptor 4 is a member of the P2X receptor family. P2X receptors are trimeric protein complexes that can be homomeric or heteromeric. These receptors are ligand-gated cation channels that open in response to ATP binding. Each receptor subtype, determined by the subunit composition, varies in its affinity for ATP and in its desensitization kinetics. The P2X4 receptor is a homotrimer composed of three P2X4 monomers. They are nonselective cation channels with high calcium permeability, leading to the depolarization of the cell membrane and the activation of various Ca2+-sensitive intracellular processes. The P2X4 receptor is uniquely expressed on lysosomal compartments as well as the cell surface. The receptor is found in the central and peripheral nervous systems, in the epithelia of ducted glands and airways, in the smooth muscle of the bladder, gastrointestinal tract, uterus, and arteries, in uterine endometrium, and in fat cells. P2X4 receptors have been implicated in the regulation of cardiac function, ATP-mediated cell death, synaptic strengthening, and activation of the inflammasome in response to injury. Structure P2X receptors are composed of three subunits that can be homomeric or heteromeric. In mammals, there are seven different subunits, each encoded by a different gene (P2RX1-P2RX7). Each subunit has two transmembrane alpha helices (TM1 and TM2) linked by a large extracellular loop. Analysis of x-ray crystallographic structures revealed a 'dolphin-like' tertiary structure, where the 'tail' is embedded in the phospholipid bilayer and the upper and lower ectodomains form the 'head' and 'body', respectively. Adjacent interfaces of the subunits form a deep binding pocket for ATP. ATP binding to these orthosteric sites causes a conformational shift that opens the channel pore. The P2X4 subunits can form homomeric or heteromeric receptors. In 2009, the first purinergic receptor to be crystallized was the homomeric zebrafish P2X4 receptor in its closed state. Although the construct was truncated at its N- and C-termini, this crystal structure confirmed that these proteins are indeed trimers with an ectodomain rich in disulfide bonds. Gating mechanism P2X receptors have three confirmed conformational states: ATP-unbound closed, ATP-bound open, and ATP-bound desensitized. Imaging of the human P2X3 and rat P2X7 receptors has revealed structural similarities and differences in their cytoplasmic domains. In the ATP-bound state, both receptor types form beta sheet structures from the N- and C-termini of adjacent subunits. These newly folded secondary structures come together to form a 'cytoplasmic cap' that helps stabilize the open pore. Crystal structures of the desensitized receptor no longer exhibit the cytoplasmic cap. Desensitization Electrophysiological studies have revealed differences in the rates of receptor desensitization between different P2X subtypes. The homotrimers P2X1 and P2X3 are the fastest, with desensitization observed milliseconds after activation, while P2X2 and P2X4 receptors desensitize on the timescale of seconds. Notably, the P2X7 receptor does not undergo desensitization. Mutational studies of the rat P2X2 and P2X3 receptors have identified three residues in the N-terminus that contribute substantially to these differences. Changing the amino acids in P2X3 to match the analogous residues in P2X2 slowed the desensitization rate. 
Conversely, changing residues of P2X2 to match P2X3 increased the desensitization rate. In combination with the open-state crystal structures, this led to the hypothesis that the cytoplasmic cap stabilizes the open-pore conformation. Additionally, structural analysis of the open P2X3 receptor revealed transient changes in TM2, the transmembrane alpha helix lining the pore. While in the open-state conformation, a small mid-region of TM2 develops into a 3₁₀-helix. This helical structure disappears with desensitization, and TM2 instead reforms as a complete alpha helix repositioned closer to the extracellular side. The helical recoil model uses the observed structural changes in TM2 and the transient formation of the cytoplasmic cap to describe a possible mechanism for the desensitization of P2X receptors. In this model, it is theorized that the cytoplasmic cap fixes the intracellular end of the TM2 helix while stretching its extracellular end to allow ion influx. This would induce the observed 3₁₀-helix. The cap then disassembles and releases its hold on TM2, causing the helix to recoil towards the outer leaflet of the membrane. In support of this theory, the P2X7 receptor uniquely has a large cytoplasmic domain with palmitoylated C-cysteine anchor sites. These sites further stabilize its cytoplasmic cap by anchoring the domain into the surrounding inner leaflet. Mutations of the associated palmitoylation-site residues cause atypical desensitization of the receptor. Receptor trafficking P2X4 receptors are functionally expressed both on the cell surface and in lysosomes. Although preferentially localized and stored in lysosomes, P2X4 receptors are brought to the cell surface in response to extracellular signals. These signals include IFN-γ, CCL21, and CCL2. Fibronectin is also involved in upregulation of P2X4 receptors through interactions with integrins that lead to the activation of the Src-family kinase member Lyn. Lyn then activates the PI3K-AKT and MEK-ERK signaling pathways to stimulate receptor trafficking. Internalization of P2X4 receptors occurs via clathrin- and dynamin-dependent endocytosis. Pharmacology Agonists P2X4 receptors respond to ATP, but not to αβmeATP. These receptors are also potentiated by ivermectin, Cibacron blue, and zinc. Antagonists The main pharmacological distinction between the members of the purinoceptor family is the relative sensitivity to the antagonists suramin and pyridoxalphosphate-6-azophenyl-2',4'-disulphonic acid (PPADS). The product of this gene has the lowest sensitivity to these antagonists. Neuropathic pain The P2X4 receptor has been linked to neuropathic pain mediated by microglia in vitro and in vivo. P2X4 receptors are upregulated following injury. This upregulation allows for increased activation of p38 mitogen-activated protein kinases, thereby increasing the release of brain-derived neurotrophic factor (BDNF) from microglia. BDNF released from microglia induces neuronal hyperexcitability through interaction with the TrkB receptor. More importantly, recent work shows that P2X4 receptor activation is not only necessary for neuropathic pain but is also sufficient to cause it. See also P2X receptor References Further reading External links Ion channels
P2RX4
Chemistry
1,618
4,443,759
https://en.wikipedia.org/wiki/Vignette%20%28graphic%20design%29
A vignette, in graphic design, is a French loanword denoting a uniquely shaped frame for an image, either an illustration or a photograph. Rather than having rectilinear edges, the image is overlaid with decorative artwork featuring a unique outline. This is similar to the use of the word in photography, where the edges of an image that has been vignetted are non-linear or sometimes softened with a mask – often a darkroom process of introducing a screen. An oval vignette is probably the most common example. Originally, a vignette was a design of vine-leaves and tendrils (vignette = small vine in French). The term was also used for a small embellishment without a border, in what otherwise would have been a blank space, such as that found on a title-page, a headpiece or tailpiece. The use in modern graphic design is derived from book publishing techniques dating back to the end of the Middle Ages, in the period covered by analytical bibliography (ca. 1450 to 1800), when a vignette referred to an engraved design printed using a copper-plate press on a page that had already been printed on using a letter press (printing press). Vignettes are sometimes distinguished from other in-text illustrations printed on a copper-plate press by the fact that they do not have a border; such designs usually appear on title-pages only. Woodcuts, which are printed on a letterpress and are also used to separate sections or chapters, are identified as a headpiece, tailpiece or printer's ornament, depending on shape and position. See also Calligraphy, another conjunction of text and decoration Curlicues, flourishes in the arts usually composed of concentric circles, often used in calligraphy Scrollwork, general name for scrolling abstract decoration used in many areas of the visual arts References Graphic design Illustration Visual motifs
Vignette (graphic design)
Mathematics
384
16,913,034
https://en.wikipedia.org/wiki/Secondary%20carbon
A secondary carbon is a carbon atom that is bound to two other carbon atoms and has sp3 hybridization. For this reason, secondary carbon atoms are found in almost all hydrocarbons having at least three carbon atoms (neopentane, for example, does not have any secondary carbon atoms). In unbranched alkanes, the inner carbon atoms are always secondary carbon atoms (see figure). References Chemical nomenclature Organic chemistry
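As a toy illustration of this classification rule, the short Python sketch below represents an alkane's carbon skeleton as an adjacency list (a made-up minimal representation, not a standard cheminformatics format) and labels each carbon by how many carbon atoms it is bonded to; hydrogens are implicit and every carbon is assumed to be sp3-hybridized.

```python
def classify_carbons(skeleton):
    """Label each carbon by the number of carbon atoms bonded to it."""
    names = {1: "primary", 2: "secondary", 3: "tertiary", 4: "quaternary"}
    return {atom: names.get(len(neighbours), "isolated")
            for atom, neighbours in skeleton.items()}

# n-Butane, C1-C2-C3-C4: the two inner carbon atoms are secondary.
butane = {"C1": ["C2"], "C2": ["C1", "C3"], "C3": ["C2", "C4"], "C4": ["C3"]}

# Neopentane, a central carbon bonded to four methyl groups: no secondary carbons.
neopentane = {"C1": ["C2", "C3", "C4", "C5"],
              "C2": ["C1"], "C3": ["C1"], "C4": ["C1"], "C5": ["C1"]}

print(classify_carbons(butane))      # C2 and C3 are secondary
print(classify_carbons(neopentane))  # C1 is quaternary, the rest are primary
```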
Secondary carbon
Chemistry
88
57,691,027
https://en.wikipedia.org/wiki/NGC%201250
NGC 1250 is an edge-on lenticular galaxy located about 275 million light-years away in the constellation Perseus. It was discovered by astronomer Lewis Swift on October 21, 1886. NGC 1250 is a member of the Perseus Cluster. See also List of NGC objects (1001–2000) NGC 1277 References External links Perseus Cluster Perseus (constellation) Lenticular galaxies 1250 02613 012098 +07-07-040 Astronomical objects discovered in 1886 Discoveries by Lewis Swift
NGC 1250
Astronomy
110
55,497,181
https://en.wikipedia.org/wiki/Archaeal%20H/ACA%20sRNA
In archaea, as in eukaryotes, uridines in various RNAs are converted to pseudouridines by ribonucleoprotein complexes (RNPs) containing H/ACA sRNA. Because of their conserved function, these sRNAs are also called small "nucleolar" RNAs (snoRNAs), as in eukaryotes, even though prokaryotes have no nucleus. Using various computational and experimental approaches, seven H/ACA sRNAs and 15 pseudouridine (Ψ) residues on rRNA were identified in three Pyrococcus genomes. One H/ACA motif was shown to guide up to three distinct pseudouridylations. Atypical pseudouridine guide RNA features were identified in Pyrobaculum species. A few conserved Pyrobaculum H/ACA-like sRNAs lack the conserved 3'-terminal ACA sequence and sometimes the 5' portion of the pseudouridylation pocket. A study by Toffano-Nioche et al. proposes a unified structure/function model based on the common structural components shared by H/ACA and H/ACA-like motifs in "Euryarchaeota" and Thermoproteota (formerly Crenarchaeota). See also Other archaeal sRNAs: Pyrobaculum asR3 small RNA Methanosarcina sRNA162 References Non-coding RNA
Archaeal H/ACA sRNA
Chemistry
305
744,480
https://en.wikipedia.org/wiki/Kern%20River
The Kern River is an Endangered, Wild and Scenic river in the U.S. state of California, approximately long. It drains an area of the southern Sierra Nevada mountains northeast of Bakersfield. Fed by snowmelt near Mount Whitney, the river passes through scenic canyons in the mountains and is a popular destination for whitewater rafting and kayaking. It is the southernmost major river system in the Sierra Nevada, and is the only major river in the Sierra that drains in a southerly direction. The Kern River formerly emptied into the now dry Buena Vista Lake and Kern Lake via the Kern River Slough, and Kern Lake in turn emptied into Buena Vista Lake via the Connecting Slough at the southern end of the Central Valley. Buena Vista Lake, when overflowing, first backed up into Kern Lake and then upon rising higher drained into Tulare Lake via Buena Vista Slough and a changing series of sloughs of the Kern River. The lakes were part of a partially endorheic basin that sometimes overflowed into the San Joaquin River. This basin also included the Kaweah and Tule Rivers, as well as southern distributaries of the Kings River that all flowed into Tulare Lake. Since the late 19th century the Kern has been almost entirely diverted for irrigation, recharging aquifers, and the California Aqueduct, although some water empties into Lake Webb and Lake Evans, two small lakes in a portion of the former Buena Vista lakebed. The lakes were created in 1973 for recreational use. The lakes hold combined. Crops are grown in the rest of the former lakebed. In extremely wet years the river will reach the Tulare Lake basin through a series of sloughs and flood channels. Despite its remote source, nearly all of the river is publicly accessible. The Kern River is particularly popular for wilderness hiking and whitewater rafting. The Upper Kern River is paralleled by trails to within a half-mile of its source (which lies at ). Even with the presence of Lake Isabella, the river is perennial down to the lower Tulare Basin. Its swift flow at low elevation makes the river below the reservoir a popular location for rafting. Course The Kern begins in the Sierra Nevada in Sequoia National Park in northeastern Tulare County, near the border with Inyo County. The main branch of the river (sometimes called the North Fork Kern River) rises from several small lakes in a basin northwest of Mount Whitney. The headwaters are surrounded by the Great Western Divide to the west, the Kings-Kern Divide to the north and the main Sierra Crest to the east, all of which have multiple peaks above . The Kern River flows due south through a deep glacier-carved valley, passing through Inyo and Sequoia National Forests and the Golden Trout Wilderness, and receiving numerous tributaries including Rock Creek, Big Arroyo, Golden Trout Creek and Rattlesnake Creek. After deviating briefly from its due south course as it flows east around Hockett Peak, it is joined by the Little Kern River from the northwest at a site called Forks of the Kern. Below there, the Kern River continues south, and is joined by more tributaries including Peppermint Creek, South Creek, Brush Creek, and Salmon Creek, which all form large waterfalls as they tumble into the Kern River canyon. At Kernville the river emerges from its narrow canyon into a wide valley where it is impounded in Lake Isabella, formed by Isabella Dam. The area was once known as Whiskey Flat, the former location of the town of Kernville. 
In Lake Isabella, it is joined by its largest tributary, the South Fork Kern River, which drains a high plateau area to the east of the North Fork drainage. The -long South Fork rises in Tulare County and flows south through Inyo National Forest, turning west after entering Kern County. Below Isabella Dam the Kern River flows southwest in a rugged canyon along the south edge of the Greenhorn Mountains, parallel to SR 178. A number of hot springs (Scovern, Miracle, Remington, Delonegha, Democrat) are located along this section of the river. With a descent of between Isabella Dam and Bakersfield, this section of the Kern River feeds several hydroelectric plants and is also a popular whitewater run. Due to upstream dam releases for irrigation and power generation, this part of the river has a swift flow even in the driest summers. The river then flows through a winding valley in the Sierra foothills before entering the San Joaquin Valley at Bakersfield, the largest city on the river. In Bakersfield proper, most of the river's flow is diverted into various canals for agricultural use in the southern San Joaquin Valley and to provide municipal water supplies to the City of Bakersfield and surrounding areas. Diverting the river's flow has left of the riverbed that runs through Bakersfield dry. This fertile region is a large alluvial plain, or inland delta, formed by the Kern River, which once spread out into vast wetlands and seasonal lakes. The Friant-Kern Canal, constructed as part of the Central Valley Project, joins the Kern about west of downtown Bakersfield, restoring some flow to the river. The river channel continues about southwest to a point near the California Aqueduct on the western side of the San Joaquin Valley. A weir allows excess floodwaters from the Kern to drain into the California Aqueduct, while any remaining water continues south into the seasonal Buena Vista Lake, which once reached sizes of about in wet periods. Historically, a distributary of the Kern split off above Bakersfield and flowed south to what is now Arvin, where it formed the seasonal Kern Lake, which would grow to cover about during wet periods. Water from Kern Lake would then flow west through Buena Vista Slough into Buena Vista Lake. In periods of extremely high runoff, Buena Vista Lake overflowed and joined other wetlands and seasonal lakes in a series of sloughs that drained north into the former Tulare Lake, which would sometimes overflow into the San Joaquin River via Fresno Slough. The Kern River is one of the very few rivers in the Central Valley that does not contribute water to the Central Valley Project (CVP). However, water from the CVP, delivered mainly via the Friant-Kern Canal, is deposited in the aquifers for water storage. History The river was named in 1845 by John C. Frémont in honor of Edward M. Kern, who, as the story goes, nearly drowned in its turbulent waters. Kern was the topographer of Fremont's third expedition through the American West. Before this, the Kern River was known by the name given to it by Spanish missionary explorer Francisco Garcés when he explored the Bakersfield area on May 1, 1776. On August 2, 1806, Padre Zavidea renamed the river for the day of the Porciuncula Indulgence. It was locally known as Po-sun-co-la until its renaming by Fremont. Gold was discovered along the upper river in 1853. The snowmelt that fed the river resulted in periodic torrential flooding in Bakersfield until the construction of the Isabella Dam in the 1950s. 
These floods would periodically change the channel of the river. Since the establishment of Kern County in 1866, the main channel has flowed through what is now the main part of downtown Bakersfield along Truxtun Avenue and has again made a southward turn along what is now Old River Road. Many of the irrigation canals that flow in a southerly direction from the river follow the old channels of the Kern River, especially the canal that flows along Old River Road. The irrigated region of the Central Valley near the river supports the cultivation of alfalfa, carrots, fruit, and cotton, cattle grazing, and many other year-round crops. In 1987 the United States Congress designated of the Kern's North (Main) Fork and South Fork as a National Wild and Scenic River. The great Fort Tejon earthquake of January 9, 1857, on the San Andreas Fault, with an estimated magnitude of 7.9, was strong enough to temporarily reverse the direction of flow of the Kern River. Fish in the now dry Tulare Lake were left stranded on the shores. The Buena Vista Lake basin is an arid area and the Kern River is the only significant water supply. Ongoing conflicts between urban and agricultural interests complicate management decisions, particularly in recent years owing to the expiration of some long-term contractual agreements. Lux v. Haggin - 1886 California Supreme Court case on water rights The Kern River was at the center of Lux v. Haggin, 69 Cal. 255; 10 P. 674; (1886), a historic case in the development of water rights in California, and also one of the most consequential water lawsuits in American history. The overarching issue in Lux v. Haggin was whether the court would uphold English common law riparian rights (even though they were poorly suited to California's Mediterranean climate), institute the primacy of appropriative water rights, or create an entirely new system of water rights. The decision was important because it gave the court a chance to either continue to uphold English common law and riparian rights or give appropriative rights supremacy. In the end, the court recognized both water rights systems but decided that appropriative rights were secondary to riparian rights. The ruling "created chaos by shackling the state with two fundamentally incompatible water allocation systems". Additionally, the original definition of "reasonable" water use under English common law was changed. The court decided that water could be used for commercial and agricultural purposes as long as the use did not negatively affect other riparian landowners. This broadening of the "reasonable" use definition meant that riparian landowners could now use more water than previously allowed. The subsequent 1888 Miller-Haggin Agreement, which divides the water between First Point and Second Point users, still governs the use of Kern River water today. National Wild And Scenic Rivers System designation On November 24, 1987, portions of the Kern River were designated as Wild & Scenic under the Wild & Scenic Rivers Act. The Wild & Scenic designation is broken down into Wild, Scenic, and Recreational segments. Ecology The Kern River watershed is the native range of California's State Freshwater Fish, the California golden trout (Oncorhynchus mykiss aguabonita), which are native to the Kern River tributaries South Fork Kern River and Golden Trout Creek, and the latter's tributary, Volcano Creek. Two other currently recognized and closely related sibling subspecies, the Little Kern golden trout (O. m. 
whitei), found in the Little Kern River basin, and the Kern River rainbow trout (O. m. gilberti), are also found in the Kern River system. Together, these three trout form what is sometimes referred to as the "Golden Trout Complex". The rare and endangered Kern Canyon slender salamander lives alongside the river. In 2008, after public outcry, the City of Bakersfield and the California Department of Fish and Game (CDFG) decided to relocate a family of California golden beaver (Castor canadensis subauratus) instead of killing them. California golden beaver were native to the Central Valley and throughout the Sierra Nevada. Specific to the Kern watershed, an oral history was taken from Roy De Voe, who claimed to have seen "very old beaver sign" on the east side of the Kern River at Funston Meadow (elevation ) in 1946. Mr. De Voe also reported that his friend Kenny Keelor trapped the Kern River for beaver around 1900, making his camp at the mouth of Rattlesnake Creek (elevation ), until they were trapped out completely by 1910–1914. The presence of Beaver Canyon Creek, a tributary to the lower Kern River just east of Delonegha Hot Springs, is also consistent with the Kern River watershed having historically supported native beaver. This oral history is consistent with another oral history taken one watershed to the north by CDFG's Donald T. Tappe from a retired game warden in 1940, who stated that beaver were "apparently not uncommon on the upper part of the Kings River" until 1882–1883. Currently, there are large numbers of beaver in the Ramshaw Meadows on the South Fork Kern River, where their dams are trapping sediment, forming extensive pools, accelerating meadow restoration, and increasing riparian willow habitat. The Panorama Vista Preserve Panorama Vista Preserve is a 930-acre wildlife refuge and outdoor recreational area located in the northeast part of Bakersfield, California. The preserve includes hiking trails, biking paths, and areas for horseback riding, and is known for being dog-friendly. It serves as a sanctuary for endangered species such as the San Joaquin kit fox and the Bakersfield cactus. Panorama Vista Preserve is located near Panorama Park and "The Bluffs" and comprises two distinct floodplain elevations that support different vegetation communities. The lower terrace features typical riparian forests and shrub lands, while the upper terrace supports a salt-brush scrub community and relict stands of the endangered Bakersfield cactus. Visitors to the preserve can see the Gordon's Ferry historic landmark and learn about the natural and cultural history of the region. Kern River Hatchery Just north of Lake Isabella is the Kern River Hatchery. The Kern River Hatchery is also home to the Fishing and Natural History Museum and features picnic grounds and outdoor activities in the area. Hatchery closure On December 1, 2020, after 3 years of extensive renovations, the hatchery was closed down by the California Department of Fish and Wildlife, just 20 months after being reopened. According to CDFW, the hatchery is closed for repairs with the primary focus on "replacement of a pipeline that is more than 50 years old and no longer adequately provides a reliable water supply for fish production". There is currently no date set for reopening the hatchery. Despite the closure, the hatchery still diverts 35 cfs year-round from the North Fork Kern River at the expense of the North Fork Kern fishery and its biome. 
Geology The upper Kern River Canyon was created primarily as a result of tectonic forces, and not just by the erosional force of the river. The geologically active Kern Canyon Fault runs the length of the canyon, from the river's headwaters down to the Walker Basin about south of Lake Isabella. The river's course has been modified several times throughout ancient geological history. Prior to 10 million years ago, the Kern River flowed into the San Joaquin Valley at a point further south, along what is now Walker Basin Creek, which outlets north of Arvin. Uplift west of the Kern Canyon Fault blocked the river and forced it to cut a new course further north, forming the steep gorge below Lake Isabella and Bakersfield. The upper part of the Kern River canyon, at least above Golden Trout Creek, was widened and deepened by glaciers during the Ice Ages. The Kern Canyon fault passes very close to Isabella Dam, and is considered a threat to the dam's structural stability. The Kern River Oil Field is adjacent to the river on the north, just before the river flows into Bakersfield. The large oil field, on low hills which rise gradually into the Sierra foothills, formerly allowed much of its wastewater to drain directly south into the river. However, modern environmental regulation ended this practice, and the contaminated water is now cleaned at water treatment plants and used to irrigate farms in the valley to the west. Discharge Due to water diversion and Isabella Dam the Kern River's discharge changes considerably over its length. The highest mean annual flows occur just downriver of Isabella Dam, but because the dam serves to regulate the flow of water the highest daily discharges occur above the dam on the North Fork section of the Kern River. The USGS stream gauge on the North Fork Kern River has recorded an average annual mean discharge of and a maximum daily discharge of , and the gauge on the South Fork Kern River shows an average annual mean discharge of and a maximum daily discharge of . In contrast the first stream gauge below Isabella Dam has recorded an average annual mean of but a maximum daily discharge of only . Due to water withdrawals the three stream gauge stations below Isabella Dam show a dramatically decreasing discharge. At the last gauge, near Bakersfield, the river's average flow is only . Recreation and Tourism Activities Kern Canyon, the deep canyon of the river northeast of Bakersfield, is a popular location for fishing and boating, particularly fly fishing and whitewater rafting, whitewater kayaking, and riverboarding. Of particular interest to fishermen are the Kern River rainbow trout, the Little Kern golden trout, and the California golden trout. The Kern Canyon is popular for camping, hiking, and picnicking. There are developed campgrounds maintained by the US Forest Service along the North Fork of the Kern River. Campgrounds include Camp 3, Fairview, Goldledge, Headquarters, Hospital Flat, and Limestone. All of the campgrounds are open in the summer months while only a few remain open year-round. Safety Concerns The Kern is well known for its danger, and is sometimes referred to as the "killer Kern". A sign at the mouth of Kern Canyon warns visitors: "Danger. Stay Out. Stay Alive" and tallies the deaths since 1968; as of May 23, 2024, the number of deaths listed is 335. Merle Haggard's song "Kern River" fictionally recounts such a tragedy. 
Below the canyon the Kern River has a gradient of 0.3% until it reaches the Kern River Oil Field and begins to meander along flat land into and through the city of Bakersfield. Tubing is popular along this stretch. Kernville Whitewater The Class II whitewater located in Kernville is used for the slalom event of the annual Kern River Festival. The Kern River Parkway Trail The Kern River Parkway Trail is a system of hiking and biking trails that extends along the Kern River from the mouth of the canyon to Hart Park in Bakersfield, California. The trail system is part of the larger Kern River Parkway, which includes several parks, picnic areas, and green spaces along the river. History The Kern River Parkway Trail was first proposed in the 1970s as part of a plan to create a system of parks and trails along the Kern River. The first section of the trail, between the mouth of the canyon and Buena Vista Lake, was completed in the 1980s. Since then, the trail has been extended to its current length, with several amenities added along the way. Route The Kern River Parkway Trail is a multi-use trail that can be used for hiking, biking, and equestrian activities. The trail extends for approximately 30 miles, from the mouth of the canyon to Hart Park in Bakersfield. The trail follows the Kern River through several parks and green spaces, including the Kern River County Park, Yokuts Park, and the Kern River Preserve. The trail is mostly flat, with a few gentle slopes and curves, and several rest stops and picnic areas are available along the way. Amenities The Kern River Parkway Trail offers several amenities for hikers, bikers, and equestrians, including parking areas, restrooms, and drinking fountains. The trail system includes several hiking and biking trails that branch off from the main trail. These trails offer a variety of terrain and difficulty levels, from easy walks along the river to challenging mountain bike rides through the surrounding hills. The trail is maintained by the City of Bakersfield and the Kern County Parks and Recreation Department, with funding provided by grants and community donations. In popular culture Songs Kern River - 1985 song written by American country music singer Merle Haggard. Kern River - 2022 song written by American Comedian and Songwriter Tim Heidecker in his album High School (2022). Kern River Blues - Also written by Merle Haggard, this was his last recording. Art Projects Flow Flow was a temporary art installation created by environmental artist Andres Amador on the Kern River in Bakersfield, California. The installation was meant to highlight the irony of the dry riverbed and raise awareness about the need to restore water flow to the Kern River. It was commissioned by "Bring Back the Kern", a community group advocating for the restoration of the river, and supported by the Virginia and Alfred Harrell Foundation. The installation featured a 2,500-square-foot image on the riverbed floor south of the 24th street overcrossing, created with an invasive bamboo-like reed called Arundo, which the group would like to eradicate from the Kern River. The design evoked the smooth lines of flowing water and the eddies created by a river's currents. The lines in the design were over 2500 feet in length, making it a large-scale installation. Flow involved extensive community involvement, including volunteer participation in the collection of plant material, the creation of the installation, and the removal of the artwork. 
The plant material used in the installation was collected by volunteers who had removed invasive plants from elsewhere along the parkway. The seeds from this plant were sterile, but the plant material was removed and disposed of within two weeks of the installation to ensure that the invasive weeds did not spread elsewhere along the river. Bring Back the Kern hoped to draw attention to the lack of water in the Kern River and advocate for solutions, including calling upon the State Water Resources Control Board to award unappropriated water to the city of Bakersfield. The group believed that restoring water flow to the river would revive the local ecosystem, provide recreation opportunities, and enhance property values in the city. Flow was a visual representation of the group's message, highlighting the need for action to restore the Kern River. The installation was open to the public from 9 a.m. to 5 p.m. on Thursday, February 11, and was available for viewing until its removal two weeks later. Visitors were encouraged to enjoy the artwork and engage with it. A River Remembered Miguel Rodriguez, a team member with Bring Back the Kern, is collecting photos, videos and stories of the once-flowing river for an art exhibit. The Mighty Kern River "The Mighty Kern River" is a children's book in the "Indy, Oh Indy" series created by author Teresa Adamo and illustrator Jennifer Williams-Cordova. The book takes readers on a tour of the Kern River, from its mountain origins to the Buena Vista Lake Aquatic Recreation Area west of Bakersfield. The book was inspired by Bring Back the Kern, a grassroots group raising awareness about Bakersfield's mostly dry river and efforts to revive a more regular flow of water through town. The book features cute animal characters and easy-to-follow prose to introduce children to the importance of a running river and its benefits to the community and environment. The creators of the "Indy" book series used Kickstarter to fund "The Mighty Kern River", with a goal of $6,500. The book costs $15 and includes a fold-out map of the river. As an added incentive, the creators are offering "extras" for donors, including having a person illustrated into the book or having someone's name painted on the side of a ski boat. The book is set to be released to buyers by August 1. Bring Back the Kern wrote a foreword for the book to capture some of the river's history and legal issues. The group is promoting the book as a way for readers, young and old, to envision a flowing river "year-round". The state Water Resources Control Board recently announced it would begin the hearing process on the Kern River to determine the allocation of unappropriated water. Bring Back the Kern has been advocating for a hearing and for any available water to be given to the city for use in the riverbed. Legal Action for Restoration and Public Trust Issues In December 2022, six environmental groups initiated a lawsuit against the city of Bakersfield, aiming to restore the flow of the Kern River, which had been heavily diverted to supply water to farms. The lawsuit argued that the city's continued allowance of water diversions upstream was harmful to both the environment and the community, violating California's public trust doctrine. In January 2022, critics, including attorney Adam Keats, argued that the State Water Resources Control Board was not adequately addressing public interests in its handling of the Kern River water case. 
The state hearings were not set to consider the impact of water diversion on the environment, recreation, drinking water, or quality of life, collectively known as the "public trust". Keats argued that the first consideration should be the requirements of the public trust, including the amount of water needed to maintain the river's flow. The Bakersfield group Bring Back the Kern sent a letter to the state requesting that the Water Board direct its Administrative Hearing Officer to include several public trust questions. The group also referenced California Fish and Game Code Section 5937, which mandates that dam owners must allow enough water into rivers to sustain a fishery. The letter pointed out that the current situation on the Kern River precluded the existence of any fisheries. Endangered river status The Lower Kern River was listed by American Rivers as one of America's Most Endangered Rivers of 2022. According to American Rivers, "decades of excessive water diversions for agriculture operations have dried up the last of the Lower Kern River." Water right holders use a series of canals, many of which run right next to the dry riverbed, to divert water from the natural riverbed, and in the process have allowed the Lower Kern River to run dry. Doing so has denied the 500,000 residents of the surrounding communities access to a flowing river, in direct violation of the public trust doctrine. "Under the Public Trust Doctrine, California is obligated to protect flowing waterways for the benefit of current and future generations." Several local groups, such as Bring Back the Kern, Kern River Fly Fishers, and the Kern River Parkway Foundation, are working to restore the Lower Kern River. See also Indigenous peoples of California Tübatulabal (the Pahkanapil) of the Central Valley Yokuts Palagewan, who lived along the North Fork Kern River into Hot Springs Valley Bankalachi (the Toloim), who lived in the Greenhorn Mountains south to Poso Creek around Linns Valley and north to the Tule River Kawaiisu (the Nuwa), who lived in the Weldon Valley, the Kelso Valley, and Piute Mountains Yawelmani (the Yowlumne), who lived along the Kern River on the east side, north of Kern Lake References External links Kern Valley River Council Kern River Preserve Kern River Parkway Foundation Bring Back the Kern Kern River Fly Fishers Kern River Boaters Kern River Conservancy Rivers of Kern County, California Rivers of Tulare County, California Rivers of the Sierra Nevada (United States) Geography of the San Joaquin Valley Tributaries of the San Joaquin River Wild and Scenic Rivers of the United States Central Valley Project Kern River Valley Rivers of Northern California Rivers of the Sierra Nevada in California
Kern River
Engineering
5,486
58,035,891
https://en.wikipedia.org/wiki/Bioinformatics%20discovery%20of%20non-coding%20RNAs
Non-coding RNAs have been discovered using both experimental and bioinformatic approaches. Bioinformatic approaches can be divided into three main categories. The first involves homology search, although these techniques are by definition unable to find new classes of ncRNAs. The second category includes algorithms designed to discover specific types of ncRNAs that have similar properties. Finally, some discovery methods are based on very general properties of RNA, and are thus able to discover entirely new kinds of ncRNAs. Discovery by homology search Homology search refers to the process of searching a sequence database for RNAs that are similar to already known RNA sequences. Any algorithm that is designed for homology search of nucleic acid sequences can be used, e.g., BLAST. However, such algorithms typically are not as sensitive or accurate as algorithms specifically designed for RNA. Of particular importance for RNA is the conservation of its secondary structure, which can be modeled to achieve additional accuracy in searches. For example, covariance models can be viewed as an extension of a profile hidden Markov model that also reflects conserved secondary structure. Covariance models are implemented in the Infernal software package. Discovery of specific types of ncRNAs Some types of RNAs have shared properties that algorithms can exploit. For example, tRNAscan-SE specializes in finding tRNAs. The heart of this program is a tRNA homology search based on covariance models, but other tRNA-specific search programs are used to accelerate searches. The properties of snoRNAs have enabled the development of programs to detect new examples of snoRNAs, including those that might be only distantly related to previously known examples. Computer programs implementing such approaches include snoscan and snoReport. Similarly, several algorithms have been developed to detect microRNAs. Examples include miRNAFold and miRNAminer. Discovery by general properties Some properties are shared by multiple unrelated classes of ncRNA, and these properties can be targeted to discover new classes. Chief among them is the conservation of RNA secondary structure. To measure conservation of secondary structure, it is necessary to somehow find homologous sequences that might exhibit a common structure. Strategies to do this have included using BLAST between two or more sequences, exploiting synteny via orthologous genes, or using locality-sensitive hashing in combination with sequence and structural features. Mutations that change the nucleotide sequence but preserve secondary structure are called covariations and can provide evidence of conservation. Other statistics and probabilistic models can be used to measure such conservation. The first ncRNA discovery method to use structural conservation was QRNA, which compared the probabilities of an alignment of two sequences based on either an RNA model or a model in which only the primary sequence is conserved. Work in this direction has been extended to more than two sequences and has included phylogenetic models, e.g., with EvoFold. An approach taken in RNAz involved computing statistics on an input multiple-sequence alignment. Some of these statistics relate to structural conservation, while others measure general properties of the alignment that could affect the expected ranges of the structural statistics. These statistics were combined using a support vector machine. Other properties include the appearance of a promoter to transcribe the RNA. 
ncRNAs are also often followed by a Rho-independent transcription terminator. Using a combination of these approaches, multiple studies have enumerated candidate RNAs. Some studies have proceeded to manual analysis of the predictions to arrive at detailed structural and functional predictions. See also 6A RNA motif AbiF RNA motif ARRPOF RNA motif CyVA-1 RNA motif List of RNA structure prediction software References Non-coding RNA Bioinformatics
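To make the covariation idea above concrete, here is a minimal Python sketch (an illustrative toy, not one of the tools named above): given two aligned sequences and a shared secondary structure in dot-bracket notation, it counts paired columns in which the nucleotides differ between the two sequences yet both sequences still form a valid base pair, i.e. compensatory, structure-preserving changes.

```python
# Watson-Crick and G-U wobble pairs count as valid base pairs.
VALID_PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def paired_columns(structure):
    """Return (i, j) column pairs from a dot-bracket secondary-structure string."""
    stack, pairs = [], []
    for i, ch in enumerate(structure):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            pairs.append((stack.pop(), i))
    return pairs

def count_covariations(seq1, seq2, structure):
    """Count paired columns where both sequences pair validly but with different nucleotides."""
    count = 0
    for i, j in paired_columns(structure):
        pair1, pair2 = (seq1[i], seq1[j]), (seq2[i], seq2[j])
        if pair1 in VALID_PAIRS and pair2 in VALID_PAIRS and pair1 != pair2:
            count += 1
    return count

# Toy alignment: the helix is preserved even though one of its base pairs changes.
structure = "((((....))))"
seq1 = "GCGCAAAAGCGC"
seq2 = "GAGCAAAAGCUC"
print(count_covariations(seq1, seq2, structure))  # -> 1 compensatory change
```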
Bioinformatics discovery of non-coding RNAs
Engineering,Biology
762
3,133,750
https://en.wikipedia.org/wiki/Open%20Vulnerability%20and%20Assessment%20Language
Open Vulnerability and Assessment Language (OVAL) is an international, information security, community standard to promote open and publicly available security content, and to standardize the transfer of this information across the entire spectrum of security tools and services. OVAL includes a language used to encode system details, and an assortment of content repositories held throughout the community. The language standardizes the three main steps of the assessment process: representing configuration information of systems for testing; analyzing the system for the presence of the specified machine state (vulnerability, configuration, patch state, etc.); and reporting the results of this assessment. The repositories are collections of publicly available and open content that utilize the language. The OVAL community has developed three schemas written in Extensible Markup Language (XML) to serve as the framework and vocabulary of the OVAL Language. These schemas correspond to the three steps of the assessment process: an OVAL System Characteristics schema for representing system information, an OVAL Definition schema for expressing a specific machine state, and an OVAL Results schema for reporting the results of an assessment. Content written in the OVAL Language is located in one of the many repositories found within the community. One such repository, known as the OVAL Repository, is hosted by The MITRE Corporation. It is the central meeting place for the OVAL Community to discuss, analyze, store, and disseminate OVAL Definitions. Each definition in the OVAL Repository determines whether a specified software vulnerability, configuration issue, program, or patch is present on a system. The information security community contributes to the development of OVAL by participating in the creation of the OVAL Language on the OVAL Developers Forum and by writing definitions for the OVAL Repository through the OVAL Community Forum. An OVAL Board consisting of representatives from a broad spectrum of industry, academia, and government organizations from around the world oversees and approves the OVAL Language and monitors the posting of the definitions hosted on the OVAL Web site. This means that the OVAL, which is funded by US-CERT at the U.S. Department of Homeland Security for the benefit of the community, reflects the insights and combined expertise of the broadest possible collection of security and system administration professionals worldwide. OVAL is used by the Security Content Automation Protocol (SCAP). OVAL Language The OVAL Language standardizes the three main steps of the assessment process: representing configuration information of systems for testing; analyzing the system for the presence of the specified machine state (vulnerability, configuration, patch state, etc.); and reporting the results of this assessment. OVAL Interpreter The OVAL Interpreter is a freely available reference implementation created to show how data can be collected from a computer for testing based on a set of OVAL Definitions and then evaluated to determine the results of each definition. The OVAL Interpreter demonstrates the usability of OVAL Definitions, and can be used by definition writers to ensure correct syntax and adherence to the OVAL Language during the development of draft definitions. It is not a fully functional scanning tool and has a simplistic user interface, but running the OVAL Interpreter will provide you with a list of result values for each evaluated definition. 
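As an informal illustration of the reporting step, the short Python sketch below walks an OVAL Results document and prints the attributes of each reported definition element. The file name results.xml is hypothetical, elements are matched by local name so no particular namespace URI is assumed, and the sketch is not part of the OVAL specification or the OVAL Interpreter itself.

```python
import xml.etree.ElementTree as ET

def local_name(tag):
    """Strip an XML namespace prefix of the form '{uri}name'."""
    return tag.rsplit("}", 1)[-1]

# "results.xml" stands in for an OVAL Results file produced by an evaluation run.
tree = ET.parse("results.xml")

# Print every reported definition with whatever attributes the document carries,
# typically the definition identifier and its evaluated result value.
for elem in tree.getroot().iter():
    if local_name(elem.tag) == "definition":
        print(elem.attrib)
```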
OVAL Repository The OVAL Repository is the central meeting place for the OVAL Community to discuss, analyze, store, and disseminate OVAL Definitions. Other repositories in the community also host OVAL content, which can include OVAL System Characteristics files and OVAL Results files as well as definitions. The OVAL Repository contains all community-developed OVAL Vulnerability, Compliance, Inventory, and Patch Definitions for supported operating systems. Definitions are free to use and implement in information security products and services. The OVAL Repository Top Contributor Award Program grants awards on a quarterly basis to the top contributors to the OVAL Repository. The Repository is a community effort, and contributions of new content and modifications are instrumental in its success. The awards serve as public recognition of an organization’s support of the OVAL Repository and as an incentive to others to contribute. Organizations receiving the award will also receive an OVAL Repository Top Contributor logo indicating the quarter of the award (e.g., 1st Quarter 2007) that may be used as they see fit. Awards are granted to organizations that have made a significant contribution of new or modified content each quarter. OVAL Board The OVAL Board is an advisory body, which provides valuable input on OVAL to the Moderator (currently MITRE). While it is important to have organizational support for OVAL, it is the individuals who sit on the OVAL Board and their input and activity that truly make a difference. The Board’s primary responsibilities are to work with the Moderator and the Community to define OVAL, to provide input into OVAL’s strategic direction, and to advocate OVAL in the Community. See also MITRE The MITRE Corporation Common Vulnerability and Exposures (index of standardized names for vulnerabilities and other security issues) XCCDF - eXtensible Configuration Checklist Description Format Security Content Automation Protocol uses OVAL External links OVAL web site Gideon Technologies (OVAL Board Member) Corporate Web Site www.itsecdb.com Portal for OVAL definitions from several sources oval.secpod.com SecPod OVAL Definitions Professional Feed Computer security procedures Mitre Corporation
Open Vulnerability and Assessment Language
Engineering
1,034
12,458,572
https://en.wikipedia.org/wiki/Dissolution%20testing
In the pharmaceutical industry, drug dissolution testing is routinely used to provide critical in vitro drug release information for both quality control purposes, i.e., to assess batch-to-batch consistency of solid oral dosage forms such as tablets, and drug development, i.e., to predict in vivo drug release profiles. There are three typical situations where dissolution testing plays a vital role: (i) formulation and optimization decisions: during product development, for products where dissolution performance is a critical quality attribute, both the product formulation and the manufacturing process are optimized based on achieving specific dissolution targets; (ii) equivalence decisions: during generic product development, and also when implementing post-approval process or formulation changes, similarity of in vitro dissolution profiles between the reference product and its generic or modified version is one of the key requirements for regulatory approval decisions; (iii) product compliance and release decisions: during routine manufacturing, dissolution outcomes are very often one of the criteria used to make product release decisions. The main objective of developing and evaluating an IVIVC (in vitro–in vivo correlation) is to establish the dissolution test as a surrogate for human studies, as stated by the Food and Drug Administration (FDA). Analytical data from drug dissolution testing are sufficient in many cases to establish safety and efficacy of a drug product without in vivo tests, following minor formulation and manufacturing changes (Qureshi and Shabnam, 2001). Thus, dissolution testing, which is conducted in a dissolution apparatus, must be able to provide accurate and reproducible results. Equipment Several dissolution apparatuses exist. In the United States Pharmacopeia (USP) General Chapter <711> Dissolution, the following dissolution apparatuses are standardized and specified: USP Dissolution Apparatus 1 – Basket (37 °C ± 0.5 °C) USP Dissolution Apparatus 2 – Paddle (37 °C ± 0.5 °C) USP Dissolution Apparatus 3 – Reciprocating Cylinder (37 °C ± 0.5 °C) USP Dissolution Apparatus 4 – Flow-Through Cell (37 °C ± 0.5 °C) USP Dissolution Apparatus 5 – Reciprocating Disk (37 °C ± 0.5 °C) General Method The vessels of the dissolution method are usually either partially immersed in a water bath or heated by a jacket. The apparatus operates on the solution within the vessels for a predetermined amount of time, which depends on the method for the particular drug. The dissolution medium within the vessels is heated to 37 °C with an acceptable tolerance of ± 0.5 °C. The performance of dissolution apparatuses is highly dependent on hydrodynamics due to the nature of dissolution testing. The design of a dissolution apparatus and the way it is operated have a large impact on the hydrodynamics, and thus on its performance. Hydrodynamic studies in dissolution apparatuses have been carried out by researchers over the past few years with both experimental methods and numerical modeling such as computational fluid dynamics (CFD). The main target has been USP Dissolution Apparatus 2, because many researchers suspect that USP Dissolution Apparatus 2 provides inconsistent and sometimes faulty data. The hydrodynamic studies of USP Dissolution Apparatus 2 mentioned above clearly showed that it does have intrinsic hydrodynamic issues which could result in problems. 
In 2005, Professor Piero Armenante from the New Jersey Institute of Technology (NJIT) and Professor Fernando Muzzio from Rutgers University submitted a technical report to the FDA. This report discussed the intrinsic hydrodynamic issues with USP Dissolution Apparatus 2 based on the research findings of Armenante's group and Muzzio's group. More recently, hydrodynamic studies have been conducted in USP Dissolution Apparatus 4. Operation The general procedure for a dissolution test involves a liquid known as the dissolution medium, which is placed in the vessels of a dissolution unit. The medium can range from degassed or sonicated deionized water to pH-adjusted, chemically prepared solutions and media prepared with surfactants. Degassing the dissolution medium through sonication or other means is important, since the presence of dissolved gases may affect results. The drug is placed within the medium in the vessels after it has reached sufficient temperature, and then the dissolution apparatus is operated. Sample solutions collected from dissolution testing are commonly analyzed by HPLC or ultraviolet–visible spectroscopy. There are criteria known as 'release specifications' that samples tested must meet statistically, both as individual values and as the average of the whole. One such criterion is the parameter "Q", a percentage value specified in the monograph that denotes the quantity of dissolved active ingredient in a sample solution. If the initial sample analysis, known as S1 or stage 1 testing, fails to meet the acceptable value for Q, then additional testing, known as stage 2 and stage 3 testing, is required. S3 testing is performed only if S2 testing still fails the Q parameter. If there is a deviation from the acceptable Q values at S3, then an OOS (Out of Specification) investigation is generally initiated. References Drug manufacturing Quality control Pharmacokinetics
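As a rough sketch of how the staged S1/S2/S3 evaluation described above can be expressed in code, the Python function below applies the commonly cited USP <711> acceptance thresholds (each S1 unit at least Q+5, the 12-unit average at least Q with no unit below Q-15, and the 24-unit average at least Q with at most two units below Q-15 and none below Q-25); the exact thresholds should be treated as an assumption to verify against the current compendial text.

```python
def dissolution_stage_result(q, s1, s2=None, s3=None):
    """Return the stage at which a batch passes, or flag an OOS investigation."""
    # Stage 1: 6 units, every unit at least Q + 5 percentage points (assumed threshold).
    if all(x >= q + 5 for x in s1):
        return "Pass at S1"
    if s2 is None:
        return "Proceed to S2"
    # Stage 2: 12 units total, average >= Q and no unit below Q - 15 (assumed thresholds).
    units12 = s1 + s2
    if sum(units12) / len(units12) >= q and all(x >= q - 15 for x in units12):
        return "Pass at S2"
    if s3 is None:
        return "Proceed to S3"
    # Stage 3: 24 units total, average >= Q, at most 2 units below Q - 15,
    # and no unit below Q - 25 (assumed thresholds).
    units24 = units12 + s3
    below_15 = sum(1 for x in units24 if x < q - 15)
    if (sum(units24) / len(units24) >= q and below_15 <= 2
            and all(x >= q - 25 for x in units24)):
        return "Pass at S3"
    return "OOS investigation"

# Example: Q = 80 % dissolved; one sluggish unit at stage 1 forces stage 2 testing.
print(dissolution_stage_result(80, s1=[88, 90, 87, 84, 91, 82]))        # Proceed to S2
print(dissolution_stage_result(80, s1=[88, 90, 87, 84, 91, 82],
                               s2=[86, 85, 88, 83, 87, 84]))            # Pass at S2
```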
Dissolution testing
Chemistry
1,025
18,842,511
https://en.wikipedia.org/wiki/Klincewicz%20method
In thermodynamic modelling, the Klincewicz method is a predictive method based both on group contributions and on a correlation with some basic molecular properties. The method estimates the critical temperature, the critical pressure, and the critical volume of pure components. It is named after Karen Klincewicz Gleason, who developed it in 1984 in collaboration with Robert C. Reid. Model description As a group contribution method, the Klincewicz method correlates some structural information of a chemical molecule with the critical data. The structural information used consists of small functional groups, which are assumed to have no interactions. This assumption makes it possible to calculate the thermodynamic properties directly from the sums of the group contributions. The correlation variant of the method does not even use these functional groups; only the molecular weight and the number of atoms are used as molecular descriptors. The prediction of the critical temperature relies on knowledge of the normal boiling point, because the method only predicts the relation between the normal boiling point and the critical temperature, not the critical temperature directly. The critical volume and pressure, however, are predicted directly. Model quality The quality of the Klincewicz method is not superior to older methods; in particular, the method of Ambrose gives somewhat better results, as stated by the original authors and by Reid et al. The advantage of the Klincewicz method is that it is less complex. The quality and complexity of the Klincewicz method are comparable to those of the Lydersen method from 1955, which has been used widely in chemical engineering. The aspect in which the Klincewicz method is unique and useful is its alternative equations, in which only very basic molecular data such as the molecular weight and the atom count are used. Deviation diagrams The diagrams show estimated critical data of hydrocarbons together with experimental data. An estimate would be perfect if all data points lay directly on the diagonal line. Only the simple correlation of the Klincewicz method with the molecular weight and the atom count has been used in this example. Equations Klincewicz published two sets of equations. The first uses contributions of 35 different groups. These group-contribution-based equations give somewhat better results than the very simple equations based only on correlations with the molecular weight and the atom count. Group-contribution-based equations Equations based on correlation with molecular weight and atom count only Group contributions The group XCX is used to take the pairwise interaction of halogens connected to a single carbon into account. Its contribution has to be added once for two halogens but three times for three halogens (interactions between the halogens 1 and 2, 1 and 3, and 2 and 3). Example calculations Example calculation for acetone with group contributions *used normal boiling point Tb = 329.250 K Example calculation for acetone with molecular weight and atom count only Used molecular weight: 58.080 g/mol Used atom count: 10 For comparison, experimental values for Tc, Pc and Vc are 508.1 K, 47.0 bar and 209 cm3/mol, respectively. References Thermodynamic models
Klincewicz method
Physics,Chemistry
624
51,197,794
https://en.wikipedia.org/wiki/Metric%20Systems%20Corporation
Metric Systems Corporation (MSC) is an American company that develops, manufactures and sells wireless networking equipment and systems. Based in Carlsbad, California, MSC focuses on White spaces (radio) and other equipment and systems for the commercial, industrial, and government market place. History In the late 2000s Metric Systems Corporation was tasked by Microsoft to provide White Space test equipment for use by the Federal Communications Commission in its decision to allow unlicensed use of White Space. Following the FCC's release of final rules for the use of TV-band devices in late 2010, Metric Systems began developing its line of VHF/UHF White Space Broadband Radios. Patents and technology Metric Systems Corporation holds several patents in the United States and Canada which focus on Dynamic Spectrum Management and wide area wireless networking. Key patents include methods and apparatuses for adaptively setting frequency channels in a multi-point wireless networking system and maintain connectivity in the presence of noise and interference. Product use and field trials MSC's White Space products have been evaluated by a number of domestic and international agencies and companies. In February 2013 MSC's first generation White Space VHF/UHF Broadband Radios were delivered to Brazil's CPqD for evaluation. In July 2013, Metric Systems Corporation's White Space equipment was used by the Port of Pittsburgh for testing inland waterways. The first carrier-class TV Band White Space Radio, the RaptorX, was certified by the FCC for unlicensed use in 2015. In Summer 2016, MSC's production White Space Infrastructure Radio, the RaptorXR, began field trials in the mid-western United States for educational, financial, and public safety applications. References Wireless network organizations
Metric Systems Corporation
Technology
337
40,854,179
https://en.wikipedia.org/wiki/Neutron%20microscope
Neutron microscopes use neutrons focused by small-angle neutron scattering to create images by passing neutrons through an object to be investigated. The neutrons that aren't absorbed by the object hit scintillation targets where induced nuclear fission of lithium-6 can be detected and be used to produce an image. Neutrons have no electric charge, enabling them to penetrate substances to gain information about structure that is not accessible through other forms of microscopy. As of 2013, neutron microscopes offered four-fold magnification and 10-20 times better illumination than pinhole neutron cameras. The system increases the signal rate at least 50-fold. Neutrons interact with atomic nuclei via the strong force. This interaction can scatter neutrons from their original path and can also absorb them. Thus, a neutron beam becomes progressively less intense as it moves deeper within a substance. In this way, neutrons are analogous to x-rays for studying object interiors. Darkness in an x-ray image corresponds to the amount of matter the x-rays pass through. The density of a neutron image provides information on neutron absorption. Absorption rates vary by many orders of magnitude among the chemical elements. While neutrons have no charge, they do have spin and therefore a magnetic moment that can interact with external magnetic fields. Applications Neutron imaging has potential for studying so-called soft materials, as small changes in the location of hydrogen within a material can produce highly visible changes in a neutron image. Neutrons also offer unique capabilities for research in magnetic materials. The neutron's lack of electric charge means there is no need to correct magnetic measurements for errors caused by stray electric fields and charges. Polarized neutron beams orient neutron spins in one direction. This allows measurement of the strength and characteristics of a material's magnetism. Neutron-based instruments have the ability to probe inside metal objects — such as fuel cells, batteries and engines to study their internal structure. Neutron instruments are also uniquely sensitive to lighter elements that are important in biological materials. Shadowgraphs Shadowgraphs are images produced by casting a shadow on a surface, usually taken with a pinhole camera and are widely used for nondestructive testing. Such cameras provide low illumination levels that require long exposure times. They also provide poor spatial resolution. The resolution of such a lens cannot be smaller than the hole diameter. A good balance between illumination and resolution is obtained when the pinhole diameter is about 100 times smaller than the distance between the pinhole and the image screen, effectively making the pinhole an f/100 lens. The resolution of an f/100 pinhole is about half a degree. Wolter mirror Glass lenses and conventional mirrors are useless for working with neutrons, because they pass through such materials without refraction or reflection. Instead, the neutron microscope employs a Wolter mirror, similar in principle to grazing incidence mirrors used for x-ray and gamma-ray telescopes. When a neutron grazes the surface of a metal at a sufficiently small angle, it is reflected away from the metal surface at the same angle. When this occurs with light, the effect is called total internal reflection. The critical angle for grazing reflection is large enough (a few tenths of a degree for thermal neutrons) that a curved mirror can be used. 
Curved mirrors then allow an imaging system to be made. The microscope uses several reflective cylinders nested inside each other, to increase the surface area available for reflection. Measurement The neutron flux at the imaging focal plane is measured by a CCD imaging array with a neutron scintillation screen in front of it. The scintillation screen is made of zinc sulfide, a fluorescent compound, laced with lithium. When a thermal neutron is absorbed by a lithium-6 nucleus, it causes a fission reaction that produces helium, tritium and energy. These fission products cause the ZnS phosphor to light up, producing an optical image for capture by the CCD array. See also Electron microscope ISIS neutron and muon source LARMOR neutron microscope Microscope image processing X-ray microscope References Neutron instrumentation Microscopes
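For intuition about why a focusing mirror is such an improvement, the short sketch below reproduces the pinhole-camera trade-off described above: the angular resolution of a pinhole can be no finer than the hole diameter divided by the pinhole-to-screen distance, so an f/100 pinhole resolves only about half a degree. The numbers used are purely illustrative.

```python
import math

def pinhole_resolution_deg(hole_diameter_mm, screen_distance_mm):
    """Approximate angular resolution of a pinhole camera, in degrees.

    The resolution is limited to roughly the angle the hole subtends at the
    image screen, i.e. diameter / distance (in radians).
    """
    return math.degrees(hole_diameter_mm / screen_distance_mm)

# An "f/100" pinhole: the screen sits 100 hole-diameters behind the hole.
print(pinhole_resolution_deg(1.0, 100.0))   # ~0.57 degrees, i.e. about half a degree
```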
Neutron microscope
Chemistry,Technology,Engineering
819
56,798,968
https://en.wikipedia.org/wiki/Praskov%E2%80%B2ja%20Georgievna%20Parchomenko
Praskov′ja Georgievna Parchomenko (1886–1970) was a Soviet astronomer who discovered many minor planets between the years of 1930–1940. Parchomenko first discovered 1129 Neujmina on 8 August 1929 at the Simeiz Observatory. Less than a year later, Parchomenko discovered 1166 Sakuntala on June 27, 1930, only two nights before Karl Reinmuth viewed it. On 30 August 1970, Tamara Smirnova discovered 1857 Parchomenko which was named in honour of Parchomenko. References Балышев М.А. Историко-биографическое исследование жизни и творчества украинского астронома Прасковьи Георгиевны Пархоменко. Наука та наукознавство. 2018. №1. С. 114-137. Soviet astronomers 1886 births 1970 deaths Women astronomers Soviet women scientists
Praskov′ja Georgievna Parchomenko
Astronomy
263
595,479
https://en.wikipedia.org/wiki/Habitat%20II
Habitat II, the Second United Nations Conference on Human Settlements, was held in Istanbul, Turkey, from 3–14 June 1996, twenty years after Habitat I held in Vancouver, Canada, in 1976. Popularly called the "City Summit", it brought together high-level representatives of national and local governments, as well as private sector, NGOs, research and training institutions and the media. Universal goals of ensuring adequate shelter for all and human settlements safer, healthier and more livable cities, inspired by the Charter of the United Nations, were discussed and endorsed. Habitat II received its impetus from the 1992 United Nations Conference on Environment and Development and General Assembly resolution A/RES/47/180. The conference outcomes were integrated in the Istanbul Declaration and the Habitat Agenda, and adopted as a new global action plan to realize sustainable human settlements. The Secretary-General of the Conference was Dr. Wally N'Dow. The objectives for Habitat II were stated as: in the long term, to arrest the deterioration of global human settlements conditions and ultimately create the conditions for achieving improvements in the living environment of all people on a sustainable basis, with special attention to the needs and contributions of women and vulnerable social groups whose quality of life and participation in development have been hampered by exclusion and inequality, affecting the poor in general; to adopt a general statement of principles and commitments and formulate a related global plan of action capable of guiding national and international efforts through the first two decades of the next century. A new mandate for the United Nations Centre for Human Settlements (UNCHS) was derived to support and monitor the implementation of the Habitat Agenda adopted at the Conference and approved by the General Assembly. Habitat III met in Quito, Ecuador, from 17–20 October 2016. Previous negotiations The organizational session of the Preparatory Committee (PrepCom) for Habitat II was held at UN Headquarters in New York from 3–5 March 1993. Delegates elected the Bureau and took decisions regarding the organization and timing of the process. The First Substantive Session of the Preparatory Committee of the PrepCom was held in Geneva from 11–22 April 1994. Delegates agreed that the overriding objective of the Conference was to increase world awareness of the problems and potentials of human settlements as important inputs to social progress and economic growth, and to commit the world's leaders to making cities, towns and villages healthy, safe, just and sustainable. The Earth Negotiations Bulletin prepared a comprehensive report on the first session of the PrepCom. The PrepCom also took decisions on the organization of the Conference and financing, in addition to the areas of: National Objectives, International Objectives, Participation, Draft Statement of Principles and Commitments and Draft Global Plan of Action. Habitat II at the 49th United Nations General Assembly The Second Committee of the UN General Assembly addressed Habitat II from 8–16 November 1994. The Earth Negotiations Bulletin prepared a year-end update report on Habitat II preparations that included a report on the General Assembly's treatment of this agenda item. A draft resolution on the "United Nations Conference on Human Settlements (Habitat II)" (A/C.2/49/L.27) was first tabled by the co-sponsors, Algeria, on behalf of the G-77 and China, and Turkey. 
After informal consultations by members of the Second Committee, the Vice Chair, Raiko Raichev (Bulgaria) submitted a new draft resolution (A/C.2/49/L.61). This resolution was adopted as orally amended by the Committee on 9 December 1994.The operative part of the resolution, as contained in L.61, took note of the reports of the PrepCom on its organizational session and first substantive session and endorsed the decisions contained therein. The resolution approved the PrepCom's recommendation that a third substantive session of the PrepCom be held at UN Headquarters early in 1996 to complete the preparatory work for the Conference. The Second Session of the Habitat II Preparatory Committee The Second Substantive Session of the PrepCom was held in Nairobi, Kenya, from the 24 April - 5 May 1995. The Earth Negotiations Bulletin published a summary of the meeting. The Third Session of the Habitat II Preparatory Committee The Third Session of the Habitat II Preparatory Committee was held in New York from 5–17 February 1996. The Earth Negotiations Bulletin published a summary of the meeting See also UN-Habitat World Urban Forum Habitat I Habitat III United Nations Conference on Housing and Sustainable Urban Development (Habitat III) References United Nations conferences Diplomatic conferences in Turkey 20th-century diplomatic conferences 1996 in international relations Human settlement Urban planning United Nations Human Settlements Programme Turkey and the United Nations 1990s in Istanbul
Habitat II
Engineering
926
7,062,356
https://en.wikipedia.org/wiki/Anomalous%20diffusion
Anomalous diffusion is a diffusion process with a non-linear relationship between the mean squared displacement (MSD), ⟨r²(τ)⟩, and time. This behavior is in stark contrast to Brownian motion, the typical diffusion process described by Einstein and Smoluchowski, where the MSD is linear in time (namely ⟨r²(τ)⟩ = 2dDτ, with d being the number of dimensions and D the diffusion coefficient). It has been found that equations describing normal diffusion are not capable of characterizing some complex diffusion processes, for instance, diffusion processes in inhomogeneous or heterogeneous media, e.g. porous media. Fractional diffusion equations were introduced in order to characterize anomalous diffusion phenomena. Examples of anomalous diffusion in nature have been observed in ultra-cold atoms, harmonic spring-mass systems, scalar mixing in the interstellar medium, telomeres in the nucleus of cells, ion channels in the plasma membrane, colloidal particles in the cytoplasm, moisture transport in cement-based materials, and worm-like micellar solutions. Classes of anomalous diffusion Unlike typical diffusion, anomalous diffusion is described by a power law, ⟨r²(τ)⟩ = K_α τ^α, where K_α is the so-called generalized diffusion coefficient and τ is the elapsed time. The classes of anomalous diffusion are as follows: α < 1: subdiffusion. This can happen due to crowding or walls. For example, a random walker in a crowded room, or in a maze, is able to move as usual for small random steps, but cannot take large random steps, creating subdiffusion. This appears for example in protein diffusion within cells, or diffusion through porous media. Subdiffusion has been proposed as a measure of macromolecular crowding in the cytoplasm. α = 1: Brownian motion. 1 < α < 2: superdiffusion. Superdiffusion can be the result of active cellular transport processes or due to jumps with a heavy-tail distribution. α = 2: ballistic motion. The prototypical example is a particle moving at constant velocity v, for which ⟨r²(τ)⟩ = v²τ². α > 2: hyperballistic. It has been observed in optical systems. In 1926, using weather balloons, Lewis Fry Richardson demonstrated that the atmosphere exhibits super-diffusion. In a bounded system, the mixing length (which determines the scale of dominant mixing motions) is given by the Von Kármán constant according to the equation l = κz, where l is the mixing length, κ is the Von Kármán constant, and z is the distance to the nearest boundary. Because the scale of motions in the atmosphere is not limited, as it is in rivers or the subsurface, a plume continues to experience larger mixing motions as it increases in size, which also increases its diffusivity, resulting in super-diffusion. Models of anomalous diffusion The classification above allows one to measure the type of anomalous diffusion, but how does anomalous diffusion arise? There are many possible ways to mathematically define a stochastic process that has the right kind of power law. Some models are given here: long-range correlations between the signals, continuous-time random walks (CTRW) and fractional Brownian motion (fBm), and diffusion in disordered media. Currently the most studied types of anomalous diffusion processes are those involving the following: generalizations of Brownian motion, such as fractional Brownian motion and scaled Brownian motion; diffusion in fractals and percolation in porous media; and continuous-time random walks. These processes are of growing interest in cell biophysics, where the mechanism behind anomalous diffusion has direct physiological importance.
Of particular interest, works by the groups of Eli Barkai, Maria Garcia Parajo, Joseph Klafter, Diego Krapf, and Ralf Metzler have shown that the motion of molecules in live cells often shows a type of anomalous diffusion that breaks the ergodic hypothesis. This type of motion requires novel formalisms for the underlying statistical physics, because approaches using the microcanonical ensemble and the Wiener–Khinchin theorem break down. See also Long term correlations References External links Boltzmann's transformation, Parabolic law (animation) Anomalous interface shift kinetics (Computer simulations and Experiments) Physical chemistry
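In practice, the anomalous exponent α is extracted by fitting the MSD to the power law above on a log–log scale. The sketch below is a minimal illustration using an ordinary random walk, so the fitted α should come out close to 1 (normal diffusion); the trajectory length, step statistics and lag-time grid are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 2D random walk (ordinary Brownian-like motion, so alpha ~ 1)
steps = rng.normal(size=(100_000, 2))
trajectory = np.cumsum(steps, axis=0)

# Lag times (log-spaced) and the corresponding time-averaged MSD
lags = np.unique(np.logspace(0, 3, 30).astype(int))
msd = np.array([np.mean(np.sum((trajectory[lag:] - trajectory[:-lag])**2, axis=1))
                for lag in lags])

# Fit MSD ~ K * lag**alpha on a log-log scale; the slope is the exponent alpha
alpha, log_K = np.polyfit(np.log(lags), np.log(msd), 1)
print(f"fitted anomalous exponent alpha ~ {alpha:.2f}")  # close to 1 for normal diffusion
```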
Anomalous diffusion
Physics,Chemistry
861
56,481,472
https://en.wikipedia.org/wiki/Vaginal%20support%20structures
The vaginal support structures are those muscles, bones, ligaments, tendons, membranes and fascia of the pelvic floor that maintain the position of the vagina within the pelvic cavity and allow the normal functioning of the vagina and other reproductive structures in the female. Defects or injuries to these support structures in the pelvic floor lead to pelvic organ prolapse. Anatomical and congenital variations of vaginal support structures can predispose a woman to further dysfunction and prolapse later in life. The urethra is part of the anterior wall of the vagina and damage to the support structures there can lead to incontinence and urinary retention. Pelvic bones The support for the vagina is provided by muscles, membranes, tendons and ligaments. These structures are attached to the hip bones. These bones are the pubis, ilium and ischium. The interior surface of these pelvic bones and their projections and contours are used as attachment sites for the fascia, muscles, tendons and ligaments that support the vagina. These bones then fuse and attach to the sacrum behind the vagina and anteriorly at the pubic symphysis. Supporting ligaments include the sacrospinous and sacrotuberous ligaments. The sacrospinous ligament is unusual in that it is thin and triangular. Pelvic diaphragm The muscular pelvic diaphragm is composed of the bilateral levator ani and coccygeus muscles, and these attach to the inner pelvic surface. The iliococcygeus and pubococcygeus make up the levator ani muscle. The muscles pass behind the rectum. The levator ani surrounds the opening through which the urethra, rectum and vagina pass. The pubococcygeus muscle is subdivided into the pubourethralis, pubovaginal muscle and the puborectalis muscle. The names describe the attachments of the muscles to the urethra, vagina, anus, and rectum. These subdivisions are also called the pubourethralis, pubovaginalis, puboanalis, and puborectalis muscles, and sometimes collectively the pubovisceralis, since the muscle attaches to the viscera. Urogenital diaphragm (perineal membrane) The urogenital diaphragm, or perineal membrane, is present over the anterior pelvic outlet below the pelvic diaphragm. The exact description of its structure is controversial. Despite the controversy, MRI studies support the existence of the structure. Superficial and inferior muscles of the perineum (urogenital diaphragm): ischiocavernosus bulbospongiosus superficial transverse perinei The perineum attaches across the gap between the inferior pubic rami bilaterally and the perineal body. This grouping of muscles constricts to close the urogenital openings. The perineum supports and functions as a sphincter at the opening of the vagina. Other structures exist below the perineum that support the anus. Perineal body The perineal body is a pyramidal structure of muscle and connective tissue, part of which is located between the anus and vagina. It is a tendon formed at the point where the bulbospongiosus muscle, superficial transverse perineal muscle, and external anal sphincter muscle converge, and it is a major supportive structure of the pelvis and vagina. Below this, muscles and their fascia converge and become part of the perineal body. The lower vagina is attached to the perineal body by attachments from the pubococcygeus, perineal muscles, and the anal sphincter. The perineal body is made up of smooth muscle, elastic connective tissue fibers, and nerve endings. Above the perineal body are the vagina and the uterus.
Damage to and resulting weakness of the perineal body change the length of the vagina and predispose it to rectocele and enterocele. Endopelvic fascia and connective tissue The vagina is attached to the pelvic walls by endopelvic fascia. The peritoneum is the serous membrane that covers the fascia. This tissue provides additional support to the pelvic floor. The endopelvic fascia is one continuous sheet of tissue and varies in thickness. It permits some shifting of the pelvic structures. The fascia contains elastic collagen fibers in a 'mesh-like' structure. The fascia also contains fibroblasts, smooth muscle, and blood vessels. The cardinal ligament supports the apex of the vagina and derives some of its strength from vascular tissue. The endopelvic fascia attaches to the lateral pelvic wall via the arcus tendineus. Anterior vaginal support Not all agree on the amount of supportive tissue or fascia that exists in the anterior vaginal wall. The major point of contention is whether a vaginal fascial layer exists. Some texts do not describe a fascial layer. Other sources state that the fascia is present under the urethra, which is embedded in the anterior vaginal wall. Despite the disagreement, the urethra is embedded in the anterior vaginal wall. Lateral and mid support structures The midsection of the vagina is supported by its lateral attachments to the arcus tendineus. Some describe the pubocervical fascia as extending from the pubic symphysis to the anterior vaginal wall and cervix. Anatomists do not agree on its existence. Complications Vaginal support structures can be damaged or weakened during childbirth or pelvic surgery. Other conditions that repeatedly strain or increase pressure in the pelvic area can also compromise support. Examples are: chronic constipation chronic or violent coughing heavy lifting being overweight or obese See also Bone terminology Anatomical terms of location Ilium (bone) Human anatomical terms Pelvic organ prolapse References External links Vagina, Anatomical Atlases, an Anatomical Digital Library (2018) Human female reproductive system Women and sexuality Women's health Anatomy Gynaecology
Vaginal support structures
Biology
1,309
1,185,139
https://en.wikipedia.org/wiki/Resolved%20sideband%20cooling
Resolved sideband cooling is a laser cooling technique allowing cooling of tightly bound atoms and ions beyond the Doppler cooling limit, potentially to their motional ground state. Aside from the curiosity of having a particle at zero point energy, such preparation of a particle in a definite state with high probability (initialization) is an essential part of state manipulation experiments in quantum optics and quantum computing. Historical notes As of the writing of this article, the scheme behind what we refer to as resolved sideband cooling today is attributed to D. J. Wineland and H. Dehmelt, in their article "Proposed laser fluorescence spectroscopy on mono-ion oscillator III (sideband cooling)". The clarification is important, as at the time of the latter article, the term also designated what we call today Doppler cooling, which was experimentally realized with atomic ion clouds in 1978 by W. Neuhauser and independently by D. J. Wineland. An experiment that demonstrates resolved sideband cooling unequivocally in its contemporary meaning is that of Diedrich et al. A similarly unequivocal realization with non-Rydberg neutral atoms was demonstrated in 1998 by S. E. Hamann et al. via Raman cooling. Conceptual description Resolved sideband cooling is a laser-cooling technique that can be used to cool strongly trapped atoms to the quantum ground state of their motion. The atoms are usually precooled using Doppler laser cooling. Subsequently, resolved sideband cooling is used to cool the atoms beyond the Doppler cooling limit. A cold trapped atom can be treated to a good approximation as a quantum-mechanical harmonic oscillator. If the spontaneous decay rate is much smaller than the vibrational frequency ν of the atom in the trap, the energy levels of the system will be an evenly spaced frequency ladder, with adjacent levels spaced by an energy ħν. Each level is denoted by a motional quantum number n, which describes the amount of motional energy present at that level. These motional quanta can be understood in the same way as for the quantum harmonic oscillator. A ladder of levels will be available for each internal state of the atom; for example, both the ground (g) and excited (e) states have their own ladder of vibrational levels. Consider a two-level atom whose ground state is denoted by g and whose excited state is denoted by e. Efficient laser cooling occurs when the frequency of the laser beam is tuned to the red sideband, i.e. ω = ω₀ − ν, where ω₀ is the internal atomic transition frequency corresponding to a transition between g and e, and ν is the harmonic-oscillation frequency of the atom. In this case the atom undergoes the transition |g, n⟩ → |e, n − 1⟩, where |a, m⟩ represents the state of an ion whose internal atomic state is a and whose motional state is m. If the recoil energy of the atom is negligible compared with the vibrational quantum energy, subsequent spontaneous emission occurs predominantly at the carrier frequency, which means that the vibrational quantum number remains constant. This transition is |e, n − 1⟩ → |g, n − 1⟩. The overall effect of one of these cycles is to reduce the vibrational quantum number of the atom by one. To cool to the ground state, this cycle is repeated many times until n = 0 is reached with a high probability. Theoretical basis The core process that provides the cooling assumes a two-level system that is well localized compared to the wavelength (λ) of the transition (Lamb–Dicke regime), such as a trapped and sufficiently cooled ion or atom.
Modeling the system as a harmonic oscillator interacting with a classical monochromatic electromagnetic field yields (in the rotating wave approximation) the Hamiltonian with and where is the number operator, is the frequency spacing of the oscillator, is the Rabi frequency due to the atom-light interaction, is the laser detuning from , is the laser wave vector. That is, incidentally, the Jaynes–Cummings Hamiltonian used to describe the phenomenon of an atom coupled to a cavity in cavity QED. The absorption(emission) of photons by the atom is then governed by the off-diagonal elements, with probability of a transition between vibrational states proportional to , and for each there is a manifold coupled to its neighbors with strength proportional to . Three such manifolds are shown in the picture. If the transition linewidth satisfies , a sufficiently narrow laser can be tuned to a red sideband, . For an atom starting at , the predominantly probable transition will be to . This process is depicted by arrow "1" in the picture. In the Lamb–Dicke regime, the spontaneously emitted photon (depicted by arrow "2") will be, on average, at frequency , and the net effect of such a cycle, on average, will be the removing of motional quanta. After some cycles, the average phonon number is , where is the ratio of the intensities of the red to blue -th sidebands. In practice, this process is normally done on the first motional sideband for optimal efficiency. Repeating the processes many times while ensuring that spontaneous emission occurs provides cooling to . More rigorous mathematical treatment is given in Turchette et al. and Wineland et al. Specific treatment of cooling multiple ions can be found in Morigi et al. Experimental implementations For resolved sideband cooling to be effective, the process needs to start at sufficiently low . To that end, the particle is usually first cooled to the Doppler limit, then some sideband cooling cycles are applied, and finally, a measurement is taken or state manipulation is carried out. A more or less direct application of this scheme was demonstrated by Diedrich et al. with the caveat that the narrow quadrupole transition used for cooling connects the ground state to a long-lived state, and the latter had to be pumped out to achieve optimal cooling efficiency. It is not uncommon, however, that additional steps are needed in the process, due to the atomic structure of the cooled species. Examples of that are the cooling of ions and the Raman sideband cooling of atoms. Example: cooling of ions The energy levels relevant to the cooling scheme for ions are the S1/2, P1/2, P3/2, D3/2, and D5/2, which are additionally split by a static magnetic field to their Zeeman manifolds. Doppler cooling is applied on the dipole S1/2 - P1/2 transition (397 nm), however, there is about 6% probability of spontaneous decay to the long-lived D3/2 state, so that state is simultaneously pumped out (at 866 nm) to improve Doppler cooling. Sideband cooling is performed on the narrow quadrupole transition S1/2 - D5/2 (729 nm), however, the long-lived D5/2 state needs to be pumped out to the short lived P3/2 state (at 854 nm) to recycle the ion to the ground S1/2 state and maintain cooling performance. One possible implementation was carried out by Leibfried et al. and a similar one is detailed by Roos. 
For each data point in the 729 nm absorption spectrum, a few hundred iterations of the following are executed: the ion is Doppler cooled with 397 nm and 866 nm light, with 854 nm light on as well the ion is spin polarized to the S1/2(m=-1/2) state by applying a 397 nm light for the last few moments of the Doppler cooling process sideband cooling loops are applied at the first red sideband of the D5/2(m=-5/2) 729 nm transition to ensure the population ends up in the S1/2(m=-1/2) state, another 397 nm pulse is applied manipulation is carried out and analysis is carried out by applying 729 nm light at the frequency of interest detection is carried out with 397 nm and 866 nm light: discrimination between dark (D) and bright (S) state is based on a pre-determined threshold value of fluorescence counts Variations of this scheme relaxing the requirements or improving the results are being investigated/used by several ion-trapping groups. Example: Raman sideband cooling of atoms A Raman transition replaces the one-photon transition used in the sideband above by a two-photon process via a virtual level. In the cooling experiment carried out by Hamann et al., trapping is provided by an isotropic optical lattice in a magnetic field, which also provides Raman coupling to the red sideband of the Zeeman manifolds. The process followed in is: preparation of cold sample of atoms is carried out in optical molasses, in a magneto-optic trap atoms are allowed to occupy a 2D, near resonance lattice the lattice is changed adiabatically to a far off resonance lattice, which leaves the sample sufficiently well cooled for sideband cooling to be effective (Lamb-Dicke regime) a magnetic field is turned on to tune the Raman coupling to the red motional sideband relaxation between the hyperfine states is provided by a pump/repump laser pair after some time, pumping is intensified to transfer the population to a specific hyperfine state lattice is turned off and time of flight techniques are employed to perform Stern-Gerlach analysis See also Laser cooling Amplitude modulation References Laser applications Cooling technology Atomic physics Plasma technology and applications
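As a purely illustrative toy model of the cooling cycle described above (not the full master-equation treatment), the sketch below propagates a thermal phonon distribution through repeated red-sideband cycles in an idealized Lamb–Dicke limit, where each complete cycle removes exactly one motional quantum and the n = 0 state is dark to the red sideband. The initial mean phonon number, sample size and cycle count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Start from a thermal (geometric) phonon distribution, as left by Doppler precooling
n_bar_initial = 10.0
p = 1.0 / (n_bar_initial + 1.0)
phonons = rng.geometric(p, size=5000) - 1   # samples of n = 0, 1, 2, ...

n_cycles = 30
history = [phonons.mean()]
for _ in range(n_cycles):
    # Red-sideband pulse followed by carrier spontaneous emission:
    # in this idealized picture each full cycle takes |g, n> to |g, n-1>,
    # while the n = 0 state does not couple to the red sideband and stays put.
    phonons = np.maximum(phonons - 1, 0)
    history.append(phonons.mean())

print("mean phonon number per cycle:", [round(h, 2) for h in history[:8]], "...")
print("final mean phonon number:", history[-1])
```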
Resolved sideband cooling
Physics,Chemistry
1,940
54,881,118
https://en.wikipedia.org/wiki/Cytokine%20%28journal%29
Cytokine is a monthly peer-reviewed academic journal covering the study of cytokines as they relate to multiple disciplines, including molecular biology, immunology, and genetics. It was established in 1989 and is published by Elsevier. It is the official journal of the International Cytokine & Interferon Society. The editor-in-chief is Dhan Kalvakolanu (University of Maryland School of Medicine). According to the Journal Citation Reports, the journal has a 2016 impact factor of 3.488. References External links Elsevier academic journals Immunology journals Academic journals established in 1989 Monthly journals Biochemistry journals English-language journals Academic journals associated with international learned and professional societies
Cytokine (journal)
Chemistry
143
42,802,042
https://en.wikipedia.org/wiki/Geobacter%20psychrophilus
Geobacter psychrophilus is a Fe(III)-reducing bacterium. It is Gram-negative, slightly curved, rod-shaped and motile by means of monotrichous flagella. Its type strain is P35T (=ATCC BAA-1013T =DSM 16674T =JCM 12644T). References Further reading External links Type strain of Geobacter psychrophilus at BacDive - the Bacterial Diversity Metadatabase Bacteria described in 2005 Thermodesulfobacteriota
Geobacter psychrophilus
Biology
114
3,565,395
https://en.wikipedia.org/wiki/Columbia%E2%80%93Wrightsville%20Bridge
The Columbia–Wrightsville Bridge, officially the Veterans Memorial Bridge, spans the Susquehanna River between Columbia and Wrightsville, Pennsylvania, and carries Pennsylvania Route 462 and BicyclePA Route S. Built originally as the Lancaster-York Intercounty Bridge, construction began in 1929, and the bridge opened September 30, 1930. On November 11, 1980, it was officially dedicated as Veterans Memorial Bridge, though it is still referenced locally as the Columbia–Wrightsville Bridge. In nominating the present Columbia–Wrightsville Bridge as an engineering landmark, the Pennsylvania section of the American Society of Civil Engineers noted that it is "a splendid example of the graceful multiple-span, reinforced-concrete arched form popular in early 20th Century highway bridges in the United States." The bridge is designated State Route 462 and is listed on the National Register of Historic Places, and is also a National Historic Civil Engineering Landmark. Instead of being replaced by a name such as the Old Lincoln Highway, its name is a kept part of the historic Lincoln Highway in local naming, the nation's first transcontinental highway, connecting a series of local highways and stretching from New York City to San Francisco. The opening in 1940 of the cross-state Pennsylvania Turnpike, a part of Interstate 76, subsequently provided faster passage. History Construction Designed by James B. Long and built by Glen Wiley and Glenway Maxon (Wiley-Maxon Construction Company), it cost $2,484,000 (equivalent to $ million in ), plus an early completion bonus of $56,400 (). Constructed of reinforced concrete, the bridge ( including spans over land) has 27 river piers, 22 approach piers, a two-lane roadway, and a sidewalk. of concrete and 8 million pounds of steel reinforcing rods were used, and coffer dams were built to aid in construction. Each span consists of three separate concrete ribs connected at five points by horizontal concrete struts, with the longest span measuring . Tolls of 25 cents per vehicle were charged when the bridge first opened () and ended on January 31, 1943, when the bond issue was retired. Some time after World War II, the original bridge lights were replaced with newer lighting. Two of the original bronze light fixtures can still be seen on the front lawn of the Frank Sahd Salvage Center along Route 462 in Columbia. Current In the 1970s, the state considered closing the bridge permanently due to the recently constructed Wright's Ferry Bridge nearby, but local residents objected. In the mid-1970s, it was given a major overhaul instead, and was closed only temporarily. A few years later, the bridge was once again closed briefly so that a weather-resistant coating could be applied to the roadway. Today, the bridge is maintained by PennDOT and is still considered the world's longest concrete multiple-arch bridge. Its annual average daily traffic (AADT) was 10,350 as of 2004. It is the fifth bridge to span the river at this general location. As of the first quarter of 2020, PennDOT said plans were underway to restore the bridge, while also: improving roadway intersections at both ends, connecting pedestrian and bicycle paths to river-side parks, and possibly addressing annual mayfly swarms by adding lights beneath the bridge. The project has an estimated $54 million cost and construction was projected to begin in the winter of 2022–2023. In June 2023, an inspection of the bridge revealed cracks in the floor beams and columns that support the deck. 
The discovery resulted in a weight restriction of being applied, except for emergency vehicles that need to cross. A PennDOT spokesman said the bridge is safe and the limit is to keep the deterioration from getting worse. Interim repairs are planned and, with an expected need for redesign, the already-scheduled bridge rehabilitation is being pushed back to 2025. The other present-day Columbia-Wrightsville bridge is the Wright's Ferry Bridge, the sixth bridge to cross the river between the two towns. Also known as the Route 30 bridge, it stands about north of the Veterans Memorial Bridge. (Wright's Ferry was one of the original names of Columbia.) G.A. & F.C. Wagman, Inc. began its construction in March 1969, and the bridge opened on November 21, 1972. It was commissioned by the Commonwealth of Pennsylvania in the 1960s to relocate Route 30 and bypass the river towns of Wrightsville and Columbia. Costing $12 million, it is constructed of reinforced concrete and steel and has 46 equal sections on 45 piers. US 30 crosses it as an expressway (4-lane divided highway), and there is no walkway. Tolls were never collected on this bridge. About a year after its opening, the bridge was shut down briefly so that an experimental weather-resistant coating could be applied to its roadway. Previous bridges First bridge Construction of the first Columbia–Wrightsville Bridge was begun in 1812 and completed December 5, 1814, by J. Wolcott, H. Slaymaker, S. Slaymaker at a total cost of $231,771 (equal to $ today), which was underwritten by the newly formed Columbia Bank and Bridge Company. The bridge was long and wide and had 54 piers and twin carriageways. Constructed of wood and stone, the covered bridge also included a wooden roof, a whitewashed interior and openings in its wooden sides to admit light and allow a view of the river and surrounding areas. It stood immediately south of the present-day Wright's Ferry Bridge along Route 30. Tolls were $1.50 () for a wagon and six horses, and six cents for pedestrians (). It was considered the longest covered bridge in the world at the time. The bridge accommodated east–west traffic across the Susquehanna River for 14 years before being destroyed by ice, high water and severe weather on February 5, 1832. Second bridge Construction of the second Columbia–Wrightsville Bridge, another covered bridge, started mid-1832 and was completed in 1834 (opening on July 8, 1834) by James Moore and John Evans at a cost of $157,300, equal to $ today. It was long and wide and also enjoyed the distinction of being the world's longest covered bridge. The wood and stone structure had 27 piers, a carriageway, walkway, and two towpaths to guide canal traffic across the river. Tolls were $1.00 for a wagon and 6 horses (), and 6 cents per pedestrian (). Much of the mostly oak timber used in its construction was salvaged from the previous bridge. Its roof was covered with shingles, its sides with weatherboard, and its interior was whitewashed. The structure was modified in 1840 by the Canal Company at a cost of $40,000 (equal to $ today) concurrent with the construction of the Wrightsville Dam. Towpaths of different levels and with sidewalls were added to prevent horses from falling into river, as happened several times when the river flooded. The roof of the lower path formed the floor of upper path. In this way, canal boats were towed across the river from the Pennsylvania Canal on the Columbia side to the Susquehanna and Tidewater Canal at Wrightsville. 
Sometime after 1846, a double-track railway was added, linking the Philadelphia and Columbia Railroad to the Northern Central Railway. Due to fear of fire caused by locomotives, rail cars were pulled across the bridge by teams of mules or horses. The second bridge's role during the Civil War To prevent the advance of Confederate troops across the river from the Wrightsville (York County) side during the Civil War, the bridge was burned by Union militia under Maj. Granville O. Haller and Col. Jacob G. Frick on June 28, 1863. Civilian volunteers from Columbia had mined the bridge at the fourth span from the Wrightsville side, originally hoping to drop the whole span into the river, but when the charges were detonated, only small portions of the support arch splintered, leaving the span passable. As Confederates advanced onto the bridge, Union forces set fire to it near the Wrightsville side. Earlier they had saturated the structure with crude oil from a Columbia refinery. The entire structure soon caught fire and completely burned in six hours. Confederate generals Jubal A. Early and John B. Gordon had originally planned to save the bridge despite orders from General Robert E. Lee to burn it, and Union forces under the command of Colonel Jacob G. Frick had burned the bridge, originally hoping to defend and save it. Afterwards, the Columbia Bank and Bridge Company appealed to the federal government for reimbursement for damages incurred from the bridge burning, but none were ever paid. Conservative estimates put the cost of damages with interest today at well over $170 million. In 1864, the bank sold all interest in the bridge and bridge piers to the Pennsylvania Railroad for $57,000, equal to $ today. Third bridge Construction of the third Columbia-Wrightsville bridge was started in 1868 by the Pennsylvania Railroad. The covered bridge (5,390 feet long) was completed later that year at a cost of $400,000, equal to $ today. Built of stone, wood, and steel, it included 27 piers, a carriageway, railway, and walkway. It was destroyed September 30, 1896 by the 1896 Cedar Keys hurricane. Fourth bridge Construction of the fourth Columbia-Wrightsville bridge, known as the Pennsylvania Railroad "Iron Bridge," started April 16, 1897, and was completed May 11, and was considered the fastest bridge-building job in the world at the time. A steel truss bridge made of prefabricated sections, it was designed to be resistant to fire, ice, water and wind, elements that had destroyed previous wooden structures. Like the previous bridges, tolls were collected to recover a portion of the half-million dollar investment, equal to $ today. Built on the same 27 piers as the previous two bridges, it opened June 7, 1897. The iron and prefabricated steel structure had a railway to carry rail traffic for the York Branch of the Pennsylvania Railroad, and twin carriageways that were shared with pedestrians. Tolls were 20 cents () for vehicles (plus four cents per passenger; ) and three cents for pedestrians (). The bridge remained uncompleted because a planned upper deck was never built. With the completion of the Lincoln Highway in 1925, vehicular traffic routinely jammed in the late 1920s when vehicles had to wait for trains to pass before crossing the bridge, since the bridge was shared with rail traffic. A fifth bridge (Veterans' Memorial Bridge) was planned and erected to accommodate vehicular and pedestrian traffic. 
The "Iron Bridge" carried passenger trains until 1954 and freight traffic until March 13, 1958, and was dismantled for scrap starting in 1963 and ending in November 1964. Its stone piers, dating to pre-Civil War times, still stand today, running parallel to the north side of the Veterans' Memorial Bridge. See also List of bridges documented by the Historic American Engineering Record in Pennsylvania List of crossings of the Susquehanna River References Further reading Town historical markers and plaques provided by Columbia Borough and Rivertownes PA USA External links https://web.archive.org/web/20070919222620/http://www.columbiapa.net/about.html https://web.archive.org/web/20070709072013/http://www.columbiapaonline.com/ http://www.rivertownes.org/Features/Crossings/Crossings.htm http://www.rivertownes.org/Features/Burning/storm_fw2b.htm https://web.archive.org/web/20070825001139/http://www.civilwaralbum.com/misc6/columbia_wrightsville_bridge1.htm http://www.rivertownes.org/townes.htm Vital statistics, fifth bridge: http://www.yorkblog.com/yorktownsquare/2005/11/columbiawrightsville-bridge-ce Open-spandrel deck arch bridges in the United States Bridges completed in 1930 Road bridges on the National Register of Historic Places in Pennsylvania Lincoln Highway Bridges over the Susquehanna River Historic American Engineering Record in Pennsylvania Historic Civil Engineering Landmarks Bridges in Lancaster County, Pennsylvania Bridges in York County, Pennsylvania Former toll bridges in Pennsylvania National Register of Historic Places in Lancaster County, Pennsylvania National Register of Historic Places in York County, Pennsylvania Concrete bridges in the United States
Columbia–Wrightsville Bridge
Engineering
2,566
209,005
https://en.wikipedia.org/wiki/Noon
Noon (also known as noontime or midday) is 12 o'clock in the daytime. It is written as 12 noon, 12:00 m. (for meridiem, literally 12:00 midday), 12 p.m. (for post meridiem, literally "after midday"), 12 pm, or 12:00 (using a 24-hour clock) or 1200 (military time). Solar noon is the time when the Sun appears to contact the local celestial meridian. This is when the Sun reaches its apparent highest point in the sky, at 12 noon apparent solar time and can be observed using a sundial. The local or clock time of solar noon depends on the date, longitude, and time zone, with Daylight Saving Time tending to place solar noon closer to 1:00pm. Etymology The word noon is derived from Latin nona hora, the ninth canonical hour of the day, in reference to the Western Christian liturgical term Nones (liturgy), (number nine), one of the seven fixed prayer times in traditional Christian denominations. The Roman and Western European medieval monastic day began at 6:00 a.m. (06:00) at the equinox by modern timekeeping, so the ninth hour started at what is now 3:00 p.m. (15:00) at the equinox. In English, the meaning of the word shifted to midday and the time gradually moved back to 12:00 local timethat is, not taking into account the modern invention of time zones. The change began in the 12th century and was fixed by the 14th century. Solar noon Solar noon, also known as the local apparent solar noon and Sun transit time (informally high noon), is the moment when the Sun contacts the observer's meridian (culmination or meridian transit), reaching its highest position above the horizon on that day and casting the shortest shadow. This is also the origin of the terms ante meridiem (a.m.) and post meridiem (p.m.), as noted below. The Sun is directly overhead at solar noon at the Equator on the equinoxes, at the Tropic of Cancer (latitude N) on the June solstice and at the Tropic of Capricorn ( S) on the December solstice. In the Northern Hemisphere, north of the Tropic of Cancer, the Sun is due south of the observer at solar noon; in the Southern Hemisphere, south of the Tropic of Capricorn, it is due north. When the Sun contacts the observer's meridian at the observer's zenith, it is perceived to be directly overhead and no shadows are cast. This occurs at Earth's subsolar point, a point which moves around the tropics throughout the year. The elapsed time from the local solar noon of one day to the next is exactly 24 hours on only four instances in any given year. This occurs when the effects of Earth's obliquity of ecliptic and its orbital speed around the Sun offset each other. These four days for the current epoch are centered on 11 February, 13 May, 26 July, and 3 November. It occurs at only one particular line of longitude in each instance. This line varies year to year, since Earth's true year is not an integer number of days. This event time and location also varies due to Earth's orbit being gravitationally perturbed by the planets. These four 24-hour days occur in both hemispheres simultaneously. The precise Coordinated Universal Times for these four days also mark when the opposite line of longitude, 180° away, experiences precisely 24 hours from local midnight to local midnight the next day. Thus, four varying great circles of longitude define from year to year when a 24-hour day (noon to noon or midnight to midnight) occurs. The two longest time spans from noon to noon occur twice each year, around 20 June (24 hours plus 13 seconds) and 21 December (24 hours plus 30 seconds). 
The shortest time spans occur twice each year, around 25 March (24 hours minus 18 seconds) and 13 September (24 hours minus 22 seconds). For the same reasons, solar noon and "clock noon" are usually not the same. The equation of time shows that the reading of a clock at solar noon will be higher or lower than 12:00 by as much as 16 minutes. Additionally, due to the political nature of time zones, as well as the application of daylight saving time, it can be off by more than an hour. Nomenclature In the US, noon is commonly indicated by 12 p.m., and midnight by 12 a.m. While some argue that such usage is "improper" based on the Latin meaning (a.m. stands for ante meridiem and p.m. for post meridiem, meaning "before midday" and "after midday" respectively), digital clocks are unable to display anything else, and an arbitrary decision must be made. An earlier standard of indicating noon as "12M" or "12m" (for "meridies"), which was specified in the U.S. GPO Government Style Manual, has fallen into relative obscurity; the current edition of the GPO makes no mention of it. However, due to the lack of an international standard, the use of "12 a.m." and "12 p.m." can be confusing. Common alternative methods of representing these times are: to use a 24-hour clock (00:00 and 12:00, 24:00; but never 24:01) to use "12 noon" or "12 midnight" (though "12 midnight" may still present ambiguity regarding the specific date) to specify midnight as between two successive days or dates (as in "midnight Saturday/Sunday" or "midnight December 14/15") to avoid those specific times and to use "11:59 p.m." or "12:01 a.m." instead. (This is common in the travel industry to avoid confusion to passengers' schedules, especially train and plane schedules.) See also Afternoon Analemma Dipleidoscope Hour angle Solar azimuth angle Notes References External links Generate a solar noon calendar for your location U.S. Government Printing Office Style Manual (2008), 30th edition Shows the hour and angle of sunrise, noon, and sunset drawn over a map. Real Sun Time - gives you an exact unique time to the sun, with yours GPS coordinates position. Parts of a day Time in astronomy
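The offset between clock noon and solar noon can be estimated from the longitude correction and the equation of time. The sketch below uses a common sinusoidal approximation of the equation of time (accurate to roughly a minute) together with the 4-minutes-per-degree longitude correction; the sample day, location and UTC offset are arbitrary, and daylight saving time is ignored.

```python
import math

def solar_noon_clock_time(day_of_year, longitude_deg_east, utc_offset_hours):
    """Approximate local clock time (in hours) of solar noon.

    Uses a common sinusoidal approximation of the equation of time,
    good to about a minute; daylight saving time is not applied.
    """
    b = 2 * math.pi * (day_of_year - 81) / 365.0
    eot_minutes = 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

    # Time-zone meridian: 15 degrees of longitude per hour of UTC offset
    lstm = 15.0 * utc_offset_hours
    longitude_correction_minutes = 4.0 * (lstm - longitude_deg_east)

    return 12.0 + (longitude_correction_minutes - eot_minutes) / 60.0

# Example: day 300 (late October) at 10 degrees East, UTC+1 (hypothetical location)
t = solar_noon_clock_time(300, 10.0, 1)
print(f"solar noon at about {int(t):02d}:{int(round((t % 1) * 60)):02d} local clock time")
```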
Noon
Astronomy,Technology
1,355
1,220,007
https://en.wikipedia.org/wiki/Spartina
Spartina is a genus of plants in the grass family, frequently found in coastal salt marshes. Species in this genus are commonly known as cordgrass or cord-grass, and are native to the coasts of the Atlantic Ocean in western and southern Europe, north-western and southern Africa, the Americas and the islands of the southern Atlantic Ocean; one or two species also occur on the western coast of North America and in freshwater habitats inland in the Americas. The highest species diversity is on the east coasts of North and South America, particularly Florida. They form large, often dense colonies, particularly on coastal salt marshes, and grow quickly. The species vary in size from 0.3–2 m tall. Many of the species will produce hybrids if they come into contact. Taxonomy In 2014, the taxon Spartina was subsumed into the genus Sporobolus and reassigned to the taxonomic status of section, but it may still be possible to see Spartina referred to as an accepted genus. In 2019, an interdisciplinary team of experts from all continents (except for Antarctica) coauthored a report published in the journal Ecology supporting Spartina as a genus. The section name Spartina is derived from (), the Greek word for a cord made from Spanish broom (Spartium junceum). Species The following species are recognised in the section Spartina: Subsection Alterniflori P.M.Peterson & Saarela Sporobolus alterniflorus – smooth cordgrass – Atlantic coasts of North and South America, West Indies Sporobolus anglicus (C.E.Hubb.) P.M.Peterson & Saarela - Great Britain, introduced to Europe, China, Australia, New Zealand, and North America Sporobolus foliosus – California cordgrass – California, Baja California, Baja California Sur Sporobolus longispicus – Argentina, Uruguay Sporobolus maritimus (Curtis) P.M.Peterson & Saarela - Europe, Africa Sporobolus × townsendii – Townsend's cordgrass – western Europe Subsection Ponceletia (Thouars) P.M.Peterson & Saarela Sporobolus arundinacea (Thouars) Carmich – Tristan da Cunha, Amsterdam Island in Indian Ocean Sporobolus mobberleyanus P.M.Peterson & Saarela – Tristan da Cunha, Amsterdam Island in Indian Ocean Sporobolus spartinae – Gulf cordgrass – Atlantic coast of North America from Florida to Argentina, incl the Caribbean and the Gulf of Mexico Subsection Spartina (Schreb) P.M.Peterson & Saarela Sporobolus bakeri – sand cordgrass – south-eastern US Sporobolus coarctatus – Brazil, Argentina, Uruguay Sporobolus cynosuroides – big cordgrass – eastern US (TX to MA); Bahamas Sporobolus × eatonianus – eastern North America Sporobolus hookerianus – alkali cordgrass – western Canada, western + central US, Chihuahua, Jalisco, Michoacán Sporobolus michauxianus – prairie cordgrass – from Northwest Territories to Texas and Newfoundland Sporobolus montevidensis – denseflower cordgrass – Venezuela, Brazil, Argentina, Uruguay, Chile Sporobolus pumilus – saltmeadow cordgrass – east coast of North America from Labrador to Tamaulipas; West Indies Sporobolus versicolor – Mediterranean, Azores Ecology Species of the section Spartina are used as food plants by the larvae of some Lepidoptera species including the Aaron's skipper, which feeds exclusively on smooth cordgrass, and the engrailed moth. Some species of the section Spartina are considered as ecosystem engineers that can strongly influence the physical and biological environment. This is particularly important in areas where invasive Spartina species significantly alter their new environment, with impacts to native plants and animals. 
As an invasive species Three of the Spartina species have become invasive plants in some countries. In British Columbia, Sporobolus anglica, also known as English cordgrass, is an aggressive, aquatic alien that invades mud flats, salt marshes and beaches, out-competing native plants, spreading quickly over mud flats and leaving large Spartina meadows. It is also invasive in China and California. Sporobolus montevidensis and Sporobolus pumilus have become invasive on the Iberian Peninsula and the west coast of the United States Sporobolus alterniflorus and its hybrids with other Spartina species are invasive in numerous locations around the globe, including China, California, England, France, and Spain. Cultivation Species of the section Spartina have been planted to reclaim estuarine areas for farming, to supply fodder for livestock, and to prevent erosion. Various members of the genus (especially Sporobolus alterniflorus and its derivatives, Sporobolus anglicus and Sporobolus × townsendii) have spread outside of their native boundaries and become invasive. Big cordgrass (S. cynosuroides) is used in the construction of bull's eye targets for sports archery. A properly constructed target can stop an arrow safely without damage to the arrowhead as it lodges in the target. References Halophytes Grasses of Africa Grasses of Europe Grasses of North America Grasses of South America
Spartina
Chemistry
1,123
76,621,426
https://en.wikipedia.org/wiki/NGC%20646
NGC 646 is a large barred spiral galaxy located in the constellation Hydrus. Its speed relative to the cosmic microwave background is 8,145 ± 19 km/s, which corresponds to a Hubble distance of 120.1 ± 8.4 Mpc (~392 million ly). NGC 646 was discovered by British astronomer John Herschel in 1834. It forms an interacting galaxy pair. Luminosity The luminosity class of NGC 646 is III. It has a surface brightness of 14.69 mag/arcmin². NGC 646 is a low surface brightness galaxy (LSB). LSB galaxies are diffuse (D) galaxies with a surface brightness at least one magnitude fainter than that of the ambient night sky. Distance The Hubble distance of PGC 6014, the galaxy to the east of NGC 646, is 106.4 ± 7.5 Mpc (~347 million ly). A distance of approximately 45 million light years therefore separates these two galaxies, which appear to be neighbors on the sky. Their gravitational interaction, if there is any interaction, should therefore be of short duration. See also List of NGC objects (1–1000) Interacting galaxy External links NGC 646 at SIMBAD References Hydrus Barred spiral galaxies Interacting galaxies 0646 Astronomical objects discovered in 1834 Discoveries by John Herschel
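The quoted distance follows directly from Hubble's law, d = v / H0. The sketch below back-computes the Hubble constant implied by the two figures given above and converts the distance to light-years using the standard factor of roughly 3.26 million light-years per megaparsec; the numbers are taken from the article text itself.

```python
# Hubble's law: recession velocity v = H0 * d  =>  d = v / H0
v = 8145.0          # km/s, velocity relative to the CMB quoted above
d_mpc = 120.1       # Mpc, Hubble distance quoted above

h0_implied = v / d_mpc        # km/s per Mpc implied by the two figures
d_mly = d_mpc * 3.262         # 1 Mpc is about 3.262 million light-years

print(f"implied H0 ~ {h0_implied:.1f} km/s/Mpc")          # ~67.8, a commonly used value
print(f"distance   ~ {d_mly:.0f} million light-years")    # ~392, matching the article
```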
NGC 646
Astronomy
274
984,380
https://en.wikipedia.org/wiki/Metalloproteinase
A metalloproteinase, or metalloprotease, is any protease enzyme whose catalytic mechanism involves a metal. An example is ADAM12 which plays a significant role in the fusion of muscle cells during embryo development, in a process known as myogenesis. Most metalloproteases require zinc, but some use cobalt. The metal ion is coordinated to the protein via three ligands. The ligands coordinating the metal ion can vary with histidine, glutamate, aspartate, lysine, and arginine. The fourth coordination position is taken up by a labile water molecule. Treatment with chelating agents such as EDTA leads to complete inactivation. EDTA is a metal chelator that removes zinc, which is essential for activity. They are also inhibited by the chelator orthophenanthroline. Classification There are two subgroups of metalloproteinases: Exopeptidases, metalloexopeptidases (EC number: 3.4.17). Endopeptidases, metalloendopeptidases (3.4.24). Well known metalloendopeptidases include ADAM proteins and matrix metalloproteinases, and M16 metalloproteinases such as Insulin Degrading Enzyme and Presequence Protease In the MEROPS database peptidase families are grouped by their catalytic type, the first character representing the catalytic type: A, aspartic; C, cysteine; G, glutamic acid; M, metallo; S, serine; T, threonine; and U, unknown. The serine, threonine and cysteine peptidases utilise the amino acid as a nucleophile and form an acyl intermediate - these peptidases can also readily act as transferases. In the case of aspartic, glutamic and metallopeptidases, the nucleophile is an activated water molecule. In many instances, the structural protein fold that characterises the clan or family may have lost its catalytic activity, yet retained its function in protein recognition and binding. Metalloproteases are the most diverse of the four main protease types, with more than 50 families classified to date. In these enzymes, a divalent cation, usually zinc, activates the water molecule. The metal ion is held in place by amino acid ligands, usually three in number. The known metal ligands are histidine, glutamate, aspartate or lysine and at least one other residue is required for catalysis, which may play an electrophilic role. Of the known metalloproteases, around half contain an HEXXH motif, which has been shown in crystallographic studies to form part of the metal-binding site. The HEXXH motif is relatively common, but can be more stringently defined for metalloproteases as 'abXHEbbHbc', where 'a' is most often valine or threonine and forms part of the S1' subsite in thermolysin and neprilysin, 'b' is an uncharged residue, and 'c' a hydrophobic residue. Proline is never found in this site, possibly because it would break the helical structure adopted by this motif in metalloproteases. Metallopeptidases from family M48 are integral membrane proteins associated with the endoplasmic reticulum and Golgi, binding one zinc ion per subunit. These endopeptidases include CAAX prenyl protease 1, which proteolytically removes the C-terminal three residues of farnesylated proteins. Metalloproteinase inhibitors are found in numerous marine organisms, including fish, cephalopods, mollusks, algae and bacteria. Members of the M50 metallopeptidase family include: mammalian sterol-regulatory element binding protein (SREBP) site 2 protease and Escherichia coli protease EcfE, stage IV sporulation protein FB. 
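As an illustration of the HEXXH zinc-binding motif described above, the following is a minimal sketch (our own example, not a validated bioinformatics tool) that scans a one-letter protein sequence for His-Glu-Xaa-Xaa-His; the example sequence and function name are hypothetical.

```python
import re

# HEXXH: histidine, glutamate, any two residues, histidine (one-letter amino acid codes).
# For a sketch we accept any letter at the X positions; a stricter pattern could
# exclude proline, which the text notes is never found in this motif.
HEXXH = re.compile(r"HE[A-Z]{2}H")

def find_hexxh(sequence: str):
    """Return (start index, matched substring) for every HEXXH occurrence."""
    return [(m.start(), m.group()) for m in HEXXH.finditer(sequence.upper())]

# Hypothetical example sequence containing one motif occurrence
print(find_hexxh("MKTAYIAKQRHELTHVLGA"))  # [(10, 'HELTH')]
```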
See also Matrix metalloproteinase The Proteolysis Map References External links The MEROPS online database for peptidases and their inhibitors: Metallo Peptidases Proteopedia: Metalloproteases Protein families Proteases Fibrinolytic enzymes
Metalloproteinase
Biology
943
33,478,101
https://en.wikipedia.org/wiki/Rep-tile
In the geometry of tessellations, a rep-tile or reptile is a shape that can be dissected into smaller copies of the same shape. The term was coined as a pun on animal reptiles by recreational mathematician Solomon W. Golomb and popularized by Martin Gardner in his "Mathematical Games" column in the May 1963 issue of Scientific American. In 2012 a generalization of rep-tiles called self-tiling tile sets was introduced by Lee Sallows in Mathematics Magazine. Terminology A rep-tile is labelled rep-n if the dissection uses n copies. Such a shape necessarily forms the prototile for a tiling of the plane, in many cases an aperiodic tiling. A rep-tile dissection using different sizes of the original shape is called an irregular rep-tile or irreptile. If the dissection uses n copies, the shape is said to be irrep-n. If all these sub-tiles are of different sizes then the tiling is additionally described as perfect. A shape that is rep-n or irrep-n is trivially also irrep-(kn − k + n) for any k > 1, by replacing the smallest tile in the rep-n dissection by n even smaller tiles. The order of a shape, whether using rep-tiles or irrep-tiles is the smallest possible number of tiles which will suffice. Examples Every square, rectangle, parallelogram, rhombus, or triangle is rep-4. The sphinx hexiamond (illustrated above) is rep-4 and rep-9, and is one of few known self-replicating pentagons. The Gosper island is rep-7. The Koch snowflake is irrep-7: six small snowflakes of the same size, together with another snowflake with three times the area of the smaller ones, can combine to form a single larger snowflake. A right triangle with side lengths in the ratio 1:2 is rep-5, and its rep-5 dissection forms the basis of the aperiodic pinwheel tiling. By Pythagoras' theorem, the hypotenuse, or sloping side of the rep-5 triangle, has a length of . The international standard ISO 216 defines sizes of paper sheets using the , in which the long side of a rectangular sheet of paper is the square root of two times the short side of the paper. Rectangles in this shape are rep-2. A rectangle (or parallelogram) is rep-n if its aspect ratio is :1. An isosceles right triangle is also rep-2. Rep-tiles and symmetry Some rep-tiles, like the square and equilateral triangle, are symmetrical and remain identical when reflected in a mirror. Others, like the sphinx, are asymmetrical and exist in two distinct forms related by mirror-reflection. Dissection of the sphinx and some other asymmetric rep-tiles requires use of both the original shape and its mirror-image. Rep-tiles and polyforms Some rep-tiles are based on polyforms like polyiamonds and polyominoes, or shapes created by laying equilateral triangles and squares edge-to-edge. Squares If a polyomino is rectifiable, that is, able to tile a rectangle, then it will also be a rep-tile, because the rectangle will have an integer side length ratio and will thus tile a square. This can be seen in the octominoes, which are created from eight squares. Two copies of some octominoes will tile a square; therefore these octominoes are also rep-16 rep-tiles. Four copies of some nonominoes and nonakings will tile a square, therefore these polyforms are also rep-36 rep-tiles. Equilateral triangles Similarly, if a polyiamond tiles an equilateral triangle, it will also be a rep-tile. Right triangles A right triangle is a triangle containing one right angle of 90°. 
Two particular forms of right triangle have attracted the attention of rep-tile researchers, the 45°-90°-45° triangle and the 30°-60°-90° triangle. 45°-90°-45° triangles Polyforms based on isosceles right triangles, with sides in the ratio 1 : 1 : √2, are known as polyabolos. An infinite number of them are rep-tiles. Indeed, the simplest of all rep-tiles is a single isosceles right triangle. It is rep-2 when divided by a single line bisecting the right angle to the hypotenuse. Rep-2 rep-tiles are also rep-2ⁿ and the rep-4,8,16+ triangles yield further rep-tiles. These are found by discarding half of the sub-copies and permuting the remainder until they are mirror-symmetrical within a right triangle. In other words, two copies will tile a right triangle. One of these new rep-tiles is reminiscent of the fish formed from three equilateral triangles. 30°-60°-90° triangles Polyforms based on 30°-60°-90° right triangles, with sides in the ratio 1 : √3 : 2, are known as polydrafters. Some are identical to polyiamonds. Multiple and variant rep-tilings Many of the common rep-tiles are rep-n² for all positive integer values of n. In particular this is true for three trapezoids including the one formed from three equilateral triangles, for three axis-parallel hexagons (the L-tromino, L-tetromino, and P-pentomino), and the sphinx hexiamond. In addition, many rep-tiles, particularly those with higher rep-n, can be self-tiled in different ways. For example, the rep-9 L-tetromino has at least fourteen different rep-tilings. The rep-9 sphinx hexiamond can also be tiled in different ways. Rep-tiles with infinite sides The most familiar rep-tiles are polygons with a finite number of sides, but some shapes with an infinite number of sides can also be rep-tiles. For example, the teragonic triangle, or horned triangle, is rep-4. It is also an example of a fractal rep-tile. Pentagonal rep-tiles Triangular and quadrilateral (four-sided) rep-tiles are common, but pentagonal rep-tiles are rare. For a long time, the sphinx was widely believed to be the only example known, but the German/New-Zealand mathematician Karl Scherer and the American mathematician George Sicherman have found more examples, including a double-pyramid and an elongated version of the sphinx. These pentagonal rep-tiles are illustrated on the Math Magic pages overseen by the American mathematician Erich Friedman. However, the sphinx and its extended versions are the only known pentagons that can be rep-tiled with equal copies. See Clarke's Reptile pages. Rep-tiles and fractals Rep-tiles as fractals Rep-tiles can be used to create fractals, or shapes that are self-similar at smaller and smaller scales. A rep-tile fractal is formed by subdividing the rep-tile, removing one or more copies of the subdivided shape, and then continuing recursively. For instance, the Sierpinski carpet is formed in this way from a rep-tiling of a square into nine smaller squares, and the Sierpinski triangle is formed from a rep-tiling of an equilateral triangle into four smaller triangles. When one sub-copy is discarded, a rep-4 L-triomino can be used to create four fractals, two of which are identical except for orientation. Fractals as rep-tiles Because fractals are often self-similar on smaller and smaller scales, many may be decomposed into copies of themselves like a rep-tile. However, if the fractal has an empty interior, this decomposition may not lead to a tiling of the entire plane.
For example, the Sierpinski triangle is rep-3, tiled with three copies of itself, and the Sierpinski carpet is rep-8, tiled with eight copies of itself, but repetition of these decompositions does not form a tiling. On the other hand, the dragon curve is a space-filling curve with a non-empty interior; it is rep-4, and does form a tiling. Similarly, the Gosper island is rep-7, formed from the space-filling Gosper curve, and again forms a tiling. By construction, any fractal defined by an iterated function system of n contracting maps of the same ratio is rep-n. Infinite tiling Among regular polygons, only the triangle and square can be dissected into smaller equally sized copies of themselves. However, a regular hexagon can be dissected into six equilateral triangles, each of which can be dissected into a regular hexagon and three more equilateral triangles. This is the basis for an infinite tiling of the hexagon with hexagons. The hexagon is therefore an irrep-∞ or irrep-infinity irreptile. See also Mosaic Self-replication Self-tiling tile set Reptiles (M. C. Escher) Notes References External links Rep-tiles Mathematics Centre Sphinx Album: http://mathematicscentre.com/taskcentre/sphinx.htm Clarke, A. L. "Reptiles." http://www.recmath.com/PolyPages/PolyPages/Reptiles.htm. http://www.uwgb.edu/dutchs/symmetry/reptile1.htm (1999) IFStile - program for finding rep-tiles: https://ifstile.com Irrep-tiles Math Magic - Problem of the Month 10/2002 Tanya Khovanova - L-Irreptiles Tessellation Fractals
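As a concrete illustration of the rep-2 isosceles right triangle discussed above, the sketch below (our own construction, not drawn from the article's sources) splits the triangle by joining the right-angle vertex to the midpoint of the hypotenuse and recurses, so that depth k yields 2^k congruent half-area copies.

```python
def subdivide(tri):
    """Split an isosceles right triangle (right angle at the first vertex)
    into its two rep-2 sub-copies, each similar to the original."""
    a, b, c = tri
    m = ((b[0] + c[0]) / 2, (b[1] + c[1]) / 2)  # midpoint of the hypotenuse
    # Each sub-triangle has its right angle at the midpoint m.
    return [(m, a, b), (m, c, a)]

def rep2_tiles(tri, depth):
    tiles = [tri]
    for _ in range(depth):
        tiles = [t for tile in tiles for t in subdivide(tile)]
    return tiles

# Unit isosceles right triangle: right angle at the origin, legs along the axes.
start = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(len(rep2_tiles(start, 4)))  # 16 congruent copies: rep-2 applied four times
```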
Rep-tile
Physics,Mathematics
2,136
2,116,008
https://en.wikipedia.org/wiki/Microprofessor%20III
Microprofessor III (MPF III), introduced in 1983, was Multitech's (later renamed Acer) third branded computer product and also (arguably) one of the first Apple IIe clones. Unlike the two earlier computers, its design was influenced by the IBM personal computer. Because of some additional functions in the ROM and different graphics routines, the MPF III was not totally compatible with the original Apple IIe computer. One key feature of the MPF III in some models was its Chinese BASIC, a version of Chinese-localized BASIC based on Applesoft BASIC. The MPF III included an optional Z80 CP/M emulator card. It permitted the computer to switch to the Z80 processor and run programs developed under the CP/M operating system. In 1984 the machine had a retail price of $699 in Australia. The different models in the MPF-III brand were the Multitech MPF-III/311 in NTSC countries (mainly in the United States and Canada) and the MPF-III/312 in PAL countries (mainly in Australia, Sweden, Spain, Finland, Italy and Singapore). It was also sold in Latin America as the Latindata MPF-III. Technical specifications CPU: MOS Technology 6502, 1 MHz Memory (RAM): 64KB dynamic RAM and 2KB static RAM ROM: 24KB, including MBASIC (MPF-III BASIC), monitor, sound, display and printer programs and drivers Operating system: DOS 3.3 or ProDOS Input/Output: NTSC composite video jack (MPF-III/311), TV RF modulator port, cassette port, printer port, joystick 9-pin D-type port, earphone, and external speaker jacks Expandability: internal slots (6), optional Z80 CP/M emulator card, one external Apple II type card slot Screen display: Text modes: 40×24, 80x24 (with 80 columns card) Graphics modes: 40×48 (16 col), 280×192 (6 col) Sound: one channel Storage: 2 optional 5.25 inch 140 KB diskette drives Keyboard: 90 keys keyboard with numeric keypad See also Microprofessor I — unrelated Z80 programming education device Microprofessor II References Acer Inc. computers Home computers Apple II clones Computer-related introductions in 1983
Microprofessor III
Technology
494
54,370,150
https://en.wikipedia.org/wiki/NGC%207038
NGC 7038 is an intermediate spiral galaxy located about 210 million light-years away in the constellation of Indus. Astronomer John Herschel discovered NGC 7038 on September 30, 1834. NGC 7038 along with NGC 7014 are the brightest members of Abell 3742. Abell 3742 is located near the center of the Pavo–Indus Supercluster. Supernovae Three supernovae have been observed in NGC 7038: SN 1983L (type unknown, mag. 17.1) was discovered by H. Schild and M. Pizarro on 14 June 1983. SN 2010dx (type II, mag. 17.4) was discovered by CHASE (CHilean Automatic Supernovas sEarch) on 8 June 2010. SN 2018hsa (type Ia, mag. 16) was discovered by the Backyard Observatory Supernova Search on November 1, 2018. See also NGC 4725 NGC 7001 List of NGC objects (7001–7840) References External links Intermediate spiral galaxies Indus (constellation) 7038 66414 Astronomical objects discovered in 1834 Abell 3742
NGC 7038
Astronomy
227
35,718,173
https://en.wikipedia.org/wiki/Thermoanaerobacter%20acetoethylicus
Thermoanaerobacter acetoethylicus, formerly called Thermobacteroides acetoethylicus, is a species of thermophilic, nonspore-forming bacteria. T. acetoethylicus was first isolated from Octopus Spring in Yellowstone National Park. The bacteria produce ethanol and acetic acid as fermentation products, but do not produce lactic acid. The growth range of T. acetoethylicus is 40-80 °C and pH 5.5-8.5, with the optimum growth temperature around 65 °C. The species was originally placed in its own new genus of Thermobacteroides in 1981. In 1993, based on further study, the species was moved into the genus Thermoanaerobacter. References Thermoanaerobacterales Thermophiles Anaerobes Bacteria described in 1983
Thermoanaerobacter acetoethylicus
Biology
186
25,197,285
https://en.wikipedia.org/wiki/Adamkiewicz%20reaction
The Adamkiewicz reaction is part of a biochemical test used to detect the presence of the amino acid tryptophan in proteins. When concentrated sulfuric acid is combined with a solution of protein and glyoxylic acid, a red/purple colour is produced. It was named after its discoverer, Albert Wojciech Adamkiewicz. Pure sulfuric acid and a minimal amount of pure formaldehyde, along with an oxidizing agent introduced into the sulfuric acid, allow the reaction to proceed. Later studies clarified the reaction's dependence on glyoxylic acid and its specific interaction with the amino acid tryptophan. These findings also shed light on the underlying chemical mechanism. Dependence on glyoxylic acid In 1901, researchers Frederick Hopkins and Sydney W. Cole determined that glyoxylic acid, an impurity in acetic acid, was an essential component in the Adamkiewicz reaction. It was observed that the violet-red colour characteristic of the reaction appeared only when glyoxylic acid was present in the acetic acid used in the reaction. Without glyoxylic acid, the reaction failed, even if other conditions remained unchanged. Their work demonstrated that glyoxylic acid, in the presence of concentrated sulfuric acid and tryptophan, reacted with proteins to produce the characteristic violet-red coloration of the Adamkiewicz reaction. Mechanism and the indole ring The reaction relies on the interaction between glyoxylic acid and the indole ring of the amino acid tryptophan, a structural feature found in most proteins. When proteins are exposed to concentrated sulfuric acid and glyoxylic acid, the indole group undergoes a reaction that produces a highly colored compound. This interaction highlights tryptophan's central role in the test, as proteins lacking this amino acid do not produce the characteristic color change. Hopkins and Cole further noted that the sulfuric acid provided the acidic environment and acted as an oxidizing agent necessary for the reaction to proceed. Later studies proposed that the reaction involves a condensation process, where glyoxylic acid combines with the indole group of tryptophan to form a complex quinonoid structure. This process explains the strong color change observed in the test and has been key to understanding tryptophan's chemical properties and its function in proteins. See also Tryptophan Glyoxylic acid Indole References Organic reactions Biochemistry
Adamkiewicz reaction
Chemistry,Biology
507
608,168
https://en.wikipedia.org/wiki/Titadine
Titadyn 30 AG (often referred to as Titadine) is a type of compressed dynamite used in mining and manufactured in southern France by Titanite S.A. The explosive comes in the form of salmon-coloured tubes of a range of diameters, from 50 to 120 mm. Titadine is very powerful and fast-burning, with an energy rating of 4650 J/g and a speed of detonation of over 6,000 m/s. It was used in bomb attacks by the separatist group ETA in Spain. In September 1999 a combined group of ETA members and Breton separatists raided a factory at Plevin, Brittany, stealing over eight tonnes of Titadyn, some of which was subsequently sold to the Islamist resistance group Hamas, according to Spain's El Mundo newspaper. Another raid took place in March 2001 when an explosives factory near Grenoble in France was targeted and 1.6 tonnes of Titadyn was stolen. Much of it was later recovered by Spanish police in raids, or was used by ETA in car bomb attacks in Spanish cities. References External links Last report of explosives used the 11-M Explosives
Titadine
Chemistry
238
17,828,289
https://en.wikipedia.org/wiki/NaGISA
NaGISA (Natural Geography in Shore Areas or Natural Geography of In-Shore Areas) is an international collaborative effort aimed at inventorying, cataloguing, and monitoring biodiversity of the in-shore area. So named for the Japanese word "nagisa" ("where the land meets the sea"), it is an Apronym. NaGISA is the first project of the larger CoML effort (Census of Marine Life) to have global participation in actual field work. The actual procedures of this project involve inexpensive collection equipment (for easy universal participation). This equipment is used to photograph sampling sites, to actually take samples from the sites, and to process these samples. At each site throughout the world, samples are taken from the intertidal zone out to a depth of 10 meters (and optionally out to 20 meters depth). These samples are then processed (the organisms are isolated) and then analyzed and catalogued. The information (regarding the kind and number of organisms analyzed) is sent to the global headquarters of NaGISA- the University of Kyoto in Japan. All of this information is then collated on the Ocean Biogeographic Information System (OBIS website). The end goal of the larger CoML effort is to find what was, what is, and what will be in the world's oceans. For NaGISA the goal is to find this in the world's in-shore areas. See also Ecological forecasting References Biodiversity Fisheries databases Biodiversity databases Databases in Japan
NaGISA
Biology,Environmental_science
306
4,672,713
https://en.wikipedia.org/wiki/QVT
QVT (Query/View/Transformation) is a standard set of languages for model transformation defined by the Object Management Group. Overview Model transformation is a key technique used in model-driven architecture. As the name QVT indicates, the OMG standard covers transformations, views and queries together. Model queries and model views can be seen as special kinds of model transformation, provided that we use a suitably broad definition of model transformation: a model transformation is a program which operates on models. The QVT standard defines three model transformation languages. All of them operate on models which conform to Meta-Object Facility (MOF) 2.0 metamodels; the transformation states which metamodels are used. A transformation in any of the three QVT languages can itself be regarded as a model, conforming to one of the metamodels specified in the standard. The QVT standard integrates the OCL 2.0 standard and also extends it with imperative features. QVT-Operational is an imperative language designed for writing unidirectional transformations. QVT-Relations is a declarative language designed to permit both unidirectional and bidirectional model transformations to be written. A transformation embodies a consistency relation on sets of models. Consistency can be checked by executing the transformation in checkonly mode; the transformation then returns True if the set of models is consistent according to the transformation and False otherwise. The same transformation can be used in enforce mode to attempt to modify one of the models so that the set of models will be consistent. The QVT-Relations language has both a textual and a graphical concrete syntax. QVT-Core is a declarative language designed to be simple and to act as the target of translation from QVT-Relations. However, QVT-Core has never had a full implementation and in fact it is not as expressive as QVT-Relations. Hence the QVT Architecture pictured above is misleading: the transformation from QVT-Relations to QVT-Core given in the QVT Standard is not semantics-preserving. Finally, QVT-BlackBox is a mechanism to invoke transformation facilities expressed in other languages (for example XSLT or XQuery). Although QVT has a broad scope, it does not cover everything that has been considered as a model transformation, view or query. For example, the QVT languages do not permit transformations to or from textual models, since each model must conform to some MOF 2.0 metamodel. Model-to-text transformations are being standardised separately by OMG (see MOFM2T). History In 2002, OMG issued a Request for proposal (RFP) on MOF Query/View/Transformation to seek a standard compatible with the Model Driven Architecture (MDA) recommendation suite (UML, MOF, OCL, etc.). Several replies were given by a number of companies and research institutions that evolved during three years to produce a common proposal, based on a draft by UK research Dr Laurence Tratt. The first version was submitted and approved in 2005. QVT Version 1.1 was released in January 2011. Implementations QVT-Operational: Borland Together contains an implementation of QVT Operational, which has been contributed to the Eclipse Foundation and is now developed as the Eclipse M2M Operational QVT project. Eclipse M2M Operational QVT: official Eclipse open source implementation of QVT Operational. MagicDraw has the QVT plugin which uses Operational QVT implementation that is provided by the Eclipse M2M project. 
SmartQVT: an Eclipse open source implementation (Orange Labs) of the QVT-Operational language. QVT-Core: OptimalJ: Early access implementation of the QVT-Core language was in OptimalJ version 3.4 from Compuware. However, OptimalJ has been discontinued. QVT-Relations: ModelMorf: A proprietary tool from Tata Consultancy Services Ltd. Fully compliant with QVT-Relations language. The trial version provides a command line utility which consumes and produces models in XMI form. A full-fledged, repository integrated version is available as part of their proprietary modeling framework. MediniQVT: EMF based transformation engine with EPL license for engine and non-commercial license editor/debugger. Uses QVT-Relations syntax, but deliberately departs from the semantics of the OMG standard. The Eclipse M2M project aims to produce an implementation of QVT Core and Relations. Echo, an open-source EMF-based tool for model repair and transformation built over the Alloy model finder, which provides an implementation of QVT-Relations syntax, but using semantics that deliberately depart from the OMG specification. QVT-Like: jQVT: A compiled QVT engine for Java, using Xbase in place of OCL. A QVT-relational transformation is first compiled into Java source code, which then directly produces the target model from source ones at run-time, without interpreting the transformation rule again. It supports EMF models, as well as plain Java objects. Tefkat : an open source implementation of Tefkat language which is also similar to QVT. Open source. ATL : a component in the M2M Eclipse project. ATL is a QVT-like transformation language and engine with a large user community and an open source library of transformations. Model Transformation Framework (MTF): an IBM alphaWorks project, last updated in 2007. See also List of available transformation languages MOF Model to Text Transformation Language - OMG's transformation language specification for expressing M2T transformations Model-driven engineering (MDE) Model Driven Architecture (MDA): OMG's vision of MDE Domain-specific language (DSL) Meta-Object Facility (MOF): a language to write metamodels Object Constraint Language (OCL): a model constraint (and query) language Model transformation Model transformation language Metamodel References Systems engineering Unified Modeling Language gl:Model-driven architecture
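The checkonly/enforce distinction described for QVT-Relations above can be illustrated with a deliberately simplified sketch in Python rather than QVT syntax; the model structures and function names here are hypothetical and are not part of the OMG standard.

```python
# Toy "transformation" relating a UML-like class model to a relational-like schema model.
# check(...) plays the role of checkonly mode; enforce(...) plays the role of enforce mode.

def relate(cls):
    """The consistency relation: every class must map to a table of the same name."""
    return {"table": cls["name"]}

def check(class_model, schema_model):
    """Checkonly: report whether the two models are consistent under the relation."""
    expected = [relate(c) for c in class_model["classes"]]
    return expected == schema_model["tables"]

def enforce(class_model, schema_model):
    """Enforce: modify the target model so the relation holds, then return it."""
    schema_model["tables"] = [relate(c) for c in class_model["classes"]]
    return schema_model

classes = {"classes": [{"name": "Order"}, {"name": "Customer"}]}
schema = {"tables": [{"table": "Order"}]}

print(check(classes, schema))   # False: the two models are inconsistent
enforce(classes, schema)
print(check(classes, schema))   # True: the target was updated to restore consistency
```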
QVT
Engineering
1,277
56,107
https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings%20algorithm
In statistics and statistical physics, the Metropolis–Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution from which direct sampling is difficult. New samples are added to the sequence in two steps: first a new sample is proposed based on the previous sample, then the proposed sample is either added to the sequence or rejected depending on the value of the probability distribution at that point. The resulting sequence can be used to approximate the distribution (e.g. to generate a histogram) or to compute an integral (e.g. an expected value). Metropolis–Hastings and other MCMC algorithms are generally used for sampling from multi-dimensional distributions, especially when the number of dimensions is high. For single-dimensional distributions, there are usually other methods (e.g. adaptive rejection sampling) that can directly return independent samples from the distribution, and these are free from the problem of autocorrelated samples that is inherent in MCMC methods. History The algorithm is named in part for Nicholas Metropolis, the first coauthor of a 1953 paper, entitled Equation of State Calculations by Fast Computing Machines, with Arianna W. Rosenbluth, Marshall Rosenbluth, Augusta H. Teller and Edward Teller. For many years the algorithm was known simply as the Metropolis algorithm. The paper proposed the algorithm for the case of symmetrical proposal distributions, but in 1970, W.K. Hastings extended it to the more general case. The generalized method was eventually identified by both names, although the first use of the term "Metropolis-Hastings algorithm" is unclear. Some controversy exists with regard to credit for development of the Metropolis algorithm. Metropolis, who was familiar with the computational aspects of the method, had coined the term "Monte Carlo" in an earlier article with Stanisław Ulam, and led the group in the Theoretical Division that designed and built the MANIAC I computer used in the experiments in 1952. However, prior to 2003 there was no detailed account of the algorithm's development. Shortly before his death, Marshall Rosenbluth attended a 2003 conference at LANL marking the 50th anniversary of the 1953 publication. At this conference, Rosenbluth described the algorithm and its development in a presentation titled "Genesis of the Monte Carlo Algorithm for Statistical Mechanics". Further historical clarification is made by Gubernatis in a 2005 journal article recounting the 50th anniversary conference. Rosenbluth makes it clear that he and his wife Arianna did the work, and that Metropolis played no role in the development other than providing computer time. This contradicts an account by Edward Teller, who states in his memoirs that the five authors of the 1953 article worked together for "days (and nights)". In contrast, the detailed account by Rosenbluth credits Teller with a crucial but early suggestion to "take advantage of statistical mechanics and take ensemble averages instead of following detailed kinematics". This, says Rosenbluth, started him thinking about the generalized Monte Carlo approach – a topic which he says he had discussed often with John Von Neumann. Arianna Rosenbluth recounted (to Gubernatis in 2003) that Augusta Teller started the computer work, but that Arianna herself took it over and wrote the code from scratch. 
In an oral history recorded shortly before his death, Rosenbluth again credits Teller with posing the original problem, himself with solving it, and Arianna with programming the computer. Description The Metropolis–Hastings algorithm can draw samples from any probability distribution with probability density , provided that we know a function proportional to the density and the values of can be calculated. The requirement that must only be proportional to the density, rather than exactly equal to it, makes the Metropolis–Hastings algorithm particularly useful, because it removes the need to calculate the density's normalization factor, which is often extremely difficult in practice. The Metropolis–Hastings algorithm generates a sequence of sample values in such a way that, as more and more sample values are produced, the distribution of values more closely approximates the desired distribution. These sample values are produced iteratively in such a way, that the distribution of the next sample depends only on the current sample value, which makes the sequence of samples a Markov chain. Specifically, at each iteration, the algorithm proposes a candidate for the next sample value based on the current sample value. Then, with some probability, the candidate is either accepted, in which case the candidate value is used in the next iteration, or it is rejected in which case the candidate value is discarded, and the current value is reused in the next iteration. The probability of acceptance is determined by comparing the values of the function of the current and candidate sample values with respect to the desired distribution. The method used to propose new candidates is characterized by the probability distribution (sometimes written ) of a new proposed sample given the previous sample . This is called the proposal density, proposal function, or jumping distribution. A common choice for is a Gaussian distribution centered at , so that points closer to are more likely to be visited next, making the sequence of samples into a Gaussian random walk. In the original paper by Metropolis et al. (1953), was suggested to be a uniform distribution limited to some maximum distance from . More complicated proposal functions are also possible, such as those of Hamiltonian Monte Carlo, Langevin Monte Carlo, or preconditioned Crank–Nicolson. For the purpose of illustration, the Metropolis algorithm, a special case of the Metropolis–Hastings algorithm where the proposal function is symmetric, is described below. Metropolis algorithm (symmetric proposal distribution) Let be a function that is proportional to the desired probability density function (a.k.a. a target distribution). Initialization: Choose an arbitrary point to be the first observation in the sample and choose a proposal function . In this section, is assumed to be symmetric; in other words, it must satisfy . For each iteration t: Propose a candidate for the next sample by picking from the distribution . Calculate the acceptance ratio , which will be used to decide whether to accept or reject the candidate. Because f is proportional to the density of P, we have that . Accept or reject: Generate a uniform random number . If , then accept the candidate by setting , If , then reject the candidate and set instead. This algorithm proceeds by randomly attempting to move about the sample space, sometimes accepting the moves and sometimes remaining in place. at specific point is proportional to the iterations spent on the point by the algorithm. 
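A minimal sketch of the symmetric-proposal Metropolis algorithm just described, using a Gaussian random-walk proposal; the target here is an unnormalized standard normal density chosen purely for illustration.

```python
import math
import random

def metropolis(f, x0, n_samples, step=1.0):
    """Metropolis sampler with a symmetric (Gaussian random-walk) proposal.
    f is any function proportional to the target density."""
    x = x0
    samples = []
    for _ in range(n_samples):
        x_new = x + random.gauss(0.0, step)   # symmetric proposal g(x'|x) = g(x|x')
        a = f(x_new) / f(x)                   # acceptance ratio f(x')/f(x)
        if random.random() < a:               # accept with probability min(1, a)
            x = x_new
        samples.append(x)                     # on rejection the current value is reused
    return samples

# Example: sample from an unnormalized standard normal density.
target = lambda x: math.exp(-0.5 * x * x)
chain = metropolis(target, x0=0.0, n_samples=10_000)
```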
Note that the acceptance ratio indicates how probable the new proposed sample is with respect to the current sample, according to the distribution whose density is . If we attempt to move to a point that is more probable than the existing point (i.e. a point in a higher-density region of corresponding to an ), we will always accept the move. However, if we attempt to move to a less probable point, we will sometimes reject the move, and the larger the relative drop in probability, the more likely we are to reject the new point. Thus, we will tend to stay in (and return large numbers of samples from) high-density regions of , while only occasionally visiting low-density regions. Intuitively, this is why this algorithm works and returns samples that follow the desired distribution with density . Compared with an algorithm like adaptive rejection sampling that directly generates independent samples from a distribution, Metropolis–Hastings and other MCMC algorithms have a number of disadvantages: The samples are autocorrelated. Even though over the long term they do correctly follow , a set of nearby samples will be correlated with each other and not correctly reflect the distribution. This means that effective sample sizes can be significantly lower than the number of samples actually taken, leading to large errors. Although the Markov chain eventually converges to the desired distribution, the initial samples may follow a very different distribution, especially if the starting point is in a region of low density. As a result, a burn-in period is typically necessary, where an initial number of samples are thrown away. On the other hand, most simple rejection sampling methods suffer from the "curse of dimensionality", where the probability of rejection increases exponentially as a function of the number of dimensions. Metropolis–Hastings, along with other MCMC methods, do not have this problem to such a degree, and thus are often the only solutions available when the number of dimensions of the distribution to be sampled is high. As a result, MCMC methods are often the methods of choice for producing samples from hierarchical Bayesian models and other high-dimensional statistical models used nowadays in many disciplines. In multivariate distributions, the classic Metropolis–Hastings algorithm as described above involves choosing a new multi-dimensional sample point. When the number of dimensions is high, finding the suitable jumping distribution to use can be difficult, as the different individual dimensions behave in very different ways, and the jumping width (see above) must be "just right" for all dimensions at once to avoid excessively slow mixing. An alternative approach that often works better in such situations, known as Gibbs sampling, involves choosing a new sample for each dimension separately from the others, rather than choosing a sample for all dimensions at once. That way, the problem of sampling from potentially high-dimensional space will be reduced to a collection of problems to sample from small dimensionality. This is especially applicable when the multivariate distribution is composed of a set of individual random variables in which each variable is conditioned on only a small number of other variables, as is the case in most typical hierarchical models. The individual variables are then sampled one at a time, with each variable conditioned on the most recent values of all the others. 
Various algorithms can be used to choose these individual samples, depending on the exact form of the multivariate distribution: some possibilities are the adaptive rejection sampling methods, the adaptive rejection Metropolis sampling algorithm, a simple one-dimensional Metropolis–Hastings step, or slice sampling. Formal derivation The purpose of the Metropolis–Hastings algorithm is to generate a collection of states according to a desired distribution . To accomplish this, the algorithm uses a Markov process, which asymptotically reaches a unique stationary distribution such that . A Markov process is uniquely defined by its transition probabilities , the probability of transitioning from any given state to any other given state . It has a unique stationary distribution when the following two conditions are met: Existence of stationary distribution: there must exist a stationary distribution . A sufficient but not necessary condition is detailed balance, which requires that each transition is reversible: for every pair of states , the probability of being in state and transitioning to state must be equal to the probability of being in state and transitioning to state , . Uniqueness of stationary distribution: the stationary distribution must be unique. This is guaranteed by ergodicity of the Markov process, which requires that every state must (1) be aperiodic—the system does not return to the same state at fixed intervals; and (2) be positive recurrent—the expected number of steps for returning to the same state is finite. The Metropolis–Hastings algorithm involves designing a Markov process (by constructing transition probabilities) that fulfills the two above conditions, such that its stationary distribution is chosen to be . The derivation of the algorithm starts with the condition of detailed balance: which is re-written as The approach is to separate the transition in two sub-steps; the proposal and the acceptance-rejection. The proposal distribution is the conditional probability of proposing a state given , and the acceptance distribution is the probability to accept the proposed state . The transition probability can be written as the product of them: Inserting this relation in the previous equation, we have The next step in the derivation is to choose an acceptance ratio that fulfills the condition above. One common choice is the Metropolis choice: For this Metropolis acceptance ratio , either or and, either way, the condition is satisfied. The Metropolis–Hastings algorithm can thus be written as follows: Initialise Pick an initial state . Set . Iterate Generate a random candidate state according to . Calculate the acceptance probability . Accept or reject: generate a uniform random number ; if , then accept the new state and set ; if , then reject the new state, and copy the old state forward . Increment: set . Provided that specified conditions are met, the empirical distribution of saved states will approach . The number of iterations () required to effectively estimate depends on the number of factors, including the relationship between and the proposal distribution and the desired accuracy of estimation. For distribution on discrete state spaces, it has to be of the order of the autocorrelation time of the Markov process. 
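The displayed equations referred to in the derivation above did not survive extraction; in the usual notation (target density pi, proposal g, acceptance A), the standard forms they refer to are, as a reconstruction rather than a quotation:

```latex
% Detailed balance, the factored transition, and the Metropolis choice of acceptance ratio
\begin{align}
  \pi(x)\,P(x' \mid x) &= \pi(x')\,P(x \mid x') \\
  P(x' \mid x) &= g(x' \mid x)\,A(x', x) \\
  \frac{A(x', x)}{A(x, x')} &= \frac{\pi(x')\,g(x \mid x')}{\pi(x)\,g(x' \mid x)} \\
  A(x', x) &= \min\!\left(1,\ \frac{\pi(x')\,g(x \mid x')}{\pi(x)\,g(x' \mid x)}\right)
\end{align}
```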
It is important to notice that it is not clear, in a general problem, which distribution one should use or the number of iterations necessary for proper estimation; both are free parameters of the method, which must be adjusted to the particular problem in hand. Use in numerical integration A common use of Metropolis–Hastings algorithm is to compute an integral. Specifically, consider a space and a probability distribution over , . Metropolis–Hastings can estimate an integral of the form of where is a (measurable) function of interest. For example, consider a statistic and its probability distribution , which is a marginal distribution. Suppose that the goal is to estimate for on the tail of . Formally, can be written as and, thus, estimating can be accomplished by estimating the expected value of the indicator function , which is 1 when and zero otherwise. Because is on the tail of , the probability to draw a state with on the tail of is proportional to , which is small by definition. The Metropolis–Hastings algorithm can be used here to sample (rare) states more likely and thus increase the number of samples used to estimate on the tails. This can be done e.g. by using a sampling distribution to favor those states (e.g. with ). Step-by-step instructions Suppose that the most recent value sampled is . To follow the Metropolis–Hastings algorithm, we next draw a new proposal state with probability density and calculate a value where is the probability (e.g., Bayesian posterior) ratio between the proposed sample and the previous sample , and is the ratio of the proposal density in two directions (from to and conversely). This is equal to 1 if the proposal density is symmetric. Then the new state is chosen according to the following rules. If else: The Markov chain is started from an arbitrary initial value , and the algorithm is run for many iterations until this initial state is "forgotten". These samples, which are discarded, are known as burn-in. The remaining set of accepted values of represent a sample from the distribution . The algorithm works best if the proposal density matches the shape of the target distribution , from which direct sampling is difficult, that is . If a Gaussian proposal density is used, the variance parameter has to be tuned during the burn-in period. This is usually done by calculating the acceptance rate, which is the fraction of proposed samples that is accepted in a window of the last samples. The desired acceptance rate depends on the target distribution, however it has been shown theoretically that the ideal acceptance rate for a one-dimensional Gaussian distribution is about 50%, decreasing to about 23% for an -dimensional Gaussian target distribution. These guidelines can work well when sampling from sufficiently regular Bayesian posteriors as they often follow a multivariate normal distribution as can be established using the Bernstein–von Mises theorem. If is too small, the chain will mix slowly (i.e., the acceptance rate will be high, but successive samples will move around the space slowly, and the chain will converge only slowly to ). On the other hand, if is too large, the acceptance rate will be very low because the proposals are likely to land in regions of much lower probability density, so will be very small, and again the chain will converge very slowly. One typically tunes the proposal distribution so that the algorithms accepts on the order of 30% of all samples – in line with the theoretical estimates mentioned in the previous paragraph. 
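One practical reading of the tuning advice above is to adjust the proposal width during burn-in from the observed acceptance rate. The following is a rough sketch of such a heuristic; the 0.234 target and the 10% scaling rule are conventional choices of ours, not prescribed by the article.

```python
def tune_step(step, accepted, proposed, target_rate=0.234):
    """Nudge the random-walk step size toward a target acceptance rate during burn-in."""
    rate = accepted / max(proposed, 1)
    if rate > target_rate:
        step *= 1.1   # accepting too often: proposals are too timid, widen them
    else:
        step *= 0.9   # accepting too rarely: proposals overshoot, shrink them
    return step
```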
Bayesian Inference MCMC can be used to draw samples from the posterior distribution of a statistical model. The acceptance probability is given by: where is the likelihood, the prior probability density and the (conditional) proposal probability. See also Genetic algorithms Mean-field particle methods Metropolis light transport Multiple-try Metropolis Parallel tempering Sequential Monte Carlo Simulated annealing References Notes Further reading Bernd A. Berg. Markov Chain Monte Carlo Simulations and Their Statistical Analysis. Singapore, World Scientific, 2004. Chib, Siddhartha; Greenberg, Edward (1995). "Understanding the Metropolis–Hastings Algorithm". The American Statistician, 49(4), 327–335. David D. L. Minh and Do Le Minh. "Understanding the Hastings Algorithm." Communications in Statistics - Simulation and Computation, 44:2 332–349, 2015 Bolstad, William M. (2010) Understanding Computational Bayesian Statistics, John Wiley & Sons Monte Carlo methods Markov chain Monte Carlo Statistical algorithms
Metropolis–Hastings algorithm
Physics
3,482
8,200
https://en.wikipedia.org/wiki/Discovery%20of%20chemical%20elements
The discoveries of the 118 chemical elements known to exist as of 2025 are presented here in chronological order. The elements are listed generally in the order in which each was first defined as the pure element, as the exact date of discovery of most elements cannot be accurately determined. There are plans to synthesize more elements, and it is not known how many elements are possible. Each element's name, atomic number, year of first report, name of the discoverer, and notes related to the discovery are listed. Periodic table of elements Graphical timeline Cumulative diagram Pre-modern and early modern discoveries Modern discoveries For 18th-century discoveries, around the time that Antoine Lavoisier first questioned the phlogiston theory, the recognition of a new "earth" has been regarded as being equivalent to the discovery of a new element (as was the general practice then). For some elements (e.g. Be, B, Na, Mg, Al, Si, K, Ca, Mn, Co, Ni, Zr, Mo), this presents further difficulties as their compounds were widely known since medieval or even ancient times, even though the elements themselves were not. Since the true nature of those compounds was sometimes only gradually discovered, it is sometimes very difficult to name one specific discoverer. In such cases the first publication on their chemistry is noted, and a longer explanation given in the notes. See also History of the periodic table Periodic table Extended periodic table The Mystery of Matter: Search for the Elements (2014/2015 PBS film) Transfermium Wars References External links History of the Origin of the Chemical Elements and Their Discoverers Last updated by Boris Pritychenko on March 30, 2004 History of Elements of the Periodic Table Timeline of Element Discoveries The Historyscoper Discovery of the Elements – The Movie – YouTube (1:18) The History Of Metals Timeline . A timeline showing the discovery of metals and the development of metallurgy. —Eric Scerri, 2007, The periodic table: Its story and its significance, Oxford University Press, New York, Elements, discoveries Timeline History of chemistry History of physics Discovery
Discovery of chemical elements
Chemistry
427
477,073
https://en.wikipedia.org/wiki/Cybiko
The Cybiko is a line of personal digital assistants and handheld game consoles first released by Cybiko Inc. in 2000. Cybiko Inc. was a startup company founded by David Yang; the eponymous PDA was first test marketed in New York in April 2000 and rolled out nationwide in May 2000. It was designed for teens, featuring its own two-way radio text messaging system. It has over 430 "official" freeware games and applications. It features a rubber QWERTY keyboard. An MP3 player add-on with a SmartMedia card slot was made for the unit as well. Cybikos can communicate with each other up to a maximum range of . Several Cybikos can chat with each other in a wireless chatroom. By the end of 2000, the Cybiko Classic had sold over 500,000 units. The company stopped manufacturing the units after two product versions and a few years on the market. History Cybiko development was initiated in 1998 by Russian entrepreneur David Yang, founder of ABBYY Software House. The concept for the device emerged from social research conducted in six countries, which identified a need for digital communication among youth. The first prototype was completed by October 1998. By the end of 1999, three industrial samples had been produced, and a radio protocol was patented. This protocol allowed up to 3,000 Cybiko devices to form a network without using auxiliary stations. By early 1999, the Moscow-based development team had grown to 40 employees. The Cybiko was designed as a handheld computer for teenagers, combining communication capabilities with entertainment features. It included a QWERTY keyboard, a monochrome display, short-range radio messaging, and support for downloadable applications and games. Inventec, a Taiwanese manufacturer, was contracted for device production, while marketing was entrusted to Poznik & Kolker, an American firm known for their successful promotion of Furby toys. Although initially scheduled for September 1999, Cybiko's launch was delayed until early 2000 due to difficulties in casing production. This issue consumed a significant portion of the project's initial budget, with $70,000 of the $120,000 allocated to casing development. Prior to Cybiko's official New York presentation, David Yang met with ICQ founder Yossi Vardi. Vardi, impressed by the device, introduced Yang to America Online CEO Steve Case, which subsequently led to substantial investment in the project. Cybiko was released in the U.S. market in 2000 after securing $20 million in investments. The device launched in April with a trial run in select New York retailers, including FAO Schwarz, Virgin Megastore, and Software Etc/Babbage's, before expanding nationwide in mid-May. Its features, such as short-range messaging, file sharing, and the ability to discover nearby users, quickly garnered popularity and media attention. The initial retail price was set at $139. Sales performance was strong, with devices worth $25 million sold during the first three weeks of the holiday season, contributing to total first-year sales of approximately 250,000 units. In 2001, Cybiko introduced an upgraded model, the Cybiko Xtreme, featuring improved design and specifications. However, the emergence of more affordable internet-capable mobile phones posed significant competition. Despite initial market success, the company encountered financial challenges exacerbated by the early 2000s dot-com crisis, which hindered new investment efforts. 
A planned London device launch on September 15, 2001, was canceled due to the September 11 attacks, further impacting the company's financial position. Late 2001 saw the company restructure into two entities: Cybiko Advance Technologies, retaining ownership of the wireless network, and CWAG (Cybiko Wireless Applications and Games), which acquired the game development portfolio and other intellectual assets. CWAG subsequently began collaborating with mobile companies such as Motorola, Sprint, and Nokia to adapt its software for mobile phone platforms. In 2003, Cybiko Inc. announced that it would cease production of its communicators. Models Cybiko Classic There are two models of the Classic Cybiko. Visually, the only difference is that the original version has a power switch on the side, while the updated version uses the "escape" key for power management. Internally, the differences between the two models are in the internal memory and the firmware location. The CPU is a Hitachi H8S/2241 clocked at 11.0592 MHz and the Cybiko Classic also has an Atmel AT90S2313 co-processor, clocked at 4 MHz to provide some support for RF communications. It has 512KB flash memory-based ROM flash memory and 256KB RAM installed. An add-on slot is located in the rear. The Cybiko Classics were sold in five colors: blue, purple, neon green, white, and black. The black version has a yellow keypad, instead of the white unit found on other Cybikos. The add-on slot has the same pin arrangement as a PC card, but it is not electrically compatible. Cybiko Xtreme The Cybiko Xtreme is the second-generation Cybiko handheld. It features various improvements over the original Cybiko, such as a faster processor, more RAM, more ROM, a new operating system, a new keyboard layout and case design, greater wireless range, a microphone, improved audio output, and smaller size. The CPU is a Hitachi H8S/2323 at 18 MHz, and like the original version, it also has an Atmel AT90S2313 co-processor at 4 MHz to provide some support for RF communications. 512KiB ROM flash memory and 1.5MiB RAM is installed. It features an add-on slot in the rear, which is compatible with the MP3 player. It was released in two variants. US variant (Model No. CY44801) has frequency range of 902-928 MHz and European variant (Model No. CY44802) with frequency range of 868-870 MHz. No other functional difference exists between these variants. Options MP3 player Classic MP3 Player: The MP3 player for the Classic plugs into the bottom of the Cybiko and used SmartMedia cards; it can support a maximum size of 64 MB. The player has built-in controls. Xtreme MP3 Player: The MP3 player plugs into the rear of the Cybiko Xtreme. It has a slot for one MMC memory card. The MP3 player can only be controlled from the Cybiko. A memory card from the MP3 player can also be addressed from the Cybiko and used for data and program storage. 1MB Expansion Memory The memory expansion card plugs into the rear of the Cybiko. It provides 256 kilobytes of static RAM, and 1 megabyte of data flash memory. The RAM allows programs with larger memory requirements to run. The data flash allows more programs to be stored. Some Cybiko programs will not run unless the Expansion Memory is plugged in. Games A large number of games were produced for the Cybiko. Programs were posted daily on the website and can be downloaded using the CyberLoad application. Many games support multiplayer mode with automatic saves, which allowed resuming the game in case of a connection loss. 
Some of the company games were ported to the Cybiko, including under the Funny Balls title, and . The first games on the Cybiko were initially created in the genre of classic board games – chess, checkers, backgammon, kalah, renju and seega. The "casual" puzzle games Phat Cash and Tooty Fruity were also made, with the latter requiring the Cybiko to be held horizontally. A first-person shooter engine was written, on which the game Lost in Labyrinth is built, similar in gameplay to Wolfenstein 3D. The popular skateboarding game Blazing Boards is based on the racing engine which was later used as the basis for Tony Hawk's Pro Skater for cell phones, in a collaboration between Cybiko and THQ. Turn-based strategy and real-time strategy games include Warfare and Land of Kings, with the latter requiring a memory card to work. The flagship game on the system is CyLandia, which combines the tamagotchi and economic strategy genres. Cybiko devices with the game installed have pets called Cy-B (also called "cypets"), which the player has to raise. The game continues on switched-off devices, and in case of insufficient attention, Cy-B could "run away" to any other Cybiko within range. Players can also voluntarily send pets to other devices. Toward the end of the Cybiko's lifecycle, quest and RPG genre games were being developed, but were not released. However, the fighting game Knight's Tournament contains role-playing elements, where player characters can be outfitted with various equipment won in tournaments. After the September 11 attacks, a problem of game censorship emerged, which led to the cancellation of the beat-'em-up game Renegade by the American management, in part because the main character is a police officer who beats up hooligans. Comparison References Personal digital assistants Handheld game consoles Sixth-generation video game consoles Computer-related introductions in 2000 Mesh networking Defunct computer companies of the United States Defunct computer hardware companies Defunct computer systems companies
Cybiko
Technology
1,940
25,871,959
https://en.wikipedia.org/wiki/PlanetPol
PlanetPol was a ground-based, high-sensitivity polarimeter located at the William Herschel Telescope on the island of La Palma in the Canary Islands, Spain; it has since been decommissioned. It was the most sensitive astronomical visual polarimeter ever built in terms of fractional polarisation, a distinction that has since passed to HIPPI. Although the device could be used for a wide range of astronomy, its primary use was the detection of extrasolar planets. Results PlanetPol did not discover any extrasolar planets; however, it was used to provide upper limits to planetary albedos in the known 55 Cnc and τ Boo planetary systems. Observations with the polarimeter in the Canary Islands, which are affected by dust from the Sahara, also identified airborne dust as a source of polarization within our atmosphere. Additionally, PlanetPol provided measurements of the polarization of a few dozen nearby stars, which were later combined with southern hemisphere measurements from PlanetPol's successor, HIPPI, to provide information about the nature of those stars and the distribution of the interstellar medium. References Exoplanet search projects
PlanetPol
Astronomy
225
42,828,392
https://en.wikipedia.org/wiki/Low%20Carbon%20Vehicle%20Event
The Low Carbon Vehicle Event (LCV) is the United Kingdom's premier low carbon vehicle event. It has been held annually since 2008 at Millbrook Proving Ground at the beginning of September. The show consists of a technological exhibition, seminar sessions and Ride & Drive activities (covering the latest research and development vehicles as well as commercially available low emission vehicles). LCV is a business-to-business free-to-attend event organised by Cenex, whose main aim is promoting the UK supply chain of the low carbon vehicle industry. LCV history The Low Carbon Vehicle Event has been organised annually since 2008 by Cenex; supporting partners include: The Department for Business, Energy and Industrial Strategy, The Centre for Connected and Autonomous Vehicles, The Office for Low Emission Vehicles, The Department for International Trade, The Advanced Propulsion Centre, The Automotive Council UK, Innovate UK, The Low Carbon Vehicle Partnership, Millbrook, The Society of Motor Manufacturers, and Transport Systems Catapult. LCV 2016 The event attracts a range of customers (Automotive 53%, Energy & Infrastructure 8%, Government & Local Authority 7%, Academia 8%) and senior managers, including those at board level [38%] and middle management [27%]. LCV2016 set a new record attendance with 3,137 visitors, 226 exhibitors and 1,180 organisations represented. References Auto shows Automotive industry in the United Kingdom Electric vehicle organizations Plug-in hybrid vehicles Recurring events established in 2008 Science and technology in Bedfordshire Sustainable transport Electric vehicles in the United Kingdom Vehicle emission controls
Low Carbon Vehicle Event
Physics,Engineering
312
2,035,310
https://en.wikipedia.org/wiki/Neurochip
A neurochip is an integrated circuit chip (such as a microprocessor) that is designed for interaction with neuronal cells. Formation It is made of silicon that is doped in such a way that it contains EOSFETs (electrolyte-oxide-semiconductor field-effect transistors) that can sense the electrical activity of the neurons (action potentials) in the above-standing physiological electrolyte solution. It also contains capacitors for the electrical stimulation of the neurons. Scientists at the University of Calgary's Faculty of Medicine, led by Pakistani-born Canadian scientist Naweed Syed, who proved it is possible to cultivate a network of brain cells that reconnect on a silicon chip—or the brain on a microchip—have developed new technology that monitors brain cell activity at a resolution never achieved before. Developed with the National Research Council Canada (NRC), the new silicon chips are also simpler to use, which will help future understanding of how brain cells work under normal conditions and permit drug discoveries for a variety of neurodegenerative diseases, such as Alzheimer's and Parkinson's. Naweed Syed's lab cultivated brain cells on a microchip. The new technology from the lab of Naweed Syed, in collaboration with the NRC, was published online in August 2010, in the journal Biomedical Devices. It is the world's first neurochip. It is based on Syed's earlier experiments on neurochip technology dating back to 2003. "This technical breakthrough means we can track subtle changes in brain activity at the level of ion channels and synaptic potentials, which are also the most suitable target sites for drug development in neurodegenerative diseases and neuropsychological disorders," says Syed, professor and head of the Department of Cell Biology and Anatomy, member of the Hotchkiss Brain Institute and advisor to the Vice President (Research) on the Biomedical Engineering Initiative of the University of Calgary. The new neurochips are also automated, meaning that anyone can learn to place individual brain cells on them. Previously it took years of training to learn how to record ion channel activity from brain cells, and it was only possible to monitor one or two cells simultaneously. Now, larger networks of cells can be placed on a chip and observed in minute detail, allowing the analysis of several brain cells networking and performing automatic, large-scale drug screening for various brain dysfunctions. This new technology has the potential to help scientists in a variety of fields and on a variety of research projects. Gerald Zamponi, professor and head of the Department of Physiology and Pharmacology, and member of the Hotchkiss Brain Institute, says, “This technology can likely be scaled up such that it will become a novel tool for medium throughput drug screening, in addition to its usefulness for basic biomedical research”. "In previous studies, researchers developed a neurochip that could directly stimulate and record brain cell activity. Now, Orly Yadid-Pecht and Naweed Syed have successfully developed a novel lab-on-a-chip technology that, through an ultra-sensitive component built directly on the microchip, also enables direct imaging of activity in brain cells." Applications Present applications include neuron research. Future applications (still in the experimental phase) are retinal implants or brain implants. See also Brain–computer interfacing CoDi Cultured neuronal networks Neuroprosthetics References External links Video interview with Dr.
Naweed Syed, Hotchkiss Brain Institute, Neurochip co-lead researcher, Published Aug 17, 2012 Brain–computer interface Integrated circuits Canadian inventions Pakistani inventions
Neurochip
Technology,Engineering
758
38,336,626
https://en.wikipedia.org/wiki/Link%20layer%20security
The link layer is the lowest layer in the TCP/IP model. It is also referred to as the network interface layer and is mostly equivalent to the data link layer plus the physical layer in OSI. This particular layer has several unique security vulnerabilities that can be exploited by a determined adversary. Network interface layer The link layer is the interface between the host system and the network hardware. It defines how data packets are to be formatted for transmission and routing. Some common link-layer protocols include IEEE 802.2 and X.25. The data link layer and its associated protocols govern the physical interface between the host computer and the network hardware. The goal of this layer is to provide reliable communications between hosts connected on a network. Services provided by this layer of the network stack include: Data Framing Breaking up the data stream into individual frames or packets. Checksums Sending checksum data for each frame to enable the receiving node to determine whether or not the frame was received error-free. Acknowledgment Sending either a positive (data was received) or negative (data was not received but expected) acknowledgement from receiver to sender to ensure reliable data transmission. Flow Control Buffering data transmissions to ensure that a fast sender does not overwhelm a slower receiver. Vulnerabilities and mitigation strategies Wired networks Content Addressable Memory (CAM) table exhaustion attack The data link layer addresses data packets based on the destination hardware's physical Media Access Control (MAC) address. Switches within the network maintain content addressable memory (CAM) tables that map the switch's ports to specific MAC addresses. These tables allow the switch to securely deliver the packet to its intended physical address only. Using the switch to connect only the systems that are communicating provides much greater security than a network hub, which broadcasts all traffic over all ports, allowing an eavesdropper to intercept and monitor all network traffic. A CAM table exhaustion attack effectively turns a switch into a hub. The attacker floods the CAM table with new MAC-to-port mappings until the table's fixed memory allotment is full. At this point the switch no longer knows how to deliver traffic based on a MAC-to-port mapping, and defaults to broadcasting traffic over all ports. An adversary is then able to intercept and monitor all network traffic traversing the switch, including passwords, emails, instant messages, etc. The CAM table-overflow attack can be mitigated by configuring port security on the switch. This option provides for either the specification of the MAC addresses on a particular switch port or the specification of the number of MAC addresses that can be learned by a switch port. When an invalid MAC address is detected on the port, the switch can either block the offending MAC address or shut down the port. Address Resolution Protocol (ARP) spoofing At the data link layer a logical IP address assigned by the network layer is translated into a physical MAC address. In order to ensure reliable data communications all switches in the network must maintain up-to-date tables for mapping logical (IP) to physical (MAC) addresses. If a client or switch is unsure of the IP-to-MAC mapping of a data packet it receives, it will broadcast an Address Resolution Protocol (ARP) request onto the network asking for the MAC address associated with the particular IP address.
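To make the learn-or-flood behaviour behind the CAM table exhaustion attack concrete, the following is a minimal, self-contained Python sketch. It is an illustration only, not a model of any particular switch's firmware; the table capacity, port numbers, and MAC addresses are invented for the example.

class Switch:
    def __init__(self, cam_capacity):
        self.cam_capacity = cam_capacity
        self.cam = {}  # learned MAC address -> port

    def learn(self, src_mac, port):
        # Learn the source MAC of an incoming frame while table space remains.
        if src_mac in self.cam or len(self.cam) < self.cam_capacity:
            self.cam[src_mac] = port

    def forward(self, dst_mac):
        # Unicast to the known port; unknown destinations are flooded like a hub.
        if dst_mac in self.cam:
            return f"unicast to port {self.cam[dst_mac]}"
        return "flooded to all ports"

switch = Switch(cam_capacity=4)
switch.learn("aa:aa:aa:aa:aa:01", 1)            # one legitimate host is learned

for i in range(100):                            # attacker floods bogus source MACs
    switch.learn(f"de:ad:be:ef:00:{i:02x}", 7)

# A host the switch never managed to learn now has its traffic broadcast,
# so the attacker's port sees frames it should never receive.
print(switch.forward("aa:aa:aa:aa:aa:02"))      # -> flooded to all ports

Returning to address resolution: a host that owns the requested IP address replies to the ARP request with its MAC address.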
Once this is accomplished, the client or switch will update its table to reflect the new mapping. In an ARP spoofing attack, the adversary broadcasts the IP address of the machine to be attacked along with its own MAC address. All neighboring switches will then update their mapping tables and begin transmitting data destined to the attacked system's IP address to the attacker's MAC address. Such an attack is commonly referred to as a "man in the middle" attack. Defenses against ARP spoofing generally rely on some form of certification or cross-checking of ARP responses. Uncertified ARP responses are blocked. These techniques may be integrated with the Dynamic Host Configuration Protocol (DHCP) server so that both dynamic and static IP addresses are certified. This capability may also be implemented in individual hosts or may be integrated into Ethernet switches or other network equipment. Dynamic Host Configuration Protocol (DHCP) starvation When a client system without an IP address enters a network, it will request an IP address from the resident DHCP server. The DHCP server will reserve an IP address (so anyone else asking for one is not granted this one) and it will send that IP address to the device along with a lease identifying how long the address will be valid. Normally, from this point, the device will respond by confirming the IP address with the DHCP server and the DHCP server finally responds with an acknowledgement. In a DHCP starvation attack, once the adversary receives the IP address and the lease period from the DHCP server, the adversary does not respond with the confirmation. Instead, the adversary floods the DHCP server with IP address requests until all addresses within the server's address space have been reserved (exhausted). At this point, any hosts wishing to join the network will be denied access, resulting in a denial of service. The adversary can then set up a rogue DHCP server so that clients receive incorrect network settings and as a result transmit data to an attacker's machine. One method for mitigating this type of attack is to use the IP source guard feature available on many Ethernet switches. The IP guard initially blocks all traffic except DHCP packets. When a client receives a valid IP address from the DHCP server, the IP address and switch port relationship are bound in an Access Control List (ACL). The ACL then restricts traffic only to those IP addresses configured in the binding. Wireless networks Hidden node attack In a wireless network, many hosts or nodes share a common medium. If nodes A and B are both wireless laptop computers communicating in an office environment, their physical separation may require that they communicate through a wireless access point. But only one device can transmit at a time in order to avoid packet collisions. Prior to transmitting, Node A sends out a Request to Send (RTS) signal. If it is not receiving any other traffic, the access point will broadcast a Clear to Send (CTS) signal over the network. Node A will then begin transmitting while Node B knows to hold off transmitting its data for the time being. Even though it cannot directly communicate with Node A, i.e. Node A is hidden, it knows to wait based on its communication with the access point. An attacker can exploit this functionality by flooding the network with CTS messages. Then every node assumes there is a hidden node trying to transmit and will hold its own transmissions, resulting in a denial of service.
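The certification-based ARP defence described earlier in this section can be sketched in a few lines of Python. This is a simplified illustration under the assumption of a pre-certified, static IP-to-MAC table; the addresses are hypothetical and the sketch does not reproduce the behaviour of any particular switch or DHCP-integrated implementation.

# Certified IP-to-MAC bindings; in practice these would be populated from
# DHCP leases or static configuration rather than hard-coded as here.
CERTIFIED_BINDINGS = {
    "192.168.1.10": "aa:bb:cc:dd:ee:10",
    "192.168.1.20": "aa:bb:cc:dd:ee:20",
}

def validate_arp_reply(ip_addr, claimed_mac):
    """Accept an ARP reply only if it matches the certified binding."""
    expected = CERTIFIED_BINDINGS.get(ip_addr)
    if expected is None:
        return False, "unknown IP address: reply held pending certification"
    if claimed_mac != expected:
        return False, "MAC mismatch: possible ARP spoofing, reply blocked"
    return True, "binding confirmed: ARP cache may be updated"

print(validate_arp_reply("192.168.1.10", "aa:bb:cc:dd:ee:10"))  # legitimate reply
print(validate_arp_reply("192.168.1.10", "66:66:66:66:66:66"))  # spoofed reply is rejected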
One way to prevent hidden node attacks is to use a network tool such as NetEqualizer. Such a tool monitors access point traffic and develops a baseline level of traffic. Any spikes in CTS/RTS signals are assumed to be the result of a hidden node attack and are subsequently blocked. De-auth (de-authentication) attack Any client entering a wireless network must first authenticate with an access point (AP) and is thereafter associated with that access point. When the client leaves, it sends a deauthentication, or deauth, message to disassociate itself from the access point. An attacker can send deauth messages to an access point that are spoofed to appear to come from its clients' addresses, thereby knocking the users offline and forcing them to repeatedly re-authenticate, giving the attacker valuable insight into the re-authentication handshaking that occurs. To mitigate this attack, the access point can be set up to delay the effects of deauthentication or disassociation requests (e.g., by queuing such requests for 5–10 seconds), thereby giving the access point an opportunity to observe subsequent packets from the client. If a data packet arrives after a deauthentication or disassociation request is queued, that request is discarded since a legitimate client would never generate packets in that order. References Computer network security Link protocols
Link layer security
Engineering
1,645
45,180,368
https://en.wikipedia.org/wiki/Lithium%20helide
Lithium helide is a compound of helium and lithium with the formula LiHe. The substance is a cold low-density gas made of Van der Waals molecules, each composed of a helium atom and lithium atom bound by van der Waals force. The preparation of LiHe opens up the possibility to prepare other helium dimers, and beyond that multi-atom clusters that could be used to investigate Efimov states and Casimir retardation effects. Detection It was detected in 2013. Previously ⁷Li⁴He was predicted to have a binding energy of 0.0039 cm⁻¹ (7.7×10⁻⁸ eV, 1.2×10⁻²⁶ J, or 6 mK), and a bond length of 28 Å. Other van der Waals-bound helium molecules were previously known including Ag3He and He₂. Detection of LiHe was done via fluorescence. The lithium atom in the X²Σ state was excited to A²Π. The spectrum showed a pair of lines, each split into two with the hyperfine structure of ⁷Li. The lines had wavenumbers of 14902.563, 14902.591, 14902.740, and 14902.768 cm⁻¹. The two pairs are separated by 0.177 cm⁻¹. This is explained by two different vibrational states of the LiHe molecule: 1/2 and 3/2. The bonding between the atoms is so low that it cannot withstand any rotation or greater vibration without breaking apart. The lowest rotation states would have energies of 40 and 80 mK, greater than the binding energy around 6 mK. LiHe was formed by laser ablation of lithium metal into a cryogenic helium buffer gas at a temperature between 1 and 5 K. The proportion of LiHe molecules was proportional to the density of He, and declined as the temperature increased. Properties LiHe is polar and paramagnetic. The average separation between the lithium and helium atoms depends on the isotope. For ⁶LiHe the separation is 48.53 Å, but for ⁷LiHe the distance is much smaller at 28.15 Å on average. If the helium atom of LiHe is excited so that the 1s electron is promoted to 2s, it decays by transferring energy to ionise lithium, and the molecule breaks up. This is called interatomic Coulombic decay. The energy of the Li⁺ and He decay products is distributed in a curve that oscillates up and down about a dozen times. See also Helium compounds#Predicted compounds References Helium compounds Lithium compounds Van der Waals molecules
Lithium helide
Physics,Chemistry
530
5,421,591
https://en.wikipedia.org/wiki/Artin%20billiard
In mathematics and physics, the Artin billiard is a type of dynamical billiard first studied by Emil Artin in 1924. It describes the geodesic motion of a free particle on the non-compact Riemann surface H/Γ, where H is the upper half-plane endowed with the Poincaré metric and Γ = PSL(2, Z) is the modular group. It can be viewed as the motion on the fundamental domain of the modular group with the sides identified. The system is notable in that it is an exactly solvable system that is strongly chaotic: it is not only ergodic, but is also strongly mixing. As such, it is an example of an Anosov flow. Artin's paper used symbolic dynamics for analysis of the system. The quantum mechanical version of Artin's billiard is also exactly solvable. The eigenvalue spectrum consists of a bound state and a continuous spectrum above the energy E = 1/4. The wave functions are given by Bessel functions. Exposition The motion studied is that of a free particle sliding frictionlessly, namely, one having the Hamiltonian H = (1/2m) p_i p_j g^{ij}(q), where m is the mass of the particle, the q^i are the coordinates on the manifold, the p_i are the conjugate momenta p_i = m g_{ij} dq^j/dt, and g^{ij} is the (inverse) metric tensor on the manifold. Because this is the free-particle Hamiltonian, the solution to the Hamilton-Jacobi equations of motion is simply given by the geodesics on the manifold. In the case of the Artin billiard, the metric is given by the canonical Poincaré metric ds² = (dx² + dy²)/y² on the upper half-plane H = {z = x + iy : y > 0}. The non-compact Riemann surface H/Γ is a symmetric space, and is defined as the quotient of the upper half-plane modulo the action of the elements of PSL(2, Z) acting as Möbius transformations. The set D = {z in H : |z| ≥ 1, |Re z| ≤ 1/2} is a fundamental domain for this action. The manifold has, of course, one cusp. This is the same manifold, when taken as the complex manifold, that is the space on which elliptic curves and modular functions are studied. References E. Artin, "Ein mechanisches System mit quasi-ergodischen Bahnen", Abh. Math. Sem. d. Hamburgischen Universität, 3 (1924) pp. 170-175. Chaotic maps Ergodic theory
Artin billiard
Mathematics
455
9,073,015
https://en.wikipedia.org/wiki/Triangular%20interval
The triangular interval (also known as the lateral triangular space, lower triangular space, and triceps hiatus) is a space found in the axilla. It is one of the three intermuscular spaces found in the axillary space. The other two spaces are the quadrangular space and the triangular space. Borders Two of its borders are as follows: teres major - superior long head of the triceps brachii - medial Some sources state the lateral border is the humerus, while others define it as the lateral head of the triceps. (The effective difference is relatively minor, though.) Contents Its contents are as follows: Radial nerve Profunda brachii The radial nerve is visible through the triangular interval, on its way to the posterior compartment of the arm. Profunda brachii also passes through the triangular interval from anterior to posterior. Triangular Interval Syndrome Triangular Interval Syndrome (TIS) was described as a differential diagnosis for radicular pain in the upper extremity. It is a condition where the radial nerve is entrapped in the triangular interval resulting in upper extremity radicular pain. The radial nerve and profunda brachii pass through the triangular interval and are hence vulnerable. The triangular interval has a potential for compromise secondary to alterations in thickness of the teres major and triceps. Cadaveric studies have described fibrous bands as commonly present between the teres major and triceps. When these bands were present, rotation of the shoulder caused a reduction in cross-sectional area of the space. Normal resting postures of humeral adduction and internal rotation with scapular protraction may be speculated as a precedent for teres major contractures owing to the shortened position of the muscle in this posture. In addition, hypertrophy of this muscle can occur secondary to weight training and potentially compromise the triangular interval with resultant entrapment of the radial nerve. Shoulder dysfunctions have a potential for shortening and hypertrophy of the teres major. Shoulders that exhibit stiffness, secondary to capsular tightness, contribute to contracture and hypertrophy of the teres major. Hence, restricted external rotation can encourage adaptive shortening and thickening of the internal rotators of the shoulder, principally the teres major and subscapularis. One may speculate that the lateral arm pain presented in shoulder dysfunctions may be of a nerve origin secondary to adverse neural tension of the radial nerve. The triceps brachii has a potential to entrap the radial nerve in the triangular interval secondary to hypertrophy. The presence of a fibrous arch in the long head and lateral head further complicates the situation. Repeated forceful extension seen in weight training and sport involving punching may be a precedent to this scenario. The radial nerve is vulnerable as it passes through this space, for all of the reasons mentioned above. See also Axillary spaces Quadrangular space Triangular space References External links Photo at ithaca.edu Anatomy
Triangular interval
Biology
621
3,137,102
https://en.wikipedia.org/wiki/Vaginal%20ring
Vaginal rings (also known as intravaginal rings, or V-Rings) are polymeric drug delivery devices designed to provide controlled release of drugs for intravaginal administration over extended periods of time. The ring is inserted into the vagina and provides contraceptive protection. Vaginal rings come in one size that fits most people. Types Several vaginal ring products are currently available, including: Vaginal rings as treatment of peri-menopausal symptoms: Estring - a low-dose estradiol-releasing ring, manufactured from silicone elastomer, for the treatment of vaginal atrophy (atrophic vaginitis). Femring - a low-dose estradiol-acetate releasing ring, manufactured from silicone elastomer, for the relief of hot flashes and vaginal atrophy associated with menopause. Vaginal rings as contraception: NuvaRing - a low-dose contraceptive vaginal ring, manufactured from poly(ethylene-co-vinyl acetate), and releasing etonogestrel (a progestin) and ethinylestradiol (an estrogen). Progering - containing progesterone as the sole ingredient, available only in Chile and Peru. Annovera - a contraceptive vaginal ring, manufactured from methyl siloxane-based polymers, and releasing segesterone acetate (a progestin) and ethinylestradiol (an estrogen). A number of other vaginal ring products are also in development. Contraception The combined hormonal contraceptive vaginal ring is self-administered once a month. Leaving the ring in for three weeks slowly releases hormones into the body, mainly vaginally administered estrogens and/or progestogens (a group of hormones including progesterone) - the same hormones used in birth control pills. These hormones work mostly by stopping ovulation and thickening the cervical mucus, creating a barrier preventing sperm from fertilizing an egg. They could theoretically affect implantation but no evidence shows that they do. Worn continuously for three weeks followed by a week off, each vaginal ring provides anywhere from one month (NuvaRing) to one year (Annovera and Progering) of birth control. For continuous-use contraception, users can also choose to wear the vaginal ring for the full four-week cycle. This manner of contraception will eliminate monthly periods. Throughout the additional week, the serum hormone levels will remain in the contraceptive range. When compared with combined hormonal pills, the combined hormonal vaginal ring offers potentially better cycle control and treatment of heavy menstrual bleeding. However, both methods are effective short-term treatments in the reproductive age group. Vaginal rings may lead to increased normal vaginal secretions, decreased body weight, reduced symptoms of PMS, and occasionally cases of vaginitis, device-related problems and leukorrhea. Because they release estrogen, vaginal rings carry an increased risk of heart attack, stroke, and other serious side effects. Additionally, certain medicines and supplements, such as the antibiotic rifampin, the anti-fungal griseofulvin, anti-seizure medicines, St. John's wort, and HIV medicines, may compromise the effectiveness of vaginal rings. Vaginal rings do not protect users from sexually transmitted diseases. The only contraceptive measures that do so are latex or polyurethane condoms. The contraceptive vaginal ring has a failure rate of 0.3% when used as prescribed and 9% when used typically.
The correlation between breast cancer and the use of vaginal rings is under investigation, but recent literature suggests that the hormones used in vaginal rings have little, if any, relation to the risk of developing breast cancer. Methods of use Vaginal rings are easily inserted and removed. Vaginal walls hold them in place. Although their exact location within the vagina is not critical for clinical efficacy, rings commonly reside next to the cervix, and the deeper the placement in the vagina, the less likely the ring will be felt. Rings are typically left in place during intercourse, and most couples report no interference or discomfort. In many cases, neither partner feels the presence of the ring. Rings can be removed prior to intercourse, but, in the case of the contraceptive NuvaRing, only for one to three hours to maintain efficacy of birth control. If the ring is out for more than 48 hours, back-up contraception is necessary for seven days. It typically takes between one and two months for a user's cycle to return to normal after the use of a vaginal ring is stopped. Estring - Estring is inserted into the vagina and left in place for three months, after which it is removed and replaced with a fresh ring. Femring - Femring is inserted into the vagina and left in place for three months, after which it is removed and replaced with a fresh ring. NuvaRing - NuvaRing is inserted into the vagina and left in place for three weeks, after which it is removed and discarded for a 'ring-free' week to allow menstruation to occur. At the end of that week, a new NuvaRing is inserted. Annovera - The Annovera ring is inserted into the vagina and left in place for three weeks, after which it is removed for one week. Unlike the NuvaRing, the same Annovera ring is reinserted one week later. A single Annovera ring is used for three-week cycles for a total of 13 cycles. References External links Estring Femring Nuvaring Hormonal contraception Drug delivery devices Dosage forms Vagina
Vaginal ring
Chemistry
1,195
58,955,645
https://en.wikipedia.org/wiki/Biochemical%20Genetics%20%28journal%29
Biochemical Genetics is a bimonthly peer-reviewed scientific journal covering molecular biology as it relates to genetics. It was established in 1967 and is published by Springer Science+Business Media. The editor-in-chief is Luís Filipe Dias e Silva (University of the Azores). According to the Journal Citation Reports, the journal has a 2021 impact factor of 2.220. References External links Academic journals established in 1967 Genetics journals Monthly journals Springer Science+Business Media academic journals Molecular and cellular biology journals English-language journals
Biochemical Genetics (journal)
Chemistry
108
83,180
https://en.wikipedia.org/wiki/United%20States%20Army%20Corps%20of%20Engineers
The United States Army Corps of Engineers (USACE) is the military engineering branch of the United States Army. A direct reporting unit (DRU), it has three primary mission areas: Engineer Regiment, military construction, and civil works. USACE has 37,000 civilian and military personnel, making it one of the world's largest public engineering, design, and construction management agencies. The USACE workforce is approximately 97% civilian, 3% active duty military. The civilian workforce is mainly located in the United States, Europe and in select Middle East office locations. Civilians do not function as active duty military and are not required to be in active war and combat zones; however, volunteer (with pay) opportunities do exist for civilians to do so. The day-to-day activities of the three mission areas are administered by a lieutenant general known as the chief of engineers/commanding general. The chief of engineers commands the Engineer Regiment, comprising combat engineer, rescue, construction, dive, and other specialty units, and answers directly to the Chief of Staff of the Army. Combat engineers, sometimes called sappers, form an integral part of the Army's combined arms team and are found in all Army service components: Regular Army, National Guard, and Army Reserve. Their duties are to breach obstacles; construct fighting positions, fixed/floating bridges, and obstacles and defensive positions; place and detonate explosives; conduct route clearance operations; emplace and detect landmines; and fight as provisional infantry when required. For the military construction mission, the chief of engineers is directed and supervised by the Assistant Secretary of the Army for installations, environment, and energy, whom the President appoints and the Senate confirms. Military construction relates to construction on military bases and worldwide installations. On 16 June 1775, the Continental Congress, gathered in Philadelphia, granted authority for the creation of a "Chief Engineer for the Army". Congress authorized a corps of engineers for the United States on 11 March 1779. The Corps as it is known today came into being on 16 March 1802, when the president was authorized to "organize and establish a Corps of Engineers ... that the said Corps ... shall be stationed at West Point in the State of New York and shall constitute a Military Academy." A Corps of Topographical Engineers, authorized on 4 July 1838, merged with the Corps of Engineers in March 1863. Civil works are managed and supervised by the Assistant Secretary of the Army. Army civil works include three U.S. Congress-authorized business lines: navigation, flood and storm damage protection, and aquatic ecosystem restoration. Civil works is also tasked with administering the Clean Water Act Section 404 program, including recreation, hydropower, and water supply at USACE flood control reservoirs, and environmental infrastructure. The civil works staff oversee construction, operation, and maintenance of dams, canals and flood protection in the U.S., as well as a wide range of public works throughout the world. Some of its dams, reservoirs, and flood control projects also serve as public outdoor recreation facilities. Its hydroelectric projects provide 24% of U.S. hydropower capacity. The Corps of Engineers is headquartered in Washington, D.C., and has a budget of $7.8 billion (FY2021). 
The corps's mission is to "deliver vital public and military engineering services; partnering in peace and war to strengthen our nation's security, energize the economy and reduce risks from disasters." Its most visible civil works missions include: Planning, designing, building, and operating locks and dams. Other civil engineering projects include flood control, beach nourishment, and dredging for waterway navigation. Design and construction of flood protection systems through various federal mandates. Design and construction management of military facilities for the Army, Air Force, Army Reserve, and Air Force Reserve as well as other Department of Defense and federal government agencies. Environmental regulation and ecosystem restoration. History 18th century The history of United States Army Corps of Engineers can be traced back to the American Revolution. On 16 June 1775, the Continental Congress organized the Corps of Engineers, whose initial staff included a chief engineer and two assistants. Colonel Richard Gridley became General George Washington's first chief engineer. One of his first tasks was to build fortifications near Boston at Bunker Hill. The Continental Congress recognized the need for engineers trained in military fortifications and asked the government of King Louis XVI of France for assistance. Many of the early engineers in the Continental Army were former French officers. Louis Lebègue Duportail, a lieutenant colonel in the French Royal Corps of Engineers, was secretly sent to North America in March 1777 to serve in George Washington's Continental Army. In July 1777 he was appointed colonel and commander of all engineers in the Continental Army and, on 17 November 1777, he was promoted to brigadier general. When the Continental Congress created a separate Corps of Engineers in May 1779, Duportail was appointed as its commander. In late 1781 he directed the construction of the allied U.S.-French siege works at the Battle of Yorktown. On 26 February 1783, the Corps was disbanded. It was re-established during the Presidency of George Washington. From 1794 to 1802, the engineers were combined with the artillery as the Corps of Artillerists and Engineers. 19th century The Corps of Engineers, as it is known today, was established on 16 March 1802, when President Thomas Jefferson signed the Military Peace Establishment Act, whose aim was to "organize and establish a Corps of Engineers ... that the said Corps ... shall be stationed at West Point in the State of New York and shall constitute a military academy." Until 1866, the superintendent of the United States Military Academy was always an Engineer Officer. The General Survey Act of 1824 authorized the use of Army engineers to survey road and canal routes for the growing nation. That same year, Congress passed an "Act to Improve the Navigation of the Ohio and Mississippi Rivers" and to remove sand bars on the Ohio and "planters, sawyers, or snags" (trees fixed in the riverbed) on the Mississippi, for which the Corps of Engineers was identified as the responsible agency. Separately authorized on 4 July 1838, the Corps of Topographical Engineers consisted only of officers and was used for mapping and the design and construction of federal civil works and other coastal fortifications and navigational routes. It was merged with the Corps of Engineers on 31 March 1863, at which point the Corps of Engineers also assumed the Lakes Survey District mission for the Great Lakes. In 1841, Congress created the Lake Survey. 
The survey, based in Detroit, Michigan, was charged with conducting a hydrographical survey of the Northern and Northwestern lakes and preparing and publishing nautical charts and other navigation aids. The Lake Survey published its first charts in 1852. In the mid-19th century, Corps of Engineers' officers ran Lighthouse Districts in tandem with U.S. Naval officers. Civil War The Army Corps of Engineers played a significant role in the American Civil War. Many of the men who would serve in the top leadership in this organization were West Point graduates. Several rose to military fame and power during the Civil War. Some examples include Union generals George McClellan, Henry Halleck, and George Meade; and Confederate generals Robert E. Lee, Joseph Johnston, and P.G.T. Beauregard. The versatility of officers in the Army Corps of Engineers contributed to the success of numerous missions throughout the Civil War. They were responsible for building pontoon and railroad bridges, forts and batteries, destroying enemy supply lines (including railroads), and constructing roads for the movement of troops and supplies. Both sides recognized the critical work of engineers. On 6 March 1861, once the South had seceded from the Union, its legislature passed an act to create a Confederate Corps of Engineers. The South was initially at a disadvantage in engineering expertise; of the initial 65 cadets who resigned from West Point to accept positions with the Confederate Army, only seven were placed in the Corps of Engineers. The Confederate Congress passed legislation that authorized a company of engineers for every division in the field; by 1865, the CSA had more engineer officers serving in the field of action than the Union Army. One of the main projects for the Army Corps of Engineers was constructing railroads and bridges. Union forces took advantage of such Confederate infrastructure because railroads and bridges provided access to resources and industry. The Confederate engineers, using slave labor, built fortifications that were used both offensively and defensively, along with trenches that made them harder to penetrate. This method of building trenches was known as the zigzag pattern. 20th century The National Defense Act of 1916 authorized a reserve corps in the Army, and the Engineer Officers' Reserve Corps and the Engineer Enlisted Reserve Corps became one of the branches. Some of these personnel were called into active service for World War I. From the beginning, many politicians wanted the Corps of Engineers to contribute to both military construction and civil works. Assigned the military construction mission on 1 December 1941, after the Quartermaster Department struggled with the expanding mission, the Corps built facilities at home and abroad to support the U.S. Army and Air Force. During World War II the USACE program expanded to more than 27,000 military and industrial projects in a $15.3 billion mobilization effort. Included were aircraft, tank assembly, and ammunition plants; camps for 5.3 million soldiers; depots, ports, and hospitals; and the rapid construction of such landmark projects such as the Manhattan Project at Los Alamos, Hanford and Oak Ridge among other places, and the Pentagon, the Department of Defense headquarters across the Potomac from Washington, DC. In civilian projects, the Corps of Engineers became the lead federal navigation and flood control agency. 
Congress significantly expanded its civil works activities, becoming a major provider of hydroelectric energy and the country's leading provider of recreation. Its role in responding to natural disasters also grew dramatically, especially following the devastating Mississippi Flood of 1927. In the late 1960s, the agency became a leading environmental preservation and restoration agency. In 1944, specially trained army combat engineers were assigned to blow up underwater obstacles and clear defended ports during the invasion of Normandy. During World War II, the Army Corps of Engineers in the European Theater of Operations was responsible for building numerous bridges, including the first and longest floating tactical bridge across the Rhine at Remagen, and building or maintaining roads vital to the Allied advance across Europe into the heart of Germany. In the Pacific theater, the "Pioneer troops" were formed, a hand-selected unit of volunteer Army combat engineers trained in jungle warfare, knife fighting, and unarmed jujitsu (hand-to-hand combat) techniques. Working in camouflage, the Pioneers cleared jungle, prepared routes of advance and established bridgeheads for the infantry, as well as demolishing enemy installations. Five commanding generals (chiefs of staff after the 1903 reorganization) of the United States Army held engineer commissions early in their careers. All transferred to other branches before being promoted to the top position. They were Alexander Macomb, George B. McClellan, Henry W. Halleck, Douglas MacArthur, and Maxwell D. Taylor. Notable dates and projects The General Survey Act of 1824 authorized use of army engineers to survey roads and canals. The next month, an act to improve navigation on the Ohio and Mississippi rivers initiated the Corps of Engineers' permanent civil works construction mission. Although the 1824 act to improve the Mississippi and Ohio rivers is often called the first rivers and harbors legislation, the act passed in 1826 was the first to combine authorizations for both surveys and projects, thereby establishing a pattern that continues to the present day. Survey and construction of the National Road until Federal funds were withdrawn (1838) The tall Washington Monument, completed under the direction and command of Lieutenant Colonel Thomas Lincoln Casey, 1884 Panama Canal, completed under supervision of Army Engineer officers, 1914 Flood Control Act of 1936 made flood control a federal policy and officially recognized the Corps of Engineers as the major federal flood control agency Bonneville Dam, completed in 1937 Flood Control Act of 1941, which channelized the Los Angeles River and parts of the Santa Ana River USACE took over all real estate acquisition, construction, and maintenance for army facilities, 1941 There was no road between Costa Rica and Panama until the Corps began one in 1941–1943. The concern was access to the Panama Canal during wartime. 
The Manhattan Project (1942–1946) Planning and construction of the Pentagon, completed in 1943 just 16 months after groundbreaking Comprehensive Everglades Restoration Plan, first authorized by congress in 1948 USACE began construction support for NASA leading to major activities at the Manned Spacecraft Center and Kennedy Space Center, 1961 King Khalid Military City 1973–1987 The Water Resources Development Act of 1986 (WRDA 86) brought major change in financing by requiring non-federal contributions toward most federal water resource projects Cross Florida Barge Canal Tennessee-Tombigbee Waterway Occasional civil disasters, including the Great Mississippi Flood of 1927, resulted in greater responsibilities for the Corps of Engineers. The aftermath of Hurricane Katrina in New Orleans and the collapse of the Francis Scott Key Bridge in Baltimore provide other examples of this. Organization Headquarters The Chief of Engineers and Commanding General (Lt. general) of U.S. Army Corps of Engineers has three mission areas: combat engineers, military construction, and civil works. For each mission area the Chief of Engineers/Commanding General is supervised by a different person. For civil works the Commanding General is supervised by the civilian Assistant Secretary of the Army (Civil Works). Three deputy commanding generals (major generals) report to the chief of engineers, who have the following titles: Deputy Commanding General, Deputy Commanding General for Civil and Emergency Operation, and Deputy Commanding General for Military and International Operations. The Corps of Engineers headquarters is located in Washington, D.C. The headquarters staff is responsible for Corps of Engineers policy and plans the future direction of all other USACE organizations. It comprises the executive office and 17 staff principals. USACE has two civilian directors who head up Military and Civil Works programs in concert with their respective DCG for the mission area. Divisions and districts The U.S. Army Corps of Engineers is organized geographically into eight permanent divisions, one provisional division, one provisional district, and one research command reporting directly to the HQ. Within each division, there are several districts. Districts are defined by watershed boundaries for civil works projects and by political boundaries for military projects. Great Lakes and Ohio River Division (LRD), located in Cincinnati. Reaches from the St Lawrence Seaway, across the Great Lakes, down the Ohio River Valley to the Tennessee and Cumberland rivers. Covers , parts of 17 states. Serves 56 million people. Its seven districts are located in Buffalo, Chicago, Detroit, Louisville, Nashville, Pittsburgh, and Huntington, West Virginia. The division commander serves on two national and international decision-making bodies: co-chair of the Lake Superior, Niagara, and Ontario/St Lawrence Seaway boards of control; and the Mississippi River Commission. Mississippi Valley Division (MVD), located in Vicksburg, Mississippi. Reaches from Canada to the Gulf of Mexico. Covers , and portions of 12 states bordering the Mississippi River. Serves 28 million people. Its six districts are located in St. Paul, Minnesota, Rock Island, Illinois, St. Louis, Memphis, Vicksburg, and New Orleans. MVD serves as headquarters for the Mississippi River Commission. North Atlantic Division (NAD), headquartered at Fort Hamilton in Brooklyn, New York. 
Reaches from Maine to Virginia, including the District of Columbia, with an overseas mission to provide engineering, construction, and project management services to the U.S. European Command and U.S. Africa Command. Serves 62 million people. Its six districts are located in New York City, Philadelphia, Baltimore, Norfolk, Concord, Massachusetts, and Wiesbaden, Germany. NAD has the largest Superfund program in USACE with 60% of the funding. NAD's Europe District has done work in dozens of countries and has offices in Germany, Belgium, Italy, Turkey, Georgia, Romania, Bulgaria, Israel, Spain, and soon Botswana. Northwestern Division (NWD), located in Portland, Oregon. Reaches from Canada to California, and from the Pacific Ocean to Missouri. Covers nearly in all or parts of 14 states. Its five districts are located in Omaha, Portland, Seattle, Kansas City, and Walla Walla. NWD has 35% of the total Corps of Engineers' water storage capacity and 75% of the total hydroelectric capacity. Pacific Ocean Division (POD), located at Fort Shafter, Hawaii. Reaches across 12 million square miles of the Pacific Ocean from the Arctic Circle to American Samoa below the equator and across the International Date Line, and into Asia. Includes the territories of Guam, American Samoa and the Commonwealth of the Northern Mariana Islands as well as the Freely Associated States including the Republic of Palau, Federated States of Micronesia and the Republic of the Marshall Islands. Its four districts are located in Japan; Seoul, South Korea; Anchorage, Alaska; and Honolulu. Unlike other military work, POD designs and builds for all of the military services — Army, Navy, Air Force, Marines — in Japan, Korea, and Kwajalein Atoll. South Atlantic Division (SAD), located in Atlanta. Reaches from North Carolina to Alabama as well as the Caribbean, and Central and South America. Covers all or parts of six states. Its five districts are located in Wilmington, North Carolina, Charleston, South Carolina, Savannah, Jacksonville, and Mobile. One-third of the stateside Army and one-fifth of the stateside Air Force are located within the division boundaries. The largest single environmental restoration project in the world — the Everglades Restoration — is managed by SAD. South Pacific Division (SPD), located in San Francisco. Reaches from California to Colorado and New Mexico. Covers all or parts of seven states. Its four districts are located in Albuquerque, Los Angeles, Sacramento, and San Francisco. Its region is host to 18 of the 25 fastest-growing metropolitan areas in the nation. Southwestern Division (SWD), located in Dallas. Reaches from Mexico to Kansas. Covers all or part of seven states. Its four districts are located in Little Rock, Tulsa, Galveston, and Fort Worth. SWD's recreation areas are the most visited in USACE with more than of shoreline and 1,172 recreation sites. Transatlantic Division (TAD), located in Winchester, Virginia. Supports Federal programs and policies overseas. Consists of the Gulf Region District, the Afghanistan Engineer District South, the Afghanistan Engineer District North, the Middle East District, the USACE Deployment Center and the TAD G2 Intelligence Fusion Center. TAD oversees thousands of projects overseas. TAD overseas locations are staffed primarily by civilian volunteers from throughout USACE. The Corps of Engineers built much of Afghanistan's original Ring Road in the early 1960s and returned in 2002. 
Supports the full spectrum of regional support, including the Afghan National Security Forces, U.S. and Coalition Forces, Counter Narcotics and Border Management, Strategic Reconstruction support to USAID, and the Commander's Emergency Response Program. The Engineer Regiment U.S. Army engineer units outside of USACE Districts and not listed below fall under the Engineer Regiment of the U.S. Army Corps of Engineers, which comprises the majority of Army engineer soldiers. The Regiment includes combat engineers, whose duties are to breach obstacles; construct fighting positions, fixed/floating bridges, and obstacles and defensive positions; place and detonate explosives; conduct route clearance operations; emplace and detect landmines; and fight as provisional infantry when required. It also includes support engineers, who are more focused on construction and sustainment. Headquartered at Fort Leonard Wood, MO, the Engineer Regiment is commanded by the Engineer Commandant, currently a position filled by an Army brigadier general. The Engineer Regiment includes the U.S. Army Engineer School (USAES) which publishes its mission as: Generate the military engineer capabilities the Army needs: training and certifying Soldiers with the right knowledge, skills, and critical thinking; growing and educating professional leaders; organizing and equipping units; establishing a doctrinal framework for employing capabilities; and remaining an adaptive institution in order to provide Commanders with the freedom of action they need to successfully execute Unified Land Operations. Other USACE organizations There are several other organizations within the Corps of Engineers: Engineer Research and Development Center (ERDC) — the Corps of Engineers research and development command. ERDC comprises seven laboratories. (see research below) U.S. Army Engineering and Support Center (CEHNC) — provides engineering and technical services, program and project management, construction management, and innovative contracting initiatives, for programs that are national or broad in scope or not normally provided by other Corps of Engineers elements Finance Center, U.S. Army Corps of Engineers (CEFC) — supports the operating finance and accounting functions throughout the Corps of Engineers Humphreys Engineer Center Support Activity (HECSA) — provides administrative and operational support for Headquarters, U.S. Army Corps of Engineers and various field offices. Army Geospatial Center (AGC)  — provides geospatial information, standards, systems, support, and services across the Army and the Department of Defense. Marine Design Center (CEMDC) — provides total project management including planning, engineering, and shipbuilding contract management in support of USACE, Army, and national water resource projects in peacetime, and augments the military construction capacity in time of national emergency or mobilization Institute for Water Resources (IWR) — supports the Civil Works Directorate and other Corps of Engineers commands by developing and applying new planning evaluation methods, policies and data in anticipation of changing water resources management conditions. USACE Logistics Activity (ULA)- Provides logistics support to the Corps of Engineers including supply, maintenance, readiness, materiel, transportation, travel, aviation, facility management, integrated logistics support, management controls, and strategic planning. 
Enterprise Infrastructure Services (CEEIS) — designs information technology standards for the Corps, including automation, communications, management, visual information, printing, records management, and information assurance. CEEIS outsources the maintenance of its IT services, forming the Army Corps of Engineers Information Technology (ACE-IT). ACE-IT is made up of both civilian government employees and contractors. Deployable Tactical Operations System (DTOS) — provides mobile command and control platforms in support of the quick ramp-up of initial emergency response missions for the Corps. DTOS is a system designed to respond to District, Division, National, and International events. Until 2001 local Directorates of Engineering and Housing (DEH), being constituents of the USACE, had been responsible for the housing, infrastructure and related tasks as environmental protection, garbage removal and special fire departments or fire alarm coordination centers in the garrisons of the U.S. Army abroad as in Europe (e.g. Germany, as in Berlin, Wiesbaden, Karlsruhe etc.) Subsequently, a similar structure called DPWs (Directorates of Public Works), subordinate to the United States Army Installation Management Command, assumed the tasks formerly done by the DEHs. Directly reporting military units 249th Engineer Battalion (Prime Power) — generates and distributes prime electrical power in support of fighting wars, disaster relief, stability and support operations as well as provides advice and technical assistance in all aspects of electrical power and distribution systems. 911th Engineer Company — (formerly the MDW Engineer Company) provides specialized technical search and rescue support for the Washington, D.C. metropolitan area; it is also a vital support member of the Joint Force Headquarters National Capital Region, which is charged with the homeland security of the United States capital region. 412th Theater Engineer Command, U.S. Army Reserve, located in Vicksburg, MS. 416th Theater Engineer Command, U.S. Army Reserve, located in Darien, IL. Mission areas Warfighting USACE provides support directly and indirectly to the warfighting effort. They build and help maintain much of the infrastructure that the Army and the Air Force use to train, house, and deploy troops. USACE built and maintained navigation systems and ports provide the means to deploy vital equipment and other material. Corps of Engineers Research and Development (R&D) facilities help develop new methods and measures for deployment, force protection, terrain analysis, mapping, and other support. USACE directly supports the military in the battle zone, making expertise available to commanders to help solve or avoid engineering (and other) problems. Forward Engineer Support Teams, FEST-A's or FEST-M's, may accompany combat engineers to provide immediate support, or to reach electronically into the rest of USACE for the necessary expertise. A FEST-A team is an eight-person detachment; a FEST-M is approximately 36. These teams are designed to provide immediate technical-engineering support to the warfighter or in a disaster area. Corps of Engineers' professionals use the knowledge and skills honed on both military and civil projects to support the U.S. and local communities in the areas of real estate, contracting, mapping, construction, logistics, engineering, and management experience. 
Prior to their respective troop withdrawals in 2021, this included support for rebuilding Iraq, establishing infrastructure in Afghanistan, and supporting international and inter-agency services. In addition, the work of almost 26,000 civilians on civil-works programs throughout USACE provides a training ground for similar capabilities worldwide. USACE civilians volunteer for assignments worldwide. For example, hydropower experts have helped repair, renovate, and run hydropower dams in Iraq in an effort to help get Iraqis to become self-sustaining. Homeland security USACE supports the United States' Department of Homeland Security and the Federal Emergency Management Agency (FEMA) through its security planning, force protection, research and development, disaster preparedness efforts, and quick response to emergencies and disasters. The CoE conducts its emergency response activities under two basic authorities — the Flood Control and Coastal Emergency Act (), and the Stafford Disaster Relief and Emergency Assistance Act (). In a typical year, the Corps of Engineers responds to more than 30 Presidential disaster declarations, plus numerous state and local emergencies. Emergency responses usually involve cooperation with other military elements and Federal agencies in support of State and local efforts. Infrastructure support Work comprises engineering and management support to military installations, global real estate support, civil works support (including risk and priorities), operations and maintenance of Federal navigation and flood control projects, and monitoring of dams and levees. More than 67 percent of the goods consumed by Americans and more than half of the nation's oil imports are processed through deepwater ports maintained by the Corps of Engineers, which maintains more than of commercially navigable channels across the U.S. In both its Civil Works mission and Military Construction program, the Corps of Engineers is responsible for billions of dollars of the nation's infrastructure. For example, USACE maintains direct control of 609 dams, maintains or operates 257 navigation locks, and operates 75 hydroelectric facilities generating 24% of the nation's hydropower and three percent of its total electricity. USACE inspects over 2,000 Federal and non-Federal levees every two years. Four billion gallons of water per day are drawn from the Corps of Engineers' 136 multi-use flood control projects comprising of water storage, making it one of the United States' largest water supply agencies. The 249th Engineer Battalion (Prime Power), the only active duty unit in USACE, generates and distributes prime electrical power in support of warfighting, disaster relief, stability and support operations as well as provides advice and technical assistance in all aspects of electrical power and distribution systems. The battalion deployed in support of recovery operations after 9/11 and was instrumental in getting Wall Street back up and running within a week. The battalion also deployed in support of post-Katrina operations. All of this work represents a significant investment in the nation's resources. Water resources Through its Civil Works program, USACE carries out a wide array of projects that provide coastal protection, flood protection, hydropower, navigable waters and ports, recreational opportunities, and water supply. Work includes coastal protection and restoration, including a new emphasis on a more holistic approach to risk management. 
As part of this work, USACE is one of the top providers of outdoor recreation in the U.S., so there is a significant emphasis on water safety. Army involvement in works "of a civil nature," including water resources, goes back almost to the origins of the U.S. Over the years, as the nation's needs have changed, so have the Army's Civil Works missions. Major areas of emphasis include the following: Navigation. Supporting navigation by maintaining and improving channels was the Corps of Engineers' earliest Civil Works mission, dating to Federal laws in 1824 authorizing the Corps to improve safety on the Ohio and Mississippi Rivers and several ports. Today, the Corps of Engineers maintains more than of inland waterways and operates 235 locks. These waterways—a system of rivers, lakes and coastal bays improved for commercial and recreational transportation—carry about of the nation's inter-city freight, at a cost per ton-mile about that of rail or that of trucks. USACE also maintains 300 commercial harbors, through which pass of cargo a year, and more than 600 smaller harbors. New locks are needed, according to the Corps and barge shippers, where existing locks are in poor condition, requiring frequent closures for repairs, and/or because a lock's size causes delays for barge tows. Flood Risk Management. The Engineers were first called upon to address flood problems along the Mississippi river in the mid-19th century. They began work on the Mississippi River and Tributaries Flood Control Project in 1928, and the Flood Control Act of 1936 gave the Corps the mission to provide flood protection to the entire country. Recreation. The Corps of Engineers is the nation's largest provider of outdoor recreation, operating more than 2,500 recreation areas at 463 projects (mostly lakes) and leasing an additional 1,800 sites to state or local park and recreation authorities or private interests. USACE hosts about 260 million visits a year at its lakes, beaches and other areas, and estimates that 25 million Americans (one in ten) visit a Corps' project at least once a year. Supporting visitors to these recreation areas generates 600,000 jobs. Hydroelectric Power. The Corps of Engineers was first authorized to build hydroelectric plants in the 1920s, and today operates 75 power plants, producing one fourth of the nation's hydro-electric power—or three percent of its total electric energy. This makes USACE the fifth largest electric supplier in the United States. Shore Protection. With a large proportion of the U.S. population living near our sea and lake shores, and an estimated 75% of U.S. vacations being spent at the beach, there has been Federal interest — and a Corps of Engineers mission — in protecting these areas from hurricane and coastal storm damage. Dam Safety. The Corps of Engineers develops engineering criteria for safe dams, and conducts an active inspection program of its own dams. Water Supply. The Corps first got involved in water supply in the 1850s, when they built the Washington Aqueduct. Today USACE reservoirs supply water to nearly 10 million people in 115 cities. In the drier parts of the Nation, water from Corps reservoirs is also used for agriculture. Water Safety. The Corps of Engineers has taken an interest in recreational water safety, with current initiatives for increasing the use rate of life jackets and preventing the use of alcohol while boating. Environment The U.S. Army Corps of Engineers environmental mission has two major focus areas: restoration and stewardship. 
The Corps supports and manages numerous environmental programs that run the gamut from cleaning up areas on former military installations contaminated by hazardous waste or munitions to helping establish/reestablish wetlands that help endangered species survive. Some of these programs include Ecosystem Restoration, Formerly Used Defense Sites, Environmental Stewardship, EPA Superfund, Abandoned Mine Lands, Formerly Utilized Sites Remedial Action Program, Base Realignment and Closure, 2005, and Regulatory. This mission includes education as well as regulation and cleanup. The U.S. Army Corps of Engineers has an active environmental program under both its Military and Civil Programs. The Civil Works environmental mission ensures that all USACE projects, facilities and associated lands meet environmental standards. The program has four functions: compliance, restoration, prevention, and conservation. The Corps also regulates all work in wetlands and waters of the United States. The Military Programs Environmental Program manages design and execution of a full range of cleanup and protection activities: it cleans up sites contaminated with hazardous waste, radioactive waste, or ordnance; complies with federal, state, and local environmental laws and regulations; strives to minimize the use of hazardous materials; and conserves natural and cultural resources. The following are major areas of environmental emphasis: Wetlands and Waterways Regulation and Permitting; Ecosystem Restoration; Environmental Stewardship; Radioactive site cleanup through the Formerly Utilized Sites Remedial Action Program (FUSRAP); Base Realignment and Closure (BRAC); Formerly Used Defense Sites (FUDS); and Support to EPA's Superfund Program. See also Environmental Enforcement below. Sustainability The Army adopted a sustainability policy in the early 2000s to make military bases, and the force as a whole, more resilient and less dependent on fossil fuels. Since the US military is one of the world's largest institutional energy consumers, this would have a significant impact on reducing waste, improving efficiency, and ensuring that public resources are used effectively. The Army has developed and adopted its own triple bottom line framework, shifting from the traditional "People, Planet, and Profit" to "Mission, Community, and Environment". To meet these new sustainability targets, it has implemented regulations such as designing all new projects to meet the LEED silver standard. Additional regulations are detailed in the Sustainable Design and Development Policy. The 2017 revision to the Sustainable Design and Development Policy outlines the updated goals and requirements the Army established in an effort to successfully complete the sustainability mission. Most of these requirements result in stricter regulations on the planning, design and construction of new projects and major renovations: siting and site development; energy performance and security; indoor and outdoor water use; metering, monitoring, and subsystem measurement; indoor environmental quality; waste and recyclables management; new and underused technologies; and commissioning and plans for operation. Many of these goals fall directly onto USACE, as it oversees most construction and maintenance of Army bases and infrastructure. 
To embrace the branch's movement toward sustainability, USACE added sustainability as an overarching mission with several specific focus areas: gaining expertise and becoming a leader in industry technology and advancement, primarily in areas surrounding construction and energy, to enable high-performance buildings and civil works projects as well as energy security; planning and implementing a number of approaches to mitigate the potential environmental changes due to the climate crisis, specifically with regard to the nation's water infrastructure; focusing on purchases that further the sustainability mission and prioritizing designs/technology that are recycled, bio-based, or benefit the environment; and releasing annual Sustainability Reports and Implementation Plans for accountability and to track progress toward achieving energy goals. This challenge is not without its difficulties. The first report issued in 2008 showed that 78% of new projects were built to the LEED silver standard (without actually getting the certification) instead of the 100% required. In addition, there was an 8.4% and 32% reduction in energy use intensity and water use, respectively, and a 35% increase in hazardous waste production. Later reports show some improvement toward resilience and sustainability. The 2020 Sustainability Report and Implementation Plan show a further 12% reduction in water use as well as a 35% total reduction in energy use intensity since 2003. Future projections show that USACE intends to continue to build on these focus areas and drive down its demands in areas such as fuel, electricity and water. Operational facts and figures Summary of facts and figures as of 2007, provided by the Corps of Engineers: One HQ, 8 Divisions, 2 Provisional Divisions, 45 Districts, 6 Centers, one active-duty unit, 2 Engineer Reserve Commands At work in more than 90 countries Supports 159 Army installations and 91 Air Force installations Owns and operates 609 dams Owns or operates 257 navigation lock chambers at 212 sites Largest owner-operator of hydroelectric plants in the US. Owns and operates 75 plants—24% of U.S. hydropower capacity (3% of the total U.S. electric capacity) Operates and maintains of commercial inland navigation channels Maintains 926 coast, Great Lakes, and inland harbors Dredges annually for construction or maintenance Nation's number one provider of outdoor recreation with more than 368 million visits annually to 4,485 sites at 423 USACE projects (383 major lakes and reservoirs) Total water supply storage capacity of Average annual damages prevented by Corps flood risk management projects (1995–2004) of $21 billion (see "Civil works controversies" below) Approximately 137 environmental protection projects under construction (September 2006 figure) Approximately of wetlands restored, created, enhanced, or preserved annually under the Corps' Regulatory Program Approximately $4 billion in technical services to 70 non-DoD Federal agencies annually Completed (and continuing work on) thousands of infrastructure projects in Iraq at an estimated cost over $9 billion: school projects (324,000 students), crude oil production, potable water projects (3.9 million people (goal 5.2 million)), fire stations, border posts, prison/courthouse improvements, transportation/communication projects, village road/expressways, railroad stations, postal facilities, and aviation projects. 
More than 90 percent of the USACE construction contracts have been awarded to Iraqi-owned businesses — offering employment opportunities, boosting the economy, providing jobs and training, and promoting stability and security where before there was none. Consequently, the mission is a central part of the U.S. exit strategy. The Corps of Engineers has one of the strongest Small Business Programs in the Army. Each year, approximately 33% of all contract dollars are obligated with Small Businesses, Small Disadvantaged Businesses, Service Disabled Veteran Owned Small Businesses, Women Owned Small Businesses, Historically Underutilized Business Zones, and Historically Black Colleges and Universities. Jackie Robinson-Burnette was named the Chief of the Corps' Small Business Program in May 2010. The program is managed through an integrated network of over 60 Small Business Advisors, 8 Division Commanders, 4 Center Directors, and 45 District Commanders. Environmental protection and regulatory program The regulatory program is authorized to protect the nation's aquatic resources. USACE personnel evaluate permit applications for essentially all construction activities that occur in the nation's waters, including wetlands. Two primary authorities granted to the Army Corps of Engineers by Congress fall under Section 10 of the Rivers and Harbors Act and Section 404 of the Clean Water Act. Section 10 of the Rivers and Harbors Act of 1899 (codified in Title 33, Section 403 of the United States Code) gave the Corps authority over navigable waters of the United States, defined as "those waters that are subject to the ebb and flow of the tide and/or are presently being used, or have been used in the past, or may be susceptible for use to transport interstate or foreign commerce." Section 10 covers construction, excavation, or deposition of materials in, over, or under such waters, or any work that would affect the course, location, condition or capacity of those waters. Actions requiring section 10 permits include structures (e.g., piers, wharfs, breakwaters, bulkheads, jetties, weirs, transmission lines) and work such as dredging or disposal of dredged material, or excavation, filling or other modifications to the navigable waters of the United States. The Coast Guard also has responsibility for permitting the erection or modification of bridges over navigable waters of the U.S. Another of the major responsibilities of the Army Corps of Engineers is administering the permitting program under Section 404 of the Federal Water Pollution Control Act of 1972, also known as the Clean Water Act. The Secretary of the Army is authorized under this act to issue permits for the discharge of dredged and fill material in waters of the United States, including adjacent wetlands. The geographic extent of waters of the United States subject to section 404 permits falls under a broader definition and includes tributaries to navigable waters and adjacent wetlands. The engineers must first determine if the waters at the project site are jurisdictional and subject to the requirements of the section 404 permitting program. Once jurisdiction has been established, permit review and authorization follows a sequence process that encourages avoidance of impacts, followed by minimizing impacts and, finally, requiring mitigation for unavoidable impacts to the aquatic environment. This sequence is described in the section 404(b)(1) guidelines. 
There are three types of permits issued by the Corps of Engineers: Nationwide, Regional General, and Individual. 80% of the permits issued are nationwide permits, which include 50 general types of activities for minimal impacts to waters of the United States, as published in the Federal Register. Nationwide permits are subject to a reauthorization process every 5 years, with the most recent reauthorization occurring in March, 2012. To gain authorization under a nationwide permit, an applicant must comply with the terms and conditions of the nationwide permit. Select nationwide permits require preconstruction notification to the applicable corps district office, notifying it of the applicant's intent, the type and amount of impact and fill in waters, and providing a site map. Although the nationwide process is fairly simple, corps approval must be obtained before commencing with any work in waters of the United States. Regional general permits are specific to each corps district office. Individual permits are generally required for projects that impact greater than of waters of the United States. Individual permits are required for activities that result in more than minimal impacts to the aquatic environment. Research The Corps of Engineers has two research organizations, the Engineer Research and Development Center (ERDC) and the Army Geospatial Center (AGC). ERDC provides science, technology, and expertise in engineering and environmental sciences to support both military and civilian customers. ERDC research support includes: Dam safety systems Mapping and topography terrain analysis Infrastructure design, construction, operations and maintenance Structural engineering Cold-regions science and engineering Coastal and hydraulic engineering, producing products such as HEC-RAS Environmental quality, including toxic chemistry of bay mud and other dredge spoils Geotechnical engineering Earthquake engineering High performance computing and information technology AGC coordinates, integrates, and synchronizes geospatial information requirements and standards across the Army and provides direct geospatial support and products to warfighters. See also Geospatial Information Officer. Insignia The Corps of Engineers branch insignia, the Corps Castle, is believed to have originated on an informal basis. In 1841, cadets at West Point wore insignia of this type. In 1902, the Castle was formally adopted by the Corps of Engineers as branch insignia. The "castle" is actually the Pershing Barracks at the United States Military Academy in West Point, New York. A current tradition was established with the "Gold Castles" branch insignia of General of the Army Douglas MacArthur, West Point Class of 1903, who served in the Corps of Engineers early in his career and had received the two pins as a graduation gift from his family. In 1945, near the conclusion of World War II, General MacArthur gave his personal pins to his Chief Engineer, General Leif J. Sverdrup. On 2 May 1975, upon the 200th anniversary of the Corps of Engineers, retired General Sverdrup, who had civil engineering projects including the landmark -long Chesapeake Bay Bridge-Tunnel to his credit, presented the Gold Castles to then-Chief of Engineers Lieutenant General William C. Gribble, Jr., who had also served under General MacArthur in the Pacific. General Gribble then announced a tradition of passing the insignia along to future Chiefs of Engineers, and this has been done ever since. 
Controversies Civil works Some of the Corps of Engineers' civil works projects have been characterized in the press as being pork barrel or boondoggles such as the New Madrid Floodway Project and the New Orleans flood protection. Projects have allegedly been justified based on flawed or manipulated analyses during the planning phase. Some projects are said to have created profound detrimental environmental effects or provided questionable economic benefit such as the Mississippi River–Gulf Outlet in southeast Louisiana. Faulty design and substandard construction have been cited in the failure of levees in the wake of Hurricane Katrina that caused flooding of 80% of the city of New Orleans. Review of Corps of Engineers' projects has also been criticized for its lack of impartiality. The investigation of levee failure in New Orleans during Hurricane Katrina was sponsored by the American Society of Civil Engineers (ASCE) but funded by the Corps of Engineers and involved its employees. Corps of Engineers projects can be found in all 50 states, and are specifically authorized and funded directly by Congress. Local citizen, special interest, and political groups lobby Congress for authorization and appropriations for specific projects in their area. Senator Russ Feingold and Senator John McCain sponsored an amendment requiring peer review of Corps projects to the Water Resources Development Act of 2006, proclaiming "efforts to reform and add transparency to the way the U.S. Army Corps of Engineers receives funding for and undertakes water projects." A similar bill, the Water Resources Development Act of 2007, which included the text of the original Corps' peer review measure, was eventually passed by Congress in 2007, overriding Presidential veto. Military construction A number of Army camps and facilities designed by the Corps of Engineers, including the former Camp O'Ryan in New York State, have reportedly had a negative impact on the surrounding communities. Camp O'Ryan, with its rifle range, has possibly contaminated well and storm runoff water with lead. This runoff water eventually runs into the Niagara River and Lake Ontario, sources of drinking water to millions of people. This situation is exacerbated by a failure to locate the engineering and architectural plans for the camp, which were produced by the New York District in 1949. Greenhouse whistleblower suit Bunnatine "Bunny" Greenhouse, a formerly high-ranking official in the Corps of Engineers, won a lawsuit against the United States government in July 2011. Greenhouse had objected to the Corps accepting cost projections from KBR in a no-bid, noncompetitive contract. After she complained, Greenhouse was demoted from her Senior Executive Service position, stripped of her top secret security clearance, and even, according to Greenhouse, had her office booby-trapped with a trip-wire from which she sustained a knee injury. A U.S. District court awarded Greenhouse $970,000 in full restitution of lost wages, compensatory damages, and attorney fees. 
Units 412th Engineer Command 416th Engineer Command 1st Engineer Brigade 2nd Engineer Brigade 16th Engineer Brigade 18th Engineer Brigade 20th Engineer Brigade 35th Engineer Brigade 36th Engineer Brigade 130th Engineer Brigade 194th Engineer Brigade 372nd Engineer Brigade 411th Engineer Brigade 420th Engineer Brigade 555th Engineer Brigade 926th Engineer Brigade 1st Engineer Battalion 2nd Engineer Battalion 3rd Engineer Battalion 4th Engineer Battalion 5th Engineer Battalion 6th Engineer Battalion 7th Engineer Battalion 8th Engineer Battalion 9th Engineer Battalion 10th Engineer Battalion 11th Engineer Battalion 14th Engineer Battalion 15th Engineer Battalion 16th Engineer Battalion 19th Engineer Battalion 20th Engineer Battalion 21st Engineer Battalion 23rd Engineer Battalion 27th Engineer Battalion 29th Engineer Battalion 31st Engineer Battalion 35th Engineer Battalion 37th Engineer Battalion 39th Engineer Battalion 40th Engineer Battalion 41st Engineer Battalion 44th Engineer Battalion 46th Engineer Battalion 52nd Engineer Battalion 54th Engineer Battalion 62nd Engineer Battalion 65th Engineer Battalion 70th Engineer Battalion 82nd Engineer Battalion 84th Engineer Battalion 91st Engineer Battalion 92nd Engineer Battalion 94th Engineer Battalion 101st Engineer Battalion 103rd Engineer Battalion 107th Engineer Battalion 112th Engineer Battalion 120th Engineer Battalion 122nd Engineer Battalion 127th Engineer Battalion 130th Engineer Battalion 133rd Engineer Battalion 168th Engineer Battalion 169th Engineer Battalion 178th Engineer Battalion 204th Engineer Battalion 206th Engineer Battalion 216th Engineer Battalion 224th Engineer Battalion 227th Engineer Battalion 244th Engineer Battalion 249th Engineer Battalion 299th Engineer Battalion 307th Engineer Battalion 315th Engineer Battalion 317th Engineer Battalion 321st Engineer Battalion 326th Engineer Battalion 363rd Engineer Battalion 365th Engineer Battalion 367th Engineer Battalion 368th Engineer Battalion 389th Engineer Battalion 391st Engineer Battalion 397th Engineer Battalion 411th Engineer Battalion 448th Engineer Battalion 458th Engineer Battalion 467th Engineer Battalion 463rd Engineer Battalion 478th Engineer Battalion 479th Engineer Battalion 489th Engineer Battalion 528th Engineer Battalion 554th Engineer Battalion 572nd Engineer Battalion 588th Engineer Battalion 724th Engineer Battalion 837th Engineer Battalion 841st Engineer Battalion 844th Engineer Battalion 854th Engineer Battalion 863rd Engineer Battalion 864th Engineer Battalion 877th Engineer Battalion 961st Engineer Battalion 980th Engineer Battalion 983rd Engineer Battalion 1092nd Engineer Battalion 1203rd Engineer Battalion 1249th Engineer Battalion Notable personnel Charles Keller, former U.S. Army Brigadier General and the oldest Army officer to serve on active duty during World War II. Peter Conover Hains, former U.S. Army Major General and the oldest Army officer to serve on active duty during World War I. The only known person to serve in both the American Civil War and the First World War. See also Combat Pin for Civilian Service SDEF Title 33 of the Code of Federal Regulations United States Air Force Rapid Engineer Deployable Heavy Operational Repair Squadron Engineers United States Navy Seabees Notes References Further reading US Army Corps of Engineers. The History of the US Army Corps of Engineers (Army Corps of Engineers, 1986) online; can be downloaded at no cost; not copyright Angevine, Robert G. 
"Individuals, organizations, and engineering: US Army officers and the American railroads, 1827-1838." Technology and Culture 42.2 (2001): 292-320. online Ballard, Joe N., ed. The history of the US Army Corps of Engineers (DIANE Publishing, 1999). online Becker, William H. From the Atlantic to the Great Lakes: a history of the US Army Corps of Engineers and the St. Lawrence Seaway (Historical Division, Office of Administrative Services, Office of the Chief of Engineers, 1984) online. Cowdrey, Albert E. "Pioneering Environmental Law: The Army Corps of Engineers and the Refuse Act." Pacific Historical Review (1975): 331-349. online Crump, Irving. Our Army Engineer (1954), popular history of 19 great projects; online Dobney, Fredrick J. River Engineers on the Middle Mississippi: A History of the St. Louis District, US Army Corps of Engineers (US Government Printing Office, 1978) online. Fine, Lenore, and Jesse Arthur Remington. The corps of engineers: Construction in the United States (Vol. 10. No. 5. Center of Military History, US Army, 1972). online Grathwol, Robert P., and Donita M. Moorhus. Bricks, sand, and marble: US Army Corps of Engineers construction in the Mediterranean and Middle East, 1947-1991 (Vol. 45. Center of Military History, Corps of Engineers, US Army, 2009) online. Griggs, William E. The World War II Black Regiment that Built the Alaska Military Highway: A Photographic History (Univ. Press of Mississippi, 2002) online. Hendricks, Charles. Combat and Construction: US Army Engineers in World War I (Vol. 870. No. 1-47. US Army Corps of Engineers, 1993) online. Hill, Forest G. Roads, rails & waterways: the army engineers and early transportation (1957) online Klawonn, Marion J. Cradle of the Corps: A History of the New York District, US Army Corps of Engineers, 1775-1975 (Department of Defense, Department of the Army, Corps of Engineers, New York District, 1977) online. Johnson, Leland R. The Falls City Engineers: A History of the Louisville District, Corps of Engineers, United States Army, 1970-1983 (US Army Engineer District, 1984) online. Prucha, Francis Paul. Broadax and bayonet : the role of the United States Army in the development of the northwest, 1815-1860 (1953) online Scott, Pamela. Capital Engineers: The US Army Corps of Engineers in the Development of Washington, DC 1790-2004 (Office of History, Headquarters, US Army Corps of Engineers, 2011). online Shallat, Todd. "Building waterways, 1802–1861: Science and the United States Army in early public works." Technology and Culture 31.1 (1990): 18-50. excerpt Shallat, Todd. Structures in the stream: Water, science, and the rise of the US Army Corps of Engineers (University of Texas Press, 2010) online. Thompson, Erwin N. Pacific Ocean Engineers: History of the US Army Corps of Engineers in the Pacific, 1905-1980 (US Government Printing Office, 1985) online. U. S. Army Corps of Engineers. Builders and Fighters: U. S. Army Engineers in World War II (University Press of the Pacific, 2005) 556pp Willingham, William F. Northwest Passages: A History of the Seattle District, US Army Corps of Engineers (US Army Corps of Engineers, Seattle District, 1992) online. 
Historic photos of Corps of Engineers lock and dam projects throughout Texas in 1910-20s (from the Portal to Texas History) External links Official Engineers Corps at Federal Register 1775 establishments in the Thirteen Colonies American military units and formations of the War of 1812 Army units and formations of the United States in World War I Branches of the United States Army Civil engineering organizations Engineering units and formations of the United States Army United States Army Military engineering of the United States Military units and formations established in 1775 Military units and formations of the Continental Army Military units and formations of the Mexican–American War Military units and formations of the United States Army in the Vietnam War Military units and formations of the United States Army in World War II Military units and formations of the United States in the Indian Wars Organizations based in Washington, D.C. Regulatory authorities of the United States Union army corps United States Army Direct Reporting Units United States Army units and formations in the Korean War
United States Army Corps of Engineers
Engineering
10,992
26,776,184
https://en.wikipedia.org/wiki/Human%20subject%20research%20legislation%20in%20the%20United%20States
Human subject research legislation in the United States can be traced to the early 20th century. Human subject research in the United States was mostly unregulated, as it was throughout the world, until the 20th-century establishment of various governmental and professional regulations and codes of ethics. Notable – and in some cases, notorious – human subject experiments performed in the US include the Tuskegee syphilis experiment, human radiation experiments, the Milgram obedience experiment, the Stanford prison experiment, and Project MKULTRA. With growing public awareness of such experimentation, and the evolution of professional ethical standards, such research became regulated by various pieces of legislation, most notably those that introduced and then empowered institutional review boards. Early research and legislation Aside from the Pure Food and Drug Act of 1906 and the Harrison Act of 1914 banning the sale of some narcotic drugs, there was no federal regulatory control ensuring the safety of new drugs. Thus the early calls for regulation of human experimentation concerned medicine, and in particular, testing of new pharmaceutical drugs and their release on the market. In 1937, a drug known as Elixir Sulfanilamide was released without any clinical trials. Reports in the press about potentially lethal side effects led to a public outcry. Investigation by the American Medical Association showed that a poisonous compound, diethylene glycol, was present in the drug. The AMA concluded that the drug caused more than a hundred deaths – yet the contemporary law did not require the company that released it to test it (the existing laws required only that a drug be clearly labeled, no false claims be made about it, and that it was not adulterated). New legislation was proposed by the Secretary of Agriculture to address the issue but was weakened after opposition from business interests. It was finally included in the Federal Food, Drug, and Cosmetic Act of 1938. In the aftermath of World War II, and what became recognized as deeply unethical human experimentation carried out by the Nazis, the Nuremberg Code – ethical principles governing international human experimentation – was established. The code highlighted three key elements (voluntary informed consent, favorable risk/benefit analysis, and right to withdraw without repercussions) which later became the foundation for further human research regulations. However, neither the Nuremberg Code nor the Federal Food, Drug and Cosmetic Act of 1938 prevented the "thalidomide tragedy" of the early 1960s. Thalidomide was introduced in 1958, and there were reports of it being unsafe for certain groups, such as pregnant women and young children; however, although the Food and Drug Administration did not approve it for market, the existing regulations allowed relatively unrestricted testing of the drug. This led to the abuse of approved drug testing as the means to further a promotional marketing strategy. This was addressed by the Drug Amendments legislation of 1962, which introduced a requirement for a series of animal tests before proceeding with human experimentation, and a total of three phases of human clinical trials before a drug can be approved for the market. The inadequacy of the 1938 and 1962 acts was exposed by revelations in the 1960s and 1970s. 60s and 70s: Beecher's study and the Tuskegee syphilis experiment Another milestone came with Henry K. Beecher's 1966 study as published in the New England Journal of Medicine. 
His study became instrumental in the implementation of federal rules on human experimentation and informed consent. Beecher's study listed over 20 cases of mainstream research in which subjects were subjected to experimentation without being fully informed of their status as research subjects, and without knowledge of the risks of such participation in the research. Some of the research subjects died or were permanently crippled as a result of that research. One of the cases analyzed was the Willowbrook State School Case, in which children were deliberately infected with hepatitis, under the guise of a vaccination program. Beecher's findings were not alone. Evidence emerged that soon after the introduction of nuclear weapons, soldiers and civilians were subjected to potentially dangerous levels of radiation – without consent – to test its health effects (see Advisory Committee on Human Radiation Experiments and human radiation experiments in the United States). While most major controversies about unethical research were focused on biomedical sciences, there were also controversies involving behavioral, psychological, and sociological experiments such as the Milgram obedience experiment, Stanford prison experiment, Tearoom Trade study, and others. There were also ethical issues related to the CIA's Project MKULTRA. The Tuskegee syphilis experiment is probably the most infamous case of unethical medical experimentation in the United States. Starting in 1932, investigators recruited 399 impoverished African-American sharecroppers with syphilis for research related to the natural progression of the untreated disease, in hopes of justifying treatment programs for blacks. By 1947, penicillin had become the standard treatment for syphilis, but the Tuskegee scientists decided to withhold penicillin (and information about it) from the patients. The study continued under numerous supervisors until 1972, when a leak to the press resulted in its termination. Victims included a number of men who died of syphilis, their wives who contracted the disease, and some children who were born with syphilis. Even when the results were made public, the initial reaction of the medical scientific community was to exonerate the study and criticize the popular press for interfering with the research. In 1966, the National Institutes of Health (NIH) Office for Protection of Research Subjects (OPRR) was created, and issued its Policies for the Protection of Human Subjects, which recommended establishing independent review bodies, later called institutional review boards. Rise of the IRB In 1969, Kentucky Court of Appeals Judge Samuel Steinfeld dissented in Strunk v. Strunk, 445 S.W.2d 145, and made the first judicial suggestion that the Nuremberg Code should apply to American jurisprudence. By the early 1970s, cases like the Willowbrook State School and the Tuskegee syphilis experiments were being raised in the U.S. Senate. As controversy over human experiments continued, public opinion criticized research where the science seemed to be valued over the good of the subjects. In 1974, Congress passed the National Research Act which established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (CPHS) and mandated that the Public Health Service come up with regulations that would protect the rights of human research subjects. 
The Commission's work from 1974 to 1978 resulted in 17 reports and appendices, of which the most important were the Institutional Review Board Report and the Belmont Report ("Ethical Principles and Guidelines for the Protection of Human Subjects of Research"). The IRB Report endorsed the establishment and functioning of the Institutional Review Board institution, and the Belmont Report, the Commission's last report, identified "basic ethical principles" applicable to human subject experimentation that became modern guidelines for ethical medical research: "respect for persons", "beneficence" and "justice". However, contemporaneous critics of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research and the Belmont Report argued that while physicians and psychologists were prominent in those commissions, there were few experienced social scientists on them. As a result, they argued, research standards aimed at medical and psychological research were misapplied to all social science research as well. Authors of the Belmont Report later acknowledged some of these criticisms. A later conference in September 1979 organized by Tom Beauchamp, who co-authored the Belmont Report, sought to remedy that by hosting social scientists and ethicists and resulted in an anthology. However, the conference lacked the official status of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, and did not influence subsequent legislation. In 1975, the Department of Health, Education and Welfare (DHEW) created regulations which included the recommendations laid out in the NIH's 1966 Policies for the Protection of Human Subjects. Title 45 of the Code of Federal Regulations, known as "The Common Rule," requires that institutional review boards (IRBs) oversee experiments using human subjects. Beyond IRBs The National Commission was superseded by the Ethical Advisory Board (EAB), which in turn was superseded in 1980 by the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research (PCEMR). The EAB focused on the issues of in vitro fertilization and prohibited the creation of fetuses for research purposes, and the PCEMR issued recommendations on subjects such as brain death, access to health services, withdrawal of life-support systems, and testing in regards to genetic disease. In 1980 the FDA made prisoners ineligible to be subjects of new drug testing in clinical trials (21 CFR 50.44). On January 15, 1994, President Bill Clinton formed the Advisory Committee on Human Radiation Experiments (ACHRE). This committee was created to investigate and report the use of human beings as test subjects in experiments involving the effects of ionizing radiation in federally funded research. The committee attempted to determine the causes of the experiments and the reasons why proper oversight did not exist, and made several recommendations to help prevent future occurrences of similar events. In 1995 (or 1996 – sources vary) a National Bioethics Advisory Commission was established, opining on issues such as the cloning of humans and research involving the mentally disabled. In 2001, the President's Council on Bioethics was founded to consider bioethics issues, such as stem cell research. Committee review of research has since become a standard part of the American approach to ethical issues in science. In 2009, the Obama administration replaced this body with the Presidential Commission for the Study of Bioethical Issues. 
Research that does not require IRB review Some provisions of medical research regulation allow certain research projects to proceed without IRB review. For example, in the United States, research that uses electronic health records of deceased patients does not require IRB review. References Further reading Christine Grady, "Clinical Trials," in From Birth to Death and Bench to Clinic: The Hastings Center Bioethics Briefing Book for Journalists, Policymakers, and Campaigns, ed. Mary Crowley (Garrison, NY: The Hastings Center, 2008), 21-24. Jay Katz, The Regulation of Human Experimentation in the United States: A Personal Odyssey, IRB: Ethics and Human Research, Vol. 9, No. 1 (Jan. – Feb., 1987), pp. 1–6, JSTOR Eileen Welsome, The plutonium files: America's secret medical experiments in the Cold War, Dial Press, 1999, Harriet A. Washington, Medical Apartheid: The Dark History of Medical Experimentation on Black Americans from Colonial Times to the Present, Random House, Inc., 2008, External links IRB: Ethics & Human Research journal The Belmont Report Human subject research in the United States United States federal legislation Bioethics
Human subject research legislation in the United States
Technology
2,200
11,343,381
https://en.wikipedia.org/wiki/Reform%20of%20Architects%20Registration
"Reform of Architects Registration" was the title of a UK government consultation paper dated 19 July 1994 which was issued by the Department of the Environment. The introduction stated that in October 1993 the Government had announced that the profession and others would be consulted about measures which could be taken to simplify the then arrangements for the registration of architects under the Architects Registration Acts, and that broad agreement on what those measures would be had been reached with the Architects' Registration Council of the United Kingdom (ARCUK) and the Royal Institute of British Architects (RIBA). Eventually, Parliament made certain changes to the Architects Registration Acts which now have effect under the Architects Act 1997. The consultation paper went on to state that the current proposals for reform stemmed from a request from ARCUK to the Government in 1992 that the Architects Registration Acts should be reviewed; and that a review had been carried out by Mr E J D Warne CB, whose report had been published by HMSO in 1993. The consultation paper mentioned that the Warne Report had considered the views of architects, architectural bodies and consumers and "agreed with ARCUK" that there were certain weaknesses of its structure: and that the reforms being proposed were aimed at overcoming those weaknesses in a way that would be generally acceptable to the profession and public alike. The main objective of the reforms was stated to be: to create a small, focussed and effective registration body which represents the interests of both the profession and the general public, and its purpose would be to set criteria for admission to the Register; prevent misuse of the title 'architect'; discipline unprofessional conduct, and set fee levels. The punctuation in the document, as reproduced above, seemed to indicate a close connection of some kind between setting fee levels and professional conduct. But in the event the setting of fee levels was later abandoned, while, in respect of professional conduct, statutory powers to inflict fines expressly on a par with criminal penalties were given to a body which would have persons who are not themselves members of the profession in the decisive majority, and who would not be acting under the judicial oath of a judge or a magistrate in a court of criminal or civil jurisdiction, or pursuant to the consensual jurisdiction of an arbitrator, and would not necessarily have the appropriate skill and knowledge to be able to act competently and fairly in respect of hazarding an architect's professional reputation. This could have been seen as an objectionable aberration, but that instead a preponderance of political opinion welcomed such an arrangement, regarding it as a pioneering development, may be explained at least in part by observing that the usage "stakeholders" had gained some currency at the time. 
Certain issues had been the background to the Warne Report as matters were in the 1990s and had always been, namely, issues concerning the flaws or merits of the case for or against such proposals in theory or in principle or in relation particularly, on the one hand, to the Register of Architects, to restrictions on the use of the word "architect", and to the practice of architecture considered as an art or as a business or as a means of earning a livelihood; and on the other hand, to official accountability, juridical norms and the rule of law: see further, article on Architects Registration in the United Kingdom - background to legislation. One of the proposals mentioned in the consultation document which were later enacted and are now operative was that ARCUK would remain as a legal entity, but its name would be changed to "Architects Registration Board"; and it was stated that although this change, in itself, would have no impact on the status or role of ARCUK it would suggest a smaller, tighter body and would mark the alterations to ARCUK's functions. Another of the proposals was that there should be an office of Registrar whose functions would be to maintain the Register and carry out the instructions of the Board; and it was stated that the Registrar would be a named appointee of the Board which would decide whether the Registrar should be an employee or a contractor. See also The Architects (Registration) Acts, 1931 to 1938 Extent and citation of the Acts Formation and duties of ARCUK Architects Act 1997 Regulations and rules Penal restriction. Board of Architectural Education Nomination and appointment Statutory nomenclature. ARCUK, Architects' Registration Council of the United Kingdom Use of title Background to legislation, cc19-20 Three aspects 1931 regime From 1990s Chronology 1834-1997 The present legislation Summary of legislative history The "burdens" and "choices" Side effects Membership of the Board Duties of the Board "Architecture" and "architectural services" Architects Registration Board Links to the Architects Act 1997 Main Customs Office (Munich) References Registration of architects in the United Kingdom Professional certification in architecture Architectural education Reform in the United Kingdom
Reform of Architects Registration
Engineering
964
17,094,227
https://en.wikipedia.org/wiki/Poincar%C3%A9%20space
In algebraic topology, a Poincaré space is an n-dimensional topological space with a distinguished element μ of its nth homology group such that taking the cap product with an element of the kth cohomology group yields an isomorphism to the (n − k)th homology group. The space is essentially one for which Poincaré duality is valid; more precisely, one whose singular chain complex forms a Poincaré complex with respect to the distinguished element μ. For example, any closed, orientable, connected manifold M is a Poincaré space, where the distinguished element is the fundamental class Poincaré spaces are used in surgery theory to analyze and classify manifolds. Not every Poincaré space is a manifold, but the difference can be studied, first by having a normal map from a manifold, and then via obstruction theory. Other uses Sometimes, Poincaré space means a homology sphere with non-trivial fundamental group—for instance, the Poincaré dodecahedral space in 3 dimensions. See also Stable normal bundle References Algebraic topology Abstract algebra
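To state the defining condition concretely (a sketch, with coefficients in a commutative ring R assumed purely for illustration), the requirement is that for every k the cap-product map

\[
-\frown \mu \;:\; H^{k}(X;R) \longrightarrow H_{n-k}(X;R)
\]

be an isomorphism. For a closed, orientable, connected n-manifold M this condition holds with μ = [M], the fundamental class, which is the statement of classical Poincaré duality.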
Poincaré space
Mathematics
221
5,085,057
https://en.wikipedia.org/wiki/SB1394
SB1394 is Creative Labs implementation of IEEE 1394 interface (also known as i.Link or FireWire) and was included on the Sound Blaster Audigy and Audigy 2 family of sound cards. Also OEM Audigy card with model number SB0090 is often referred as "Audigy SB1394". Creative Technology products IEEE standards
SB1394
Technology
75
1,583,938
https://en.wikipedia.org/wiki/International%20Plant%20Protection%20Convention
The International Plant Protection Convention (IPPC) is a 1951 multilateral treaty overseen by the United Nations Food and Agriculture Organization that aims to secure coordinated, effective action to prevent and to control the introduction and spread of pests of plants and plant products. The Convention extends beyond the protection of cultivated plants to the protection of natural flora and plant products. It also takes into consideration both direct and indirect damage by pests, so it includes weeds. IPPC promulgates International Standards for Phytosanitary Measures (ISPMs). The Convention created a governing body consisting of each party, known as the Commission on Phytosanitary Measures, which oversees the implementation of the convention. As of August 2017, the convention has 183 parties, comprising 180 United Nations member states plus the Cook Islands, Niue, and the European Union. The convention is recognized by the World Trade Organization's (WTO) Agreement on the Application of Sanitary and Phytosanitary Measures (the SPS Agreement) as the only international standard setting body for plant health. Goals While the IPPC's primary focus is on plants and plant products moving in international trade, the convention also covers research materials, biological control organisms, germplasm banks, containment facilities, food aid, emergency aid and anything else that can act as a vector for the spread of plant pests – for example, containers, packaging materials, soil, vehicles, vessels and machinery. The IPPC was created by member countries of the Food and Agriculture Organization (UN FAO). The IPPC places emphasis on three core areas: international standard setting, information exchange and capacity development for the implementation of the IPPC and associated international phytosanitary standards. The Secretariat of the IPPC is housed at FAO headquarters in Rome, Italy, and is responsible for the coordination of core activities under the IPPC work program. In recent years the Commission on Phytosanitary Measures of the IPPC has developed a strategic framework with the objectives of: protecting sustainable agriculture and enhancing global food security through the prevention of pest spread; protecting the environment, forests and biodiversity from plant pests; facilitating economic and trade development through the promotion of harmonized scientifically based phytosanitary measures; and developing phytosanitary capacity for members to accomplish the preceding three objectives. By focusing the convention's efforts on these objectives, the Commission on Phytosanitary Measures of the IPPC intends to: protect farmers from economically devastating pest and disease outbreaks. protect the environment from the loss of species diversity. protect ecosystems from the loss of viability and function as a result of pest invasions. protect industries and consumers from the costs of pest control or eradication. facilitate trade through International Standards that regulate the safe movements of plants and plant products. protect livelihoods and food security by preventing the entry and spread of new pests of plants into a country. Regional Plant Protection Organizations Under the IPPC are Regional Plant Protection Organizations (RPPO). These are intergovernmental organizations responsible for cooperation in plant protection. 
There are the following organizations recognized by and working under the IPPC: Asia and Pacific Plant Protection Commission (APPPC) Caribbean Agricultural Health and Food Safety Agency (CAHFSA) Andean Community (Comunidad Andina, CAN) Plant Health Committee of the Southern Cone (, COSAVE) European and Mediterranean Plant Protection Organization (EPPO) Inter-African Phytosanitary Council (IAPSC) Near East Plant Protection Organization (NEPPO) North American Plant Protection Organization (NAPPO) International Regional Organization for Agricultural Health (, OIRSA) Pacific Plant Protection Organization (PPPO) Under the IPPC, the role of an RPPO is to: function as the coordinating body in the areas covered and participate in various activities to achieve the objectives of this Convention and, where appropriate, gather and disseminate information; cooperate with the Secretary in achieving the objectives of the Convention and, where appropriate, cooperate with the Secretary and the Commission in developing international standards; and hold regular Technical Consultations of representatives of regional plant protection organizations to: promote the development and use of relevant international standards for phytosanitary measures; and encourage inter-regional cooperation in promoting harmonized phytosanitary measures for controlling pests and in preventing their spread and/or introduction. International Plant Health Conference The first annual International Plant Health Conference was organized by the FAO and set to be hosted by the Finnish Government in Helsinki from 28 June to 1 July 2021. However, on 9 February 2021 it was cancelled due to the ongoing pandemic. Commission on Phytosanitary Measures The fifteenth session of the Commission on Phytosanitary Measures (CPM) was held on 16 March, 18 March and 1 April 2021, virtually over Zoom. ePhyto The IPPC created and administers the ePhyto system, the international electronic phytosanitary certificate standard. ePhyto has been very widely adopted; three million ePhyto certificates have been exchanged between exporting and importing partner states. Activities IPPC convenes consultative committees and formulates international standards. This includes standards on food irradiation. Haack et al. (2014) find that the IPPC has been successful in reducing wood-boring beetle infestation of wood packaging material in shipments entering the United States. 
See also Phytosanitary certification Phytosanitary Certificate Issuance and Tracking System (PCIT) International Year of Plant Health (IYPH) References External links International Plant Protection Convention Food and Agriculture Organization of the United Nations Ratifications 1951 in the environment Crop protection organizations Treaties concluded in 1951 Treaties entered into force in 1952 International Plant Protection Convention Treaties of Afghanistan Treaties of Albania Treaties of Algeria Treaties of Antigua and Barbuda Treaties of Argentina Treaties of Armenia Treaties of Australia Treaties of Austria Treaties of Azerbaijan Treaties of the Bahamas Treaties of Bahrain Treaties of Bangladesh Treaties of Barbados Treaties of Belarus Treaties of Belgium Treaties of Belize Treaties of Benin Treaties of Bhutan Treaties of Bolivia Treaties of Bosnia and Herzegovina Treaties of Botswana Treaties of the Second Brazilian Republic Treaties of Bulgaria Treaties of Burkina Faso Treaties of Burundi Treaties of the French protectorate of Cambodia Treaties of Cameroon Treaties of Canada Treaties of Cape Verde Treaties of the Central African Republic Treaties of Chad Treaties of Chile Treaties of the People's Republic of China Treaties of Colombia Treaties of the Cook Islands Treaties of the Comoros Treaties of the Democratic Republic of the Congo Treaties of the Republic of the Congo Treaties of Costa Rica Treaties of Ivory Coast Treaties of Croatia Treaties of Cuba Treaties of Cyprus Treaties of the Czech Republic Treaties of North Korea Treaties of Denmark Treaties of Djibouti Treaties of Dominica Treaties of the Dominican Republic Treaties of Ecuador Treaties of the Republic of Egypt (1953–1958) Treaties of El Salvador Treaties of Equatorial Guinea Treaties of Eritrea Treaties of Estonia Treaties of the Derg Treaties of Fiji Treaties of Finland Treaties of France Treaties of Gabon Treaties of the Gambia Treaties of Georgia (country) Treaties of Germany Treaties of Ghana Treaties of Greece Treaties of Grenada Treaties of Guatemala Treaties of Guinea Treaties of Guinea-Bissau Treaties of Guyana Treaties of Haiti Treaties of Honduras Treaties of Hungary Treaties of Iceland Treaties of India Treaties of Indonesia Treaties of Iran Treaties of Ireland Treaties of Israel Treaties of Italy Treaties of Jamaica Treaties of Japan Treaties of Jordan Treaties of Kazakhstan Treaties of Kenya Treaties of the Kingdom of Afghanistan Treaties of the Kingdom of Iraq Treaties of the Kingdom of Laos Treaties of Kuwait Treaties of Kyrgyzstan Treaties of Latvia Treaties of Lebanon Treaties of Lesotho Treaties of Liberia Treaties of the Libyan Arab Republic Treaties of Lithuania Treaties of Luxembourg Treaties of Madagascar Treaties of Malawi Treaties of Malaysia Treaties of the Maldives Treaties of Mali Treaties of Malta Treaties of Mauritania Treaties of Mauritius Treaties of Mexico Treaties of the Federated States of Micronesia Treaties of Mongolia Treaties of Montenegro Treaties of Morocco Treaties of Mozambique Treaties of Myanmar Treaties of Namibia Treaties of Nepal Treaties of the Netherlands Treaties of New Zealand Treaties of Nicaragua Treaties of Niue Treaties of Niger Treaties of Nigeria Treaties of Norway Treaties of Oman Treaties of the Dominion of Pakistan Treaties of Palau Treaties of Panama Treaties of Papua New Guinea Treaties of Paraguay Treaties of Peru Treaties of the Philippines Treaties of Poland Treaties of Portugal Treaties of Qatar Treaties of South Korea Treaties of Moldova Treaties of 
Romania Treaties of Russia Treaties of Rwanda Treaties of Samoa Treaties of São Tomé and Príncipe Treaties of Saudi Arabia Treaties of Senegal Treaties of Serbia Treaties of Seychelles Treaties of Sierra Leone Treaties of Singapore Treaties of Slovakia Treaties of Slovenia Treaties of the Solomon Islands Treaties of South Africa Treaties of South Sudan Treaties of Spain Treaties of the Dominion of Ceylon Treaties of Saint Kitts and Nevis Treaties of Saint Lucia Treaties of Saint Vincent and the Grenadines Treaties of the Democratic Republic of the Sudan Treaties of Suriname Treaties of Eswatini Treaties of Sweden Treaties of Switzerland Treaties of Syria Treaties of Tajikistan Treaties of Thailand Treaties of North Macedonia Treaties of Togo Treaties of Tonga Treaties of Trinidad and Tobago Treaties of Tunisia Treaties of Turkey Treaties of Tuvalu Treaties of Uganda Treaties of Ukraine Treaties of the United Arab Emirates Treaties of the United Kingdom Treaties of Tanzania Treaties of the United States Treaties of Uruguay Treaties of Uzbekistan Treaties of Vanuatu Treaties of Venezuela Treaties of Vietnam Treaties of Yemen Treaties of Zambia Treaties of Zimbabwe Treaties entered into by the European Union International Plant Protection Convention International Plant Protection Convention 1951 in Italy Treaties establishing intergovernmental organizations Agricultural treaties Treaties extended to Norfolk Island Treaties extended to the Isle of Man Treaties extended to Jersey Treaties extended to Guernsey Treaties extended to American Samoa Treaties extended to Baker Island Treaties extended to Guam Treaties extended to Howland Island Treaties extended to Jarvis Island Treaties extended to Johnston Atoll Treaties extended to Midway Atoll Treaties extended to Navassa Island Treaties extended to the Trust Territory of the Pacific Islands Treaties extended to Palmyra Atoll Treaties extended to Puerto Rico Treaties extended to the United States Virgin Islands Treaties extended to Wake Island Treaties extended to the Nauru Trust Territory Treaties extended to Dutch New Guinea Treaties extended to Surinam (Dutch colony) Treaties extended to Macau Treaties extended to the Panama Canal Zone
International Plant Protection Convention
Biology
2,023
2,299,454
https://en.wikipedia.org/wiki/Lorentz%20ether%20theory
What is now often called Lorentz ether theory (LET) has its roots in Hendrik Lorentz's "theory of electrons", which marked the end of the development of the classical aether theories at the end of the 19th and at the beginning of the 20th century. Lorentz's initial theory was created between 1892 and 1895 and was based on removing assumptions about aether motion. It explained the failure of the negative aether drift experiments to first order in v/c by introducing an auxiliary variable called "local time" for connecting systems at rest and in motion in the aether. In addition, the negative result of the Michelson–Morley experiment led to the introduction of the hypothesis of length contraction in 1892. However, other experiments also produced negative results and (guided by Henri Poincaré's principle of relativity) Lorentz tried in 1899 and 1904 to expand his theory to all orders in v/c by introducing the Lorentz transformation. In addition, he assumed that non-electromagnetic forces (if they exist) transform like electric forces. However, Lorentz's expression for charge density and current were incorrect, so his theory did not fully exclude the possibility of detecting the aether. Eventually, it was Henri Poincaré who in 1905 corrected the errors in Lorentz's paper and actually incorporated non-electromagnetic forces (including gravitation) within the theory, which he called "The New Mechanics". Many aspects of Lorentz's theory were incorporated into special relativity (SR) with the works of Albert Einstein and Hermann Minkowski. Today LET is often treated as some sort of "Lorentzian" or "neo-Lorentzian" interpretation of special relativity. The introduction of length contraction and time dilation for all phenomena in a "preferred" frame of reference, which plays the role of Lorentz's immobile aether, leads to the complete Lorentz transformation (see the Robertson–Mansouri–Sexl test theory as an example), so Lorentz covariance doesn't provide any experimentally verifiable distinctions between LET and SR. The absolute simultaneity in the Mansouri–Sexl test theory formulation of LET implies that a one-way speed of light experiment could in principle distinguish between LET and SR, but it is now widely held that it is impossible to perform such a test. In the absence of any way to experimentally distinguish between LET and SR, SR is widely preferred over LET, due to the superfluous assumption of an undetectable aether in LET, and the validity of the relativity principle in LET seeming ad hoc or coincidental. Historical development Basic concept The Lorentz ether theory, which was developed mainly between 1892 and 1906 by Lorentz and Poincaré, was based on the aether theory of Augustin-Jean Fresnel, Maxwell's equations and the electron theory of Rudolf Clausius. Lorentz's 1895 paper rejected the aether drift theories, and refused to express assumptions about the nature of the aether. It said: As Max Born later said, it was natural (though not logically necessary) for scientists of that time to identify the rest frame of the Lorentz aether with the absolute space of Isaac Newton. The condition of this aether can be described by the electric field E and the magnetic field H, where these fields represent the "states" of the aether (with no further specification), related to the charges of the electrons. Thus an abstract electromagnetic aether replaces the older mechanistic aether models. 
Contrary to Clausius, who accepted that the electrons operate by actions at a distance, the electromagnetic field of the aether appears as a mediator between the electrons, and changes in this field can propagate not faster than the speed of light. Lorentz theoretically explained the Zeeman effect on the basis of his theory, for which he received the Nobel Prize in Physics in 1902. Joseph Larmor found a similar theory simultaneously, but his concept was based on a mechanical aether. A fundamental concept of Lorentz's theory in 1895 was the "theorem of corresponding states" for terms of order v/c. This theorem states that a moving observer with respect to the aether can use the same electrodynamic equations as an observer in the stationary aether system, thus they are making the same observations. Length contraction A big challenge for the Lorentz ether theory was the Michelson–Morley experiment in 1887. According to the theories of Fresnel and Lorentz, a relative motion to an immobile aether had to be determined by this experiment; however, the result was negative. Michelson himself thought that the result confirmed the aether drag hypothesis, in which the aether is fully dragged by matter. However, other experiments like the Fizeau experiment and the effect of aberration disproved that model. A possible solution came in sight, when in 1889 Oliver Heaviside derived from Maxwell's equations that the magnetic vector potential field around a moving body is altered by a factor of . Based on that result, and to bring the hypothesis of an immobile aether into accordance with the Michelson–Morley experiment, George FitzGerald in 1889 (qualitatively) and, independently of him, Lorentz in 1892 (already quantitatively), suggested that not only the electrostatic fields, but also the molecular forces, are affected in such a way that the dimension of a body in the line of motion is less by the value than the dimension perpendicularly to the line of motion. However, an observer co-moving with the earth would not notice this contraction because all other instruments contract at the same ratio. In 1895 Lorentz proposed three possible explanations for this relative contraction: The body contracts in the line of motion and preserves its dimension perpendicularly to it. The dimension of the body remains the same in the line of motion, but it expands perpendicularly to it. The body contracts in the line of motion and expands at the same time perpendicularly to it. Although the possible connection between electrostatic and intermolecular forces was used by Lorentz as a plausibility argument, the contraction hypothesis was soon considered as purely ad hoc. It is also important that this contraction would only affect the space between the electrons but not the electrons themselves; therefore the name "intermolecular hypothesis" was sometimes used for this effect. The so-called Length contraction without expansion perpendicularly to the line of motion and by the precise value (where l0 is the length at rest in the aether) was given by Larmor in 1897 and by Lorentz in 1904. In the same year, Lorentz also argued that electrons themselves are also affected by this contraction. For further development of this concept, see the section . Local time An important part of the theorem of corresponding states in 1892 and 1895 was the local time , where t is the time coordinate for an observer resting in the aether, and t' is the time coordinate for an observer moving in the aether. 
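For reference, the contraction by the "precise value" and the auxiliary local time discussed above take the following standard forms in modern textbook notation (l0 is the length at rest in the aether, v the velocity relative to the aether, x the position coordinate, and c the speed of light; the notation, though not the physics, is ours):

```latex
l = l_0 \sqrt{1 - \frac{v^2}{c^2}}, \qquad t' = t - \frac{v x}{c^2}
```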
(Woldemar Voigt had previously used the same expression for local time in 1887 in connection with the Doppler effect and an incompressible medium.) With the help of this concept Lorentz could explain the aberration of light, the Doppler effect and the Fizeau experiment (i.e. measurements of the Fresnel drag coefficient) by Hippolyte Fizeau in moving and also resting liquids. While for Lorentz length contraction was a real physical effect, he considered the time transformation only as a heuristic working hypothesis and a mathematical stipulation to simplify the calculation from the resting to a "fictitious" moving system. Contrary to Lorentz, Poincaré saw more than a mathematical trick in the definition of local time, which he called Lorentz's "most ingenious idea". In The Measure of Time he wrote in 1898: In 1900 Poincaré interpreted local time as the result of a synchronization procedure based on light signals. He assumed that two observers, A and B, who are moving in the aether, synchronize their clocks by optical signals. Since they treat themselves as being at rest, they must consider only the transmission time of the signals and then crossing their observations to examine whether their clocks are synchronous. However, from the point of view of an observer at rest in the aether the clocks are not synchronous and indicate the local time . But because the moving observers don't know anything about their movement, they don't recognize this. In 1904, he illustrated the same procedure in the following way: A sends a signal at time 0 to B, which arrives at time t. B also sends a signal at time 0 to A, which arrives at time t. If in both cases t has the same value, the clocks are synchronous, but only in the system in which the clocks are at rest in the aether. So, according to Darrigol, Poincaré understood local time as a physical effect just like length contraction – in contrast to Lorentz, who did not use the same interpretation before 1906. However, contrary to Einstein, who later used a similar synchronization procedure which was called Einstein synchronisation, Darrigol says that Poincaré had the opinion that clocks resting in the aether are showing the true time. However, at the beginning it was unknown that local time includes what is now known as time dilation. This effect was first noticed by Larmor (1897), who wrote that "individual electrons describe corresponding parts of their orbits in times shorter for the [aether] system in the ratio or ". And in 1899 also Lorentz noted for the frequency of oscillating electrons "that in S the time of vibrations be times as great as in S0", where S0 is the aether frame, S the mathematical-fictitious frame of the moving observer, k is , and is an undetermined factor. Lorentz transformation While local time could explain the negative aether drift experiments to first order to v/c, it was necessary – due to other unsuccessful aether drift experiments like the Trouton–Noble experiment – to modify the hypothesis to include second-order effects. The mathematical tool for that is the so-called Lorentz transformation. Voigt in 1887 had already derived a similar set of equations (although with a different scale factor). Afterwards, Larmor in 1897 and Lorentz in 1899 derived equations in a form algebraically equivalent to those which are used up to this day, although Lorentz used an undetermined factor l in his transformation. 
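A sketch of the transformation in the historical form just described, keeping Lorentz's undetermined scale factor l explicit (setting l = 1 recovers the form still used today; k is the factor mentioned above):

```latex
x' = k\,l\,(x - v t), \qquad y' = l\,y, \qquad z' = l\,z, \qquad
t' = k\,l \left( t - \frac{v x}{c^2} \right), \qquad
k = \frac{1}{\sqrt{1 - v^2/c^2}}
```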
In his paper Electromagnetic phenomena in a system moving with any velocity smaller than that of light (1904) Lorentz attempted to create such a theory, according to which all forces between the molecules are affected by the Lorentz transformation (in which Lorentz set the factor l to unity) in the same manner as electrostatic forces. In other words, Lorentz attempted to create a theory in which the relative motion of earth and aether is (nearly or fully) undetectable. Therefore, he generalized the contraction hypothesis and argued that not only the forces between the electrons, but also the electrons themselves are contracted in the line of motion. However, Max Abraham (1904) quickly noted a defect of that theory: Within a purely electromagnetic theory the contracted electron-configuration is unstable and one has to introduce non-electromagnetic force to stabilize the electrons – Abraham himself questioned the possibility of including such forces within the theory of Lorentz. So it was Poincaré, on 5 June 1905, who introduced the so-called "Poincaré stresses" to solve that problem. Those stresses were interpreted by him as an external, non-electromagnetic pressure, which stabilize the electrons and also served as an explanation for length contraction. Although he argued that Lorentz succeeded in creating a theory which complies to the postulate of relativity, he showed that Lorentz's equations of electrodynamics were not fully Lorentz covariant. So by pointing out the group characteristics of the transformation, Poincaré demonstrated the Lorentz covariance of the Maxwell–Lorentz equations and corrected Lorentz's transformation formulae for charge density and current density. He went on to sketch a model of gravitation (incl. gravitational waves) which might be compatible with the transformations. It was Poincaré who, for the first time, used the term "Lorentz transformation", and he gave them a form which is used up to this day. (Where is an arbitrary function of , which must be set to unity to conserve the group characteristics. He also set the speed of light to unity.) A substantially extended work (the so-called "Palermo paper") was submitted by Poincaré on 23 July 1905, but was published in January 1906 because the journal appeared only twice a year. He spoke literally of "the postulate of relativity", he showed that the transformations are a consequence of the principle of least action; he demonstrated in more detail the group characteristics of the transformation, which he called Lorentz group, and he showed that the combination is invariant. While elaborating his gravitational theory, he noticed that the Lorentz transformation is merely a rotation in four-dimensional space about the origin by introducing as a fourth, imaginary, coordinate, and he used an early form of four-vectors. However, Poincaré later said the translation of physics into the language of four-dimensional geometry would entail too much effort for limited profit, and therefore he refused to work out the consequences of this notion. This was later done, however, by Minkowski; see "The shift to relativity". Electromagnetic mass J. J. Thomson (1881) and others noticed that electromagnetic energy contributes to the mass of charged bodies by the amount , which was called electromagnetic or "apparent mass". Another derivation of some sort of electromagnetic mass was conducted by Poincaré (1900). 
By using the momentum of electromagnetic fields, he concluded that these fields contribute a mass of to all bodies, which is necessary to save the center of mass theorem. As noted by Thomson and others, this mass increases also with velocity. Thus in 1899, Lorentz calculated that the ratio of the electron's mass in the moving frame and that of the aether frame is parallel to the direction of motion, and perpendicular to the direction of motion, where and is an undetermined factor. And in 1904, he set , arriving at the expressions for the masses in different directions (longitudinal and transverse): where Many scientists now believed that the entire mass and all forms of forces were electromagnetic in nature. This idea had to be given up, however, in the course of the development of relativistic mechanics. Abraham (1904) argued (as described in the preceding section #Lorentz transformation), that non-electrical binding forces were necessary within Lorentz's electrons model. But Abraham also noted that different results occurred, dependent on whether the em-mass is calculated from the energy or from the momentum. To solve those problems, Poincaré in 1905 and 1906 introduced some sort of pressure of non-electrical nature, which contributes the amount to the energy of the bodies, and therefore explains the 4/3-factor in the expression for the electromagnetic mass-energy relation. However, while Poincaré's expression for the energy of the electrons was correct, he erroneously stated that only the em-energy contributes to the mass of the bodies. The concept of electromagnetic mass is not considered anymore as the cause of mass per se, because the entire mass (not only the electromagnetic part) is proportional to energy, and can be converted into different forms of energy, which is explained by Einstein's mass–energy equivalence. Gravitation Lorentz's theories In 1900 Lorentz tried to explain gravity on the basis of the Maxwell equations. He first considered a Le Sage type model and argued that there possibly exists a universal radiation field, consisting of very penetrating em-radiation, and exerting a uniform pressure on every body. Lorentz showed that an attractive force between charged particles would indeed arise, if it is assumed that the incident energy is entirely absorbed. This was the same fundamental problem which had afflicted the other Le Sage models, because the radiation must vanish somehow and any absorption must lead to an enormous heating. Therefore, Lorentz abandoned this model. In the same paper, he assumed like Ottaviano Fabrizio Mossotti and Johann Karl Friedrich Zöllner that the attraction of opposite charged particles is stronger than the repulsion of equal charged particles. The resulting net force is exactly what is known as universal gravitation, in which the speed of gravity is that of light. This leads to a conflict with the law of gravitation by Isaac Newton, in which it was shown by Pierre Simon Laplace that a finite speed of gravity leads to some sort of aberration and therefore makes the orbits unstable. However, Lorentz showed that the theory is not concerned by Laplace's critique, because due to the structure of the Maxwell equations only effects in the order v2/c2 arise. But Lorentz calculated that the value for the perihelion advance of Mercury was much too low. 
He wrote: In 1908 Poincaré examined the gravitational theory of Lorentz and classified it as compatible with the relativity principle, but (like Lorentz) he criticized the inaccurate indication of the perihelion advance of Mercury. Contrary to Poincaré, Lorentz in 1914 considered his own theory as incompatible with the relativity principle and rejected it. Lorentz-invariant gravitational law Poincaré argued in 1904 that a propagation speed of gravity which is greater than c is contradicting the concept of local time and the relativity principle. He wrote: However, in 1905 and 1906 Poincaré pointed out the possibility of a gravitational theory, in which changes propagate with the speed of light and which is Lorentz covariant. He pointed out that in such a theory the gravitational force not only depends on the masses and their mutual distance, but also on their velocities and their position due to the finite propagation time of interaction. On that occasion Poincaré introduced four-vectors. Following Poincaré, also Minkowski (1908) and Arnold Sommerfeld (1910) tried to establish a Lorentz-invariant gravitational law. However, these attempts were superseded because of Einstein's theory of general relativity, see "The shift to relativity". The non-existence of a generalization of the Lorentz ether to gravity was a major reason for the preference for the spacetime interpretation. A viable generalization to gravity has been proposed only 2012 by Schmelzer. The preferred frame is defined by the harmonic coordinate condition. The gravitational field is defined by density, velocity and stress tensor of the Lorentz ether, so that the harmonic conditions become continuity and Euler equations. The Einstein Equivalence Principle is derived. The Strong Equivalence Principle is violated, but is recovered in a limit, which gives the Einstein equations of general relativity in harmonic coordinates. Principles and conventions Constancy of the speed of light Already in his philosophical writing on time measurements (1898), Poincaré wrote that astronomers like Ole Rømer, in determining the speed of light, simply assume that light has a constant speed, and that this speed is the same in all directions. Without this postulate it would not be possible to infer the speed of light from astronomical observations, as Rømer did based on observations of the moons of Jupiter. Poincaré went on to note that Rømer also had to assume that Jupiter's moons obey Newton's laws, including the law of gravitation, whereas it would be possible to reconcile a different speed of light with the same observations if we assumed some different (probably more complicated) laws of motion. According to Poincaré, this illustrates that we adopt for the speed of light a value that makes the laws of mechanics as simple as possible. (This is an example of Poincaré's conventionalist philosophy.) Poincaré also noted that the propagation speed of light can be (and in practice often is) used to define simultaneity between spatially separate events. However, in that paper he did not go on to discuss the consequences of applying these "conventions" to multiple relatively moving systems of reference. This next step was done by Poincaré in 1900, when he recognized that synchronization by light signals in earth's reference frame leads to Lorentz's local time. (See the section on "local time" above). 
And in 1904 Poincaré wrote: Principle of relativity In 1895 Poincaré argued that experiments like that of Michelson–Morley show that it seems to be impossible to detect the absolute motion of matter or the relative motion of matter in relation to the aether. And although most physicists had other views, Poincaré in 1900 stood to his opinion and alternately used the expressions "principle of relative motion" and "relativity of space". He criticized Lorentz by saying, that it would be better to create a more fundamental theory, which explains the absence of any aether drift, than to create one hypothesis after the other. In 1902 he used for the first time the expression "principle of relativity". In 1904 he appreciated the work of the mathematicians, who saved what he now called the "principle of relativity" with the help of hypotheses like local time, but he confessed that this venture was possible only by an accumulation of hypotheses. And he defined the principle in this way (according to Miller based on Lorentz's theorem of corresponding states): "The principle of relativity, according to which the laws of physical phenomena must be the same for a stationary observer as for one carried along in a uniform motion of translation, so that we have no means, and can have none, of determining whether or not we are being carried along in such a motion." Referring to the critique of Poincaré from 1900, Lorentz wrote in his famous paper in 1904, where he extended his theorem of corresponding states: "Surely, the course of inventing special hypotheses for each new experimental result is somewhat artificial. It would be more satisfactory, if it were possible to show, by means of certain fundamental assumptions, and without neglecting terms of one order of magnitude or another, that many electromagnetic actions are entirely independent of the motion of the system." One of the first assessments of Lorentz's paper was by Paul Langevin in May 1905. According to him, this extension of the electron theories of Lorentz and Larmor led to "the physical impossibility to demonstrate the translational motion of the earth". However, Poincaré noticed in 1905 that Lorentz's theory of 1904 was not perfectly "Lorentz invariant" in a few equations such as Lorentz's expression for current density (Lorentz admitted in 1921 that these were defects). As this required just minor modifications of Lorentz's work, also Poincaré asserted that Lorentz had succeeded in harmonizing his theory with the principle of relativity: "It appears that this impossibility of demonstrating the absolute motion of the earth is a general law of nature. [...] Lorentz tried to complete and modify his hypothesis in order to harmonize it with the postulate of complete impossibility of determining absolute motion. It is what he has succeeded in doing in his article entitled Electromagnetic phenomena in a system moving with any velocity smaller than that of light [Lorentz, 1904b]." In his Palermo paper (1906), Poincaré called this "the postulate of relativity“, and although he stated that it was possible this principle might be disproved at some point (and in fact he mentioned at the paper's end that the discovery of magneto-cathode rays by Paul Ulrich Villard (1904) seems to threaten it), he believed it was interesting to consider the consequences if we were to assume the postulate of relativity was valid without restriction. This would imply that all forces of nature (not just electromagnetism) must be invariant under the Lorentz transformation. 
In 1921 Lorentz credited Poincaré for establishing the principle and postulate of relativity and wrote: "I have not established the principle of relativity as rigorously and universally true. Poincaré, on the other hand, has obtained a perfect invariance of the electro-magnetic equations, and he has formulated 'the postulate of relativity', terms which he was the first to employ." Aether Poincaré wrote in the sense of his conventionalist philosophy in 1889: "Whether the aether exists or not matters little – let us leave that to the metaphysicians; what is essential for us is, that everything happens as if it existed, and that this hypothesis is found to be suitable for the explanation of phenomena. After all, have we any other reason for believing in the existence of material objects? That, too, is only a convenient hypothesis; only, it will never cease to be so, while some day, no doubt, the aether will be thrown aside as useless." He also denied the existence of absolute space and time by saying in 1901: "1. There is no absolute space, and we only conceive of relative motion; and yet in most cases mechanical facts are enunciated as if there is an absolute space to which they can be referred. 2. There is no absolute time. When we say that two periods are equal, the statement has no meaning, and can only acquire a meaning by a convention. 3. Not only have we no direct intuition of the equality of two periods, but we have not even direct intuition of the simultaneity of two events occurring in two different places. I have explained this in an article entitled "Mesure du Temps" [1898]. 4. Finally, is not our Euclidean geometry in itself only a kind of convention of language?" However, Poincaré himself never abandoned the aether hypothesis and stated in 1900: "Does our aether actually exist ? We know the origin of our belief in the aether. If light takes several years to reach us from a distant star, it is no longer on the star, nor is it on the earth. It must be somewhere, and supported, so to speak, by some material agency." And referring to the Fizeau experiment, he even wrote: "The aether is all but in our grasp." He also said the aether is necessary to harmonize Lorentz's theory with Newton's third law. Even in 1912 in a paper called "The Quantum Theory", Poincaré ten times used the word "aether", and described light as "luminous vibrations of the aether". And although he admitted the relative and conventional character of space and time, he believed that the classical convention is more "convenient" and continued to distinguish between "true" time in the aether and "apparent" time in moving systems. Addressing the question if a new convention of space and time is needed he wrote in 1912: "Shall we be obliged to modify our conclusions? Certainly not; we had adopted a convention because it seemed convenient and we had said that nothing could constrain us to abandon it. Today some physicists want to adopt a new convention. It is not that they are constrained to do so; they consider this new convention more convenient; that is all. And those who are not of this opinion can legitimately retain the old one in order not to disturb their old habits, I believe, just between us, that this is what they shall do for a long time to come." Also Lorentz argued during his lifetime that in all frames of reference this one has to be preferred, in which the aether is at rest. Clocks in this frame are showing the "real“ time and simultaneity is not relative. 
However, if the correctness of the relativity principle is accepted, it is impossible to find this system by experiment. The shift to relativity Special relativity In 1905, Albert Einstein published his paper on what is now called special relativity. In this paper, by examining the fundamental meanings of the space and time coordinates used in physical theories, Einstein showed that the "effective" coordinates given by the Lorentz transformation were in fact the inertial coordinates of relatively moving frames of reference. From this followed all of the physically observable consequences of LET, along with others, all without the need to postulate an unobservable entity (the aether). Einstein identified two fundamental principles, each founded on experience, from which all of Lorentz's electrodynamics follows: The laws by which physical processes occur are the same with respect to any system of inertial coordinates (the principle of relativity) In empty space light propagates at an absolute speed c in any system of inertial coordinates (the principle of the constancy of light) Taken together (along with a few other tacit assumptions such as isotropy and homogeneity of space), these two postulates lead uniquely to the mathematics of special relativity. Lorentz and Poincaré had also adopted these same principles, as necessary to achieve their final results, but didn't recognize that they were also sufficient, and hence that they obviated all the other assumptions underlying Lorentz's initial derivations (many of which later turned out to be incorrect). Therefore, special relativity very quickly gained wide acceptance among physicists, and the 19th century concept of a luminiferous aether was no longer considered useful. Poincare (1905) and Hermann Minkowski (1905) showed that special relativity had a very natural interpretation in terms of a unified four-dimensional "spacetime" in which absolute intervals are seen to be given by an extension of the Pythagorean theorem. The utility and naturalness of the spacetime representation contributed to the rapid acceptance of special relativity, and to the corresponding loss of interest in Lorentz's aether theory. In 1909 and 1912 Einstein explained: In 1907 Einstein criticized the "ad hoc" character of Lorentz's contraction hypothesis in his theory of electrons, because according to him it was an artificial assumption to make the Michelson–Morley experiment conform to Lorentz's stationary aether and the relativity principle. Einstein argued that Lorentz's "local time" can simply be called "time", and he stated that the immobile aether as the theoretical foundation of electrodynamics was unsatisfactory. He wrote in 1920: Minkowski argued that Lorentz's introduction of the contraction hypothesis "sounds rather fantastical", since it is not the product of resistance in the aether but a "gift from above". He said that this hypothesis is "completely equivalent with the new concept of space and time", though it becomes much more comprehensible in the framework of the new spacetime geometry. However, Lorentz disagreed that it was "ad-hoc" and he argued in 1913 that there is little difference between his theory and the negation of a preferred reference frame, as in the theory of Einstein and Minkowski, so that it is a matter of taste which theory one prefers. 
Mass–energy equivalence It was derived by Einstein (1905) as a consequence of the relativity principle, that inertia of energy is actually represented by , but in contrast to Poincaré's 1900-paper, Einstein recognized that matter itself loses or gains mass during the emission or absorption. So the mass of any form of matter is equal to a certain amount of energy, which can be converted into and re-converted from other forms of energy. This is the mass–energy equivalence, represented by . So Einstein didn't have to introduce "fictitious" masses and also avoided the perpetual motion problem, because according to Darrigol, Poincaré's radiation paradox can simply be solved by applying Einstein's equivalence. If the light source loses mass during the emission by , the contradiction in the momentum law vanishes without the need of any compensating effect in the aether. Similar to Poincaré, Einstein concluded in 1906 that the inertia of (electromagnetic) energy is a necessary condition for the center of mass theorem to hold in systems, in which electromagnetic fields and matter are acting on each other. Based on the mass–energy equivalence, he showed that emission and absorption of em-radiation, and therefore the transport of inertia, solves all problems. On that occasion, Einstein referred to Poincaré's 1900-paper and wrote: Also Poincaré's rejection of the reaction principle due to the violation of the mass conservation law can be avoided through Einstein's , because mass conservation appears as a special case of the energy conservation law. General relativity The attempts of Lorentz and Poincaré (and other attempts like those of Abraham and Gunnar Nordström) to formulate a theory of gravitation were superseded by Einstein's theory of general relativity. This theory is based on principles like the equivalence principle, the general principle of relativity, the principle of general covariance, geodesic motion, local Lorentz covariance (the laws of special relativity apply locally for all inertial observers), and that spacetime curvature is created by stress-energy within the spacetime. In 1920, Einstein compared Lorentz's aether with the "gravitational aether" of general relativity. He said that immobility is the only mechanical property of which the aether has not been deprived by Lorentz, but, contrary to the luminiferous and Lorentz's aether, the aether of general relativity has no mechanical property, not even immobility: Priority Some claim that Poincaré and Lorentz are the true founders of special relativity, not Einstein. For more details see the article on this dispute. Later activity Viewed as a theory of elementary particles, Lorentz's electron/ether theory was superseded during the first few decades of the 20th century, first by quantum mechanics and then by quantum field theory. As a general theory of dynamics, Lorentz and Poincare had already (by about 1905) found it necessary to invoke the principle of relativity itself in order to make the theory match all the available empirical data. By this point, most vestiges of a substantial aether had been eliminated from Lorentz's "aether" theory, and it became both empirically and deductively equivalent to special relativity. The main difference was the metaphysical postulate of a unique absolute rest frame, which was empirically undetectable and played no role in the physical predictions of the theory, as Lorentz wrote in 1909, 1910 (published 1913), 1913 (published 1914), or in 1912 (published 1922). 
As a result, the term "Lorentz aether theory" is sometimes used today to refer to a neo-Lorentzian interpretation of special relativity. The prefix "neo" is used in recognition of the fact that the interpretation must now be applied to physical entities and processes (such as the standard model of quantum field theory) that were unknown in Lorentz's day. Subsequent to the advent of special relativity, only a small number of individuals have advocated the Lorentzian approach to physics. Many of these, such as Herbert E. Ives (who, along with G. R. Stilwell, performed the first experimental confirmation of time dilation) have been motivated by the belief that special relativity is logically inconsistent, and so some other conceptual framework is needed to reconcile the relativistic phenomena. For example, Ives wrote "The 'principle' of the constancy of the velocity of light is not merely 'ununderstandable', it is not supported by 'objective matters of fact'; it is untenable...". However, the logical consistency of special relativity (as well as its empirical success) is well established, so the views of such individuals are considered unfounded within the mainstream scientific community. John Stewart Bell advocated teaching special relativity first from the viewpoint of a single Lorentz inertial frame, then showing that Poincare invariance of the laws of physics such as Maxwell's equations is equivalent to the frame-changing arguments often used in teaching special relativity. Because a single Lorentz inertial frame is one of a preferred class of frames, he called this approach Lorentzian in spirit. Also some test theories of special relativity use some sort of Lorentzian framework. For instance, the Robertson–Mansouri–Sexl test theory introduces a preferred aether frame and includes parameters indicating different combinations of length and times changes. If time dilation and length contraction of bodies moving in the aether have their exact relativistic values, the complete Lorentz transformation can be derived and the aether is hidden from any observation, which makes it kinematically indistinguishable from the predictions of special relativity. Using this model, the Michelson–Morley experiment, Kennedy–Thorndike experiment, and Ives–Stilwell experiment put sharp constraints on violations of Lorentz invariance. References For a more complete list with sources of many other authors, see History of special relativity#References. Works of Lorentz, Poincaré, Einstein, Minkowski (group A) Preface partly reprinted in "Science and Hypothesis", Ch. 12. . Reprinted in Poincaré, Oeuvres, tome IX, pp. 395–413 . Reprinted in "Science and Hypothesis", Ch. 9–10. . See also the English translation. . Reprinted in "Science and Hypothesis", Ch. 6–7. Reprinted in Poincaré 1913, Ch. 6. . See also: English translation. . English Translation: Secondary sources (group B) In English: Other notes and comments (group C) External links Mathpages: Corresponding States, The End of My Latin, Who Invented Relativity?, Poincaré Contemplates Copernicus, Whittaker and the Aether, Another Derivation of Mass-Energy Equivalence Aether theories Hendrik Lorentz Special relativity
Lorentz ether theory
Physics
7,836
2,646,623
https://en.wikipedia.org/wiki/Immunocompetence
In immunology, immunocompetence is the ability of the body to produce a normal immune response following exposure to an antigen. Immunocompetence is the opposite of immunodeficiency (also known as immuno-incompetence or being immuno-compromised). In reference to lymphocytes, immunocompetence means that a B cell or T cell is mature and can recognize antigens and allow a person to mount an immune response. In order for lymphocytes such as T cells to become immunocompetent, which refers to the ability of lymphocyte cell receptors to recognize MHC molecules, they must undergo positive selection. Adaptive immunocompetence is regulated by growth hormone (GH), prolactin (PRL), and vasopressin (VP) – hormones secreted by the pituitary gland. See also Immunopharmacology and Immunotoxicology (medical journal) Parasite-stress theory References Immunology
Immunocompetence
Biology
230
11,281,115
https://en.wikipedia.org/wiki/SWEEPS-10
SWEEPS-10 is an extrasolar planet that, from June 2007 to August 2011, was the planet candidate with the shortest orbital period yet found, until PSR J1719-1438 b was discovered in 2011 with an even shorter orbit. The planet orbits the star SWEEPS J175902.00−291323.7 located in the Galactic bulge at a distance of approximately 22,000 light years from Earth (based on a distance modulus of 14.1). It completes an orbit of its star (designated SWEEPS J175902.00−291323.7) in just 10 hours, and is categorized as an ultra-short period planet (USPP). Located only 1.2 million kilometers from its star (roughly three times the distance between the Earth and the Moon), the planet is among the hottest ever detected; its estimated temperature is approximately 1,650 degrees Celsius. "This star-hugging planet must be at least 1.6 times the mass of Jupiter, otherwise the star's gravitational muscle would pull the planet apart," said team leader Kailash Sahu of the Space Telescope Science Institute in Baltimore, Maryland. Such USPPs seem to occur only around dwarf stars. The small star's relatively low temperature allows the planet to exist. "USPPs occur preferentially around normal red dwarf stars that are smaller and cooler than our Sun," Sahu said. See also SWEEPS stands for Sagittarius Window Eclipsing Extrasolar Planet Search SWEEPS-04 SWEEPS-11 References (web Preprint) External links SWEEPS-10 at exoplanet.eu Sagittarius (constellation) Exoplanet candidates Transiting exoplanets Hot Jupiters Giant planets Exoplanets discovered in 2006 Sagittarius Window Eclipsing Extrasolar Planet Search
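As a rough, illustrative consistency check of the figures quoted above: treating the orbit as circular and assuming a host-star mass of about 0.5 solar masses (a typical red-dwarf value; the text itself does not give the stellar mass), Kepler's third law with a = 1.2 million km ≈ 0.008 AU gives

```latex
P \approx \sqrt{\frac{a^3}{M_\ast}}\ \mathrm{yr}
  = \sqrt{\frac{(0.008)^3}{0.5}}\ \mathrm{yr}
  \approx 1.0 \times 10^{-3}\ \mathrm{yr} \approx 9\ \mathrm{hours},
```

broadly consistent with the quoted 10-hour period.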
SWEEPS-10
Astronomy
388
7,195,896
https://en.wikipedia.org/wiki/Orciprenaline
Orciprenaline, also known as metaproterenol, is a bronchodilator used in the treatment of asthma. Orciprenaline is a moderately selective β2 adrenergic receptor agonist that stimulates receptors of the smooth muscle in the lungs, uterus, and vasculature supplying skeletal muscle, with minimal or no effect on α adrenergic receptors. The pharmacologic effects of β adrenergic agonist drugs, such as orciprenaline, are at least in part attributable to stimulation through β adrenergic receptors of intracellular adenylyl cyclase, the enzyme which catalyzes the conversion of ATP to cAMP. Increased cAMP levels are associated with relaxation of bronchial smooth muscle and inhibition of release of mediators of immediate hypersensitivity from many cells, especially from mast cells. Adverse effects tremor nervousness dizziness weakness headache nausea tachycardia Rare side effects that could be life-threatening difficulty breathing rapid or increased heart rate irregular heartbeat chest pain or discomfort References Chemical substances for emergency medicine Phenylethanolamines Bronchodilators Tocolytics
Orciprenaline
Chemistry
244
2,293,999
https://en.wikipedia.org/wiki/Sequence%20transformation
In mathematics, a sequence transformation is an operator acting on a given space of sequences (a sequence space). Sequence transformations include linear mappings such as discrete convolution with another sequence and resummation of a sequence and nonlinear mappings, more generally. They are commonly used for series acceleration, that is, for improving the rate of convergence of a slowly convergent sequence or series. Sequence transformations are also commonly used to compute the antilimit of a divergent series numerically, and are used in conjunction with extrapolation methods. Classical examples for sequence transformations include the binomial transform, Möbius transform, and Stirling transform. Definitions For a given sequence and a sequence transformation the sequence resulting from transformation by is where the elements of the transformed sequence are usually computed from some finite number of members of the original sequence, for instance for some natural number for each and a multivariate function of variables for each See for instance the binomial transform and Aitken's delta-squared process. In the simplest case the elements of the sequences, the and , are real or complex numbers. More generally, they may be elements of some vector space or algebra. If the multivariate functions are linear in each of their arguments for each value of for instance if for some constants and for each then the sequence transformation is called a linear sequence transformation. Sequence transformations that are not linear are called nonlinear sequence transformations. In the context of series acceleration, when the original sequence and the transformed sequence share the same limit as the transformed sequence is said to have a faster rate of convergence than the original sequence if If the original sequence is divergent, the sequence transformation may act as an extrapolation method to an antilimit . Examples The simplest examples of sequence transformations include shifting all elements by an integer that does not depend on if and 0 otherwise, and scalar multiplication of the sequence some constant that does not depend on These are both examples of linear sequence transformations. Less trivial examples include the discrete convolution of sequences with another reference sequence. A particularly basic example is the difference operator, which is convolution with the sequence and is a discrete analog of the derivative; technically the shift operator and scalar multiplication can also be written as trivial discrete convolutions. The binomial transform and the Stirling transform are two linear transformations of a more general type. An example of a nonlinear sequence transformation is Aitken's delta-squared process, used to improve the rate of convergence of a slowly convergent sequence. An extended form of this is the Shanks transformation. The Möbius transform is also a nonlinear transformation, only possible for integer sequences. See also Aitken's delta-squared process Minimum polynomial extrapolation Richardson extrapolation Series acceleration Steffensen's method References Hugh J. Hamilton, "Mertens' Theorem and Sequence Transformations", AMS (1947) External links Transformations of Integer Sequences, a subpage of the On-Line Encyclopedia of Integer Sequences Mathematical series Asymptotic analysis Perturbation theory
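A minimal, illustrative sketch of one of the nonlinear transformations mentioned above, Aitken's delta-squared process, applied to the partial sums of a slowly convergent series (the function and variable names here are ours, not from any standard library):

```python
def aitken_delta_squared(s):
    """Apply Aitken's delta-squared process to a list of sequence terms s.

    Returns the transformed sequence
        t_n = s_n - (s_{n+1} - s_n)**2 / (s_{n+2} - 2*s_{n+1} + s_n),
    which has two fewer terms than the input.
    """
    t = []
    for n in range(len(s) - 2):
        denom = s[n + 2] - 2 * s[n + 1] + s[n]
        # If the second difference vanishes, keep the original term.
        t.append(s[n] if denom == 0 else s[n] - (s[n + 1] - s[n]) ** 2 / denom)
    return t


# Example: accelerating the partial sums of the Leibniz series for pi/4.
partial_sums, total = [], 0.0
for k in range(10):
    total += (-1) ** k / (2 * k + 1)
    partial_sums.append(total)

print(4 * partial_sums[-1])                        # slow:        ~3.0418
print(4 * aitken_delta_squared(partial_sums)[-1])  # accelerated: ~3.1413
```

The transformed sequence converges to the same limit markedly faster than the original partial sums, which is the sense of "series acceleration" used above.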
Sequence transformation
Physics,Mathematics
619
3,085,681
https://en.wikipedia.org/wiki/Framatome
Framatome () is a French nuclear reactor business. It is owned by Électricité de France (EDF) (80.5%) and Mitsubishi Heavy Industries (19.5%). The company first formed in 1958 to license Westinghouse's pressurized water reactor (PWR) designs for use in France. Similar agreements had been put in place with other European countries, and this led to a 1962 contract for a complete plant at Chooz. Westinghouse sold its stake to engineering firm Creusot-Loire in 1976, and the company became solely French owned. In 2001, Siemens sold its reactor business to Framatome. As part of a larger series of mergers with Cogema and Technicatome, Framatome became the Areva NP division of the new Areva. It changed its name back to Framatome in 2018 after a major investment by utility operator EDF. While originally a licensing and construction business, today Framatome supplies the entire reactor life-cycle, including design of the European Pressurized Reactor (EPR), construction, fuel management and many related tasks. History Framatome was founded in 1958 by several companies of the French industrial giant Schneider Group along with Empain, Merlin Gérin, and the American Westinghouse, in order to license Westinghouse's pressurized water reactor (PWR) technology and develop a bid for Chooz A (in France). Called Franco-Américaine de Constructions Atomiques (Framatome), the original company consisted of four engineers, one from each of the parent companies. The original mission of the company was to act as a nuclear engineering firm and to develop a nuclear power plant that was to be identical to Westinghouse's existing product specifications. The first European plant of Westinghouse design was by then already under construction in Italy. A formal contract was signed in September 1961 for Framatome to deliver a turnkey system, that is, not only the reactor, but an entire, ready-to-use system of piping, cabling, supports, and other auxiliary systems, propelling Framatome from a nuclear engineering firm to an industrial contractor. In January 1976, Westinghouse agreed to sell its remaining 15% share to Creusot-Loire, which now owned 66%, and to cede complete marketing independence to Framatome. In February, the Belgian Édouard-Jean Empain sold his 35% interest in Creusot-Loire to Paribas, a French government-linked banking group. A January 1982 company reorganization simultaneously strengthened French public and private control of the company by allowing Creusot-Loire to increase its share of the company while increasing CEA say in the running of the firm. In 2001, German company Siemens' nuclear business was merged into Framatome. Framatome and Siemens had been officially cooperating since 1989 on the development of the European Pressurized Reactor (EPR). In 2001, after a merger with Cogema (now Orano) and Technicatome, a new nuclear conglomerate called Areva was formed, and Framatome became Areva NP. In 2007, Areva and Mitsubishi Heavy Industries created a joint venture named Atmea, for marketing the ATMEA1 reactor design. In 2009, Areva NP acquired 30% stake in the Mitsubishi Nuclear Fuel company. In 2009, Siemens sold its remaining shares in Areva NP. In 2018, after restructuring of Areva, Areva NP was sold to Électricité de France. Mitsubishi Heavy Industries (19.5%), and Assystem (5%) also became shareholders. As a result of the restructuring, Électricité de France and Mitsubishi Heavy Industries became equal shareholders of Atmea with 50% of shares both while Framatome owns a special share in Atmea. 
Operations Framatome designs, manufactures, and installs components, fuel and instrumentation and control systems for nuclear power plants and offers a full range of reactor services. It is responsible for Flamanville 3, Taishan 1 and 2, and Hinkley Point C projects. In addition, Framatome conducts preliminary study for construction of six reactors at the Jaitapur Nuclear Power Project in the Indian state of Maharashtra. Framatome provides EPR reactors, which is a third generation pressurised water reactor (PWR) design, and Kerena reactors, which is 1,250 MWe Generation III+ boiling water reactor (BWR) design, provisionally known as SWR-1000. The Kerena design was developed from that of the Gundremmingen Nuclear Power Plant by Areva, with extensive German input and using operating experience from Generation II BWRs to simplify systems engineering. In 2016, following a discovery at Flamanville 3, about 400 large steel forgings manufactured by Framatome's Le Creusot Forge operation since 1965 were found to have carbon-content irregularities that weakened the steel. A widespread programme of French reactor checks was started involving a progressive programme of reactor shutdowns, continued over the winter high electricity demand period into 2017. In December 2016 the Wall Street Journal characterised the problem as a "decades long coverup of manufacturing problems", with Framatome executives acknowledging that Le Creusot had been falsifying documents. Le Creusot Forge was out of operation from December 2015 to January 2018 while improvements to process controls, the quality management system, organisation and safety culture were made. In 2020 Framatome won an order to deliver reactor protection systems for the Russian VVER-TOI design nuclear reactors at Kursk II. Locations France 18 sites spread throughout the country 7500+ employees Germany 3 locations: Erlangen, Karlstein and Lingen 3000+ employees China 8 sites : Beijing, Lianyungang, Shanghai, Qinshan, Fuqing, Daya Bay, Yangjiang and Taishan 4000 experts in the world providing vital support ACNS USA 14 sites: Benicia, CA, Charlotte, NC, Cranberry Township, PA, Fort Worth, TX, Foxborough, MA, Houston, TX, Jacksonville, FL, Lake Forest, CA, Lynchburg, VA (3 locations), Richland, WA, and Washington, D.C. 2,320 employees Canada 3 sites (Kincardine, ON, Montreal, QC, and Pickering, ON) UK 3 sites (Bristol and Cranfield) References External links Areva French companies established in 1958 Energy companies established in 1958 Nuclear technology companies of France Technology companies established in 1958 Engineering companies of France Electrical engineering companies Électricité de France
Framatome
Engineering
1,345
32,041,228
https://en.wikipedia.org/wiki/HIP%2078530%20b
HIP 78530 b is an object that is either a planet or a brown dwarf in the orbit of the star HIP 78530. It was observed as early as 2000, but the object was not confirmed as one in orbit of the star HIP 78530 until a direct imaging project photographed the star in 2008. The image caught the attention of the project's science team, so the team followed up on its initial observations. HIP 78530 b orbits a young, hot, bright blue star in the Upper Scorpius association. The planet itself is over twenty-three times more massive than Jupiter, orbiting eighteen times further from its host star than Pluto does from the Sun by the estimates published in its discovery paper. In this predicted orbit, HIP 78530 b completes an orbit every twelve thousand years. Discovery Between 2000 and 2001, the ADONIS system at the ESO 3.6 m Telescope in Chile detected a faint object in the vicinity of HIP 78530. This object was reported in 2005 and 2007, although the astronomers investigating the star were not able to tell, based on their observations, if the faint object was an orbiting companion or not. The team did not follow up on this. The random selection of ninety-one stars in the Upper Scorpius association provided a sample of stars to be observed using the Near Infrared Imager and Spectrometer (NIRI) and Altitude conjugate Adaptive Optics for the Infrared (ALTAIR) adaptive optics system at the Gemini Observatory. Among the ninety stars selected for direct imaging was the star HIP 78530, which was first imaged by the camera on May 24, 2008. This initial image revealed the presence of the same faint object within the vicinity of HIP 78530. Follow-up imaging took place on July 2, 2009, and August 30, 2010, using the same instruments, as astronomers hoped to reveal this companion object's proper motion, or the rate that it moves over time. Additional follow-up data was recovered in the spring and summer of 2010, but large errors in the data's astrometry led the investigating astronomers to disregard it. The observations over the three years was compiled, with the data used to filter out pixelated portions of the images and improve the images' quality. The result suggested not only that the faint object in the image was nearby the star HIP 78530, but that it was a brown dwarf or planet in size. Further study would be needed to prove its true nature. On July 2, 2009, July 3, 2009, and August 8, 2009, use of the NIFS integral field spectrograph with ALTAIR allowed the astronomers to collect data on the spectrum of the faint object and its star. Analysis of the spectra and the objects' astrometry (how the star and the faint object change position in the sky) led to the confirmation of the companion HIP 78530 b. The confirmation of HIP 78530 b was reported on January 24, 2011. In imaging the ninety-one stars, HIP 78530 b and 1RXSJ1609-2105b were discovered. The discoveries of these two orbiting bodies allowed astronomers to predict that bodies with such low planet/brown dwarf-to-star mass ratios (below 0.01) orbiting at a distance of hundreds of AU exists in the orbits of 2.2% of all stars. However, this number is a lower limit, as astronomers have been unable to detect smaller, low-mass planets that fit this scenario. Host star HIP 78530 is a luminous, blue B-type main sequence star in the Upper Scorpius association, a loose star cluster composed of stars with a common origin. The star is estimated to be approximately 2.2 times the mass of the Sun. 
Ages of the Upper Scorpius group have been quoted at 5 million years; however, a more recent estimate suggests that the group is somewhat older (approximately 11 million years old). Its effective temperature is estimated at 10500 K, less than twice the effective temperature of the Sun. HIP 78530 has an apparent magnitude of 7.18. It is incredibly faint, if visible at all, to the unaided eye of an observer on Earth. Characteristics HIP 78530 b is most likely a brown dwarf, a massive object that is large enough to fuse deuterium (something that planets are too small to do) but not large enough to ignite and become a star. Because HIP 78530 b's characteristics blur the line between brown dwarf and planet, astronomers have tried to determine what HIP 78530 b is by predicting whether it was created in a planet-like or star-like (how brown dwarfs are formed) manner. Its estimated mass is over 23.04 times that of Jupiter. Additionally, HIP 78530 b orbits its host star at an estimated average distance of 710 AU, which is 710 times the average distance between the Earth and the Sun, assuming the brown dwarf has a circular orbit. The average distance between the dwarf planet Pluto and the Sun is 39.482 AU, meaning that HIP 78530 b orbits its host star nearly eighteen times farther than Pluto orbits the Sun. In accordance with the data, HIP 78530 b would complete an orbit approximately every 12,000 years, although the actual orbit of HIP 78530 b is most likely smaller than 710 AU, but it has not been directly observed long enough to know definitively. References Exoplanets discovered in 2011 Scorpius Exoplanets detected by direct imaging Upper Scorpius
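The roughly 12,000-year period quoted above follows from Kepler's third law applied to the figures given in the text (separation a ≈ 710 AU, host mass ≈ 2.2 solar masses, and the same circular-orbit assumption):

```latex
P \approx \sqrt{\frac{a^3}{M_\ast}}\ \mathrm{yr}
  = \sqrt{\frac{710^3}{2.2}}\ \mathrm{yr}
  \approx 1.3 \times 10^4\ \mathrm{yr}
```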
HIP 78530 b
Astronomy
1,146
3,080,821
https://en.wikipedia.org/wiki/Alcan%20Lynemouth%20Aluminium%20Smelter
The Alcan Lynemouth Aluminium Smelter was an industrial facility near Ashington, Northumberland, on the coast of North East England, south of the village of Lynemouth. The smelter was owned by the Canadian aluminium company Alcan, which is part of Rio Tinto. The smelter was opened in 1974 at a cost, which exceeded its budgeted estimate of £54 million, of $156 million. The plant ceased production in March 2012, and demolition of the facility was completed in March 2018. Factors determining the smelter's site A variety of factors determined the smelter's position: The first was a source of electric power to smelt the aluminium. One tonne of aluminium requires the same amount of electricity that an average family uses in 20 years, so cheap power was needed. In 1972, Alcan commissioned Lynemouth Power Station, less than from the smelter's site, to fulfil its power needs. The station's site was convenient for access to the Ellington and Lynemouth coal mines nearby, which were also the fundamental reason for the nearby village's creation. The power station has a 420 megawatt (MW) capacity, more than enough to meet the load requirements of the smelter. The spare electricity is sold to the National Grid. Another factor was finding a labour force. Many coal mines in the area had shut down, leaving thousands of people there unemployed. Aluminium smelting is very labour-intensive, but the workforce in the local area was used to heavy work because of working in the mines. The British government also granted £28 million to the company to help reduce unemployment in the area. Transport was another major factor as bauxite could not be found in the United Kingdom, only in places such as Jamaica and Australia. The smelter's location had to be near a port with good transport links to the site. The town of Blyth, which is south of the smelter, already had a deep sea port. There was also a railway link from the port going directly to the power station, which was connected to the Alcan facility. The site also has good road links. Facts The smelter had two of the most efficient ring burners in the world, costing around £17 million each. The smelter was the only aluminium smelting site in Europe which rebuilt the smelter whilst still in production. It was a 100-day process which took place every seven years. The smelter was provided with alumina by two trains a day from Blyth, each consisting of 21 wagons. The alumina was shipped to Blyth from Limerick in the Republic of Ireland. Coke was shipped to Blyth from Louisiana in the U.S. and was transported to the smelter by heavy goods vehicles. Worries When work started on the site, local farmers were worried that pollution from the smelter would ruin their crops and harm their livestock. To address their concerns, Alcan decided to buy the land from them. Alcan now owns over of land in the local area and employs a farming director. The land is still used to grow crops and raise livestock. In early 2005, residents of nearby villages were worried about the fate of the smelter when the only remaining local coal mine, situated at Ellington, closed. However, the smelter did not close and imports its coal from overseas or from mines in other parts of the country. The emissions of the power plant connected to the smelter were another concern for the environment. 
In April 2010, the European Court of Justice decided that, contrary to the claim of the UK government, the power plant was subject to the emission limit values laid down in the 2001 Large Combustion Plant Directive. As a consequence, the plant's emissions of air-polluting substances had to be reduced. Closure Production at the Lynemouth Smelter ended at 14:00 on 29 March 2012, following a 90-day consultation period. The plant closed in May 2012, putting 515 people out of work and causing a knock-on effect in its local supply chain. Alcan cited rising energy costs due to emerging European environmental legislation as the reason. The 420 MW coal-fired power station continues to operate under new ownership. In 2015, the site was sold by Rio Tinto to Harworth Estates, who plan to turn it into an 'employment park'. In June 2016, all eight chimneys at the site were demolished and the site was decommissioned. Demolition of the former smelter was completed in March 2018. The remaining buildings are now rented to other businesses. The power station has since been converted to run as a biomass power plant, and the surrounding area is now a wind farm site. See also Anglesey Aluminium List of aluminium smelters References External links UK Business Park News on Alcan AME Research Brief description Company invests in a safe staff future Aluminium smelters Non-ferrous metallurgical works in the United Kingdom Former Rio Tinto (corporation) subsidiaries Alcan Newbiggin-by-the-Sea
Alcan Lynemouth Aluminium Smelter
Chemistry
1,043
61,503,305
https://en.wikipedia.org/wiki/C27H31NO5
The molecular formula C27H31NO5 (molar mass: 449.55 g/mol) may refer to: Ignavine Yaequinolone J1
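As a quick check of the quoted figure, the molar mass of C27H31NO5 can be recomputed from standard atomic weights; the Python sketch below uses conventional values rounded to three decimal places.

```python
# Recompute the molar mass of C27H31NO5 from standard atomic weights (g/mol).
atomic_weight = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
formula = {"C": 27, "H": 31, "N": 1, "O": 5}

molar_mass = sum(atomic_weight[el] * count for el, count in formula.items())
print(f"{molar_mass:.2f} g/mol")  # about 449.55 g/mol, matching the value above
```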
C27H31NO5
Chemistry
53
60,058,463
https://en.wikipedia.org/wiki/Lactivibrio
Lactivibrio is a genus of bacteria in the family Synergistaceae with one known species (Lactivibrio alcoholicus). Lactivibrio alcoholicus has been isolated from mesophilic granular sludge in Tokyo, Japan. See also List of bacteria genera List of bacterial orders References Synergistota Bacteria genera Monotypic bacteria genera
Lactivibrio
Biology
76
5,895,310
https://en.wikipedia.org/wiki/Tachykinin%20receptor
There are three known mammalian tachykinin receptors, termed NK1, NK2 and NK3. All are members of the 7-transmembrane G-protein-coupled receptor family and induce the activation of phospholipase C, producing inositol trisphosphate (so-called Gq-coupled). Inhibitors of NK1, known as NK1 receptor antagonists, can be used as antiemetic agents, such as the drug aprepitant. Binding The genes and preferred receptor ligands are as follows: NK1 is encoded by TACR1 and preferentially binds substance P; NK2 is encoded by TACR2 and preferentially binds neurokinin A; NK3 is encoded by TACR3 and preferentially binds neurokinin B (Hökfelt et al., 2001; Page, 2004; Pennefather et al., 2004; Maggi, 2000). See also Substance P G protein coupled receptors References External links G protein-coupled receptors Molecular neuroscience
Tachykinin receptor
Chemistry,Biology
159
646,346
https://en.wikipedia.org/wiki/Skeuomorph
A skeuomorph (also spelled skiamorph, ) is a derivative object that retains ornamental design cues (attributes) from structures that were necessary in the original. Skeuomorphs are typically used to make something new feel familiar in an effort to speed understanding and acclimation. They employ elements that, while essential to the original object, serve no pragmatic purpose in the new system. Examples include pottery embellished with imitation rivets reminiscent of similar pots made of metal and a software calendar that imitates the appearance of binding on a paper desk calendar. Definition and purpose The term skeuomorph is compounded from the Greek skeuos (σκεῦος), meaning "container or tool", and morphḗ (μορφή), meaning "shape". It has been applied to material objects since 1890. With the advent of graphical computer systems in the 1980s, skeuomorph is used to characterize the many "old fashioned" icons utilized in graphic user interfaces. A similar alternative definition of skeuomorph is "a physical ornament or design on an object made to resemble another material or technique". This definition is broader in scope, as it can be applied to design elements that still serve the same function as they did in a previous design. Skeuomorphs may be deliberately employed to make a new design more familiar and comfortable or may be the result of cultural influences and norms on the designer. They may be the artistic expression on the part of the designer. The usability researcher and academic Don Norman describes skeuomorphism in terms of cultural constraints: interactions with a system that are learned only through culture. Norman also popularized perceived affordances, where the user can tell what an object provides or does based on its appearance, which skeuomorphism can make easy. The concept of skeuomorphism overlaps with other design concepts. Mimesis is an imitation, coming directly from the Greek word. Archetype is the original idea or model that is emulated, where the emulations can be skeuomorphic. Skeuomorphism is parallel to, but different from, path dependence in technology, where an element's functional behavior is maintained even when the original reasons for its design no longer exist. Physical examples Many features of wooden buildings were repeated in stone by the Ancient Greeks when they transitioned from wood to masonry construction. Decorative stone features in the Doric order of classical architecture in Greek temples such as triglyphs, mutules, guttae, and modillions are supposed to be derived from true structural and functional features of the early wooden temples. The triglyph and guttae are seen as recreating, respectively, the carved beam-ends and six wooden pegs driven in to secure the beam in place. Historically, high-status items such as the Minoans' elaborate and expensive silver cups were recreated for a wider market using pottery, a cheaper material. The exchange of shapes between metalwork and ceramics, often from the former to the latter, is near-constant in the history of the decorative arts. Sometimes pellets of clay are used to evoke the rivets of the metal originals. There is also evidence of skeuomorphism in material transitions. Leather and pottery often carry over features from the wooden counterparts of previous generations. Clay pottery has also been found bearing rope-shaped protrusions, pointing to craftsmen seeking familiar shapes and processes while working with new materials. 
Another example is the tiny, non-functional handle on glass maple syrup bottles, which evoke stoneware jug handles. In this context, skeuomorphs exist as traits sought in other objects, either for their social desirability or psychological comforts. In the modern era, cheaper plastic items often attempt to mimic more expensive wooden and metal products, though they are only skeuomorphic if new ornamentation references the original functionality, such as molded screw heads in molded plastic items. Another well-known skeuomorph is the plastic Adirondack chair. The lever on a mechanical slot machine, or "one-armed bandit", is a skeuomorphic throwback feature when it appears on a modern video slot machine, since it is no longer required to set physical mechanisms and gears into motion. Articles of clothing are also given skeuomorphic treatment; for example, faux buckles on certain strap shoes, such as Mary Janes for small children, which permit the retention of the original aesthetic. Automotive design has historically been full of physical skeuomorphisms, such as the transformation from wooden framed and bodied early vehicles produced by coachworks to those which incorporated both functional wood and steel (referred to as "woodies") to, ultimately, simulated vinyl woodgrain cladding entirely for style by the 1960s. Other examples include thinly faux chrome-plated plastic components and imitation leather, gold, interior wood, pearl, or crystal jeweled elements. In The Design of Everyday Things, Don Norman notes that early automobiles were designed after horse-drawn carriages. Indeed, the early automobile design Horsey Horseless even included a wooden horse head on the front to try to minimize scaring the real animals. In the 1970s, opera windows and vinyl roofs on many luxury sedan cars similarly imitated carriage work from the horse and buggy era. , most electric cars feature prominent front grilles, even though there is little need for intake of air to cool an absent internal combustion engine. Virtual examples Many computer programs have a skeuomorphic graphical user interface that emulates the aesthetics of physical objects. Examples include a digital contact list resembling a Rolodex, and IBM's 1998 RealThings package. A more extreme example is found in some music synthesis and audio processing software packages, which closely emulate physical musical instruments and audio equipment complete with buttons and dials. On a smaller scale, the icons of GUIs may remain skeuomorphic representations of physical objects, such as an image of a physical paper folder to represent computer files in the desktop metaphor. This is even the case for items that are no longer directly applicable to the task they represent (such as a drawing of a floppy disk to represent "save"). Apple Inc., while under the direction of Steve Jobs, was known for its wide usage of skeuomorphic designs in various applications. This changed after Jobs's death when Scott Forstall, described as "the most vocal and high-ranking proponent of the visual design style favored by Mr. Jobs", resigned. Apple designer Jonathan Ive took over some of Forstall's responsibilities and had "made his distaste for the visual ornamentation in Apple's mobile software known within the company". With the announcement of iOS 7 at WWDC in 2013, Apple officially shifted from skeuomorphism to a more simplified design, thus beginning the so-called "death of skeuomorphism" at Apple. 
Skeuomorphism is a key component of Frutiger Aero, an Internet aesthetic derived from mid-2000s user interface designs. Other virtual skeuomorphs do not employ literal images of some physical object; but rather allude to ritual human heuristics or heuristic motifs, such as slider bars that emulate linear potentiometers and visual tabs that behave like physical tabbed file folders. Another example is the swiping hand gesture for turning the "pages" or screens of a tablet display. Virtual skeuomorphs can also be auditory. The shutter-click sound emitted by most camera phones when taking a picture is an auditory skeuomorph. Another familiar example is the paper-crumpling sound when a document is trashed. In design Retrofuturism incorporates visual motifs from old predictions of the future, especially Skeuomorphic design is frequently incorporated in retrowave or synthwave illustrations. Skeuomorphic design is closely linked with metamodernism. Skeuomorphic design seems to be preferred by older recipient groups, often referred to as "digital immigrants", while "digital natives" tend to favor flat design over skeuomorphisms. However, younger people are still able to understand the signifiers that skeuomorphic design employs. A better user experience could be measured for each respective design philosophy among digital natives and immigrants. Arguments in favor An argument in favor of skeuomorphic design in digital devices is that signifiers to affordances help those familiar with the original item learn to use the digital version. Interaction paradigms for computer devices are culturally entrenched; proposals for change often spawn debate. Don Norman describes this process as a form of cultural heritage, and credits skeuomorphism with easing transitions to newer technology, stating that it "gives comfort and makes learning easier" until the newer devices no longer need to resemble their predecessors. Compared to flat design, skeuomorphic design seems to facilitate a fast navigation through graphic user interfaces, because icons are more easily recognized and less abstract than their minimalistic counterparts found in flat design. Arguments against The arguments against virtual skeuomorphic design are that skeuomorphic interface elements are harder to operate and take up more screen space than standard interface elements, that this breaks operating system interface design standards, that it causes an inconsistent look and feel between applications, that skeuomorphic interface elements rarely incorporate numeric input or feedback for accurately setting a value, that many users may have no experience with the original device being emulated, that skeuomorphic design can increase cognitive load with visual noise that after a few uses gives little or no value to the user, that skeuomorphic design limits creativity by grounding the user experience to physical counterparts, and that skeuomorphic designs often do not accurately represent underlying system state or data types due to inappropriate mimesis. For example, an analog gauge interface may be read less precisely than a digital one. Gallery See also Anachronism Facadism Flat design Human interface guidelines Intuition Spandrel Trompe-l'œil, 2D artwork using realistic optical illusions to simulate three dimensions Vestigiality Footnotes General references Flecker, M., "An Age of Intermateriality: Skeuomorphism and Intermateriality between the Late Republic and Early Empire", in: A. Haug – A. Hielscher – T. 
Lauritsen (eds.), Materiality in Roman Art and Architecture: Aesthetics, Semantics and Function (Berlin 2021) 265–283 (Open Access). Freeth, C. M., & Taylor, T. F. (2001). Skeuomorphism in Scythia: Deference and Emulation, Olbia ta antichnii svit. Kiev: British Academy; Ukrainian Academy of Sciences. p. 150. External links flatisbad.com, a selection of user experience studies on skeuomorphism maintained by Lomonosov Moscow State University, Laboratory of Work Psychology Architectural elements Computer accessibility Design Graphical user interfaces Industrial design Product design
Skeuomorph
Technology,Engineering
2,266
779,111
https://en.wikipedia.org/wiki/Xenotransplantation
Xenotransplantation (xenos- from the Greek meaning "foreign" or strange), or heterologous transplant, is the transplantation of living cells, tissues or organs from one species to another. Such cells, tissues or organs are called xenografts or xenotransplants. It is contrasted with allotransplantation (from other individual of same species), syngeneic transplantation or isotransplantation (grafts transplanted between two genetically identical individuals of the same species) and autotransplantation (from one part of the body to another in the same person). Xenotransplantation is an artificial method of creating an animal-human chimera, that is, a human with a subset of animal cells. In contrast, an individual where each cell contains genetic material from a human and an animal is called a human–animal hybrid. Patient derived xenografts are created by xenotransplantation of human tumor cells into immunocompromised mice, and is a research technique frequently used in pre-clinical oncology research. Human xenotransplantation offers a potential treatment for end-stage organ failure, a significant health problem in parts of the industrialized world. It also raises many novel medical, legal and ethical issues. A continuing concern is that many animals, such as pigs, have a shorter lifespan than humans, meaning that their tissues age at a quicker rate. (Pigs have a maximum life span of about 27 years.) Disease transmission (xenozoonosis) and permanent alteration to the genetic code of animals are also causes for concern. Similarly to objections to animal testing, animal rights activists have also objected to xenotransplantation on ethical grounds. A few temporarily successful cases of xenotransplantation are published. Bioprosthetic artificial heart valves are generally pig or bovine-derived, but the cells are killed by glutaraldehyde treatment before insertion, therefore technically not fulfilling the WHO definition of xenotransplantation of being live cells. History The first serious attempts at xenotransplantation (then called heterotransplantation) appeared in the scientific literature in 1905, when slices of rabbit kidney were transplanted into a child with chronic kidney disease. In the first two decades of the 20th century, several subsequent efforts to use organs from lambs, pigs, and primates were published. Scientific interest in xenotransplantation declined when the immunological basis of the organ rejection process was described. The next waves of studies on the topic came with the discovery of immunosuppressive drugs. Even more studies followed Joseph Murray's first successful renal transplantation in 1954 and scientists, facing the ethical questions of organ donation for the first time, accelerated their effort in looking for alternatives to human organs. Non-human kidney to a human In 1963, doctors at Tulane University attempted chimpanzee-to-human renal transplantations in six people who were near death; after this and several subsequent unsuccessful attempts to use primates as organ donors and the development of a working cadaver organ procuring program, interest in xenotransplantation for kidney failure dissipated. Out of 13 such transplants performed by Keith Reemtsma, one kidney recipient lived for nine months, returning to work as a schoolteacher. At autopsy, the chimpanzee kidneys appeared normal and showed no signs of acute or chronic rejection. 
Non-human heart to a human An American infant girl known as "Baby Fae" with hypoplastic left heart syndrome was the first infant recipient of a xenotransplantation, when she received a baboon heart in 1984. The procedure was performed by Leonard Lee Bailey at Loma Linda University Medical Center in Loma Linda, California. Fae died 21 days later due to a humoral-based graft rejection thought to be caused mainly by an ABO blood type mismatch, considered unavoidable due to the rarity of type O baboons. The graft was meant to be temporary, but unfortunately a suitable allograft replacement could not be found in time. While the procedure itself did not advance the progress on xenotransplantation, it did shed a light on the insufficient amount of organs for infants. The story made such an impact that the crisis of infant organ shortage improved for that time. Non-human heart, lungs, and kidneys to a human The first transplant of a non-genetically modified pig's heart, lungs and kidneys into a human was performed in Sonapur, Assam, in India in mid-December 1996, and was announced in January 1997. The recipient was Purno Saikia, a 32-year-old terminally-ill man; he died of multiple infections shortly after the operation. The Indian cardiothoracic surgeon Dhani Ram Baruah and two of his associates, Jonathan Ho Kei-shing (of the Hong Kong-based Prince of Wales Medical Institute) and C.S. James, performed the surgeries. Baruah claimed that Saikia had failed to respond to conventional surgery, and that the patient and his family had consented to the procedure. All three involved in the surgery were arrested on January 9, 1997, for the alleged violation of the Transplantation of Human Organs and Tissues Act of 1994. Baruah was dismissed in medical circles as a "mad scientist" and the procedure was dubbed a "hoax". Baruah himself signed a statement saying he had done no transplant, but then alleged that the confession was forced from him. They were found guilty of unethical procedure and culpable homicide and imprisoned for 40 days. Dhani Ram Baruah's surgical institute was also found to be without necessary registration. Critics said Dhani Bam Baruah's claims and medical procedures were neither taken seriously nor accepted by the scientific community because he never got his findings scientifically peer-reviewed. Past complaints of ethics violations during surgeries in Hong Kong by Baruah and Ho had occurred in 1992, when they had implanted heart valves, developed by Baruah, made of animal tissue. A year later, six patients died. The Asian Medical News reported that "grave concerns" were expressed "over the procedure and ethics of the implementation". Genetically engineered non-human kidney to a human In September 2021, surgeons led by Robert Montgomery performed the first genetically engineered pig kidney xenotransplant to a brain-dead human at NYU Langone Health with no sign of immediate rejection (partly because the pig thymus gland was transplanted as well). The kidney was procured from a pig with only a single gene modification: the removal of alpha-gal. In July 2023, surgeons from the NYU Langone Transplant Institute completed a transplant of a genetically modified pig kidney (along with the pig's thymus gland underneath it) into a patient declared brain dead but maintained on a respirator. The patient had previously consented to be an organ donor, but his tissues were not considered suitable for transplant. 
The kidney came from an animal with a knocked-out gene for the production of alpha gal sugars, which has been implicated in immune response to mammalian tissue. In order to ensure that renal function was only supported by the pig kidney, the team removed both of the patient's kidneys. The team has reported that the kidney has maintained optimal functioning for over a month, as evidenced by routine testing of creatinine and weekly biopsies. The team plans to monitor the patient for another month, pending approval by ethics board and his family. In March 2024, Richard Slayman, a patient whose transplanted human kidney had failed, received a genetically engineered pig kidney xenotransplant from surgeons at Massachusetts General Hospital. This kidney has 69 genomic edits (3 gene knockout, 7 human gene insertion and 59 copies of the porcine retrovirus knockout) made by eGenesis, Inc. Mr. Slayman died a few months later of unrelated causes, with no apparent rejection of the kidney. Meanwhile, in April 2024, Lisa Pisano became the second person to receive such a kidney transplant. Because of "unique challenges" related to a mechanical heart pump she received along with the kidney, her kidney had to be removed due to "insufficient blood flow" late in May. Medication also deteriorated the kidney, which led to the organs rejection. Genetically engineered non-human heart to a human In January 2022, doctors led by cardiothoracic surgeon Bartley P. Griffith and Muhammad M. Mohiuddin at the University of Maryland Medical Center and University of Maryland School of Medicine performed a heart transplant from a genetically modified pig to a terminally ill patient, David Bennett Sr., who was ineligible for a standard human heart transplant. The pig had undergone specific gene editing to remove enzymes responsible for producing sugar antigens that lead to hyperacute organ rejection in humans. The US medical regulator gave special dispensation to carry out the procedure under compassionate use criteria. The recipient died two months after the transplantation. In June and July 2022, surgeons at NYU Langone Health performed two genetically modified pig heart transplants into recently deceased humans. The hearts were from pigs that had the identical 10 genetic modifications used in the University of Maryland Medical Center heart xenotransplantation in January 2022. All three hearts came from Revivicor, Inc., a facility based in Blacksburg, Va., and a subsidiary of United Therapeutics. On 20 September 2023, surgeons at the University of Maryland Medical Center in Baltimore performed a heart transplant from a genetically modified pig to Lawrence Faucette, a patient with terminal heart disease who was ineligible for a traditional heart transplant. On 30 October 2023, Faucette died after showing signs of organ rejection. Potential uses A worldwide shortage of organs for clinical implantation causes about 20–35% of patients who need replacement organs to die on the waiting list. Certain procedures, some of which are being investigated in early clinical trials, aim to use cells or tissues from other species to treat life-threatening and debilitating illnesses such as cancer, diabetes, liver failure and Parkinson's disease. If vitrification can be perfected, it could allow for long-term storage of xenogenic cells, tissues and organs so that they would be more readily available for transplant. Xenotransplants could save thousands of patients waiting for donated organs. 
The animal organ, probably from a pig or baboon could be genetically altered with human genes to trick a patient's immune system into accepting it as a part of its own body. They have re-emerged because of the lack of organs available and the constant battle to keep immune systems from rejecting allotransplants. Xenotransplants are thus potentially a more effective alternative. Xenotransplantation of human tumor cells into immunocompromised mice is a research technique frequently used in oncology research. It is used to predict the sensitivity of the transplanted tumor to various cancer treatments; several companies offer this service, including the Jackson Laboratory. Human organs have been transplanted into animals as a powerful research technique for studying human biology without harming human patients. This technique has also been proposed as an alternative source of human organs for future transplantation into human patients. For example, researchers from the Ganogen Research Institute transplanted human fetal kidneys into rats which demonstrated life supporting function and growth. Potential animal organ donors Since they are the closest relatives to humans, non-human primates were first considered as a potential organ source for xenotransplantation to humans. Chimpanzees were originally considered the best option since their organs are of similar size, and they have good blood type compatibility with humans, which makes them potential candidates for xenotransfusions. However, since chimpanzees are listed as an endangered species, other potential donors were sought. Baboons are more readily available, but impractical as potential donors. Problems include their smaller body size, the infrequency of blood group O (the universal donor), their long gestation period, and their typically small number of offspring. In addition, a major problem with the use of nonhuman primates is the increased risk of disease transmission, since they are so closely related to humans. Pigs (Sus scrofa domesticus) are currently thought to be the best candidates for organ donation. The risk of cross-species disease transmission is decreased because of their increased phylogenetic distance from humans. Pigs have relatively short gestation periods, large litters, and are easy to breed, making them readily available. They are inexpensive and easy to maintain in pathogen-free facilities, and current gene editing tools are adapted to pigs to combat rejection and potential zoonoses. Pig organs are anatomically comparable in size, and new infectious agents are less likely since they have been in close contact with humans through domestication for many generations. Treatments sourced from pigs have proven to be successful such as porcine-derived insulin for patients with diabetes mellitus. Increasingly, genetically engineered pigs are becoming the norm, which raises moral qualms, but also increases the success rate of the transplant. Current experiments in xenotransplantation most often use pigs as the donor, and baboons as human models. In 2020 the U.S. Food and Drug Administration approved a genetic modification of pigs so they do not produce alpha-gal sugars. Pig organs have been used for kidney and heart transplants into humans. Barriers and issues Immunologic barriers To date, no xenotransplantation trials have been entirely successful due to the many obstacles arising from the response of the recipient's immune system. Xenozoonoses are one of the biggest threats to rejections, as they are xenogeneic infections. 
The introduction of these microorganisms is a major issue, as they can lead to fatal infections and then rejection of the organs. This response, which is generally more extreme than in allotransplantations, ultimately results in rejection of the xenograft, and can in some cases result in the immediate death of the recipient. Organ xenografts face several types of rejection; these include hyperacute rejection, acute vascular rejection, cellular rejection, and chronic rejection. A rapid and violent hyperacute response comes as a result of antibodies present in the host organism. These antibodies are known as xenoreactive natural antibodies (XNAs). Hyperacute rejection This rapid and violent type of rejection occurs within minutes to hours from the time of the transplant. It is mediated by the binding of XNAs (xenoreactive natural antibodies) to the donor endothelium, causing activation of the human complement system, which results in endothelial damage, inflammation, thrombosis and necrosis of the transplant. XNAs are first produced and begin circulating in the blood in neonates, after colonization of the bowel by bacteria with galactose moieties on their cell walls. Most of these antibodies are of the IgM class, but they also include IgG and IgA. The epitope XNAs target is an α-linked galactose moiety, galactose-alpha-1,3-galactose (also called the α-Gal epitope), produced by the enzyme alpha-galactosyltransferase. Most non-primates contain this enzyme; thus, this epitope is present on the organ epithelium and is perceived as a foreign antigen by primates, which lack the galactosyltransferase enzyme. In pig-to-primate xenotransplantation, XNAs recognize porcine glycoproteins of the integrin family. The binding of XNAs initiates complement activation through the classical complement pathway. Complement activation causes a cascade of events leading to destruction of endothelial cells, platelet degranulation, inflammation, coagulation, fibrin deposition, and hemorrhage. The result is thrombosis and necrosis of the xenograft. Hyperacute rejection is a severe, immediate immune response that occurs when a transplanted organ, such as a pig kidney, is rapidly attacked and destroyed by the recipient's immune system. In the context of pig kidney xenotransplantation, this type of rejection is triggered by pre-existing antibodies in the recipient's blood that recognize and bind to antigens on the surface of the pig kidney cells. These antigens, which are foreign to the human immune system, include certain carbohydrates and proteins that are not present in human tissues. The binding of these antibodies activates the complement system, leading to a cascade of events that cause widespread clotting and inflammation in the transplanted organ's blood vessels. As a result, the kidney quickly becomes ischemic (lacking adequate blood flow) and undergoes acute damage, often resulting in the organ's immediate loss. Hyperacute rejection can severely affect the recipient's body by leading to the rapid and complete failure of the transplanted kidney. This failure not only undermines the purpose of the transplant, which is to restore kidney function, but also poses serious health risks to the recipient. The sudden loss of kidney function can result in the accumulation of waste products and fluids in the body, causing symptoms such as swelling, electrolyte imbalances, and potentially life-threatening complications.
Furthermore, hyperacute rejection necessitates immediate medical intervention, often leading to the removal of the rejected kidney and the need to explore alternative treatment options, such as returning to dialysis or seeking another transplant source. Overcoming hyperacute rejection Since hyperacute rejection presents such a barrier to the success of xenografts, several strategies to overcome it are under investigation: Interruption of the complement cascade The recipient's complement cascade can be inhibited through the use of cobra venom factor (which depletes C3), soluble complement receptor type 1, anti-C5 antibodies, or C1 inhibitor (C1-INH). Disadvantages of this approach include the toxicity of cobra venom factor and, most importantly, that these treatments would deprive the individual of a functional complement system. Transgenic organs (Genetically engineered pigs) 1,3-galactosyltransferase gene knockouts – These pigs do not contain the gene that codes for the enzyme responsible for expression of the immunogenic gal-α-1,3Gal moiety (the α-Gal epitope). Increased expression of H-transferase (α-1,2-fucosyltransferase), an enzyme that competes with galactosyltransferase. Experiments have shown this reduces α-Gal expression by 70%. Expression of human complement regulators (CD55, CD46, and CD59) to inhibit the complement cascade. Plasmapheresis on humans to remove anti-galactose antibodies (directed against the products of 1,3-galactosyltransferase) reduces the risk of activation of effector cells such as CTLs (CD8 T cells), complement pathway activation and delayed-type hypersensitivity (DTH). Acute vascular rejection Also known as delayed xenograft rejection, this type of rejection occurs in discordant xenografts within 2 to 3 days, if hyperacute rejection is prevented. The process is much more complex than hyperacute rejection and is currently not completely understood. Acute vascular rejection requires de novo protein synthesis and is driven by interactions between the graft endothelial cells and host antibodies, macrophages, and platelets. The response is characterized by an inflammatory infiltrate of mostly macrophages and natural killer cells (with small numbers of T cells), intravascular thrombosis, and fibrinoid necrosis of vessel walls. Binding of the previously mentioned XNAs to the donor endothelium leads to the activation of host macrophages as well as the endothelium itself. The endothelium activation is considered type II since gene induction and protein synthesis are involved. The binding of XNAs ultimately leads to the development of a procoagulant state, the secretion of inflammatory cytokines and chemokines, as well as expression of leukocyte adhesion molecules such as E-selectin, intercellular adhesion molecule-1 (ICAM-1), and vascular cell adhesion molecule-1 (VCAM-1). This response is further perpetuated because, normally, binding between regulatory proteins and their ligands aids in the control of coagulation and inflammatory responses. However, due to molecular incompatibilities between the molecules of the donor species and recipient (such as porcine major histocompatibility complex molecules and human natural killer cells), this may not occur.
Overcoming acute vascular rejection Due to its complexity, the use of immunosuppressive drugs along with a wide array of approaches are necessary to prevent acute vascular rejection, and include administering a synthetic thrombin inhibitor to modulate thrombogenesis, depletion of anti-galactose antibodies (XNAs) by techniques such as immunoadsorption, to prevent endothelial cell activation, and inhibiting activation of macrophages (stimulated by CD4+ T cells) and NK cells (stimulated by the release of Il-2). Thus, the role of MHC molecules and T cell responses in activation would have to be reassessed for each species combo. Accommodation If hyperacute and acute vascular rejection are avoided accommodation is possible, which is the survival of the xenograft despite the presence of circulating XNAs. The graft is given a break from humoral rejection when the complement cascade is interrupted, circulating antibodies are removed, or their function is changed, or there is a change in the expression of surface antigens on the graft. This allows the xenograft to up-regulate and express protective genes, which aid in resistance to injury, such as heme oxygenase-1 (an enzyme that catalyzes the degradation of heme). Cellular rejection Rejection of the xenograft in hyperacute and acute vascular rejection is due to the response of the humoral immune system, since the response is elicited by the XNAs. Cellular rejection is based on cellular immunity, and is mediated by natural killer cells that accumulate in and damage the xenograft and T-lymphocytes which are activated by MHC molecules through both direct and indirect xenorecognition. In direct xenorecognition, antigen presenting cells from the xenograft present peptides to recipient CD4+ T cells via xenogeneic MHC class II molecules, resulting in the production of interleukin 2 (IL-2). Indirect xenorecognition involves the presentation of antigens from the xenograft by recipient antigen presenting cells to CD4+ T cells. Antigens of phagocytosed graft cells can also be presented by the host's class I MHC molecules to CD8+ T cells. The strength of cellular rejection in xenografts remains uncertain, however, it is expected to be stronger than in allografts due to differences in peptides among different animals. This leads to more antigens potentially recognized as foreign, thus eliciting a greater indirect xenogenic response. Overcoming cellular rejection A proposed strategy to avoid cellular rejection is to induce donor non-responsiveness using hematopoietic chimerism. Donor stem cells are introduced into the bone marrow of the recipient, where they coexist with the recipient's stem cells. The bone marrow stem cells give rise to cells of all hematopoietic lineages, through the process of hematopoiesis. Lymphoid progenitor cells are created by this process and move to the thymus where negative selection eliminates T cells found to be reactive to self. The existence of donor stem cells in the recipient's bone marrow causes donor reactive T cells to be considered self-reactive and undergo apoptosis. Chronic rejection Chronic rejection is slow and progressive, and usually occurs in transplants that survive the initial rejection phases. Scientists are still unclear how chronic rejection exactly works, research in this area is difficult since xenografts rarely survive past the initial acute rejection phases. Nonetheless, it is known that XNAs and the complement system are not primarily involved. 
Fibrosis in the xenograft occurs as a result of immune reactions, cytokines (which stimulate fibroblasts), or healing (following cellular necrosis in acute rejection). Perhaps the major cause of chronic rejection is arteriosclerosis. Lymphocytes, which were previously activated by antigens in the vessel wall of the graft, activate macrophages to secrete smooth muscle growth factors. This results in a build up of smooth muscle cells on the vessel walls, causing the hardening and narrowing of vessels within the graft. Chronic rejection leads to pathologic changes of the organ, and is why transplants must be replaced after so many years. It is also anticipated that chronic rejection will be more aggressive in xenotransplants as opposed to allotransplants. Dysregulated coagulation Successful efforts have been made to create knockout mice without α1,3GT; the resulting reduction in the highly immunogenic αGal epitope has resulted in the reduction of the occurrence of hyperacute rejection, but has not eliminated other barriers to xenotransplantation such as dysregulated coagulation, also known as coagulopathy. Different organ xenotransplants result in different responses in clotting. For example, kidney transplants result in a higher degree of coagulopathy, or impaired clotting, than cardiac transplants, whereas liver xenografts result in severe thrombocytopenia, causing recipient death within a few days due to bleeding. An alternate clotting disorder, thrombosis, may be initiated by preexisting antibodies that affect the protein C anticoagulant system. Due to this effect, porcine donors must be extensively screened before transplantation. Studies have also shown that some porcine transplant cells are able to induce human tissue factor expression, thus stimulating platelet and monocyte aggregation around the xenotransplanted organ, causing severe clotting. Additionally, spontaneous platelet accumulation may be caused by contact with pig von Willebrand factor. Just as the α1,3G epitope is a major problem in xenotransplantation, so too is dysregulated coagulation a cause of concern. Transgenic pigs that can control for variable coagulant activity based on the specific organ transplanted would make xenotransplantation a more readily available solution for the 70,000 patients per year who do not receive a human donation of the organ or tissue they need. Physiology Extensive research is required to determine whether animal organs can replace the physiological functions of human organs. Many issues include: size – with pigs for example, organs are taken from young pigs to be of suitable size for donation, and these may still be able to grow afterwards longevity – the lifespan of most pigs is roughly 15 years, currently it is unknown how xenotransplanted organs age hormone and protein differences – some proteins will be molecularly incompatible, which could cause malfunction of important regulatory processes. These differences also make the prospect of hepatic xenotransplantation less promising, since the liver plays an important role in the production of so many proteins environment – for example, pig hearts work in a different anatomical site and under different hydrostatic pressure than in humans temperature – the body temperature of pigs is 39 °C (2 °C above the average human body temperature). Implications of this difference, if any, on the activity of important enzymes are currently unknown. 
Xenozoonosis Xenozoonosis, also known as zoonosis or xenosis, is the transmission of infectious agents between species via xenograft. Animal to human infection is normally rare, but has occurred in the past. An example of such is the avian influenza, when an influenza A virus was passed from birds to humans. Xenotransplantation may increase the chance of disease transmission for 3 reasons: (1) implantation breaches the physical barrier that normally helps to prevent disease transmission, (2) the recipient of the transplant will be severely immunosuppressed, and (3) human complement regulators (CD46, CD55, and CD59) expressed in transgenic pigs have been shown to serve as virus receptors, and may also help to protect viruses from attack by the complement system. Examples of viruses carried by pigs include porcine herpesvirus, rotavirus, parvovirus, and circovirus. Porcine herpesviruses and rotaviruses can be eliminated from the donor pool by screening, however others (such as parvovirus and circovirus) may contaminate food and footwear then re-infect the herd. Thus, pigs to be used as organ donors must be housed under strict regulations and screened regularly for microbes and pathogens. Unknown viruses, as well as those not harmful in the animal, may also pose risks. Of particular concern are PERVS (porcine endogenous retroviruses), vertically transmitted microbes that embed in swine genomes. The risks with xenosis are twofold, as not only could the individual become infected, but a novel infection could initiate an epidemic in the human population. Because of this risk, the FDA has suggested any recipients of xenotransplants shall be closely monitored for the remainder of their life, and quarantined if they show signs of xenosis. Baboons and pigs carry myriad transmittable agents that are harmless in their natural host, but extremely toxic and deadly in humans. HIV is an example of a disease believed to have jumped from monkeys to humans. Researchers also do not know if an outbreak of infectious diseases could occur and if they could contain the outbreak even though they have measures for control. Another obstacle facing xenotransplants is that of the body's rejection of foreign objects by its immune system. These antigens (foreign objects) are often treated with powerful immunosuppressive drugs that could, in turn, make the patient vulnerable to other infections and actually aid the disease. This is the reason the organs would have to be altered to fit the patients' DNA (histocompatibility). In 2005, the Australian National Health and Medical Research Council (NHMRC) declared an eighteen-year moratorium on all animal-to-human transplantation, concluding that the risks of transmission of animal viruses to patients and the wider community had not been resolved. This was repealed in 2009 after an NHMRC review stated "... the risks, if appropriately regulated, are minimal and acceptable given the potential benefits.", citing international developments on the management and regulation of xenotransplantation by the World Health Organisation and the European Medicines Agency. Porcine endogenous retroviruses Endogenous retroviruses are remnants of ancient viral infections, found in the genomes of most, if not all, mammalian species. Integrated into the chromosomal DNA, they are vertically transferred through inheritance. 
Due to the many deletions and mutations they accumulate over time, they usually are not infectious in the host species; however, the virus may become infectious in another species. PERVs were originally discovered as retrovirus particles released from cultured porcine kidney cells. Most breeds of swine harbor approximately 50 PERV genomes in their DNA. Although it is likely that most of these are defective, some may be able to produce infectious viruses, so every proviral genome must be sequenced to identify which ones pose a threat. In addition, through complementation and genetic recombination, two defective PERV genomes could give rise to an infectious virus. There are three subgroups of infectious PERVs (PERV-A, PERV-B, and PERV-C). Experiments have shown that PERV-A and PERV-B can infect human cells in culture. To date no experimental xenotransplantations have demonstrated PERV transmission, yet this does not mean PERV infections in humans are impossible. Pig cells have been engineered to inactivate all 62 PERVs in the genome using CRISPR-Cas9 genome editing technology, which eliminated infection of human cells by the pig cells in culture. Ethics Xenografts have been a controversial procedure since they were first attempted. Many, including animal rights groups, strongly oppose killing animals to harvest their organs for human use. In the 1960s, many organs came from chimpanzees and were transplanted into people who were deathly ill and who, in turn, did not live much longer afterwards. Modern scientific supporters of xenotransplantation argue that the potential benefits to society outweigh the risks, making pursuing xenotransplantation the moral choice. None of the major religions object to the use of genetically modified pig organs for life-saving transplantation. Religions such as Buddhism and Jainism, however, have long espoused non-violence towards all living creatures. In general, the use of pig and cow tissue in humans has been met with little resistance, save some religious beliefs and a few philosophical objections. Doctrines governing experimentation without consent are now followed, which was not the case in the past; this may lead to new religious guidelines for furthering medical research along pronounced ecumenical lines. The "Common Rule" is the United States bio-ethics mandate. History of xenotransplantation in ethics At the beginning of the 20th century, when studies in xenotransplantation were just beginning, few questioned the morality of it, turning to animals as a "natural" alternative to allografts. While satirical plays mocked xenografters such as Serge Voronoff, and some images appeared showing emotionally distraught primates – whom Voronoff had deprived of their testicles – no serious attempts were yet made to question the science based on animal rights concerns. Xenotransplantation was not taken seriously, at least in France, during the first half of the 20th century. With the Baby Fae incident of 1984 as the impetus, animal rights activists began to protest, gathering media attention and showing that some people felt that it was unethical and a violation of the animal's own rights to use its organs to preserve a sick human's life. In their view, treating animals as mere tools to be slaughtered on demand by human will would lead to a world they would not prefer. Supporters of the transplants pushed back, claiming that saving a human life justifies the sacrifice of an animal one. Most animal rights activists found the use of primate organs more reprehensible than those of, for example, pigs.
As Peter Singer et al. have expressed, many primates exhibit greater social structure, communication skills, and affection than mentally deficient humans and human infants. Despite this, it is considerably unlikely that animal suffering will provide sufficient impetus for regulators to prevent xenotransplantation. Informed consent of patient Autonomy and informed consent are important when considering the future uses of xenotransplantation. A patient undergoing xenotransplantation should be fully aware of the procedure and should have no outside force influencing their choice. The patient should understand the risks and benefits of such a transplantation. A public health dimension can also be considered. The Ethics Committee of the International Xenotransplantation Association pointed out in 2003 that one major ethical issue is the societal response to such a procedure. The application of the four bioethics principles is standardized in the moral conduct of laboratories. The four principles emphasize informed consent, the Hippocratic Oath to do no harm, using skills to help others, and protecting the right to quality care. Though xenotransplantation may have future medical benefits, it also has the serious risk of introducing and spreading the infectious diseases, into the human population. Guidelines have been drafted by governments with the purpose of forming the foundation of infectious disease surveillance. United Kingdom guidelines state that patients have to agree to "the periodic provision of bodily samples that would then be archived for epidemiological purposes", "post-mortem analysis in case of death, the storage of samples post-mortem, and the disclosure of this agreement to their family", "refrain from donating blood, tissue or organs", "the use of barrier contraception when engaging in sexual intercourse", "keep both name and current address on register and to notify the relevant health authorities when moving abroad" and "divulge confidential information, including one's status as a xenotransplantation recipient to researchers, all health care professionals from whom one seeks professional services, and close contacts such as current and future sexual partners." The patient must abide by these rules throughout their lifetime or until the government determines that there is no need for public health safeguards. Xenotransplantation guidelines in the United States The Food and Drug Administration (FDA) has also stated that if a transplantation takes place, the recipient must undergo monitoring for the rest of their lifetime and waive their right to withdraw. The reason for requiring lifelong monitoring is due to the risk of acute infections that may occur. The FDA suggests that a passive screening program should be implemented and should extend for the life of the recipient. See also Bartley P. Griffith Allograft Isograft Medical grafting Xenopregnancy Xenotransfusion References External links PBS Special on Pig to Human Transplants Campaign for Responsible Transplantation The Australian National Health and Medical Research Council's 2005 statement on xenotransplantation Organ transplantation Animal testing Life extension
Xenotransplantation
Chemistry
7,887
18,711,086
https://en.wikipedia.org/wiki/Moniliformin
Moniliformin is the organic compound with the formula (M+ = K+ or Na+). Both the sodium and potassium salts are generally hydrated, e.g. . In terms of its structure, it is the alkali metal salt of the conjugate base of 3-hydroxy-1,2-cyclobutenedione (the enolate of 1,2,3-cyclobutanetrione), a planar molecule related to squaric acid. It is an unusual mycotoxin, a feed contaminant that is lethal to fowl, especially ducklings. Moniliformin is formed in many cereals by a number of Fusarium species, including Fusarium moniliforme, Fusarium avenaceum, Fusarium subglutinans, Fusarium proliferatum, Fusarium fujikuroi and others. It is mainly cardiotoxic and causes ventricular hypertrophy. Biochemistry Moniliformin competitively inhibits the activity of the pyruvate dehydrogenase complex of cellular respiration, which prevents pyruvic acid, the product of glycolysis, from being converted to acetyl-CoA. Ultrastructural examination of the right ventricular wall of 9-month-old female mink (Mustela vison) fed acute doses of moniliformin (2.2 and 2.8 mg/kg diet) and sub-acute doses (1.5 to 3.2 mg/kg diet) reveals significant damage to myofibers, mitochondria, Z and M lines and the sarcoplasmic reticulum, as well as increased extracellular collagen deposition. Mink are considered the mammals most sensitive to the toxicity of moniliformin. Chemically speaking, it is the sodium salt of deoxysquaric acid (also known as semisquaric acid). Physicochemical information Moniliformin is soluble in water and polar solvents, such as methanol. λmax: 226 nm and 259 nm in methanol. See also Mycotoxin Squaric acid Sources and references Mycotoxins Organic sodium salts Cyclobutenes Respiratory toxins Enols Ketones Organic acids
Moniliformin
Chemistry
481
1,124,032
https://en.wikipedia.org/wiki/IBM%20LAN%20Server
IBM LAN Server is a discontinued network operating system introduced by International Business Machines (IBM) in 1988. LAN Server started as a close cousin of Microsoft's LAN Manager and first shipped in early 1988. It was originally designed to run on top of Operating System/2 (OS/2) Extended Edition. The network client was called IBM LAN Requester and was included with OS/2 EE 1.1 by default. (Eventually IBM shipped other clients and supported yet more. Examples include the IBM OS/2 File/Print Client, IBM OS/2 Peer, and client software for Microsoft Windows.) Here the short term LAN Server refers to the IBM OS/2 LAN Server product. There were also LAN Server products for other operating systems, notably AIX—now called Fast Connect—and OS/400. Version history Predecessors included IBM PC LAN Program (PCLP). Variants included LAN Server Ultimedia (optimized for network delivery of multimedia files) and LAN On-Demand. Add-ons included Directory and Security Server, Print Services Facility/2 (later known as Advanced Printing), Novell NetWare for OS/2, and LAN Server for Macintosh. Innovations LAN Server pioneered certain file and print sharing concepts such as domains (and domain controllers), networked COM ports, domain aliases, and automatic printer driver selection and installation. See also LAN messenger Server Message Block (SMB) References Further reading Computer-related introductions in 1988 LAN Server Network operating systems OS/2 Servers (computing)
IBM LAN Server
Technology,Engineering
307
2,557,627
https://en.wikipedia.org/wiki/Curved%20space
Curved space often refers to a spatial geometry which is not "flat", where a flat space has zero curvature, as described by Euclidean geometry. Curved spaces can generally be described by Riemannian geometry, though some simple cases can be described in other ways. Curved spaces play an essential role in general relativity, where gravity is often visualized as curved spacetime. The Friedmann–Lemaître–Robertson–Walker metric is a curved metric which forms the current foundation for the description of the expansion of the universe and the shape of the universe. Photons have no mass, yet their paths are bent by gravity, so the explanation cannot rest on photonic mass. Instead, large bodies curve space, and light travelling through that curved space appears to be subject to gravity; strictly speaking it is not, but it is subject to the curvature of space. Simple two-dimensional example A very familiar example of a curved space is the surface of a sphere. While to our familiar outlook the sphere looks three-dimensional, if an object is constrained to lie on the surface, it only has two dimensions that it can move in. The surface of a sphere can be completely described by two dimensions, since no matter how rough the surface may appear to be, it is still only a surface, which is the two-dimensional outside border of a volume. Even the surface of the Earth, which is fractal in complexity, is still only a two-dimensional boundary along the outside of a volume. Embedding One of the defining characteristics of a curved space is its departure from the Pythagorean theorem. In a curved space $ds^2 \neq dx^2 + dy^2 + dz^2$. The Pythagorean relationship can often be restored by describing the space with an extra dimension. Suppose we have a three-dimensional non-Euclidean space with coordinates $(x', y', z')$. Because it is not flat, $ds^2 \neq dx'^2 + dy'^2 + dz'^2$. But if we now describe the three-dimensional space with four dimensions ($x, y, z, w$) we can choose coordinates such that $ds^2 = dx^2 + dy^2 + dz^2 + dw^2$. Note that the coordinate $x$ is not the same as the coordinate $x'$. For the choice of the 4D coordinates to be valid descriptors of the original 3D space they must have the same number of degrees of freedom. Since four coordinates have four degrees of freedom, a constraint must be placed on them. We can choose a constraint such that the Pythagorean theorem holds in the new 4D space. That is $x^2 + y^2 + z^2 + w^2 = \text{constant}$. The constant can be positive or negative. For convenience we can choose the constant to be $R^2$, where now $R$ is positive (a negative constant corresponds to an imaginary $R$). We can now use this constraint to eliminate the artificial fourth coordinate $w$. The differential of the constraining equation is $x\,dx + y\,dy + z\,dz + w\,dw = 0$, leading to $dw = -(x\,dx + y\,dy + z\,dz)/w$. Plugging into the original equation gives $ds^2 = dx^2 + dy^2 + dz^2 + \dfrac{(x\,dx + y\,dy + z\,dz)^2}{R^2 - x^2 - y^2 - z^2}$. This form is usually not particularly appealing and so a coordinate transform is often applied: $x = r\sin\theta\cos\varphi$, $y = r\sin\theta\sin\varphi$, $z = r\cos\theta$. With this coordinate transformation $ds^2 = \dfrac{dr^2}{1 - \frac{r^2}{R^2}} + r^2\,d\theta^2 + r^2\sin^2\theta\,d\varphi^2$. Without embedding The geometry of an n-dimensional space can also be described with Riemannian geometry. An isotropic and homogeneous space can be described by the metric $ds^2 = \dfrac{dr^2}{1 - k(r)\,r^2} + r^2\,d\theta^2 + r^2\sin^2\theta\,d\varphi^2$. This reduces to Euclidean space when $k = 0$. But a space can be said to be "flat" when the Weyl tensor has all zero components. In three dimensions this condition is met when the Ricci tensor ($R_{ij}$) is proportional to the metric times the Ricci scalar ($R$, not to be confused with the $R$ of the previous section). That is $R_{ij} = \tfrac{1}{3}R\,g_{ij}$. Calculation of these components from the metric gives that $k(r) = k$, where $k$ is a constant. This gives the metric $ds^2 = \dfrac{dr^2}{1 - kr^2} + r^2\,d\theta^2 + r^2\sin^2\theta\,d\varphi^2$, where $k$ can be zero, positive, or negative and is not limited to ±1. Open, flat, closed An isotropic and homogeneous space can be described by the metric $ds^2 = \dfrac{dr^2}{1 - k\frac{r^2}{R^2}} + r^2\,d\theta^2 + r^2\sin^2\theta\,d\varphi^2$.
In the limit that the constant of curvature ($R$) becomes infinitely large, a flat, Euclidean space is returned. It is essentially the same as setting $k$ to zero. If $k$ is not zero the space is not Euclidean. When $k = +1$ the space is said to be closed or elliptic. When $k = -1$ the space is said to be open or hyperbolic. Triangles which lie on the surface of an open space will have a sum of angles which is less than 180°. Triangles which lie on the surface of a closed space will have a sum of angles which is greater than 180°. The volume of a closed space, however, is not infinite. See also CAT(k) space Non-positive curvature References Further reading The Feynman Lectures on Physics Vol. II Ch. 42: Curved Space External links Curved Spaces, simulator for multi-connected universes developed by Jeffrey Weeks Riemannian geometry Physical cosmology Differential geometry General relativity
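As a concrete illustration of the embedding construction above, the same steps can be carried out one dimension lower, for the surface of the sphere from the two-dimensional example (a short worked sketch using the same notation):
$x_1^2 + x_2^2 + x_3^2 = R^2, \qquad ds^2 = dx_1^2 + dx_2^2 + dx_3^2,$
$dx_3 = -\frac{x_1\,dx_1 + x_2\,dx_2}{x_3} \;\Rightarrow\; ds^2 = dx_1^2 + dx_2^2 + \frac{(x_1\,dx_1 + x_2\,dx_2)^2}{R^2 - x_1^2 - x_2^2}.$
Substituting $x_1 = r\cos\phi$, $x_2 = r\sin\phi$ gives
$ds^2 = \frac{dr^2}{1 - r^2/R^2} + r^2\,d\phi^2,$
the two-dimensional analogue of the three-dimensional metric derived above: circles of coordinate radius $r$ have circumference $2\pi r$, but their radius measured along the curved surface is larger than $r$.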
Curved space
Physics,Astronomy
903
2,410,182
https://en.wikipedia.org/wiki/Hydrolyzed%20vegetable%20protein
Hydrolyzed vegetable protein (HVP) products are foodstuffs obtained by the hydrolysis of protein, and have a meaty, savory taste similar to broth (bouillon). Regarding the production process, a distinction can be made between acid-hydrolyzed vegetable protein (aHVP), enzymatically produced HVP, and other seasonings, e.g., fermented soy sauce. Hydrolyzed vegetable protein products are particularly used to round off the taste of soups, sauces, meat products, snacks, and other dishes, as well as for the production of ready-to-cook soups and bouillons. History Food technologists have long known that protein hydrolysis produces a meat bouillon-like odor and taste. Hydrolysates have been a part of the human diet for centuries, notably in the form of fermented soy sauce, or Shoyu. Shoyu, traditionally made from wheat and soy protein, has been produced in Japan for over 1,500 years, following its introduction from mainland China. The origins of producing these materials through the acid hydrolysis of protein (aHVP) can be traced back to the scarcity and economic challenges of obtaining meat extracts during the Napoleonic wars. In 1831, Berzelius obtained products having a meat bouillon taste when hydrolysing proteins with hydrochloric acid. Julius Maggi produced acid-catalyzed hydrolyzed vegetable protein industrially for the first time in 1886. In 1906, Fischer found that amino acids contributed to the specific taste. In 1954, D. Phillips found that the bouillon odor required the presence of proteins containing threonine. Another important substance that gives a characteristic taste is glutamic acid. Manufacture Almost all products rich in protein are suitable for the production of HVP. Today, it is made mainly from protein resources of vegetable origin, such as defatted oil seeds (soybean meal, grapeseed meal) and protein from maize (Corn gluten meal), wheat (gluten), pea, and rice. The process and the feedstock determines the organoleptic properties of the end product. Proteins consist of chains of amino acids joined through amide bonds. When subjected to hydrolysis (hydrolyzed), the protein is broken down into its component amino acids. In aHVP, hydrochloric acid is used for hydrolysis. The remaining acid is then neutralized by mixing with an alkali such as sodium hydroxide, which leaves behind table salt, which comprises up to 20% of the final product (acid-hydrolyzed vegetable protein, aHVP). In enzymatic HVP (eHVP), proteases are used to break down the proteins under a more neutral pH and lower temperatures. The amount of salt is greatly reduced. Because of the different processing conditions, the two types of HVP have different sensory profiles. aHVP is usually dark-brown in color and has a strong savory flavor, whereas eHVP usually is lighter in color and has a mild savory flavor. Acid hydrolysis Acid hydrolysates are produced from various edible protein sources, with soy, corn, wheat, and casein being the most common. For the production of aHVP, the proteins are hydrolyzed by cooking with a diluted (15–20%) hydrochloric acid, at a temperature between 90 and 120 °C for up to 8 hours. After cooling, the hydrolysate is neutralized with either sodium carbonate or sodium hydroxide to a pH of 5 to 6. During hydrolysis, extraneous polymeric material known as humin, which forms from the interaction of carbohydrate and protein fragments, is generated and subsequently removed by filtration and then further refined. 
The source of the raw material, concentration of the acid, the temperature of the reaction, the time of the reaction, and other factors can all affect the organoleptic properties of the final product. Activated carbon treatment can be employed to remove both flavor and color components, to the required specification. Following a final filtration, the aHVP may, depending upon the application, be fortified with additional flavoring components. Thereafter, the product can be stored as a liquid at 30–40% dry matter, or alternatively it may be spray dried or vacuum dried and further used as a food ingredient. One hundred pounds (45kg) of material containing 60% protein will yield 100 pounds of aHVP, which contains approximately of salt. This salt gain occurs during the neutralization step. Enzymatic hydrolysis For the production process of enzymatic HVP, enzymes are used to break down the proteins. To break down the protein to amino acids, proteases are added to the mixture of defatted protein and water. Due to the sensitivity of enzymes to a specific pH, either an acid or a base is added to match the optimum pH. Depending on the activity of the enzymes, up to 24 hours are needed to break down the proteins. The mixture is heated to inactivate the enzymes and then filtered to remove the insoluble carbohydrates (humin). Since no salt is formed during the production process, manufacturers may add salt to eHVP preparations to extend shelf life or to provide a product similar to conventional aHVP. A commonly used protease mixture is "Flavourzyme", extracted from Aspergillus oryzae. Composition Liquid aHVP typically contains 55% water, 16% salt, 25% organic substances (thereof 20% protein (amino acids) analyzed as about 3% total nitrogen and 2% amino nitrogen). Many amino acids have either a bitter or sweet taste. In many commercial processes, nonpolar amino acids such as L-leucine and L-isoleucine are often removed to create hydrolysates with a more mellow and less bitter character. D-tryptophan, D-histidine, D-phenylalanine, D-tyrosine, D-leucine, L-alanine, and glycine are known to be sweet, while bitterness is associated with L-tryptophan, L-phenylalanine, L-tyrosine, and L-leucine. Tyrosine is an amino acid susceptible to halogenation during hydrolysis with HCl. Lysine is stable under standard acid hydrolysis, but during heat treatment, the side-chain amino group can react with other compounds, such as reducing sugars, producing Maillard products. The organoleptic properties of HVP is determined not only by amino acid composition, but also by the various aroma-bearing substances other than the amino acids created during the production of both aHVP and eHVP. Aromas can be formed via amino acid decomposition, Maillard reaction, sugar cyclization, and lipid oxidation. A complex mix of aromas similar to butter, meat, bone stock, wood smoke, lovage and many other substances can be produced, depending on reaction conditions (time, temperature, hydrolysis method, additional feedstock such as xylose and spices). According to the European Code of Practice for Bouillons and Consommés, hydrolyzed protein products intended for retail sale correspond to these characteristics: Specific gravity at 20°C min.: 1.22 Total nitrogen min.: 4% (on dry matter) Amino nitrogen min.: 1.3% (on dry matter) Sodium chloride max.: 50% (on dry matter) Use When foods are produced by canning, freezing, or drying, some flavor loss is almost inevitable. Manufacturers can use HVP to make up for it. 
Therefore, HVP is used in a wide variety of products, such as in the spice, meat, fish, fine-food, snack, flavor, and soup industries. Safety 3-MCPD 3-MCPD, a carcinogen in rodents and a suspected human carcinogen, is created during acid-hydrolysis as glycerol released from lipid (e.g. triglycerides) reacts with hydrochloric acid. Legal limits have been set to keep aHVP products safe for human consumption. aHVP manufacturers can reduce the amount of 3-MCPD to acceptable limits by (1) careful control of reaction time and temperature (2) timely neutralization of hydrochloric acid, optionally extending to an alkaline hydrolysis step to destroy any 3-MCPD already formed (3) replacement of hydrochloric acid with other acids such as sulfuric acid. As an allergen Whether hydrolyzed vegetable protein is an allergen or not is contentious. According to European law, wheat and soy are subject to allergen labelling in terms of Regulation (EU) 1169/2011 on food information to consumers. Since wheat and soy used for the production of HVP are not exempted from allergen labelling for formal reasons, HVP produced by using those raw materials has to be labelled with a reference to wheat or soy in the list of ingredients. Nevertheless, strong evidence indicates at least aHVP is not allergenic, since proteins are degraded to single amino acids which are not likely to trigger an allergic reaction. A 2010 study has shown that aHVP does not contain detectable traces of proteins or IgE-reactive peptides. This provides strong evidence that aHVP is very unlikely to trigger an allergic reaction to people who are intolerant or allergic to soy or wheat. Earlier peer-reviewed animal studies done in 2006 also indicate that soy-hypersensitive dogs do not react to soy hydrolysate, a proposed protein source for soy-sensitive dogs. There are reports of a cosmetic-grade aHVP, Glupearl 19S (GP19S), inducing anaphylaxis when present in soap. Unlike food aHVP, this Japanese wheat aHVP is only very mildly hydrolyzed. The unusual chemical condition makes GP19S more allergenic than pure gluten. Newer regulations for cosmetic hydrolyzed wheat protein have been developed in response, requiring an average molecular mass of less than 3500 Da – about 35 residues long. In theory, "an allergen must have at least 2 IgE-binding epitopes, and each epitope must be at least 15 amino acid residues long, to trigger a type 1 hypersensitivity reaction." Experiments also show that this degree of hydrolysis is sufficient to not trigger IgE binding from GP19S-allergic patients. Allergenicity of eHVP depends on the specific food source and the enzyme used. Alcalase is able to render chickpea and green pea completely non-immunoreactive but papain only achieves partial reduction. Alcalase is also unable to make white beans non-reactive due to the antinutritional factors preventing complete digestion. Alcalase, but not "Flavourzyme" (a commercial Aspergillus oryzae protease blend for eHVP production), is able to make roasted peanut non-reactive. See also Hydrolyzed protein MSG References Chemical reactions Food ingredients Umami enhancers
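The Code of Practice limits quoted in the Composition section above are expressed on a dry-matter basis, while the typical figures for liquid aHVP are given for the product as a whole. A small illustrative calculation (a sketch only, assuming the salt and nitrogen percentages refer to the liquid product) makes the conversion explicit:

# Typical liquid aHVP composition quoted above (fractions of the liquid product).
water = 0.55
salt = 0.16
total_nitrogen = 0.03
amino_nitrogen = 0.02

dry_matter = 1.0 - water  # 45% of the liquid product is dry matter

# Express each component on a dry-matter basis and compare with the
# European Code of Practice limits quoted in the Composition section.
print(f"sodium chloride on dry matter: {salt / dry_matter:.1%} (max. 50%)")
print(f"total nitrogen on dry matter:  {total_nitrogen / dry_matter:.1%} (min. 4%)")
print(f"amino nitrogen on dry matter:  {amino_nitrogen / dry_matter:.1%} (min. 1.3%)")
# Output: roughly 35.6%, 6.7% and 4.4% respectively, all within the quoted limits.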
Hydrolyzed vegetable protein
Chemistry,Technology
2,356
2,026,812
https://en.wikipedia.org/wiki/EISCAT
EISCAT (European Incoherent Scatter Scientific Association) operates three incoherent scatter radar systems in Northern Scandinavia and Svalbard. The facilities are used to study the interaction between the Sun and the Earth as revealed by disturbances in the ionosphere and magnetosphere. The EISCAT Scientific Association exists to provide scientists with access to incoherent scatter radar facilities of the highest technical standard. EISCAT 3D The construction of EISCAT's new generation of incoherent scatter radars, EISCAT 3D, started in November 2022. The first stage of the new system will consist of three radar sites functioning together, just as in the old mainland system. Later, a transmitter upgrade and more sites will be added to the system. Instead of the parabolic dishes of the old system, EISCAT 3D is a multistatic radar composed of three phased-array antenna fields: many small antennas working together as one. Each field will have between 5 000 and 10 000 crossed dipole antennas mounted on top of a ground plane 70 meters in diameter. The core site of EISCAT 3D is located just outside Skibotn, Norway. The facility will have 109 hexagonal antenna units as its main antenna, and 10 antenna units spread out around the main site. On top of the antenna units the dipole antennas are mounted. The Skibotn facility will have 10 000 of these small antennas and will act as both transmitter and receiver of the EISCAT 3D system. Two receiver sites are located in Karesuvanto, Finland and Kaiseniemi, Sweden. These facilities will consist of 54 and 55 antenna units, with approximately 5 000 dipole antennas each. Space debris tracking, tracking of meteorites, research on GPS and radio traffic, space weather, aurora research, climate research and near-Earth space are some of the areas where EISCAT 3D will be able to offer much more flexible and detailed research data. The use of EISCAT 3D is solely civil. The new system should be up and running in 2023/2024, which also means that the old mainland system will be dismantled. The mainland system The mainland system consisted of three parabolic dish research radar antennas, designed as a tristatic radar, that is, three facilities that work together. The radar antennas are located in Tromsø, Norway; Sodankylä, Finland; and Kiruna, Sweden, north of the Scandinavian Arctic Circle. The core of the tristatic system is located at Ramfjordmoen, outside Tromsø, Norway, with a 32 meter mechanically fully steerable parabolic dish used for transmission and reception in the UHF band, operating in the 930 MHz band with a transmitter peak power of 2.0 MW, 12.5% duty cycle and 1 μs – 10 ms pulse length with frequency and phase modulation capability. The same site also hosts the VHF radar, which operates in the 224 MHz band with a transmitter peak power of 3 MW, 12.5% duty cycle and 1 μs – 2 ms pulse length with frequency and phase modulation capability. Its antenna, used for transmission and reception, is a parabolic cylinder antenna consisting of 4 quarters, constituting a total aperture of 120 m x 40 m. This antenna is mechanically steerable in the meridional plane (-30° to 60° zenith angle), and electronically steerable in the longitudinal direction (±12° off-boresight). The receiving antennas in Sodankylä, Finland and Kiruna, Sweden, are fully steerable 32 meter parabolic dish antennas. The receivers include multiple channels for the UHF and VHF radars. The data are pre-processed in signal processors, displayed and analysed in real time, and can be recorded to mass storage media. 
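The transmitter figures above are quoted as a peak power together with a duty cycle; the corresponding time-averaged transmitted power is simply their product. A small illustrative script (the numbers are those quoted above; the script itself is only a sketch, not EISCAT documentation):

# Average transmitted power = peak power x duty cycle,
# using the mainland radar figures quoted above.
radars = {
    "Tromsø UHF (930 MHz)": (2.0e6, 0.125),  # peak power [W], duty cycle
    "Tromsø VHF (224 MHz)": (3.0e6, 0.125),
}

for name, (peak_w, duty_cycle) in radars.items():
    avg_kw = peak_w * duty_cycle / 1e3
    print(f"{name}: average power = {avg_kw:.0f} kW")
# Tromsø UHF: 250 kW; Tromsø VHF: 375 kW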
EISCAT Svalbard Radar The location in Longyearbyen, Svalbard, high above the Arctic Circle and near the North Pole, offers unique capabilities in auroral research. Svalbard's polar night, which lasts from November until February, makes the season for observing the northern lights long. The EISCAT Svalbard Radar (ESR) also operates in the UHF band, at 500 MHz, with a transmitter peak power of 1000 kW, 25% duty cycle and 1 μs – 2 ms pulse length with frequency and phase modulation capability. There are two antennas, a 32 meter mechanically fully steerable parabolic dish antenna, and a 42 meter fixed parabolic antenna aligned along the direction of the local geomagnetic field. The whole radar system is controlled by computers, and the sites in Tromsø, Kiruna, Sodankylä, and Longyearbyen are interconnected via the Internet. Tromsø Ionospheric Modification facility An ionospheric heating facility, Heating, is also located in Ramfjordmoen outside Tromsø, Norway. It consists of 12 transmitters of 100 kW CW power, which can be modulated, and three antenna arrays covering the frequency range 3.85 MHz to 8 MHz. History EISCAT was founded in December 1975 as an association of research councils in six member countries, but the plans to establish a research facility focusing on incoherent scatter technology in the northern lights zone started as early as 1969. Many meetings with interested researchers were held in the early 1970s, but it was not until Professor Sir Granville Beynon organized a meeting in 1973, at which a board and a chairman were appointed, that the work really began. In 1974, the Council presented a report on how the organisation, operations and implementation of EISCAT's UHF system could take place, and at the end of 1975 the first six member states agreed to start the work towards the construction of EISCAT. The member countries are now Sweden, Norway, Finland, Japan, China and the United Kingdom. The membership has changed somewhat: Germany is no longer a full member, France was a member from the start of the organization in 1975 until 2005, while Japan and China were added later (1996 and 2007 respectively). EISCAT is governed by the EISCAT Council, which consists of representatives from research institutions in the various member countries. Two committees, the Administrative and Financial Committee (AFC) and the Scientific Advisory Committee (SAC), assist the Council in its work. References External links https://eiscat.se/about/ https://eiscat.se/eiscat3d-information/ https://eiscat.se/eiscat3d-information/eiscat_3d-faq/ University Courses on Svalbard (UNIS) Arctic research Ionosphere Pan-European scientific societies Longyearbyen Kiruna Space Situational Awareness Programme Radar networks Sodankylä Tromsø
EISCAT
Environmental_science
1,385
51,499,423
https://en.wikipedia.org/wiki/Grunt%20%28software%29
Grunt is a JavaScript task runner, a tool used to automatically perform frequent tasks such as minification, compilation, unit testing, and linting. It uses a command-line interface to run custom tasks defined in a file (known as a Gruntfile). Grunt was created by Ben Alman and is written in Node.js. It is distributed via npm. As of October 2022, there were more than 6,000 plugins available in the Grunt ecosystem. Companies and projects that use Grunt include Adobe Systems, jQuery, Twitter, Mozilla, Bootstrap, Cloudant, Opera, WordPress, Walmart, and Microsoft. Overview Grunt was originally created by Ben Alman in 2012 as an efficient alternative to simplify writing and maintaining a suite of JavaScript build process tasks in one huge file. It was designed as a task-based command line build tool for JavaScript projects. Grunt is primarily used to automate tasks that need to be performed routinely. There are thousands of plugins that can be installed and used directly to accomplish some commonly used tasks. One of Grunt's most desirable features is that it is highly customizable—i.e., it allows developers to add, extend, and modify custom tasks to fit their personal needs; each task has a set of configuration options that the user can set. Moreover, Grunt offers the ability to define custom tasks, which can combine multiple existing tasks into a single task or add entirely new functionality. Basic concepts Command-line interface Grunt's command-line interface (CLI) can be installed globally through npm. Executing the grunt command will load and run the version of Grunt locally installed in the current directory. Hence, we can maintain different versions of Grunt in different folders and execute each one as we wish. Files To use Grunt in a project, two specific files need to be created in the root directory, namely package.json and a Gruntfile. package.json - contains the metadata for the project including name, version, description, authors, licenses and its dependencies (Grunt plugins required by the project). All the dependencies are listed either in the dependencies or the devDependencies section. Gruntfile - a valid JavaScript or CoffeeScript file named "Gruntfile.js" or "Gruntfile.coffee" that contains code to configure tasks, load existing plugins and/or create custom tasks. Tasks Tasks are the modules that perform a specified job. They are defined in the Gruntfile. Developers can load predefined tasks from existing Grunt plugins and/or write custom code to define their own tasks depending on their requirements. Once defined, these tasks can be run from the command line by simply executing grunt <taskname>. If the <taskname> defined in the Gruntfile is 'default' then simply executing grunt will suffice. 
Example The following is an example of a Gruntfile written in JavaScript that shows how to load plugins, create custom tasks and configure them:

module.exports = function(grunt) {
  // Task configuration
  grunt.initConfig({
    taskName1: 'Task1 Configuration',
    taskName2: 'Task2 Configuration'
  });

  // Loads plugins
  grunt.loadNpmTasks('pluginName1');
  grunt.loadNpmTasks('pluginName2');

  // Custom tasks
  grunt.registerTask('customTaskName1', 'Custom task description', function(taskParameter) {
    // Custom statements
  });

  // Combining multiple tasks to a single task
  grunt.registerTask('customTaskName2', ['taskName1', 'customTaskName1']);

  // Default task - runs if task name is not specified
  grunt.registerTask('default', ['customTaskName2']);
};

In the above example, executing the grunt command will run <customTaskName2>, which has been defined above as a combination of both <taskName1> and <customTaskName1>. Plugins Plugins are reusable code that defines a set of tasks. Each plugin internally contains a tasks directory with JavaScript files that have the same syntax as a Gruntfile. Most Grunt plugins are published with the keyword gruntplugin in npm and prefixed with grunt. This helps Grunt show all the plugins in its plugin listing. The plugins officially supported by Grunt are prefixed with grunt-contrib and are also marked with a star symbol in the plugins listing. Some popular plugins include grunt-contrib-watch, grunt-contrib-clean, and grunt-contrib-uglify. Developers can even create their own Grunt plugins by using the grunt-init plugin and publish them to npm using the npm publish command. Advantages The following are some of the advantages of using Grunt: All task runners have the following properties: consistency, effectiveness, efficiency, repeatability, etc. Access to many predefined plugins that can be used to work with JavaScript tasks and on static content. Allows users to customize tasks using predefined plugins. Prefers the configuration approach to coding. Allows users to add their own plugins and publish them to npm. Comparison Ant Ant or Apache Ant is a Java-based build tool. Ant has a little over a hundred built-in tasks that are better suited to projects with a Java build structure. Writing custom code in Ant requires users to write a JAR file and reference it from XML. This would add unnecessary complexity to projects that do not require Java themselves. Ant build configurations are listed in XML rather than in JSON format. Rake Rake allows developers to define tasks in Ruby. Rake doesn't have the concept of plugins or predefined tasks, which means all the required actions must be written and then executed. This makes development costly when compared to Grunt, which has a large set of reusable plugins. Gulp Gulp.js is a JavaScript-based task runner similar to Grunt: both follow a modular architecture and are distributed via npm. Gulp tasks are defined by code rather than configuration. Gulp is generally faster than Grunt, because Grunt uses temporary files to transfer output from one task to another whereas in Gulp files are piped between tasks. See also Node.js Build automation List of build automation software Apache Maven Yeoman (computing) Modernizr JavaScript framework JavaScript library References Further reading External links Automation software JavaScript programming tools
Grunt (software)
Engineering
1,389
8,362,545
https://en.wikipedia.org/wiki/Batman%3A%20Legacy
Legacy is a crossover story arc in the Batman comic book series, which is a sequel to another Batman story arc, Contagion and also serves as a follow-up to the Knightfall story arc. The tagline is: "The stakes are higher than they've ever been as Batman and his outnumbered forces race to solve a riddle from the distant past that threatens to erase all of mankind's tomorrow". The story concerns the returning outbreak of a lethal disease in Gotham City, and Batman's attempts to combat it with his closest allies by discovering its origin in the Middle East. The disease is known as the Apocalypse Plague, the Filovirus, Ebola Gulf A, or its more popular nickname: the Clench. An unlikely alliance searches the world for a possible cure including: Batman, Robin, Oracle, Nightwing, Huntress, Azrael, and Catwoman. There, Batman faces two of his deadliest foes: Ra's al Ghul and Bane. The Gotham Knights travel throughout the world as they race to stop the League of Assassins from releasing the pure strain of the virus across the globe, and Gotham itself would be a place for the rematch between the Dark Knight and Bane. Ra's continues his search for the suitable mate for his daughter, Talia al Ghul. Meanwhile, Batman leads the chase for a cure to save the life of Tim Drake (Robin) and prevent the end of the world. The events of this lead into Batman: Cataclysm (though there was a gap of over a year between the two story arcs), which itself leads into Batman: No Man's Land. 1st Edition - 1st printing. Collects Batman (1940-2011) #533-534, Batman: Shadow of the Bat (1992-2000) #53-54, Catwoman (1993-2001 2nd Series) #35-36, Detective Comics (1937-2011 1st Series) #699-702, and Robin (1993-2009) #31-33. Written by Chuck Dixon, Doug Moench, and Alan Grant. Art by Graham Nolan, Jim Aparo, Staz Johnson, Dave Taylor, Jim Balent, and Mike Wieringo. Batman and his small cadre of allies race against the clock to stop a threat from the past that may wipe out mankind's future. Reading order Legacy reading order: Prequel: Catwoman #33-35 Prequel: Bane of the Demon #1–4 Prelude: Shadow of the Bat #53 Prelude: Batman #533 Part 1: Detective Comics #700 Part 2: Catwoman #36 Part 3: Robin #32 Part 4: Shadow of the Bat #54 Part 5: Batman #534 Part 6: Detective Comics #701 Part 7: Robin #33 Epilogue: Detective Comics #702 Epilogue: Batman: Bane (one-shot) Plot Catwoman #33 Catwoman travels to Rheelasia to steal back a microchip before it can be copied and pirated. She obtains the chip easily, but is attacked and abducted by a group led by Hellhound. Catwoman #34 "The Collector" uncovers an ancient journal describing an underground wheel. It is heavily booby-trapped and Catwoman is to be the front line going in. A former member of the Order of St. Dumas, who had translated much of the journal, accompanies them. He and Catwoman enters the labyrinth as Hellhound's men are attacked. Catwoman #35 They make it through the traps and to the wheel. Hellhound, Catwoman, and the translator are attacked by a large, unidentified man. Catwoman wakes up in a cell. Batman: Shadow of the Bat #53 Bruce tells Tim about the new mutation of the Clench. He calls Azrael for more information. Batman, Nightwing, and Robin prepared to leave for Sudan. D.A. Voder reports to Penguin on the city's affairs. Huntress continues taking down looters raiding the homes of Clench victims; Robin informs her that she will be in charge of Gotham City while they are gone. Batman brings Gordon up to speed on the situation. 
Batman #533 Batman, Nightwing, and Robin land in the desert and find the entrance point Azrael described. After taking out the guards, Batman finds what appears to be a map, and guides them through the maze. At the end of the tunnel, they are met by three shadows. Detective Comics #700 Ra's al Ghul, Talia, and Ubu (Ra's servant) stand above Batman, Nightwing, and Robin. Ra's orders them killed. Nightwing is wounded while running for cover. They discover that the ancient wheel beneath the desert has generated a plague virus, part of Ra's' plan to "cleanse" the world of 90% of humanity. His technicians finish digitally rendering the wheel; Ra's orders the entire underground facility, including the wheel, destroyed, so Ubu floods the caves. Batman is able to get himself and Robin to safety, as Nightwing faces off against Ra's. His two partners arrive soon after. Ra's escapes with Talia and Ubu, who takes off his mask to reveal himself as Bane. Catwoman #36 Catwoman breaks free from her cell and frees Umberto (the translator) and Hellhound, who makes a temporary truce with her. Outside the compound, she defeats Hellhound and ties him up. Catwoman and Umberto set out toward civilization. Batman gets word from Oracle of the three destinations that Ra's has taken; Nightwing and Robin leave for Paris and Batman heads for Edinburgh, agreeing to meet up in Gotham City later. Robin #32 Nightwing and Robin split up in Paris to cover more ground. Robin meets up with Henri Ducard and tells him what is going on before meeting back up with Nightwing. They find the spot - Nightwing goes into the sewers below the Louvre, while Robin goes into the city's various other tourist attractions. Nightwing takes out the plague spreaders below, while Robin and Ducard take care of Ra's al Ghul's agents inside. Nightwing and Robin then head back to Gotham City. Batman: Shadow of the Bat #54 Batman is able to stop the dispersion of the virus in Edinburgh. Robin informs him that they were successful in Paris, but Oracle says that Calcutta will be the next target. Batman heads to this new destination. Batman #534 Oracle contacts Batman in Calcutta and directs him to meet with a contact. While waiting he meets a young boy who offers to help Batman in his "admirable enterprise". Batman tells to him to keep his distance because there could be danger. The contact then arrives and turns out to be Lady Shiva. The duo is attacked by Ra's al Ghul's men shortly after and a brief fight breaks out. After the scuffle, Batman takes a ring from one of the assailants, which he gives to a merchant and tells him to feed the boy well. The pair locate Ra's' agents at the festival of Durga attempting to release the virus into the water supply. Batman and Lady Shiva chase down the men responsible. During the fight, one of the culprits pulls out a gun, and having followed Batman, the boy he met earlier jumps on the back of the man with the gun ruining his aim. The boy is knocked to the ground and shot. Batman disables the shooter and asks him where the virus is. Before killing himself with a poison capsule hidden in a tooth, the man tells Batman that the virus is in a soluble wax container hidden in a statue that was already thrown into the river. Batman jumps from the bridge and manages to reach the container and get back to the surface while it is still intact. He then finds the boy still alive and tells Lady Shiva that, because the boy almost died but killed no one, he chose the path of a hero. 
Detective Comics #701 Back in Gotham City, Batman finds Ra's al Ghul's agents at the site of an upcoming grand opening of a casino. Bane is there and attacks him. Batman sabotages the building and it explodes. His rage helps him defeat Bane, but the current from the river below the casino drags him away. Nightwing, Robin, and Huntress pursue Ra's by boat. Robin #33 Robin, Nightwing, and Huntress make it aboard Ra's al Ghul's yacht as Batman continues to search for Bane. Robin find the computers with the plague information and sends it to Oracle. Nightwing and Huntress put up a fight against Ra's' agents, but he and Talia capture them. They also attack Robin, leading to an explosion. He gets Huntress and Nightwing off the ship before it explodes; Oracle gets the entire program in time. Renee Montoya and Harvey Bullock discover dozens of mobsters who washed ashore from Blüdhaven. Detective Comics #702 Wayne Pharmaceuticals begins disbursing the antidote to Ra's al Ghul's plague. Ra's' remaining Gotham agents attack GCPD headquarters with a suicide bombing. Gordon and former Commissioner Sarah Essen-Gordon, along with the rest of the force, drive them away. The estranged couple make up and head home. Batman: Bane Bane hijacks a mobile nuclear power plant, intending to irradiate Gotham City into a wasteland. With the combined efforts of Batman, Robin, Nightwing and the plant's owner, however, Bane's plan is thwarted. References External links http://en.dcdatabaseproject.com/Batman:_Legacy Biological weapons in popular culture Comics by Alan Grant (writer) Comics by Doug Moench Viral outbreaks in comics Sequel comics Ebola in popular culture
Batman: Legacy
Biology
2,023
25,125,998
https://en.wikipedia.org/wiki/Detection%20error%20tradeoff
A detection error tradeoff (DET) graph is a graphical plot of error rates for binary classification systems, plotting the false rejection rate vs. false acceptance rate. The x- and y-axes are scaled non-linearly by their standard normal deviates (or just by logarithmic transformation), yielding tradeoff curves that are more linear than ROC curves, and use most of the image area to highlight the differences of importance in the critical operating region. Axis warping The normal deviate mapping (or normal quantile function, or inverse normal cumulative distribution) is given by the probit function, so that the horizontal axis is x = probit(Pfa) and the vertical is y = probit(Pfr), where Pfa and Pfr are the false-accept and false-reject rates. The probit mapping maps probabilities from the unit interval [0,1], to the extended real line [−∞, +∞]. Since this makes the axes infinitely long, one has to confine the plot to some finite rectangle of interest. See also Constant false alarm rate Detection theory False alarm Receiver operating characteristic References Error detection and correction
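To make the axis warping concrete, the following illustrative Python fragment computes DET-plot coordinates for a synthetic two-class problem; scipy.stats.norm.ppf is used as the probit (inverse normal CDF), and the score distributions are invented purely for illustration:

import numpy as np
from scipy.stats import norm

# Synthetic classifier scores: higher score = more likely a genuine (target) trial.
rng = np.random.default_rng(0)
genuine = rng.normal(loc=1.0, scale=1.0, size=10_000)    # target trials
impostor = rng.normal(loc=-1.0, scale=1.0, size=10_000)  # non-target trials

thresholds = np.linspace(-4.0, 4.0, 200)
p_fa = np.array([(impostor >= t).mean() for t in thresholds])  # false acceptance rate
p_fr = np.array([(genuine < t).mean() for t in thresholds])    # false rejection rate

# DET coordinates: probit of each error rate (clipped to avoid infinities at 0 and 1).
x = norm.ppf(np.clip(p_fa, 1e-6, 1 - 1e-6))
y = norm.ppf(np.clip(p_fr, 1e-6, 1 - 1e-6))
# Plotting y against x (e.g. with matplotlib) gives the DET curve, which is close to
# a straight line when both score distributions are approximately normal.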
Detection error tradeoff
Engineering
240
33,431,450
https://en.wikipedia.org/wiki/Modern%20searches%20for%20Lorentz%20violation
Modern searches for Lorentz violation are scientific studies that look for deviations from Lorentz invariance or symmetry, a set of fundamental frameworks that underpin modern science and fundamental physics in particular. These studies try to determine whether violations or exceptions might exist for well-known physical laws such as special relativity and CPT symmetry, as predicted by some variations of quantum gravity, string theory, and some alternatives to general relativity. Lorentz violations concern the fundamental predictions of special relativity, such as the principle of relativity, the constancy of the speed of light in all inertial frames of reference, and time dilation, as well as the predictions of the standard model of particle physics. To assess and predict possible violations, test theories of special relativity and effective field theories (EFT) such as the Standard-Model Extension (SME) have been invented. These models introduce Lorentz and CPT violations through spontaneous symmetry breaking caused by hypothetical background fields, resulting in some sort of preferred frame effects. This could lead, for instance, to modifications of the dispersion relation, causing differences between the maximal attainable speed of matter and the speed of light. Both terrestrial and astronomical experiments have been carried out, and new experimental techniques have been introduced. No Lorentz violations have been measured thus far, and exceptions in which positive results were reported have been refuted or lack further confirmations. For discussions of many experiments, see Mattingly (2005). For a detailed list of results of recent experimental searches, see Kostelecký and Russell (2008–2013). For a recent overview and history of Lorentz violating models, see Liberati (2013). Assessing Lorentz invariance violations Early models assessing the possibility of slight deviations from Lorentz invariance have been published between the 1960s and the 1990s. In addition, a series of test theories of special relativity and effective field theories (EFT) for the evaluation and assessment of many experiments have been developed, including: The parameterized post-Newtonian formalism is widely used as a test theory for general relativity and alternatives to general relativity, and can also be used to describe Lorentz violating preferred frame effects. The Robertson-Mansouri-Sexl framework (RMS) contains three parameters, indicating deviations in the speed of light with respect to a preferred frame of reference. The c2 framework (a special case of the more general THεμ framework) introduces a modified dispersion relation and describes Lorentz violations in terms of a discrepancy between the speed of light and the maximal attainable speed of matter, in presence of a preferred frame. Doubly special relativity (DSR) preserves the Planck length as an invariant minimum length-scale, yet without having a preferred reference frame. Very special relativity describes space-time symmetries that are certain proper subgroups of the Poincaré group. It was shown that special relativity is only consistent with this scheme in the context of quantum field theory or CP conservation. Noncommutative geometry (in connection with Noncommutative quantum field theory or the Noncommutative standard model) might lead to Lorentz violations. Lorentz violations are also discussed in relation to Alternatives to general relativity such as Loop quantum gravity, Emergent gravity, Einstein aether theory, Hořava–Lifshitz gravity. 
However, the Standard-Model Extension (SME), in which Lorentz violating effects are introduced by spontaneous symmetry breaking, is used for most modern analyses of experimental results. It was introduced by Kostelecký and colleagues in 1997 and the following years, containing all possible Lorentz and CPT violating coefficients not violating gauge symmetry. It includes not only special relativity, but the standard model and general relativity as well. Models whose parameters can be related to SME and thus can be seen as special cases of it include the older RMS and c2 models, the Coleman-Glashow model confining the SME coefficients to dimension 4 operators and rotation invariance, and the Gambini-Pullin model or the Myers-Pospelov model corresponding to dimension 5 or higher operators of SME. Speed of light Terrestrial Many terrestrial experiments have been conducted, mostly with optical resonators or in particle accelerators, by which deviations from the isotropy of the speed of light are tested. Anisotropy parameters are given, for instance, by the Robertson-Mansouri-Sexl test theory (RMS). This allows for distinction between the relevant orientation and velocity dependent parameters. In modern variants of the Michelson–Morley experiment, the dependence of light speed on the orientation of the apparatus and the relation of longitudinal and transverse lengths of bodies in motion is analyzed. Also modern variants of the Kennedy–Thorndike experiment, by which the dependence of light speed on the velocity of the apparatus and the relation of time dilation and length contraction is analyzed, have been conducted; the recently reached limit for the Kennedy–Thorndike test yields 7 × 10−12. The current precision, by which an anisotropy of the speed of light can be excluded, is at the 10−17 level. This is related to the relative velocity between the Solar System and the rest frame of the cosmic microwave background radiation of ~368 km/s (see also Resonator Michelson–Morley experiments). In addition, the Standard-Model Extension (SME) can be used to obtain a larger number of isotropy coefficients in the photon sector. It uses the even- and odd-parity coefficients (3×3 matrices) $\tilde{\kappa}_{e-}$, $\tilde{\kappa}_{o+}$, and $\tilde{\kappa}_{\mathrm{tr}}$. They can be interpreted as follows: $\tilde{\kappa}_{e-}$ represents anisotropic shifts in the two-way (forward and backwards) speed of light, $\tilde{\kappa}_{o+}$ represents anisotropic differences in the one-way speed of counterpropagating beams along an axis, and $\tilde{\kappa}_{\mathrm{tr}}$ represents isotropic (orientation-independent) shifts in the one-way phase velocity of light. It was shown that such variations in the speed of light can be removed by suitable coordinate transformations and field redefinitions, though the corresponding Lorentz violations cannot be removed, because such redefinitions only transfer those violations from the photon sector to the matter sector of SME. While ordinary symmetric optical resonators are suitable for testing even-parity effects and provide only tiny constraints on odd-parity effects, asymmetric resonators have also been built for the detection of odd-parity effects. For additional coefficients in the photon sector leading to birefringence of light in vacuum, which cannot be redefined away in the way the other photon effects can, see the section on vacuum birefringence below. Another type of test of the related one-way light speed isotropy in combination with the electron sector of the SME was conducted by Bocquet et al. (2010). 
They searched for fluctuations in the 3-momentum of photons during Earth's rotation, by measuring the Compton scattering of ultrarelativistic electrons on monochromatic laser photons in the frame of the cosmic microwave background radiation, as originally suggested by Vahe Gurzadyan and Amur Margarian (for details on that 'Compton Edge' method and its analysis, see the references). Solar System Besides terrestrial tests, astrometric tests using Lunar Laser Ranging (LLR), i.e. sending laser signals from Earth to the Moon and back, have also been conducted. They are ordinarily used to test general relativity and are evaluated using the Parameterized post-Newtonian formalism. However, since these measurements are based on the assumption that the speed of light is constant, they can also be used as tests of special relativity by analyzing potential distance and orbit oscillations. For instance, Zoltán Lajos Bay and White (1981) demonstrated the empirical foundations of the Lorentz group and thus special relativity by analyzing the planetary radar and LLR data. In addition to the terrestrial Kennedy–Thorndike experiments mentioned above, Müller & Soffel (1995) and Müller et al. (1999) tested the RMS velocity dependence parameter by searching for anomalous distance oscillations using LLR. Since time dilation is already confirmed to high precision, a positive result would prove that light speed depends on the observer's velocity and length contraction is direction dependent (like in the other Kennedy–Thorndike experiments). However, no anomalous distance oscillations have been observed, with an RMS velocity dependence limit comparable to that of Hils and Hall (1990). Vacuum dispersion Another effect often discussed in connection with quantum gravity (QG) is the possibility of dispersion of light in vacuum (i.e. the dependence of light speed on photon energy), due to Lorentz-violating dispersion relations. This effect should be strong at energy levels comparable to, or beyond, the Planck energy $E_{\mathrm{Pl}} \approx 1.22 \times 10^{19}$ GeV, while being extraordinarily weak at energies accessible in the laboratory or observed in astrophysical objects. In an attempt to observe a weak dependence of speed on energy, light from distant astrophysical sources such as gamma ray bursts and distant galaxies has been examined in many experiments. In particular, the Fermi-LAT group was able to show that no energy dependence, and thus no observable Lorentz violation, occurs in the photon sector even beyond the Planck energy, which excludes a large class of Lorentz-violating quantum gravity models. Vacuum birefringence Lorentz violating dispersion relations due to the presence of an anisotropic space might also lead to vacuum birefringence and parity violations. For instance, the polarization plane of photons might rotate due to velocity differences between left- and right-handed photons. In particular, gamma ray bursts, galactic radiation, and the cosmic microwave background radiation are examined. Bounds are typically quoted on the SME coefficients $k^{(3)}_{(V)00}$ and $k^{(5)}_{(V)00}$ for Lorentz violation, where 3 and 5 denote the mass dimensions employed; the latter corresponds to the parameter $\xi$ in the EFT of Myers and Pospelov, to which it is related through the Planck mass. Maximal attainable speed Threshold constraints Lorentz violations could lead to differences between the speed of light and the limiting or maximal attainable speed (MAS) of any particle, whereas in special relativity the speeds should be the same. 
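The vacuum-dispersion and time-of-flight searches described above are usually phrased in terms of a leading-order, Planck-suppressed modification of the photon dispersion relation. A minimal sketch of that common parametrization (the dimensionless coefficient $\xi$ and the linear-in-energy form are conventions of the phenomenological literature, not a unique prediction):
$E^2 \simeq p^2 c^2 \pm \xi\,\frac{p^3 c^3}{E_{\mathrm{Pl}}},$
which for photons gives an energy-dependent group velocity
$v(E) \simeq c\left(1 \pm \xi\,\frac{E}{E_{\mathrm{Pl}}}\right),$
so that two photons emitted together with energy difference $\Delta E$ from a source at distance $D$ arrive separated by roughly
$\Delta t \simeq \xi\,\frac{\Delta E}{E_{\mathrm{Pl}}}\,\frac{D}{c}$
(up to cosmological corrections for very distant sources). The Fermi-LAT result quoted above amounts to pushing the suppression scale $E_{\mathrm{Pl}}/\xi$ to the Planck energy and beyond. Analogous modifications of the dispersion relation for massive particles underlie the threshold arguments discussed next.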
One possibility is to investigate otherwise forbidden effects at threshold energy in connection with particles having a charge structure (protons, electrons, neutrinos). This is because the dispersion relation is assumed to be modified in Lorentz violating EFT models such as SME. Depending on which of these particles travels faster or slower than the speed of light, effects such as the following can occur: Photon decay at superluminal speed. These (hypothetical) high-energy photons would quickly decay into other particles, which means that high energy light cannot propagate over long distances. So the mere existence of high energy light from astronomic sources constrains possible deviations from the limiting velocity. Vacuum Cherenkov radiation at superluminal speed of any particle (protons, electrons, neutrinos) having a charge structure. In this case, emission of Bremsstrahlung can occur, until the particle falls below threshold and subluminal speed is reached again. This is similar to the known Cherenkov radiation in media, in which particles are traveling faster than the phase velocity of light in that medium. Deviations from the limiting velocity can be constrained by observing high energy particles of distant astronomic sources that reach Earth. The rate of synchrotron radiation could be modified, if the limiting velocity between charged particles and photons is different. The Greisen–Zatsepin–Kuzmin limit could be evaded by Lorentz violating effects. However, recent measurements indicate that this limit really exists. Since astronomic measurements also contain additional assumptions – like the unknown conditions at the emission or along the path traversed by the particles, or the nature of the particles –, terrestrial measurements provide results of greater clarity, even though the bounds are wider (the following bounds describe maximal deviations between the speed of light and the limiting velocity of matter): Clock comparison and spin coupling By this kind of spectroscopy experiments – sometimes called Hughes–Drever experiments as well – violations of Lorentz invariance in the interactions of protons and neutrons are tested by studying the energy levels of those nucleons in order to find anisotropies in their frequencies ("clocks"). Using spin-polarized torsion balances, also anisotropies with respect to electrons can be examined. Methods used mostly focus on vector spin interactions and tensor interactions, and are often described in CPT odd/even SME terms (in particular parameters of bμ and cμν). Such experiments are currently the most sensitive terrestrial ones, because the precision by which Lorentz violations can be excluded lies at the 10−33 GeV level. These tests can be used to constrain deviations between the maximal attainable speed of matter and the speed of light, in particular with respect to the parameters of cμν that are also used in the evaluations of the threshold effects mentioned above. Time dilation The classic time dilation experiments such as the Ives–Stilwell experiment, the Moessbauer rotor experiments, and the time dilation of moving particles, have been enhanced by modernized equipment. For example, the Doppler shift of lithium ions traveling at high speeds is evaluated by using saturated spectroscopy in heavy ion storage rings. For more information, see Modern Ives–Stilwell experiments. The current precision with which time dilation is measured (using the RMS test theory), is at the ~10−8 level. 
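For orientation, the size of the time-dilation effect being measured follows from the low-velocity expansion of the Lorentz factor (standard special-relativistic arithmetic, included here only as a worked example):
$\frac{\Delta f}{f} = 1 - \frac{1}{\gamma} = 1 - \sqrt{1 - \frac{v^2}{c^2}} \approx \frac{v^2}{2c^2},$
so an everyday speed of 36 km/h ($v = 10\ \mathrm{m/s}$) corresponds to a fractional frequency shift of only about $5.6 \times 10^{-16}$, the order of magnitude of the optical-clock measurement mentioned just below.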
It was shown, that Ives-Stilwell type experiments are also sensitive to the isotropic light speed coefficient of the SME, as introduced above. Chou et al. (2010) even managed to measure a frequency shift of ~10−16 due to time dilation, namely at everyday speeds such as 36 km/h. CPT and antimatter tests Another fundamental symmetry of nature is CPT symmetry. It was shown that CPT violations lead to Lorentz violations in quantum field theory (even though there are nonlocal exceptions). CPT symmetry requires, for instance, the equality of mass, and equality of decay rates between matter and antimatter. Modern tests by which CPT symmetry has been confirmed are mainly conducted in the neutral meson sector. In large particle accelerators, direct measurements of mass differences between top- and antitop-quarks have been conducted as well. Using SME, also additional consequences of CPT violation in the neutral meson sector can be formulated. Other SME related CPT tests have been performed as well: Using Penning traps in which individual charged particles and their counterparts are trapped, Gabrielse et al. (1999) examined cyclotron frequencies in proton-antiproton measurements, and couldn't find any deviation down to 9·10−11. Hans Dehmelt et al. tested the anomaly frequency, which plays a fundamental role in the measurement of the electron's gyromagnetic ratio. They searched for sidereal variations, and differences between electrons and positrons as well. Eventually they found no deviations, thereby establishing bounds of 10−24 GeV. Hughes et al. (2001) examined muons for sidereal signals in the spectrum of muons, and found no Lorentz violation down to 10−23 GeV. The "Muon g-2" collaboration of the Brookhaven National Laboratory searched for deviations in the anomaly frequency of muons and anti-muons, and for sidereal variations under consideration of Earth's orientation. Also here, no Lorentz violations could be found, with a precision of 10−24 GeV. Other particles and interactions Third generation particles have been examined for potential Lorentz violations using SME. For instance, Altschul (2007) placed upper limits on Lorentz violation of the tau of 10−8, by searching for anomalous absorption of high energy astrophysical radiation. In the BaBar experiment (2007), the D0 experiment (2015), and the LHCb experiment (2016), searches have been made for sidereal variations during Earth's rotation using B mesons (thus bottom quarks) and their antiparticles. No Lorentz and CPT violating signal were found with upper limits in the range 10−15 − 10−14 GeV. Also top quark pairs have been examined in the D0 experiment (2012). They showed that the cross section production of these pairs doesn't depend on sidereal time during Earth's rotation. Lorentz violation bounds on Bhabha scattering have been given by Charneski et al. (2012). They showed that differential cross sections for the vector and axial couplings in QED become direction dependent in the presence of Lorentz violation. They found no indication of such an effect, placing upper limits on Lorentz violations of . Gravitation The influence of Lorentz violation on gravitational fields and thus general relativity was analyzed as well. The standard framework for such investigations is the Parameterized post-Newtonian formalism (PPN), in which Lorentz violating preferred frame effects are described by the parameters (see the PPN article on observational bounds on these parameters). 
Lorentz violations are also discussed in relation to Alternatives to general relativity such as Loop quantum gravity, Emergent gravity, Einstein aether theory or Hořava–Lifshitz gravity. Also SME is suitable to analyze Lorentz violations in the gravitational sector. Bailey and Kostelecky (2006) constrained Lorentz violations down to by analyzing the perihelion shifts of Mercury and Earth, and down to in relation to solar spin precession. Battat et al. (2007) examined Lunar Laser Ranging data and found no oscillatory perturbations in the lunar orbit. Their strongest SME bound excluding Lorentz violation was . Iorio (2012) obtained bounds at the level by examining Keplerian orbital elements of a test particle acted upon by Lorentz-violating gravitomagnetic accelerations. Xie (2012) analyzed the advance of periastron of binary pulsars, setting limits on Lorentz violation at the level. Neutrino tests Neutrino oscillations Although neutrino oscillations have been experimentally confirmed, the theoretical foundations are still controversial, as it can be seen in the discussion related to sterile neutrinos. This makes predictions of possible Lorentz violations very complicated. It is generally assumed that neutrino oscillations require a certain finite mass. However, oscillations could also occur as a consequence of Lorentz violations, so there are speculations as to how much those violations contribute to the mass of the neutrinos. Additionally, a series of investigations have been published in which a sidereal dependence of the occurrence of neutrino oscillations was tested, which could arise when there were a preferred background field. This, possible CPT violations, and other coefficients of Lorentz violations in the framework of SME, have been tested. Here, some of the achieved GeV bounds for the validity of Lorentz invariance are stated: Neutrino speed Since the discovery of neutrino oscillations, it is assumed that their speed is slightly below the speed of light. Direct velocity measurements indicated an upper limit for relative speed differences between light and neutrinos of , see measurements of neutrino speed. Also indirect constraints on neutrino velocity, on the basis of effective field theories such as SME, can be achieved by searching for threshold effects such as Vacuum Cherenkov radiation. For example, neutrinos should exhibit Bremsstrahlung in the form of electron-positron pair production. Another possibility in the same framework is the investigation of the decay of pions into muons and neutrinos. Superluminal neutrinos would considerably delay those decay processes. The absence of those effects indicate tight limits for velocity differences between light and neutrinos. Velocity differences between neutrino flavors can be constrained as well. A comparison between muon- and electron-neutrinos by Coleman & Glashow (1998) gave a negative result, with bounds <6. Reports of alleged Lorentz violations Open reports LSND, MiniBooNE In 2001, the LSND experiment observed a 3.8σ excess of antineutrino interactions in neutrino oscillations, which contradicts the standard model. First results of the more recent MiniBooNE experiment appeared to exclude this data above an energy scale of 450 MeV, but they had checked neutrino interactions, not antineutrino ones. In 2008, however, they reported an excess of electron-like neutrino events between 200 and 475 MeV. 
And in 2010, when carried out with antineutrinos (as in LSND), the result was in agreement with the LSND result, that is, an excess at the energy scale from 450 to 1250 MeV was observed. Whether those anomalies can be explained by sterile neutrinos, or whether they indicate Lorentz violations, is still discussed and subject to further theoretical and experimental researches. Solved reports In 2011 the OPERA Collaboration published (in a non-peer reviewed arXiv preprint) the results of neutrino measurements, according to which neutrinos were traveling slightly faster than light. The neutrinos apparently arrived early by ~60 ns. The standard deviation was 6σ, clearly beyond the 5σ limit necessary for a significant result. However, in 2012 it was found that this result was due to measurement errors. The result was consistent with the speed of light; see Faster-than-light neutrino anomaly. In 2010, MINOS reported differences between the disappearance (and thus the masses) of neutrinos and antineutrinos at the 2.3 sigma level. This would violate CPT symmetry and Lorentz symmetry. However, in 2011 MINOS updated their antineutrino results; after evaluating additional data, they reported that the difference is not as great as initially thought. In 2012, they published a paper in which they reported that the difference is now removed. In 2007, the MAGIC Collaboration published a paper, in which they claimed a possible energy dependence of the speed of photons from the galaxy Markarian 501. They admitted, that also a possible energy-dependent emission effect could have cause this result as well. However, the MAGIC result was superseded by the substantially more precise measurements of the Fermi-LAT group, which couldn't find any effect even beyond the Planck energy. For details, see section Dispersion. In 1997, Nodland & Ralston claimed to have found a rotation of the polarization plane of light coming from distant radio galaxies. This would indicate an anisotropy of space. This attracted some interest in the media. However, some criticisms immediately appeared, which disputed the interpretation of the data, and who alluded to errors in the publication. More recent studies have not found any evidence for this effect (see section on Birefringence). See also Tests of special relativity Phenomenological quantum gravity References External links Kostelecký: Background information on Lorentz and CPT violation Roberts, Schleif (2006); Relativity FAQ: What is the experimental basis of special relativity? Physics experiments Tests of special relativity
Modern searches for Lorentz violation
Physics
4,830
58,885,745
https://en.wikipedia.org/wiki/Quine%E2%80%93Putnam%20indispensability%20argument
The Quine–Putnam indispensability argument is an argument in the philosophy of mathematics for the existence of abstract mathematical objects such as numbers and sets, a position known as mathematical platonism. It was named after the philosophers Willard Van Orman Quine and Hilary Putnam, and is one of the most important arguments in the philosophy of mathematics. Although elements of the indispensability argument may have originated with thinkers such as Gottlob Frege and Kurt Gödel, Quine's development of the argument was unique for introducing to it a number of his philosophical positions such as naturalism, confirmational holism, and the criterion of ontological commitment. Putnam gave Quine's argument its first detailed formulation in his 1971 book Philosophy of Logic. He later came to disagree with various aspects of Quine's thinking, however, and formulated his own indispensability argument based on the no miracles argument in the philosophy of science. A standard form of the argument in contemporary philosophy is credited to Mark Colyvan; whilst being influenced by both Quine and Putnam, it differs in important ways from their formulations. It is presented in the Stanford Encyclopedia of Philosophy: We ought to have ontological commitment to all and only the entities that are indispensable to our best scientific theories. Mathematical entities are indispensable to our best scientific theories. Therefore, we ought to have ontological commitment to mathematical entities. Nominalists, philosophers who reject the existence of abstract objects, have argued against both premises of this argument. An influential argument by Hartry Field claims that mathematical entities are dispensable to science. This argument has been supported by attempts to demonstrate that scientific and mathematical theories can be reformulated to remove all references to mathematical entities. Other philosophers, including Penelope Maddy, Elliott Sober, and Joseph Melia, have argued that we do not need to believe in all of the entities that are indispensable to science. The arguments of these writers inspired a new explanatory version of the argument, which Alan Baker and Mark Colyvan support, that argues mathematics is indispensable to specific scientific explanations as well as whole theories. Background In his 1973 paper "Mathematical Truth", Paul Benacerraf raised a problem for the philosophy of mathematics. According to Benacerraf, mathematical sentences such as "two is a prime number" imply the existence of mathematical objects. He supported this claim with the idea that mathematics should not have its own special semantics, or in other words, the meaning of mathematical sentences should follow the same rules as non-mathematical sentences. For example, according to this reasoning, if the sentence "Mars is a planet" implies the existence of the planet Mars, then the sentence "two is a prime number" should also imply the existence of the number two. But according to Benacerraf, if mathematical objects existed, they would be unknowable. This is because mathematical objects, if they exist, are abstract objects: objects that cannot cause things to happen and that have no location in space and time. Benacerraf argued, on the basis of the causal theory of knowledge, that it would be impossible to know about such objects because they cannot come into causal contact with us. 
This is called Benacerraf's epistemological problem because it concerns the epistemology of mathematics, that is, how we come to know what we do about mathematics. The philosophy of mathematics is split into two main strands: platonism and nominalism. Platonism holds that there exist abstract mathematical objects such as numbers and sets whilst nominalism denies their existence. Each of these views faces issues due to the problem raised by Benacerraf. Because nominalism rejects the existence of mathematical objects, it faces no epistemological problem but it does face problems concerning the idea that mathematics should not have its own special semantics. Platonism does not face problems concerning the semantic half of the dilemma but it has difficulty explaining how we can have any knowledge about mathematical objects. The indispensability argument aims to overcome the epistemological problem posed against platonism by providing a justification for belief in abstract mathematical objects. It is part of a broad class of indispensability arguments most commonly applied in the philosophy of mathematics, but which also includes arguments in the philosophy of language and ethics. In the most general sense, indispensability arguments aim to support their conclusion based on the claim that the truth of the conclusion is indispensable or necessary for a certain purpose. When applied in the field of ontology—the study of what exists—they exemplify a Quinean strategy for establishing the existence of controversial entities that cannot be directly investigated. According to this strategy, the indispensability of these entities for formulating a theory of other less controversial entities counts as evidence for their existence. In the case of philosophy of mathematics, the indispensability of mathematical entities for formulating scientific theories is taken as evidence for the existence of those mathematical entities. Overview of the argument Mark Colyvan presents the argument in the Stanford Encyclopedia of Philosophy in the following form: We ought to have ontological commitment to all and only the entities that are indispensable to our best scientific theories. Mathematical entities are indispensable to our best scientific theories. Therefore, we ought to have ontological commitment to mathematical entities. Here, an ontological commitment to an entity is a commitment to believing that that entity exists. The first premise is based on two fundamental assumptions: naturalism and confirmational holism. According to naturalism, we should look to our best scientific theories to determine what we have best reason to believe exists. Quine summarized naturalism as "the recognition that it is within science itself, and not in some prior philosophy, that reality is to be identified and described". Confirmational holism is the view that scientific theories cannot be confirmed in isolation and must be confirmed as wholes. Therefore, according to confirmational holism, if we should believe in science, then we should believe in all of science, including any of the mathematics that is assumed by our best scientific theories. The argument is mainly aimed at nominalists that are scientific realists as it attempts to justify belief in mathematical entities in a manner similar to the justification for belief in theoretical entities such as electrons or quarks; Quine held that such nominalists have a "double standard" with regards to ontology. 
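To make the second premise more concrete, it can help to see how even a simple physical claim appears to quantify over mathematical entities once it is written out formally. The following schematic regimentation is an illustration added for this purpose, not an example drawn from Quine, Putnam, or Colyvan:

\[
\text{"Every body has a mass"} \;\rightsquigarrow\; \forall x \, \big( \mathrm{Body}(x) \rightarrow \exists r \, ( \mathrm{RealNumber}(r) \wedge \mathrm{Mass}(x) = r ) \big)
\]

Read literally, the regimented sentence existentially quantifies over real numbers, so someone who accepts the physical claim and treats quantification as ontologically committing appears to be committed to the existence of numbers as well as of bodies.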
The indispensability argument differs from other arguments for platonism because it only argues for belief in the parts of mathematics that are indispensable to science. It does not necessarily justify belief in the most abstract parts of set theory, which Quine called "mathematical recreation … without ontological rights". Some philosophers infer from the argument that mathematical knowledge is a posteriori because it implies mathematical truths can only be established via the empirical confirmation of scientific theories to which they are indispensable. This also indicates mathematical truths are contingent since empirically known truths are generally contingent. Such a position is controversial because it contradicts the traditional view of mathematical knowledge as a priori knowledge of necessary truths. Whilst Quine's original argument is an argument for platonism, indispensability arguments can also be constructed to argue for the weaker claim of sentence realism—the claim that mathematical theory is objectively true. This is a weaker claim because it does not necessarily imply there are abstract mathematical objects. Major concepts Indispensability The second premise of the indispensability argument states mathematical objects are indispensable to our best scientific theories. In this context, indispensability is not the same as ineliminability because any entity can be eliminated from a theoretical system given appropriate adjustments to the other parts of the system. Indispensability instead means that an entity cannot be eliminated without reducing the attractiveness of the theory. The attractiveness of the theory can be evaluated in terms of theoretical virtues such as explanatory power, empirical adequacy and simplicity. Furthermore, if an entity is dispensable to a theory, an equivalent theory can be formulated without it. This is the case, for example, if each sentence in one theory is a paraphrase of a sentence in another or if the two theories predict the same empirical observations. According to the Stanford Encyclopedia of Philosophy, one of the most influential arguments against the indispensability argument comes from Hartry Field. It rejects the claim that mathematical objects are indispensable to science; Field has supported this argument by reformulating or "nominalizing" scientific theories so they do not refer to mathematical objects. As part of this project, Field has offered a reformulation of Newtonian physics in terms of the relationships between space-time points. Instead of referring to numerical distances, Field's reformulation uses relationships such as "between" and "congruent" to recover the theory without implying the existence of numbers. John Burgess and Mark Balaguer have taken steps to extend this nominalizing project to areas of modern physics, including quantum mechanics. Philosophers such as David Malament and Otávio Bueno dispute whether such reformulations are successful or even possible, particularly in the case of quantum mechanics. Field's alternative to platonism is mathematical fictionalism, according to which mathematical theories are false because they refer to abstract objects which do not exist. As part of his argument against the indispensability argument, Field has tried to explain how it is possible for false mathematical statements to be used by science without making scientific predictions false. His argument is based on the idea that mathematics is conservative. 
A mathematical theory is conservative if, when combined with a scientific theory, it does not imply anything about the physical world that the scientific theory alone would not have already. This explains how it is possible for mathematics to be used by scientific theories without making the predictions of science false. In addition, Field has attempted to specify how exactly mathematics is useful in application. Field thinks mathematics is useful for science because mathematical language provides a useful shorthand for talking about complex physical systems. Another approach to denying that mathematical entities are indispensable to science is to reformulate mathematical theories themselves so they do not imply the existence of mathematical objects. Charles Chihara, Geoffrey Hellman, and Putnam have offered modal reformulations of mathematics that replace all references to mathematical objects with claims about possibilities. Naturalism The naturalism underlying the indispensability argument is a form of methodological naturalism that asserts the primacy of the scientific method for determining the truth. In other words, according to Quine's naturalism, our best scientific theories are the best guide to what exists. This form of naturalism rejects the idea that philosophy precedes and ultimately justifies belief in science, instead holding that science and philosophy are continuous with one another as part of a single, unified investigation into the world. As such, this form of naturalism precludes the idea of a prior philosophy that can overturn the ontological commitments of science. This is in contrast to metaphysical forms of naturalism, which rule out the existence of abstract objects because they are not physical. An example of such a naturalism is supported by David Armstrong. It holds a principle called the Eleatic principle, which states that only causal entities exist and there are no non-causal entities. Quine's naturalism claims such a principle cannot be used to overturn our best scientific theories' ontological commitment to mathematical entities because philosophical principles cannot overrule science. Quine held his naturalism as a fundamental assumption but later philosophers have provided arguments to support it. The most common arguments in support of Quinean naturalism are track-record arguments. These are arguments that appeal to science's successful track record compared to philosophy and other disciplines. David Lewis famously made such an argument in a passage from his 1991 book Parts of Classes, deriding the track record of philosophy compared to mathematics and arguing that the idea of philosophy overriding science is absurd. Critics of the track record argument have argued that it goes too far, discrediting philosophical arguments and methods entirely, and contest the idea that philosophy can be uniformly judged to have had a bad track record. Quine's naturalism has also been criticized by Penelope Maddy for contradicting mathematical practice. According to the indispensability argument, mathematics is subordinated to the natural sciences in the sense that its legitimacy depends on them. But Maddy argues mathematicians do not seem to believe their practice is restricted in any way by the activity of the natural sciences. For example, mathematicians' arguments over the axioms of Zermelo–Fraenkel set theory do not appeal to their applications to the natural sciences. 
Similarly, Charles Parsons has argued that mathematical truths seem immediately obvious in a way that suggests they do not depend on the results of our best theories. Confirmational holism Confirmational holism is the view that scientific theories and hypotheses cannot be confirmed in isolation and must be confirmed together as part of a larger cluster of theories. An example of this idea provided by Michael Resnik is of the hypothesis that an observer will see oil and water separate out if they are added together because they do not mix. This hypothesis cannot be confirmed in isolation because it relies on assumptions such as the absence of any chemical that will interfere with their separation and that the eyes of the observer are functioning well enough to observe the separation. Because mathematical theories are likewise assumed by scientific theories, confirmational holism implies the empirical confirmations of scientific theories also support these mathematical theories. According to a counterargument by Maddy, the theses of naturalism and confirmational holism that make up the first premise of the indispensability argument are in tension with one another. Maddy said naturalism tells us that we should respect the methods used by scientists as the best method for uncovering the truth, but scientists do not act as if we should believe in all of the entities that are indispensable to science. To illustrate this point, Maddy uses the example of atomic theory; she states that despite the atom being indispensable to scientists' best theories by 1860, their reality was not universally accepted until 1913 when they were put to a direct experimental test. Maddy, and others such as Mary Leng, also appeal to the fact that scientists use mathematical idealizations—such as assuming bodies of water to be infinitely deep—without regard for whether they are true. According to Maddy, this indicates that scientists do not view the indispensable use of mathematics for science as justification for the belief in mathematics or mathematical entities. Overall, Maddy said we should side with naturalism and reject confirmational holism, meaning we do not need to believe in all of the entities that are indispensable to science. Another counterargument due to Elliott Sober claims that mathematical theories are not tested in the same way as scientific theories. Sober states that scientific theories compete with alternatives to find which theory has the most empirical support. But there are no alternatives for mathematical theory to compete with because all scientific theories share the same mathematical core. As a result, according to Sober, mathematical theories do not share the empirical support of our best scientific theories so we should reject confirmational holism. Since these counterarguments have been raised, a number of philosophers—including Resnik, Alan Baker, Patrick Dieveney, David Liggins, Jacob Busch, and Andrea Sereni—have argued that confirmational holism can be eliminated from the argument. For example, Resnik has offered a pragmatic indispensability argument focused less on the notion of evidence and more on the practical importance of mathematics in conducting scientific enquiry. Ontological commitment Another key part of the argument is the concept of ontological commitment, which can be applied to both theories and individual people. The ontological commitments of a theory are all the things that exist according to that theory. 
In turn, for a person to be ontologically committed to something is for them to be committed to believing that it exists. Quine believed that we should be committed to the same entities that our best scientific theories are committed to. He formulated a "criterion of ontological commitment", which aims to uncover the commitments of our best theories by translating or "regimenting" them from ordinary language into first-order logic. In ordinary language, Quine believed the term "there is" must carry ontological commitment; to say "there is" something means that that thing exists. And for Quine, the existential quantifier in first-order logic was the natural equivalent of "there is". Therefore, Quine's criterion takes the ontological commitments of the theory to be all of the objects over which the regimented theory quantifies. Quine thought it was important to translate our best scientific theories into first-order logic because ordinary language is ambiguous, whereas logic can make the commitments of a theory more precise. Translating theories to first-order logic also has advantages over translating them to higher-order logics such as second-order logic. Whilst second-order logic has greater expressive power than first-order logic, it lacks some of the technical strengths of first-order logic such as completeness and compactness. Second-order logic also allows quantification over properties like "redness", but whether we have ontological commitment to properties is controversial. According to Quine, such quantification is simply ungrammatical. Jody Azzouni has objected to Quine's criterion of ontological commitment, saying that the existential quantifier in first-order logic does not always carry ontological commitment. According to Azzouni, the ordinary language equivalent of existential quantification "there is" is often used in sentences without implying ontological commitment. In particular, Azzouni points to the use of "there is" when referring to fictional objects in sentences such as "there are fictional detectives who are admired by some real detectives". According to Azzouni, for us to have ontological commitment to an entity, we must have the right level of epistemic access to it. This means, for example, that it must overcome some epistemic burdens for us to be able to postulate it. But according to Azzouni, mathematical entities are "mere posits" that can be postulated by anyone at any time by "simply writing down a set of axioms", so we do not need to treat them as real. More modern presentations of the argument do not necessarily accept Quine's criterion of ontological commitment and may allow for ontological commitments to be directly determined from ordinary language. Mathematical explanation One issue with the argument, raised by Joseph Melia, is that it does not account for the role of mathematics in science. According to Melia, we only need to believe in mathematics if it is indispensable to science in the right kind of way. In particular, it needs to be indispensable to scientific explanations. But according to Melia, mathematics plays a purely representational role in science; it merely "[makes] more things sayable about concrete objects". He argues that it is legitimate to withdraw commitment to mathematics for this reason, citing a linguistic phenomenon he calls "weaseling". This is when a person makes a statement and then later withdraws something implied by that statement. 
An example of weaseling used to express information in an everyday context is "Everyone who came to the seminar had a handout. But the person who came in late didn't get one." Here, seemingly contradictory information is conveyed, but read charitably it simply states that everyone apart from the person who came in late got a handout. Similarly, according to Melia, although mathematics is indispensable to science "almost all scientists ... deny that there are such things as mathematical objects", implying that commitment to mathematical objects is being weaseled away. For Melia, such weaseling is acceptable because mathematics does not play a genuinely explanatory role in science. Inspired both by the arguments against confirmational holism and Melia's argument that we can suspend belief in mathematics if it does not play a genuinely explanatory role in science, Colyvan and Baker have defended an explanatory version of the indispensability argument. This version of the argument attempts to remove the reliance on confirmational holism by replacing it with an inference to the best explanation. It states we are justified in believing in mathematical objects because they appear in our best scientific explanations, not because they inherit the empirical support of our best theories. It is presented by the Internet Encyclopedia of Philosophy in the following form: There are genuinely mathematical explanations of empirical phenomena. We ought to be committed to the theoretical posits in such explanations. Therefore, we ought to be committed to the entities postulated by the mathematics in question. An example of mathematics' explanatory indispensability presented by Baker is the periodic cicada, a type of insect that usually has life cycles of 13 or 17 years. It is hypothesized that this is an evolutionary advantage because 13 and 17 are prime numbers. Because prime numbers have no non-trivial factors, this means it is less likely predators can synchronize with the cicadas' life cycles. Baker said that this is an explanation in which mathematics, specifically number theory, plays a key role in explaining an empirical phenomenon. Other important examples are explanations of the hexagonal structure of bee honeycombs and the impossibility of crossing all seven bridges of Königsberg only once in a walk across the city. The main response to this form of the argument, which philosophers such as Melia, Chris Daly, Simon Langford, and Juha Saatsi have adopted, is to deny there are genuinely mathematical explanations of empirical phenomena, instead framing the role of mathematics as representational or indexical. Historical development Precursors and influences on Quine The argument is historically associated with Willard Van Orman Quine and Hilary Putnam but it can be traced to earlier thinkers such as Gottlob Frege and Kurt Gödel. In his arguments against mathematical formalism—a view that likens mathematics to a game like chess with rules about how mathematical symbols such as "2" can be manipulated—Frege said in 1893 that "it is applicability alone which elevates arithmetic from a game to the rank of a science". Gödel, in a 1947 paper on the axioms of set theory, said that if a new axiom were to have enough verifiable consequences, it "would have to be accepted at least in the same sense as any well‐established physical theory". 
Frege's and Gödel's arguments differ from the later Quinean indispensability argument because they lack features such as naturalism and subordination of practice, leading some philosophers, including Pieranna Garavaso, to say that they are not genuine examples of the indispensability argument. Whilst developing his philosophical view of confirmational holism, Quine was influenced by Pierre Duhem. At the beginning of the twentieth century, Duhem defended the law of inertia from critics who said that it is devoid of empirical content and unfalsifiable. These critics based this claim on the fact that the law does not make any observable predictions without positing some observational frame of reference and that falsifying instances can always be avoided by changing the choice of reference frame. Duhem responded by saying that the law produces predictions when paired with auxiliary hypotheses fixing the frame of reference and is therefore no different from any other physical theory. Duhem said that although individual hypotheses may make no observable predictions alone, they can be confirmed as parts of systems of hypotheses. Quine extended this idea to mathematical hypotheses, claiming that although mathematical hypotheses hold no empirical content on their own, they can share in the empirical confirmations of the systems of hypotheses in which they are contained. This thesis later came to be known as the Duhem–Quine thesis. Quine described his naturalism as the "abandonment of the goal of a first philosophy. It sees natural science as an inquiry into reality, fallible and corrigible but not answerable to any supra-scientific tribunal, and not in need of any justification beyond observation and the hypothetico-deductive method." The term "first philosophy" is used in reference to Descartes' Meditations on First Philosophy, in which Descartes used his method of doubt in an attempt to secure the foundations of science. Quine said that Descartes' attempts to provide the foundations for science had failed and that the project of finding a foundational justification for science should be rejected because he believed philosophy could never provide a method of justification more convincing than the scientific method. Quine was also influenced by the logical positivists, such as his teacher Rudolf Carnap; his naturalism was formulated in response to many of their ideas. For the logical positivists, all justified beliefs were reducible to sense data, including our knowledge of ordinary objects such as trees. Quine criticized sense data as self-defeating, saying that we must believe in ordinary objects to organize our experiences of the world. He also said that because science is our best theory of how sense-experience gives us beliefs about ordinary objects, we should believe in it as well. Whilst the logical positivists said that individual claims must be supported by sense data, Quine's confirmational holism means scientific theory is inherently tied up with mathematical theory and so evidence for scientific theories can justify belief in mathematical objects despite their not being directly perceived. Quine and Putnam Whilst he eventually became a platonist due to his formulation of the indispensability argument, Quine was sympathetic to nominalism from the early stages of his career. In a 1946 lecture, he said: "I will put my cards on the table now and avow my prejudices: I should like to be able to accept nominalism". 
He and Nelson Goodman subsequently released a joint 1947 paper titled "Steps toward a Constructive Nominalism" as part of an ongoing project of Quine's to "set up a nominalistic language in which all of natural science can be expressed". In a letter to Joseph Henry Woodger the following year, however, Quine said that he was becoming more convinced "the assumption of abstract entities and the assumptions of the external world are assumptions of the same sort". He later released the 1948 paper "On What There Is", in which he said that "[t]he analogy between the myth of mathematics and the myth of physics is ... strikingly close", marking a shift towards his eventual acceptance of a "reluctant platonism". Throughout the 1950s, Quine regularly mentioned platonism, nominalism, and constructivism as plausible views, and he had not yet reached a definitive conclusion about which was correct. It is unclear exactly when Quine accepted platonism; in 1953, he distanced himself from the claims of nominalism in his 1947 paper with Goodman, but by 1956, Goodman was still describing Quine's "defection" from nominalism as "still somewhat tentative". According to Lieven Decock, Quine had accepted the need for abstract mathematical entities by the publication of his 1960 book Word and Object, in which he wrote "a thoroughgoing nominalist doctrine is too much to live up to". However, whilst he released suggestions of the indispensability argument in a number of papers, he never gave it a detailed formulation. Putnam gave the argument its first explicit presentation in his 1971 book Philosophy of Logic in which he attributed it to Quine. He stated the argument as "quantification over mathematical entities is indispensable for science, both formal and physical; therefore we should accept such quantification; but this commits us to accepting the existence of the mathematical entities in question". He also wrote Quine had "for years stressed both the indispensability of quantification over mathematical entities and the intellectual dishonesty of denying the existence of what one daily presupposes". Putnam's endorsement of Quine's version of the argument is disputed. The Internet Encyclopedia of Philosophy states: "In his early work, Hilary Putnam accepted Quine's version of the indispensability argument." Liggins and Bueno, however, argue that Putnam never endorsed the argument and only presented it as an argument from Quine. In a 1990 lecture, Putnam said that he had shared Quine's views on the indispensability argument since 1948 when he was a student at Harvard, but that he had since come to disagree with them. He later said that he differed with Quine in his attitude to the argument from at least 1975. Features of the argument that Putnam came to disagree with include its reliance on a single, regimented, best theory. In 1975, Putnam formulated his own indispensability argument based on the no miracles argument in the philosophy of science, which argues the success of science can only be explained by scientific realism without being rendered miraculous. He wrote that year: "I believe that the positive argument for realism [in science] has an analogue in the case of mathematical realism. Here too, I believe, realism is the only philosophy that doesn't make the success of the science a miracle." The Internet Encyclopedia of Philosophy terms this version of the argument "Putnam's success argument" and presents it in the following form: Mathematics succeeds as the language of science. 
There must be a reason for the success of mathematics as the language of science. No positions other than realism in mathematics provide a reason. Therefore, realism in mathematics must be correct. According to the Internet Encyclopedia of Philosophy, the first and second premises of the argument have been seen as uncontroversial, so discussion of this argument has been focused on the third premise. Other positions that have attempted to provide a reason for the success of mathematics include Field's reformulations of science, which explain the usefulness of mathematics as a conservative shorthand. Putnam has criticized Field's reformulations for only applying to classical physics and for being unlikely to be able to be extended to future fundamental physics. Continued development of the argument According to Ian Hacking, there was no "concerted challenge" to the indispensability argument for a number of decades after Quine first raised it. Chihara, in his 1973 book Ontology and the Vicious Circle Principle, was one of the earliest philosophers to attempt to reformulate mathematics in response to Quine's arguments. Field followed with Science Without Numbers in 1980 and dominated discussion about the indispensability argument throughout the 1980s and 1990s. With the introduction of arguments against the first premise of the argument, initially by Maddy in the 1990s and continued by Melia and others in the 2000s, Field's approach has come to be known as "Hard Road Nominalism" due to the difficulty of creating technical reconstructions of science that it requires. Approaches attacking the first premise, in contrast, have come to be known as "Easy Road Nominalism". Colyvan is often seen as presenting the standard or "canonical" formulation of the argument within more recent philosophical work, and his version of the argument has been influential within contemporary philosophy of mathematics. It differs in key ways from the arguments presented by Quine and Putnam. Quine's version of the argument relies on translating scientific theories from ordinary language into first-order logic to determine its ontological commitments, which is not explicitly required by Colyvan's formulation. Putnam's arguments were for the objectivity of mathematics but not necessarily for mathematical objects. Putnam has explicitly distanced himself from this version of the argument, saying, "from my point of view, Colyvan's description of my argument(s) is far from right", and has contrasted his indispensability argument with "the fictitious 'Quine–Putnam indispensability argument. Colyvan has said "the attribution to Quine and Putnam [is] an acknowledgement of intellectual debts rather than an indication that the argument, as presented, would be endorsed in every detail by either Quine or Putnam". Influence The indispensability argument is widely, though not universally, considered to be the best argument for platonism in the philosophy of mathematics. According to the Stanford Encyclopedia of Philosophy, some within the field see it as the only good argument for platonism. It is one of just a few arguments that have come to dominate the debate between mathematical realism and mathematical anti-realism. In contemporary philosophy, many types of nominalism define themselves in opposition to the indispensability argument, and it is generally seen as the most important argument to overcome for nominalist views such as fictionalism. 
Quine's and Putnam's arguments have also been influential outside philosophy of mathematics, inspiring indispensability arguments in other areas of philosophy. For example, David Lewis, who was a student of Quine, used an indispensability argument to argue for modal realism in his 1986 book On the Plurality of Worlds. According to his argument, quantification over possible worlds is indispensable to our best philosophical theories, so we should believe in their concrete existence. Other indispensability arguments in metaphysics are defended by philosophers such as David Armstrong, Graeme Forbes, and Alvin Plantinga, who have argued for the existence of states of affairs due to the indispensable theoretical role they play in our best philosophical theories of truthmakers, modality, and possible worlds. In the field of ethics, David Enoch has expanded the criterion of ontological commitment used in the Quine–Putnam indispensability argument to argue for moral realism. According to Enoch's "deliberative indispensability argument", indispensability to deliberation is just as ontologically committing as indispensability to science, and moral facts are indispensable to deliberation. Therefore, according to Enoch, we should believe in moral facts. Notes References Citations Sources Further reading Philosophy of mathematics Philosophical arguments Willard Van Orman Quine
Quine–Putnam indispensability argument
Mathematics
7,053
67,414,446
https://en.wikipedia.org/wiki/Carbonea%20supersparsa
Carbonea supersparsa is a species of lichenicolous fungus belonging to the family Lecanoraceae. It is widespread in the Northern Hemisphere. In Iceland it has been reported growing on Lecanora cenisia near Egilsstaðir and Lecanora polytropa near Seyðisfjörður. References Lecanoraceae Fungi described in 1865 Fungi of Iceland Fungi of Europe Fungi of North America Taxa named by William Nylander (botanist) Fungus species
Carbonea supersparsa
Biology
102
35,638,659
https://en.wikipedia.org/wiki/Xenaroswelliana
Xenaroswelliana is a genus of ground beetles in the family Carabidae, the sole genus in the subfamily Xenaroswellianinae. This genus has a single species, Xenaroswelliana deltaquadrant. It was described from a single specimen found in Brazil. References Carabidae Monotypic insect taxa Species known from a single specimen
Xenaroswelliana
Biology
75
1,390,297
https://en.wikipedia.org/wiki/Nanomotor
A nanomotor is a molecular or nanoscale device capable of converting energy into movement. It can typically generate forces on the order of piconewtons. While nanoparticles have been utilized by artists for centuries, such as in the famous Lycurgus cup, scientific research into nanotechnology did not come about until recently. In 1959, Richard Feynman gave a famous talk entitled "There's Plenty of Room at the Bottom" at the American Physical Society's conference hosted at Caltech. He went on to make a scientific wager that no one could design a motor smaller than 400 μm on any side. The purpose of the wager (as with most scientific bets) was to inspire scientists to develop new technologies, and anyone who could develop a nanomotor could claim the US$1,000 prize. However, his purpose was thwarted by William McLellan, who fabricated a nanomotor without developing new methods. Nonetheless, Richard Feynman's speech inspired a new generation of scientists to pursue research into nanotechnology. Nanomotors are the focus of research for their ability to overcome microfluidic dynamics present at low Reynolds numbers. The scallop theorem explains that nanomotors must break symmetry to produce motion at low Reynolds numbers. In addition, Brownian motion must be considered because particle-solvent interaction can dramatically impact the ability of a nanomotor to move through a liquid. This can pose a significant problem when designing new nanomotors. Current nanomotor research seeks to overcome these problems, and by doing so, can improve current microfluidic devices or give rise to new technologies. Significant research has been done to overcome microfluidic dynamics at low Reynolds numbers. Now, the more pressing challenge is to overcome issues such as biocompatibility, control over directionality and availability of fuel before nanomotors can be used for theranostic applications within the body. Nanotube and nanowire motors In 2004, Ayusman Sen and Thomas E. Mallouk fabricated the first synthetic and autonomous nanomotor. The two-micron long nanomotors were composed of two segments, platinum and gold, that could catalytically react with diluted hydrogen peroxide in water to produce motion. The Au-Pt nanomotors have autonomous, non-Brownian motion that stems from propulsion via the catalytic generation of chemical gradients. As implied, their motion does not require an external magnetic, electric or optical field for guidance. By creating their own local fields, these motors are said to move through self-electrophoresis. In 2008, Joseph Wang was able to dramatically enhance the motion of Au-Pt catalytic nanomotors by incorporating carbon nanotubes into the platinum segment. Since 2004, different types of nanotube and nanowire based motors have been developed, in addition to nano- and micromotors of different shapes. Most of these motors use hydrogen peroxide as fuel, but some notable exceptions exist: silver halide and silver-platinum nanomotors, for example, are powered by halide fuels, which can be regenerated by exposure to ambient light. Some nanomotors can even be propelled by multiple stimuli, with varying responses. These multi-functional nanowires move in different directions depending on the stimulus (e.g. chemical fuel or ultrasonic power) applied. For example, bimetallic nanomotors have been shown to undergo rheotaxis to move with or against fluid flow by a combination of chemical and acoustic stimuli. 
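To give a sense of the low-Reynolds-number regime mentioned in the introduction above, a rough order-of-magnitude estimate (an illustration with assumed, typical values rather than figures from the sources cited here) for a motor about 1 μm long moving at 10 μm/s in water is

\[
\mathrm{Re} = \frac{\rho v L}{\mu} \approx \frac{(10^{3}\,\mathrm{kg\,m^{-3}}) \, (10^{-5}\,\mathrm{m\,s^{-1}}) \, (10^{-6}\,\mathrm{m})}{10^{-3}\,\mathrm{Pa\,s}} \approx 10^{-5}.
\]

At such small Reynolds numbers viscous forces completely dominate inertia, which is why time-symmetric (reciprocal) strokes produce no net displacement and why the catalytic, bubble-driven and helical designs discussed in the following sections must all break symmetry in some way.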
In Dresden, Germany, rolled-up microtube nanomotors produced motion by harnessing the bubbles in catalytic reactions. Without the reliance on electrostatic interactions, bubble-induced propulsion enables motor movement in relevant biological fluids, but typically still requires toxic fuels such as hydrogen peroxide. This has limited nanomotors to in vitro applications. However, one in vivo application of microtube motors has been described for the first time by Joseph Wang and Liangfang Zhang, using gastric acid as fuel. Recently, titanium dioxide has also been identified as a potential candidate for nanomotors due to its corrosion resistance and biocompatibility. Future research into catalytic nanomotors holds major promise for important cargo-towing applications, ranging from cell-sorting microchip devices to directed drug delivery. Enzymatic nanomotors Recently, there has been more research into developing enzymatic nanomotors and micropumps. At low Reynolds numbers, single-molecule enzymes could act as autonomous nanomotors. Ayusman Sen and Samudra Sengupta demonstrated how self-powered micropumps can enhance particle transportation. This proof-of-concept system demonstrates that enzymes can be successfully utilized as an "engine" in nanomotors and micropumps. It has since been shown that particles themselves will diffuse faster when coated with active enzyme molecules in a solution of their substrate, and, further, that particles coated with active enzymes subject to a surface of their substrate demonstrate directional motor-like motion. Microfluidic experiments have shown that enzyme molecules will undergo directional swimming up their substrate gradient. It has also been shown that catalysis is sufficient to produce directed motion in enzymes. This remains the only method of separating enzymes based on activity alone. Additionally, enzymes in a cascade have also shown aggregation based on substrate-driven chemotaxis. Developing enzyme-driven nanomotors promises to inspire new biocompatible technologies and medical applications. However, several limitations, such as biocompatibility and cell penetration, have to be overcome before these applications can be realized. One of the new biocompatible technologies would be to utilize enzymes for the directional delivery of cargo. A proposed branch of research is the integration of molecular motor proteins found in living cells into molecular motors implanted in artificial devices. Such a motor protein would be able to move a "cargo" within that device, via protein dynamics, similarly to how kinesin moves various molecules along tracks of microtubules inside cells. Starting and stopping the movement of such motor proteins would involve caging ATP in molecular structures sensitive to UV light. Pulses of UV illumination would thus provide pulses of movement. DNA nanomachines, based on changes between two molecular conformations of DNA in response to various external triggers, have also been described. Helical nanomotors Another interesting direction of research has led to the creation of helical silica particles coated with magnetic materials that can be maneuvered using a rotating magnetic field. Such nanomotors are not dependent on chemical reactions to fuel their propulsion. A triaxial Helmholtz coil can provide a directed rotating field in space. Recent work has shown how such nanomotors can be used to measure the viscosity of non-Newtonian fluids at a resolution of a few microns. 
This technology promises the creation of viscosity maps inside cells and the extracellular milieu. Such nanomotors have been demonstrated to move in blood. Recently, researchers have managed to controllably move such nanomotors inside cancer cells, allowing them to trace out patterns inside a cell. Nanomotors moving through the tumor microenvironment have demonstrated the presence of sialic acid in the cancer-secreted extracellular matrix. Current-driven nanomotors (Classical) In 2003, Fennimore et al. presented the experimental realization of a prototypical current-driven nanomotor. It was based on tiny gold leaves mounted on multiwalled carbon nanotubes, with the carbon layers themselves carrying out the motion. The nanomotor is driven by the electrostatic interaction of the gold leaves with three gate electrodes to which alternating currents are applied. Some years later, several other groups demonstrated experimental realizations of different nanomotors driven by direct currents. The designs typically consisted of organic molecules adsorbed on a metallic surface with a scanning tunneling microscope (STM) above them. The current flowing from the tip of the STM is used to drive the directional rotation of the molecule or of a part of it. The operation of such nanomotors relies on classical physics and is related to the concept of Brownian motors. These examples of nanomotors are also known as molecular motors. Quantum effects in current-driven nanomotors Due to their small size, quantum mechanics plays an important role in some nanomotors. For example, in 2020 Stolz et al. showed the crossover from classical motion to quantum tunneling in a nanomotor made of a rotating molecule driven by the STM's current. Cold-atom-based ac-driven quantum motors have been explored by several authors. See also Carbon nanotube Electrostatic motor Molecular motor Nanocar Adiabatic quantum motor Nanomechanics Protein dynamics Synthetic molecular motors Micromotors References External links Berkeley.edu – Physicists build world's smallest motor Nanotube Nanomotor research project Nanomotor Nanotechnology, nanomotor, and nanopump Nanoelectronics
Nanomotor
Materials_science
1,844
47,743,437
https://en.wikipedia.org/wiki/Elephantomyia%20bozenae
Elephantomyia (Elephantomyia) bozenae is an extinct species of crane fly in the family Limoniidae. The species is solely known from the Middle Eocene Baltic amber deposits in the Baltic Sea region of Europe. The species is one of six described from Baltic amber. History and classification Elephantomyia (Elephantomyia) bozenae is known only from the holotype specimen, collection number MP/3338, which is preserved as an inclusion in transparent Baltic amber. As of 2015, the amber specimen was included in the collections of the Polish Academy of Sciences. Baltic amber is recovered from fossil-bearing rocks in the Baltic Sea region of Europe. Estimates of the age range from 37 million years old, for the youngest sediments, to 48 million years old. This age range straddles the middle Eocene, ranging from near the beginning of the Lutetian to the beginning of the Priabonian. E. bozenae is one of six crane fly species in the genus Elephantomyia described from Baltic amber, the others being E. baltica, E. brevipalpa, E. irinae, E. longirostris, and E. pulchella. All six species are placed into the Elephantomyia subgenus Elephantomyia based on the lack of tibial spurs and on several aspects of the wing morphology. The type specimen was first studied by paleoentomologist Iwona Kania, of the University of Rzeszów, whose 2015 type description for the species was published in the journal PLoS ONE. The specific epithet bozenae was coined to honor the biologist Bożena Szala. Description The E. bozenae type specimen is a well-preserved male that is approximately long, not including the rostrum. The head has a rostrum that is long, just over half the length of the fore-wing and longer than the abdomen. The rostrum bears elongate palpi at its tip. Each palpus is composed of four segments, with the basal three segments long and the apical segment short. All four segments host a system of microtrichia. The antennae are small, composed of fifteen segments. They have an elongated scape and a widened pedicel. As the flagellomeres progress from the base to the tip of the antennae, they change from squat and crowded together to elongated, and the apical segment is widened at the tip. All of the flagellomeres host two setae each. The wings are long with a pale brown pterostigma that is oval in shape. The D cell, as designated by the Comstock–Needham system, is notably elongated and narrowed in comparison to all other Baltic amber Elephantomyia. References Limoniidae Eocene insects of Europe Fossil taxa described in 2015 Diptera of Europe Baltic amber Insects described in 2015 Species known from a single specimen
Elephantomyia bozenae
Biology
590
1,071,314
https://en.wikipedia.org/wiki/List%20of%20straight-chain%20alkanes
The following is a list of straight-chain alkanes, the total number of isomers of each (including branched chains), and their common names, sorted by number of carbon atoms. See also Higher alkane List of compounds with carbon numbers 50+ References Alkanes Alkanes
List of straight-chain alkanes
Chemistry
62
238,329
https://en.wikipedia.org/wiki/DotGNU
DotGNU is a decommissioned part of the GNU Project that started in January 2001 and aimed to provide a free software replacement for Microsoft's .NET Framework. The DotGNU project was run by the Free Software Foundation. Other goals of the project are better support for non-Windows platforms and support for more processors. The main goal of the DotGNU project code base was to provide a class library that is 100% Common Language Specification (CLS) compliant. Main development projects Portable.NET DotGNU Portable.NET, an implementation of the ECMA-335 Common Language Infrastructure (CLI), includes software to compile and run Visual Basic .NET, C#, and C applications that use the .NET base class libraries, XML, and Windows Forms. Portable.NET claims to support various instruction set architectures including x86, PPC, ARM, and SPARC. DGEE DotGNU Execution Environment (DGEE) is a web service server. libJIT libJIT is a just-in-time compilation library for development of advanced just-in-time compilation in virtual machine implementations, dynamic programming languages, and scripting languages. It implements an intermediate representation based on three-address code, in which variables are kept in static single assignment form. libJIT has also seen some use in other open source projects, including GNU Emacs, ILDJIT and HornetsEye. Framework architecture The Portable .NET class library seeks to provide facilities for application development. These are primarily written in C#, but because of the Common Language Specification they can be used by any .NET language. Like .NET, the class library is structured into Namespaces and Assemblies. It has additional top-level namespaces including Accessibility and DotGNU. In a typical operation, the Portable .NET compiler generates a Common Language Specification (CLS) image, as specified in chapter 6 of ECMA-335, and the Portable .NET runtime takes this image and runs it. Free software DotGNU points out that it is Free Software, and it sets out to ensure that all aspects of DotGNU minimize dependence on proprietary components, such as calls to Microsoft Windows' GUI code. DotGNU was one of the High Priority Free Software Projects from till . DotGNU and Microsoft's patents DotGNU's implementation of those components of the .NET stack not submitted to the ECMA for standardization has been the source of patent violation concerns for much of the life of the project. In particular, discussion has taken place about whether Microsoft could destroy the DotGNU project through patent suits. The base technologies submitted to the ECMA may be non-problematic. The concerns primarily relate to technologies developed by Microsoft on top of the .NET Framework, such as ASP.NET, ADO.NET, and Windows Forms (see Non standardized namespaces), i.e. parts composing DotGNU's Windows compatibility stack. These technologies are today not fully implemented in DotGNU and are not required for developing DotGNU-applications. In 2009, Microsoft released .NET Micro Framework under Apache License, Version 2.0, which includes a patent grant. However, the .NET Micro Framework is a reimplementation of the CLR and limited subset of the base class libraries meant for use on embedded devices. Additionally, the patent grant in the Apache License would have protected only contributors and users of the .NET Micro Framework—not users and developers of alternative implementations such as DotGNU or Mono. 
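Returning to the libJIT library described above under "Main development projects": the sketch below illustrates the general shape of its C API for building and running a tiny function at run time, closely following the pattern of libJIT's introductory tutorial. It is a minimal, hedged illustration rather than authoritative DotGNU documentation; the exact calls and types should be checked against the libJIT version in use.

/* Minimal libJIT sketch: JIT-compile f(x, y) = x + y and call it. */
#include <jit/jit.h>
#include <stdio.h>

int main(void)
{
    jit_context_t context = jit_context_create();
    jit_context_build_start(context);

    /* Signature: int f(int, int) using the C calling convention. */
    jit_type_t params[2] = { jit_type_int, jit_type_int };
    jit_type_t signature =
        jit_type_create_signature(jit_abi_cdecl, jit_type_int, params, 2, 1);
    jit_function_t function = jit_function_create(context, signature);

    /* Body in libJIT's three-address-code form: sum = x + y; return sum. */
    jit_value_t x = jit_value_get_param(function, 0);
    jit_value_t y = jit_value_get_param(function, 1);
    jit_value_t sum = jit_insn_add(function, x, y);
    jit_insn_return(function, sum);

    jit_function_compile(function);
    jit_context_build_end(context);

    /* Apply the compiled function to concrete arguments. */
    jit_int a = 3, b = 4, result;
    void *args[2] = { &a, &b };
    jit_function_apply(function, args, &result);
    printf("3 + 4 = %d\n", (int) result);

    jit_context_destroy(context);
    return 0;
}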
In 2014, Microsoft released Roslyn, the next generation official Microsoft C# compiler, under the Apache License. Later that year, Microsoft announced a "reboot" of the official .NET Framework. The framework would be based on .NET Core, including the official runtime and standard libraries released under the MIT License and a patent grant explicitly protecting recipients from Microsoft-owned patents regarding .NET Core. See also Comparison of application virtual machines Portable.NET – A portable version of DotGNU toolchain and runtime Mono – A popular free software implementation of Microsoft's .NET Common Language Runtime Shared Source Common Language Infrastructure – Microsoft's shared source implementation of .NET, previously codenamed Rotor References External links Project homepage Article '2001 – The Year When DotGNU Was Born' A 2003 interview with Norbert Bollow of DotGNU .NET implementations Computing platforms GNU Project software
DotGNU
Technology
902
12,527,680
https://en.wikipedia.org/wiki/Little%20pocket%20mouse
The little pocket mouse (Perognathus longimembris) is a species of rodent in the family Heteromyidae. It is found in Baja California and Sonora in Mexico and in Arizona, California, Idaho, Nevada, Oregon and Utah in the United States. Its natural habitat is subtropical or tropical dry lowland grassland. It is a common species and faces no particular threats and the IUCN has listed it as being of "least concern". Five mice of this species travelled to and orbited the Moon 75 times in an experiment on board the Apollo 17 command module in December 1972. Four of the mice survived the trip. Six other little pocket mice were sent into orbit with Skylab 3 in July 1973, though these animals died only 30 hours into the mission due to a power failure. Behavior This small mouse, with a long tail, inhabits arid and semiarid habitats with grasses, sagebrush and other scrubby vegetation. It is nocturnal and has a short period of activity for the first two hours after sunset, and then sporadic activity through the rest of the night. It sleeps in winter and is only active between April and November with numbers building up rapidly in the spring, peaking in June and July. It forages for seeds, plant material and small invertebrates which it carries back to its burrow in its cheek pouches. Status The little pocket mouse is common within most of its range although it is scarce in Baja California. The population appears to be steady and no particular threats have been identified for this species so the International Union for Conservation of Nature has assessed it as being of "least concern". See also Pacific pocket mouse (Perognathus longimembris pacificus) — an endangered subspecies from coastal Southern California. References Perognathus Fauna of the Southwestern United States Fauna of the Baja California Peninsula Rodents of Mexico Rodents of the United States Mammals described in 1875 Least concern biota of North America Least concern biota of the United States Space-flown life Taxonomy articles created by Polbot Taxa named by Elliott Coues
Little pocket mouse
Biology
413
20,016,887
https://en.wikipedia.org/wiki/History%20of%20the%20diesel%20car
Diesel engines began to be used in automobiles in the 1930s. Mainly used for commercial applications early on, they did not gain popularity for passenger travel until their development in Europe in the 1950s. After reaching a peak in popularity worldwide around 2015, in the aftermath of Dieselgate, the diesel car rapidly fell out of favor with consumers and regulators. History Early 20th century Production diesel car history started in 1933 with Citroën's Rosalie, which featured a diesel engine option (the 1,766 cc 11UD engine) in the Familiale (estate or station wagon) model. The Mercedes-Benz 260 D and the Hanomag Rekord were introduced soon thereafter, in 1936. Three years later, a modified Hanomag Rekord was used to set a world record for Diesel cars – it reached a top speed of . Immediately after World War II, and throughout the 1950s and 1960s, diesel-powered cars began to gain limited popularity, particularly for commercial applications, such as ambulances, taxis, and station wagons used for delivery work. Mercedes-Benz offered a continuous stream of diesel-powered taxis, beginning in 1949 with their 170 D powered by the OM 636 engine. The engine was carried over to the 170 D's successor – the 180 D – in 1953. Later, in 1958 their OM 621 engine was introduced in the 190 D. This 1.9 L engine produced at 4,000/min. Beginning in 1959, Peugeot offered the 403D with their TMD-85 four-cylinder engine of 1.8 L and , followed in 1962 by the 404D with the same engine. In 1964, the 404D became available with the improved XD88 four-cylinder engine of 2.0 L and . Other cars offered with diesel power during the 1950s included the Austin A60 Cambridge, Isuzu Bellel, Fiat 1400-A, Standard Vanguard, and, briefly, the Borgward Hansa. In 1967, Peugeot introduced the world's first compact, high-speed diesel car, the Peugeot 204BD. Its 1.3 L XL4D engine produces at 5,000 rpm. Following the 1970s oil crisis (1973 and 1979), Volkswagen introduced their first diesel car, the VW Golf, with a 1.5 L naturally aspirated indirect-injection engine which was a redesigned (dieselised) version of a gasoline engine. Mercedes-Benz tested turbodiesels in cars (e.g. by the Mercedes-Benz C111 experimental and record-setting vehicles) and the first production turbo diesel cars were introduced in 1978, being the 3.0 5-cylinder 115 hp (86 kW) Mercedes 300 SD, available only in North America, and the 2.5-litre 4-cylinder Peugeot 604. A big step forward for mass-market diesel cars came in 1982 when PSA Peugeot Citroën introduced the XUD engine in the Peugeot 305, Peugeot 205 and Talbot Horizon. Diesel Car magazine considers it the class leading automotive diesel engine until the mid-1990s. The 1988 Citroën BX and the 1989 Peugeot 405 (both powered by the XUD engine) were among the earliest mass-market diesel cars able to achieve petrol engine standards. Diesel Car magazine said of the Citroën BX "We can think of no other car currently on sale in the UK that comes anywhere near approaching the BX Turbo's combination of performance, accommodation and economy". German engine and car manufacturer BMW announced its first series-production diesel car, the 524td on the 1981 Frankfurt IAA. It was presented on the 1983 Frankfurt IAA with a 85 kW BMW M21 turbodiesel engine, giving it a top speed of 180 km/h. Ronan Glon considers it the fastest series-production diesel car of its time, being slightly faster than Daimler-Benz's OM 617-powered Mercedes-Benz W 123 300 D Turbodiesel. 
In 1986, BMW presented an electronic engine control unit for the M21 engine, being the first manufacturer to use this technology in a diesel passenger car. Diesels carried a 2.5% share of the European Community market in 1973. Following the fuel crisis, this share increased to 4.1% in 1975. This more than doubled (to 8.6%) by 1980, and by 1983 diesels represented 11% of new car sales in the EU. Motor vehicle diesel engines in North America have typically only been used in trucks, commercial vehicles and buses. Jeep had offered a Perkins Diesel option for its models in the early 1960s. Chrysler offered these engines as well – although mainly for the European market. Oldsmobile released a 350 in3 (5.7 L) V8 diesel engine, starting in 1977. Most General Motors divisions (Buick, Oldsmobile, Pontiac, Chevrolet and even Cadillac) had received this engine by the 1980 model year, and it continued to be sold until the engine was discontinued in 1985. The 350 in3 diesel engine proved to be unreliable and gave diesel cars a bad reputation in the United States. 1990s-2015 diesel boom Diesels steadily gained in acceptance with private buyers from the 1970s into the 1990s. Having originally been mainly marketed to commercial users such as taxi drivers, European diesel sales increased steadily and reached 17.3 per cent of the overall European market by 1992. As a way to curb carbon dioxide emissions, sales of diesel vehicles in Europe were incentivized by the ACEA agreement. The peak of diesel popularity was reached in 2015, with 52% of new cars sold in Europe being diesel powered. The only other major car market where diesel cars were popular was India. Driven by cheap subsidized diesel fuel, diesel cars there had a peak market share of 47% around 2012. Meanwhile, diesel market share in the United States and China remained low. In China, diesel cars are associated with heavy goods vehicles in consumers' minds, and environmental regulations kept diesel cars pricey to maintain. In South Korea, diesel cars became popular after the government eased emissions regulations in 2005. Oldsmobile offered the world's first V8 diesel engine for passenger cars in 1978. Due to cost-cutting measures, the Oldsmobile V8 diesel engine was a dismal failure and soured American perceptions of diesel engines for many years. A V8 diesel engine was not offered again until 1999, when Mercedes-Benz introduced the 4-litre OM628 V8 diesel engine for its passenger vehicles. Audi followed in 2003 with its 4-litre V8 TDI. Mercedes-Benz ended production in 2010, leaving Audi as the exclusive manufacturer of V8 diesel engines to this day. Volkswagen Group introduced the world's first V10 and V12 diesel engines for passenger vehicles. The 5-litre V10 was offered in the Volkswagen Phaeton V10 TDI (2002–2006) and the Volkswagen Touareg V10 TDI (2002–2010). The 6-litre V12 was exclusive to the Audi Q7 V12 TDI (2008–2012). Developments The Fiat Croma TD-i.d., introduced in 1987, was the first turbocharged direct-injection diesel car, followed one year later by the Austin Rover Montego. The Audi 100, however, pioneered electronic control of the engine, while the Fiat and Austin had Bosch mechanically controlled injection. Electronic control of direct injection made a real difference in terms of emissions, refinement and power. 
All earlier-generation direct-injection car diesel engines benefit greatly from the use of biodiesel fuel, which reduces emissions and greatly improves refinement without engine modifications, provided they use compatible 'Viton'-type rubber in their fuel systems. The leading diesel car manufacturers (Mercedes-Benz, BMW, Peugeot/Citroën, Fiat, Alfa Romeo, Volkswagen Group) are the same ones that pioneered these various developments. There were also small diesel engines produced in England by British Leyland and Perkins. For reasons of economy, the petrol BMC "B" series engine was converted to diesel and produced in capacities of 1.5 and 1.8 litres. Perkins produced the 4.99, 4.107 and 4.108 engines, all of which were extremely reliable. Later, BL produced the five-main-bearing "O" series engine, which was extremely strong. Petrol turbo variants could make 200 hp, and the engine was ideal for converting to diesel. In fact, the 1988 Austin-Rover MDi unit (also known as the 'Perkins Prima') was developed by Perkins Engines of Peterborough, who have designed and built high-speed diesels since the 1930s. It is not easy to make a lightweight and powerful top-class diesel engine owing to the immense pressures and heat produced within the engine. These problems were solved by VM Motori, and the engines were apparently so good that Rover, Ford and Jeep bought them. Notable features of the engines were the tunnel-bore block and separate cylinder heads to allow for expansion. In 1997, the first common rail diesel passenger car was introduced, the Alfa Romeo 156. In 2004, Honda released their first diesel engine, the N22A, branded as the i-CTDi; it first featured in the Honda Accord. The engine featured an aluminium block, a DOHC chain-driven valvetrain, common rail direct injection and a variable-geometry turbocharger. In spring 2005, Mercedes-Benz unveiled their first application of a mass-produced aluminium-block diesel engine for passenger vehicles and commercial use. Aluminium had traditionally been considered of insufficient strength and temperature resistance to withstand diesel applications. Its first use was in 2006 model-year E-Class sedan, ML-Class and GL-Class vehicles. The 3.0-litre V6 engine is similar in weight to the five-cylinder it replaced, and considerably lighter than the in-line six-cylinder it also replaced. 2015-present: decline of diesel cars Since the numerous diesel emissions scandals of recent years, the most high-profile of which was the Dieselgate scandal of 2015, it has been revealed that the levels of toxic emissions coming from diesel cars are higher and pose a greater risk to human health than those of vehicles powered by other means. In response, the image of the diesel car took a hit with consumers, resale values of diesel cars dropped, and hundreds of cities in Europe, including Paris, Hamburg and Madrid, started banning older diesel cars to curb air pollution. According to the German environment agency, diesel cars have a 39% share in German urban nitrogen dioxide pollution levels. In comparison, the share of diesel-powered buses and lorries is significantly lower. From a peak of 52% of new cars sold in Europe being diesel powered, this number had declined to 36% by 2018. In September 2020, the European market share of electric cars was higher than that of diesel cars. In India, the BS6 emissions standard and bans on older diesel vehicles have also caused diesel market share to slip from 58% in 2013 to 29% in 2020. 
In West Africa, diesel cars are still the majority of vehicles on the road, as emissions standards are less stringent than in the rest of the world and old diesel cars may be imported from elsewhere. A 2018 report found that the majority of a sample of 160 cars exported from the Netherlands, destined for West Africa and Libya, did not meet Euro IV (2005) emissions norms. One exception to the decline of diesel cars is Japan, where diesel cars remained at 17% market share in 2023 and fall under "next-generation vehicles", a program to promote energy-efficient cars, subject to tax exemptions. Several major car manufacturers have announced the end of development of new diesel engines for cars, including Volvo in 2017, Mitsubishi in 2019, and Renault and Hyundai in 2021. However, BMW will continue developing new 4- and 6-cylinder diesel engines, discontinuing only the 3-cylinder B37 diesel engine. Diesel engine vehicle racing Although the weight and lower output of a diesel engine tend to keep them away from automotive racing applications, there are many diesels being raced in classes that call for them, mainly in truck racing and tractor pulling, as well as in types of racing where these drawbacks are less severe, such as land speed record racing or endurance racing. Even diesel-engined dragsters exist, despite the diesel's drawbacks of weight and low peak rpm, specifications central to performance in this sport. However, in 2006, the new Audi R10 TDI LMP1 entered by Joest Racing became the first diesel-engined car to win the 24 Hours of Le Mans. History As early as 1931, Clessie Cummins installed his diesel in the Cummins "Diesel Special" race car, hitting at Daytona and at the Indianapolis 500 race, where Dave Evans became the first driver to complete the Indianapolis 500 without making a single pit stop, completing the full distance on the lead lap and finishing 13th, relying on torque and fuel efficiency to overcome weight and low peak power. In 1933, a 1925 Bentley with a Gardner 4LW engine was the first diesel-engined car to take part in the Monte Carlo Rally when it was driven by Lord Howard de Clifford. It was the leading British car and finished fifth overall. In 1952, Fred Agabashian in a Cummins diesel won the pole at the Indianapolis 500 race with a turbocharged 6.6-liter diesel car, setting a record for pole position lap speed, . Don Cummins and his chief engineer Neve Reiners recognized that the low center of gravity of the flat engine configuration (designed to lie beneath the floor of a bus) plus the power advantage gained by the novel use of Elliott turbocharging would be a winning combination. At the start, a slow pace lap (reportedly less than ) apparently induced what is now referred to as "turbo lag" and badly hampered the throttle response of the Cummins Diesel. Although Agabashian found himself in eighth place before reaching the first turn, he moved up to fifth in a few laps and was running competitively (albeit well back in the field after a tire change) until the badly situated air intake of the car swallowed enough debris from the track to disable the turbocharger at lap 71; he finished 27th. In the 1990s, as rule makers supported the concept, BMW and Volkswagen raced diesel touring cars, with BMW winning the 1998 24 Hours Nürburgring with a 320d against other factory-entered diesel competition from VW and about 200 conventionally powered cars, mainly by being able to drive very long stints. 
Alfa Romeo even organized a racing series with their Alfa Romeo 147 1.9 JTD models. In 2006, a BMW 120d repeated a similar result, scoring 5th in a field of 220 cars, many of them much more powerful, against significantly stronger competition than in 1998. The VW Touareg entries in the Dakar Rally for 2005 and 2006 were powered by VW's own line of TDI engines, in order to challenge for the first overall diesel win there. Meanwhile, the five-time 24 Hours of Le Mans winner, the Audi R8 race car, was replaced by the Audi R10 TDI in 2006, which is powered by a V12 TDI common rail diesel engine, mated to a 5-speed gearbox instead of the 6-speed used in the R8, to handle the extra torque produced. The gearbox was considered the main problem, as earlier attempts by others had failed due to the lack of suitable transmissions that could stand the torque long enough. After winning the 12 Hours of Sebring in 2006 with their diesel-powered Audi R10 TDI LMP1, Audi obtained the overall win at the 2006 24 Hours of Le Mans, too. This was the first time a sports car could compete for overall victories on diesel fuel against cars powered with regular fuel or methanol and bio-ethanol. However, the significance of this is slightly lessened by the fact that the ACO/ALMS race rules encourage the use of alternative fuels such as diesel. The winning car also bettered the post-1990 course configuration lap record by one lap, at 380 laps. However, this fell short of the all-time distance record set in 1971 by over . Audi again triumphed at Sebring in 2007. It had both a speed and a fuel economy advantage over the entire field, including the Porsche RS Spyders, purpose-built gasoline-powered race cars. Audi's diesels won the 2007 24 Hours of Le Mans again, against competition from the diesel-powered Peugeot 908 HDi FAP racer. In 2006, the JCB Dieselmax broke the diesel land speed record, posting an average speed of over . The vehicle used "two diesel engines that have a combined total of 1,500 horsepower (1120 kilowatts). Each is a 4-cylinder, 4.4-liter engine used commercially in a backhoe loader." In the 2008 BTCC, Jason Plato and Darren Turner raced factory-sponsored SEAT Leon TDIs with some success against a variety of gasoline-powered competitors. See also Diesel cycle Diesel generator Diesel motorcycle Dieselisation Forced induction Gasoline direct injection Hesselman engine History of the internal combustion engine Hybrid power source Indirect injection Junkers Jumo 205—The more successful of the first series of production diesel aircraft engines. List of countries banning fossil fuel vehicles Napier Deltic—a high-speed, lightweight diesel engine used in fast naval craft and some railway locomotives. SVO—Straight Vegetable Oil—alternative fuel for diesel engines. Wärtsilä-Sulzer RTA96-C—world's most powerful, most efficient and largest diesel engine. WVO—Waste Vegetable Oil—filtered, alternative fuel for diesel engines. References Further reading Early history of the diesel car and early 1920s racing. "Diesel Engine In Racing Car Develops High Speed", May 193, Popular Mechanics. History of the only known diesel race car. Cars by period Automotive industry Diesel car History of the diesel engine
History of the diesel car
Technology
3,605
56,582,267
https://en.wikipedia.org/wiki/Distributed%20ledger%20technology%20law
Distributed ledger technology law ("DLT law") (also called blockchain law, Lex Cryptographia or algorithmic legal order) is an emerging field of law, not yet formally defined and recognized, arising from the recent dissemination of distributed ledger technology applications in business and governance environments. Smart contracts that were created through the interaction of lawyers and developers, and that are also intended to be enforceable legal contracts, are called smart legal contracts. DLT and law issues Issue of situs and place for dispute resolution In the legal context, DLT and smart contracts are distinct and face their own problems and challenges. The issue of situs is an example that relates to DLT rather than to smart contracts. Private international law and the legislation of various jurisdictions require identifying the location of an asset or the place of an agreement in order to resolve conflict-of-law problems and determine the applicable governing law. "However, the distribution of the register across nodes in multiple jurisdictions raises a seemingly intractable problem – under current legal principles at least – as to where the situs should be." Holding something on DLT, including a smart contract or title to an asset, does not isolate it from the legal system and laws of the respective jurisdiction. "Some blockchain enthusiasts may have misinterpreted the statement 'code is law' as implying that code can supersede the law or that decentralised networks create their own legal regimes." In the case of a dispute between the parties to a smart contract on DLT, the question arises of where the distributed ledger is located, in order to determine the place for dispute resolution. "Blockchain also poses questions concerning the ability to identify the parties to a transaction, to the extent a system utilizing this technology remains anonymous, which may rise a host of additional issues related to dispute resolution." Issue of legal validity and update of code on DLT The absence of a legal compliance mechanism on DLT, the self-executing nature of code on DLT, and the limited ability to update the code if the law changes create a number of legal issues. There are several possible ways of addressing these issues. "One method could be a system in which the relevant jurisdiction creates a publicly available database and application programming interface (API) of relevant legal provisions. These would be provisions related to the terms of the contract. The smart contract would call these terms and would be able to update those provisions terms in accord with the jurisdiction's update of the database." On the more conservative side of the spectrum of DLT and law interaction are two solutions proposed by Alexander Savelyev: "(1) To introduce the concept of a 'Superuser' for government authorities, which will have a right to modify the content of Blockchain databases in accordance with a specified procedure in order to reflect the decisions of state authority. (2) To enforce decisions of state authorities in 'offline' mode by pursuing the specific users and forcing them to include changes in Blockchain themselves as well as by using traditional tort claims, unjust enrichment claims, and specific performance claims." Issue of automation of smart contracts and 'Oracles' To facilitate self-execution, a smart contract needs access to sources of event information through which the execution of its terms and conditions is assessed. 
"In the interest rate swap example, the distributed ledger must have access to assets of the parties in order to fulfil the parties' payment obligations, and it must have access to a provider of interest rate information." The solutions to the issue of access to assets vary and may be solved through locking and release of assets in smart contract as it is performed through use of cryptocurrency Ether on Ethereum blockchain or by introducing new mechanism of access to assets like 'cash states' proposed by Corda distributed ledger. The solution to the issue of access to information may require use of so-called 'Oracles' – an external party (or a machine) providing the judgement to determine whether or not respective conditions under the agreement have been met. "Turning again to the interest rate swap example, an oracle could be used to provide interest rate information on a payment calculation date. The oracle's digital signature would be retained on the distributed ledger so that parties could review the payment process and confirm that payments were made correctly." Areas of law Ubiquitous dissemination of information technology and the Internet led to a discussion of two opposite legal theories of the regulation of cyberspace. According to The Law of the Horse theory proposed by Frank H. Easterbrook, general principles of law governing property, transactions and torts apply to any relationship whether in case of the horse or cyberspace and there is no reason to invent new fields of law designated for each. This theory was challenged by Lawrence Lessig, who argued that in case of cyberspace the code may be considered as another way of regulation and therefore the cyberspace may be treated more widely than just another area of relations regulated by conventional legal principles. Employing more liberal approach the DLT law may mean the body of law "characterized by a set of rules administered through self-executing smart contracts and decentralized (and potentially autonomous) organizations". As of the beginning of 2018 the DLT law does not constitute a separate field of law rather it encompasses aspects of corporate, contract, investment, banking and finance law. According to conservative approach the DLT law may be considered as a part of existent area of law, which may be applied to regulate different aspects of DLT use and new kind of legal relations on blockchain, such as issue of authorisation (electronic signature), admissibility of blockchain evidence in court, status of cryptocurrency and regulation of initial coin offering, use of smart contracts, status of DAO (decentralized autonomous organization) and other. Legal status in the US of blockchain-based records and smart contracts Several states in the United States have enacted legislation providing a framework for business and legal application of blockchain technology and enforceability of smart contracts: Vermont, Arizona, Nevada, Delaware, and Illinois. 
Vermont On 2 June 2016, Vermont became the first state to recognise blockchain-based records as having legal bearing in a court under the Vermont Rules of Evidence, defining blockchain technology as a "mathematically secured, chronological, and decentralized consensus ledger or database, whether maintained via Internet interaction, peer-to-peer network, or otherwise". Arizona In March 2017, Arizona's Electronic Transactions Act (the AETA) was amended by HB 2417 to clarify that "electronic records, electronic signatures, and smart contract terms secured through blockchain technology and governed under UCC Articles 2, 2A and 7 will be considered to be in an electronic form and to be an electronic signature under AETA." HB 2417 also provides a definition of blockchain technology as a "distributed, decentralized, shared and replicated ledger, which may be public or private, permissioned or permissionless, or driven by tokenized crypto economics or tokenless" and a definition of smart contract as an "event driven program, with state, that runs on a distributed, decentralized, shared and replicated ledger that can take custody over and instruct transfer of assets on that ledger". The state also identifies areas where blockchain technology should not be used. For instance, the new law adopted in 2017 prohibits the use of blockchain technology to locate or control firearms. Nevada In June 2017, similar legislation was enacted in Nevada. In addition, Nevada was the first state to ban local governments from taxing the use of "blockchain". With regard to the definition of blockchain, the Nevada Senate defines it as "an electronic record created by the use of a decentralized method by multiple parties to verify and store a digital record of transactions which is secured by the use of a cryptographic hash of previous transaction information". Delaware On August 1, 2017, Delaware's blockchain law became effective, amending the Delaware General Corporation Law to explicitly permit the use of distributed ledger technology in the administration of Delaware corporate records, including records of stock and stockholders. "Before this new law was adopted, there was nothing specifically stopping a Delaware corporation from using blockchain technology to keep track of its stockholders, but there was also a great deal of regulatory uncertainty." Illinois On January 31, 2018, Illinois described its role in the development of the blockchain ecosystem as one which "supports the distinct needs of the respective ecosystem stakeholders: entrepreneurs, capital providers, developers, governments, and academics to support and encourage the creation and growth of 15 blockchain companies in Illinois". To accomplish this mission, the Illinois Blockchain Initiative created the role of State of Illinois Blockchain Business Liaison, which is responsible for engaging these stakeholders within the ecosystem to identify and work to resolve their respective needs. Over the past year, the Illinois Blockchain Initiative has compiled a database of over 200 blockchain and distributed ledger technology pilots, projects and strategies announced by public sector entities. The database is an overview of how governments at various levels globally are employing blockchain technology in their efforts to govern, improve the competitiveness of their economies and deliver high-quality services in a more efficient manner. 
The public sector is one of the most active sectors exploring blockchain technology, for a wide variety of use cases. Adoption of the technology in the public sector is accelerating at an extraordinary pace. See also Markets in Crypto-Assets (EU) References Law by issue Blockchain entities Government by algorithm
Distributed ledger technology law
Engineering
1,946
60,415,782
https://en.wikipedia.org/wiki/Generalized%20Cohen%E2%80%93Macaulay%20ring
In algebra, a generalized Cohen–Macaulay ring is a commutative Noetherian local ring $(A, \mathfrak{m})$ of Krull dimension d > 0 that satisfies any of the following equivalent conditions: For each integer $i = 0, \dots, d - 1$, the length of the i-th local cohomology of A is finite: $\operatorname{length}_A(\operatorname{H}^i_{\mathfrak{m}}(A)) < \infty$. $\sup_Q \left( \operatorname{length}_A(A/Q) - e(Q) \right) < \infty$, where the sup is over all parameter ideals $Q$ and $e(Q)$ is the multiplicity of $Q$. There is an $\mathfrak{m}$-primary ideal $Q$ such that for each system of parameters $x_1, \dots, x_d$ in $Q$, $\operatorname{length}_A(A/(x_1, \dots, x_d)A) - e((x_1, \dots, x_d); A) = \sum_{i=0}^{d-1} \binom{d-1}{i} \operatorname{length}_A(\operatorname{H}^i_{\mathfrak{m}}(A))$. For each prime ideal $\mathfrak{p}$ of the completion $\widehat{A}$ that is not $\widehat{\mathfrak{m}}$, $\dim \widehat{A}_{\mathfrak{p}} + \dim \widehat{A}/\mathfrak{p} = d$ and $\widehat{A}_{\mathfrak{p}}$ is Cohen–Macaulay. The last condition implies that the localization $A_{\mathfrak{p}}$ is Cohen–Macaulay for each prime ideal $\mathfrak{p} \neq \mathfrak{m}$. A standard example is the local ring at the vertex of an affine cone over a smooth projective variety. Historically, the notion grew out of the study of a Buchsbaum ring, a Noetherian local ring A in which $\operatorname{length}_A(A/Q) - e(Q)$ is constant for parameter ideals $Q$; see the introduction of the references. Notes References Ring theory
Generalized Cohen–Macaulay ring
Mathematics
190
1,358,791
https://en.wikipedia.org/wiki/Nicolae%20Popescu
Nicolae Popescu (; 22 September 1937 – 29 July 2010) was a Romanian mathematician and professor at the University of Bucharest. He also held a research position at the Institute of Mathematics of the Romanian Academy, and was elected a corresponding member of the Romanian Academy in 1997. He is best known for his contributions to algebra and the theory of abelian categories. From 1964 to 2007 he collaborated with Pierre Gabriel on the characterization of abelian categories; their best-known result is the Gabriel–Popescu theorem, published in 1964. His areas of expertise were category theory, abelian categories with applications to rings and modules, adjoint functors, limits and colimits, the theory of sheaves, the theory of rings, fields and polynomials, and valuation theory. He also had interests and published in algebraic topology, algebraic geometry, commutative algebra, K-theory, class field theory, and algebraic function theory. Biography Popescu was born on September 22, 1937, in Strehaia-Comanda, Mehedinți County, Romania. In 1954 he graduated from the Carol I High School in Craiova and went on to study mathematics at the University of Iași. In his third year of studies he was expelled from the university, having been deemed "hostile to the regime" for remarking that "the achievements of American scientists are also worthy of consideration." He then went back home to Strehaia, where he worked for a year on a collective farm, after which he was admitted in 1959 to the University of Bucharest, only to start anew as a freshman. Popescu earned his M.S. degree in mathematics in 1964, and his Ph.D. degree in mathematics in 1967, with the thesis Krull–Remak–Schmidt Theorem and Theory of Decomposition written under the direction of . He was awarded a D. Phil. degree (Doctor Docent) in 1972, also by the University of Bucharest. While still a student, Popescu focused on category theory. He first approached the general theory, with its connections to homological algebra and algebraic topology, then shifted his focus to the theory of abelian categories, becoming one of the main promoters of this theory in Romania. He carried out mathematical research at the Institute of Mathematics of the Romanian Academy in the Algebra research group, and also had international collaborations on three continents. He shared many moral, ethical, and religious values with Alexander Grothendieck, who visited the Faculty of Mathematics in Bucharest in 1968. Like Grothendieck, he had a long-standing interest in category theory and number theory, and supported promising young mathematicians in his fields of interest. He also promoted the early developments of category theory applications in relational biology and mathematical biophysics/mathematical biology. Academic positions Popescu was appointed as a lecturer at the University of Bucharest in 1968, where he taught graduate students until 1972. Starting in 1964 he also held a research appointment at the Institute of Mathematics of the Romanian Academy. The institute was closed in 1976 by order of Nicolae Ceaușescu (for reasons related to his daughter Zoia Ceaușescu, who had been hired at the institute two years before), but was reopened in 1990, after the Romanian Revolution. 
Publications Between 1962 and 2008, Popescu published more than 102 papers in peer-reviewed mathematics journals, several monographs on the theory of sheaves, and several books on abelian category theory and abstract algebra. In a Grothendieck-like, energetic style, he initiated and provided scientific leadership to several seminars on category theory, sheaves and abstract algebra, which resulted in a continuous stream of high-quality mathematical publications in international, peer-reviewed mathematics journals by several members participating in his seminar series. Personal life Popescu died in Bucharest on July 29, 2010. He is survived by his wife, Professor Dr. (a mathematician, poet, literary translator and editor), and their three children, one of whom, , is a politician. Recognition In 1971 Popescu received the Simion Stoilow Prize in Mathematics of the Romanian Academy. He was elected President of the Romanian Mathematical Society in 1990 and a corresponding member of the Romanian Academy in 1997. On the 80th anniversary of his birth, the Faculty of Mathematics and Informatics at the University of Bucharest and the Institute of Mathematics of the Romanian Academy organized a conference in his memory. Notes 1937 births 2010 deaths People from Strehaia Carol I National College alumni University of Bucharest alumni Academic staff of the University of Bucharest Romanian scientists Romanian inventors 20th-century Romanian mathematicians 21st-century Romanian mathematicians Corresponding members of the Romanian Academy Algebraists Category theorists
Nicolae Popescu
Mathematics
935
56,758,195
https://en.wikipedia.org/wiki/K2-141b
K2-141b (also designated EPIC 246393474.01) is a massive rocky exoplanet orbiting extremely close to the K-type orange main-sequence star K2-141. The planet was first discovered by the Kepler space telescope during its K2 "Second Light" mission and later observed by the HARPS-N spectrograph. It is classified as an ultra-short period planet (USP) and is confirmed to be terrestrial in nature. Its high density implies a massive iron core taking up between 30% and 50% of the planet's total mass. Characteristics Mass and radius Like the majority of known exoplanets, K2-141b was detected using the transit method, in which a planet blocks a tiny fraction of its star's light as it passes between our line of sight and its host. This method is only able to determine the radius of the planet, not its mass. However, K2-141b was also detected by the radial velocity method using the HARPS-N spectrograph. Therefore, its mass could also be determined along with its radius. The planet is classified as a Super-Earth, being significantly larger and more massive than Earth but not as large as the ice giants Uranus and Neptune. K2-141b has a radius of 1.51 , below the 1.6 threshold where most planets are expected to accumulate thick hydrogen and helium atmospheres, transforming them into mini-Neptunes. The planet's mass confirms that it is rocky. It has a mass of 5.08 , which gives K2-141b a high density of 8.2 g/cm3, about 1.48 times the density of Earth. This high density implies a composition with a large iron core taking up about 30% to 50% of the planet's total mass. Orbit K2-141b has one of the shortest known orbital periods of any confirmed exoplanet. With an orbital period of only 6.7 hours, it is the shortest-period planet known to date with a precisely determined mass. Only a few planets, including those around Kepler-70, have shorter orbital periods. At this proximity, K2-141b is most likely tidally locked to its host star, meaning that the same side of the planet always faces the star. It has a semi-major axis of 0.00716 AU. For comparison, Mercury's perihelion is 0.307499 AU, more than 41 times farther away from the Sun. Atmosphere and climate Despite its terrestrial nature, K2-141b is far from habitable. Its extremely close proximity to its host star has resulted in an equilibrium temperature of about . However, the actual temperature is probably much higher. About two-thirds of K2-141b faces perpetual daylight. The night side experiences frigid temperatures of below . The day side of the exoplanet, at an estimated , is hot enough to not only melt rocks but vaporize them as well. A low albedo of around 0.3 means most light that hits the planet is absorbed, adding to the heat of the day side. K2-141b is believed to have both an atmosphere and oceans of magma, which are likely tens of kilometers deep. The makeup of the atmosphere is unknown, but it likely consists of vaporized metals that are common in solid form on Earth. The atmosphere is believed to have extreme wind speeds of over 1.75 kilometers per second. Temperatures are high enough that the magma in the oceans can vaporize into the atmosphere. The mineral vapor formed by evaporated rock is swept to the frigid night side by supersonic winds, and rocks "rain" back down into a magma ocean. The resulting currents flow back to the hot day side of the exoplanet, where rock evaporates once more. 
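As a quick plausibility check on the mass, radius and density figures given above, the bulk density follows directly from the measured values of 5.08 Earth masses and 1.51 Earth radii. A minimal sketch of the arithmetic in Python (the function name is illustrative, not from any published pipeline):

```python
import math

# Rough check of K2-141b's bulk density from its measured mass
# (5.08 Earth masses) and radius (1.51 Earth radii).
M_EARTH_KG = 5.972e24  # Earth's mass in kilograms
R_EARTH_M = 6.371e6    # Earth's mean radius in metres

def bulk_density_g_cm3(mass_earths, radius_earths):
    """Mean density in g/cm^3 of a planet given in Earth units."""
    mass_kg = mass_earths * M_EARTH_KG
    radius_m = radius_earths * R_EARTH_M
    volume_m3 = (4.0 / 3.0) * math.pi * radius_m**3
    return (mass_kg / volume_m3) / 1000.0  # kg/m^3 -> g/cm^3

rho = bulk_density_g_cm3(5.08, 1.51)
print(f"bulk density: {rho:.1f} g/cm^3")        # ~8.1 g/cm^3
print(f"relative to Earth: {rho / 5.51:.2f}x")  # ~1.48x
```

Run as written, this reproduces the roughly 8 g/cm3 figure and the factor of about 1.48 relative to Earth's mean density of 5.51 g/cm3, consistent with the iron-rich composition described above.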
If the planet's atmosphere has high levels of sodium, then solid sodium might slowly slide towards the planet's oceans, similarly to how glaciers move on Earth. Host star K2-141 is a K5 main-sequence star about 61 parsecs (202 light years) away, in the direction of the constellation Pisces. Based on the spectral type (K5/6 D) of the star, the star's colour is orange. It has a radius of 0.681 ±0.018 and a mass of 0.708 ±0.028 . It has a temperature of 4,599 ±79 K and is between 1.6 and 12.9 billion years old. For comparison, the Sun has a temperature of 5778 K and is 4.5 billion years old. See also COROT-7b Kepler-10b Kepler-78b Lists of exoplanets List of directly imaged exoplanets List of largest exoplanets List of nearest exoplanets WASP-47e References Exoplanets discovered by K2 Exoplanets discovered in 2018 Super-Earths Pisces (constellation)
K2-141b
Astronomy
1,035
43,628,949
https://en.wikipedia.org/wiki/Alan%20D.%20Roberts
Alan D. Roberts is a Tun Abdul Razak Research Centre (TARRC) scientist noted for his contributions to understanding contact phenomena in elastomers, and in particular the JKR equation. Education Roberts completed his Doctor of Philosophy degree in 1968, having worked in the Cavendish Laboratory at the University of Cambridge under the supervision of tribologist David Tabor. Career His 1971 paper with Kevin Kendall and Kenneth L. Johnson forms the basis of modern theories of contact mechanics. In 1974, Roberts was recruited to the Applied Physics Group at the Malaysian Rubber Producers' Research Association (MRPRA) by Alan G. Thomas. He researched the sliding friction of rubber on wet surfaces and on ice, the effects of pH and salt concentration, and other factors. In 1983, he was promoted to Assistant Director of MRPRA, and the following year to Deputy Director. Awards Roberts received the 1998 Lavoisier Medal of the French Society of the Chemical Industry, and in 2014 he received the Charles Goodyear Medal of the Rubber Division of the American Chemical Society. References Living people Polymer scientists and engineers British scientists Year of birth missing (living people) Tribologists
Alan D. Roberts
Chemistry,Materials_science
235
3,593,509
https://en.wikipedia.org/wiki/Power%20optimization%20%28EDA%29
Power optimization is the use of electronic design automation tools to optimize (reduce) the power consumption of a digital design, such as that of an integrated circuit, while preserving the functionality. Introduction and history The increasing speed and complexity of today's designs imply a significant increase in the power consumption of very-large-scale integration (VLSI) chips. To meet this challenge, researchers have developed many different design techniques to reduce power. The complexity of today's ICs, with over 100 million transistors clocked at over 1 GHz, means manual power optimization would be hopelessly slow and all too likely to contain errors. Computer-aided design (CAD) tools and methodologies are mandatory. One of the key features that led to the success of complementary metal-oxide semiconductor, or CMOS, technology was its intrinsic low power consumption. This meant that circuit designers and electronic design automation (EDA) tools could afford to concentrate on maximizing circuit performance and minimizing circuit area. Another interesting feature of CMOS technology is its favourable scaling properties, which have permitted a steady decrease in feature size (see Moore's law), allowing for more and more complex systems on a single chip working at higher clock frequencies. Power consumption concerns came into play with the appearance of the first portable electronic systems in the late 1980s. In this market, battery lifetime is a decisive factor for the commercial success of the product. Another fact that became apparent at about the same time was that the increasing integration of more active elements per die area would lead to prohibitively large energy consumption of an integrated circuit. A high absolute level of power is not only undesirable for economic and environmental reasons, but it also creates the problem of heat dissipation. In order to keep the device working at acceptable temperature levels, excessive heat may require expensive heat removal systems. These factors have contributed to the rise of power as a major design parameter, on par with performance and die size. In fact, power consumption is regarded as the limiting factor in the continuing scaling of CMOS technology. To respond to this challenge, in the last decade or so, intensive research has been put into developing computer-aided design (CAD) tools that address the problem of power optimization. Initial efforts were directed to circuit- and logic-level tools, because at this level CAD tools were more mature and there was a better handle on the issues. Today, most of the research for CAD tools targets system- or architectural-level optimization, which potentially has a higher overall impact, given the breadth of its application. Together with optimization tools, efficient techniques for power estimation are required, both as an absolute indicator that the circuit's consumption meets some target value and as a relative indicator of the power merits of different alternatives during design space exploration. Power analysis of CMOS circuits The power consumption of digital CMOS circuits is generally considered in terms of three components: The dynamic power component, related to the charging and discharging of the load capacitance at the gate output; to first order it grows with the switching activity, the switched capacitance, the square of the supply voltage, and the clock frequency. The short-circuit power component. During the transition of the output line (of a CMOS gate) from one voltage level to the other, there is a period of time when both the PMOS and the NMOS transistors are on, thus creating a path from VDD to ground. 
The static power component, due to leakage, which is present even when the circuit is not switching. This, in turn, is composed of two components: gate-to-source leakage, which is leakage directly through the gate insulator, mostly by tunnelling, and source-drain leakage, attributed to both tunnelling and sub-threshold conduction. The contribution of the static power component to the total power is growing very rapidly in the current era of deep sub-micrometre (DSM) design. Power can be estimated at a number of levels of detail. The higher levels of abstraction are faster and handle larger circuits, but are less accurate. The main levels include: Circuit-level power estimation, using a circuit simulator such as SPICE. Static power estimation, which does not use input vectors but may use input statistics; analogous to static timing analysis. Logic-level power estimation, often linked to logic simulation. Analysis at the register-transfer level, which is fast and has high capacity, but is not as accurate. Circuit-level power optimization Many different techniques are used to reduce power consumption at the circuit level. Some of the main ones are: Transistor sizing: adjusting the size of each gate or transistor for minimum power. Voltage scaling: lower supply voltages use less power, but go slower. Voltage islands: different blocks can be run at different voltages, saving power. This design practice may require the use of level-shifters when two blocks with different supply voltages communicate with each other. Variable VDD: the voltage for a single block can be varied during operation: high voltage (and high power) when the block needs to go fast, low voltage when slow operation is acceptable. Multiple threshold voltages: modern processes can build transistors with different thresholds. Power can be saved by using a mixture of CMOS transistors with two or more different threshold voltages. In the simplest form there are two different thresholds available, commonly called high-Vt and low-Vt, where Vt stands for threshold voltage. High-threshold transistors are slower but leak less, and can be used in non-critical circuits. Power gating: this technique uses high-Vt sleep transistors which cut off a circuit block when the block is not switching. The sleep transistor sizing is an important design parameter. This technique, also known as MTCMOS, or multi-threshold CMOS, reduces stand-by or leakage power, and also enables Iddq testing. Long-channel transistors: transistors of more than minimum length leak less, but are bigger and slower. Stacking and parking states: logic gates may leak differently during logically equivalent input states (say 10 on a NAND gate, as opposed to 01). State machines may have less leakage in certain states. Logic styles: dynamic and static logic, for example, have different speed/power tradeoffs. Logic synthesis for low power Logic synthesis can also be optimized in many ways to keep power consumption under control. Details of the following steps can have a significant impact on power optimization: Clock gating Logic factorization Path balancing Technology mapping State encoding Finite-state machine decomposition Retiming Power Aware EDA Support There are file formats that can be used to write design files specifying the power intent and implementation of a design. The information in these files allows the EDA tools to automatically insert power control features and to check that the result matches the intent. 
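As a quantitative illustration of the dynamic power component introduced in the power analysis section above, the following sketch sums the textbook switching-power relation P = a·C·V²·f over a toy netlist. This is a minimal illustration only: the net names, activities and capacitances are invented for the example, real tools derive activities from simulation or probabilistic propagation, and conventions differ by a factor of 1/2 depending on how the activity a is defined.

```python
# Toy gate-level dynamic power estimate: P = a * C * Vdd^2 * f per net,
# where a is the switching activity (toggles per clock cycle) and C is
# the net's load capacitance. All numbers below are illustrative.
nets = [
    # (name, activity a, load capacitance in farads)
    ("clk",   1.00, 120e-15),
    ("data0", 0.18,  45e-15),
    ("data1", 0.22,  60e-15),
]

def dynamic_power_watts(nets, vdd=0.9, freq_hz=1.0e9):
    """Sum a * C * Vdd^2 * f over all nets."""
    return sum(a * c * vdd**2 * freq_hz for _, a, c in nets)

print(f"estimated: {dynamic_power_watts(nets) * 1e6:.1f} uW")
# Halving the supply cuts dynamic power to roughly a quarter.
print(f"at Vdd/2:  {dynamic_power_watts(nets, vdd=0.45) * 1e6:.1f} uW")
```

The quadratic dependence on the supply voltage visible here is the reason voltage scaling and voltage islands, listed among the circuit-level techniques above, are among the most effective power levers.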
The IEEE DASC provides a home for developing this format in the form of the IEEE P1801 working group. During 2006 and the first two months of 2007, both the Unified Power Format and the Common Power Format were developed to support various tools. The IEEE P1801 working group operates with the goal of providing for convergence of these two standards. Several EDA tools have been developed to support architectural-level power estimation, including McPAT, Wattch, and SimplePower. See also Data organization for low power References Electronic Design Automation For Integrated Circuits Handbook, by Lavagno, Martin, and Scheffer. A survey of the field, from which the above summary was derived, with permission. Jan M. Rabaey, Anantha Chandrakasan, and Borivoje Nikolic, Digital Integrated Circuits, 2nd Edition, , Publisher: Prentice Hall. Further reading/External links Power standards Digital electronics Electronic design automation Electronics optimization
Power optimization (EDA)
Engineering
1,563
58,432,548
https://en.wikipedia.org/wiki/Maurice%20Sion
Maurice Sion (17 October 1927, Skopje – 17 April 2018, Vancouver) was an American and Canadian mathematician specializing in measure theory and game theory. He is known for Sion's minimax theorem. Biography Sion received his B.A. from New York University in 1947 and his M.A. in 1948. He received his Ph.D. from the University of California, Berkeley in 1951 under the supervision of Anthony Morse, with the thesis On the existence of functions having given partial derivatives on Whitney's curve. Sion was a member of the mathematics faculty at U.C. Berkeley until 1960, when he immigrated to Canada with his wife Emilie and his two children born in the U.S.A. (His two younger children were born in Canada.) From 1960 until he retired in 1989, Maurice Sion was a professor of mathematics at the University of British Columbia. For two academic years from 1957 to 1959 and in the autumn of 1962 he was at the Institute for Advanced Study. He wrote several books on mathematics and served for many years as the head of the University of British Columbia's mathematics department. In 1957 he coauthored with Philip Wolfe a paper giving an example of a zero-sum game without a minimax value. Sion was an Invited Speaker at the International Congress of Mathematicians (ICM) in 1970 in Nice and was appointed the Main Organizer for the ICM held in Vancouver in 1974. In 2012 he was elected a Fellow of the American Mathematical Society. Sion was fluent in Spanish, Italian, French, and English. He was predeceased by his youngest child. Upon his death he was survived by his widow, three children, and six grandchildren. Selected publications Articles with R. C. Willmott: Books References 1927 births 2018 deaths People from Skopje New York University alumni University of California, Berkeley alumni Academic staff of the University of British Columbia American people of Sephardic-Jewish descent 20th-century American mathematicians 21st-century American mathematicians 20th-century Canadian mathematicians 21st-century Canadian mathematicians Mathematical analysts Measure theorists Fellows of the American Mathematical Society
Maurice Sion
Mathematics
429
21,397,214
https://en.wikipedia.org/wiki/Scene%20statistics
Scene statistics is a discipline within the field of perception. It is concerned with the statistical regularities related to scenes. It is based on the premise that a perceptual system is designed to interpret scenes. Biological perceptual systems have evolved in response to physical properties of natural environments. Therefore, natural scenes receive a great deal of attention. Natural scene statistics are useful for defining the behavior of an ideal observer in a natural task, typically by incorporating signal detection theory, information theory, or estimation theory. Within-domain versus across-domain Geisler (2008) distinguishes between four kinds of domains: (1) physical environments, (2) images/scenes, (3) neural responses, and (4) behavior. Within the domain of images/scenes, one can study the characteristics of information related to redundancy and efficient coding. Across-domain statistics determine how an autonomous system should make inferences about its environment, process information, and control its behavior. To study these statistics, it is necessary to sample or register information in multiple domains simultaneously. Applications Prediction of picture and video quality One of the most successful applications of natural scene statistics models has been perceptual picture and video quality prediction. For example, the Visual Information Fidelity (VIF) algorithm, which measures the degree of distortion of pictures and videos, is used extensively by the image and video processing communities to assess perceptual quality, often after processing, such as compression, which can degrade the appearance of a visual signal. The premise is that distortion changes the scene statistics, and that the visual system is sensitive to these changes. VIF is heavily used in the streaming television industry. Other popular picture quality models that use natural scene statistics include BRISQUE and NIQE, both of which are no-reference models, since they do not require any reference picture to measure quality against. References Bibliography Field, D. J. (1987). Relations between the statistics of natural images and the response properties of cortical cells. Journal of the Optical Society of America A, 4, 2379–2394. Ruderman, D. L., & Bialek, W. (1994). Statistics of natural images: Scaling in the woods. Physical Review Letters, 73(6), 814–817. Brady, N., & Field, D. J. (2000). Local contrast in natural images: normalisation and coding efficiency. Perception, 29, 1041–1055. Frazor, R. A., & Geisler, W. S. (2006). Local luminance and contrast in natural images. Vision Research, 46, 1585–1598. Mante et al. (2005). Independence of luminance and contrast in natural scenes and in the early visual system. Nature Neuroscience, 8(12), 1690–1697. Bell, A. J., & Sejnowski, T. J. (1997). The "independent components" of natural scenes are edge filters. Vision Research, 37, 3327–3338. Olshausen, B. A., & Field, D. J. (1997). Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23), 3311–3325. Sigman, M., Cecchi, G. A., Gilbert, C. D., & Magnasco, M. O. (2001). On a common circle: Natural scenes and Gestalt rules. PNAS, 98(4), 1935–1940. Hoyer, P. O., & Hyvärinen, A. (2002). A multi-layer sparse coding network learns contour coding from natural images. Vision Research, 42(12), 1593–1605. Geisler, W. S., Perry, J. S., Super, B. J., & Gallogly, D. P. (2001). Edge co-occurrence in natural images predicts contour grouping performance. 
Vision Research, 41, 711–724. Elder, J. H., & Goldberg, R. M. (2002). Ecological statistics for the Gestalt laws of perceptual organization of contours. Journal of Vision, 2, 324–353. Krinov, E. (1947). Spectral reflectance properties of natural formations (Technical translation No. TT-439). Ottawa: National Research Council of Canada. Ruderman, D. L., Cronin, T. W., & Chiao, C. (1998). Statistics of cone responses to natural images: implications for visual coding. Journal of the Optical Society of America A, 15, 2036–2045. Stockman, A., MacLeod, D. I. A., & Johnson, N. E. (1993). Spectral sensitivities of the human cones. J Opt Soc Am A Opt Image Sci Vis, 10, 1396–1402. Lee, T. W., Wachtler, T., & Sejnowski, T. J. (2002). Color opponency is an efficient representation of spectral properties in natural scenes. Vision Research, 42, 2095–2103. Fine, I., MacLeod, D. I. A., & Boynton, G. M. (2003). Surface segmentation based on the luminance and color statistics of natural scenes. Journal of the Optical Society of America A-Optics Image Science and Vision, 20(7), 1283–1291. Lewis, A., & Zhaoping, L. (2006). Are cone sensitivities determined by natural color statistics? Journal of Vision, 6, 285–302. Lovell, P. G., et al. (2005). Stability of the color-opponent signals under changes of illuminant in natural scenes. J. Opt. Soc. Am. A, 22, 10. Endler, J. A. (1993). The color of light in forests and its implications. Ecological Monographs, 63, 1–27. Wachtler, T., Lee, T. W., & Sejnowski, T. J. (2001). Chromatic structure of natural scenes. J. Opt. Soc. Am. A, 18(1), 65–77. Long, F., Yang, Z., & Purves, D. Spectral statistics in natural scenes predict hue, saturation, and brightness. PNAS, 103(15), 6013–6018. Van Hateren, J. H., & Ruderman, D. L. (1998). Independent component analysis of natural image sequences yields spatio-temporal filters similar to simple cells in primary visual cortex. Proceedings of the Royal Society of London B, 265, 2315–2320. Potetz, B., & Lee, T. S. (2003). Statistical correlations between two-dimensional images and three-dimensional structures in natural scenes. Journal of the Optical Society of America A-Optics Image Science and Vision, 20(7), 1292–1303. Howe, C. Q., & Purves, D. (2002). Range image statistics can explain the anomalous perception of length. Proceedings of the National Academy of Sciences of the United States of America, 99(20), 13184–13188. Howe, C. Q., & Purves, D. (2005a). Natural-scene geometry predicts the perception of angles and line orientation. Proceedings of the National Academy of Sciences of the United States of America, 102(4), 1228–1233. Howe, C. Q., & Purves, D. (2004). Size contrast and assimilation explained by the statistics of natural scene geometry. Journal of Cognitive Neuroscience, 16(1), 90–102. Howe, C. Q., & Purves, D. (2005b). The Müller-Lyer illusion explained by the statistics of image-source relationships. Proceedings of the National Academy of Sciences of the United States of America, 102(4), 1234–1239. Howe, C. Q., Yang, Z. Y., & Purves, D. (2005). The Poggendorff illusion explained by natural scene geometry. Proceedings of the National Academy of Sciences of the United States of America, 102(21), 7707–7712. Kalkan, S., Woergoetter, F., & Krueger, N. (2006). Statistical analysis of local 3D structure in 2D images. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Kalkan, S., Woergoetter, F., & Krueger, N. (2007). First-order and second-order statistical analysis of 3D and 2D structure. Network: Computation in Neural Systems, 18(2), 129–160. Perception Neuroscience Image processing
Scene statistics
Biology
1,842
56,745,603
https://en.wikipedia.org/wiki/Passive%20autocatalytic%20recombiner
Passive autocatalytic recombiner (PAR) is a device that removes hydrogen from the containment of a nuclear power plant during an accident. Its purpose is to prevent hydrogen explosions. Recombiners come into action spontaneously as soon as the hydrogen concentration increases. They are passive devices because their operation does not require external energy. Hydrogen may be generated in a nuclear accident if the reactor fuel overheats and the zirconium cladding of the fuel rods reacts chemically with steam (Zr + 2 H2O → ZrO2 + 2 H2). If the hydrogen is released from the reactor to the containment, it may mix with air and form a flammable or even explosive mixture. A hydrogen explosion could break the containment and cause radioactive materials to be released to the environment. Recombiners aim at removing hydrogen and thereby preventing explosions. Inside a recombiner there are plates or pellets that are coated with a platinum or palladium catalyst. On the surface of the catalyst, hydrogen and oxygen molecules react chemically (2 H2 + O2 → 2 H2O) at low temperature and low hydrogen concentration. The reaction generates steam. The reaction starts spontaneously when the hydrogen concentration reaches 1–2 percent. Burning of hydrogen in air requires at least a 4 percent hydrogen concentration, and an explosion requires even more. Therefore, a recombiner is able to remove hydrogen from the containment before a flammable concentration is reached. A recombiner is a box that is open at the bottom and at the top. The catalyst is located at the lower part of the box. The reaction of hydrogen and oxygen on the catalyst surface generates heat, and the temperature in the recombiner reaches hundreds of degrees Celsius. Hot steam is lighter than the air in the containment, so buoyancy is created inside the recombiner, much like in a chimney. This causes a strong airflow through the recombiner, feeding hydrogen and oxygen from the containment to the device. Hundreds of kilograms of hydrogen may be generated in a few hours during a severe reactor accident. The most efficient recombiner made by Framatome (formerly Areva) removes slightly over five kilograms of hydrogen per hour when the hydrogen concentration is four percent. Therefore, many recombiners are needed. For example, the containment of the Olkiluoto 3 EPR in Finland has 50 recombiners. Manufacturers of passive autocatalytic recombiners include Framatome, SNC-Lavalin (formerly Atomic Energy of Canada Ltd, AECL), and the German Siempelkamp-NIS. See also Hydrogen safety References Catalysis Nuclear power plant components Nuclear safety and security Hydrogen
Passive autocatalytic recombiner
Chemistry
529
14,583,316
https://en.wikipedia.org/wiki/SX-3228
SX-3228 is a sedative and hypnotic drug used in scientific research. It has similar effects to sedative-hypnotic benzodiazepine drugs, but is structurally distinct and so is classed as a nonbenzodiazepine hypnotic. SX-3228 is a subtype-selective GABAA positive allosteric modulator acting primarily at the α1 subtype. It thus has similar effects to other α1-selective drugs such as zolpidem and zaleplon in animal studies. It only partly substitutes for ethanol, and is a strong sedative-hypnotic with limited anxiolytic effects, which appear only at doses that also produce significant sedation. References GABAA receptor positive allosteric modulators Sedatives Lactams Oxadiazoles Ethers Naphthyridines
SX-3228
Chemistry
177
41,243,273
https://en.wikipedia.org/wiki/Spilocaea%20oleaginea
Spilocaea oleaginea is a deuteromycete fungal plant pathogen, the cause of the disease olive peacock spot, also known as olive leaf spot and bird's eye spot. This plant disease commonly affects the leaves of olive trees worldwide. The disease affects trees throughout the growing season and can cause significant losses in yield. The disease causes blemishes on the fruit, delays ripening, and reduces the yield of oil. Defoliation and, in severe cases, twig death can occur, and the disease can have long-term health effects on the trees. Hosts Olive plants are the only known host of the pathogen, which is able to infect all olive cultivars, although different cultivars vary in their susceptibility. Young leaves are more likely to develop severe symptoms than intermediate or old leaves. Symptoms In late spring, dark spots appear on the upper surface of leaf cuticles in the lower canopy. These spots are lesions produced by the infecting fungus, and later are the site of sporulation. Symptoms may also appear on the stem and fruit, but are most common on the leaf surface. As the season progresses, the dark spots grow to a size of between in diameter, with the emergence of a yellow halo around each spot. Plants may experience defoliation and, in severe cases, twig death. Blooms may also fail, resulting in significant reductions in crop production. Cercospora leaf spot may appear in tandem with the development of peacock spot, visible as grey or ashy fungal growth due to conidia on the underside of leaves. Disease cycle Spilocaea oleaginea is classified as a deuteromycete because it has no known sexual stage. If the sexual stage exists and is discovered, it will belong to the genus Venturia. The mycelium typically develops on the leaf tissue. Lesions can be seen on the upper surface of leaves. The only reproductive spores of olive peacock spot known to exist are conidia. The disease is spread in several ways. The conidia can be spread by insects and the wind, and locally through rain water. The insect suspected of spreading olive peacock spot is Ectopsocus briggsi, which is in the same order as lice. Olive trees keep their leaves year round. The primary infection occurs in the fall. The mycelia in leaf lesions infect the surrounding tissue and produce conidia for the primary infection. Sporulation from the leaf lesions spreads the conidia to healthy plant tissue. Young leaves are more susceptible to infection than older leaves. Sporulation continues during the winter and into the spring. The pathogen goes dormant during the hot, dry summer and survives as mycelium. The mycelia go dormant inside lesions on living leaves. Leaves that have dropped to the ground have also been known to produce infection from lesions, but this is not usually a significant source of infection. Environment Olive peacock spot disease is a worldwide agricultural problem, and it thrives in similar conditions wherever it occurs. It depends on mild to low temperatures and free moisture to germinate, and so it usually infects in the fall, winter, and spring. Hot and dry conditions in the summer cause the fungus to become inactivated and the leaf spots to turn white and crusted. During the summer, the diseased leaves fall, leaving only the healthy ones on the partially defoliated trees. This provides a natural control for the disease. The disease also mainly infects young leaves in the spring. The presence of free moisture on the leaves is crucial for the conidia to germinate. 
This can occur in as little as 9 hours in the optimum temperature range, and usually in no more than 24 hours. Without free moisture, the conidia will not germinate. The preferred temperature range is , however it can persist between . Landscape can also affect the spread of olive peacock spot disease. It thrives in low-lying areas or in environments that receive little sunlight or have a closed tree canopy. Fog, dew, and high humidity are important factors. Under these conditions, this disease can spread even in summer. Nutrient deficiencies or imbalances in the soil have been linked to increased susceptibility. An excess of nitrogen and a calcium deficiency may weaken an olive tree's defenses. However, attempts to fix this with foliar nutrients and compost tea have not proven effective. Management Current practices in managing olive peacock spot disease aim for consistent suppression by keeping the levels of inoculum low through preventative measures. That is because there is no way to treat the disease once it appears in the spring or while the trees have fruit. The most common management approach is to spray the foliage with a copper compound after the fruit has been harvested in the fall, and again in the late winter if the environment is extremely wet. A power sprayer with high pressure is the most effective because it helps coat the entire surface of each leaf, even in the interior of the tree. If copper is sprayed on the fruit it is nearly impossible to wash away, so late harvests are often lost to infection. The spray comes in various forms of copper hydroxide, copper oxychloride, tribasic copper sulfate, and copper oxide. A few of those have been legally classified as organic. There are other commercially available fungicides that don't contain copper, such as "Spotless", which is applied monthly as a foliar spray between harvesting and flowering. No olive varieties are completely resistant to the fungus, but susceptibility varies among cultivars. Partially-resistant varieties have been found to have genetic markers that can be used to select for resistant progeny. Information is usually listed in descriptions of varieties provided by the growers. Importance Losses of 10 to 20 percent of fruiting wood have been observed in plants highly infected with olive peacock spot. While the disease is not highly detrimental, it can cause chronic problems and severe economic losses in some olive orchards. These losses are significant in an industry that occupies 8.5 million hectares. References Agosteo G.E., Schena L., 2011. Olive leaf spot. In: Schena L., Agosteo G.E., Cacciola S.O., Magnano di San Lio G. (Eds.). Olive diseases and disorders. Research Signpost, Kerala, India pp. 143–176. Fruit tree diseases Fungal plant pathogens and diseases Pleosporales Fungus species
Spilocaea oleaginea
Biology
1,320
47,967,891
https://en.wikipedia.org/wiki/Cantharellus%20symoensii
Cantharellus symoensii is a species of fungus in the family Cantharellaceae. First described by mycologist Paul Heinemann in 1966 as a species of Cantharellus, it was transferred to the new genus Afrocantharellus in 2012. References External links Cantharellaceae Fungi described in 1966 Fungi of Africa Fungus species
Cantharellus symoensii
Biology
75
1,516,108
https://en.wikipedia.org/wiki/Crystallographic%20texture
In materials science and related fields, crystallographic texture is the distribution of crystallographic orientations of a polycrystalline sample. A sample in which these orientations are fully random, or which is amorphous and thus has no crystallographic planes, is said to have no texture. If the crystallographic orientations are not random, but have some preferred orientation, then the sample may have a weak, moderate or strong texture. The degree is dependent on the percentage of crystals having the preferred orientation. Texture is seen in almost all engineered materials, and can have a great influence on materials properties. Texture forms in materials during thermo-mechanical processes, for example during production processes such as rolling. Consequently, the rolling process is often followed by a heat treatment to reduce the amount of unwanted texture. Controlling the production process, in combination with the characterization of texture and the material's microstructure, helps to determine the material's properties, i.e. the processing-microstructure-texture-property relationship. Geologic rocks also show texture due to the thermo-mechanical history of their formation. One extreme case is a complete lack of texture: a solid with perfectly random crystallite orientation will have isotropic properties at length scales sufficiently larger than the size of the crystallites. The opposite extreme is a perfect single crystal, which likely has anisotropic properties by geometric necessity. Characterization and representation Texture can be determined by various methods. Some methods allow a quantitative analysis of the texture, while others are only qualitative. Among the quantitative techniques, the most widely used is X-ray diffraction using texture goniometers, followed by the electron backscatter diffraction (EBSD) method in scanning electron microscopes. Qualitative analysis can be done by Laue photography, simple X-ray diffraction or with a polarized microscope. Neutron and synchrotron high-energy X-ray diffraction are suitable for determining textures of bulk materials and in situ analysis, whereas laboratory X-ray diffraction instruments are more appropriate for analyzing textures of thin films. Texture is often represented using a pole figure, in which a specified crystallographic axis (or pole) from each of a representative number of crystallites is plotted in a stereographic projection, along with directions relevant to the material's processing history. These directions define the so-called sample reference frame and are, because the investigation of textures started from the cold working of metals, usually referred to as the rolling direction RD, the transverse direction TD and the normal direction ND. For drawn metal wires, the cylindrical fiber axis is the sample direction around which preferred orientation is typically observed (see below). Common textures There are several textures that are commonly found in processed (cubic) materials. They are named either after the scientist who discovered them, or after the material in which they are most commonly found. These are given in Miller indices for simplification purposes. Cube component: (001)[100] Brass component: (110)[-112] Copper component: (112)[11-1] S component: (123)[63-4] Orientation distribution function The full 3D representation of crystallographic texture is given by the orientation distribution function (ODF), which can be obtained through evaluation of a set of pole figures or diffraction patterns. 
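The pole-figure representation described above amounts to a stereographic projection of crystal pole directions onto the plane spanned by RD and TD. The following minimal Python sketch shows the standard projection of a single unit pole vector given in the sample frame; the function name and the example pole are illustrative only and not taken from the article.

import numpy as np

def stereographic_projection(pole):
    # Project a unit vector from the upper hemisphere onto the equatorial plane.
    # The pole is given in the sample frame (x = RD, y = TD, z = ND).
    v = np.asarray(pole, dtype=float)
    v = v / np.linalg.norm(v)      # normalize to a unit vector
    if v[2] < 0.0:                 # flip lower-hemisphere poles, as is conventional
        v = -v
    x, y, z = v                    # project from the south pole onto the z = 0 plane
    return np.array([x / (1.0 + z), y / (1.0 + z)])

# Example: a (111)-type pole, tilted away from the normal direction ND
print(stereographic_projection([1.0, 1.0, 1.0]))

Plotting many such projected poles, one per crystallite, yields the pole figure; dedicated packages such as MTEX (listed under external links) handle the contouring and crystal-symmetry bookkeeping omitted in this sketch.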
Subsequently, all pole figures can be derived from the ODF. The ODF is defined as the volume fraction of grains with a certain orientation. The orientation is normally identified using three Euler angles. The Euler angles then describe the transition from the sample's reference frame into the crystallographic reference frame of each individual grain of the polycrystal; a minimal numerical sketch of this Euler-angle description is given at the end of this passage. One thus ends up with a large set of different Euler angles, the distribution of which is described by the ODF. The orientation distribution function (ODF) cannot be measured directly by any technique. Traditionally, pole figures are collected using both X-ray diffraction and EBSD. Different methodologies exist to obtain the ODF from the pole figures or data in general. They can be classified based on how they represent the ODF. Some represent the ODF as a function, a sum of functions, or expand it in a series of harmonic functions. Others, known as discrete methods, divide the ODF space into cells and focus on determining the value of the ODF in each cell. Origins In wire and fiber, all crystals tend to have nearly identical orientation in the axial direction, but nearly random radial orientation. The most familiar exceptions to this rule are fiberglass, which has no crystal structure, and carbon fiber, in which the crystalline anisotropy is so great that a good-quality filament will be a distorted single crystal with approximately cylindrical symmetry (often compared to a jelly roll). Single-crystal fibers are also not uncommon. The making of metal sheet often involves compression in one direction and, in efficient rolling operations, tension in another, which can orient crystallites in both axes by a process known as grain flow. However, cold work destroys much of the crystalline order, and the new crystallites that arise with annealing usually have a different texture. Control of texture is extremely important in the making of silicon steel sheet for transformer cores (to reduce magnetic hysteresis) and of aluminium cans (since deep drawing requires extreme and relatively uniform plasticity). Texture in ceramics usually arises because the crystallites in a slurry have shapes that depend on crystalline orientation, often needle- or plate-shaped. These particles align themselves as water leaves the slurry, or as clay is formed. Casting or other fluid-to-solid transitions (e.g., thin-film deposition) produce textured solids when there is enough time and activation energy for atoms to find places in existing crystals, rather than condensing as an amorphous solid or starting new crystals of random orientation. Some facets of a crystal (often the close-packed planes) grow more rapidly than others, and the crystallites for which one of these planes faces in the direction of growth will usually out-compete crystals in other orientations. In the extreme, only one crystal will survive after a certain length: this is exploited in the Czochralski process (unless a seed crystal is used) and in the casting of turbine blades and other creep-sensitive parts. Texture and materials properties Material properties such as strength, chemical reactivity, stress corrosion cracking resistance, weldability, deformation behavior, resistance to radiation damage, and magnetic susceptibility can be highly dependent on the material's texture and related changes in microstructure. 
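The Euler-angle description of a grain orientation referred to in the ODF section above can be made concrete with a short sketch. Assuming the widely used Bunge (z-x-z) convention, which the article itself does not specify, the orientation matrix g that maps sample-frame components onto crystal-frame components is built from the triplet (phi1, Phi, phi2); the function names below are illustrative only.

import numpy as np

def rot_z(a):
    # Passive rotation matrix about the z axis (angle in radians).
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    # Passive rotation matrix about the x axis (angle in radians).
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def bunge_to_matrix(phi1, Phi, phi2):
    # Orientation matrix g for Bunge Euler angles: v_crystal = g @ v_sample,
    # where the sample frame is (RD, TD, ND).
    return rot_z(phi2) @ rot_x(Phi) @ rot_z(phi1)

# Example: the cube orientation (0, 0, 0) gives the identity matrix,
# i.e. the crystal axes coincide with RD, TD and ND.
print(bunge_to_matrix(0.0, 0.0, 0.0))

An ODF evaluated on a discrete grid of such triplets corresponds to the cell-based representation mentioned above: each cell stores the volume fraction of grains whose orientation falls inside it.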
In many materials, properties are texture-specific, and development of unfavorable textures when the material is fabricated or in use can create weaknesses that can initiate or exacerbate failures. Parts can fail to perform due to unfavorable textures in their component materials. Failures can correlate with the crystalline textures formed during fabrication or use of that component. Consequently, consideration of textures that are present in, and that could form in, engineered components while in use can be critical when making decisions about the selection of materials and the methods employed to manufacture parts with those materials. When parts fail during use or abuse, understanding the textures that occur within those parts can be crucial to meaningful interpretation of failure analysis data. Thin film textures As the result of substrate effects producing preferred crystallite orientations, pronounced textures tend to occur in thin films. Modern technological devices to a large extent rely on polycrystalline thin films with thicknesses in the nanometer and micrometer ranges. This holds, for instance, for all microelectronic and most optoelectronic systems, as well as for sensor and superconducting layers. Most thin film textures may be categorized as one of two different types: (1) for so-called fiber textures the orientation of a certain lattice plane is preferentially parallel to the substrate plane; (2) in biaxial textures the in-plane orientation of crystallites also tends to align with respect to the sample. The latter phenomenon is accordingly observed in nearly epitaxial growth processes, where certain crystallographic axes of crystals in the layer tend to align along a particular crystallographic orientation of the (single-crystal) substrate. Tailoring the texture on demand has become an important task in thin film technology. In the case of oxide compounds intended for transparent conducting films or surface acoustic wave (SAW) devices, for instance, the polar axis should be aligned along the substrate normal. Another example is given by cables of high-temperature superconductors that are being developed as oxide multilayer systems deposited on metallic ribbons. The adjustment of the biaxial texture in YBa2Cu3O7−δ layers turned out to be the decisive prerequisite for achieving sufficiently large critical currents. The degree of texture often evolves during thin film growth, and the most pronounced textures are only obtained after the layer has reached a certain thickness. Thin film growers thus require information about the texture profile or the texture gradient in order to optimize the deposition process. The determination of texture gradients by x-ray scattering, however, is not straightforward, because different depths of a specimen contribute to the signal. Techniques that allow for the adequate deconvolution of diffraction intensity were developed only recently. References Further reading Bunge, H.-J. "Mathematische Methoden der Texturanalyse" (1969) Akademie-Verlag, Berlin Bunge, H.-J. "Texture Analysis in Materials Science" (1983) Butterworth, London Kocks, U. F., Tomé, C. N., Wenk, H.-R., Beaudoin, A. J., Mecking, H. 
"Texture and Anisotropy – Preferred Orientations in Polycrystals and Their Effect on Materials Properties" (2000) Cambridge University Press Birkholz, M., chapter 5 of "Thin Film Analysis by X-ray Scattering" (2006) Wiley-VCH, Weinheim External links aluMatter: Representing Texture MAUD program for Texture Analysis (diffraction, from patterns or pole figures) MTEX MATLAB toolbox for Texture Analysis (EBSD or diffraction, from pole figures) Labotex, ODF/texture analysis software for Microsoft Windows (EBSD or diffraction, from pole figures) Crystallographic Texture Combined Analysis Crystallography Metallurgy Materials science Neutron-related techniques Petrology Synchrotron-related techniques
Crystallographic texture
Physics,Chemistry,Materials_science,Engineering
2,178
1,058,672
https://en.wikipedia.org/wiki/Ataxia%E2%80%93telangiectasia
Ataxia–telangiectasia (AT or A–T), also referred to as ataxia–telangiectasia syndrome or Louis–Bar syndrome, is a rare, neurodegenerative disease causing severe disability. Ataxia refers to poor coordination and telangiectasia to small dilated blood vessels, both of which are hallmarks of the disease. A–T affects many parts of the body: It impairs certain areas of the brain including the cerebellum, causing difficulty with movement and coordination. It weakens the immune system, causing a predisposition to infection. It prevents repair of broken DNA, increasing the risk of cancer. Symptoms most often first appear in early childhood (the toddler stage) when children begin to sit or walk. Though they usually start walking at a normal age, they wobble or sway when walking, standing still or sitting. In late pre-school and early school age, they develop difficulty moving their eyes in a natural manner from one place to the next (oculomotor apraxia). They develop slurred or distorted speech, and swallowing problems. Some have an increased number of respiratory tract infections (ear infections, sinusitis, bronchitis, and pneumonia). Because not all children develop in the same manner or at the same rate, it may be some years before A–T is properly diagnosed. Most children with A–T have stable neurologic symptoms for the first 4–5 years of life, but begin to show increasing problems in early school years. Causes A–T has an autosomal recessive pattern of inheritance. A–T is caused by a defect in the ATM gene, named after this disease, which is involved in the recognition and repair of damaged DNA. Heterozygotes will not experience the characteristic symptoms but it has been reported they have higher risks of cancer and heart disease. The prevalence of A–T is estimated to be as high as 1 in 40,000 to as low as 1 in 300,000 people. Symptoms and signs There is substantial variability in the severity of features of A–T among affected individuals, and at different ages. The following symptoms or problems are either common or important features of A–T: Ataxia (difficulty with control of movement) that is apparent early but worsens in school to pre-teen years Oculomotor apraxia (difficulty with coordination of head and eye movement when shifting gaze from one place to the next) Involuntary movements Telangiectasia (dilated blood vessels) over the white (sclera) of the eyes, making them appear bloodshot. These are not apparent in infancy and may first appear at age 5–8 years. Telangiectasia may also appear on sun-exposed areas of skin. Problems with infections, especially of the ears, sinuses and lungs Increased incidence of cancer (primarily, but not exclusively, lymphomas and leukemias) Delayed onset or incomplete pubertal development, and very early menopause Slowed rate of growth (weight and/or height) Drooling particularly in young children when they are tired or concentrating on activities Dysarthria (slurred, slow, or distorted speech) Diabetes in adolescence or later Premature changes in hair and skin Many children are initially misdiagnosed as having cerebral palsy. The diagnosis of A–T may not be made until the preschool years when the neurologic symptoms of impaired gait, hand coordination, speech and eye movement appear or worsen, and the telangiectasia first appear. Because A–T is so rare, doctors may not be familiar with the symptoms, or methods of making a diagnosis. The late appearance of telangiectasia may be a barrier to the diagnosis. 
It may also take some time before doctors consider A–T as a possibility because of the early stability of symptoms and signs. There are patients who have been diagnosed with A-T only in adulthood due to an attenuated form of the disease, and this has been correlated with the type of their gene mutation. Ataxia and other neurologic problems The first indications of A–T usually occur during the toddler years. Children start walking at a normal age, but may not improve much from their initial wobbly gait. Sometimes they have problems standing or sitting still and tend to sway backward or from side to side. In primary school years, walking becomes more difficult, and children will use doorways and walls for support. Children with A–T often appear better when running or walking quickly in comparison to when they are walking slowly or standing in one place. Around the beginning of their second decade, children with the more severe ("classic") form of A–T start using a wheelchair for long distances. During school years, children may have increasing difficulty with reading because of impaired coordination of eye movement. At the same time, other problems with fine-motor functions (writing, coloring, and using utensils to eat), and with speech (dysarthria) may arise. Most of these neurologic problems stop progressing after the age of about 12 – 15 years, though involuntary movements may start at any age and may worsen over time. These extra movements can take many forms, including small jerks of the hands and feet that look like fidgeting (chorea), slower twisting movements of the upper body (athetosis), adoption of stiff and twisted postures (dystonia), occasional uncontrolled jerks (myoclonic jerks), and various rhythmic and non-rhythmic movements with attempts at coordinated action (tremors). Telangiectasia Prominent blood vessels (telangiectasia) over the white (sclera) of the eyes usually occur by the age of 5–8 years, but sometimes appear later or not at all. The absence of telangiectasia does not exclude the diagnosis of A–T. Potentially a cosmetic problem, the ocular telangiectasia do not bleed or itch, though they are sometimes misdiagnosed as chronic conjunctivitis. It is their constant nature, not changing with time, weather or emotion, that marks them as different from other visible blood vessels. Telangiectasia can also appear on sun-exposed areas of skin, especially the face and ears. They occur in the bladder as a late complication of chemotherapy with cyclophosphamide, have been seen deep inside the brain of older people with A–T, and occasionally arise in the liver and lungs. Immune problems About two-thirds of people with A–T have abnormalities of the immune system. The most common abnormalities are low levels of one or more classes of immunoglobulins (IgA, IgM, and IgG subclasses), not making antibodies in response to vaccines or infections, and having low numbers of lymphocytes (especially T-lymphocytes) in the blood. Some people have frequent infections of the upper (colds, sinus and ear infections) and lower (bronchitis and pneumonia) respiratory tract. All children with A–T should have their immune systems evaluated to detect those with severe problems that require treatment to minimize the number or severity of infections. Some people with A–T need additional immunizations (especially with pneumonia and influenza vaccines), antibiotics to provide protection (prophylaxis) from infections, and/or infusions of immunoglobulins (gamma globulin). 
The need for these treatments should be determined by an expert in the field of immunodeficiency or infectious diseases. Cancer People with A–T have a highly increased incidence (approximately 25% lifetime risk) of cancers, particularly lymphomas and leukemia, but other cancers can occur. Women who are A–T carriers (who have one mutated copy of the ATM gene), have approximately a two-fold increased risk for the development of breast cancer compared to the general population. This includes all mothers of A–T children and some female relatives. Current consensus is that special screening tests are not helpful, but all women should have routine cancer surveillance. Skin A–T can cause features of early aging such as premature graying of the hair. It can also cause vitiligo (an auto-immune disease causing loss of skin pigment resulting in a blotchy "bleach-splashed" look), and warts which can be extensive and recalcitrant to treatment. A small number of people develop a chronic inflammatory skin disease (granulomas). Lung disease Chronic lung disease develops in more than 25% of people with A–T. Lung function tests (spirometry) should be performed at least annually in children old enough to perform them, influenza and pneumococcal vaccines given to eligible individuals, and sinopulmonary infections treated aggressively to limit the development of chronic lung disease. Feeding, swallowing, and nutrition Feeding and swallowing can become difficult for people with A–T as they get older. Involuntary movements may make feeding difficult or messy and may excessively prolong mealtimes. It may be easier to finger feed than use utensils (e.g., spoon or fork). For liquids, it is often easier to drink from a closed container with a straw than from an open cup. Caregivers may need to provide foods or liquids so that self-feeding is possible, or they may need to feed the person with A–T. In general, meals should be completed within approximately 30 minutes. Longer meals may be stressful, interfere with other daily activities, and limit the intake of necessary liquids and nutrients. If swallowing problems (dysphagia) occur, they typically present during the second decade of life. Dysphagia is common because of the neurological changes that interfere with coordination of mouth and pharynx (throat) movements that are needed for safe and efficient swallowing. Coordination problems involving the mouth may make chewing difficult and increase the duration of meals. Problems involving the pharynx may cause liquid, food, and saliva to be inhaled into the airway (aspiration). People with dysphagia may not cough when they aspirate (silent aspiration). Swallowing problems and especially swallowing problems with silent aspiration may cause lung problems due to inability to cough and clear food and liquids from the airway. Warning signs of a swallowing problem Choking or coughing when eating or drinking Poor weight gain (during ages of expected growth) or weight loss at any age Excessive drooling Mealtimes longer than 40 – 45 minutes, on a regular basis Foods or drinks previously enjoyed are now refused or difficult Chewing problems Increase in the frequency or duration of breathing or respiratory problems Increase in lung infections Eye and vision Most people develop telangiectasia (prominent blood vessels) in the membrane that covers the white part (sclera) of the eye. Vision (ability to see objects in focus) is normal. 
Control of eye movement is often impaired, affecting visual functions that require fast, accurate eye movements from point to point (e.g. reading). Eye misalignments (strabismus) are common, but may be treatable. There may be difficulty in coordinating eye position and shaping the lens to see objects up close. Orthopedics Many individuals with A–T develop deformities of the feet that compound the difficulty they have with walking due to impaired coordination. Early treatment may slow progression of this deformity. Bracing or surgical correction sometimes improves stability at the ankle sufficiently to enable an individual to walk with support, or bear weight during assisted standing transfers from one seat to another. Severe scoliosis is relatively uncommon, but probably does occur more often than in those without A–T. Spinal fusion is only rarely indicated. Genetics A–T is caused by mutations in the ATM (ATM serine/threonine kinase or ataxia–telangiectasia mutated) gene, which was cloned in 1995. ATM is located on human chromosome 11 (11q22.3) and is made up of 69 exons spread across 150 kb of genomic DNA. The mode of inheritance for A–T is autosomal recessive. Each parent is a carrier, meaning that they have one normal copy of the A–T gene (ATM) and one copy that is mutated. A–T occurs if a child inherits the mutated A–T gene from each parent, so in a family with two carrier parents, there is 1 chance in 4 that a child born to the parents will have the disorder. Prenatal diagnosis (and carrier detection) can be carried out in families if the errors (mutations) in an affected child's two ATM genes have been identified. The process of getting this done can be complicated and, as it requires time, should be arranged before conception. Looking for mutations in the ATM gene of an unrelated person (for example, the spouse of a known A–T carrier) presents significant challenges. Genes often have variant spellings (polymorphisms) that do not affect function. In a gene as large as ATM, such variant spellings are likely to occur, and doctors cannot always predict whether a specific variant will or will not cause disease. Genetic counseling can help family members of an A–T patient understand what can or cannot be tested, and how the test results should be interpreted. Carriers of A–T, such as the parents of a person with A–T, have one mutated copy of the ATM gene and one normal copy. They are generally healthy, but there is an increased risk of breast cancer in women. This finding has been confirmed in a variety of different ways, and is the subject of current research. Standard surveillance (including monthly breast self-exams and mammography at the usual schedule for age) is recommended, unless additional tests are indicated because the individual has other risk factors (e.g., family history of breast cancer). Non-canonical variants, such as the insertion of a retrotransposon, which had not been studied until a few years ago, also appear to have therapeutic implications in the development of ataxia–telangiectasia. This issue was investigated by a recent study, which applied NGS sequencing techniques and in vitro studies in a cohort of 235 A–T patients from a Boston children's hospital. The study showed that insertions of retroelements in the ATM gene are the cause of the development of the disease in 5.5% of patients, and that insertions occur in non-coding regions in 92.3% of cases. 
This happens because insertions of retroelements, especially Alu elements, near an exon-intron boundary, cause changes in the splicing sites, resulting in the exclusion of an exon from the mature mRNA. This causes the appearance of premature stop codons, leading to degradation and loss-of-function of ATM. In addition, the insertion of the DUSP16 pseudogene into ATM can also result in loss of ATM function, as it leads to the appearance of a cryptic exon in mRNA due to the formation of new splicing acceptor and donor sites. This, again, generates premature stop codons. Pathophysiology How loss of the ATM protein creates a multisystem disorder A–T has been described as a genome instability syndrome, a DNA repair disorder and a DNA damage response (DDR) syndrome. ATM, the gene responsible for this multi-system disorder, encodes a protein of the same name which coordinates the cellular response to DNA double strand breaks (DSBs). Radiation therapy, chemotherapy that acts like radiation (radiomimetic drugs) and certain biochemical processes and metabolites can cause DSBs. When these breaks occur, ATM stops the cell from making new DNA (cell cycle arrest) and recruits and activates other proteins to repair the damage. Thus, ATM allows the cell to repair its DNA before the completion of cell division. If DNA damage is too severe, ATM will mediate the process of programmed cell death (apoptosis) to eliminate the cell and prevent genomic instability. Cancer and radiosensitivity In the absence of the ATM protein, cell-cycle check-point regulation and programmed cell death in response to DSBs are defective. The result is genomic instability which can lead to the development of cancers. Irradiation and radiomimetic compounds induce DSBs which are unable to be repaired appropriately when ATM is absent. Consequently, such agents can prove especially cytotoxic to A–T cells and people with A–T. Delayed pubertal development (gonadal dysgenesis) Infertility is often described as a characteristic of A–T. Whereas this is certainly the case for the mouse model of A–T, in humans it may be more accurate to characterize the reproductive abnormality as gonadal atrophy or dysgenesis characterized by delayed pubertal development. Because programmed DSBs are generated to initiate genetic recombinations involved in the production of sperm and eggs in reproductive organs (a process known as meiosis), meiotic defects and arrest can occur when ATM is not present. Immune system defects and immune-related cancers As lymphocytes develop from stem cells in the bone marrow into mature lymphocytes in the periphery, they rearrange special segments of their DNA [V(D)J recombination process]. This process requires them to make DSBs, which are difficult to repair in the absence of ATM. As a result, most people with A–T have reduced numbers of lymphocytes and some impairment of lymphocyte function (such as an impaired ability to make antibodies in response to vaccines or infections). In addition, broken pieces of DNA in chromosomes involved in the above-mentioned rearrangements have a tendency to recombine with other genes (translocation), making the cells prone to the development of cancer (lymphoma and leukemia). Progeric changes Cells from people with A–T demonstrate genomic instability, slow growth and premature senescence in culture, shortened telomeres and an ongoing, low-level stress response. These factors may contribute to the progeric (signs of early aging) changes of skin and hair sometimes observed in people with A–T. 
For example, DNA damage and genomic instability cause melanocyte stem cell (MSC) differentiation which produces graying. Thus, ATM may be a "stemness checkpoint" protecting against MSC differentiation and premature graying of the hair. Telangiectasia The cause of telangiectasia or dilated blood vessels in the absence of the ATM protein is not yet known. Increased alpha-fetoprotein (AFP) levels Approximately 95% of people with A–T have elevated serum AFP levels after the age of two, and measured levels of AFP appear to increase slowly over time. AFP levels are very high in the newborn, and normally descend to adult levels over the first year to 18 months. The reason why individuals with A–T have elevated levels of AFP is not yet known. Neurodegeneration A–T is one of several DNA repair disorders that result in neurological abnormalities or degeneration. Arguably some of the most devastating symptoms of A–T are a result of progressive cerebellar degeneration, characterized by the loss of Purkinje cells and, to a lesser extent, granule cells (located exclusively in the cerebellum). The cause of this cell loss is not known, though many hypotheses have been proposed based on experiments performed both in cell culture and in the mouse model of A–T. Current hypotheses explaining the neurodegeneration associated with A–T include the following: Defective DNA damage response in neurons which can lead to Failed clearance of genomically damaged neurons during development Transcription stress and abortive transcription including topoisomerase 1 cleavage complex (TOP1cc) dependent lesions Aneuploidy Defective response to oxidative stress characterized by elevated ROS and altered cellular metabolism Mitochondrial dysfunction Defects in neuronal function: Inappropriate cell cycle re-entry of post-mitotic (mature) neurons Synaptic/vesicular dysregulation HDAC4 dysregulation Histone hypermethylation and altered epigenetics Altered protein turnover These hypotheses may not be mutually exclusive and more than one of these mechanisms may underlie neuronal cell death when there is an absence or deficiency of ATM. Further, cerebellar damage and loss of Purkinje and granule cells do not explain all of the neurologic abnormalities seen in people with A–T. The effects of ATM deficiency on the other areas of the brain outside of the cerebellum are being actively investigated. Radiation exposure People with A–T have an increased sensitivity to ionizing radiation (X-rays and gamma rays). Therefore, X-ray exposure should be limited to times when it is medically necessary, as exposing an A–T patient to ionizing radiation can damage cells in such a way that the body cannot repair them. The cells can cope normally with other forms of radiation, such as ultraviolet light, so there is no need for special precautions from sunlight exposure. Diagnosis The diagnosis of A–T is usually suspected by the combination of neurologic clinical features (ataxia, abnormal control of eye movement, and postural instability) with telangiectasia and sometimes increased infections, and confirmed by specific laboratory abnormalities (elevated alpha-fetoprotein levels, increased chromosomal breakage or cell death of white blood cells after exposure to X-rays, absence of ATM protein in white blood cells, or mutations in each of the person's ATM genes). A variety of laboratory abnormalities occur in most people with A–T, allowing for a tentative diagnosis to be made in the presence of typical clinical features. 
Not all abnormalities are seen in all patients. These abnormalities include: Elevated and slowly increasing alpha-fetoprotein levels in serum after 2 years of age Immunodeficiency with low levels of immunoglobulins (especially IgA, IgM, IgG, and IgG subclasses) and low numbers of lymphocytes in the blood Chromosomal instability (broken pieces of chromosomes) Increased sensitivity of cells to x-ray exposure (cells die or develop even more breaks and other damage to chromosomes) Cerebellar atrophy on MRI scan The diagnosis can be confirmed in the laboratory by finding an absence or deficiency of the ATM protein in cultured blood cells, an absence or deficiency of ATM function (kinase assay), or mutations in both copies of the cell's ATM gene. These more specialized tests are not always needed, but are particularly helpful if a child's symptoms are atypical. Differential diagnosis There are several other disorders with similar symptoms or laboratory features that physicians may consider when diagnosing A–T. The three most common disorders that are sometimes confused with A–T are: Cerebral palsy Friedreich's ataxia Cogan oculomotor apraxia Each of these can be distinguished from A–T by the neurologic exam and clinical history. Cerebral palsy (CP) describes a non-progressive disorder of motor function stemming from malformation or early damage to the brain. CP can manifest in many ways, given the different manner in which the brain can be damaged; common to all forms is the emergence of signs and symptoms of impairment as the child develops. However, milestones that have been accomplished and neurologic functions that have developed do not deteriorate in CP as they often do in children with A–T in the late pre-school years. Most children with ataxia caused by CP do not begin to walk at a normal age, whereas most children with A–T start to walk at a normal age even though they often "wobble" from the start. Pure ataxia is a rare manifestation of early brain damage or malformation, however, and the possibility of an occult genetic disorder of the brain should be considered and sought in those in whom ataxia is the chief manifestation of CP. Children with ataxic CP will not manifest the laboratory abnormalities associated with A–T. Cogan oculomotor apraxia is a rare disorder of development. Affected children have difficulty moving only their eyes to a new visual target, so they will turn their head past the target to "drag" the eyes to the new object of interest, then turn the head back. This tendency becomes evident in late infancy and the toddler years, and mostly improves with time. This contrasts with the oculomotor difficulties evident in children with A–T, which are not evident in early childhood but emerge over time. Cogan's oculomotor apraxia is generally an isolated problem, or may be associated with broader developmental delay. Friedreich ataxia (FA) is the most common genetic cause of ataxia in children. Like A–T, FA is a recessive disease, appearing in families without a history of the disorder. FA is caused by mutation in the frataxin gene, most often an expansion of a naturally occurring repetition of the three nucleotide bases GAA from the usual 5–33 repetitions of this trinucleotide sequence to greater than 65 repeats on each chromosome. 
Most often the ataxia appears between 10 and 15 years of age, and differs from A–T by the absence of telangiectasia and oculomotor apraxia, a normal alpha fetoprotein, and the frequent presence of scoliosis, absent tendon reflexes, and abnormal features on the EKG. Individuals with FA manifest difficulty standing in one place that is much enhanced by closure of the eyes (Romberg sign) that is not so apparent in those with A–T – even though those with A–T may have greater difficulty standing in one place with their eyes open. There are other rare disorders that can be confused with A–T, either because of similar clinical features, a similarity of some laboratory features, or both. These include: Ataxia–oculomotor apraxia type 1 (AOA1) Ataxia–oculomotor apraxia type 2 (AOA2 also known as SCAR1) Ataxia–telangiectasia like disorder (ATLD) Nijmegen breakage syndrome (NBS) Ataxia–oculomotor apraxia type 1 (AOA1) is an autosomal recessive disorder similar to A–T in manifesting increasing problems with coordination and oculomotor apraxia, often at a similar age to those having A–T. It is caused by mutation in the gene coding for the protein aprataxin. Affected individuals differ from those with A–T by the early appearance of peripheral neuropathy, early in their course manifest difficulty with initiation of gaze shifts, and the absence of ocular telangiectasia, but laboratory features are of key importance in the differentiation of the two. Individuals with AOA1 have a normal AFP, normal measures of immune function, and after 10–15 years have low serum levels of albumin. Genetic testing of the aprataxin gene can confirm the diagnosis. There is no enhanced risk for cancer. Ataxia–oculomotor apraxia type 2 (AOA2) is an autosomal recessive disorder also similar to A–T in manifesting increasing problems with coordination and peripheral neuropathy, but oculomotor apraxia is present in only half of affected individuals. Ocular telangiectasia do not develop. Laboratory abnormalities of AOA2 are like A–T, and unlike AOA1, in having an elevated serum AFP level, but like AOA1 and unlike A–T in having normal markers of immune function. Genetic testing of the senataxin gene (SETX) can confirm the diagnosis. There is no enhanced risk for cancer. Ataxia–telangiectasia like disorder (ATLD) is an extremely rare condition, caused by mutation in the hMre11 gene, that could be considered in the differential diagnosis of A–T. Patients with ATLD are very similar to those with A–T in showing a progressive cerebellar ataxia, hypersensitivity to ionizing radiation and genomic instability. Those rare individuals with ATLD who are well described differ from those with A–T by the absence of telangiectasia, normal immunoglobulin levels, a later onset, and a slower progression of the symptoms. Because of its rarity, it is not yet known whether or not ATLD carries an increased risk to develop cancer. Because those mutations of Mre11 that severely impair the MRE11 protein are incompatible with life, individuals with ATLD all have some partial function of the Mre11 protein, and hence likely all have their own levels of disease severity. Nijmegen breakage syndrome (NBS) is a rare genetic disorder that has similar chromosomal instability to that seen in people with A–T, but the problems experienced are quite different. Children with NBS have significant microcephaly, a distinct facial appearance, short stature, and moderate cognitive impairment, but do not experience any neurologic deterioration over time. 
Like those with A–T, children with NBS have enhanced sensitivity to radiation, disposition to lymphoma and leukemia, and some laboratory measures of impaired immune function, but do not have ocular telangiectasia or an elevated level of AFP. The proteins expressed by the hMre11 (defective in ATLD) and Nbs1 (defective in NBS) genes exist in the cell as a complex, along with a third protein expressed by the hRad50 gene. This complex, known as the MRN complex, plays an important role in DNA damage repair and signaling and is required to recruit ATM to the sites of DNA double strand breaks. Mre11 and Nbs1 are also targets for phosphorylation by the ATM kinase. Thus, the similarity of the three diseases can be explained in part by the fact that the protein products of the three genes mutated in these disorders interact in common pathways in the cell. Differentiation of these disorders is often possible with clinical features and selected laboratory tests. In cases where the distinction is unclear, clinical laboratories can identify genetic abnormalities of ATM, aprataxin and senataxin, and specialty centers can identify abnormality of the proteins of potentially responsible genes, such as ATM, MRE11, nibrin, TDP1, aprataxin and senataxin as well as other proteins important to ATM function such as ATR, DNA-PK, and RAD50. Management Ataxia and other neurologic problems There is no treatment known to slow or stop the progression of the neurologic problems. Immune problems All individuals with A–T should have at least one comprehensive immunologic evaluation that measures the number and type of lymphocytes in the blood (T-lymphocytes and B-lymphocytes), the levels of serum immunoglobulins (IgG, IgA, and IgM) and antibody responses to T-dependent (e.g., tetanus, Hemophilus influenzae b) and T-independent (23-valent pneumococcal polysaccharide) vaccines. For the most part, the pattern of immunodeficiency seen in an A–T patient early in life (by age five) will be the same pattern seen throughout the lifetime of that individual. Therefore, the tests need not be repeated unless that individual develops more problems with infection. Problems with immunity sometimes can be overcome by immunization. Vaccines against common bacterial respiratory pathogens such as Hemophilus influenzae, pneumococci and influenza virus (the "flu") are commercially available and often help to boost antibody responses, even in individuals with low immunoglobulin levels. If the vaccines do not work and the patient continues to have problems with infections, gamma globulin therapy (IV or subcutaneous infusions of antibodies collected from normal individuals) may be of benefit. A small number of people with A–T develop an abnormality in which one or more types of immunoglobulin are increased far beyond the normal range. In a few cases, the immunoglobulin levels can be increased so much that the blood becomes thick and does not flow properly. Therapy for this problem must be tailored to the specific abnormality found and its severity. If an individual patient's susceptibility to infection increases, it is important to reassess immune function in case deterioration has occurred and a new therapy is indicated. If infections are occurring in the lung, it is also important to investigate the possibility of dysfunctional swallow with aspiration into the lungs (see above sections under Symptoms: Lung Disease and Symptoms: Feeding, Swallowing and Nutrition.) Most people with A–T have low lymphocyte counts in the blood. 
This problem seems to be relatively stable with age, but a rare number of people do have progressively decreasing lymphocyte counts as they get older. In the general population, very low lymphocyte counts are associated with an increased risk for infection. Such individuals develop complications from live viral vaccines (measles, mumps, rubella and chickenpox), chronic or severe viral infections, yeast infections of the skin and vagina, and opportunistic infections (such as pneumocystis pneumonia). Although lymphocyte counts are often as low in people with A–T, they seldom have problems with opportunistic infections. (The one exception to that rule is that problems with chronic or recurrent warts are common.) The number and function of T-lymphocytes should be re-evaluated if a person with A–T is treated with corticosteroid drugs such as prednisone for longer than a few weeks or is treated with chemotherapy for cancer. If lymphocyte counts are low in people taking those types of drugs, the use of prophylactic antibiotics is recommended to prevent opportunistic infections. If the tests show significant abnormalities of the immune system, a specialist in immunodeficiency or infectious diseases will be able to discuss various treatment options. Absence of immunoglobulin or antibody responses to vaccine can be treated with replacement gamma globulin infusions, or can be managed with prophylactic antibiotics and minimized exposure to infection. If antibody function is normal, all routine childhood immunizations including live viral vaccines (measles, mumps, rubella and varicella) should be given. In addition, several "special" vaccines (that is, licensed but not routine for otherwise healthy children and young adults) should be given to decrease the risk that an A–T patient will develop lung infections. The patient and all household members should receive the influenza (flu) vaccine every fall. People with A–T who are less than two years old should receive three doses of a pneumococcal conjugate vaccine (Prevnar) given at two month intervals. People older than two years who have not previously been immunized with Prevnar should receive two doses of Prevnar. At least 6 months after the last Prevnar has been given and after the child is at least two years old, the 23-valent pneumococcal vaccine should be administered. Immunization with the 23-valent pneumococcal vaccine should be repeated approximately every five years after the first dose. In people with A–T who have low levels of IgA, further testing should be performed to determine whether the IgA level is low or completely absent. If absent, there is a slightly increased risk of a transfusion reaction. "Medical Alert" bracelets are not necessary, but the family and primary physician should be aware that if there is elective surgery requiring red cell transfusion, the cells should be washed to decrease the risk of an allergic reaction. People with A–T also have an increased risk of developing autoimmune or chronic inflammatory diseases. This risk is probably a secondary effect of their immunodeficiency and not a direct effect of the lack of ATM protein. The most common examples of such disorders in A–T include immune thrombocytopenia (ITP), several forms of arthritis, and vitiligo. Lung disease Recurrent sinus and lung infections can lead to the development of chronic lung disease. Such infections should be treated with appropriate antibiotics to prevent and limit lung injury. 
Administration of antibiotics should be considered when children and adults have prolonged respiratory symptoms (greater than 7 days), even following what was presumed to have been a viral infection. To help prevent respiratory illnesses from common respiratory pathogens, annual influenza vaccinations should be given and pneumococcal vaccines should be administered when appropriate. Antibiotic treatment should also be considered in children with chronic coughs that are productive of mucus, those who do not respond to aggressive pulmonary clearance techniques and in children with muco-purulent secretions from the sinuses or chest. A wet cough can also be associated with chronic aspiration, which should be ruled out through proper diagnostic studies; however, aspiration and respiratory infections are not necessarily exclusive of each other. In children and adults with bronchiectasis, chronic antibiotic therapy should be considered to slow chronic lung disease progression. Culturing of the sinuses may be needed to direct antibiotic therapy. This can be done by an Ear, Nose and Throat (ENT) specialist. In addition, diagnostic bronchoscopy may be necessary in people who have recurrent pneumonias, especially those who do not respond or respond incompletely to a course of antibiotics. Clearance of bronchial secretions is essential for good pulmonary health and can help limit injury from acute and chronic lung infections. Children and adults with increased bronchial secretions can benefit from routine chest therapy using the manual method, an Acapella device or a chest physiotherapy vest. Chest physiotherapy can help bring up mucus from the lower bronchial tree; however, an adequate cough is needed to remove secretions. In people who have decreased lung reserve and a weak cough, use of an insufflator-exsufflator (cough-assist) device may be useful as a maintenance therapy or during acute respiratory illnesses to help remove bronchial secretions from the upper airways. Evaluation by a pulmonology specialist, however, should first be done to properly assess patient suitability. Children and adults with a chronic dry cough, increased work of breathing (fast respiratory rate, shortness of breath at rest or with activities) and absence of an infectious process to explain respiratory symptoms should be evaluated for interstitial lung disease or another intrapulmonary process. Evaluation by a pulmonologist and a CT scan of the chest should be considered in individuals with symptoms of interstitial lung disease or to rule out other non-infectious pulmonary processes. People diagnosed with interstitial lung disease may benefit from systemic steroids. Feeding, swallowing and nutrition Oral intake may be aided by teaching persons with A–T how to drink, chew and swallow more safely. The propriety of treatments for swallowing problems should be determined following evaluation by an expert in the field of speech-language pathology. Dieticians may help treat nutrition problems by recommending dietary modifications, including high-calorie foods or food supplements. A feeding (gastrostomy) tube is recommended when any of the following occur: A child cannot eat enough to grow or a person of any age cannot eat enough to maintain weight; Aspiration is problematic; Mealtimes are stressful or too long, interfering with other activities. 
Education and socialization Most children with A–T have difficulty in school because of a delay in response time to visual, verbal or other cues, slurred and quiet speech (dysarthria), abnormalities of eye control (oculomotor apraxia), and impaired fine motor control. Despite these problems, children with A–T often enjoy school if proper accommodations for their disability can be made. The decision about the need for special education classes or extra help in regular classes is highly influenced by the local resources available. Decisions about proper educational placement should be revisited as often as circumstances warrant. Despite their many neurologic impairments, most individuals with A–T are very socially aware and socially skilled, and thus benefit from sustained peer relationships developed at school. Some individuals are able to function quite well despite their disabilities and a few have graduated from community colleges. Many of the problems encountered will benefit from special attention, as problems are often related more to "input and output" issues than to intellectual impairment. Problems with eye movement control make it difficult for people with A–T to read, yet most fully understand the meaning and nuances of text that is read to them. Delays in speech initiation and lack of facial expression make it seem that they do not know the answers to questions. Reduction of the skilled effort needed to answer questions, and an increase of the time available to respond, is often rewarded by real accomplishment. It is important to recognize that intellectual disability is not regularly a part of the clinical picture of A–T, although school performance may be suboptimal because of the many difficulties in reading, writing, and speech. Children with A–T are often very conscious of their appearance, and strive to appear normal to their peers and teachers. Life within the ataxic body can be tiring. The enhanced effort needed to maintain appearances and the increased energy expended in abnormal tone and extra movements all contribute to physical and mental fatigue. As a consequence, for some a shortened school day yields real benefits. General recommendations All children with A–T need special attention to the barriers they experience in school. In the United States, this takes the form of a formal IEP (Individualized Education Program). Children with A–T tend to be excellent problem solvers. Their involvement in how best to perform tasks should be encouraged. Speech-language pathologists may facilitate communication skills that enable persons with A–T to get their messages across (using key words vs. complete sentences) and teach strategies to decrease frustration associated with the increased time needed to respond to questions (e.g., holding up a hand and informing others about the need to allow more time for responses). Traditional speech therapies that focus on the production of specific sounds and strengthening of the lip and tongue muscles are rarely helpful. Classroom aides may be appropriate, especially to help with scribing, transportation through the school, mealtimes and toileting. The impact of an aide on peer relationships should be monitored carefully. Physical therapy is useful to maintain strength and general cardiovascular health. Horseback therapy and exercises in a swimming pool are often well tolerated and fun for people with A–T. However, no amount of practice will slow the cerebellar degeneration or improve neurologic function. 
Exercise to the point of exhaustion should be avoided. Hearing is normal throughout life. Books on tape may be a useful adjunct to traditional school materials. Early use of computers (preschool) with word completion software should be encouraged. Practicing coordination (e.g. balance beam or cursive writing exercises) is not helpful. Occupational therapy is helpful for managing daily living skills. Allow rest time, shortened days, a reduced class schedule, reduced homework, and modified tests as necessary. Like all children, those with A–T need to have goals to experience the satisfaction of making progress. Social interactions with peers are important, and should be taken into consideration for class placement. For everyone, long-term peer relationships can be the most rewarding part of life; for those with A–T, establishing these connections in school years can be helpful. Treatment No curative medication has been approved for the treatment of inherited cerebellar ataxias, including ataxia–telangiectasia. Nonetheless, a new study that identified retroelement insertions in ATM as one of the causes of ATM loss-of-function in A–T patients has also suggested that antisense oligonucleotides might be a viable therapy. In this research article, antisense oligonucleotides corrected the mis-splicing caused by the retroelement insertion of the DUSP16 pseudogene in ATM in vitro, restoring the level of normal ATM transcripts. N-Acetyl-Leucine N-Acetyl-Leucine is an orally administered, modified amino acid that is being developed as a novel treatment for multiple rare and common neurological disorders by IntraBio Inc (Oxford, United Kingdom). N-Acetyl-Leucine has been granted multiple orphan drug designations from the U.S. Food & Drug Administration (FDA) and the European Medicines Agency (EMA) for the treatment of various genetic diseases, including ataxia–telangiectasia. N-Acetyl-Leucine has also been granted orphan drug designations in the US and EU for related inherited cerebellar ataxias, such as the spinocerebellar ataxias. Published case series studies have demonstrated the positive clinical benefit of treatment with N-Acetyl-Leucine in various inherited cerebellar ataxias. A multinational clinical trial investigating N-Acetyl-L-Leucine for the treatment of ataxia–telangiectasia began in 2019. IntraBio is also conducting two parallel clinical trials with N-Acetyl-L-Leucine for the treatment of Niemann-Pick disease type C and GM2 gangliosidosis (Tay-Sachs and Sandhoff disease). Future opportunities to develop N-Acetyl-Leucine include Lewy body dementia, amyotrophic lateral sclerosis, restless legs syndrome, multiple sclerosis, and migraine. Prognosis Median survival in two large cohort studies was 25 and 19 years of age, with a wide range. Life expectancy does not correlate well with severity of neurological impairment. Epidemiology Individuals of all races and ethnicities are affected equally. The incidence worldwide is estimated to be between 1 in 40,000 and 1 in 100,000 people. Research directions An open-label Phase II clinical trial studying the use of red blood cells (erythrocytes) loaded with dexamethasone sodium phosphate found that this treatment improved symptoms and appeared to be well tolerated. This treatment uses a unique delivery system, employing the patient's own red blood cells as the vehicle for the drug. 
History

The condition was first described in 1941 by Denise Louis-Bar, after whom it is also known as Louis-Bar syndrome.

References

External links
About A–T from the NINDS
Orphanet for A–T
GeneReviews for ataxia–telangiectasia
Replication-Independent Double-Strand Breaks (DSBs): discusses the importance of the ATM kinase

Chromosome instability syndromes Genodermatoses Systemic atrophies primarily affecting the central nervous system Neurodegenerative disorders IUIS-PID table 3 immunodeficiencies DNA replication and repair-deficiency disorders Syndromes affecting the nervous system Syndromes with tumors Rare syndromes
Ataxia–telangiectasia
Biology
9,839
43,453,945
https://en.wikipedia.org/wiki/2014%20Kunshan%20explosion
The 2014 Kunshan explosion () was a dust explosion that occurred at the Zhongrong Metal Production Company, an automotive parts factory located in Kunshan, Jiangsu, China, on 2 August 2014. As of December 30, 2014, the explosion had killed 146 workers and injured 114 others.

Event

A massive explosion occurred at the factory at 7:37 a.m. At the time, more than 260 people were present, more than the usual number of employees, because overtime wages were doubled on weekends. 44 people died at the scene of the explosion, while another 31 died at local hospitals. Five hospitals in Kunshan and nearby Suzhou treated over 180 wounded. The explosion is believed to have been caused by flames igniting metal-polishing dust.

References

Kunshan explosion Dust explosions History of Suzhou Industrial fires and explosions in China Kunshan
2014 Kunshan explosion
Chemistry
181
41,422,644
https://en.wikipedia.org/wiki/SDSS%20J1106%2B1939
SDSS J1106+1939 (SDSS J110644.95+193930.6) is a quasar, notable for its energetic matter outflow. It is the record holder for the most powerful matter outflow by a quasar. The engine is a supermassive black hole, which accretes matter and expels around 400 solar masses of material per year at a speed of about 8,000 km/s. The outflow carries a power of about 10^46 erg/s, more than two trillion times the energy output of the Sun, making the quasar one of the most luminous on record. The quasar has an apparent visual magnitude of about 19, despite its extreme distance of 11 billion light-years. The outflow of matter from the quasar carries about 1/20 of the quasar's luminosity.

References

Quasars Supermassive black holes SDSS objects Leo (constellation)
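A minimal sketch of the arithmetic behind the figures quoted above, assuming a solar luminosity of about 3.846 x 10^33 erg/s (the quoted outflow power and the 1/20 fraction are taken as approximate):

# Illustrative check of the quoted outflow power (assumed constants, not from the article's sources)
L_SUN_ERG_S = 3.846e33        # assumed solar luminosity in erg/s
outflow_power = 1e46          # quoted outflow power in erg/s

ratio_to_sun = outflow_power / L_SUN_ERG_S
print(f"outflow power ~ {ratio_to_sun:.2e} solar luminosities")   # ~2.6e12, i.e. "more than two trillion"

# If the outflow carries roughly 1/20 of the quasar's luminosity, the quasar itself would be about:
quasar_luminosity = 20 * outflow_power
print(f"implied quasar luminosity ~ {quasar_luminosity / L_SUN_ERG_S:.1e} solar luminosities")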
SDSS J1106+1939
Physics,Astronomy
196
2,922,278
https://en.wikipedia.org/wiki/Omega%20Canis%20Majoris
Omega Canis Majoris, Latinized from ω Canis Majoris, is a solitary, blue-white-hued star in the equatorial constellation of Canis Major. It is visible to the naked eye with an apparent visual magnitude of about 4. Based upon an annual parallax shift of just 3.58 mas as seen from Earth, this system is located roughly 910 light-years from the Sun.

This star has a stellar classification of B2.5Ve, indicating it is a main sequence Be star, although it has also been classified as a subgiant. One of the most observed Be stars of the Southern Hemisphere, Omega Canis Majoris is classified as a Gamma Cassiopeiae-type variable star. Both the luminosity and the radial velocity vary with a primary cyclical period of 1.372 days. The variation in brightness, ranging from magnitude +3.60 to +4.18, changes over time, which suggests there are two overlapping periods of 1.37 and 1.49 days. The star also shows transient periodicities following outbursts.

This is a massive star with ten times the mass of the Sun and 6.2 times the Sun's radius. At an estimated age of 22.5 million years, it is radiating 13,081 times the Sun's luminosity from its photosphere at an effective temperature of 21,878 K. The star is being viewed nearly pole-on, so the measured projected rotational velocity of 80 km/s is only a fraction of the true equatorial velocity, estimated at 350 km/s. It is surrounded by a symmetric circumstellar decretion disk of material that is being heated by the star, which in turn adds emission lines to the combined spectrum.

References

External links

B-type main-sequence stars B-type subgiants Be stars Gamma Cassiopeiae variable stars Canis Majoris, Omega Canis Major Durchmusterung objects Canis Majoris, 28 056139 035037 2749
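A minimal sketch of the arithmetic behind the quoted distance and rotation figures, assuming the standard parallax-distance relation (d in parsecs = 1000 / parallax in mas), 3.2616 light-years per parsec, and v sin i = v_eq * sin i:

import math

# Parallax to distance (quoted parallax: 3.58 mas)
parallax_mas = 3.58
distance_pc = 1000.0 / parallax_mas            # ~279 pc
distance_ly = distance_pc * 3.2616             # ~911 ly, i.e. "roughly 910 light-years"
print(f"distance ~ {distance_pc:.0f} pc ~ {distance_ly:.0f} ly")

# Implied inclination from projected vs. estimated true equatorial rotation
v_sin_i = 80.0        # measured projected rotational velocity, km/s
v_eq = 350.0          # estimated true equatorial velocity, km/s
inclination_deg = math.degrees(math.asin(v_sin_i / v_eq))
print(f"implied inclination ~ {inclination_deg:.0f} degrees (consistent with a nearly pole-on view)")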
Omega Canis Majoris
Astronomy
424
2,902,495
https://en.wikipedia.org/wiki/8%20Andromedae
8 Andromedae, abbreviated 8 And, is a probable triple star system in the northern constellation of Andromeda. 8 Andromedae is the Flamsteed designation. It is visible to the naked eye with an apparent visual magnitude of 4.82. Based upon an annual parallax shift of , it is located about 570 light-years from the Earth. It is moving closer with a heliocentric radial velocity of −8 km/s.

The primary component is an ageing red giant star with a stellar classification of . The suffix notation indicates this is a mild barium star, which means the stellar atmosphere is enriched with s-process elements. It is either a member of a close binary system, having previously acquired these elements from a (now) white dwarf companion, or it is on the asymptotic giant branch and is generating the elements itself. This is a periodic variable of unknown type, changing in brightness with an amplitude of 0.0161 magnitude at a frequency of 0.23354 d−1, or once every 4.3 days.

The third component is a magnitude 13.0 star at an angular separation of along a position angle of 164°, as of 2015. It has a Gaia Data Release 3 parallax of and a proper motion almost identical to that of 8 Andromedae. A number of other faint stars within a few arc-minutes of 8 Andromedae have been listed as companions, but none are at the same distance. Within Andromeda it is the second star in a northerly chain asterism: 11 Andromedae lies further to the south-west, with 7, 5, and then 3 Andromedae in the other direction.

References

M-type giants Suspected variables Barium stars Triple stars 08 Andromedae Durchmusterung objects Andromedae, 08 219734 115022 8860
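A minimal sketch of the conversion from the quoted variability frequency to the quoted period, assuming period = 1 / frequency:

# Frequency to period for the quoted brightness variation
frequency_per_day = 0.23354
period_days = 1.0 / frequency_per_day
print(f"period ~ {period_days:.2f} days")   # ~4.28 days, i.e. "once every 4.3 days"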
8 Andromedae
Astronomy
382