Columns: id (int64), url (string), text (string), source (string), categories (string), token_count (int64)
52,780,566
https://en.wikipedia.org/wiki/Tuniu
Tuniu Corporation is a Chinese online travel agency. Products and services include packaged tours, accommodation reservation, airline and railway ticketing, car rentals, and corporate travel. The company listed on the Nasdaq Stock Exchange on May 9, 2014. The company headquarters are located in Nanjing with offices in Shanghai and Beijing. History Founded in 2006 in Nanjing by current CEO Donald Dunde Yu and current COO Alex Haifeng Yan, the company was fully incorporated on June 1, 2008. On May 9, 2014, Tuniu was listed on the Nasdaq Stock Exchange under the ticker symbol TOUR, co-managed by Morgan Stanley & Co, Credit Suisse Securities LLC and China Renaissance Securities. Tuniu raised $72 million in its initial public offering, pricing 8 million shares at $9 per share. CEO Donald Dunde Yu rang the opening bell at the Nasdaq MarketSite in Times Square. In April 2015, Tuniu was the subject of a boycott by seventeen Chinese travel agencies over a pricing dispute. The issue was settled a few days later following an investigation by the China National Tourism Administration, with partner relations returning to normal. Tuniu's share price fell 4.7% following news of the dispute. On August 23, 2016, Tuniu's Board of Directors authorized a share repurchase program to repurchase up to $150 million worth of shares. Tuniu's share price had fallen below its opening price. Investments and acquisitions On July 1, 2014, Ctrip CEO James Jianzhang Liang was appointed to Tuniu's board of directors. On December 10, 2014, Tuniu and Ctrip signed a strategic collaboration agreement to share travel resources. On December 15, 2014, Tuniu announced a $148 million investment in aggregate from a group that included the investment arms of Hony Capital, JD.com, Ctrip Investment Holding Ltd, and the personal holding companies of Tuniu's CEO and COO. Ctrip acquired $15 million of Tuniu shares during Tuniu's IPO, and currently owns over 3% of Tuniu's outstanding shares. In May 2015, Tuniu announced an investment of $500 million from a group of investors led by JD.com. JD.com became the largest shareholder in Tuniu with a 27.5% stake. On March 9, 2015, Tuniu announced the acquisition of majority stakes in two Chinese travel agencies, Hangzhou-based Zhejiang Zhongshan International Services and Tianjin-based China Classical Holiday. On January 21, 2016, Tuniu announced the completion of a US$500 million investment from HNA Tourism Group. The purchase price was US$5.50 per Class A ordinary share. HNA Tourism Group acquired a 24.1% stake in Tuniu. Brand ambassadors In July 2016, Tuniu announced the signing of Taiwanese pop stars Jay Chou and Jimmy Lin as its celebrity brand ambassadors. References External links Official Website Chinese companies established in 2006 Online travel agencies Online retailers of China Companies based in Nanjing E-commerce
Tuniu
Technology
605
3,132,697
https://en.wikipedia.org/wiki/Mordell%20curve
In algebra, a Mordell curve is an elliptic curve of the form y² = x³ + n, where n is a fixed non-zero integer. These curves were closely studied by Louis Mordell, from the point of view of determining their integer points. He showed that every Mordell curve contains only finitely many integer points (x, y). In other words, the differences of perfect squares and perfect cubes tend to infinity. The question of how fast these differences grow was dealt with in principle by Baker's method; conjecturally, the issue is addressed by Marshall Hall's conjecture. Properties If (x, y) is an integer point on a Mordell curve, then so is (x, −y). If (x, y) is a rational point on a Mordell curve with y ≠ 0, then so is ((x⁴ − 8nx)/4y², (−x⁶ − 20nx³ + 8n²)/8y³). Moreover, if xy ≠ 0 and n is not 1 or −432, an infinite number of rational solutions can be generated this way. This formula is known as Bachet's duplication formula. When n ≠ 0, the Mordell curve only has finitely many integer solutions (see Siegel's theorem on integral points). There are certain values of n for which the corresponding Mordell curve has no integer solutions; these values are 6, 7, 11, 13, 14, 20, 21, 23, 29, 32, 34, 39, 42, ... for positive n, and −3, −5, −6, −9, −10, −12, −14, −16, −17, −21, −22, ... for negative n. The specific case where n = −2 is also known as Fermat's Sandwich Theorem. List of solutions The following is a list of solutions to the Mordell curve y² = x³ + n for |n| ≤ 25. Only solutions with y ≥ 0 are shown. In 1998, J. Gebel, A. Pethö, and H. G. Zimmer found all integer points for 0 < |n| ≤ 10⁴. In 2015, M. A. Bennett and A. Ghadermarzi computed integer points for 0 < |n| ≤ 10⁷. References External links J. Gebel, Data on Mordell's curves for –10000 ≤ n ≤ 10000 M. Bennett, Data on Mordell curves for −10⁷ ≤ n ≤ 10⁷ Algebraic curves Diophantine equations Elliptic curves
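The duplication formula and the finiteness statements above lend themselves to a quick computational check. The following Python sketch (function names and the search bound are my own illustrative choices, not from any cited source) applies Bachet's duplication formula with exact rational arithmetic and naively enumerates integer points up to a cutoff; Siegel's theorem guarantees finiteness but gives no effective bound, so the cutoff is just an assumption.

```python
from fractions import Fraction
from math import isqrt

def bachet_duplicate(x, y, n):
    """Apply Bachet's duplication formula to a rational point (x, y), y != 0,
    on y^2 = x^3 + n, returning another rational point on the same curve."""
    x, y = Fraction(x), Fraction(y)
    new_x = (x**4 - 8*n*x) / (4*y**2)
    new_y = (-x**6 - 20*n*x**3 + 8*n**2) / (8*y**3)
    assert new_y**2 == new_x**3 + n  # sanity check: the image lies on the curve
    return new_x, new_y

def integer_points(n, x_max=10_000):
    """Naively list integer points (x, y) with y >= 0 and |x| <= x_max."""
    pts = []
    for x in range(-x_max, x_max + 1):
        rhs = x**3 + n
        if rhs < 0:
            continue
        y = isqrt(rhs)
        if y * y == rhs:
            pts.append((x, y))
    return pts

if __name__ == "__main__":
    print(integer_points(-2))          # Fermat's sandwich case: [(3, 5)]
    print(bachet_duplicate(3, 5, -2))  # a new rational point on y^2 = x^3 - 2
```

For n = −2 the search returns only (3, 5), and duplicating it yields the rational point (129/100, 383/1000), which the assertion confirms lies on the curve.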
Mordell curve
Mathematics
499
77,770,500
https://en.wikipedia.org/wiki/Dioxethedrin
Dioxethedrin, or dioxethedrine, also known as 3,4-dihydroxy-N-ethylnorephedrine, is a sympathomimetic medication. It was a component of the antitussive syrup Bexol (a combination of dioxethedrin, codeine, and promethazine). It is an ephedrine derivative (and hence a phenethylamine and amphetamine) and is described as a bronchodilator and β-adrenergic receptor agonist. Analogues of dioxethedrin include dioxifedrine (α-methylepinephrine; 3,4-dihydroxyephedrine), corbadrine (levonordefrin; α-methylnorepinephrine), and α-methyldopamine. References Abandoned drugs Antitussives Beta-adrenergic agonists Beta-Hydroxyamphetamines Bronchodilators Catecholamines Decongestants Peripherally selective drugs Sympathomimetics Triols
Dioxethedrin
Chemistry
239
78,086,043
https://en.wikipedia.org/wiki/SB269652
SB269652 is an experimental dopamine D2 and D3 receptor negative allosteric modulator. It is of interest in the potential development of novel antipsychotics for the treatment of schizophrenia with reduced side effects, such as extrapyramidal symptoms. The drug is described as a dual orthosteric and allosteric (i.e., bitopic) modulator of the dopamine D2 and D3 receptors, as an atypical allosteric modulator of these receptors, and as specifically targeting D2–D3 receptor dimers. SB269652 was first described in the scientific literature by 1999. It was originally thought to act purely as an antagonist of the dopamine D2 and D3 receptors, but was serendipitously found to be a negative allosteric modulator of these receptors in 2010. It was the first dopamine D2 and D3 receptor negative allosteric modulator to be discovered. More potent analogues of SB269652 have been developed. References Abandoned drugs Carboxamides Cyano compounds Cyclohexanes D2 antagonists D3 antagonists Dopamine receptor modulators Experimental drugs developed for schizophrenia Isoquinolines Indoles
SB269652
Chemistry
262
2,928,212
https://en.wikipedia.org/wiki/Bauschinger%20effect
The Bauschinger effect refers to a property of materials where the material's stress/strain characteristics change as a result of the microscopic stress distribution of the material. For example, an increase in tensile yield strength occurs at the expense of compressive yield strength. The effect is named after German engineer Johann Bauschinger. While more tensile cold working increases the tensile yield strength, the local initial compressive yield strength after tensile cold working is actually reduced. The greater the tensile cold working, the lower the compressive yield strength. It is a general phenomenon found in most polycrystalline metals. Based on the cold work structure, two types of mechanisms are generally used to explain the Bauschinger effect: Local back stresses may be present in the material, which assist the movement of dislocations in the reverse direction. The pile-up of dislocations at grain boundaries and Orowan loops around strong precipitates are two main sources of these back stresses. When the strain direction is reversed, dislocations of the opposite sign can be produced from the same source that produced the slip-causing dislocations in the initial direction. Dislocations with opposite signs can attract and annihilate each other. Since strain hardening is related to an increased dislocation density, reducing the number of dislocations reduces strength. The net result is that the yield strength for strain in the opposite direction is less than it would be if the strain had continued in the initial direction. Mechanism of action The Bauschinger effect is primarily attributed to the interaction between dislocations and the internal stress fields within the material. Initially, as external stress is applied, dislocations are generated and traverse the crystal lattice, creating internal stress fields. These fields, in turn, interact with the applied stress, leading to a phenomenon known as work hardening or strain hardening. With the accumulation of dislocations, the material's yield strength rises, hindering further plastic deformation. When stresses are applied in the reverse direction, dislocation motion is aided by the back stresses that had built up at the dislocation barriers, and the barriers encountered in the reverse direction are unlikely to be as strong as those in the original loading direction. Hence the dislocations glide easily, resulting in a lower yield stress for plastic deformation in the reversed loading direction. The Bauschinger effect varies in magnitude based on factors such as material composition, crystal structure, and prior plastic deformation. Materials with a higher density of dislocations and more internal stress fields tend to exhibit a more pronounced Bauschinger effect. Additionally, the Bauschinger effect often accompanies other phenomena, such as permanent softening and transient softening effects. Residual lattice stresses and strains, which are associated with anisotropy in deformation, also contribute considerably to the Bauschinger effect. During loading-unloading cycles, dislocations do not return to their original position after unloading, which leaves residual strains in the lattice. These strains interact with stresses applied in the opposite direction, which affects the material's response to subsequent loading-unloading cycles. The largest observed effect is plastic yield asymmetry, wherein the material yields at different stress values in different loading directions.
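The yield asymmetry described above is commonly captured in continuum plasticity through a kinematic-hardening (back-stress) term. The sketch below is a minimal, illustrative one-dimensional model; the parameter values and function names are my own assumptions rather than data from a specific reference. After a tensile excursion the back stress shifts the elastic range, so on load reversal yielding begins at a stress of smaller magnitude than the initial yield stress.

```python
import numpy as np

def simulate(strain_path, E=200e3, sigma_y=250.0, C=20e3):
    """1-D rate-independent plasticity with linear kinematic hardening (MPa units).
    Yield condition: |sigma - alpha| <= sigma_y; back-stress evolution: dalpha = C * d(eps_p)."""
    sigma, alpha, eps_p = 0.0, 0.0, 0.0
    eps_prev = 0.0
    history = []
    for eps in strain_path:
        d_eps = eps - eps_prev
        eps_prev = eps
        sigma_trial = sigma + E * d_eps                 # elastic predictor
        f = abs(sigma_trial - alpha) - sigma_y          # trial yield function
        if f > 0.0:                                     # plastic correction (return mapping)
            d_gamma = f / (E + C)
            sign = np.sign(sigma_trial - alpha)
            sigma = sigma_trial - E * d_gamma * sign
            alpha += C * d_gamma * sign
            eps_p += d_gamma * sign
        else:
            sigma = sigma_trial
        history.append((eps, sigma, alpha))
    return history

# Load to 2% tension, then reverse towards compression.
path = np.concatenate([np.linspace(0.0, 0.02, 200), np.linspace(0.02, -0.02, 400)])
hist = simulate(path)
# Reverse yielding starts when sigma - alpha = -sigma_y, i.e. at a stress of
# alpha_max - sigma_y, whose magnitude is well below the initial yield stress:
alpha_max = max(a for _, _, a in hist)
print(f"initial yield stress: 250.0 MPa, reverse yield begins near {alpha_max - 250.0:.1f} MPa")
```

With these illustrative parameters the back stress reaches roughly 340 MPa at 2% strain, so reverse yielding starts near +90 MPa instead of −250 MPa, which is exactly the asymmetry the Bauschinger effect describes.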
There are three types of residual stresses, type I, type II and type III, that contribute to the Bauschinger effect in polycrystalline materials [1]. Type I residual stresses arise during manufacturing due to thermal gradients and usually self-equilibrate over a length comparable to the macroscopic dimensions of the material. So, they do not contribute significantly to the Bauschinger effect [2]. However, type II stresses equilibrate at the grain size scale and thus contribute significantly to the Bauschinger effect. They result from strain incompatibility between neighboring grains due to plastic and elastic anisotropy. Thus, they are responsible for changing the material's yield behavior along different directions by affecting dislocation motion in these differently oriented grains [3]. Type III stresses, on the other hand, arise due to mismatch between the soft matrix material and hard precipitates or dislocation cell walls (microstructural elements). They act over extremely short distances but significantly affect regions of microstructural heterogeneity. Dislocation pile-ups and stress concentrations at grain boundaries are examples of this type of residual stress [4], [5]. Taken together, these three types of residual stresses impact properties such as strength, ductility, fatigue behavior and durability. Thus, understanding the mechanism of residual stresses is important to mitigate the influence of the Bauschinger effect. References [1] A. A. Mamun, R. J. Moat, J. Kelleher, and P. J. Bouchard, “Origin of the Bauschinger effect in a polycrystalline material,” Mater. Sci. Eng. A, vol. 707, pp. 576–584, Nov. 2017, doi: 10.1016/j.msea.2017.09.091. [2] J. Hu, B. Chen, D. J. Smith, P. E. J. Flewitt, and A. C. F. Cocks, “On the evaluation of the Bauschinger effect in an austenitic stainless steel—The role of multi-scale residual stresses,” Int. J. Plast., vol. 84, pp. 203–223, Sep. 2016, doi: 10.1016/j.ijplas.2016.05.009. [3] B. Chen et al., “Role of the misfit stress between grains in the Bauschinger effect for a polycrystalline material,” Acta Mater., vol. 85, pp. 229–242, Feb. 2015, doi: 10.1016/j.actamat.2014.11.021. [4] J. H. Kim, D. Kim, F. Barlat, and M.-G. Lee, “Crystal plasticity approach for predicting the Bauschinger effect in dual-phase steels,” Mater. Sci. Eng. A, vol. 539, pp. 259–270, Mar. 2012, doi: 10.1016/j.msea.2012.01.092. [5] C.-S. Han, R. H. Wagoner, and F. Barlat, “On precipitate induced hardening in crystal plasticity: theory,” Int. J. Plast., vol. 20, no. 3, pp. 477–494, Mar. 2004, doi: 10.1016/S0749-6419(03)00098-6. Consequence of the Bauschinger effect Metal forming operations result in situations exposing the metal workpiece to stresses of reversed sign. The Bauschinger effect contributes to work softening of the workpiece, for example in straightening of drawn bars or rolled sheets, where rollers subject the workpiece to alternate bending stresses, thereby reducing the yield strength and enabling greater cold drawability of the workpiece. Implications The Bauschinger effect has implications in various fields because of its influence on the mechanical behavior of metallic materials subjected to cyclic loading. It is particularly relevant in applications involving cyclic loading or loading with changes in stress direction, facilitating the design and optimization of engineering structures. Seismic Analysis: Earthquake engineering and seismic design are crucial aspects of civil engineering.
During earthquakes, structural components endure alternating stress directions, with the Bauschinger effect influencing material response, energy dissipation, and potential damage accumulation. The Giuffré-Menegotto-Pinto model is widely utilized to accurately predict the seismic performance of structures by incorporating the Bauschinger effect. This model introduces a transition curve in the stress-strain relationship to capture both the Bauschinger effect and the pinching behavior observed in reinforced concrete structures under cyclic loading. Fatigue Life Prediction: Researchers have developed methods and models to incorporate the Bauschinger effect into fatigue life prediction techniques, such as the strain-life and energy-based approaches. This plays a pivotal role in predicting the fatigue life of machinery, vehicles, and engineering structures and in designing them accordingly. A clear understanding of the Bauschinger effect ensures accurate predictions, enhancing the reliability and safety of components subjected to cyclic loading conditions. The strain-life approach correlates the plastic strain amplitude with the number of cycles to failure, while the energy-based approach considers plastic strain energy as a driving force for fatigue damage accumulation. These models integrate the Bauschinger effect by adjusting the calculation of plastic strain energy or introducing additional energy terms to address the asymmetry in hysteresis loops caused by the effect. Aerospace and Automotive Engineering: In aerospace engineering, materials undergo repeated loading cycles during flight, leading to fatigue and deformation. Similarly, in the automotive industry, vehicles endure cyclic loading due to road conditions and operations. Understanding the Bauschinger effect is crucial for predicting material behavior under such conditions and designing components with improved fatigue resistance. Research in this domain focuses on characterizing the Bauschinger effect in alloys and developing predictive models to assess fatigue life, ensuring structural integrity and reliability. Metal Forming: The Bauschinger effect significantly influences the material's flow behavior, strain distribution, and required forming loads during forming processes. Hence, understanding the Bauschinger effect is significant for optimizing forming processes and predicting material behavior. Mitigation of the Bauschinger effect To mitigate the influence of the Bauschinger effect and enhance the performance of metallic materials, several strategies and techniques have been developed, including heat and surface treatments, the use of composite materials, and composition optimization. Surface Treatment: This method aims to alleviate the Bauschinger effect by changing the surface properties of metallic materials. Common treatments include creating a protective layer or modifying the surface microstructure through processes such as physical vapor deposition (PVD) coatings. This treatment reduces the Bauschinger effect in the near-surface regions. Another effective approach is shot peening, where high-velocity particles impact the material's surface, inducing compressive residual stresses. These stresses counteract the internal tensile stresses associated with the Bauschinger effect to reduce its impact. Heat Treatment: Heat treatment and thermomechanical processing are widely used to mitigate the Bauschinger effect by relieving residual stresses and rearranging dislocation structures within the material.
Stress relief annealing is a common approach, where the material is heated to a specific temperature and held for a certain duration, allowing dislocations to rearrange and internal stresses to dissipate. This process reduces the Bauschinger effect by minimizing internal stress fields and achieving a more uniform distribution of dislocations. Composition Optimization and Composite Materials: Optimizing material composition is another effective approach for mitigation, as certain compositions and microstructures exhibit a reduced Bauschinger effect. Materials with high stacking fault energy, such as aluminum alloys and austenitic stainless steels, tend to show a less pronounced Bauschinger effect due to their enhanced ability to accommodate dislocations. Additionally, hybrid and composite materials offer mitigation potential. Metal-matrix composites (MMCs), for instance, consist of a metallic matrix reinforced with ceramic particles or fibers, which can reduce the Bauschinger effect by constraining dislocation motion in the matrix. Moreover, laminated or graded composite structures strategically combine different materials to mitigate the Bauschinger effect in critical regions while maintaining desired properties elsewhere. See also Backlash, a similar extrinsic behavior in large-scale mechanical systems. Fatigue References Materials science
Bauschinger effect
Physics,Materials_science,Engineering
2,467
1,043,247
https://en.wikipedia.org/wiki/Rhyolite%2C%20Nevada
Rhyolite is a ghost town in Nye County, in the U.S. state of Nevada. It is in the Bullfrog Hills, about northwest of Las Vegas, near the eastern boundary of Death Valley National Park. The town began in early 1905 as one of several mining camps that sprang up after a prospecting discovery in the surrounding hills. During an ensuing gold rush, thousands of gold-seekers, developers, miners and service providers flocked to the Bullfrog Mining District. Many settled in Rhyolite, which lay in a sheltered desert basin near the region's biggest producer, the Montgomery Shoshone Mine. Industrialist Charles M. Schwab bought the Montgomery Shoshone Mine in 1906 and invested heavily in infrastructure, including piped water, electric lines and railroad transportation, that served the town as well as the mine. By 1907, Rhyolite had electric lights, water mains, telephones, newspapers, a hospital, a school, an opera house, and a stock exchange. Published estimates of the town's peak population vary widely, but scholarly sources generally place it in a range between 3,500 and 5,000 in 1907–08. Rhyolite declined almost as rapidly as it rose. After the richest ore was exhausted, production fell. The 1906 San Francisco earthquake and the financial panic of 1907 made it more difficult to raise development capital. In 1908, investors in the Montgomery Shoshone Mine, concerned that it was overvalued, ordered an independent study. When the study's findings proved unfavorable, the company's stock value crashed, further restricting funding. By the end of 1910, the mine was operating at a loss, and it closed in 1911. By this time, many out-of-work miners had moved elsewhere, and Rhyolite's population dropped well below 1,000. By 1920, it was close to zero. After 1920, Rhyolite and its ruins became a tourist attraction and a setting for motion pictures. Most of its buildings crumbled, were salvaged for building materials, or were moved to nearby Beatty or other towns, although the railway depot and a house made chiefly of empty bottles were repaired and preserved. From 1988 to 1998, three companies operated a profitable open-pit mine at the base of Ladd Mountain, about south of Rhyolite. The Goldwell Open Air Museum lies on private property just south of the ghost town, which is on property overseen by the Bureau of Land Management. Names The town is named for rhyolite, an igneous rock composed of light-colored silicates, usually buff to pink and occasionally light gray. It belongs to the same rock class, felsic, as granite but is much less common. The Amargosa River, which flows through nearby Beatty, gets its name from the Spanish word for "bitter", amargo. In its course, the river takes up large amounts of salts, which give it a bitter taste. "Bullfrog" was the name Frank "Shorty" Harris and Ernest "Ed" Cross, the prospectors who started the Bullfrog gold rush, gave to their mine. As quoted by Robert D. McCracken in A History of Beatty, Nevada, Harris said during a 1930 interview for Westways magazine, "The rock was green, almost like turquoise, spotted with big chunks of yellow metal, and looked a lot like the back of a frog." The Bullfrog Mining District, the Bullfrog Hills, the town of Bullfrog, and other geographical entities in the region took their name from the Bullfrog Mine. "Bullfrog" became so popular that Giant Bullfrog, Bullfrog Merger, Bullfrog Apex, Bullfrog Annex, Bullfrog Gold Dollar, Bullfrog Mogul, and most of the district's other 200 or so mining companies included "Bullfrog" in their names. 
The name persisted and, decades later, was given to the short-lived Bullfrog County. Beatty is named after "Old Man" Montillus (Montillion) Murray Beatty, a Civil War veteran and miner who bought a ranch along the Amargosa River just north of what became the town of Beatty. In 1906, he sold the ranch to the Bullfrog Water, Power, and Light Company. "Shoshone" in "Montgomery Shoshone Mine" refers to the Western Shoshone people indigenous to the region. In about 1875, the Shoshone had six camps along the Amargosa River near Beatty. The total population of these camps was 29, and because game was scarce, they subsisted largely on seeds, bulbs and plants gathered throughout the region, including the Bullfrog Hills. Geology The Bullfrog Hills are at the western edge of the southwestern Nevada volcanic field. Extensionally faulted volcanic rocks, ranging in age from about 13.3 million years to about 7.6 million years, overlie the region's Paleozoic sedimentary rocks. The prevailing rocks, which contain the ore deposits, are a series of rhyolitic lava flows that built to a combined thickness of about above the more ancient rock. After the flows ceased, tectonic stresses fractured the area into many separate fault blocks. Most of these blocks tilt to the east, and the horizontal banding of individual flows shows clearly on their western scarps. Within the blocks, the ore deposits tend to occur in nearly vertical mineralized faults or fault zones in the rhyolite. Most of the lodes in the Bullfrog Hills are not simple veins but rather fissure zones with many stringers of vein material. Geography and climate Rhyolite is at the northern end of the Amargosa Desert in Nye County in the U.S. state of Nevada. Nestled in the Bullfrog Hills, about northwest of Las Vegas, it is about south of Goldfield, and south of Tonopah. Roughly to the east lie Beatty and the Amargosa River. To the west, roughly from Rhyolite, the Funeral and Grapevine Mountains of the Amargosa Range rise between the Amargosa Desert in Nevada and Death Valley in California. State Route 374, passing about south of Rhyolite, links Beatty to Death Valley via Daylight Pass. Rhyolite is about west of Yucca Mountain and the proposed Yucca Mountain nuclear waste repository, which is adjacent to the Nevada Test Site. Bordered on three sides by ridges but open to the south, the ghost town is at above sea level. The high points of the ridges are Ladd Mountain to the east, Sutherland Mountain to the west, and Busch Peak to the north. Sawtooth Mountain, the highest point in the Bullfrog Hills, rises to above sea level about northwest of Rhyolite. The hills form a barrier between the Amargosa Desert and Sarcobatus Flat to the north. Most of the primary mining communities in the Beatty–Rhyolite area during the gold-rush boom of 1904–08 were either in or on the edge of the Bullfrog Hills. Of these and many smaller towns and camps in the Bullfrog district, only Beatty survived as a populated place. Prior to its demise, the rival town of Bullfrog lay about southwest of Rhyolite, and the Montgomery Shoshone Mine was on the north side of Montgomery Mountain, about northeast of Rhyolite. Nevada's main climatic features are bright sunshine, low annual precipitation, heavy snowfall in the higher mountains, clean, dry air, and large daily temperature ranges. Strong surface heating occurs by day and rapid cooling by night, and usually even the hottest days have cool nights. The average percentage of possible sunshine in southern Nevada is more than 80 percent. 
Sunshine and low humidity in this region account for an average evaporation, as measured in evaporation pans, of more than of water a year. Beatty, about lower in elevation than Rhyolite, receives only about of precipitation a year. July is the hottest month in Beatty, when the average high temperature is and the average low is . December and January are the coolest months with an average high of and an average low of in December and in January. Rhyolite is high enough in the hills to have relatively cool summers, and it has relatively mild winters. However, it is far from sources of water. History Boom On August 9, 1904, Cross and Harris found gold on the south side of a southwestern Nevada hill later called Bullfrog Mountain. Assays of ore samples from the site suggested values up to $3,000 a ton, or about $ a ton in dollars when adjusted for inflation. Word of the discovery spread to Tonopah and beyond, and soon thousands of hopeful prospectors and speculators rushed to what became known as the Bullfrog Mining District. Within the district, gold rush settlements quickly arose near the mines, and Rhyolite became the largest. It sprang up near the most promising discovery, the Montgomery Shoshone Mine, which in February 1905 produced ores assayed as high as $16,000 a ton, equivalent to $ a ton in . Starting as a two-man camp in January 1905, Rhyolite became a town of 1,200 people in two weeks and reached a population of 2,500 by June 1905. By then it had 50 saloons, 35 gambling tables, cribs for prostitution, 19 lodging houses, 16 restaurants, half a dozen barbers, a public bath house, and a weekly newspaper, the Rhyolite Herald. Four daily stage coaches connected Goldfield, to the north, and Rhyolite. Rival auto lines ferried people between Rhyolite and Goldfield and the rail station in Las Vegas in Pope-Toledos, White Steamers, and other touring cars. Ernest Alexander "Bob" Montgomery, the original owner, and his partners sold the mine to industrialist Charles M. Schwab in February 1906. Schwab expanded the operation on a grand scale, hiring workers, opening new tunnels and drifts, and building a huge mill to process the ore. He had water piped in, paid to have an electric line run from a hydroelectric plant at the foot of the Sierra Nevada mountain range to Rhyolite, and contracted with the Las Vegas and Tonopah Railroad to run a spur line to the mine. Three railroads eventually served Rhyolite. The first was the Las Vegas and Tonopah Railroad (LVTR), which began running regular trains to the city on December 14, 1906. Its depot, built in California-mission style, cost about $130,000, equivalent to about $ in . About a half-year later, the Bullfrog Goldfield Railroad (BGR) began regular service from the north. By December 1907, the Tonopah and Tidewater Railroad (TTR) began service to Rhyolite on tracks leased from the BGR. The TTR was built to reach the borax-bearing colemanite beds in Death Valley as well as the gold fields. By 1907, about 4,000 people lived in Rhyolite, according to Richard E. Lingenfelter in Death Valley & the Amargosa: A Land of Illusion. Russell R. Elliott cites an estimated population of 5,000 in 1907–08 in Nevada's Twentieth-Century Mining Boom, noting that "accurate population figures during the boom are impossible to obtain". Alan H. Patera in Rhyolite: The Boom Years states published estimates of the peak population have been "as high as 6,000 or 8,000, but the town itself never claimed more than 3,500 through its newspapers". 
The newspapers estimated that 6,000 people lived in the Bullfrog mining district, which included the towns of Rhyolite, Bullfrog, Gold Center, and Beatty as well as camps at the major mines. Rhyolite in 1907 had concrete sidewalks, electric lights, water mains, telephone and telegraph lines, daily and weekly newspapers, a monthly magazine, police and fire departments, a hospital, school, train station and railway depot, at least three banks, a stock exchange, an opera house, a public swimming pool and two formal church buildings. Most prominent was the three-story John S. Cook and Co. Bank on Golden Street. Finished in 1908, it cost more than $90,000, equivalent to $ in . Much of the cost went for Italian marble stairs, imported stained-glass windows, and other luxuries. The building housed brokerage offices, and a post office, as well as the bank. Other large buildings included the train depot, the three-story Overbury Bank building, and the two-story eight-room school. A miner named Tom T. Kelly built the Bottle House in February 1906 from 50,000 discarded beer and liquor bottles. Another building housed the Rhyolite Mining Stock Exchange, which opened on March 25, 1907, with 125 members, including brokers from New York, Philadelphia, Los Angeles, and other large cities. The small, modestly equipped storefront listed shares of 74 Bullfrog companies and a similar number of companies in nearby mining districts. Sixty thousand shares changed hands on the first day, and by the end of the second week the number had topped 750,000. Bust Although the mine produced more than $1 million (equivalent to about $24 million in 2009) in bullion in its first three years, its shares declined from $23 a share (in historical dollars) to less than $3. In February 1908, a committee of minority stockholders, suspecting that the mine was overvalued, hired a British mining engineer to conduct an inspection. The engineer's report was unfavorable, and news of this caused a sudden further decline in share value from $3 to 75 cents. Schwab expressed disappointment when he learned that "the wonderful high-grade [ore] that had brought [the mine] fame was confined to only a few stringers and that what he had actually bought was a large low-grade mine." Although the mine was still profitable, by 1909 no new ore was being discovered, and the value of the remaining ore steadily decreased. In 1910, the mine operated at a loss for most of the year, and on March 14, 1911, it was closed. By then, the stock, which had fallen to 10 cents a share, slid to 4 cents and was dropped from the exchanges. Rhyolite began to decline before the final closing of the mine. At roughly the same time that the Bullfrog mines were running out of high-grade ore, the 1906 San Francisco earthquake diverted capital to California while interrupting rail service, and the financial panic of 1907 restricted funding for mine development. As mines in the district reduced production or closed, unemployed miners left Rhyolite to seek work elsewhere, businesses failed, and by 1910, the census reported only 675 residents. All three banks in the town closed by March 1910. The newspapers, including the Rhyolite Herald, the last to go, all shut down by June 1912. The post office closed in November 1913; the last train left Rhyolite Station in July 1914, and the Nevada-California Power Company turned off the electricity and removed its lines in 1916. Within a year the town was "all but abandoned", and the 1920 census reported a population of only 14. 
A 1922 motor tour by the Los Angeles Times found only one remaining resident, a 92-year-old man who died in 1924. Much of Rhyolite's remaining infrastructure became a source of building materials for other towns and mining camps. Whole buildings were moved to Beatty. The Miners' Union Hall in Rhyolite became the Old Town Hall in Beatty, and two-room cabins were moved and reassembled as multi-room homes. Parts of many buildings were used to build a Beatty school. Ghost town The Rhyolite historic townsite, maintained by the Bureau of Land Management, is "one of the most photographed ghost towns in the West". Ruins include the railroad depot and other buildings, and the Bottle House, which the Famous Players Lasky Corporation, the parent of Paramount Pictures, restored in 1925 for the filming of a silent movie, The Air Mail. The ruins of the Cook Bank building were used in the 1964 film The Reward and again in 2004 for the filming of The Island. Orion Pictures used Rhyolite for its 1988 science-fiction movie Cherry 2000 depicting the collapse of American society. Six-String Samurai (1998) was another movie using Rhyolite as a setting. The Rhyolite-Bullfrog cemetery, with many wooden headboards, is slightly south of Rhyolite. Tourism flourished in and near Death Valley in the 1920s, and souvenir sellers set up tables in Rhyolite to sell rocks and bottles on weekends. In the 1930s, Revert Mercantile of Beatty acquired a Union Oil distributorship, built a gas station in Beatty, and supplied pumps in other locations, including Rhyolite. The Rhyolite service station consisted of an old caboose, a storage tank, and a pump, managed by a local owner. In 1937, the train depot became a casino and bar called the Rhyolite Ghost Casino, which was later turned into a small museum and curio shop that remained open into the 1970s. In 1984, Belgian artist Albert Szukalski created his sculpture The Last Supper on Golden Street near the Rhyolite railway depot. The art became part of the Goldwell Open Air Museum, an outdoor sculpture park near the southern entrance to the ghost town. Barrick Bullfrog Mine Mining in and around Rhyolite after 1920 consisted mainly of working old tailings until a new mine opened in 1988 on the south side of Ladd Mountain. A company known as Bond Gold built an open-pit mine and mill at the site, about south of Rhyolite along State Route 374. LAC Minerals acquired the mine from Bond in 1989 and established an underground mine there in 1991 after a new body of ore called the North Extension was discovered. Barrick Gold acquired LAC Minerals in 1994 and continued to extract and process ore at what became known as the Barrick Bullfrog Mine until the end of 1998. The mine used a chemical extraction process known as vat leaching involving the use of a weak cyanide solution. The process, like heap leaching, makes it possible to process ore profitably that otherwise would not qualify as mill-grade. Over its entire life, the mine processed about of ore and produced about of gold. See also List of ghost towns in Nevada References Further reading Elliott, Russell R. (1988). Nevada's Twentieth-Century Mining Boom: Tonopah, Goldfield, Ely. Reno: University of Nevada Press. . Hall, Shawn. (1999). Preserving the Glory Days: Ghost Towns and Mining Camps of Nye County, Nevada. Reno: University of Nevada Press. . Hustrulid, William A., and Bullock, Richard L., eds. (2001) Underground Mining Methods: Engineering Fundamentals and International Case Studies. 
Littleton, Colorado: Society for Mining, Metallurgy, and Exploration (SME). . Lingenfelter, Richard E. (1986). Death Valley & the Amargosa: A Land of Illusion. Berkeley and Los Angeles, California: University of California Press. . McCoy, Suzy. (2004). Rebecca's Walk Through Time: A Rhyolite Story. Lake Grove, Oregon: Western Places. . McCracken, Robert D. (1992). A History of Beatty, Nevada. Tonopah, Nevada: Nye County Press. . McCracken, Robert D. (1992). Beatty: Frontier Oasis. Tonopah, Nevada: Nye County Press. . Patera, Alan H. (2001). Rhyolite: the Boom Years (Western Places #10, fourth printing). Lake Grove, Oregon: Western Places. . Ransome, R.L. (1907). "Preliminary Account of Goldfield, Bullfrog and Other Mining Districts in Southern Nevada". Originally published as "United States Geological Survey Bulletin 303". Reprinted in Mines of Goldfield, Bullfrog and Other Southern Nevada Districts (1983). Las Vegas: Nevada Publications. . External links Beatty Museum and Historical Society From the Ghost Town – Suzy McCoy Rhyolite – Ghost Town Gallery Rhyolite Ghost Town – National Park Service Rhyolite video – Vimeo 1920s images of Rhyolite from the Death Valley Region Photographs Digital Collection – Utah State University Ghost towns in Nye County, Nevada Mining communities in Nevada Amargosa Desert Death Valley National Park Tonopah and Tidewater Railroad Populated places established in 1905 1905 establishments in Nevada Ghost towns in Nevada Bottle houses
Rhyolite, Nevada
Engineering
4,324
13,567,584
https://en.wikipedia.org/wiki/Energy%20efficient%20transformer
In a typical power distribution grid, electric transformer power loss contributes about 40–50% of the total transmission and distribution loss. Energy efficient transformers are therefore an important means to reduce transmission and distribution loss. With the improvement of electrical steel (silicon steel) properties, the losses of a transformer in 2010 can be half those of a similar transformer in the 1970s. With new magnetic materials, it is possible to achieve even higher efficiency. The amorphous metal transformer is a modern example. References External links World's largest Amorphous Metal Power Transformer: 99.31% Efficiency Amorphous Metals in Electric-Power Distribution Applications Australian Mandatory Efficiency Requirements for Distribution Transformers Electronic engineering Energy conservation Electric transformers
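As a rough back-of-the-envelope illustration of these percentages, the sketch below estimates how much of a grid's generated energy could be saved by halving transformer losses. The 6% total transmission and distribution loss figure is a hypothetical assumption of mine, not a value from the article.

```python
# Hypothetical figures for illustration only: the total T&D loss and the share
# attributable to transformers are assumptions, not measured values.
total_td_loss = 0.06       # 6% of generated energy lost in transmission & distribution (assumed)
transformer_share = 0.45   # transformers assumed to cause ~45% of that loss (mid-point of 40-50%)

transformer_loss = total_td_loss * transformer_share   # fraction of generated energy lost in transformers
savings_if_halved = transformer_loss / 2                # halving transformer losses, e.g. with better core steel

print(f"Transformer losses: {transformer_loss:.2%} of generated energy")
print(f"Potential savings from halving them: {savings_if_halved:.2%} of generated energy")
```

Under these assumptions, transformers would dissipate about 2.7% of all generated energy, and halving their losses would recover roughly 1.35 percentage points.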
Energy efficient transformer
Technology,Engineering
149
5,529,740
https://en.wikipedia.org/wiki/Sex%20allocation
Sex allocation is the allocation of resources to male versus female reproduction in sexual species. Sex allocation theory tries to explain why many species produce equal numbers of males and females. In dioecious species, where individuals are either male or female for their entire lifetimes, the allocation decision lies between producing male or female offspring. In sequential hermaphrodites, where individuals function as one sex early in life and then switch to the other, the allocation decisions lie in what sex to be first and when to change sex. Animals may be dioecious or sequential hermaphrodites. Sex allocation theory also applies to flowering plants, which can be dioecious, be simultaneous hermaphrodites, have unisexual plants and hermaphroditic plants in the same population, have unisexual flowers and hermaphroditic flowers on the same plant, or have only hermaphroditic flowers. Fisher's principle and equal sex allocation R.A. Fisher developed an explanation, known as Fisher's principle, of why sex ratios in many animals are 1:1. If there were 10 times more females in a population than males, a male would on average be able to mate with more partners than a female would. Parents who preferentially invested in producing male offspring would have a fitness advantage over those who preferentially produced females. This strategy would result in increasing numbers of males in the population, thus eliminating the original advantage of males. The same would occur if there were originally more males than females in a population. The evolutionarily stable strategy (ESS) in this case would be for parents to produce a 1:1 ratio of males and females. This explanation assumed that males and females are equally costly for parents to produce. However, if one sex were more costly than the other, parents would allot their resources to their offspring differentially. If parents could have two daughters for the same cost as one male because males took twice the energy to rear, parents would preferentially invest in daughters. Females would increase in the population until the sex ratio was 2 females: 1 male, meaning that a male could have twice the offspring a female could. As a result, males will be twice as costly but produce twice as many offspring, so that each sex returns offspring in proportion to the investment the parent allotted, resulting in an ESS. Therefore, parents allot equal investment to both sexes. More generally, the expected sex ratio is the ratio of the allotted investment between the sexes, and is sometimes referred to as a Fisherian sex ratio. However, there are many examples of organisms that do not demonstrate the expected 1:1 ratio or the equivalent investment ratio. The idea of equal allocation fails to explain these ratios because it assumes that relatives do not interact with one another, and that the environment has no effect. Interactions between relatives W.D. Hamilton hypothesized that non-Fisherian sex ratios can result when relatives interact with one another. He argued that if relatives experienced competition for resources, or benefited from the presence of other relatives, then sex ratios would become skewed. This led to a great deal of research on whether competition or cooperation between relatives results in differential sex ratios that do not support Fisher's principle. Local resource competition Local resource competition (LRC) was first hypothesized by Anne Clark.
She argued that the African bushbaby (Otolemur crassicaudatus) demonstrated a male-biased sex ratio because daughters associated with mothers for longer periods of time than did sons. Since sons disperse further from the maternal territory than do daughters, they do not remain on the territories and do not act as competitors with mothers for resources. Clark predicted that the effect of the LRC on sex allocation resulted in a mother investing preferentially in male offspring to reduce competition between daughters and herself. By producing more male offspring that disperse and do not compete with her, the mother will have a greater fitness than she would if she had produced the ratio predicted by the equal investment theory. Further research has found that LRC may influence the sex ratio in birds. Passerine birds demonstrate largely daughter-based dispersal, while ducks and geese demonstrate mainly male-based dispersal. Local resource competition has been hypothesized to be the reason that passerine birds are more likely to be female, while ducks and geese are more likely to have male offspring. Other studies have hypothesized that LRC is likely to influence sex ratios in roe deer, as well as primates. Consistent with these hypotheses, the sex-ratios in roe deer and several primates have been found to be skewed towards the sex that does not compete with mothers. Local mate competition Local mate competition (LMC) can be considered a special type of LRC. Fig wasps lay fertilized eggs within figs, and no females disperse. In some species, males are wingless upon hatching and cannot leave the fig to seek mates elsewhere. Instead, males compete with their brothers in order to fertilize their sisters in the figs; after fertilization, the males die. In such a case, mothers would preferentially adjust the sex ratio to be female-biased, as only a few males are needed in order to fertilize all of the females. If there were too many males, competition between the males will result in some failing to mate, and the production of those males would therefore be a waste of the mother's resources. A mother that allotted more resources to the production of female offspring would therefore have greater fitness than one who produced fewer females. Support for LMC influencing sex ratio was found by examining the sex ratios of different fig wasps. Species with wingless males that can only mate with sisters were predicted to have higher rates of female-biased sex ratios, while species with winged males that can travel to other figs to fertilize non-related females were predicted to have less biased sex ratios. Consistent with LMC influencing sex ratio, these predictions were found to be true. In the latter case, LMC is reduced, and investment in male offspring is less likely to be “wasted” from the mother's point of view. Research on LMC has focused on insects, such as wasps and ants, because they often face strong LMC. Other animals that often disperse from natal groups are much less likely to experience LMC. Local resource enhancement Local resource enhancement (LRE) occurs when relatives help one another instead of competing with one another in LRC or LMC. In cooperative breeders, mothers are assisted by their previous offspring in raising new offspring. In animals with these systems, females are predicted to preferentially have offspring that are the helping sex if there are not enough helpers. 
However, if there are already enough helpers, it is predicted that females would invest in offspring of the other sex, as this would allow them to increase their own fitness by having dispersing offspring with a greater rate of reproduction than the helpers. It is also predicted that the strength of the selection upon the mothers to adjust the sex ratio of their offspring depends upon the magnitude of the benefits they gain from their helpers. These predictions were found to be true in African wild dogs, where females disperse more rapidly than males from their natal packs. Males are therefore more helpful towards their mothers, as they remain in the same pack as her and help provide food for her and her new offspring. The LRE the males provide is predicted to result in a male-biased sex ratio, which is the pattern observed in nature. Consistent with predictions of LRE influencing sex ratios, African wild dog mothers living in smaller packs were seen to produce more male-biased sex ratios than mothers in a larger pack, since they had fewer helpers and would benefit more from additional helpers than mothers living in larger packs. Evidence for LRE leading to sex ratios biased in favor of helpers has also been found in a number of other animals, including the Seychelles warbler (Acrocephalus sechellensis) and various primates. Trivers–Willard hypothesis The Trivers-Willard hypothesis provides a model for sex allocation that deviates from Fisherian sex ratios. Trivers and Willard (1973) originally proposed a model that predicted individuals would skew the sex ratio of males to females in response to certain parental conditions, which was supported by evidence from mammals. Though individuals may not consciously decide to have fewer or more offspring of the same sex, their model suggested that individuals could be selected to adjust the sex ratio of offspring produced based on their ability to invest in offspring, if fitness returns for male and female offspring differ based on these conditions. While the Trivers-Willard hypothesis applied specifically to instances where preferentially having female offspring as maternal condition deteriorates was more advantageous, it spurred a great deal of further research on how environmental conditions can differentially affect sex ratios, and there are now a number of empirical studies that have found individuals adjust their ratio of male and female offspring. Food availability In many species, the abundance of food in a given habitat dictates the level of parental care and investment in offspring. This, in turn, influences the development and viability of the offspring. If food availability has differential effects on the fitness of male and female offspring, then selection should shift offspring sex ratios based on specific conditions of food availability. Appleby (1997) proposed evidence for conditional sex allocation in a study done on tawny owls (Strix aluco). In tawny owls, a female-biased sex ratio was observed in breeding territories where there was an abundance of prey (field voles). In contrast, in breeding territories with a scarcity of prey, a male-biased sex ratio was seen. This appeared to be adaptive because females demonstrated higher reproductive success when prey density was high, whereas males did not appear to have any reproductive advantage with high prey density. 
Appleby hypothesized that parents should adjust the sex ratio of their offspring based on the availability of food, with a female sex bias in areas of high prey density and a male sex bias in areas of low prey density. The results support the Trivers-Willard model, as parents produced more of the sex that benefited most from plentiful resources. Wiebe and Bortolotti (1992) observed sex ratio adjustment in a sexually dimorphic (by size) population of American kestrels (Falco sparverius). In general, the larger sex in a species requires more resources than the smaller sex during development and is thus more costly for parents to raise. Wiebe and Bortolotti provided evidence that kestrel parents produced more of the smaller (less costly) sex given limited food resources and more of the larger (more costly) sex given an abundance of food resources. These findings modify the Trivers-Willard hypothesis by suggesting sex ratio allocation can be biased by sexual size dimorphism as well as parental conditions. Maternal condition or quality A study by Clutton-Brock (1984) on red deer (Cervus elaphus), a polygynous species, examined the effects of dominance rank and maternal quality on female breeding success and sex ratios of offspring. Based on the Trivers-Willard model, Clutton-Brock hypothesized that the sex ratio of mammalian offspring may change according to maternal condition, where high-ranked females should produce more male offspring and low-ranked females should produce more female offspring. This is based on the assumption that high-ranked females are in better condition, so that they have more access to resources and can afford to invest more in their offspring. In the study, high-ranked females were shown to give birth to healthier offspring than low-ranked females, and the offspring of high-ranked females also developed into healthier adults. Clutton-Brock suggested that the advantage of being a healthy adult was more beneficial for male offspring because stronger males are more capable of defending harems of females during breeding seasons. Therefore, Clutton-Brock proposed that males produced by females in better conditions are more likely to have greater reproductive success in the future than males produced by females in poorer conditions. These findings support the Trivers-Willard hypothesis, as parental quality affected the sex ratio of their offspring in such a way as to maximize their reproductive investment. Mate attractiveness and quality Similar to the idea behind the Trivers-Willard hypothesis, studies show that mate attractiveness and quality may also explain differences in sex ratios and offspring fitness. Weatherhead and Robertson (1979) predicted that females bias the sex ratio of their offspring in favor of sons if they are mated to more attractive and better quality males. This is related to Fisher's “sexy son” hypothesis, which suggests a causal link between male attractiveness and the quality of sons based on the inheritance of “good genes” that should improve the reproductive success of sons. Fawcett (2007) predicted that it is adaptive for females to adjust their sex ratio to favor sons in response to attractive males. Based on a computer model, he proposed that if sexual selection favors costly male traits, such as ornamentation, and costly female preferences, females should produce more male offspring when they mate with an attractive male compared to an unattractive male.
Fawcett proposed that there is a direct correlation between female bias for male offspring and the attractiveness of their mate. Computer simulations have their own limitations and constraints, and selection may be weaker in natural populations than it was in Fawcett's study. While his results provide support for the Trivers-Willard hypothesis that animals adaptively adjust the sex ratio of offspring due to environmental variables, further empirical studies are needed to see if sex ratio is adjusted in response to mate attractiveness. Sex change The principles of the Trivers-Willard hypothesis can also be applied to sequentially hermaphroditic species, in which individuals undergo sex change. Ghiselin (1969) proposed that individuals change from one sex to another as they age and grow because larger body size provides a greater advantage to one sex than the other. For example, in the bluehead wrasse, the largest males have 40 times the mating success of smaller ones. Thus, as individuals age, they can maximize their mating success by changing from female to male. Removal of the largest males on a reef results in the largest females changing sex to male, supporting the hypothesis that competition for mating success drives sex change. Sex allocation in plants A great deal of research has focused on sex allocation in plants to predict when plants would be dioecious, simultaneous hermaphrodites, or demonstrate both in the same population or plant. Research has also examined how outcrossing, which occurs when individual plants can fertilize and be fertilized by other individuals, or selfing (self-pollination), affects sex allocation. Selfing in simultaneous hermaphrodites has been predicted to favor allocating fewer resources to the male function, as it is hypothesized to be more advantageous for hermaphrodites to invest in female functions, so long as they produce enough pollen to fertilize themselves. Consistent with this hypothesis, as selfing in wild rice (Oryza perennis) increases, the plants allocate more resources to the female function than to the male function. Charlesworth and Charlesworth (1981) applied similar logic to both outcrossing and selfing species, and created a model that predicted when dioecy would be favored over hermaphroditism, and vice versa. The model predicted that dioecy evolves if investment in a single sexual function yields accelerating (increasing) fitness returns, while hermaphroditism evolves if the fitness returns on investment in a single sexual function diminish. It has been difficult to measure exactly how much fitness individual plants are able to gain from investing in one or both sexual functions, and further empirical research is needed to support this model. Mechanisms of sex allocation decisions Depending on the mechanism of sex determination for a species, decisions about sex allocation may be carried out in different ways. In haplodiploid species, like bees and wasps, females control the sex of offspring by deciding whether or not to fertilize each egg. If a female fertilizes the egg, it will become diploid and develop as a female. If she does not fertilize the egg, it will remain haploid and develop as a male. In an elegant experiment, researchers showed that female N. vitripennis parasitoid wasps altered the sex ratio in their offspring in response to the environmental cue of eggs laid by other females. Historically, many theorists have argued that the Mendelian nature of chromosomal sex determination limits opportunities for parental control of offspring sex ratio.
However, adaptive adjustment of sex ratio has been found among many animals, including primates, red deer, and birds. The exact mechanism of such allocation is unknown, but several studies indicate that hormonal, pre-ovulatory control may be responsible. For example, higher levels of follicular testosterone in mothers, signifying maternal dominance, correlated with a higher chance of forming a male embryo in cows. Higher corticosterone levels in breeding female Japanese quails were associated with female-biased sex ratios at laying. In species that have environmental sex determination, like turtles and crocodiles, the sex of an offspring is determined by environmental features such as temperature and day length. The direction of bias differs between species. For example, in turtles with ESD, males are produced at lower temperatures, but in many alligators, males are produced at higher temperatures. References Sex-determination systems
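To make the equal-investment argument of Fisher's principle, discussed at the start of this article, concrete, here is a small illustrative sketch of my own (a Shaw-Mohler-style calculation, not taken from any study cited above). It evaluates a rare mutant's fitness in a resident population and scans for the allocation at which the selection gradient vanishes; with sons assumed to cost twice as much as daughters, the stable outcome is equal spending on each sex, i.e. one son reared per two daughters.

```python
import numpy as np

def mutant_fitness(p_mut, p_res, cost_son=2.0, cost_daughter=1.0, budget=1.0):
    """Fitness of a rare mutant that spends a fraction p_mut of its reproductive
    budget on sons, in a population where residents spend a fraction p_res."""
    sons_m = p_mut * budget / cost_son
    daughters_m = (1.0 - p_mut) * budget / cost_daughter
    sons_r = p_res * budget / cost_son              # per-capita resident production
    daughters_r = (1.0 - p_res) * budget / cost_daughter
    # Each daughter counts once; each son's expected success is the number of
    # matings available per male in the population (daughters per son).
    return daughters_m + sons_m * (daughters_r / sons_r)

def selection_gradient(q, eps=1e-4, **kw):
    """Marginal payoff of shifting slightly more of the budget towards sons when
    the whole population currently spends a fraction q on sons."""
    return (mutant_fitness(q + eps, q, **kw) - mutant_fitness(q - eps, q, **kw)) / (2 * eps)

# The stable allocation is where the selection gradient vanishes.
grid = np.linspace(0.05, 0.95, 901)
grads = [abs(selection_gradient(q, cost_son=2.0, cost_daughter=1.0)) for q in grid]
p_star = grid[int(np.argmin(grads))]

print(f"Stable fraction of budget spent on sons: {p_star:.2f}")   # ~0.50: equal investment
sons_per_daughter = (p_star / 2.0) / (1.0 - p_star)
print(f"Resulting numerical sex ratio: {sons_per_daughter:.2f} sons per daughter")  # ~0.50, i.e. 1 male : 2 females
```

The scan returns an allocation of about 0.5 of the budget to each sex, and because sons are twice as costly in this toy setting, the numerical sex ratio is roughly one male for every two females, matching the worked example given in the article.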
Sex allocation
Biology
3,611
75,893,889
https://en.wikipedia.org/wiki/Plasmalysis
Plasmalysis is an electrochemical process that requires a voltage source. On the one hand, it describes the plasma-chemical dissociation of organic and inorganic compounds (e.g. C-H and N-H compounds) in interaction with a thermal/non-thermal plasma between two electrodes. On the other hand, it describes synthesis, i.e. the combination of two or more elements to form a new molecule (e.g. methane synthesis/methanation). Plasmalysis is an artificial word made of plasma and lysis (Greek λύσις, "dissolution"). Thermal/non-thermal plasma Thermal plasmas can be generated technically, for example, by inductive coupling of high-frequency fields in the MHz range (ICP: inductively coupled plasma) or by direct current coupling (arc discharges). A thermal plasma is characterized by the fact that electrons, ions and neutral particles are in thermodynamic equilibrium. For atmospheric-pressure plasmas, the temperatures in thermal plasmas are usually above 6000 K. This corresponds to average kinetic energies of less than 1 eV. Nonthermal plasmas are found in low-pressure arc discharges, such as fluorescent lamps, in dielectric barrier discharges (DBD), such as ozone tubes, in microwave plasmas (plasma torches, e.g. PLexc or MagJet) or in GHz plasma jets. A non-thermal plasma shows a significant difference between the electron and gas temperature. For example, the electron temperature can be several tens of thousands of kelvins, which corresponds to average kinetic energies of more than 1 eV, while a gas temperature close to room temperature is measured. Despite their low temperature, such plasmas can trigger chemical reactions and excitation states via electron collisions. Pulsed corona and dielectric barrier discharges belong to the family of nonthermal plasmas. Here the electrons are much hotter (several eV) than the ions/neutral gas particles (room temperature). Technical aspects To generate a nonthermal plasma at atmospheric pressure, a working gas (molecular or inert gas, e.g. air, nitrogen, argon, helium) is passed through an electric field. Electrons originating from ionization processes can be accelerated in this field to trigger impact ionization processes. If more free electrons are produced during this process than are lost, a discharge can build up. The degree of ionization in technically used plasmas is usually very low, typically a few per mille or less. The electrical conductivity generated by these free charge carriers is used to couple in electrical power. When colliding with other gas atoms or molecules, the free electrons can transfer their energy to them and thus generate highly reactive species that act on the material to be treated (gaseous, liquid, solid). The electron energy is sufficient to split covalent bonds in organic molecules. The energy required to split single bonds is in the range of about 1.5 - 6.2 eV, for double bonds in the range of about 4.4 - 7.4 eV, and for triple bonds in the range of 8.5 - 11.2 eV. For gases that can also be used as process gases, dissociation energies are e.g. 5.7 eV (O2) and 9.8 eV (N2). Applications of atmospheric pressure plasmas Atmospheric-pressure plasmas have been used for a variety of industrial applications, including volatile organic compound (VOC) removal, exhaust gas emission treatment, and polymer surface and food treatment. For decades, non-thermal plasmas have also been used to generate ozone for water purification. 
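As a rough illustration, the bond and dissociation energies quoted in the Technical aspects section above can be converted to molar units. The sketch below assumes a hypothetical mean electron energy of 3 eV purely for illustration (this value does not come from the text) and uses the standard conversion 1 eV ≈ 96.485 kJ/mol; in practice, dissociation also depends on collision cross sections, not on a simple energy threshold.

# Convert the bond dissociation energies quoted above from eV per bond to kJ/mol
# and check, as a crude threshold test, which bonds a given mean electron energy exceeds.
EV_TO_KJ_PER_MOL = 96.485  # standard physical conversion factor

bond_energies_ev = {
    "single bond (upper bound)": 6.2,
    "double bond (upper bound)": 7.4,
    "triple bond (upper bound)": 11.2,
    "O2 dissociation": 5.7,
    "N2 dissociation": 9.8,
}

electron_energy_ev = 3.0  # hypothetical mean electron energy, for illustration only

for name, e_ev in bond_energies_ev.items():
    kj_mol = e_ev * EV_TO_KJ_PER_MOL
    print(f"{name}: {e_ev:.1f} eV = {kj_mol:.0f} kJ/mol; "
          f"above a {electron_energy_ev:.0f} eV electron? {electron_energy_ev >= e_ev}")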
Atmospheric pressure plasmas can be characterized primarily by a large number of electrical discharges in which the majority of the electrical energy is used to generate energetic electrons. These energetic electrons produce chemically excited species - free radicals and ions - and additional electrons by dissociation, excitation and ionization of background gas molecules by electron impact. These excited species in turn oxidize, reduce or decompose the molecules in the media, such as wastewater or biomethane, that are brought into contact with them. Part of the electrical energy is converted into chemical energy. Plasmalysis can thus be used to store energy, for example in the plasmalysis of ammonium from wastewater or liquid fermentation residue, which produces hydrogen and nitrogen. The hydrogen thus produced can serve as an energy carrier for a hydrogen economy. Dissociation mechanisms of gases and liquids In the following section XH stands for any hydrogen compound, e.g. CH- and NH-compounds. Thermal dissociation: gaseous hydrogen compounds are dissociated at temperatures above 3000 K, e.g. in a plasma. At temperatures above 3500 K, H2 and O2 are dissociated. Electron impact dissociation: the density of radicals scales with the electron density and with higher gas and electron temperatures (thermal dissociation and electron impact). Ion impact dissociation: Dissociative electron attachment: this process generates negative ions as well as neutral particles. The colliding electron is captured via collisional excitation; the energy difference between the ground state and the excited state dissociates the molecule. The electron-induced dissociation of water depends on the electron temperature, which significantly influences the ratio of the OH density (n_OH) to the electron density (n_e). The maximum OH density is reached in the early afterglow, when the electron temperature (T_e) is low. Photoionisation: high-energy photons dissociate molecules. Solvated electrons: reducing agents in the liquid. Dissociation efficiency of different hydrogen sources Water electrolysis Since the focus is always on the most energy-efficient dissociation of chemical compounds, the benchmark is the energy input of the electrolysis of distilled water (45 kWh/kgH2), as in the following reaction equation: 2 H2O → 2 H2 + O2. Methane-plasmalysis A particularly efficient way of generating hydrogen (10 kWh/kgH2) is methane plasmalysis. In this process, methane (e.g. from natural gas) is decomposed in the plasma under oxygen exclusion, forming hydrogen and elemental carbon, as in the following reaction equation: CH4 → C + 2 H2. Methane plasmalysis offers, among other things, the possibility of decentralized decarbonization of natural gas or, if biogas is used, also the realization of a CO2 sink, whereby, in contrast to the CCS process commonly used to date, no gas has to be compressed and stored, but the elemental carbon produced can be bound in product form. This technology can also be used to prevent the flaring of so-called "flare gases" by using them as a feedstock for the production of hydrogen and carbon. Wastewater-plasmalysis The plasmalysis of wastewater and liquid manure enables hydrogen to be recovered from pollutants contained in the wastewater (ammonium (NH4+) or hydrocarbon compounds (COD)). The plasma-catalytic decomposition of ammonia takes place as shown in the following reaction equation: 2 NH3 → N2 + 3 H2. The treated wastewater is purified in the process. The energy requirement for the production of green hydrogen is approx. 12 kWh/kgH2. 
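The specific energy demands quoted in this section can be compared directly. The following sketch uses only the figures given above (45, 10 and 12 kWh per kg of hydrogen) and converts them into kilograms of hydrogen obtainable per megawatt-hour of electrical energy; it contains no data beyond those figures.

# Compare the specific energy demands quoted above for different hydrogen routes.
routes_kwh_per_kg_h2 = {
    "water electrolysis": 45,
    "methane plasmalysis": 10,
    "wastewater plasmalysis": 12,
}

for route, kwh_per_kg in routes_kwh_per_kg_h2.items():
    kg_h2_per_mwh = 1000 / kwh_per_kg  # 1 MWh = 1000 kWh
    print(f"{route}: {kwh_per_kg} kWh/kg H2, i.e. about {kg_h2_per_mwh:.0f} kg H2 per MWh")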
Wastewater plasmalysis can also be used as an ammonia cracking technology for splitting the hydrogen carrier ammonia. Dissociation of hydrogen sulfide Hydrogen sulfide - a component of crude oil and natural gas and a by-product of the anaerobic digestion of biomass - is also suitable for plasma-catalytic decomposition to produce hydrogen and elemental sulfur due to its weak binding energy. The energy requirement for the production of hydrogen from H2S is approx. 5 kWh/kgH2. Reactor geometry It is apparent that both the reactor geometry and the method by which the plasma is generated strongly influence the performance of the system. References Electrochemistry Process engineering
Plasmalysis
Chemistry,Engineering
1,609
2,310,971
https://en.wikipedia.org/wiki/Gauss%27s%20lemma%20%28polynomials%29
In algebra, Gauss's lemma, named after Carl Friedrich Gauss, is a theorem about polynomials over the integers, or, more generally, over a unique factorization domain (that is, a ring that has a unique factorization property similar to the fundamental theorem of arithmetic). Gauss's lemma underlies all the theory of factorization and greatest common divisors of such polynomials. Gauss's lemma asserts that the product of two primitive polynomials is primitive. (A polynomial with integer coefficients is primitive if it has 1 as a greatest common divisor of its coefficients.) A corollary of Gauss's lemma, sometimes also called Gauss's lemma, is that a primitive polynomial is irreducible over the integers if and only if it is irreducible over the rational numbers. More generally, a primitive polynomial has the same complete factorization over the integers and over the rational numbers. In the case of coefficients in a unique factorization domain R, "rational numbers" must be replaced by "field of fractions of R". This implies that, if R is either a field, the ring of integers, or a unique factorization domain, then every polynomial ring (in one or several indeterminates) over R is a unique factorization domain. Another consequence is that factorization and greatest common divisor computation of polynomials with integer or rational coefficients may be reduced to similar computations on integers and primitive polynomials. This is systematically used (explicitly or implicitly) in all implemented algorithms (see Polynomial greatest common divisor and Factorization of polynomials). Gauss's lemma, and all its consequences that do not involve the existence of a complete factorization, remain true over any GCD domain (an integral domain over which greatest common divisors exist). In particular, a polynomial ring over a GCD domain is also a GCD domain. If one calls primitive a polynomial such that the coefficients generate the unit ideal, Gauss's lemma is true over every commutative ring. However, some care must be taken when using this definition of primitive, as, over a unique factorization domain that is not a principal ideal domain, there are polynomials that are primitive in the above sense and not primitive in this new sense. The lemma over the integers If f(x) is a polynomial with integer coefficients, then f(x) is called primitive if the greatest common divisor of all the coefficients is 1; in other words, no prime number divides all the coefficients. The lemma states that the product of two primitive polynomials is itself primitive. Proof: Clearly the product f(x)g(x) of two primitive polynomials has integer coefficients. Therefore, if it is not primitive, there must be a prime p which is a common divisor of all its coefficients. But p cannot divide all the coefficients of either f(x) or g(x) (otherwise they would not be primitive). Let arxr be the first term of f(x) not divisible by p and let bsxs be the first term of g(x) not divisible by p. Now consider the term xr+s in the product, whose coefficient is the sum of the products aibj of coefficients with i + j = r + s. The term arbs is not divisible by p (because p is prime), yet all the remaining ones are, so the entire sum cannot be divisible by p. By assumption all coefficients in the product are divisible by p, leading to a contradiction. Therefore, the coefficients of the product can have no common divisor and are thus primitive. The proof is given below for the more general case. Note that an irreducible element of Z (a prime number) is still irreducible when viewed as a constant polynomial in Z[X]; this explains the need for "non-constant" in the statement. 
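The lemma and its refinement in terms of contents can be restated compactly as follows; the notation c(f) for the content (the greatest common divisor of the coefficients of f) is standard, but is introduced here only for this restatement and is not notation used elsewhere in this article.

\[
  f,\, g \in \mathbb{Z}[X] \ \text{primitive} \;\Longrightarrow\; fg \ \text{primitive},
\]
\[
  \text{and, over any unique factorization domain } R:\qquad c(fg) = c(f)\, c(g) \ \text{(up to units)}.
\]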
Statements for unique factorization domains Gauss's lemma holds more generally over arbitrary unique factorization domains. There the content of a polynomial can be defined as the greatest common divisor of the coefficients of (like the gcd, the content is actually a set of associate elements). A polynomial with coefficients in a UFD is then said to be primitive if the only elements of that divide all coefficients of at once are the invertible elements of ; i.e., the gcd of the coefficients is one. Primitivity statement: If is a UFD, then the set of primitive polynomials in is closed under multiplication. More generally, the content of a product of polynomials is the product of their individual contents. Irreducibility statement: Let be a unique factorization domain and its field of fractions. A non-constant polynomial in is irreducible in if and only if it is both irreducible in and primitive in . (For the proofs, see #General version below.) Let be a unique factorization domain with field of fractions . If is a polynomial over then for some in , has coefficients in , and so – factoring out the gcd of the coefficients – we can write for some primitive polynomial . As one can check, this polynomial is unique up to the multiplication by a unit and is called the primitive part (or primitive representative) of and is denoted by . The procedure is compatible with product: . The construct can be used to show the statement: A polynomial ring over a UFD is a UFD. Indeed, by induction, it is enough to show is a UFD when is a UFD. Let be a non-zero polynomial. Now, is a unique factorization domain (since it is a principal ideal domain) and so, as a polynomial in , can be factorized as: where are irreducible polynomials of . Now, we write for the gcd of the coefficients of (and is the primitive part) and then: Now, is a product of prime elements of (since is a UFD) and a prime element of is a prime element of , as is an integral domain. Hence, admits a prime factorization (or a unique factorization into irreducibles). Next, observe that is a unique factorization into irreducible elements of , as (1) each is irreducible by the irreducibility statement and (2) it is unique since the factorization of can also be viewed as a factorization in and factorization there is unique. Since and are uniquely determined by up to unit elements, the above factorization of is a unique factorization into irreducible elements. The condition that "R is a unique factorization domain" is not superfluous because it implies that every irreducible element of this ring is also a prime element, which in turn implies that every non-zero element of R has at most one factorization into a product of irreducible elements and a unit up to order and associate relationship. In a ring where factorization is not unique, say with p and q irreducible elements that do not divide any of the factors on the other side, the product shows the failure of the primitivity statement. For a concrete example one can take , , , , . In this example the polynomial (obtained by dividing the right hand side by ) provides an example of the failure of the irreducibility statement (it is irreducible over R, but reducible over its field of fractions ). Another well-known example is the polynomial , whose roots are the golden ratio and its conjugate showing that it is reducible over the field , although it is irreducible over the non-UFD which has as field of fractions. 
In the latter example the ring can be made into an UFD by taking its integral closure in (the ring of Dirichlet integers), over which becomes reducible, but in the former example R is already integrally closed. General version Let be a commutative ring. If is a polynomial in , then we write for the ideal of generated by all the coefficients of ; it is called the content of . Note that for each in . The next proposition states a more substantial property. A polynomial is said to be primitive if is the unit ideal . When (or more generally when is a Bézout domain), this agrees with the usual definition of a primitive polynomial. (But if is only a UFD, this definition is inconsistent with the definition of primitivity in #Statements for unique factorization domains.) Proof: This is easy using the fact that implies Proof: () First note that the gcd of the coefficients of is 1 since, otherwise, we can factor out some element from the coefficients of to write , contradicting the irreducibility of . Next, suppose for some non-constant polynomials in . Then, for some , the polynomial has coefficients in and so, by factoring out the gcd of the coefficients, we write . Do the same for and we can write for some . Now, let for some . Then . From this, using the proposition, we get: . That is, divides . Thus, and then the factorization constitutes a contradiction to the irreducibility of . () If is irreducible over , then either it is irreducible over or it contains a constant polynomial as a factor, the second possibility is ruled out by the assumption. Proof of the proposition: Clearly, . If is a prime ideal containing , then modulo . Since is a polynomial ring over an integral domain and thus is an integral domain, this implies either or modulo . Hence, either or is contained in . Since is the intersection of all prime ideals that contain and the choice of was arbitrary, . We now prove the "moreover" part. Factoring out the gcd's from the coefficients, we can write and where the gcds of the coefficients of are both 1. Clearly, it is enough to prove the assertion when are replaced by ; thus, we assume the gcd's of the coefficients of are both 1. The rest of the proof is easy and transparent if is a unique factorization domain; thus we give the proof in that case here (and see for the proof for the GCD case). If , then there is nothing to prove. So, assume otherwise; then there is a non-unit element dividing the coefficients of . Factorizing that element into a product of prime elements, we can take that element to be a prime element . Now, we have: . Thus, either contains or ; contradicting the gcd's of the coefficients of are both 1. Remark: Over a GCD domain (e.g., a unique factorization domain), the gcd of all the coefficients of a polynomial , unique up to unit elements, is also called the content of . Applications It follows from Gauss's lemma that for each unique factorization domain , the polynomial ring is also a unique factorization domain (see #Statements for unique factorization domains). Gauss's lemma can also be used to show Eisenstein's irreducibility criterion. Finally, it can be used to show that cyclotomic polynomials (unitary units with integer coefficients) are irreducible. Gauss's lemma implies the following statement: If is a monic polynomial in one variable with coefficients in a unique factorization domain (or more generally a GCD domain), then a root of that is in the field of fractions of is in . 
If , then it says a rational root of a monic polynomial over integers is an integer (cf. the rational root theorem). To see the statement, let be a root of in and assume are relatively prime. In we can write with for some . Then is a factorization in . But is primitive (in the UFD sense) and thus divides the coefficients of by Gauss's lemma, and so with in . Since is monic, this is possible only when is a unit. A similar argument shows: Let be a GCD domain with the field of fractions and . If for some polynomial that is primitive in the UFD sense and , then . The irreducibility statement also implies that the minimal polynomial over the rational numbers of an algebraic integer has integer coefficients. Notes References Theorems about polynomials Theorems in ring theory Lemmas in algebra
Gauss's lemma (polynomials)
Mathematics
2,519
45,642
https://en.wikipedia.org/wiki/Demography
Demography is the statistical study of human populations: their size, composition (e.g., ethnic group, age), and how they change through the interplay of fertility (births), mortality (deaths), and migration. Demographic analysis examines and measures the dimensions and dynamics of populations; it can cover whole societies or groups defined by criteria such as education, nationality, religion, and ethnicity. Educational institutions usually treat demography as a field of sociology, though there are a number of independent demography departments. These methods have primarily been developed to study human populations, but are extended to a variety of areas where researchers want to know how populations of social actors can change across time through processes of birth, death, and migration. In the context of human biological populations, demographic analysis uses administrative records to develop an independent estimate of the population. Demographic analysis estimates are often considered a reliable standard for judging the accuracy of the census information gathered at any time. In the labor force, demographic analysis is used to estimate the sizes and flows of populations of workers; in population ecology the focus is on the birth, death, migration and immigration of individuals in a population of living organisms; in the social sciences, it could alternatively involve the movement of firms and institutional forms. Demographic analysis is used in a wide variety of contexts. For example, it is often used in business plans, to describe the population connected to the geographic location of the business. Demographic analysis is usually abbreviated as DA. For the 2010 U.S. Census, the U.S. Census Bureau expanded its DA categories. Also as part of the 2010 U.S. Census, DA now includes comparative analysis between independent housing estimates and census address lists at different key time points. Patient demographics form the core of the data for any medical institution, such as patient and emergency contact information and patient medical record data. They allow for the identification of a patient and their grouping into categories for the purpose of statistical analysis. Patient demographics include: date of birth, gender, date of death, postal code, ethnicity, blood type, emergency contact information, family doctor, insurance provider data, allergies, major diagnoses and major medical history. Formal demography limits its object of study to the measurement of population processes, while the broader field of social demography or population studies also analyses the relationships between economic, social, institutional, cultural, and biological processes influencing a population. History Demographic thought can be traced back to antiquity, and was present in many civilisations and cultures, like Ancient Greece, Ancient Rome, China and India. Made up of the prefix demo- and the suffix -graphy, the term demography refers to the overall study of population. In ancient Greece, this can be found in the writings of Herodotus, Thucydides, Hippocrates, Epicurus, Protagoras, Polus, Plato and Aristotle. In Rome, writers and philosophers like Cicero, Seneca, Pliny the Elder, Marcus Aurelius, Epictetus, Cato, and Columella also expressed important ideas on this ground. In the Middle Ages, Christian thinkers devoted much time to refuting the Classical ideas on demography. 
Important contributors to the field were William of Conches, Bartholomew of Lucca, William of Auvergne, William of Pagula, and Muslim sociologists like Ibn Khaldun. One of the earliest demographic studies in the modern period was Natural and Political Observations Made upon the Bills of Mortality (1662) by John Graunt, which contains a primitive form of life table. Among the study's findings were that one-third of the children in London died before their sixteenth birthday. Mathematicians, such as Edmond Halley, developed the life table as the basis for life insurance mathematics. Richard Price was credited with the first textbook on life contingencies published in 1771, followed later by Augustus De Morgan, On the Application of Probabilities to Life Contingencies (1838). In 1755, Benjamin Franklin published his essay Observations Concerning the Increase of Mankind, Peopling of Countries, etc., projecting exponential growth in British colonies. His work influenced Thomas Robert Malthus, who, writing at the end of the 18th century, feared that, if unchecked, population growth would tend to outstrip growth in food production, leading to ever-increasing famine and poverty (see Malthusian catastrophe). Malthus is seen as the intellectual father of ideas of overpopulation and the limits to growth. Later, more sophisticated and realistic models were presented by Benjamin Gompertz and Verhulst. In 1855, a Belgian scholar Achille Guillard defined demography as the natural and social history of human species or the mathematical knowledge of populations, of their general changes, and of their physical, civil, intellectual, and moral condition. The period 1860–1910 can be characterized as a period of transition where in demography emerged from statistics as a separate field of interest. This period included a panoply of international 'great demographers' like Adolphe Quetelet (1796–1874), William Farr (1807–1883), Louis-Adolphe Bertillon (1821–1883) and his son Jacques (1851–1922), Joseph Körösi (1844–1906), Anders Nicolas Kaier (1838–1919), Richard Böckh (1824–1907), Émile Durkheim (1858–1917), Wilhelm Lexis (1837–1914), and Luigi Bodio (1840–1920) contributed to the development of demography and to the toolkit of methods and techniques of demographic analysis. Methods Demography is the statistical and mathematical study of the size, composition, and spatial distribution of human populations and how these features change over time. Data are obtained from a census of the population and from registries: records of events like birth, deaths, migrations, marriages, divorces, diseases, and employment. To do this, there needs to be an understanding of how they are calculated and the questions they answer which are included in these four concepts: population change, standardization of population numbers, the demographic bookkeeping equation, and population composition. There are two types of data collection—direct and indirect—with several methods of each type. Direct methods Direct data comes from vital statistics registries that track all births and deaths as well as certain changes in legal status such as marriage, divorce, and migration (registration of place of residence). In developed countries with good registration systems (such as the United States and much of Europe), registry statistics are the best method for estimating the number of births and deaths. A census is the other common direct method of collecting demographic data. 
A census is usually conducted by a national government and attempts to enumerate every person in a country. In contrast to vital statistics data, which are typically collected continuously and summarized on an annual basis, censuses typically occur only every 10 years or so, and thus are not usually the best source of data on births and deaths. Analyses are conducted after a census to estimate how much over or undercounting took place. These compare the sex ratios from the census data to those estimated from natural values and mortality data. Censuses do more than just count people. They typically collect information about families or households in addition to individual characteristics such as age, sex, marital status, literacy/education, employment status, and occupation, and geographical location. They may also collect data on migration (or place of birth or of previous residence), language, religion, nationality (or ethnicity or race), and citizenship. In countries in which the vital registration system may be incomplete, the censuses are also used as a direct source of information about fertility and mortality; for example, the censuses of the People's Republic of China gather information on births and deaths that occurred in the 18 months immediately preceding the census. Indirect methods Indirect methods of collecting data are required in countries and periods where full data are not available, such as is the case in much of the developing world, and most of historical demography. One of these techniques in contemporary demography is the sister method, where survey researchers ask women how many of their sisters have died or had children and at what age. With these surveys, researchers can then indirectly estimate birth or death rates for the entire population. Other indirect methods in contemporary demography include asking people about siblings, parents, and children. Other indirect methods are necessary in historical demography. There are a variety of demographic methods for modelling population processes. They include models of mortality (including the life table, Gompertz models, hazards models, Cox proportional hazards models, multiple decrement life tables, Brass relational logits), fertility (Hermes model, Coale-Trussell models, parity progression ratios), marriage (Singulate Mean at Marriage, Page model), disability (Sullivan's method, multistate life tables), population projections (Lee-Carter model, the Leslie Matrix), and population momentum (Keyfitz). The United Kingdom has a series of four national birth cohort studies, the first three spaced apart by 12 years: the 1946 National Survey of Health and Development, the 1958 National Child Development Study, the 1970 British Cohort Study, and the Millennium Cohort Study, begun much more recently in 2000. These have followed the lives of samples of people (typically beginning with around 17,000 in each study) for many years, and are still continuing. As the samples have been drawn in a nationally representative way, inferences can be drawn from these studies about the differences between four distinct generations of British people in terms of their health, education, attitudes, childbearing and employment patterns. Indirect standardization is used when a population is small enough that the number of events (births, deaths, etc.) are also small. In this case, methods must be used to produce a standardized mortality rate (SMR) or standardized incidence rate (SIR). 
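As a small illustration of the indirect standardization mentioned above, a standardized mortality ratio compares the deaths observed in a study population with the deaths that would be expected if reference age-specific death rates applied to it. All numbers in the following sketch are invented for illustration, and the age bands are arbitrary.

# Minimal sketch of indirect standardization: SMR = observed deaths / expected deaths,
# where expected deaths apply reference age-specific rates to the study population.
reference_rates = {"0-39": 0.001, "40-64": 0.005, "65+": 0.04}   # deaths per person-year (illustrative)
study_population = {"0-39": 2000, "40-64": 1500, "65+": 500}     # person-years at risk (illustrative)
observed_deaths = 45                                             # illustrative count

expected_deaths = sum(reference_rates[age] * study_population[age]
                      for age in reference_rates)

smr = observed_deaths / expected_deaths
print(f"Expected deaths: {expected_deaths:.1f}, SMR: {smr:.2f}")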
Population change Population change is analyzed by measuring the change between one population size to another. Global population continues to rise, which makes population change an essential component to demographics. This is calculated by taking one population size minus the population size in an earlier census. The best way of measuring population change is using the intercensal percentage change. The intercensal percentage change is the absolute change in population between the censuses divided by the population size in the earlier census. Next, multiply this a hundredfold to receive a percentage. When this statistic is achieved, the population growth between two or more nations that differ in size, can be accurately measured and examined. Standardization of population numbers For there to be a significant comparison, numbers must be altered for the size of the population that is under study. For example, the fertility rate is calculated as the ratio of the number of births to women of childbearing age to the total number of women in this age range. If these adjustments were not made, we would not know if a nation with a higher rate of births or deaths has a population with more women of childbearing age or more births per eligible woman. Within the category of standardization, there are two major approaches: direct standardization and indirect standardization. Common rates and ratios The crude birth rate, the annual number of live births per 1,000 people. The general fertility rate, the annual number of live births per 1,000 women of childbearing age (often taken to be from 15 to 49 years old, but sometimes from 15 to 44). The age-specific fertility rates, the annual number of live births per 1,000 women in particular age groups (usually age 15–19, 20–24 etc.) The crude death rate, the annual number of deaths per 1,000 people. The infant mortality rate, the annual number of deaths of children less than 1 year old per 1,000 live births. The expectation of life (or life expectancy), the number of years that an individual at a given age could expect to live at present mortality levels. The total fertility rate, the number of live births per woman completing her reproductive life, if her childbearing at each age reflected current age-specific fertility rates. The replacement level fertility, the average number of children women must have in order to replace the population for the next generation. For example, the replacement level fertility in the US is 2.11. The gross reproduction rate, the number of daughters who would be born to a woman completing her reproductive life at current age-specific fertility rates. The net reproduction ratio is the expected number of daughters, per newborn prospective mother, who may or may not survive to and through the ages of childbearing. A stable population, one that has had constant crude birth and death rates for such a long period of time that the percentage of people in every age class remains constant, or equivalently, the population pyramid has an unchanging structure. A stationary population, one that is both stable and unchanging in size (the difference between crude birth rate and crude death rate is zero). Measures of centralisation are concerned with the extent to which an area's population is concentrated in its urban centres. A stable population does not necessarily remain fixed in size. It can be expanding or shrinking. The crude death rate as defined above and applied to a whole population can give a misleading impression. 
For example, the number of deaths per 1,000 people can be higher in developed nations than in less-developed countries, despite standards of health being better in developed countries. This is because developed countries have proportionally more older people, who are more likely to die in a given year, so that the overall mortality rate can be higher even if the mortality rate at any given age is lower. A more complete picture of mortality is given by a life table, which summarizes mortality separately at each age. A life table is necessary to give a good estimate of life expectancy. Basic equation regarding development of a population Suppose that a country (or other entity) contains Populationt persons at time t. What is the size of the population at time t + 1? Natural increase from time t to t + 1: Natural increase = Births − Deaths. Net migration from time t to t + 1: Net migration = Immigration − Emigration. Combining these gives the basic demographic (bookkeeping) equation: Populationt+1 = Populationt + Natural increase + Net migration. These basic equations can also be applied to subpopulations. For example, the population size of ethnic groups or nationalities within a given society or country is subject to the same sources of change. When dealing with ethnic groups, however, "net migration" might have to be subdivided into physical migration and ethnic reidentification (assimilation). Individuals who change their ethnic self-labels or whose ethnic classification in government statistics changes over time may be thought of as migrating or moving from one population subcategory to another. More generally, while the basic demographic equation holds true by definition, in practice the recording and counting of events (births, deaths, immigration, emigration) and the enumeration of the total population size are subject to error. So allowance needs to be made for error in the underlying statistics when any accounting of population size or change is made. The figure in this section shows the latest (2004) UN (United Nations) WHO projections of world population out to the year 2150 (red = high, orange = medium, green = low). The UN "medium" projection shows world population reaching an approximate equilibrium at 9 billion by 2075. Working independently, demographers at the International Institute for Applied Systems Analysis in Austria expect world population to peak at 9 billion by 2070. Throughout the 21st century, the average age of the population is likely to continue to rise. Science of population Populations can change through three processes: fertility, mortality, and migration. Fertility involves the number of children that women have and is to be contrasted with fecundity (a woman's childbearing potential). Mortality is the study of the causes, consequences, and measurement of processes affecting the deaths of members of the population. Demographers most commonly study mortality using the life table, a statistical device that provides information about the mortality conditions (most notably the life expectancy) in the population. Migration refers to the movement of persons from a locality of origin to a destination place across some predefined political boundary. Migration researchers do not designate movements 'migrations' unless they are somewhat permanent. Thus, demographers do not consider tourists and travellers to be migrating. While demographers who study migration typically do so through census data on place of residence, indirect sources of data including tax forms and labour force surveys are also important. Demography is today widely taught in many universities across the world, attracting students with initial training in social sciences, statistics or health studies. 
Being at the crossroads of several disciplines such as sociology, economics, epidemiology, geography, anthropology and history, demography offers tools to approach a large range of population issues by combining a more technical quantitative approach that represents the core of the discipline with many other methods borrowed from social or other sciences. Demographic research is conducted in universities, in research institutes, as well as in statistical departments and in several international agencies. Population institutions are part of the CICRED (International Committee for Coordination of Demographic Research) network, while most individual scientists engaged in demographic research are members of the International Union for the Scientific Study of Population, or a national association such as the Population Association of America in the United States, or affiliates of the Federation of Canadian Demographers in Canada. Population composition Population composition is the description of a population defined by characteristics such as age, race, sex or marital status. These descriptions can be necessary for understanding the social dynamics from historical and comparative research. These data are often compared using a population pyramid. Population composition is also a very important part of historical research. Information ranging back hundreds of years is not always worthwhile, because the numbers of people for which data are available may not provide the information that is important (such as population size). Lack of information on the original data-collection procedures may prevent accurate evaluation of data quality. Demographic analysis in institutions and organizations Labor market The demographic analysis of labor markets can be used to show slow population growth, population aging, and the increased importance of immigration. The U.S. Census Bureau projects that in the next 100 years, the United States will face some dramatic demographic changes. The population is expected to grow more slowly and age more rapidly than ever before, and the nation will become a nation of immigrants. This influx is projected to rise over the next century as new immigrants and their children will account for over half the U.S. population. These demographic shifts could ignite major adjustments in the economy, more specifically, in labor markets. Turnover in internal labor markets People decide to exit organizations for many reasons, such as better jobs, dissatisfaction, and concerns within the family. The causes of turnover can be split into two separate factors, one linked with the culture of the organization, and the other relating to all other factors. People who do not fully accept a culture might leave voluntarily. Or, some individuals might leave because they fail to fit in and fail to change within a particular organization. Population ecology of organizations A basic definition of population ecology is a study of the distribution and abundance of organisms. As it relates to organizations and demography, organizations go through various liabilities to their continued survival. Hospitals, like all other large and complex organizations, are affected by the environment in which they operate. For example, a study was done on the closure of acute care hospitals in Florida during a particular period. The study examined the effects of size, age, and niche density of these particular hospitals. A population theory says that organizational outcomes are mostly determined by environmental factors. 
Among several factors of the theory, there are four that apply to the hospital closure example: size, age, density of niches in which organizations operate, and density of niches in which organizations are established. Business organizations Problems in which demographers may be called upon to assist business organizations are when determining the best prospective location in an area of a branch store or service outlet, predicting the demand for a new product, and to analyze certain dynamics of a company's workforce. Choosing a new location for a branch of a bank, choosing the area in which to start a new supermarket, consulting a bank loan officer that a particular location would be a beneficial site to start a car wash, and determining what shopping area would be best to buy and be redeveloped in metropolis area are types of problems in which demographers can be called upon. Standardization is a useful demographic technique used in the analysis of a business. It can be used as an interpretive and analytic tool for the comparison of different markets. Nonprofit organizations These organizations have interests about the number and characteristics of their clients so they can maximize the sale of their products, their outlook on their influence, or the ends of their power, services, and beneficial works. See also Biodemography Biodemography of human longevity Demographics of the world Demographic economics Gompertz–Makeham law of mortality Linguistic demography List of demographics articles Medieval demography National Security Study Memorandum 200 of 1974 NRS social grade Political demography Population biology Population dynamics Population geography Population reconstruction Population statistics Religious demography Replacement migration Reproductive health Social surveys Current Population Survey (CPS) Demographic and Health Surveys (DHS) European Social Survey (ESS) General Social Survey (GSS) German General Social Survey (ALLBUS) Multiple Indicator Cluster Surveys (MICS) National Longitudinal Survey (NLS) Panel Study of Income Dynamics (PSID) Performance Monitoring and Accountability 2020 (PMA2020) Socio-Economic Panel (SOEP, German) World Values Survey (WVS) Organizations Global Social Change Research Project (United States) Institut national d'études démographiques (INED) (France) Max Planck Institute for Demographic Research (Germany) Office of Population Research (Princeton University) (United States) Population Council (United States) Population Studies Center at the University of Michigan (United States) Vienna Institute of Demography (VID) (Austria) Wittgenstein Centre for Demography and Global Human Capital (Austria) Scientific journals Brazilian Journal of Population Studies Cahiers québécois de démographie Demography Population and Development Review References Further reading Josef Ehmer, Jens Ehrhardt, Martin Kohli (Eds.): Fertility in the History of the 20th Century: Trends, Theories, Policies, Discourses. Historical Social Research 36 (2), 2011. Glad, John. 2008. Future Human Evolution: Eugenics in the Twenty-First Century. Hermitage Publishers, Gavrilova N.S., Gavrilov L.A. 2011. Ageing and Longevity: Mortality Laws and Mortality Forecasts for Ageing Populations [In Czech: Stárnutí a dlouhověkost: Zákony a prognózy úmrtnosti pro stárnoucí populace]. Demografie, 53(2): 109–128. Preston, Samuel, Patrick Heuveline, and Michel Guillot. 2000. Demography: Measuring and Modeling Population Processes. Blackwell Publishing. Gavrilov L.A., Gavrilova N.S. 2010. 
Demographic Consequences of Defeating Aging. Rejuvenation Research, 13(2-3): 329–334. Paul R. Ehrlich (1968), The Population Bomb Controversial Neo-Malthusianist pamphlet Leonid A. Gavrilov & Natalia S. Gavrilova (1991), The Biology of Life Span: A Quantitative Approach. New York: Harwood Academic Publisher, Andrey Korotayev & Daria Khaltourina (2006). Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. Moscow: URSS Uhlenberg P. (Editor), (2009) International Handbook of the Demography of Aging, New York: Springer-Verlag, pp. 113–131. Paul Demeny and Geoffrey McNicoll (Eds.). 2003. The Encyclopedia of Population. New York, Macmillan Reference USA, vol.1, 32-37 Phillip Longman (2004), The Empty Cradle: how falling birth rates threaten global prosperity and what to do about it Sven Kunisch, Stephan A. Boehm, Michael Boppel (eds) (2011). From Grey to Silver: Managing the Demographic Change Successfully, Springer-Verlag, Berlin Heidelberg, Joe McFalls (2007), Population: A Lively Introduction, Population Reference Bureau Ben J. Wattenberg (2004), How the New Demography of Depopulation Will Shape Our Future. Chicago: R. Dee, Perry, Marc J. & Mackun, Paul J. Population Change & Distribution: Census 2000 Brief. (2001) Preston, Samuel; Heuveline, Patrick; and Guillot Michel. 2000. Demography: Measuring and Modeling Population Processes. Blackwell Publishing. Schutt, Russell K. 2006. "Investigating the Social World: The Process and Practice of Research". SAGE Publications. Siegal, Jacob S. (2002), Applied Demography: Applications to Business, Government, Law, and Public Policy. San Diego: Academic Press. Wattenberg, Ben J. (2004), How the New Demography of Depopulation Will Shape Our Future. Chicago: R. Dee, External links Quick demography data lookup (archived 4 March 2016) Historicalstatistics.org Links to historical demographic and economic statistics United Nations Population Division: Homepage World Population Prospects, the 2012 Revision, Population estimates and projections for 230 countries and areas (archived 6 May 2011) World Urbanization Prospects, the 2011 Revision, Estimates and projections of urban and rural populations and urban agglomerations Probabilistic Population Projections, the 2nd Revision, Probabilistic Population Projections, based on the 2010 Revision of the World Population Prospects (archived 13 December 2012) Java Simulation of Population Dynamics. Basic Guide to the World: Population changes and trends, 1960–2003 Brief review of world basic demographic trends Family and Fertility Surveys (FFS) Actuarial science Environmental social science Interdisciplinary subfields of sociology Human geography Market segmentation Human populations
Demography
Mathematics,Environmental_science
5,315
74,838,922
https://en.wikipedia.org/wiki/Ballet%20and%20fashion
Throughout its history, the costume of ballet has influenced and been influenced by fashion. Ballet-specific clothing used in productions and during practice, such as ballet flats, ballerina skirt, legwarmers, and leotards have been elements of fashion trends. Ballet costume itself has adapted aesthetically over the years, incorporating contemporary fashion trends while also updating fabrics and materials to allow for greater freedom of movement for the dancers. The classic ballerina costume with a tutu and pointe shoes debuted in the 1830s. Ballet costume is marked by the innovation in lightweight materials such as tulle, chiffon, and organza. In the early 20th century, productions by the Russian ballet company Ballets Russes had a large influence on fashion design in Paris. Designers incorporated ballet-inspired themes in their creations. Designers that have been influenced by ballet include Christian Dior, Elsa Schiaparelli, Paul Poiret, Coco Chanel, Jacques Fath, Jeanne Lanvin, Madeleine Vionnet, Molly Goddard, and Simone Rocha. History 17th and 18th centuries Ballet costume originated in the 17th-century royal courts of Italy and France, including that of Louis XIV. Early costume designs in ballet productions were based on court dress, though more extravagant. All of the performers in early ballets were men, with boys performing the female roles en travesti. In the 18th century, as ballet became professionalized and moved from the courts to the theaters, women joined the ranks of ballet dancers. Traditionally, dancers wore heeled shoes, until the 1730s, when Paris Opera Ballet dancer Marie Camargo was one of the first to wear ballet slippers instead. She also wore midcalf-length skirts and close-fitting drawers. Until the late 18th century, lead dancers in a ballet company often wore masks. The practice was abandoned after balletmaster Jean-Georges Noverre and choreographer Maximilien Gardel dispensed with them, seeing how they impeded the dancers' movements and the ability to see their facial expressions. Similarly, cumbersome hairstyles and wigs that were not conducive to ballet movements were largely excluded from the stage. 19th century Ballet costume has an essential role in facilitating the movements of dancers while "maintaining the integrity of the line of the body". Technical and visual problems with ballet costume are avoided through the creation of well-designed and proportioned clothing. Ballet costume has evolved alongside choreography to allow for the display of musculature. In the late 18th and early 19th centuries, the industrialisation of cotton manufacturing led to the widespread availability of cheap cotton fabrics such as tulle, muslin, tarlatan, and gauze. Ballet companies were able to produce new costumes for each production. Ballet costume during the early 19th century mirrored the women's fashions of the era. Ballet appropriated high fashion elements, including full sleeves, revealing decolletage, fitted waist, bell-shaped skirts, and more diaphanous fabrics. Adaptations such as lighter fabrics and raised hemlines allowed dancers greater freedom of movement and the audience to appreciate the dancer's footwork. As clothing became less restricted, the natural silhouette was emphasized. Pointe shoes were invented around 1820 and the archetypal look of the romantic ballerina was provided by Marie Taglioni in the 1832 ballet La Sylphide. 
Her fitted décolleté bodice, diaphanous calf-length tulle skirt, and satin pointe shoes laced around the calf provided the template for the ballerina costume. Her ballerina skirt was a shortened version of the 1830s fashion gown. She was the first ballerina to dance a full-length ballet en pointe, and became very popular with images of her widely published. Following her fame, luxury fabrics and corsets were produced bearing the names Taglioni or La Sylphide. As ballet emerged as entertainment for aristocrats, the ballet dancer became principally a woman's profession and the reputation of ballerinas declined in the later 19th century. The feminization of ballet was due in part to a larger male audience. Ballerinas were frequently poor, marginalized members of society, regarded more as workers than artists. They were often subject to the attention of lascivious men, sexually commodified, and sometimes forced into prostitution. Styles of ballet costume were influenced by the popularity of romantic narratives of regional and supernatural folklore, such as the sylph motif. Towards the end of the 19th century, the classical tutu was codified in St. Petersburg during the era of ballet master Marius Petipa. During this time, the tutu was shortened and the boxes of pointe shoes were reinforced. 20th century Ballets Russes Beginning in 1909, the Russian ballet company Ballets Russes brought high classical ballet to the West, principally in Paris. Fashion designers and haute couture were inspired by the influential ballet company. Léon Bakst was the troupe's principal costume designer in the early 1900s. His designs inspired Paul Poiret, who also designed for the company. Trends in Parisian fashion were adapted into ballet costume by Ballets Russes. The dress from Stravinsky's 1910 ballet The Firebird was influential in fashion design. The Orientalist aesthetic of Ballets Russes influenced the boldly colored trousers and harem skirts and trousers of fashion designer Paul Poiret. Coco Chanel designed costumes for the 1924 ballet Le Train Bleu and went on to create ballet-inspired fashions. 1920s Ballets Russes continued to have an influence on fashion into the 1920s. A turning point in the relationship between ballet and fashion was Sergei Diaghilev's 1921 production of The Sleeping Beauty. The ballet's use of light pastels such as lilac influenced color trends in fashion. The production's bluebird blue costumes inspired Elsa Schiaparelli to create her signature color "sleeping blue". French fashion designer Jeanne Lanvin's full-skirted robe de style dresses of the mid-1920s and Madeleine Vionnet's Ballerina dress both had inspiration in the ballerina costume. According to ballet historian Ilyana Karthas, during the 1920s images of femininity were promoted in the context of athleticism, exercise, and the physical body. Italian fashion designer Elsa Schiaparelli also collaborated with the Ballets Russes, inspired by the surrealistic costuming of Giorgio de Chirico in Diaghilev's 1929 production of Le Bal. 1930s and balletomania The 1932 ballet Cotillon was choreographed by George Balanchine and starred Tamara Toumanova, one of the first Baby Ballerinas. Costumes from the production were designed by Christian Bérard and made by Barbara Karinska, who innovated the layering of differently colored tulle. Bérard's designs inspired the glittering tulle gowns that Coco Chanel designed in the 1930s. Since the 1930s, ballet costume has inspired the fashion trends of fitted bodices and bell-shaped silhouettes. 
Materials used for tutus, such as chiffon, silk tulle, and organza were later incorporated into fashion collections. The romantic-era tutu style also had an influence on the design of gowns. In the 1930s, longer dresses with tulle skirts became fashionable, as exemplified by Coco Chanel's 1937 "Etoiles" dress. which drew inspiration from Balanchine's 1932 ballet Cotillon. The balletomania trend of the 1930s and 1940s had a marked influence on fashion. In the early 1930s, ballet fashion was frequently featured in magazines. Ballerinas were also employed as models from the 1930s onward. 1940s and 1950s With the advent of synthetic materials, ballet practice clothing such as leotards and tights became popular as fashion pieces from the 1940s on. In 1941, former ballet student and fashion editor Diana Vreeland innovated the use of pointe shoes as everyday wear, in part because wartime restrictions did not apply to them. Due to a shortage of leather, fashion designer Claire McCardell commissioned the dance house Capezio to produce a range of ballet flats to match her designs. The ballet flat went on to become everyday footwear. Designers of high fashion and haute couture collaborated frequently with star ballerinas such as Margot Fonteyn in the 1940s. Couturiers such as Pierre Balmain designed costume for ballet as well as high fashion. Designers Christian Dior and Jacques Fath were both influenced by ballet costume. Costumes designed by Fath for the 1948 film The Red Shoes featuring the ballerina Moira Shearer were also influential in creating a demand for ballet-inspired fashion. The fashion house Balmain, founded by Pierre Balmain, and the designer Cristóbal Balenciaga drew inspiration from the aesthetics of ballet costume. The use of feathers in the ballet costumes of ballerina-bird characters in productions of The Firebird, The Dying Swan, and Swan Lake was also mimicked in fashion. 1960s and 1970s During the late 1960s and 1970s, the clothing brand Danskin produced leotards that could be worn for dance as well as streetwear. Fashion designer Bonnie August popularized the look of unitards worn under wrap skirts in the mid-1970s. Ballet-inspired fashion designs experienced a revival in the 1970s during the disco era while athleisure incorporated mainstays of ballet rehearsal clothing such as leotards. In the 1970s, Dance Theatre of Harlem founder Arthur Mitchell decided that dancers' tights and shoes should match their skin tone. The dance apparel company Capezio produced brown pointe shoes for the company. A 1976 collection from Yves Saint Laurent paid homage to the Ballets Russes and Serge Diaghilev. 21st century During the early 2000s, a ballet-inspired fashion trend drawing heavily on warm-up clothing was called "dancer off-duty". In the 2000s, ballet fashion was popularized on film and television through the film Black Swan and Carrie Bradshaw's iconic tulle skirt from Sex and The City. The 2000s saw the lines of companies that produce pointe shoes broaden to include skin tones of people of color, including Black women in ballet. A 2020 exhibition Ballerina: Fashion's Modern Muse was held at The Museum at FIT. Balletcore A resurgence in interest in ballerina-inspired fashion in the mid-2020s came to be known as balletcore. The fashion trend drew inspiration from the graceful and elegant aesthetic of ballet dancers, which has been called "hyper-feminine" and embraces both comfort and body movement in a context that explores femininity. 
The popularity of the trend has been attributed to Gen Z's obsession with nostalgia. Balletcore continued fashion's use of traditional ballet costume elements such as ballet flats, pointe shoes, ballerina skirts, leotards, and tights. Athleisure fashions incorporate dancewear elements such as legwarmers, which are often layered or combined with tie skirts and wrap tops, as well as delicate accessories like ribbon chokers and ballet slipper-inspired shoes. Balletcore continued to rely on lightweight materials such as tulle and satin, organza, sheer fabrics, mesh, and spandex. Ballet-inspired fashion continues to emphasize soft pastel hues such as pink, peach, baby blue, lilac, and light neutral colors. In the 2020s, ballet-inspired elements have grown in popularity as part of the collections of Rodarte and Miu Miu, as well as those of fashion designers Molly Goddard and Simone Rocha. While principally a phenomenon in women's clothing, ballet has also influenced designs in men's wear and workout wear, with brands creating collections that combine functionality with a balletic aesthetic. See also History of ballet History of fashion design Music and fashion References Further reading Fashion Fashion design Fashion industry Fashion History of fashion Internet aesthetics
Ballet and fashion
Engineering
2,388
69,072,781
https://en.wikipedia.org/wiki/Find-me%20signals
Cells destined for apoptosis release molecules referred to as find-me signals. These signal molecules are used to attract phagocytes, which engulf and eliminate damaged cells. Find-me signals are typically released by the apoptotic cells while the cell membrane remains intact. This ensures that the phagocytic cells are able to remove the dying cells before their membranes are compromised. A leaky membrane leads to secondary necrosis, which may cause additional inflammation; therefore, it is best to remove dying cells before this occurs. One cell is capable of releasing multiple find-me signals. Should a cell lack the ability to release its find-me signal, other cells may release additional find-me signals to overcome the discrepancy. Inflammation can be suppressed by find-me signals during cell clearance. A phagocyte may also be able to engulf more material or enhance its ability to engulf materials when stimulated by find-me signals. A wide range of molecules, from cellular lipids, proteins, and peptides to nucleotides, act as find-me signals. History The correlation between the early stages of cell death and the removal of apoptotic cells was first studied in C. elegans. Mutants that could not carry out normal caspase-mediated apoptosis were used to demonstrate that cells in the beginning stages of death were still efficiently recognized and removed by phagocytes. This occurred because the engulfment machinery of the phagocytes was still functioning normally even though the apoptotic process in the dying cell was disrupted. A study done in 2003 showed that breast cancer cells release a find-me signal known as lysophosphatidylcholine. This research brought the concept of find-me signals to the forefront of cell clearance research and introduced the idea that dying cells release signals that flow throughout the body's tissues in order to alert and recruit monocytes to their location. Chemicals that act as find-me signals Known types of find-me signals include: Lipids: lysophosphatidylcholine (lysoPC) sphingosine-1-phosphate (S1P) Proteins and peptides: fractalkine (CX3CL1) interleukin-8 (IL-8) complement components C3a and C5a split tyrosyl tRNA synthetase (mini TyrRS) dimerized ribosomal protein S19 (RP S19) endothelial monocyte-activating polypeptide II (EMAP II) Formyl peptides, especially N-formylmethionine-leucyl-phenylalanine (fMLP) Nucleotides: adenosine triphosphate (ATP), adenosine diphosphate (ADP), uridine triphosphate (UTP) and uridine diphosphate (UDP). All of these molecules are linked to monocyte or macrophage recruitment towards dying cells. The receptor on the monocyte or other phagocyte for ATP and UTP signals has been shown to be P2Y2 in vivo. The receptor on the monocyte or other phagocyte for the CX3CL1 signal has been shown to be CX3CR1 in vivo. The roles of the S1P and LPC signals remain to be established in an in vivo model. Lipids Lysophosphatidylcholine (LPC) Identified in breast cancer cells, this find-me signal is released by MCF-7 cells to attract THP-1 monocytes. Other cells and different methods of apoptosis may be able to release LPC, but MCF-7 cells have been the most thoroughly studied. The enzyme calcium-independent phospholipase A2 (iPLA2) is most likely responsible for the apoptotic cell releasing LPC as it is dying. The amount of LPC released is small, so it is unclear how it is able to set up a concentration gradient in the serum or plasma in order to attract phagocytes to the dying cell's location.
High concentrations of LPC cause lysis of many cells in its vicinity. LPC may be present in a different chemical form, rather than its native form, when released by an apoptotic cell. It may bind to components of the serum, making it unavailable to be modified or taken into other tissues. LPC may also be able to function with other soluble molecules. The receptor on the phagocyte that is thought to be linked to LPC is G2A, but it has not been confirmed. The role of LPC as a find-me signal has also not been characterized in vivo. Sphingosine 1-phosphate (S1P) It has been suggested that the induction of apoptosis results in increased expression of S1P kinase 1 (SphK1). The increased presence of SphK1 is linked to the creation of S1P, which then recruits macrophages to the immediate area surrounding apoptotic cells. It has also been suggested that S1P kinase 2 (SphK2) is a target of caspase 1, and that a cleaved fragment of SphK2 is what is released from dying cells into the surrounding extracellular space where it is transformed into S1P. All of the studies thus far characterizing S1P have been done in vitro, and the role of S1P in recruiting phagocytes to apoptotic cells in vivo has not been determined. Staurosporine-induced cell death has been shown to influence caspase-1 to initiate the cleavage of SphK2. In other forms of apoptosis, caspase-1 is not normally induced, meaning the formation of S1P needs to be further studied. S1P can be recognized by the G protein-coupled receptors S1P1 through S1P5. Which one of these receptors is relevant in the recruitment of phagocytes to apoptotic cells is not yet known. Sphingosine kinase 1 and sphingosine kinase 2 have been linked to S1P generation during apoptosis through different pathways. The level of SphK1 is increased during apoptosis while caspases cleave SphK2. CX3CL1 CX3CL1 is a soluble fragment of the fractalkine protein that serves as a find-me signal for monocytes. Fractalkine usually sits on the plasma membrane as an intercellular adhesion molecule, but during apoptosis a soluble 60 kDa fragment is sent out as a find-me signal. CX3CL1 release is indirectly dependent upon caspase activity. CX3CL1 could also be released as part of microparticles from the beginning stages of apoptotic death of Burkitt lymphoma cells. The receptors on monocytes that are able to detect the presence of CX3CL1 are CX3CR1 receptors, as shown in both in vivo and in vitro studies. Nucleotides: ATP and UTP These were the most recent find-me signals to be characterized as components of the supernatant of apoptotic cells. Studies were able to show that the controlled release of the nucleotides ATP and UTP from cells in the beginning stages of apoptosis can potentially attract monocytes in vivo and in vitro. This has been observed in Jurkat cells, primary thymocytes, MCF-7 cells, and lung epithelial cells. Release is dependent upon caspase activity. Less than 2% of the cell's total ATP is released during the beginning stages of cell death, while the dying cell's plasma membrane is still intact. The released ATP preferentially attracts phagocytes through chemotaxis, rather than random migration through chemokinesis. The receptors on monocytes that are able to sense the release of nucleotides are in the P2Y family of nucleotide receptors. Monocytic P2Y2 has been shown to be able to recognize nucleotides in vitro and in genetically modified mice. Nucleotides are often degraded by nucleotide triphosphatases (NTPases) when they are in the extracellular space.
Only a small amount of ATP is released during find-me signaling, so it is unclear how the nucleotide avoids degradation by NTPases in order to establish a gradient used to signal clearing by monocytes. NTPases may serve as regulators in various tissues in order to control how far the nucleotide signal can travel. The signaling pathway within the monocyte downstream of P2Y receptor activation is still unknown. Others The ribosomal protein S19 has been suggested as a possible find-me signal. Apoptosis causes a dimerization of S19, inducing a conformation change that allows it to bind to the C5a receptor on monocytes. Research suggests that S19 is released during the late to final stages of apoptosis. EMAP II, a fragment of tyrosyl tRNA synthetase, has also been shown to attract monocytes. This molecule has inflammatory properties, meaning it is capable of attracting and activating neutrophils. In apoptosis Background Humans turn over billions of cells as a part of normal bodily processes every day, which correlates with about 1 million cells being replaced per second. The ultimate goal of the body's intrinsic cell death mechanisms is to efficiently and asymptomatically clear dying cells. There are many reasons why the body needs to get rid of both diseased and non-diseased cells. As a part of the cell's natural division process, excess cells may be generated during normal growth, development, or tissue repair after illness or an injury. Only a fraction of these new cells will stay and become mature, while the rest will die and be cleared by the body's immune system. Cells may also need to be removed because they are too old or become damaged over time. Cell damage can occur through environmental factors such as air pollution, UV radiation from the sun, or physical injury. In most cases, the cells that are dying are recognized by phagocytes through find-me signals and removed. Quick and efficient clearing of apoptotic cells is crucial to prevent secondary necrosis of dying cells and to avoid autoantigens causing immune responses. Find-me signals alert phagocytes to the presence of apoptotic cells while those cells are in the beginning stages of dying. The phagocytes are able to use the find-me signals to locate the dying cell. Find-me signals set up a gradient within the tissue they are in to attract phagocytes to their location. The phagocytes migrate to the dying cell when their receptors respond to the find-me signals, initiating a signaling pathway within the phagocyte that causes it to move toward the cell emitting those signals. If the body's immune system, or more specifically its phagocytes, fails to clear dying cells in the body, symptoms such as chronic inflammation, autoimmune disorders, and developmental abnormalities have been shown to occur. As long as the engulfment process is functioning and efficient, apoptotic cells go unnoticed in the body and do not cause any long-term symptoms. If this process is disrupted in any way, the accumulation of secondary necrotic cells in tissues of the body can occur. This is associated with autoimmune disorders, causing the immune system to attack self-antigens on the uncleared cells. Release from dying cells The main function of a find-me signal is to be released while a cell undergoing apoptosis is still intact in order to attract phagocytes to come and clear the dying cell before secondary necrosis can occur. This suggests that the initiation of apoptosis may be coupled with the release of find-me signals from the dying cells.
As of now, it is unknown how LPC is released from apoptotic cells. S1P generation involves caspase-1-dependent release of sphingosine kinase 2 (SphK2) fragments. CX3CL1 release is mediated through the release of a 60 kDa microparticle fragment of fractalkine from the beginning stages of Burkitt lymphoma cell apoptosis. Nucleotide release is one of the better-defined find-me signal release mechanisms. The nucleotides are released through a pannexin-family channel known as PANX1. PANX1 is a four-pass transmembrane protein that forms large pores in the plasma membrane of a cell, allowing molecules up to 1 kDa in size to pass through. The nucleotides are detected by P2Y2 on monocytes, which causes them to migrate to the location of the apoptotic cell. Engulfment and clearance of apoptotic cells by phagocytes Phagocytes are able to sense the find-me signals presented by an apoptotic cell during the beginning stages of cell death. They sense the find-me signal gradient and migrate to the vicinity of the signaling cell. Using the presented find-me signal along with the "eat-me" signal also exposed by the apoptotic cell, the phagocyte is able to recognize the dying cell and engulf it. Phagocytes contribute to the "final stages" of cell death by apoptosis. In some contexts they are already near a dying cell and do not have to travel far in order to engulf and clear it. In most mammalian systems, however, this is not the case. In the human thymus, for example, a dying thymocyte is unlikely to be engulfed by a healthy neighboring thymocyte; instead, a macrophage or dendritic cell that resides in the thymus is likely to carry out clearance of the corpse. In this case, a dying cell needs to be able to send out an advertisement of sorts to declare its state of death in order to recruit phagocytes to its location. Phagocytic cells use the soluble find-me signals released by the apoptotic cells to do this. Phagocytes detect the gradient set up by the find-me signals presented by the dying cell in order to navigate to its location. Steps in the engulfment and clearance of apoptotic cells by phagocytes: Phagocytes need to be in the vicinity of the cells presenting find-me signals. The phagocytes use the find-me signals to locate these cells and move to their location. The phagocytes interact with the dying cells' exposed eat-me signals through specific eat-me signal receptors on the phagocytic cell. The phagocyte will engulf the eat-me signal-presenting cell through induced signaling of engulfment receptors and by the reorganization of the phagocytic cell's cytoskeleton. The components of the dying cell are processed by the phagocytes within their lysosomes. Non-apoptotic roles Find-me signals may also play a role in the phagocytic activity of cells in the direct vicinity of cells undergoing apoptosis. This phenomenon allows neighboring cells adjacent to the apoptotic cell sending out the find-me signal to be engulfed without releasing find-me signals of their own. Find-me signals could possibly play a role in priming phagocytes to enhance their phagocytic capacity. In addition, they may also be able to enhance production of certain bridging molecules created by macrophages. See also Eat-me signals References Molecules Phagocytes
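To illustrate the gradient-sensing behaviour described above, the following sketch is a purely illustrative toy model, not drawn from any study cited in this article: it combines a steady-state point-source diffusion profile, C(r) = Q/(4πDr), with a model phagocyte that repeatedly steps toward the locally highest signal concentration. All function names, parameter values, and the four-direction stepping rule are hypothetical simplifications.

```python
import math

def concentration(pos, source, release_rate=1.0, diffusion=1.0):
    """Steady-state concentration of a soluble find-me signal released at a
    constant rate from a point source: C(r) = Q / (4 * pi * D * r)."""
    r = math.dist(pos, source)
    return release_rate / (4 * math.pi * diffusion * max(r, 1e-6))

def chemotax(start, source, step=1.0, n_steps=200):
    """Move a model phagocyte one step at a time toward the locally highest
    concentration (a crude stand-in for receptor-driven chemotaxis)."""
    x, y = start
    for _ in range(n_steps):
        # Sample the signal in four directions and move toward the largest value.
        candidates = [(x + step, y), (x - step, y), (x, y + step), (x, y - step)]
        x, y = max(candidates, key=lambda p: concentration(p, source))
        if math.dist((x, y), source) < step:  # close enough to "engulf"
            break
    return (x, y)

if __name__ == "__main__":
    dying_cell = (0.0, 0.0)
    phagocyte_start = (40.0, -25.0)
    print("final position:", chemotax(phagocyte_start, dying_cell))
```

In this toy model the phagocyte reaches the dying cell simply because the modeled concentration rises monotonically toward the source; real chemotaxis additionally involves receptor signaling, cytoskeletal remodeling, and degradation of the signal by extracellular enzymes such as the NTPases mentioned above.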
Find-me signals
Physics,Chemistry
3,248
56,626,732
https://en.wikipedia.org/wiki/Yu%20Xi
Yu Xi (虞喜; 307–345 AD), courtesy name Zhongning (仲寧), was a Chinese astronomer, politician, and writer of the Jin dynasty (266–420 AD). He is best known for his discovery of the precession of the equinoxes, independently of the earlier ancient Greek astronomer Hipparchus. He also postulated that the Earth could be spherical in shape instead of being flat and square, long before the idea became widely accepted in Chinese science with the advances in circumnavigation by Europeans from the 16th-20th centuries, especially with their arrival into the capital's imperial court in the 17th century. Background and official career The life and works of Yu Xi are described in his biography found in the Book of Jin, the official history of the Jin dynasty. He was born in Yuyao, Guiji (modern Shaoxing, Zhejiang province, China). Yu Song, son of Yu Fan, was recorded to be his clan elder. His father Yu Cha (虞察) was a military commander and his younger brother Yu Yu (虞預; fl. 307–329 AD) was likewise a scholar and writer. During the reign of Emperor Min of Jin (r. 313–317 AD) he obtained a low-level position in the administration of the governor of Guiji commandery. He declined a series of nominations and promotions thereafter, including a teaching position at the imperial university in 325 AD, an appointment at the imperial court in 333 AD, and the post of cavalier attendant-in-ordinary in 335 AD. Works In 336 AD Yu Xi wrote the An Tian Lun (安天論; Discussion of Whether the Heavens Are At Rest or Disquisition on the Conformation of the Heavens). In it he described the precession of the equinoxes (i.e. axial precession). He observed that the position of the Sun during the winter solstice had drifted roughly one degree over the course of fifty years relative to the position of the stars. This was the same discovery made earlier by the ancient Greek astronomer Hipparchus (c. 190–120 BC), who found that the measurements for either the Sun's path around the ecliptic to the vernal equinox or the Sun's relative position to the stars were not equal in length. Yu Xi wrote a critical analysis of the huntian (渾天) theory of the celestial sphere, arguing that the heavens surrounding the earth were infinite and motionless. He advanced the idea that the shape of the Earth was either square or round, but that it had to correspond to the shape of the heavens enveloping it. The huntian theory, as mentioned by Western Han dynasty astronomer Luoxia Hong (fl. 140–104 BC) and fully described by the Eastern Han dynasty polymath scientist and statesman Zhang Heng (78–139 AD), insisted that the heavens were spherical and that the Earth was like an egg yolk at its center. Yu Xi's ideas about the infinity of outer space seem to echo Zhang's ideas of endless space even beyond the celestial sphere. Although mainstream Chinese science before European influence in the 17th century surmised that the Earth was flat and square-shaped, some scholars, such as Song dynasty mathematician Li Ye (1192–1279 AD), proposed the idea that it was spherical like the heavens. The acceptance of a spherical Earth can be seen in the astronomical and geographical treatise Gezhicao (格致草) written in 1648 by Xiong Mingyu (熊明遇). It rejects the square-Earth theory and, with clear European influence, explains that ships are capable of circumnavigating the globe. However, it explained this using classical Chinese phrases, such as the Earth being as round as a crossbow bullet, a phrase Zhang Heng had previously used to describe the shape of both the Sun and Moon. 
Ultimately, though, it was the European Jesuits in China of the 17th century that dispelled the Chinese theory of a flat Earth, convincing the Chinese to adopt the spherical Earth theory established by the ancient Greeks Anaxagoras (c. 500–428 BC), Philolaus (c. 470–385), Aristotle (384–322 BC), and Eratosthenes (c. 276–195 BC). Yu Xi is known to have written commentaries on the various Chinese classics. His commentaries and notes were mostly lost before the Tang dynasty, but the fragments preserved in other texts were collected in a single compendium by Qing-dynasty scholar Ma Guohan (1794–1857). See also History of science and technology in China Kunyu Wanguo Quantu, Chinese world map published in 1602 by the Jesuit Matteo Ricci and Ming-Chinese colleagues, based on European discoveries Shanhai Yudi Quantu, Chinese world map published in 1609 Citations References Cullen, Christopher. (1993). "Appendix A: A Chinese Eratosthenes of the Flat Earth: a Study of a Fragment of Cosmology in Huainanzi", in Major, John. S. (ed), Heaven and Earth in Early Han Thought: Chapters Three, Four, and Five of the Huananzi. Albany: State University of New York Press. . Knechtges, David R.; Chang, Taiping. (2014). Ancient and Early Medieval Chinese Literature: a Reference Guide, vol 3. Leiden: Brill. . Needham, Joseph; Wang, Ling. (1995) [1959]. Science and Civilization in China: Mathematics and the Sciences of the Heavens and the Earth, vol. 3, reprint edition. Cambridge: Cambridge University Press. . Song, Zhenghai; Chen, Chuankang. (1996). "Why did Zheng He’s Sea Voyage Fail to Lead the Chinese to Make the ‘Great Geographic Discovery’?" in Fan, Dainian; Cohen, Robert S. (eds), Chinese Studies in the History and Philosophy of Science and Technology, translated by Kathleen Dugan and Jiang Mingshan, pp 303-314. Dordrecht: Kluwer Academic Publishers. . Sun, Kwok. (2017). Our Place in the Universe: Understanding Fundamental Astronomy from Ancient Discoveries, second edition. Cham, Switzerland: Springer. . External links "Yu Xi 虞喜 (fl. 307–46) (Zhongning 仲寧) - Astronome et érudit érémitique des Jin de l'Est" Archives-Ouvertes (French) 4th-century Chinese astronomers 4th-century Chinese writers 4th-century Confucianists Ancient Chinese astronomers Equinoxes Jin dynasty (266–420) science writers Politicians from Shaoxing Scientists from Shaoxing Writers from Shaoxing
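As a rough arithmetic check on the drift rate quoted above, one degree of drift in fifty years implies a full 360-degree precessional cycle of about 18,000 years, whereas the modern rate of roughly one degree per 72 years corresponds to a cycle of about 25,800 years. The short calculation below only illustrates this arithmetic; it is not a reconstruction of Yu Xi's own method, and the modern rate used here is an approximation.

```python
# Illustrative arithmetic only: convert a precession rate expressed as
# "one degree of drift per N years" into the implied length of a full cycle.
def precession_period(years_per_degree):
    return 360 * years_per_degree

yu_xi_estimate = precession_period(50)    # ~1 degree per 50 years, as reported by Yu Xi
modern_value = precession_period(71.6)    # ~1 degree per 71.6 years (approximate modern rate)

print(f"Implied period from Yu Xi's rate: {yu_xi_estimate:,.0f} years")
print(f"Period from the modern rate: {modern_value:,.0f} years")
```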
Yu Xi
Astronomy
1,393
5,946,026
https://en.wikipedia.org/wiki/People%20Capability%20Maturity%20Model
People Capability Maturity Model (short names: People CMM, PCMM, P-CMM) is a maturity framework that focuses on continuously improving the management and development of the human assets of an organization. It describes an evolutionary improvement path from ad hoc, inconsistently performed practices, to a mature, disciplined, and continuously improving development of the knowledge, skills, and motivation of the workforce that enhances strategic business performance. Related to fields such as human resources, knowledge management, and organizational development, the People CMM guides organizations in improving their processes for managing and developing their workforces. The People CMM helps organizations characterize the maturity of their workforce practices establish a programme of continuous workforce development, set priorities for improvement actions, integrate workforce development with process improvement, and establish a culture of excellence. The term was promoted in 1995, published in book form in 2001, and a second edition was published in July 2009. Description The People CMM consists of five maturity levels that establish successive foundations for continuously improving individual competencies, developing effective teams, motivating improved performance, and shaping the workforce the organization needs to accomplish its future business plans. Each maturity level is a well-defined evolutionary plateau that institutionalizes new capabilities for developing the organization's workforce. By following the maturity framework, an organization can avoid introducing workforce practices that its employees are unprepared to implement effectively. Structure The People CMM document describes the practices that constitute each of its maturity levels and provides information on how to apply them to guide organizational improvements. It describes an organization's capability for developing its workforce at each maturity level. It also describes how the People CMM can be applied as a standard for assessing workforce practices and as a guide for planning and implementing improvement activities. Version 2 of the People CMM has been designed to correct known issues in Version 1, which was released in 1995. It adds enhancements learned from five years of implementation experience and integrates the model better with CMMI and its IPPD extensions. The primary motivation for updating the People CMM was the error in Version 1 of placing team-building activities at Maturity Level 4. The authors made this placement based on substantial Feedback that it should not be placed at Maturity Level 3, as it had been in early review releases. Experience has indicated that many organizations initiate the formal development of workgroups while working toward Maturity Level 3. Thus, Version 2 of the People CMM initiates process-driven workgroup development at Maturity Level 3. This change is consistent with the placement of integrated teaming activities at Maturity Level 3 of the CMMI-IPPD. See also Capability Immaturity Model (CIMM) Capability Maturity Model (CMM) Capability Maturity Model Integration (CMMI) References External links Organisational maturity and functional performance P-CMM Mobile App Android P-CMM Mobile App Apple Maturity models Information technology management
People Capability Maturity Model
Technology
578
6,494,001
https://en.wikipedia.org/wiki/International%20Union%20of%20Basic%20and%20Clinical%20Pharmacology
The International Union of Basic and Clinical Pharmacology (IUPHAR) is a voluntary, non-profit association representing the interests of scientists in pharmacology-related fields to facilitate Better Medicines through Global Education and Research around the world. History Established in 1959 as a section of the International Union of Physiological Sciences, IUPHAR became an independent organization in 1966 and is a member of the International Council for Science (ICSU). The first World Congress of Pharmacology was held in Stockholm, Sweden in 1961 and subsequently held every three years. After 1990 the World Congresses were moved to a four-year interval. These meetings present the latest pharmacological research, technology, and methodology, and provide a forum for international collaboration and exchange of ideas. A General Assembly, consisting of delegates from all the member societies, is convened during the congresses so member societies have an opportunity to elect the Executive Committee and vote on matters concerning the governance and activities of the union. Members IUPHAR members are regional, national and special-interest societies around the world. The various sections and committees are composed of individuals from academia, pharmaceutical companies, and government organizations. IUPHAR resources are available to all members of the pharmacology-related societies that adhere to IUPHAR. Composition IUPHAR is divided in sectional topics. The Division of Clinical Pharmacology, including 3 subcommittees of Developing Countries, Geriatrics, and Pharmacoepidemiology and Pharmacovigilance, focuses on the needs and research tools for clinicians. The Committee on Receptor Nomenclature and Drug Classification (NC-IUPHAR) provides a uniform guideline for naming and classifying results from the Human Genome Project, naming proteins derived from new sequences as functional receptors and ion channels. Sections specializing in various areas of pharmacology have been established, including Drug Metabolism and Drug Transport, Education, Gastrointestinal Pharmacology, Immunopharmacology, Pharmacology of Natural products, Neuropsychopharmacology, Pediatrics Clinical Pharmacology and Pharmacogenetics and Pharmacogenomics. Volunteers participate in the various sections and division according to their interests and training. Activities A primary purpose of IUPHAR is providing global free access to a major, on-line repository of characterization data for receptors, ion channels, enzyme target classes and drugs through the Committee on Receptor Nomenclature and Drug Classification (NC-IUPHAR), established in 1987. The Guide to Pharmacology established in 2012 superseded the earlier IUPHAR-DB. This is a joint endeavor with the British Pharmacological Society, and has been supported by the Wellcome Trust. It includes all the G protein-coupled receptors, voltage-gated ion channels, 7TM receptors, nuclear receptors, ligand-gated ion channels and Kinases which are known to be in the human genome. Where relevant, data on the rat and mouse homologues are presented to assist researchers and clinicians in developing and/or enhancing therapeutics for eventual medication in humans. NC-IUPHAR also promulgates standards of name nomenclature for research in pharmacology and the related disciplines. In general, IUPHAR offers individual pharmacologists free curriculum expertise, career development and job listings (the non-profit PharmacoCareers.org), research resources, and collaboration opportunities. 
IUPHAR offers its member societies venues for participating in worldwide initiatives, publicizing member meetings and activities, nominating individuals for Young Investigator awards, and naming delegates to the quadrennial General Assemblies. A biannual newsletter entitled Pharmacology International is published. As a non-government organization in official relations with the World Health Organization (WHO), IUPHAR representatives help shape international policy on essential medicines, appropriate dose therapeutics for children, and clinical pharmacology core competencies among its many WHO-related activities. The Division of Clinical Pharmacology compiled and released the Research in Humans Compendium, a free resource to provide the scientific community interested in human research with information on the design of research protocols to assess the effectiveness of a drug in a series of pathological conditions. IUPHAR is involved in the development of pharmacology in developing countries. In conjunction with ICSU the Pharmacology for Africa (PharfA) initiative was undertaken in 2006 to promote and organize pharmacology on the African continent. The South African Society of Basic and Clinical Pharmacology is building a database and network of institutions and pharmacologists to create an infrastructure for training and funding pharmacologists. The long-term goal is for the African continent to attain the necessary pharmacological knowledge and resources to address disease-related issues affecting the population. As part of this mission, with the support of ICSU and the American Society for Pharmacology and Experimental Therapeutics, the IUPHAR Education Section organized a series of workshops, mostly in Africa, to train young investigators on ethical laboratory practices, including the three Rs of ethical use of animals. IUPHAR Pharmacology Education Project is a website developed by IUPHAR, with support from the American Society for Pharmacology and Experimental Therapeutics (ASPET), as a learning resource to support education and training in the pharmacological sciences. The materials are intended for use by students of pharmacology, clinical pharmacologists, and others interested in the pharmacological sciences. The stated aim is to produce a simple, attractive, easily searchable resource that will support students and teachers of the biomedical sciences, medicine, nursing and pharmacy. It is also intended as an introduction to some of the new data in the IUPHAR/BPS Guide to PHARMACOLOGY, particularly for those less familiar with such material. Future directions The early years of the 21st century will be focused on integrating basic and clinical research to implement translational medicine techniques more quickly. The 9th World Conference on Clinical Pharmacology and Therapeutics in Québec City, Canada was the last IUPHAR meeting to present clinical pharmacology separately. The World Congress of Basic and Clinical Pharmacology in Copenhagen, Denmark on July 17–23, 2010 was the first integrated meeting. The merging of these different approaches to the same discipline is to accelerate the introduction of improved therapeutics for humans. Educational components will be emphasized for both existing pharmacology programs as well as increasing and enhancing pharmacology training in developing countries. This topic was a central theme of the 17th World Congress of Basic and Clinical Pharmacology (WCP2014) held July 13–18, 2014 in Cape Town, South Africa. 
The 18th World Congress of Basic and Clinical Pharmacology (WCP2018) being held in Kyoto, Japan on July 1–6, 2018 will focus on drug development and therapeutics using new methodologies such as genome sequencing, stem cell biology, nanotechnology and systems biology. See also Drug design Medicinal chemistry Pre-clinical development Federation of European Pharmacological Societies European Association for Clinical Pharmacology and Therapeutics Safety Pharmacology Society References External links International Union of Basic and Clinical Pharmacology (IUPHAR) The Guide to Pharmacology The IUPHAR/BPS Guide to PHARMACOLOGY NC-IUPHAR Nomenclature Guidelines The 18th World Congress of Basic and Clinical Pharmacology Pharmacology for Africa Initiative (PharfA) Members of the International Council for Science International professional associations Pharmaceuticals policy Scientific organizations established in 1959 Pharmacological societies Members of the International Science Council
International Union of Basic and Clinical Pharmacology
Chemistry
1,569
21,333,387
https://en.wikipedia.org/wiki/List%20of%20mass%20spectrometry%20acronyms
This is a compilation of initialisms and acronyms commonly used in mass spectrometry. A ADI – Ambient desorption ionization AE – Appearance energy AFADESI – Air flow-assisted desorption electrospray ionization AFAI – Air flow-assisted ionization AFAPA – Aerosol flowing atmospheric-pressure afterglow AGHIS – All-glass heated inlet system AIRLAB – Ambient infrared laser ablation AMS – Accelerator mass spectrometry AMS – Aerosol mass spectrometer AMU – Atomic mass unit AP – Appearance potential AP MALDI – Atmospheric pressure matrix-assisted laser desorption/ionization APCI – Atmospheric pressure chemical ionization API – Atmospheric pressure ionization APPI – Atmospheric pressure photoionization ASAP – Atmospheric Sample Analysis Probe ASMS – American Society for Mass Spectrometry B BP – Base peak BIRD – Blackbody infrared radiative dissociation C CRF – Charge remote fragmentation CSR – Charge stripping reaction CI – Chemical ionization CA – Collisional activation CAD – Collisionally activated dissociation CID – Collision-induced dissociation CRM – Consecutive reaction monitoring CF-FAB – Continuous flow fast atom bombardment CRIMS – Chemical reaction interface mass spectrometry CTD – Charge transfer dissociation D DE – Delayed extraction DADI – Direct analysis of daughter ions DAPPI – Desorption atmospheric pressure photoionization DEP – Direct exposure probe DESI – Desorption electrospray ionization DIOS – Desorption/ionization on silicon DIP – Direct insertion probe DART – Direct analysis in real time DLI – Direct liquid introduction DIA – Data independent acquisition E EA – Electron affinity EAD – Electron-activated dissociation ECD – Electron-capture dissociation ECI – Electron capture ionization EDD – Electron-detachment dissociation EI – Electron ionization (or electron impact) EJMS – European Journal of Mass Spectrometry ESA – Electrostatic energy analyzer ES/ESI – Electrospray ionisation ETD – Electron-transfer dissociation eV – Electronvolt F FAIMS – High-field asymmetric waveform ion mobility spectrometry FAB – Fast atom bombardment FIB – Fast ion bombardment FD – Field desorption FFR – Field-free region FI – Field ionization FT-ICR MS – Fourier transform ion cyclotron resonance mass spectrometer FTMS – Fourier transform mass spectrometer G GDMS – Glow discharge mass spectrometry H HDX – Hydrogen/deuterium exchange HCD – Higher-energy C-trap dissociation I ICAT – Isotope-coded affinity tag ICP – Inductively coupled plasma ICRMS – Ion cyclotron resonance mass spectrometer IDMS – Isotope dilution mass spectrometry IJMS – International Journal of Mass Spectrometry IRMPD – Infrared multiphoton dissociation IKES – Ion kinetic energy spectrometry IMS – Ion mobility spectrometry IMSC – International Mass Spectrometry Conference IMSF – International Mass Spectrometry Foundation IRMS – Isotope ratio mass spectrometry IT – Ion trap ITMS – Ion trap mass spectrometry ITMS – Ion trap mobility spectrometry iTRAQ – Isobaric tag for relative and absolute quantitation J JASMS – Journal of the American Society for Mass Spectrometry JEOL – Japan Electro-Optics Laboratory JMS – Journal of Mass Spectrometry K KER – Kinetic energy release KERD – Kinetic energy release distribution L LCMS – Liquid chromatography–mass spectrometry LD – Laser desorption LDI – Laser desorption ionization LI – Laser ionization LMMS – Laser microprobe mass spectrometry LIT – Linear ion trap LSI – Liquid secondary ionization LSII – Laserspray ionization inlet M MIKES – Mass-analyzed ion kinetic energy spectrometry MS – Mass 
spectrometer MS – Mass spectrometry MS2 – Mass spectrometry/mass spectrometry, i.e. tandem mass spectrometry MS/MS – Mass spectrometry/mass spectrometry, i.e. tandem mass spectrometry MALDESI – Matrix-assisted laser desorption electrospray ionization MALDI – Matrix-assisted laser desorption/ionization MAII – Matrix-assisted inlet ionization MAIV – Matrix-assisted ionization vacuum MIMS – Membrane introduction mass spectrometry, membrane inlet mass spectrometry, membrane interface mass spectrometry MCP – Microchannel plate MSn – Multiple-stage mass spectrometry MCP – Microchannel plate MPI – Multiphoton ionization MRM – Multiple reaction monitoring N NEMS-MS – Nanoelectromechanical systems mass spectrometry NETD – Negative electron-transfer dissociation NICI – Negative ion chemical ionization NRMS – Neutralization reionization mass spectrometry O oa-TOF – Orthogonal acceleration time of flight OMS – Organic Mass Spectrometry (journal) P PDI – Plasma desorption/ionization PDMS – Plasma desorption mass spectrometry PAD – Post-acceleration detector PSD – Post-source decay PyMS – Pyrolysis mass spectrometry Q QUISTOR – Quadrupole ion storage trap QIT – Quadrupole ion trap QMS – Quadrupole mass spectrometer QTOF – Quadrupole time of flight R RCM – Rapid Communications in Mass Spectrometry REIMS – Rapid evaporative ionization mass spectrometry REMPI – Resonance enhanced multiphoton ionization RGA – Residual gas analyzer RI – Resonance ionization S SAII – Solvent-assisted ionization inlet SELDI – Surface-enhanced laser desorption/ionization SESI – Secondary electrospray ionization SHRIMP – Sensitive high-resolution ion microprobe SIFT – Selected ion flow tube SILAC – Stable isotope labelling by amino acids in cell culture SIM – Selected ion monitoring SIMS – Secondary ion mass spectrometry SIR – Selected ion recording SNMS – Secondary neutral mass spectrometry SRM – Selected reaction monitoring SWIFT – Stored waveform inverse Fourier transform SID – Surface-induced dissociation SIR – Surface-induced reaction SI – Surface ionization SORI – Sustained off-resonance irradiation T TI – Thermal ionization TIC – Total ion current TICC – Total ion current chromatogram TLF – Time-lag focusing TMT – Tandem mass tags TOF-MS – Time-of-flight mass spectrometer V VG – Vacuum Generators (company) References External links Mass Spectroscopy Acronym Page at MIT Mass spectrometry Mass spectrometry
List of mass spectrometry acronyms
Physics,Chemistry
1,425
65,418,263
https://en.wikipedia.org/wiki/Claudia%20Mazz%C3%A0
Claudia Mazzà is a professor of biomechanics at the Department of Mechanical Engineering at the University of Sheffield. Her research centres on biomechanics of human movement. She is the director of the EPSRC funded MultiSim project and a leading scientist in the Mobilise-D research project. Education and career Mazzà studied biomechanics at the University of Bologna where she completed her PhD in 2004. She then continued her research at the Department of Human Movement Sciences at the Foro Italico University of Rome where she was appointed assistant professor in 2006. She became a reader in Sheffield in 2013 and was promoted to professor in 2019. Research Mazzà's research encompasses a range of topics including biomechanics, gait analysis, techniques for human movement analysis and musculoskeletal modelling. This includes work on the influence of certain illnesses such as Parkinson's disease on the posture and motion of patients. Awards and honours Life Sciences Award – Suffrage Science awards 2020 References Year of birth missing (living people) Living people Bioengineers Women bioengineers University of Bologna alumni Academics of the University of Sheffield
Claudia Mazzà
Engineering,Biology
233
54,241,127
https://en.wikipedia.org/wiki/Hexafluorothioacetone
Hexafluorothioacetone is an organic perfluoro thione compound with formula CF3CSCF3. At standard conditions it is a blue gas. Production Hexafluorothioacetone was first produced by Middleton in 1961 by boiling bis-(perfluoroisopropyl)mercury with sulfur. Properties Hexafluorothioacetone boils at 8 °C. Below this it is a blue liquid. Colour The blue colour is due to absorption in the visible light range with bands at 800–675 nm and 725–400 nm. These bands are due to T1–S0 and S1–S0 transitions. There is also a strong absorption in the ultraviolet around 230–190 nm. Reactions Hexafluorothioacetone acts more like a true thiocarbonyl (C=S) than many other thiocarbonyl compounds, because it is not able to form thioenol compounds (=C-S-H), and the sulfur is not in a negative ionized state (C-S−). Hexafluorothioacetone is not attacked by water or oxygen at standard conditions, unlike many other thiocarbonyls. Bases, including amines, trigger the formation of a dimer, 2,2,4,4-tetrakis-(trifluoromethyl)-1,3-dithietane. The dimer can be heated to regenerate the hexafluorothioacetone monomer. The dimer is also produced in a reaction with hexafluoropropene and sulfur with some potassium fluoride. Hexafluorothioacetone reacts with bisulfite to form a Bunte salt CH(CF3)2SSO2−. Thiols reacting with hexafluorothioacetone yield disulfides or a dithiohemiketal: R-SH + C(CF3)2S → R-S-S-CH(CF3)2. R-SH + C(CF3)2S → RSC(CF3)2SH (for example in methanethiol or ethanethiol). With mercaptoacetic acid, instead of a thiohemiketal, water elimination yields a ring-shaped molecule called a dithiolanone -CH2C(O)SC(CF3)2S- (2,2-di(trifluoromethyl)-1,3-dithiolan-4-one). Aqueous hydrogen chloride results in the formation of a dimeric disulfide CH(CF3)2SSC(CF3)2Cl. Hydrogen bromide with water yields the similar CH(CF3)2SSC(CF3)2Br. Dry hydrogen iodide behaves differently and reduces the thione to CH(CF3)2SH. Wet hydrogen iodide only reduces to a disulfide CH(CF3)2SSC(CF3)2H. Strong organic acids add water to yield a disulfide compound CH(CF3)2SSC(CF3)2OH. Chlorine and bromine add to hexafluorothioacetone to make CCl(CF3)2SCl and CBr(CF3)2SBr. With diazomethane hexafluorothioacetone produces 2,2,5,5-tetrakis(trifluoromethyl)-1,3-dithiolane, another substituted dithiolane. Diphenyldiazomethane reacts to form a three-membered ring called a thiirane (di-2,2-trifluoromethyl-di-3,3-phenyl-thiirane). Trialkylphosphites (P(OR)3) react to make a trialkoxybis(trifluoromethyl)methylenephosphorane (RO)3P=C(CF3)2 and a thiophosphate (RO)3PS. Hexafluorothioacetone can act as a ligand on nickel. Hexafluorothioacetone is highly reactive toward alkenes and dienes, combining with them via addition reactions. With butadiene it reacts even at temperatures as low as −78 °C to yield 2,2-bis-(trifluoromethyl)-3,6-dihydro-2H-1-thiapyran. See also Hexafluoroacetone References External links Thioketones Perfluorinated compounds Trifluoromethyl compounds Gases with color
Hexafluorothioacetone
Chemistry
992
250,877
https://en.wikipedia.org/wiki/NGC%20604
NGC 604 is an H II region inside the Triangulum Galaxy. It was discovered by William Herschel on September 11, 1784. It is among the largest H II regions in the Local Group of galaxies; at the galaxy's estimated distance of 2.7 million light-years, its longest diameter is roughly 1,520 light years (~460 parsecs), over 40 times the size of the visible portion of the Orion Nebula. It is over 6,300 times more luminous than the Orion Nebula, and if it were at the same distance it would outshine Venus. Its gas is ionized by a cluster of massive stars at its center with 200 stars of spectral type O and WR, a mass of 105 solar masses, and an age of 3.5 million years; however, unlike the Large Magellanic Cloud's Tarantula Nebula central cluster (R136), NGC 604's one is much less compact and more similar to a large stellar association. See also Tarantula Nebula List of largest nebulae References Some data in the table was updated from Sue French's column "Deep-sky Wonders", in the January 2006 issue of Sky & Telescope, p. 83. External links Nebula NGC 604 @ SEDS Messier pages H II regions Triangulum 0604 Triangulum Galaxy 17840911 Star-forming regions
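As a back-of-the-envelope illustration of the figures quoted above, the small-angle relation θ ≈ diameter / distance gives the region's approximate apparent size on the sky. The snippet below simply carries out that arithmetic with the values given in this article; it is an illustrative calculation, not a sourced measurement.

```python
import math

def angular_size_arcmin(diameter_ly, distance_ly):
    """Small-angle approximation: theta (radians) ~= diameter / distance."""
    theta_rad = diameter_ly / distance_ly
    return math.degrees(theta_rad) * 60  # radians -> degrees -> arcminutes

# Values quoted in the article: roughly 1,520 ly across at about 2.7 million ly.
print(f"{angular_size_arcmin(1520, 2.7e6):.1f} arcminutes")
```

With the quoted numbers this comes out to roughly 1.9 arcminutes.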
NGC 604
Astronomy
283
206,586
https://en.wikipedia.org/wiki/Technological%20convergence
Technological convergence is the tendency for technologies that were originally unrelated to become more closely integrated and even unified as they develop and advance. For example, watches, telephones, television, computers, and social media platforms began as separate and mostly unrelated technologies, but have converged in many ways into an interrelated telecommunication, media, and technology industry. Definitions "Convergence is a deep integration of knowledge, tools, and all relevant activities of human activity for a common goal, to allow society to answer new questions to change the respective physical or social ecosystem. Such changes in the respective ecosystem open new trends, pathways, and opportunities in the following divergent phase of the process". Siddhartha Menon defines convergence as integration and digitalization. Integration, here, is defined as "a process of transformation measure by the degree to which diverse media such as phone, data broadcast and information technology infrastructures are combined into a single seamless all purpose network architecture platform". Digitalization is not so much defined by its physical infrastructure, but by the content or the medium. Jan van Dijk suggests that "digitalization means breaking down signals into bytes consisting of ones and zeros". Convergence is defined by Blackman (1998) as a trend in the evolution of technology services and industry structures. Convergence is later defined more specifically as the coming together of telecommunications, computing and broadcasting into a single digital bit-stream. Mueller stands against the statement that convergence is really a takeover of all forms of media by one technology: digital computers. Acronyms Some acronyms for converging scientific or technological fields include: NBIC (Nanotechnology, Biotechnology, Information technology and Cognitive science) GNR (Genetics, Nanotechnology and Robotics) GRIN (Genetics, Robotics, Information, and Nano processes) GRAIN (Genetics, Robotics, Artificial Intelligence, and Nanotechnology) BANG (Bits, Atoms, Neurons, Genes) Biotechnology A 2010 citation analysis of patent data shows that biomedical devices are strongly connected to computing and mobile telecommunications, and that molecular bioengineering is strongly connected to several IT fields. Bioconvergence is the integration of biology with engineering. Possible areas of bioconvergence include: Materials inspired by biology (such as in electronics) DNA data storage Medical technologies: Omics-based profiling Miniaturized drug delivery Tissue reconstruction Traceable pharmaceutical packaging More efficient bioreactors Digital convergence Digital convergence is the inclination for various digital innovations and media to become more similar with time. It enables the convergence of access devices and content as well as the industry participant operations and strategy. This is how this type of technological convergence creates opportunities, particularly in the area of product development and growth strategies for digital product companies. The same can be said in the case of individual content creators, such as vloggers on YouTube. The convergence in this example is demonstrated in the involvement of the Internet, home devices such as smart television, camera, the YouTube application, and digital content. In this setup, there are the so-called "spokes", which are the devices that connect to a central hub (such as a PC or smart TV). 
Here, the Internet serves as the intermediary, particularly through its interactivity tools and social networking, in order to create unique mixes of products and services via horizontal integration. The above example highlights how digital convergence encompasses three phenomena: previously stand-alone devices are being connected by networks and software, significantly enhancing functionalities; previously stand-alone products are being converged onto the same platform, creating hybrid products in the process; and, companies are crossing traditional boundaries such as hardware and software to provide new products and new sources of competition. Another example is the convergence of different types of digital contents. According to Harry Strasser, former CTO of Siemens "[digital convergence will substantially impact people's lifestyle and work style]". Cellphones The functions of the cellphone changes as technology converges. Because of technological advancement, a cellphone functions as more than just a phone: it can also contain an Internet connection, video players, MP3 players, gaming, and a camera. Their areas of use have increased over time, partly substituting for other devices. A mobile convergence device is one that, if connected to a keyboard, monitor, and mouse, can run applications as a desktop computer would. Convergent operating systems include the Linux operating systems Ubuntu Touch, Plasma Mobile and PureOS. Convergence can also refer to being able to run the same app across different devices and being able to develop apps for different devices (such as smartphones, TVs and desktop computers) at once, with the same code base. This can be done via Linux applications that adapt to the device they are being used on (including native apps designed for such via frameworks like Kirigami) or by the use of multi-platform frameworks like the Quasar framework that use tools such as Apache Cordova, Electron and Capacitor, which can increase the userbase, the pace and ease of development and the number of reached platforms while decreasing development costs. The Internet The role of the Internet has changed from its original use as a communication tool to easier and faster access to information and services, mainly through a broadband connection. The television, radio and newspapers were the world's media for accessing news and entertainment; now, all three media have converged into one, and people all over the world can read and hear news and other information on the Internet. The convergence of the Internet and conventional TV became popular in the 2010s, through Smart TV, also sometimes referred to as "Connected TV" or "Hybrid TV", (not to be confused with IPTV, Internet TV, or with Web TV). Smart TV is used to describe the current trend of integration of the Internet and Web 2.0 features into modern television sets and set-top boxes, as well as the technological convergence between computers and these television sets or set-top boxes. These new devices most often also have a much higher focus on online interactive media, Internet TV, over-the-top content, as well as on-demand streaming media, and less focus on traditional broadcast media like previous generations of television sets and set-top boxes always have had. Social movements The integration of social movements in cyberspace is one of the potential strategies that social movements can use in the age of media convergence. 
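The "one code base, many devices" idea described in the Cellphones discussion above can be sketched very simply. The example below is not taken from Ubuntu Touch, Plasma Mobile, Quasar, or any other framework named in this article; it is only a hedged illustration, using the Python standard library, of a single program adapting its presentation to the device class it detects at run time. The device-classification rule and its thresholds are entirely hypothetical.

```python
import platform
import shutil

def detect_device_class():
    """Very rough, hypothetical classification of the host device."""
    system = platform.system()                    # e.g. 'Linux', 'Darwin', 'Windows'
    columns = shutil.get_terminal_size().columns  # stand-in for available screen width
    if system == "Linux" and columns < 60:
        return "phone-like"    # narrow display: phone or embedded device
    if columns > 160:
        return "tv-like"       # very wide display: TV or large monitor
    return "desktop-like"

def render(content, device_class):
    """Same content, different presentation per device class."""
    if device_class == "phone-like":
        return content[:40] + "..."  # compact view
    if device_class == "tv-like":
        return content.upper()       # large, simplified view
    return content                   # full desktop view

if __name__ == "__main__":
    device = detect_device_class()
    print(device, "->", render("Convergent apps share one code base across devices.", device))
```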
Because of the neutrality of the Internet and its end-to-end design, the power structure of the Internet was designed to avoid discrimination between applications. Mexico's Zapatista campaign for land rights was one of the most influential cases in the information age; Manuel Castells defines the Zapatistas as "the first informational guerrilla movement". The Zapatista uprising had been marginalized by the popular press. The Zapatistas were able to construct a grassroots, decentralized social movement by using the Internet. The Zapatista Effect, observed by Cleaver, continues to organize social movements on a global scale. A sophisticated webmetric analysis, which maps the links between different websites and seeks to identify important nodal points in a network, demonstrates that the Zapatista cause binds together hundreds of global NGOs. The majority of the social movements organized by the Zapatistas direct their campaigns especially against global neoliberalism. A successful social movement needs not only online support but also protest on the street. Papic wrote "Social Media Alone Do Not Instigate Revolutions", which discusses how the use of social media in social movements needs good organization both online and offline. Media Media technological convergence is the tendency that as technology changes, different technological systems sometimes evolve toward performing similar tasks. It is the interlinking of computing and other information technologies, media content, media companies and communication networks that have arisen as the result of the evolution and popularization of the Internet as well as the activities, products and services that have emerged in the digital media space. Generally, media convergence refers to the merging of both old and new media and can be seen as a product, a system or a process. Jenkins states that convergence is "the flow of content across multiple media platforms, the cooperation between multiple media industries, and the migratory behaviour of media audiences who would go almost anywhere in search of the kinds of entertainment experiences they wanted". According to Jenkins, there are five areas of convergence: technological, economic, social or organic, cultural, and global. Media convergence is not just a technological shift or a technological process; it also includes shifts within the industrial, cultural, and social paradigms that encourage the consumer to seek out new information. Convergence, simply put, is how individual consumers interact with others on a social level and use various media platforms to create new experiences, new forms of media and content that connect us socially, and not just to other consumers, but to the corporate producers of media in ways that have not been as readily accessible in the past. However, Lugmayr and Dal Zotto argued that media convergence takes place at the technology, content, consumer, business model, and management levels. They argue that media convergence is a matter of evolution and can be described through the triadic phenomena of convergence, divergence, and coexistence. Today's digital media ecosystems coexist, as, for example, mobile app stores provide vendor lock-in into particular ecosystems; some technology platforms are converging under one technology, due to, for example, the usage of common communication protocols as in digital TV; and other media are diverging, as, for example, media content offerings are becoming more and more specialized and provide a space for niche media. 
Closely linked to the multilevel process of media convergence are also several developments in different areas of the media and communication sector which are also summarized under the term of media deconvergence. Many experts view this as simply being the tip of the iceberg, as all facets of institutional activity and social life such as business, government, art, journalism, health, and education, are increasingly being carried out in these digital media spaces across a growing network of information and communication technology devices. Also included in this topic is the basis of computer networks, wherein many different operating systems are able to communicate via different protocols. Convergent services, such as VoIP, IPTV, Smart TV, and others, tend to replace the older technologies and thus can disrupt markets. IP-based convergence is inevitable and will result in new service and new demand in the market. When the old technology converges into the public-owned common, IP based services become access-independent or less dependent. The old service is access-dependent. Advances in technology bring the ability for technological convergence that Rheingold believes can alter the "social-side effects," in that "the virtual, social and physical world are colliding, merging and coordinating." It was predicted in the late 1980s, around the time that CD-ROM was becoming commonplace, that a digital revolution would take place, and that old media would be pushed to one side by new media. Broadcasting is increasingly being replaced by the Internet, enabling consumers all over the world the freedom to access their preferred media content more easily and at a more available rate than ever before. However, when the dot-com bubble of the 1990s suddenly popped, that poured cold water over the talk of such a digital revolution. In today's society, the idea of media convergence has once again emerged as a key point of reference as newer as well as established media companies attempt to visualize the future of the entertainment industry. If this revolutionary digital paradigm shift presumed that old media would be increasingly replaced by new media, the convergence paradigm that is currently emerging suggests that new and old media would interact in more complex ways than previously predicted. The paradigm shift that followed the digital revolution assumed that new media was going to change everything. When the dot com market crashed, there was a tendency to imagine that nothing had changed. The real truth lay somewhere in between as there were so many aspects of the current media environment to take into consideration. Many industry leaders are increasingly reverting to media convergence as a way of making sense in an era of disorientating change. In that respect, media convergence in theory is essentially an old concept taking on a new meaning. Media convergence, in reality, is more than just a shift in technology. It alters relationships between industries, technologies, audiences, genres and markets. Media convergence changes the rationality media industries operate in, and the way that media consumers process news and entertainment. Media convergence is essentially a process and not an outcome, so no single black box controls the flow of media. With proliferation of different media channels and increasing portability of new telecommunications and computing technologies, we have entered into an era where media constantly surrounds us. 
Media convergence requires that media companies rethink existing assumptions about media from the consumer's point of view, as these affect marketing and programming decisions. Media producers must respond to newly empowered consumers. Conversely, it would seem that hardware is instead diverging whilst media content is converging. Media has developed into brands that can offer content in a number of forms. Two examples of this are Star Wars and The Matrix. Both are films, but are also books, video games, cartoons, and action figures. Branding encourages expansion of one concept, rather than the creation of new ideas. In contrast, hardware has diversified to accommodate media convergence. Hardware must be specific to each function. While most scholars argue that the flow of content across media is accelerating, especially between films and video games, O'Donnell suggests that the semblance of media convergence is misunderstood by people outside of the media production industry. The conglomerated media industry continues to sell the same story line in different media. For example, Batman is in comics, films, anime, and games. However, the data used to create the image of Batman in each medium is created individually by different teams of creators. The same character and the same visual effects appear repeatedly in different media because of the synergy of the media industry, which aims to make them as similar as possible. In addition, convergence does not happen when a game is produced for two different consoles: little flows between the two versions, because it is faster for the industry to create each game from scratch. One of the more interesting new media journalism forms is virtual reality. Reuters, a major international news service, has created and staffed a news "island" in the popular online virtual reality environment Second Life. Open to anyone, Second Life has emerged as a compelling 3D virtual reality for millions of citizens around the world who have created avatars (virtual representations of themselves) to populate and live in an altered state where personal flight is a reality, alter egos can flourish, and real money ( were spent during the 24 hours concluding at 10:19 a.m. eastern time January 7, 2008) can be made without ever setting foot into the real world. The Reuters Island in Second Life is a virtual version of the Reuters real-world news service but covering the domain of Second Life for the citizens of Second Life (numbering 11,807,742 residents as of January 5, 2008). Media convergence in the digital era means the changes that are taking place with older forms of media and media companies. Media convergence has two roles: the first is the technological merging of different media channels – for example, magazines, radio programs, TV shows, and movies are now available on the Internet through laptops, iPads, and smartphones. As discussed in Media Culture (by Campbell), convergence of technology is not new. It has been going on since the late 1920s. An example is RCA, the Radio Corporation of America, which purchased Victor Talking Machine Company and introduced machines that could receive radio and play recorded music. Next came the TV, and radio lost some of its appeal as people started watching television, which has both talking and music as well as visuals. As technology advances, media convergence changes to keep up. The second definition of media convergence Campbell discusses is cross-platform consolidation by media companies. 
This usually involves consolidating various media holdings, such as cable, phone, television (over the air, satellite, cable) and Internet access under one corporate umbrella. This is not for the consumer to have more media choices, this is for the benefit of the company to cut down on costs and maximize its profits. As stated in the article Convergence Culture and Media Work by Mark Deuze, “the convergence of production and consumption of media across companies, channels, genres, and technologies is an expression of the convergence of all aspects of everyday life: work and play, the local and the global, self and social identity." History Communication networks were designed to carry different types of information independently. The older media, such as television and radio, are broadcasting networks with passive audiences. Convergence of telecommunication technology permits the manipulation of all forms of information, voice, data, and video. Telecommunication has changed from a world of scarcity to one of seemingly limitless capacity. Consequently, the possibility of audience interactivity morphs the passive audience into an engaged audience. The historical roots of convergence can be traced back to the emergence of mobile telephony and the Internet, although the term properly applies only from the point in marketing history when fixed and mobile telephony began to be offered by operators as joined products. Fixed and mobile operators were, for most of the 1990s, independent companies. Even when the same organization marketed both products, these were sold and serviced independently. In the 1990s, an implicit and often explicit assumption was that new media was going to replace the old media and Internet was going to replace broadcasting. In Nicholas Negroponte's Being Digital, Negroponte predicts the collapse of broadcast networks in favor of an era of narrow-casting. He also suggests that no government regulation can shatter the media conglomerate. "The monolithic empires of mass media are dissolving into an array of cottage industries... Media barons of today will be grasping to hold onto their centralized empires tomorrow.... The combined forces of technology and human nature will ultimately take a stronger hand in plurality than any laws Congress can invent." The new media companies claimed that the old media would be absorbed fully and completely into the orbit of the emerging technologies. George Gilder dismisses such claims saying, "The computer industry is converging with the television industry in the same sense that the automobile converged with the horse, the TV converged with the nickelodeon, the word-processing program converged with the typewriter, the CAD program converged with the drafting board, and digital desktop publishing converged with the Linotype machine and the letterpress." Gilder believes that computers had come not to transform mass culture but to destroy it. Media companies put media convergence back to their agenda after the dot-com bubble burst. In 1994, Knight Ridder promulgated the concept of portable magazines, newspaper, and books: "Within news corporations it became increasingly obvious that an editorial model based on mere replication in the Internet of contents that had previously been written for print newspapers, radio, or television was no longer sufficient." 
The rise of digital communication in the late 20th century has made it possible for media organizations (or individuals) to deliver text, audio, and video material over the same wired, wireless, or fiber-optic connections. At the same time, it inspired some media organizations to explore multimedia delivery of information. This digital convergence of news media, in particular, was called "Mediamorphosis" by researcher Roger Fidler in his 1997 book by that name. Today, we are surrounded by a multi-level convergent media world where all modes of communication and information are continually reforming to adapt to the enduring demands of technologies, "changing the way we create, consume, learn and interact with each other". Convergence culture Henry Jenkins determines convergence culture to be the flow of content across multiple media platforms, the cooperation between multiple media industries, and the migratory behavior of media audiences who will go almost anywhere in search of the kinds of entertainment experiences they want. The convergence culture is an important factor in transmedia storytelling. Convergence culture introduces new stories and arguments from one form of media into many. Transmedia storytelling is defined by Jenkins as a process "where integral elements of a fiction get dispersed systematically across multiple delivery channels for the purpose of creating a unified and coordinated entertainment experience. Ideally, each medium makes its own unique contribution to the unfolding of the story". For instance, The Matrix starts as a film, which is followed by two other instalments, but in a convergence culture it is not constrained to that form. It becomes a story not only told in the movies but in animated shorts, video games and comic books, three different media platforms. Online, a wiki is created to keep track of the story's expanding canon. Fan films, discussion forums, and social media pages also form, expanding The Matrix to different online platforms. Convergence culture took what started as a film and expanded it across almost every type of media. Bert is Evil (images) Bert and Bin Laden appeared in CNN coverage of anti-American protest following September 11. The association of Bert and Bin Laden links back to the Ignacio's Photoshop project for fun. Convergence culture is a part of participatory culture. Because average people can now access their interests on many types of media they can also have more of a say. Fans and consumers are able to participate in the creation and circulation of new content. Some companies take advantage of this and search for feedback from their customers through social media and sharing sites such as YouTube. Besides marketing and entertainment, convergence culture has also affected the way we interact with news and information. We can access news on multiple levels of media from the radio, TV, newspapers, and the Internet. The Internet allows more people to be able to report the news through independent broadcasts and therefore allows a multitude of perspectives to be put forward and accessed by people in many different areas. Convergence allows news to be gathered on a much larger scale. For instance, photographs were taken of torture at Abu Ghraib. These photos were shared and eventually posted on the Internet. This led to the breaking of a news story in newspapers, on TV, and the Internet. 
Media scholar Henry Jenkins has described the media convergence with participatory culture as: Appliances Some media observers expect that we will eventually access all media content through one device, or "black box". As such, media business practice has been to identify the next "black box" to invest in and provide media for. This has caused a number of problems. Firstly, as "black boxes" are invented and abandoned, the individual is left with numerous devices that can perform the same task, rather than one dedicated for each task. For example, one may own both a computer and a video games console, subsequently owning two DVD players. This is contrary to the streamlined goal of the "black box" theory, and instead creates clutter. Secondly, technological convergence tends to be experimental in nature. This has led to consumers owning technologies with additional functions that are harder, if not impractical, to use rather than one specific device. Many people would only watch the TV for the duration of the meal's cooking time, or whilst in the kitchen, but would not use the microwave as the household TV. These examples show that in many cases technological convergence is unnecessary or unneeded. Furthermore, although consumers primarily use a specialized media device for their needs, other "black box" devices that perform the same task can be used to suit their current situation. As a 2002 Cheskin Research report explained: "...Your email needs and expectations are different whether you're at home, work, school, commuting, the airport, etc., and these different devices are designed to suit your needs for accessing content depending on where you are- your situated context." Despite the creation of "black boxes", intended to perform all tasks, the trend is to use devices that can suit the consumer's physical position. Due to the variable utility of portable technology, convergence occurs in high-end mobile devices. They incorporate multimedia services, GPS, Internet access, and mobile telephony into a single device, heralding the rise of what has been termed the "smartphone," a device designed to remove the need to carry multiple devices. Convergence of media occurs when multiple products come together to form one product with the advantages of all of them, also known as the black box. This idea of one technology, concocted by Henry Jenkins, has become known more as a fallacy because of the inability to actually put all technical pieces into one. For example, while people can have email and Internet on their phone, they still want full computers with Internet and email in addition. Mobile phones are a good example, in that they incorporate digital cameras, MP3 players, voice recorders, and other devices. For the consumer, it means more features in less space; for media conglomerates it means remaining competitive. However, convergence has a downside. Particularly in initial forms, converged devices are frequently less functional and reliable than their component parts (e.g., a mobile phone's web browser may not render some web pages correctly, due to not supporting certain rendering methods, such as the iPhone browser not supporting Flash content). As the number of functions in a single device escalates, the ability of that device to serve its original function decreases. As Rheingold asserts, technological convergence holds immense potential for the "improvement of life and liberty in some ways and (could) degrade it in others". 
He believes the same technology has the potential to be "used as both a weapon of social control and a means of resistance". Since technology has evolved in the past ten years or so, companies are beginning to converge technologies to create demand for new products. This includes phone companies integrating 3G and 4G on their phones. In the mid 20th century, television converged the technologies of movies and radio, and television is now being converged with the mobile phone industry and the Internet. Phone calls are also being made with the use of personal computers. Converging technologies combine multiple technologies into one. Newer mobile phones feature cameras, and can hold images, videos, music, and other media. Manufacturers now integrate more advanced features, such as video recording, GPS receivers, data storage, and security mechanisms into the traditional cellphone. Telecommunications Telecommunications convergence or network convergence describes emerging telecommunications technologies, and network architecture used to migrate multiple communications services into a single network. Specifically, this involves the converging of previously distinct media such as telephony and data communications into common interfaces on single devices, such as most smart phones can make phone calls and search the web. Messaging Combination services include those that integrate SMS with voice, such as voice SMS. Providers include Bubble Motion, Jott, Kirusa, and SpinVox. Several operators have launched services that combine SMS with mobile instant messaging (MIM) and presence. Text-to-landline services also exist, where subscribers can send text messages to any landline phone and are charged at standard rates. The text messages are converted into spoken language. This service has been popular in America, where fixed and mobile numbers are similar. Inbound SMS has been converging to enable reception of different formats (SMS, voice, MMS, etc.). In April 2008, O2 UK launched voice-enabled shortcodes, adding voice functionality to the five-digit codes already used for SMS. This type of convergence is helpful for media companies, broadcasters, enterprises, call centres and help desks who need to develop a consistent contact strategy with the consumer. Because SMS is very popular today, it became relevant to include text messaging as a contact possibility for consumers. To avoid having multiple numbers (one for voice calls, another one for SMS), a simple way is to merge the reception of both formats under one number. This means that a consumer can text or call one number and be sure that the message will be received. Mobile "Mobile service provisions" refers not only to the ability to purchase mobile phone services, but the ability to wirelessly access everything: voice, Internet, audio, and video. Advancements in WiMAX and other leading edge technologies provide the ability to transfer information over a wireless link at a variety of speeds, distances, and non-line-of-sight conditions. Multi-play Multi-play is a marketing term describing the provision of different telecommunication services, such as Internet access, television, telephone, and mobile phone service, by organizations that traditionally only offered one or two of these services. Multi-play is a catch-all phrase; usually, the terms triple play (voice, video and data) or quadruple play (voice, video, data and wireless) are used to describe a more specific meaning. 
A dual play service is a marketing term for the provisioning of two services: it can be high-speed Internet (digital subscriber line) and telephone service over a single broadband connection in the case of phone companies, or high-speed Internet (cable modem) and TV service over a single broadband connection in the case of cable TV companies. The convergence can also concern the underlying communication infrastructure. An example of this is a triple play service, where communication services are packaged allowing consumers to purchase TV, Internet, and telephony in one subscription. The broadband cable market is transforming as pay-TV providers move aggressively into what was once considered the telco space. Meanwhile, customer expectations have risen as consumer and business customers alike seek rich content, multi-use devices, networked products and converged services including on-demand video, digital TV, high speed Internet, VoIP, and wireless applications. It is uncharted territory for most broadband companies. A quadruple play service combines the triple play service of broadband Internet access, television, and telephone with wireless service provisions. This service set is also sometimes humorously referred to as "The Fantastic Four" or "Grand Slam". A fundamental aspect of the quadruple play is not only the long-awaited broadband convergence but also the players involved. Many of them, from the largest global service providers to whom we connect today via wires and cables to the smallest of startup service providers, are interested. Opportunities are attractive: the big three telecom services – telephony, cable television, and wireless – could combine their industries. In the UK, the merger of NTL:Telewest and Virgin Mobile resulted in a company offering a quadruple play of cable television, broadband Internet, home telephone, and mobile telephone services. Home network Early in the 21st century, home LAN convergence so rapidly integrated home routers, wireless access points, and DSL modems that users were hard put to identify the resulting box they used to connect their computers to their Internet service. A general term for such a combined device is a residential gateway. VoIP The U.S. Federal Communications Commission (FCC) has not been able to decide how to regulate VoIP (Internet telephony) because the convergent technology is still growing and changing. In light of this growth, the FCC has been tentative about setting regulations on VoIP in order to promote competition in the telecommunication industry. There is no clear line between telecommunication services and information services because of the growth of new convergent media. Historically, telecommunication has been subject to state regulation. The state of California is concerned that the increasing popularity of Internet telephony will eventually obliterate funding for the Universal Service Fund. Some states attempt to assert their traditional role of common carrier oversight onto this new technology. Meisel and Needles (2005) suggest that decisions by the FCC, federal courts, and state regulatory bodies on access line charges will directly affect the speed at which the Internet telephony market grows. On one hand, the FCC is hesitant to regulate convergent technology because VoIP has different features from older telecommunication services and there is as yet no fixed model on which to build legislation. On the other hand, regulation is needed because services over the Internet might quickly replace traditional telecommunication services, which would affect the entire economy. 
Convergence has also raised several debates about the classification of certain telecommunications services. As the lines between data transmission and voice and media transmission are eroded, regulators are faced with the task of how best to classify the converging segments of the telecommunication sector. Traditionally, telecommunication regulation has focused on the operation of physical infrastructure, networks, and access to networks. Content is not regulated in telecommunications because it is considered private. In contrast, film and television are regulated by content. The rating system regulates its distribution to the audience. Self-regulation is promoted by the industry. Bogle senior persuaded the entire industry to pay a 0.1 percent levy on all advertising, and the money was used to give authority to the Advertising Standards Authority, which keeps the government from setting legislation for the media industry. The premise for regulating the new media, two-way communication, concerns the change from old media to new media. Each medium has different features and characteristics. First, the Internet, the new medium, manipulates all forms of information – voice, data, and video. Second, the old regulation of old media, such as radio and television, was premised on the scarcity of channels. The Internet, on the other hand, has effectively limitless capacity, due to its end-to-end design. Third, two-way communication encourages interactivity between content producers and audiences. "...Fundamental basis for classification, therefore, is to consider the need for regulation in terms of either market failure or in the public interests" (Blackman). The Electronic Frontier Foundation, founded in 1990, is a nonprofit organization that defends free speech, privacy, innovation, and consumer rights. The Digital Millennium Copyright Act regulates and protects digital content producers and consumers. Trends Network neutrality is an issue. Wu and Lessig set out two reasons for network neutrality: firstly, by removing the risk of future discrimination, it incentivizes people to invest more in the development of broadband applications; secondly, it enables fair competition between applications without network bias. The two reasons also coincide with the FCC's interest in stimulating investment and enhancing innovation in broadband technology and services. Despite regulatory efforts of deregulation, privatization, and liberalization, the infrastructure barrier has been a negative factor in achieving effective competition. Kim et al. argue that IP dissociates the telephony application from the infrastructure, and that Internet telephony is at the forefront of such dissociation. The neutrality of the network is very important for fair competition. As the former FCC Chairman Michael Copps put it: "From its inception, the Internet was designed, as those present during the course of its creating will tell you, to prevent government or a corporation or anyone else from controlling it. It was designed to defeat discrimination against users, ideas and technologies". For these reasons, Shin concludes that regulators should regulate applications and infrastructure separately. The layered model was first proposed by Solum and Chung, Sicker, and Nakahata. Sicker, Werbach, and Witt have supported using a layered model to regulate the telecommunications industry with the emergence of convergence services. 
Many researchers have different layered approach, but they all agree that the emergence of convergent technology will create challenges and ambiguities for regulations. The key point of the layered model is that it reflects the reality of network architecture, and current business model. The layered model consists of: Access layer – where the physical infrastructure resides: copper wires, cable, or fiber optic. Transport layer – the provider of service. Application layer – the interface between the data and the users. Content layer – the layer which users see. Shin combines the layered model and network neutrality as the principle to regulate the convergent media industry. Robotics Medical applications of robotics have become increasingly prominent in the robotics literature. The use of robots in service sectors is much less than the use of robots in manufacturing. See also Computer multitasking (the software equivalent of a converged device) Dongle (can facilitate inclusion of non-converged devices) Digital rhetoric Generic Access Network History of science and technology UMA Today IP Multimedia Subsystem (IMS) Mobile VoIP Next Generation Networks Next generation network services Post-convergent Second screen References Bibliography Further reading External links Amdocs MultiPlay Strategy WhitePaper Technology Convergence Update with Bob Brown – Video Crossover devices Digital media History of telecommunications Science and technology studies Technological change Telecommunications systems Technology systems
Technological convergence
Technology,Engineering
7,436
2,008,426
https://en.wikipedia.org/wiki/Information%20systems%20technician
An information systems technician is a technician whose responsibility is maintaining communications and computer systems. Description Information systems technicians operate and maintain information systems, facilitating system utilization. In many companies, these technicians assemble data sets and other details needed to build databases. This includes data management, procedure writing, writing job setup instructions, and performing program librarian functions. Information systems technicians assist in designing and coordinating the development of integrated information system databases. Information systems technicians also help maintain Internet and intranet websites. They decide how information is presented and create digital multimedia and presentations using software and related equipment. Information systems technicians install and maintain multi-platform networking computer environments, a variety of data networks, and a diverse set of telecommunications infrastructures. Information systems technicians schedule information gathering for content in a multiple-system environment. Information systems technicians are responsible for the operation, programming, and configuration of many pieces of electronics, hardware, and software. ITs are also often tasked to investigate, troubleshoot, and resolve end-user problems. Information systems technicians conduct ongoing assessments of short- and long-term hardware and software needs for companies, developing, testing, and implementing new and revised programs. Information systems technicians cooperate with other staff to inventory, maintain, and manage computer and communication systems. Information systems technicians provide communication links and connectivity to departments in an organization, and carry out equipment modification and installation tasks. This includes: local area networks: computer networks covering a local area, such as a home, office, or small group of buildings such as a college. wide area networks: computer networks covering a wide geographical area, involving a vast array of computers. minicomputer systems: multi-user computers that make up the middle range of the computing spectrum, between large multi-user systems and single-user systems (such as personal computers). Macro-computer systems: usually large multi-user systems (such as mainframe computers) for bulk data processing such as censuses, industry/consumer statistics, ERP, and bank transaction processing. associated peripheral devices Additionally, information systems technicians can conduct training and provide technical support to end-users, providing this for departments (sometimes across multiple organizations). See also Computer repair technician References External links "Information Systems Technician Series", California State Personnel Board. June 28, 1972. (Archived from the original, July 31, 2012) Computer occupations People in information technology Technicians
Information systems technician
Technology
464
28,891,262
https://en.wikipedia.org/wiki/Pendular%20water
Pendular water is the moisture clinging to particles, such as soil particles or sand, because of surface tension. At the moisture content of a specific yield, gravity drainage will cease. This term relates to hydrology and groundwater flow. Notes External links http://www.avalanche-center.org/Education/glossary/pendular-regime.php Hydrology
Pendular water
Chemistry,Engineering,Environmental_science
78
63,384,990
https://en.wikipedia.org/wiki/Mac%20OS%20Gujarati
Mac OS Gujarati is a character set developed by Apple Inc. It is an extension of the Gujarati portion of IS 13194:1991 (ISCII-91). Code page layout The following table shows the Mac OS Gujarati encoding. Each character is shown with its equivalent Unicode code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as Mac OS Roman. Byte pairs and ISCII-related features are described in the mapping file. References Character sets Gujarati
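As an illustration of how a single-byte mapping table like this is typically applied, the following Perl sketch decodes Mac OS Gujarati bytes by table lookup. It is a minimal sketch only: the two high-byte entries shown are placeholders rather than values taken from Apple's mapping file, and the helper name decode_mac_gujarati is invented for this example.

```perl
use strict;
use warnings;
binmode STDOUT, ':encoding(UTF-8)';

# Hypothetical, partial byte-to-Unicode table for the high half (0x80-0xFF).
# Real values come from Apple's published mapping file; these are placeholders.
my %HIGH = (
    0xA1 => 0x0A81,   # placeholder entry, not the documented mapping
    0xA2 => 0x0A82,   # placeholder entry, not the documented mapping
);

# Decode a string of Mac OS Gujarati bytes into a Perl character string.
sub decode_mac_gujarati {
    my ($bytes) = @_;
    my $out = '';
    for my $b (map { ord } split //, $bytes) {
        if ($b < 0x80) {
            $out .= chr($b);     # low half: same as Mac OS Roman (ASCII range)
        } else {
            # high half: table lookup, U+FFFD for unmapped bytes
            $out .= exists $HIGH{$b} ? chr($HIGH{$b}) : "\x{FFFD}";
        }
    }
    return $out;
}

print decode_mac_gujarati("abc\xA1\xA2"), "\n";
```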
Mac OS Gujarati
Technology
116
63,392,468
https://en.wikipedia.org/wiki/Abule-Ado%20explosion
The Abule-Ado explosion was an accidental explosion and fire that occurred in the Abule-Ado area around Festac Town, Amuwo Odofin Local Government Area, Lagos State, Nigeria. The explosion and fire started around 9 am on Sunday 15 March 2020; the fire was extinguished around 11 pm. According to the Nigerian National Petroleum Corporation (NNPC), the explosion and fire were caused when a truck rammed into gas cylinders stacked in a gas processing plant near a vandalised petroleum gas pipeline. According to the Lagos State Government, 276,000 people were displaced. The Nigerian National Emergency Management Agency announced that, as of 15 March 2020, there were 23 casualties and 25 injured persons, with 50 houses destroyed. These included students and facilities at Bethlehem Girls College, Abule-Ado, which was destroyed. The school's principal, Henrietta Alokha, was killed while trying to save her students from the inferno. On 16 March 2020, the Lagos State government, led by Babajide Sanwo-Olu, created a relief fund for the victims of the explosion: a 2 billion naira emergency fund, with the state government donating 250 million naira at its inception. References 2020 disasters in Nigeria 2020 fires in Africa 2020s in Lagos State Explosions in 2020 Explosions in Nigeria Fires in Nigeria March 2020 events in Nigeria Disasters in Lagos State 2020 road incidents in Africa Road incidents in Nigeria Industrial fires and explosions 2020 industrial disasters
Abule-Ado explosion
Chemistry
314
4,427,299
https://en.wikipedia.org/wiki/1728%20%28number%29
1728 is the natural number following 1727 and preceding 1729. It is a dozen gross, or one great gross (or grand gross). It is also the number of cubic inches in a cubic foot. In mathematics 1728 is the cube of 12, and therefore equal to the product of the six divisors of 12 (1, 2, 3, 4, 6, 12). It is also the product of the first four composite numbers (4, 6, 8, and 9), which makes it a compositorial. As a cubic perfect power, it is also a highly powerful number: the product of the exponents (6 and 3) in its prime factorization 1728 = 2⁶ · 3³ is 18, a record value. It is also a Jordan–Pólya number, since it is a product of factorials: 1728 = (3!)³ · (2!)³. 1728 has twenty-eight divisors, which is a perfect number count (as with 12, which has six divisors). It also has an Euler totient of 576 = 24², which divides 1728 three times over (3 × 576 = 1728). 1728 is an abundant and semiperfect number, as it is smaller than the sum of its proper divisors yet equal to the sum of a subset of its proper divisors. It is a practical number, as each smaller number is the sum of distinct divisors of 1728, and an integer-perfect number, where its divisors can be partitioned into two disjoint sets with equal sum. 1728 is 3-smooth, since its only distinct prime factors are 2 and 3. This also makes 1728 a regular number, a class of numbers most useful in the context of powers of 60, the smallest number with twelve divisors. 1728 is also an untouchable number, since there is no number whose sum of proper divisors is 1728. Many relevant calculations involving 1728 are computed in the duodecimal number system, in which it is represented as "1000". Modular j-invariant 1728 occurs in the algebraic formula for the j-invariant of an elliptic curve, as a function of a complex variable τ on the upper half-plane: j(τ) = 1728 · g₂(τ)³ / (g₂(τ)³ − 27 g₃(τ)²). Inputting a value of 2i for τ, where i is the imaginary unit, yields another cubic integer: j(2i) = 66³ = 287496. In moonshine theory, the first few terms in the Fourier q-expansion of the normalized j-invariant expand as j(τ) = q⁻¹ + 744 + 196884q + 21493760q² + ⋯, where q = e^(2πiτ). The Griess algebra (which contains the friendly giant as its automorphism group) and all subsequent graded parts of its infinite-dimensional moonshine module hold representations whose dimensions are the Fourier coefficients in this q-expansion. Other properties The number of directed open knight's tours in minichess is 1728. 1728 is one less than the first taxicab or Hardy–Ramanujan number, 1729, which is the smallest number that can be expressed as the sum of two positive cubes in two ways. In culture 1728 is the number of daily chants of the Hare Krishna mantra by a Hare Krishna devotee. The number comes from 16 rounds on a 108-bead japamala (16 × 108 = 1728). See also The year AD 1728 References External links 1728 at Numbers Aplenty. Integers
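The divisor-count, totient, and factorial-product claims above are easy to check by brute force; the following short Perl sketch (an illustrative verification, not part of the article) prints each value.

```perl
use strict;
use warnings;

my $n = 1728;

# Number of divisors of 1728 (expected: 28).
my @div = grep { $n % $_ == 0 } 1 .. $n;
printf "divisors of %d: %d\n", $n, scalar @div;

# Euler totient of 1728 (expected: 576 = 24**2).
sub gcd { my ($a, $b) = @_; ($a, $b) = ($b, $a % $b) while $b; return $a; }
my $phi = grep { gcd($_, $n) == 1 } 1 .. $n;   # grep in scalar context counts matches
printf "Euler totient: %d\n", $phi;

# Product of the six divisors of 12 (expected: 12**3 = 1728).
my $prod = 1;
$prod *= $_ for grep { 12 % $_ == 0 } 1 .. 12;
printf "product of divisors of 12: %d\n", $prod;

# Product of factorials (3!)**3 * (2!)**3 (expected: 1728).
printf "(3!)**3 * (2!)**3 = %d\n", (6 ** 3) * (2 ** 3);
```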
1728 (number)
Mathematics
644
908,764
https://en.wikipedia.org/wiki/Perl%20Data%20Language
Perl Data Language (abbreviated PDL) is a set of free software array programming extensions to the Perl programming language. PDL extends the data structures built into Perl, to include large multidimensional arrays, and adds functionality to manipulate those arrays as vector objects. It also provides tools for image processing, machine learning, computer modeling of physical systems, and graphical plotting and presentation. Simple operations are automatically vectorized across complete arrays, and higher-dimensional operations (such as matrix multiplication) are supported. Language design PDL is a vectorized array programming language: the expression syntax is a variation on standard mathematical vector notation, so that the user can combine and operate on large arrays with simple expressions. In this respect, PDL follows in the footsteps of the APL programming language, and it has been compared to commercial languages such as MATLAB and Interactive Data Language, and to other free languages such as NumPy and Octave. Unlike MATLAB and IDL, PDL allows great flexibility in indexing and vectorization: for example, if a subroutine normally operates on a 2-D matrix array, passing it a 3-D data cube will generally cause the same operation to happen to each 2-D layer of the cube. PDL borrows from Perl at least three basic types of program structure: imperative programming, functional programming, and pipeline programming forms may be combined. Subroutines may be loaded either via a built-in autoload mechanism or via the usual Perl module mechanism. Graphics True to the glue language roots of Perl, PDL borrows from several different modules for graphics and plotting support. NetPBM provides image file I/O (though FITS is supported natively). Gnuplot, PLplot, PGPLOT, and Prima modules are supported for 2-D graphics and plotting applications, and Gnuplot and OpenGL are supported for 3-D plotting and rendering. I/O PDL provides facilities to read and write many open data formats, including JPEG, PNG, GIF, PPM, MPEG, FITS, NetCDF, GRIB, raw binary files, and delimited ASCII tables. PDL programmers can use the CPAN Perl I/O libraries to read and write data in hundreds of standard and niche file formats. Machine learning PDL can be used for machine learning. It includes modules that are used to perform classic k-means clustering or general and generalized linear modeling methods such as ANOVA, linear regression, PCA, and logistic regression. Examples of PDL usage for regression modelling tasks include evaluating association between education attainment and ancestry differences of parents, comparison of RNA-protein interaction profiles that needs regression-based normalization and analysis of spectra of galaxies. perldl An installation of PDL usually comes with an interactive shell known as perldl, which can be used to perform simple calculations without requiring the user to create a Perl program file. A typical session of perldl would look something like the following: perldl> $x = pdl [[1, 2], [3, 4]]; perldl> $y = pdl [[5, 6, 7],[8, 9, 0]]; perldl> $z = $x x $y; perldl> p $z; [ [21 24 7] [47 54 21] ] The commands used in the shell are Perl statements that can be used in a program with PDL module included. x is an overloaded operator for matrix multiplication, and p in the last command is a shortcut for print. Implementation The core of PDL is written in C. 
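Expanding on the vectorization and implicit threading described under Language design above, here is a minimal sketch; sequence, sumover, pdl, and the overloaded x operator are standard PDL built-ins, though the exact printed layout may differ between PDL versions.

```perl
use strict;
use warnings;
use PDL;

# A 2x2x3 "data cube": three 2x2 layers stacked along the third dimension.
my $cube = sequence(2, 2, 3);

# Elementwise arithmetic vectorizes over the whole cube at once.
my $doubled = $cube * 2;

# sumover collapses the first dimension and threads over the rest,
# so one call acts on every row of every layer.
print $cube->sumover, "\n";

# Matrix multiplication also threads: each 2x2 layer of the cube is
# multiplied by the 2x2 ndarray $m in turn.
my $m = pdl [[1, 0], [1, 1]];
my $prod = $cube x $m;
print $prod, "\n";
```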
Most of the functionality is written in PP, a PDL-specific metalanguage that handles the vectorization of simple C snippets and interfaces them with the Perl host language via Perl's XS compiler. Some modules are written in Fortran, with a C/PP interface layer. Many of the supplied functions are written in PDL itself. PP is available to the user to write C-language extensions to PDL. There is also an Inline module (Inline::Pdlpp) that allows PP function definitions to be inserted directly into a Perl script; the relevant code is low-level compiled and made available as a Perl subroutine. The PDL API uses the basic Perl 5 object-oriented functionality: PDL defines a new type of Perl scalar object (eponymously called a "PDL", or "ndarray") that acts as a Perl scalar, but that contains a conventional typed array of numeric or character values. All of the standard Perl operators are overloaded so that they can be used on PDL objects transparently, and PDLs can be mixed-and-matched with normal Perl scalars. Several hundred object methods for operating on PDLs are supplied by the core modules. See also Comparison of numerical-analysis software List of numerical-analysis software References External links PDL Quick Reference PDL Intro & resources Tutorial lecture on PDL Draft release of the PDL Book for PDL-2.006 Example of PDL usage in the scientific literature Array programming languages Free mathematics software Free science software Numerical programming languages Perl modules
Perl Data Language
Mathematics
1,096
10,300,318
https://en.wikipedia.org/wiki/Federal%20Standard%20595
Federal Standard 595, known as SAE AMS-STD-595 – Colors Used in Government Procurement, formerly FED-STD-595, is a United States Federal Standard for colors, issued by the General Services Administration. History FED-STD-595, 595A, and 595B Federal Standard 595 is the color description and communication system developed in 1956 by the United States government. Its origins reach back to World War II when a problem of providing exact color specifications to military equipment subcontractors in different parts of the world became a matter of urgency. Similarly to other color standards of the pre-digital era, such as RAL colour standard or British Standard 4800, Federal Standard 595 is a color collection rather than a color space. The standard is built upon a set of color shades where a unique reference number is assigned to each color. This collection is then printed on sample color chips and provided to interested parties. In contrast, modern color systems such as the Natural Color System (NCS) are built upon a color space paradigm, providing for much more flexibility and wider range of applications. Each color in the Federal Standard 595 range is identified by a five-digit code. The colors in the standard have no official names, just numbers. The initial standard FED-STD-595 issued in March 1956 contained 358 colors. Revision A issued in January 1968 counted 437 colors. Revision B Change 1 from January 1994 counted 611 colors. FED-STD-595C Federal Standard 595C was published January 16, 2008. No previous colors were removed. Thirty-nine new colors were added for a total of 650 colors. On July 31, 2008 595C Change Order 1 was published, changing the numbers of eight of the colors added in revision C. The revision C master reference list of colors provides all available reference information for these colors, including tristimulus values, pigments and 60° gloss level and color name as applicable. As before, all color matching must still be done via color reference chips. Many prime contractors, such as L3, require the Federal Standard 595 paint chips used for inspection purposes be replaced every two years. AMS-STD-595 As of February 14, 2017, FED-STD-595 was cancelled and replaced by SAE International's AMS-STD-595. Color chips as well as fan decks are available, including a box set containing 692 color chips. See also Pantone References External links Federal Standard 595C Color chart Federal Standard 595C, 2008. PDF files available from the website of the US military. Federal Standard 595B Rev Dec 1989, the previous version, FED-STD-595B, from 1989, revised in 1994 Federal Standard 595 Color Server, a third-party search engine providing graphical samples for each color and able to display combinations of colors Revision history of Federal Standard 595 Understanding the FS 595 color numbers Color space Standards of the United States
Federal Standard 595
Mathematics
621
12,747,956
https://en.wikipedia.org/wiki/Pseudoelementary%20class
In logic, a pseudoelementary class is a class of structures derived from an elementary class (one definable in first-order logic) by omitting some of its sorts and relations. It is the mathematical logic counterpart of the notion in category theory of (the codomain of) a forgetful functor, and in physics of (hypothesized) hidden variable theories purporting to explain quantum mechanics. Elementary classes are (vacuously) pseudoelementary but the converse is not always true; nevertheless pseudoelementary classes share some of the properties of elementary classes such as being closed under ultraproducts. Definition A pseudoelementary class is a reduct of an elementary class. That is, it is obtained by omitting some of the sorts and relations of a (many-sorted) elementary class. Examples The theory with equality of sets under union and intersection, whose structures are of the form (W, ∪, ∩), can be understood naively as the pseudoelementary class formed from the two-sorted elementary class of structures of the form (A, W, ∪, ∩, ∈) where ∈ ⊆ A×W and ∪ and ∩ are binary operations (qua ternary relations) on W. The theory of the latter class is axiomatized by ∀X,Y∈W.∀a∈A.[ a ∈ X∪Y ⇔ a ∈ X ∨ a ∈ Y] ∀X,Y∈W.∀a∈A.[ a ∈ X∩Y ⇔ a ∈ X ∧ a ∈ Y] ∀X,Y∈W.[ (∀a∈A.[a ∈ X ⇔ a ∈ Y]) → X = Y] In the intended interpretation A is a set of atoms a,b,..., W is a set of sets of atoms X,Y,... and ∈ is the membership relation between atoms and sets. The consequences of these axioms include all the laws of distributive lattices. Since the latter laws make no mention of atoms they remain meaningful for the structures obtained from the models of the above theory by omitting the sort A of atoms and the membership relation ∈. All distributive lattices are representable as sets of sets under union and intersection, whence this pseudoelementary class is in fact an elementary class, namely the variety of distributive lattices. In this example both classes (respectively before and after the omission) are finitely axiomatizable elementary classes. But whereas the standard approach to axiomatizing the latter class uses nine equations to axiomatize a distributive lattice, the former class only requires the three axioms above, making it faster to define the latter class as a reduct of the former than directly in the usual way. The theory with equality of binary relations under union R∪S, intersection R∩S, complement R−, relational composition R;S, and relational converse R˘, whose structures are of the form (W, ∪, ∩, −, ;, ˘), can be understood as the pseudoelementary class formed from the three-sorted elementary class of structures of the form (A, P, W, ∪, ∩, −, ;, ˘, λ, ρ, π, ∈). The intended interpretations of the three sorts are atoms, pairs of atoms, and sets of pairs of atoms; π: A×A → P and λ,ρ: P → A are the evident pairing constructors and destructors, and ∈ ⊆ P×W is the membership relation between pairs and relations (as sets of pairs). By analogy with Example 1, the purely relational connectives defined on W can be axiomatized naively in terms of atoms and pairs of atoms in the customary manner of introductory texts. The pure theory of binary relations can then be obtained as the theory of the pseudoelementary class of reducts of models of this elementary class obtained by omitting the atom and pair sorts and all relations involving the omitted sorts. 
In this example both classes are elementary, but only the former class is finitely axiomatizable, though the latter class (the reduct) was shown by Tarski in 1955 to be nevertheless a variety, namely RRA, the representable relation algebras. A primitive ring is a generalization of the notion of simple ring. It is definable in elementary (first-order) language in terms of the elements and ideals of a ring, giving rise to an elementary class of two-sorted structures comprising rings and ideals. The class of primitive rings is obtained from this elementary class by omitting the sorts and language associated with the ideals, and is hence a pseudoelementary class. In this example it is an open question whether this pseudoelementary class is elementary. The class of exponentially closed fields is a pseudoelementary class that is not elementary. Applications A quasivariety defined logically as the class of models of a universal Horn theory can equivalently be defined algebraically as a class of structures closed under isomorphisms, subalgebras, and reduced products. Since the notion of reduced product is more intricate than that of direct product, it is sometimes useful to blend the logical and algebraic characterizations in terms of pseudoelementary classes. One such blended definition characterizes a quasivariety as a pseudoelementary class closed under isomorphisms, subalgebras, and direct products (the pseudoelementary property allows "reduced" to be simplified to "direct"). A corollary of this characterization is that one can (nonconstructively) prove the existence of a universal Horn axiomatization of a class by first axiomatizing some expansion of the structure with auxiliary sorts and relations and then showing that the pseudoelementary class obtained by dropping the auxiliary constructs is closed under subalgebras and direct products. This technique works for Example 2 because subalgebras and direct products of algebras of binary relations are themselves algebras of binary relations, showing that the class RRA of representable relation algebras is a quasivariety (and a fortiori an elementary class). This short proof is an effective application of abstract nonsense; the stronger result by Tarski that RRA is in fact a variety required more honest toil. References Paul C. Eklof (1977), Ultraproducts for Algebraists, in Handbook of Mathematical Logic (ed. Jon Barwise), North-Holland. Model theory Universal algebra
Pseudoelementary class
Mathematics
1,364
32,721,733
https://en.wikipedia.org/wiki/Exonuclease%20VII
The enzyme exodeoxyribonuclease VII (EC 3.1.11.6, Escherichia coli exonuclease VII, E. coli exonuclease VII, endodeoxyribonuclease VII, exodeoxyribonuclease VII) is a bacterial exonuclease enzyme. It is composed of two nonidentical subunits: one large subunit and four small ones. It catalyses exonucleolytic cleavage in either the 5′- to 3′- or the 3′- to 5′-direction to yield nucleoside 5′-phosphates. The large subunit also contains an N-terminal OB-fold domain that binds to nucleic acids. References Protein families EC 3.1.11
Exonuclease VII
Biology
168
218,445
https://en.wikipedia.org/wiki/Lean%20manufacturing
Lean manufacturing is a method of manufacturing goods aimed primarily at reducing times within the production system as well as response times from suppliers and customers. It is closely related to another concept called just-in-time manufacturing (JIT manufacturing in short). Just-in-time manufacturing tries to match production to demand by only supplying goods that have been ordered and focus on efficiency, productivity (with a commitment to continuous improvement), and reduction of "wastes" for the producer and supplier of goods. Lean manufacturing adopts the just-in-time approach and additionally focuses on reducing cycle, flow, and throughput times by further eliminating activities that do not add any value for the customer. Lean manufacturing also involves people who work outside of the manufacturing process, such as in marketing and customer service. Lean manufacturing is particularly related to the operational model implemented in the post-war 1950s and 1960s by the Japanese automobile company Toyota called the Toyota Production System (TPS), known in the United States as "The Toyota Way". Toyota's system was erected on the two pillars of just-in-time inventory management and automated quality control. The seven "wastes" ( in Japanese), first formulated by Toyota engineer Shigeo Shingo, are the waste of superfluous inventory of raw material and finished goods, the waste of overproduction (producing more than what is needed now), the waste of over-processing (processing or making parts beyond the standard expected by customer), the waste of transportation (unnecessary movement of people and goods inside the system), the waste of excess motion (mechanizing or automating before improving the method), the waste of waiting (inactive working periods due to job queues), and the waste of making defective products (reworking to fix avoidable defects in products and processes). The term Lean was coined in 1988 by American businessman John Krafcik in his article "Triumph of the Lean Production System," and defined in 1996 by American researchers James Womack and Daniel Jones to consist of five key principles: "Precisely specify value by specific product, identify the value stream for each product, make value flow without interruptions, let customer pull value from the producer, and pursue perfection." Companies employ the strategy to increase efficiency. By receiving goods only as they need them for the production process, it reduces inventory costs and wastage, and increases productivity and profit. The downside is that it requires producers to forecast demand accurately as the benefits can be nullified by minor delays in the supply chain. It may also impact negatively on workers due to added stress and inflexible conditions. A successful operation depends on a company having regular outputs, high-quality processes, and reliable suppliers. History Frederick Taylor and Henry Ford documented their observations relating to these topics, and Shigeo Shingo and Taiichi Ohno applied their enhanced thoughts on the subject at Toyota in the late 1940s after World War II. The resulting methods were researched in the mid-20th century and dubbed Lean by John Krafcik in 1988, and then were defined in The Machine that Changed the World and further detailed by James Womack and Daniel Jones in Lean Thinking (1996). 
Japan: the origins of Lean The adoption of just-in-time manufacturing in Japan and many other early forms of Lean can be traced back directly to the US-backed Reconstruction and Occupation of Japan following WWII. During this time, an American economist, W. Edwards Deming, and an American statistician, Walter A. Shewhart, promoted some of the earliest modern manufacturing methods and management philosophies they developed in the late '30s and early '40s. The two experts were the first to apply these newly developed statistical models to improve efficiencies in many of America's largest military manufacturers during WWII. However, Deming and Shewhart failed to convince other US manufacturers to apply these "radical" methods. After the war, Deming was assigned to participate in the Reconstruction of Japan by General Douglas MacArthur. Deming participated as a manufacturing consultant for Japan's struggling heavy industries, which included Toyota and Mitsubishi. Unlike what they experienced in the US, Deming found the Japanese very receptive to learning and applying these new efficiency methods. Many of the manufacturing methods first introduced by Deming in Japan, and later innovated by Japanese companies, are what we now call Lean Manufacturing. Japanese manufacturers still recognize Deming for his contributions to modern Japanese efficiency practices by awarding the best manufacturers in the world the Deming Prize. In addition to Deming's critical influence in Japan, most local companies were in a position where they needed an immediate solution to the extreme situation they were living in after World War II. American supply chain specialist Gerhard Plenert has offered four quite vague reasons, paraphrased here. During Japan's post–World War II rebuilding (of economy, infrastructure, industry, political, and social-emotional stability): Japan's lack of cash made it difficult for industry to finance the big-batch, large inventory production methods common elsewhere. Japan lacked space to build big factories loaded with inventory. The Japanese islands lack natural resources with which to build products. Japan had high unemployment, which meant that labor efficiency methods were not an obvious pathway to industrial success. Thus, the Japanese "leaned out" their processes. "They built smaller factories ... in which the only materials housed in the factory were those on which work was currently being done. In this way, inventory levels were kept low, investment in in-process inventories was at a minimum, and the investment in purchased natural resources was quickly turned around so that additional materials were purchased." Plenert goes on to explain Toyota's key role in developing this lean or just-in-time production methodology. American industrialists recognized the threat of cheap offshore labor to American workers during the 1910s and explicitly stated the goal of what is now called lean manufacturing as a countermeasure. Henry Towne, past president of the American Society of Mechanical Engineers, wrote in the foreword to Frederick Winslow Taylor's Shop Management (1911), "We are justly proud of the high wage rates which prevail throughout our country, and jealous of any interference with them by the products of the cheaper labor of other countries. 
To maintain this condition, to strengthen our control of home markets, and, above all, to broaden our opportunities in foreign markets where we must compete with the products of other industrial nations, we should welcome and encourage every influence tending to increase the efficiency of our productive processes." Continuous production improvement and incentives for such were documented in Taylor's Principles of Scientific Management (1911): "... whenever a workman proposes an improvement, it should be the policy of the management to make a careful analysis of the new method, and if necessary conduct a series of experiments to determine accurately the relative merit of the new suggestion and of the old standard. And whenever the new method is found to be markedly superior to the old, it should be adopted as the standard for the whole establishment." "...after a workman has had the price per piece of the work he is doing lowered two or three times as a result of his having worked harder and increased his output, he is likely entirely to lose sight of his employer's side of the case and become imbued with a grim determination to have no more cuts if soldiering [marking time, just doing what he is told] can prevent it." Shigeo Shingo cites reading Principles of Scientific Management in 1931 and being "greatly impressed to make the study and practice of scientific management his life's work"., Shingo and Taiichi Ohno were key to the design of Toyota's manufacturing process. Previously a textile company, Toyota moved into building automobiles in 1934. Kiichiro Toyoda, the founder of Toyota Motor Corporation, directed the engine casting work and discovered many problems in their manufacturing, with wasted resources on the repair of poor-quality castings. Toyota engaged in intense study of each stage of the process. In 1936, when Toyota won its first truck contract with the Japanese government, the processes encountered new problems, to which Toyota responded by developing Kaizen improvement teams, which into what has become the Toyota Production System (TPS), and subsequently The Toyota Way. Levels of demand in the postwar economy of Japan were low; as a result, the focus of mass production on lowest cost per item via economies of scale had little application. Having visited and seen supermarkets in the United States, Ohno recognized that the scheduling of work should not be driven by sales or production targets but by actual sales. Given the financial situation during this period, over-production had to be avoided, and thus the notion of "pull" (or "build-to-order" rather than target-driven "push") came to underpin production scheduling. Evolution in the rest of the world Just-in-time manufacturing was introduced in Australia in the 1950s by the British Motor Corporation (Australia) at its Victoria Park plant in Sydney, from where the idea later migrated to Toyota. News about just-in-time/Toyota production system reached other western countries from Japan in 1977 in two English-language articles: one referred to the methodology as the "Ohno system", after Taiichi Ohno, who was instrumental in its development within Toyota. The other article, by Toyota authors in an international journal, provided additional details. Finally, those and other publicity were translated into implementations, beginning in 1980 and then quickly multiplying throughout industry in the United States and other developed countries. 
A seminal 1980 event was a conference in Detroit at Ford World Headquarters co-sponsored by the Repetitive Manufacturing Group (RMG), which had been founded in 1979 within the American Production and Inventory Control Society (APICS) to seek advances in manufacturing. The principal speaker, Fujio Cho (later, president of Toyota Motor Corp.), in explaining the Toyota system, stirred up the audience, and led to the RMG's shifting gears from things like automation to just-in-time/Toyota production system. At least some of the audience's stirring had to do with a perceived clash between the new just-in-time regime and manufacturing resource planning (MRP II), a computer software-based system of manufacturing planning and control which had become prominent in industry in the 1960s and 1970s. Debates in professional meetings on just-in-time vs. MRP II were followed by published articles, one of them titled "The Rise and Fall of Just-in-Time". Less confrontational was Walt Goddard's "Kanban Versus MRP II—Which Is Best for You?" in 1982. Four years later, Goddard had answered his own question with a book advocating just-in-time. Among the best known of MRP II's advocates was George Plossl, who authored two articles questioning just-in-time's kanban planning method and the "japanning of America". But, as with Goddard, Plossl later wrote that "JIT is a concept whose time has come". Just-in-time/TPS implementations may be found in many case-study articles from the 1980s and beyond. An article in a 1984 issue of Inc. magazine relates how Omark Industries (chain saws, ammunition, log loaders, etc.) emerged as an extensive just-in-time implementer under its US home-grown name ZIPS (zero inventory production system). At Omark's mother plant in Portland, Oregon, after the work force had received 40 hours of ZIPS training, they were "turned loose" and things began to happen. A first step was to "arbitrarily eliminate a week's lead time [after which] things ran smoother. 'People asked that we try taking another week's worth out.' After that, ZIPS spread throughout the plant's operations 'like an amoeba.'" The article also notes that Omark's 20 other plants were similarly engaged in ZIPS, beginning with pilot projects. For example, at one of Omark's smaller plants making drill bits in Mesabi, Minnesota, "large-size drill inventory was cut by 92%, productivity increased by 30%, scrap and rework ... dropped 20%, and lead time ... from order to finished product was slashed from three weeks to three days." The Inc. article states that companies using just-in-time the most extensively include "the Big Four, Hewlett-Packard, Motorola, Westinghouse Electric, General Electric, Deere & Company, and Black and Decker". By 1986, a case-study book on just-in-time in the U.S. was able to devote a full chapter to ZIPS at Omark, along with two chapters on just-in-time at several Hewlett-Packard plants, and single chapters for Harley-Davidson, John Deere, IBM-Raleigh, North Carolina, and California-based Apple Inc., a Toyota truck-bed plant, and the New United Motor Manufacturing joint venture between Toyota and General Motors. Two similar, contemporaneous books from the UK are more international in scope. 
One of the books, with both conceptual articles and case studies, includes three sections on just-in-time practices: in Japan (e.g., at Toyota, Mazda, and Tokagawa Electric); in Europe (jmg Bostrom, Lucas Electric, Cummins Engine, IBM, 3M, Datasolve Ltd., Renault, Massey Ferguson); and in the US and Australia (Repco Manufacturing-Australia, Xerox Computer, and two on Hewlett-Packard). The second book, reporting on what was billed as the First International Conference on just-in-time manufacturing, includes case studies in three companies: Repco-Australia, IBM-UK, and 3M-UK. In addition, a day two keynote address discussed just-in-time as applied "across all disciplines, ... from accounting and systems to design and production". Rebranding as "lean" John Krafcik coined the term Lean in his 1988 article, "Triumph of the Lean Production System". The article states: (a) Lean manufacturing plants have higher levels of productivity/quality than non-Lean plants and (b) "The level of plant technology seems to have little effect on operating performance" (page 51). According to the article, risks with implementing Lean can be reduced by: "developing a well-trained, flexible workforce, product designs that are easy to build with high quality, and a supportive, high-performance supplier network" (page 51). Middle era and to the present Three more books which include just-in-time implementations were published in 1993, 1995, and 1996, which are start-up years of the lean manufacturing/lean management movement that was launched in 1990 with publication of the book, The Machine That Changed the World. That book, along with other books, articles, and case studies on lean, was supplanting just-in-time terminology in the 1990s and beyond. The same period saw the rise of books and articles with similar concepts and methodologies but with alternative names, including cycle time management, time-based competition, quick-response manufacturing, flow, and pull-based production systems. There is more to just-in-time than its usual manufacturing-centered explication. Inasmuch as manufacturing ends with order-fulfillment to distributors, retailers, and end users, and also includes remanufacturing, repair, and warranty claims, just-in-time's concepts and methods have application downstream from manufacturing itself. A 1993 book on "world-class distribution logistics" discusses kanban links from factories onward, and a manufacturer-to-retailer model developed in the U.S. in the 1980s, referred to as quick response, has morphed over time to what is called fast fashion. Methodology The strategic elements of lean can be quite complex, and comprise multiple elements. Four different notions of lean have been identified: Lean as a fixed state or goal (being lean) Lean as a continuous change process (becoming lean) Lean as a set of tools or methods (doing lean/toolbox lean) Lean as a philosophy (lean thinking) Another way to avoid market risk and control supply efficiently is to cut down on stock. P&G achieved its goal of co-operating with Walmart and other wholesale companies by building a stock response system that runs directly to the supplier companies. In 1999, Spear and Bowen identified four rules which characterize the "Toyota DNA": All work shall be highly specified as to content, sequence, timing, and outcome. Every customer-supplier connection must be direct, and there must be an unambiguous yes or no way to send requests and receive responses. 
The pathway for every product and service must be simple and direct. Any improvement must be made in accordance with the scientific method, under the guidance of a teacher, at the lowest possible level in the organization. This is a fundamentally different approach from most improvement methodologies, and requires more persistence than basic application of the tools, which may partially account for its lack of popularity. The implementation of "smooth flow" exposes quality problems that already existed, and waste reduction then happens as a natural consequence; this is a system-wide perspective rather than one that focuses directly upon the wasteful practices themselves. Takt time is the rate at which products need to be produced to meet customer demand (a small worked example follows the case-study figures below). The JIT system is designed to produce products at the rate of takt time, which ensures that products are produced just in time to meet customer demand. Sepheri provides a list of methodologies of just-in-time manufacturing that "are important but not exhaustive": Housekeeping: physical organization and discipline. Make it right the first time: elimination of defects. Setup reduction: flexible changeover approaches. Lot sizes of one: the ultimate lot size and flexibility. Uniform plant load: leveling as a control mechanism. Balanced flow: organizing flow scheduling throughput. Skill diversification: multi-functional workers. Control by visibility: communication media for activity. Preventive maintenance: flawless running, no defects. Fitness for use: producibility, design for process. Compact plant layout: product-oriented design. Streamlining movements: smoothing materials handling. Supplier networks: extensions of the factory. Worker involvement: small group improvement activities. Cellular manufacturing: production methods for flow. Pull system: signal [kanban] replenishment/resupply systems. Key principles Womack and Jones define Lean as "...a way to do more and more with less and less—less human effort, less equipment, less time, and less space—while coming closer and closer to providing customers exactly what they want" and then translate this into five key principles: Value: Specify the value desired by the customer. "Form a team for each product to stick with that product during its entire production cycle", "Enter into a dialogue with the customer" (e.g. Voice of the customer) The Value Stream: Identify the value stream for each product providing that value and challenge all of the wasted steps (generally nine out of ten) currently necessary to provide it Flow: Make the product flow continuously through the remaining value-added steps Pull: Introduce pull between all steps where continuous flow is possible Perfection: Manage toward perfection so that the number of steps and the amount of time and information needed to serve the customer continually falls Lean is founded on the concept of continuous and incremental improvements on product and process while eliminating redundant activities. "Value-adding activities are simply only those things the customer is willing to pay for; everything else is waste, and should be eliminated, simplified, reduced, or integrated". On principle 2, waste, see seven basic waste types under The Toyota Way. Additional waste types are: Faulty goods (manufacturing of goods or services that do not meet customer demand or specifications, Womack et al., 2003. 
See Lean services) Waste of skills (Six Sigma) Under-utilizing capabilities (Six Sigma) Delegating tasks with inadequate training (Six Sigma) Metrics (working to the wrong metrics or no metrics) (Mika Geoffrey, 1999) Participation (not utilizing workers by not allowing them to contribute ideas and suggestions and be part of Participative Management) (Mika Geoffrey, 1999) Computers (improper use of computers: not having the proper software, training on use and time spent surfing, playing games or just wasting time) (Mika Geoffrey, 1999) Implementation One paper suggests that an organization implementing Lean needs its own Lean plan as developed by the "Lean Leadership". This should enable Lean teams to provide suggestions for their managers, who then make the actual decisions about what to implement. Coaching is recommended when an organization starts off with Lean to impart knowledge and skills to shop-floor staff. Improvement metrics are required for informed decision-making. Lean philosophy and culture is as important as tools and methodologies. Management should not decide on solutions without understanding the true problem, which requires consulting shop floor personnel. The solution to a specific problem for a specific company may not have generalized application. The solution must fit the problem. Value-stream mapping (VSM) and 5S are the most common approaches companies take on their first steps to Lean. Lean can be focused on specific processes, or cover the entire supply chain. Front-line workers should be involved in VSM activities. Implementing a series of small improvements incrementally along the supply chain can bring forth enhanced productivity. Naming Alternative terms for JIT manufacturing have been used. Motorola's choice was short-cycle manufacturing (SCM). IBM's was continuous-flow manufacturing (CFM), and demand-flow manufacturing (DFM), a term handed down from consultant John Constanza at his Institute of Technology in Colorado. Still another alternative was mentioned by Goddard, who said that "Toyota Production System is often mistakenly referred to as the 'Kanban System'", and pointed out that kanban is but one element of TPS, as well as JIT production. The wide use of the term JIT manufacturing throughout the 1980s faded fast in the 1990s, as the new term lean manufacturing became established, as "a more recent name for JIT". As just one testament to the commonality of the two terms, Toyota production system (TPS) has been and is widely used as a synonym for both JIT and lean manufacturing. Objectives and benefits Objectives and benefits of JIT manufacturing may be stated in two primary ways: first, in specific and quantitative terms, via published case studies; second, in general listings and discussion. A case-study summary from Daman Products in 1999 lists the following benefits: reduced cycle times by 97%, setup times by 50%, lead times from 4 to 8 weeks to 5 to 10 days, and flow distance by 90%. This was achieved via four focused (cellular) factories, pull scheduling, kanban, visual management, and employee empowerment. Another study from NCR (Dundee, Scotland) in 1998, a producer of make-to-order automated teller machines, includes some of the same benefits while also focusing on JIT purchasing: in switching to JIT over a weekend in 1998, the company eliminated buffer inventories, reducing inventory from 47 days to 5 days and flow time from 15 days to 2 days, with 60% of purchased parts arriving JIT and 77% going dock to line, and suppliers reduced from 480 to 165. 
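The takt-time pacing described earlier can be illustrated with a short calculation. The figures below are hypothetical and are not taken from the Daman or NCR case studies; the sketch only shows how takt time is derived from available time and customer demand, and how it flags bottleneck steps under pull scheduling.

```python
# Hypothetical shift and demand figures, for illustration only.
available_minutes_per_shift = 450      # scheduled work time minus breaks
customer_demand_per_shift = 300        # units actually pulled by customers

takt_time = available_minutes_per_shift / customer_demand_per_shift
print(f"takt time = {takt_time:.1f} min/unit")     # 1.5 min/unit

# Under pull scheduling each step is paced to takt; a cycle time above takt
# marks a bottleneck that cannot keep up with real demand.
cycle_times_min = {"weld": 1.2, "paint": 1.6, "assemble": 1.4}
bottlenecks = [step for step, ct in cycle_times_min.items() if ct > takt_time]
print(bottlenecks)                                  # ['paint']
```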
Hewlett-Packard, one of western industry's earliest JIT implementers, provides a set of four case studies from four H-P divisions during the mid-1980s. The four divisions, Greeley, Fort Collins, Computer Systems, and Vancouver, employed some but not all of the same measures. At the time about half of H-P's 52 divisions had adopted JIT. Application outside a manufacturing context Lean principles have been successfully applied to various sectors and services, such as call centers and healthcare. In the former, lean's waste reduction practices have been used to reduce handle time, within- and between-agent variation, and accent barriers, as well as to attain near perfect process adherence. In the latter, several hospitals have adopted the idea of the lean hospital, a concept that prioritizes the patient, thus increasing employee commitment and motivation, as well as boosting medical quality and cost effectiveness. Lean principles also have applications to software development and maintenance as well as other sectors of information technology (IT). More generally, the use of lean in information technology has become known as Lean IT. Lean methods are also applicable to the public sector, but most results have been achieved using a much more restricted range of techniques than lean provides. The challenge in moving lean to services is the lack of widely available reference implementations to allow people to see how directly applying lean manufacturing tools and practices can work and the impact it does have. This makes it more difficult to build the level of belief seen as necessary for strong implementation. However, some research does relate widely recognized examples of success in retail and even airlines to the underlying principles of lean. Despite this, it remains the case that the direct manufacturing examples of 'techniques' or 'tools' need to be better 'translated' into a service context to support the more prominent approaches of implementation, which has not yet received the level of work or publicity that would give starting points for implementors. The upshot of this is that each implementation often 'feels its way' along, much as the early industrial engineering practices of Toyota did. This places huge importance upon sponsorship to encourage and protect these experimental developments. Lean management is nowadays also implemented in non-manufacturing and administrative processes. In non-manufacturing processes there is still huge potential for optimization and efficiency increase. Some people have advocated using STEM resources to teach children Lean thinking instead of computer science. Lean manufacturing methodology has become a prevalent practice in public healthcare, commonly known as lean healthcare. Due to the intensely competitive environment, the lean approach has become a growing alternative in the healthcare sector to achieve optimized resource management and performance improvement. Criticism According to Williams, it becomes necessary to find suppliers that are close by or can supply materials quickly with limited advance notice. When ordering small quantities of materials, suppliers' minimum order policies may pose a problem, though. Employees are at risk of precarious work when employed by factories that utilize just-in-time and flexible production techniques. 
A longitudinal study of US workers since 1970 indicates that employers seeking to easily adjust their workforce in response to supply and demand conditions respond by creating more nonstandard work arrangements, such as contracting and temporary work. Natural and human-made disasters will disrupt the flow of energy, goods and services. The down-stream customers of those goods and services will, in turn, not be able to produce their product or render their service because they were counting on incoming deliveries "just in time" and so have little or no inventory to work with. The disruption to the economic system will cascade to some degree depending on the nature and severity of the original disaster and may create shortages. The larger the disaster, the worse the resulting just-in-time failures. Electrical power is the ultimate example of just-in-time delivery. A severe geomagnetic storm could disrupt electrical power delivery for hours to years, locally or even globally. Lack of supplies on hand to repair the electrical system would have catastrophic effects. The COVID-19 pandemic caused disruption in JIT practices: various quarantine restrictions on international trade and commercial activity interrupted supply while stockpiles to handle the disruption were lacking, and increased demand for medical supplies like personal protective equipment (PPE) and ventilators, as well as panic buying (including of various domestically manufactured, and so less vulnerable, products such as toilet paper), disturbed regular demand. This has led to suggestions that stockpiles and diversification of suppliers should be given more focus. Critics of Lean argue that this management method has significant drawbacks, especially for the employees of companies operating under Lean. A common criticism of Lean is that it fails to take into consideration the employee's safety and well-being. Lean manufacturing is associated with an increased level of stress among employees, who have a small margin of error in work environments which require perfection. Lean also over-focuses on cutting waste, which may lead management to cut sectors of the company that are not essential to the company's short-term productivity but are nevertheless important to the company's legacy. Lean also over-focuses on the present, which hinders a company's plans for the future. Critics also draw negative comparisons between Lean and 19th-century scientific management, which had been fought by the labor movement and was considered obsolete by the 1930s. Finally, lean is criticized for lacking a standard methodology: "Lean is more a culture than a method, and there is no standard lean production model." After years of success of Toyota's Lean Production, the consolidation of supply chain networks brought Toyota to the position of being the world's biggest carmaker through rapid expansion. In 2010, the crisis of safety-related problems in Toyota made other carmakers that duplicated Toyota's supply chain system wary that the same recall issue might happen to them. James Womack had warned Toyota that cooperating with single outsourced suppliers might bring unexpected problems. Lean manufacturing is different from lean enterprise. Recent research reports the existence of several lean manufacturing processes but of few lean enterprises. One distinguishing feature contrasts lean accounting with standard cost accounting. For standard cost accounting, SKUs are difficult to grasp. 
SKUs include too much hypothesis and variance, i.e., SKUs hold too much indeterminacy. Manufacturing may want to consider moving away from traditional accounting and adopting lean accounting. In using lean accounting, one expected gain is activity-based cost visibility, i.e., measuring the direct and indirect costs at each step of an activity rather than traditional cost accounting that limits itself to labor and supplies. See also Notes References Billesbach, Thomas J. 1987. Applicability of Just-in-Time Techniques in the Administrative Area. Doctoral dissertation, University of Nebraska. Ann Arbor, Mich., University Microfilms International. Goddard, W. E. 2001. JIT/TQC—identifying and solving problems. Proceedings of the 20th Electrical Electronics Insulation Conference, Boston, October 7–10, 88–91. Goldratt, Eliyahu M. and Fox, Robert E. (1986), The Race, North River Press, Hall, Robert W. 1983. Zero Inventories. Homewood, Ill.: Dow Jones-Irwin. Hall, Robert W. 1987. Attaining Manufacturing Excellence: Just-in-Time, Total Quality, Total People Involvement. Homewood, Ill.: Dow Jones-Irwin. Hay, Edward J. 1988. The Just-in-Time Breakthrough: Implementing the New Manufacturing Basics. New York: Wiley. Ker, J. I., Wang, Y., Hajli, M. N., Song, J., Ker, C. W. (2014). Deploying Lean in Healthcare: Evaluating Information Technology Effectiveness in US Hospital Pharmacies Lubben, R. T. 1988. Just-in-Time Manufacturing: An Aggressive Manufacturing Strategy. New York: McGraw-Hill. MacInnes, Richard L. (2002) The Lean Enterprise Memory Jogger. Mika, Geoffrey L. (1999) Kaizen Event Implementation Manual Monden, Yasuhiro. 1982. Toyota Production System. Norcross, Ga: Institute of Industrial Engineers. Ohno, Taiichi (1988), Toyota Production System: Beyond Large-Scale Production, Productivity Press, Ohno, Taiichi (1988), Just-In-Time for Today and Tomorrow, Productivity Press, . Page, Julian (2003) Implementing Lean Manufacturing Techniques. Schonberger, Richard J. 1982. Japanese Manufacturing Techniques: Nine Hidden Lessons in Simplicity. New York: Free Press. Suri, R. 1986. Getting from 'just in case' to 'just in time': insights from a simple model. 6 (3) 295–304. Suzaki, Kyoshi. 1993. The New Shop Floor Management: Empowering People for Continuous Improvement. New York: Free Press. Voss, Chris, and David Clutterbuck. 1989. Just-in-Time: A Global Status Report. UK: IFS Publications. Wadell, William, and Bodek, Norman (2005), The Rebirth of American Industry, PCS Press, External links Lean Enterprise Institute Manufacturing Freight transport Inventory Working capital management Inventory optimization
Lean manufacturing
Engineering
6,691
6,886,776
https://en.wikipedia.org/wiki/Drug-related%20crime
A drug-related crime is a crime involving the possession, manufacture, or distribution of drugs classified as having a potential for abuse (such as cocaine, heroin, morphine and amphetamines). Drugs are also related to crime as drug trafficking and drug production are often controlled by drug cartels, organised crime and gangs. Some drug-related crime involves crime against the person such as robbery or sexual assaults. U.S. Bureau of Justice Statistics In 2002, in the U.S. about a quarter of convicted property and drug offenders in local jails had committed their crimes to get money for drugs, compared to 5% of violent and public order offenders. Among State prisoners in 2004 the pattern was similar, with property (30%) and drug offenders (26%) more likely to commit their crimes for drug money than violent (10%) and public-order offenders (7%). In Federal prisons property offenders (11%) were less than half as likely as drug offenders (25%) to report drug money as a motive in their offenses. In 2004, 17% of U.S. State prisoners and 18% of Federal inmates said they committed their current offense to obtain money for drugs. These percentages represent a slight increase for Federal prisoners (16% in 1997) and a slight decrease for State prisoners (19% in 1997). Drugs and crime Drug abuse and addiction is associated with drug-related crimes. In the U.S. several jurisdictions have reported that benzodiazepine misuse by criminal detainees has surpassed that of opiates. Patients reporting to two emergency rooms in Canada with violence-related injuries were most often found to be intoxicated with alcohol and were significantly more likely to test positive for benzodiazepines (most commonly temazepam) than other groups of individuals, whereas other drugs were found to be insignificant in relation to violent injuries. Research carried out on drug-related crime found that drug misuse is associated with various crimes that are related in part to feelings of invincibility, which can become particularly pronounced with abuse. Associated problematic crimes include shoplifting, property crime, drug dealing, violence and aggression and driving whilst intoxicated. In Scotland, among the 71% of suspected criminals testing positive for controlled drugs at the time of their arrest, benzodiazepines (over 85% are temazepam cases) are detected more frequently than opiates and are second only to cannabis, which is the most frequently detected drug. Research carried out by the Australian government found that benzodiazepine users are more likely to be violent, more likely to have been in contact with the police, and more likely to have been charged with criminal behavior than those using opiates. Illicit benzodiazepines mostly originate from medical practitioners but leak onto the illicit scene due to diversion and doctor shopping. Although only a very small number originate from thefts, forged prescriptions, armed robberies, or ram raids, it is most often benzodiazepines, rather than opiates, that are targeted, in part because benzodiazepines are not usually locked in vaults and because the laws governing the prescription and storage of many benzodiazepines are less strict. Temazepam accounts for most benzodiazepine sought by forgery of prescriptions and through pharmacy burglary in Australia. Benzodiazepines have been used as a tool of murder by serial killers, and other murderers, such as those with the condition Munchausen Syndrome by Proxy. 
Benzodiazepines have also been used to facilitate rape or robbery crimes, and benzodiazepine dependence has been linked to shoplifting due to the fugue state induced by the chronic use of the drug. When benzodiazepines are used for criminal purposes against a victim they are often mixed with food or drink. Temazepam and midazolam are the most common benzodiazepines used to facilitate date rape. Alprazolam has been abused for the purpose of carrying out acts of incest and for the corruption of adolescent girls. However, alcohol remains the most common drug involved in cases of drug rape. Although benzodiazepines and ethanol are the most frequent drugs used in sexual assaults, GHB is another potential date rape drug that has received increased media focus. Some benzodiazepines are more associated with crime than others especially when abused or taken in combination with alcohol. The potent benzodiazepine flunitrazepam (Rohypnol), which has strong amnesia-producing effects can cause abusers to become ruthless and also cause feelings of being invincible. This has led to some acts of extreme violence to others, often leaving abusers with no recollection of what they have done in their drug-induced state. It has been proposed that criminal and violent acts brought on by benzodiazepine abuse may be related to lowered serotonin levels via enhanced GABAergic effects. Flunitrazepam has been implicated as the cause of one serial killer's violent rampage, triggering off extreme aggression with anterograde amnesia. A study on forensic psychiatric patients who had abused flunitrazepam at the time of their crimes found that the patients displayed extreme violence, lacked the ability to think clearly, and experienced a loss of empathy for their victims while under the influence of flunitrazepam, and it was found that the abuse of alcohol or other drugs in combination with flunitrazepam compounded the problem. Their behaviour under the influence of flunitrazepam was in contrast to their normal psychological state. Criticisms The concept of drug-related crime has been criticized for being too blunt, especially in its failure to distinguish between three types of crime associated with drugs: Use-Related crime: These are crimes that result from or involve individuals who ingest drugs, and who commit crimes as a result of the effect the drug has on their thought processes and behavior. Economic-Related crime: These are crimes where an individual commits a crime to fund a drug habit. These include theft and prostitution. System-Related crime: These are crimes that result from the structure of the drug system. They include production, manufacture, transportation, and sale of drugs, as well as violence related to the production or sale of drugs, such as a turf war. Drug-related crime may be used as a justification for prohibition, but, in the case of system-related crime, the acts are only crimes because of prohibition. In addition, some consider even user-related and economic-related aspects of crime as symptomatic of a broader problem. 
See also Alcohol-related crime Drug abuse Drugwipe test Self-medication General: Prohibition (drugs) Single Convention on Narcotic Drugs Organized crime: Cigarette smuggling Drug Cartel Drug Lord Illegal drug trade Organized crime Rum-running US specific: Bureau of Justice Statistics Bureau of Alcohol, Tobacco, Firearms and Explosives Comprehensive Drug Abuse Prevention and Control Act of 1970 Controlled Substances Act Drug Enforcement Administration Food and Drug Administration Racketeer Influenced and Corrupt Organizations Act (RICO) U.S. Immigration and Customs Enforcement Uniform Crime Report References External links Defining drug-related crime - EU Drug-related crime Canada Drug-related crime UK PDF version of Drug-related crime U.S. Department of Justice Prevention of drug-related crime - EU Beckley Foundation Report 2005, Reducing drug-related crime: an overview of the global Evidence Driving under the influence of drugs The National Center for Victims of Crime Drug control law Crime by type
Drug-related crime
Chemistry
1,556
1,723,156
https://en.wikipedia.org/wiki/Fourth%2C%20fifth%2C%20and%20sixth%20derivatives%20of%20position
In physics, the fourth, fifth and sixth derivatives of position are defined as derivatives of the position vector with respect to time – with the first, second, and third derivatives being velocity, acceleration, and jerk, respectively. The higher-order derivatives are less common than the first three; thus their names are not as standardized, though the concept of a minimum snap trajectory has been used in robotics and is implemented in MATLAB. The fourth derivative is referred to as snap, leading the fifth and sixth derivatives to be "sometimes somewhat facetiously" called crackle and pop, inspired by the Rice Krispies mascots Snap, Crackle, and Pop. The fourth derivative is also called jounce. Fourth derivative (snap/jounce) Snap, or jounce, is the fourth derivative of the position vector with respect to time, or the rate of change of the jerk with respect to time. Equivalently, it is the second derivative of acceleration or the third derivative of velocity, and is defined by any of the following equivalent expressions:
s = dj/dt = d²a/dt² = d³v/dt³ = d⁴r/dt⁴.
In civil engineering, the design of railway tracks and roads involves the minimization of snap, particularly around bends with different radii of curvature. When snap is constant, the jerk changes linearly, allowing for a smooth increase in radial acceleration, and when, as is preferred, the snap is zero, the change in radial acceleration is linear. The minimization or elimination of snap is commonly done using a mathematical clothoid function. Minimizing snap improves the performance of machine tools and roller coasters. The following equations are used for constant snap:
j = j0 + s·t
a = a0 + j0·t + (1/2)·s·t²
v = v0 + a0·t + (1/2)·j0·t² + (1/6)·s·t³
r = r0 + v0·t + (1/2)·a0·t² + (1/6)·j0·t³ + (1/24)·s·t⁴
where s is constant snap, j0 is initial jerk, j is final jerk, a0 is initial acceleration, a is final acceleration, v0 is initial velocity, v is final velocity, r0 is initial position, r is final position, and t is the time between initial and final states. The notation s (used by Visser) is not to be confused with the displacement vector commonly denoted similarly. The dimensions of snap are distance per fourth power of time (LT−4). The corresponding SI unit is metre per second to the fourth power, m/s4, m⋅s−4. The fifth derivative of the position vector with respect to time is sometimes referred to as crackle. It is the rate of change of snap with respect to time. Crackle is defined by any of the following equivalent expressions:
c = ds/dt = d²j/dt² = d³a/dt³ = d⁴v/dt⁴ = d⁵r/dt⁵.
The following equations are used for constant crackle:
s = s0 + c·t
j = j0 + s0·t + (1/2)·c·t²
a = a0 + j0·t + (1/2)·s0·t² + (1/6)·c·t³
v = v0 + a0·t + (1/2)·j0·t² + (1/6)·s0·t³ + (1/24)·c·t⁴
r = r0 + v0·t + (1/2)·a0·t² + (1/6)·j0·t³ + (1/24)·s0·t⁴ + (1/120)·c·t⁵
where c is constant crackle, s0 is initial snap, s is final snap, j0 is initial jerk, j is final jerk, a0 is initial acceleration, a is final acceleration, v0 is initial velocity, v is final velocity, r0 is initial position, r is final position, and t is the time between initial and final states. The dimensions of crackle are LT−5. The corresponding SI unit is m/s5. The sixth derivative of the position vector with respect to time is sometimes referred to as pop. It is the rate of change of crackle with respect to time. Pop is defined by any of the following equivalent expressions:
p = dc/dt = d²s/dt² = d³j/dt³ = d⁴a/dt⁴ = d⁵v/dt⁵ = d⁶r/dt⁶.
The following equations are used for constant pop:
c = c0 + p·t
s = s0 + c0·t + (1/2)·p·t²
j = j0 + s0·t + (1/2)·c0·t² + (1/6)·p·t³
a = a0 + j0·t + (1/2)·s0·t² + (1/6)·c0·t³ + (1/24)·p·t⁴
v = v0 + a0·t + (1/2)·j0·t² + (1/6)·s0·t³ + (1/24)·c0·t⁴ + (1/120)·p·t⁵
r = r0 + v0·t + (1/2)·a0·t² + (1/6)·j0·t³ + (1/24)·s0·t⁴ + (1/120)·c0·t⁵ + (1/720)·p·t⁶
where p is constant pop, c0 is initial crackle, c is final crackle, s0 is initial snap, s is final snap, j0 is initial jerk, j is final jerk, a0 is initial acceleration, a is final acceleration, v0 is initial velocity, v is final velocity, r0 is initial position, r is final position, and t is the time between initial and final states. The dimensions of pop are LT−6. The corresponding SI unit is m/s6. References External links Acceleration Kinematic properties Time in physics Vector physical quantities
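As a quick sanity check of the constant-snap relations above, the short sketch below evaluates them for arbitrary illustrative values and confirms numerically that the velocity expression is the time derivative of the position expression. All numbers are made up for illustration.

```python
# Illustrative check of the constant-snap kinematic relations (arbitrary values).
def state_under_constant_snap(t, r0=0.0, v0=2.0, a0=0.5, j0=0.1, s=0.05):
    j = j0 + s * t
    a = a0 + j0 * t + s * t**2 / 2
    v = v0 + a0 * t + j0 * t**2 / 2 + s * t**3 / 6
    r = r0 + v0 * t + a0 * t**2 / 2 + j0 * t**3 / 6 + s * t**4 / 24
    return r, v, a, j

t, dt = 3.0, 1e-6
r_now, v_now, _, _ = state_under_constant_snap(t)
r_next, *_ = state_under_constant_snap(t + dt)
print(abs((r_next - r_now) / dt - v_now) < 1e-4)   # True: v = dr/dt
```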
Fourth, fifth, and sixth derivatives of position
Physics,Mathematics
703
32,880,384
https://en.wikipedia.org/wiki/InfiniteGraph
InfiniteGraph is a distributed graph database implemented in Java and C++ and is from a class of NOSQL ("Not Only SQL") database technologies that focus on graph data structures. Developers use InfiniteGraph to find useful and often hidden relationships in highly connected, complex big data sets. InfiniteGraph is cross-platform, scalable, cloud-enabled, and is designed to handle very high throughput. InfiniteGraph can easily and efficiently perform queries that are otherwise difficult, such as finding all paths or the shortest path between two items. InfiniteGraph is suited for applications and services that solve graph problems in operational environments. InfiniteGraph's "DO" query language enables both value-based queries and complex graph queries. InfiniteGraph goes beyond graph databases to also support complex object queries. Adoption is seen in federal government, telecommunications, healthcare, cyber security, manufacturing, finance, and networking applications. History InfiniteGraph is produced and supported by Objectivity, Inc., a company that develops database management technologies for large-scale, distributed data management and relationship analytics. The new InfiniteGraph was released in May 2021. Features API/Protocols: Java, core C++, REST API Graph Model: Labeled directed multigraph. An edge is a first-class entity with an identity independent of the vertices it connects. Concurrency: Update locking on subgraphs, concurrent non-blocking ingest. Consistency: Flexible (from ACID to relaxed). Distribution: Lock server and 64-bit object IDs support dynamic addressing space (with each federation capable of managing up to 65,535 individual databases and 10^24 bytes (one quadrillion gigabytes, or a yottabyte) of physical addressing space). Processing: Multi-threaded. Query Methods: "DO" Query Language, Traverser and graph navigation API, predicate language qualification, path pattern matching, parallel query support. Visualization: InfiniteGraph "Studio." Schema: Supports schema-full plus provides a mechanism for attaching side data. Transactions: Fully ACID. Source: Proprietary, with open-source extensions, integrated components, and third-party connectors. Platforms: Windows and Linux References External links Data analysis software Proprietary database management systems Graph databases Distributed data stores NoSQL products Structured storage
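The kind of path query mentioned above can be illustrated with a generic breadth-first search over a small adjacency-list graph. This is a language-neutral sketch of the idea only; it does not use InfiniteGraph's Java/C++ API or its DO query language, and the example vertices and edges are made up.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Fewest-edges path in a directed graph given as {vertex: [neighbors]}."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable

calls = {"alice": ["bob", "carol"], "bob": ["dave"], "carol": ["dave"], "dave": []}
print(shortest_path(calls, "alice", "dave"))  # ['alice', 'bob', 'dave']
```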
InfiniteGraph
Mathematics
465
106,270
https://en.wikipedia.org/wiki/Plasmolysis
Plasmolysis is the process in which cells lose water in a hypertonic solution. The reverse process, deplasmolysis or cytolysis, can occur if the cell is in a hypotonic solution resulting in a lower external osmotic pressure and a net flow of water into the cell. Through observation of plasmolysis and deplasmolysis, it is possible to determine the tonicity of the cell's environment as well as the rate at which solute molecules cross the cell membrane. Etymology The term plasmolysis is derived from the Greek words 'plasma', meaning 'matrix', and 'lysis', meaning 'loosening'. Turgidity A plant cell in hypotonic solution will absorb water by endosmosis, so that the increased volume of water in the cell will increase pressure, making the protoplasm push against the cell wall, a condition known as turgor. Turgor makes plant cells push against each other in the same way and is the main method of support in non-woody plant tissue. Plant cell walls resist further water entry after a certain point, known as full turgor, which stops plant cells from bursting as animal cells do in the same conditions. This is also the reason that plants stand upright. Without the stiffness of the plant cells the plant would fall under its own weight. Turgor pressure allows plants to stay firm and erect, and plants without turgor pressure (known as flaccid) wilt. A cell will begin to decline in turgor pressure only when there are no air spaces surrounding it and the surrounding solution eventually reaches a greater osmotic pressure than that of the cell. Vacuoles play a role in turgor pressure when water leaves the cell due to hyperosmotic solutions containing solutes such as mannitol, sorbitol, and sucrose. Plasmolysis If a plant cell is placed in a hypertonic solution, the plant cell loses water and hence turgor pressure by plasmolysis: pressure decreases to the point where the protoplasm of the cell peels away from the cell wall, leaving gaps between the cell wall and the membrane and making the plant cell shrink and crumple. A continued decrease in pressure eventually leads to cytorrhysis – the complete collapse of the cell wall. Plants with cells in this condition wilt. After plasmolysis the gap between the cell wall and the cell membrane in a plant cell is filled with hypertonic solution. This is because, as the solution surrounding the cell is hypertonic, exosmosis takes place and the space between the cell wall and cytoplasm is filled with solutes, as most of the water drains away and hence the solution inside the cell becomes more concentrated. There are some mechanisms in plants to prevent excess water loss in the same way as excess water gain. Plasmolysis can be reversed if the cell is placed in a hypotonic solution. Stomata close to help keep water in the plant so it does not dry out. Wax also keeps water in the plant. The equivalent process in animal cells is called crenation. The liquid content of the cell leaks out due to exosmosis. The cell collapses, and the cell membrane pulls away from the cell wall (in plants). Most animal cells consist of only a phospholipid bilayer (plasma membrane) and not a cell wall, therefore shrinking up under such conditions. Plasmolysis only occurs in extreme conditions and rarely occurs in nature. It is induced in the laboratory by immersing cells in strong saline or sugar (sucrose) solutions to cause exosmosis, often using Elodea plants or onion epidermal cells, which have colored cell sap so that the process is clearly visible. Methylene blue can be used to stain plant cells. 
Plasmolysis is mainly known as the shrinking of the cell membrane away from the cell wall in a hypertonic solution and under great pressure. Plasmolysis can be of two types, either concave plasmolysis or convex plasmolysis. Convex plasmolysis is always irreversible while concave plasmolysis is usually reversible. During concave plasmolysis, the plasma membrane and the enclosed protoplast partially shrink away from the cell wall due to half-spherical, inward-curving pockets forming between the plasma membrane and the cell wall. During convex plasmolysis, the plasma membrane and the enclosed protoplast shrink completely away from the cell wall, with the plasma membrane's ends in a symmetrical, spherically curved pattern. References External links Pictures of plasmolysis in Elodea and onion skin. Wilting and plasmolysis. Plant physiology Membrane biology
Plasmolysis
Chemistry,Biology
991
45,196,112
https://en.wikipedia.org/wiki/Charge-shift%20bond
In theoretical chemistry, the charge-shift bond is a proposed new class of chemical bonds that sits alongside the three familiar families of covalent, ionic, and metallic bonds where electrons are shared or transferred respectively. The charge shift bond derives its stability from the resonance of ionic forms rather than from the covalent sharing of electrons, which is often depicted as electron density located between the bonded atoms. A feature of the charge shift bond is that the predicted electron density between the bonded atoms is low. It has long been known from experiment that the accumulation of electric charge between the bonded atoms is not necessarily a feature of covalent bonds. An example where charge shift bonding has been used to explain the low electron density found experimentally is in the central bond between the inverted tetrahedral carbon atoms in [1.1.1]propellanes. Theoretical calculations on a range of molecules have indicated that a charge shift bond is present, a striking example being fluorine, F2, which is normally described as having a typical covalent bond. The charge shift bond (CSB) has also been shown to exist at the cation-anion interface of protic ionic liquids (PILs). The authors of that work have also shown how CSB character in PILs correlates with their physicochemical properties. Valence bond description The valence bond view of chemical bonding that owes much to the work of Linus Pauling is familiar to many, if not all, chemists. The basis of Pauling's description of the chemical bond is that an electron pair bond involves the mixing (resonance) of one covalent and two ionic structures. In bonds between two atoms of the same element, homonuclear bonds, Pauling assumed that the ionic structures make no appreciable contribution to the overall bonding. This assumption followed on from published calculations for the hydrogen molecule in 1933 by Weinbaum and by James and Coolidge that showed that the contribution of ionic forms amounted to only a small percentage of the H−H bond energy. For heteronuclear bonds, A−X, Pauling estimated the covalent contribution to the bond dissociation energy as being the mean of the bond dissociation energies of homonuclear A−A and X−X bonds. The difference between the mean and the observed bond energy was assumed to be due to the ionic contribution. The calculation for HCl is shown below. The ionic contribution to the overall bond dissociation energy was attributed to the difference in electronegativity between A and X, and these differences were the starting point for Pauling's calculation of the individual electronegativities of the elements. The proponents of charge shift bonding re-examined the validity of Pauling's assumption that ionic forms make no appreciable contribution to the overall bond dissociation energies of homonuclear bonds. What they found using modern valence bond methods was that in some cases the contribution of ionic forms was significant, the most striking example being F2, fluorine, where their calculations indicate that the bond energy of the F−F bond is due wholly to the ionic contribution. Calculated bond energies The contribution of ionic resonance structures has been termed the charge-shift resonance energy, REcs, and values have been calculated for a number of single bonds, some of which are shown below: The results show that for homonuclear bonds the charge shift resonance energy can be significant, and that for F2 and Cl2 it is the attractive component whereas the covalent contribution is repulsive. 
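The Pauling-style estimate described earlier (covalent contribution taken as the mean of the two homonuclear bond energies) can be sketched numerically. The bond dissociation energies below are approximate literature values in kcal/mol, used only to illustrate the arithmetic; they are not the table of values from the original article.

```python
# Approximate bond dissociation energies (kcal/mol); illustrative values only.
D_HH, D_ClCl, D_HCl_observed = 104.2, 57.8, 103.2

covalent_estimate = (D_HH + D_ClCl) / 2                # Pauling's additive mean
ionic_contribution = D_HCl_observed - covalent_estimate # attributed to electronegativity difference

print(f"covalent estimate : {covalent_estimate:.1f} kcal/mol")   # ~81.0
print(f"ionic contribution: {ionic_contribution:.1f} kcal/mol")  # ~22.2
```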
The reduced electron density along the bond axis is apparent using ELF, the electron localization function, a tool for determining electron density. The bridge bond in a propellane The bridge bond (inverted bond between the bridgehead atoms which is common to the three cycles) in a substituted [1.1.1]propellane has been examined experimentally. A theoretical study on [1.1.1]propellane has shown that it has a significant REcs stabilisation energy. Factors causing charge shift bonding Analysis of a number of compounds where charge shift resonance energy is significant shows that in many cases elements with high electronegativities are involved and these have smaller orbitals and are lone pair rich. Factors that reduce the covalent contribution to the bond energy include poor overlap of bonding orbitals, and the lone pair bond weakening effect where repulsion due to the Pauli exclusion principle is the main factor. There is no correlation between the charge-shift resonance energy REcs and the difference between the electronegativities of the bonded atoms, as might be expected from the Pauling bonding model; however, there is a global correlation between REcs and the sum of their electronegativities, which can be accounted for in part by the lone pair bond weakening effect. The charge-shift nature of the inverted bond in [1.1.1]propellanes has been ascribed to Pauli repulsion from the adjacent "wing" bonds destabilising the covalent contribution. Experimental evidence for charge-shift bonds The interpretation of experimentally determined electron density in molecules often uses AIM theory. In this approach the electron density between the atomic nuclei along the bond path is calculated, and the bond critical point, where the density is at a minimum, is determined. The factors that determine the type of chemical bond are the Laplacian and the electron density at the bond critical point. At the bond critical point a typical covalent bond has significant density and a large negative Laplacian. In contrast a "closed shell" interaction, as in an ionic bond, has a small electron density and a positive Laplacian. A charge shift bond is expected to have a positive or small Laplacian. Only a limited number of experimental determinations have been made; compounds with bonds that have a positive Laplacian include the N–N bond in solid N2O4 and the (Mg−Mg)2+ diatomic structure. References Chemical bonding
Charge-shift bond
Physics,Chemistry,Materials_science
1,216
14,330,991
https://en.wikipedia.org/wiki/Outline%20of%20nuclear%20technology
The following outline is provided as an overview of and topical guide to nuclear technology: Nuclear technology – involves the reactions of atomic nuclei. Among the notable nuclear technologies are nuclear power, nuclear medicine, and nuclear weapons. It has found applications from smoke detectors to nuclear reactors, and from gun sights to nuclear weapons. Essence of nuclear technology Atomic nucleus Branches of nuclear technology Nuclear engineering History of nuclear technology History of nuclear power History of nuclear weapons Nuclear material Nuclear fuel Fertile material Thorium Uranium Enriched uranium Depleted uranium Plutonium Deuterium Tritium Nuclear power Nuclear power – List of nuclear power stations Nuclear reactor technology Fusion power Inertial fusion power plant Reactor types List of nuclear reactors Advanced gas-cooled reactor Boiling water reactor Fast breeder reactor Fast neutron reactor Gas-cooled fast reactor Generation IV reactor Integral Fast Reactor Lead-cooled fast reactor Liquid-metal-cooled reactor Magnox reactor Molten-salt reactor Pebble-bed reactor Pressurized water reactor Sodium-cooled fast reactor Supercritical water reactor Very high temperature reactor Radioisotope thermoelectric generator Radioactive waste Future energy development Nuclear propulsion Nuclear thermal rocket Polywell Nuclear decommissioning Nuclear power phase-out Civilian nuclear accidents List of civilian nuclear accidents List of civilian radiation accidents Nuclear medicine Nuclear medicine – BNCT Brachytherapy Gamma (Anger) Camera PET Proton therapy Radiation therapy SPECT Tomotherapy Nuclear weapons Nuclear weapons – Nuclear explosion Effects of nuclear explosions Types of nuclear weapons Strategic nuclear weapon ICBM SLBM Tactical nuclear weapons List of nuclear weapons Nuclear weapons systems Nuclear weapons delivery (missiles, etc.) 
Nuclear weapon design Nuclear weapons proliferation Nuclear weapons testing List of states with nuclear weapons List of nuclear tests Nuclear strategy Assured destruction Counterforce, Countervalue Decapitation strike Deterrence Doctrine for Joint Nuclear Operations Fail-deadly Force de frappe First strike, Second strike Game theory & wargaming Massive retaliation Minimal deterrence Mutual assured destruction (MAD) No first use National Security Strategy of the United States Nuclear attribution Nuclear blackmail Nuclear proliferation Nuclear utilization target selection (NUTS) Single Integrated Operational Plan (SIOP) Strategic bombing Nuclear weapons incidents List of sunken nuclear submarines United States military nuclear incident terminology 1950 British Columbia B-36 crash 1950 Rivière-du-Loup B-50 nuclear weapon loss incident 1958 Mars Bluff B-47 nuclear weapon loss incident 1961 Goldsboro B-52 crash 1961 Yuba City B-52 crash 1964 Savage Mountain B-52 crash 1965 Philippine Sea A-4 incident 1966 Palomares B-52 crash 1968 Thule Air Base B-52 crash 2007 United States Air Force nuclear weapons incident Nuclear technology scholars Henri Becquerel Niels Bohr James Chadwick John Cockcroft Pierre Curie Marie Curie Albert Einstein Michael Faraday Enrico Fermi Otto Hahn Lise Meitner Robert Oppenheimer Wolfgang Pauli Franco Rasetti Ernest Rutherford Ernest Walton See also Outline of energy Outline of nuclear power List of civilian nuclear ships List of military nuclear accidents List of nuclear medicine radiopharmaceuticals List of nuclear waste treatment technologies List of particles Anti-nuclear movement External links Nuclear Energy Institute – Beneficial Uses of Radiation Nuclear Technology Nuclear technology Nuclear technology outline Outline of nuclear technology
Outline of nuclear technology
Physics
635
37,525,024
https://en.wikipedia.org/wiki/Critical%20exponent%20of%20a%20word
In mathematics and computer science, the critical exponent of a finite or infinite sequence of symbols over a finite alphabet describes the largest number of times a contiguous subsequence can be repeated. For example, the critical exponent of "Mississippi" is 7/3, as it contains the string "ississi", which is of length 7 and period 3. If w is an infinite word over the alphabet A and x is a finite word over A, then x is said to occur in w with exponent α, for positive real α, if there is a factor y of w with y = x^a x0 where x0 is a prefix of x, a is the integer part of α, and the length |y| = α |x|: we say that y is an α-power. The word w is α-power-free if it contains no factors which are β-powers for any β ≥ α. The critical exponent for w is the supremum of the α for which w has α-powers, or equivalently the infimum of the α for which w is α-power-free. Definition If w is a word (possibly infinite), then the critical exponent of w is defined to be E(w) = sup{r ∈ Q>1 : w contains an r-power}, where Q>1 is the set of rational numbers greater than 1. Examples The critical exponent of the Fibonacci word is (5 + √5)/2 ≈ 3.618. The critical exponent of the Thue–Morse sequence is 2. The word contains arbitrarily long squares, but in any factor xxb the letter b is not a prefix of x. Properties The critical exponent can take any real value greater than 1. The critical exponent of a morphic word over a finite alphabet is either infinite or an algebraic number of degree at most the size of the alphabet. Repetition threshold The repetition threshold of an alphabet A of n letters is the minimum critical exponent of infinite words over A: clearly this value RT(n) depends only on n. For n=2, any binary word of length four has a factor of exponent 2, and since the critical exponent of the Thue–Morse sequence is 2, the repetition threshold for binary alphabets is RT(2) = 2. It is known that RT(3) = 7/4, RT(4) = 7/5 and that for n≥5 we have RT(n) ≥ n/(n-1). It is conjectured that the latter is the true value, and this has been established for 5 ≤ n ≤ 14 and for n ≥ 33. See also Critical exponent of a physical system Notes References Formal languages Combinatorics on words
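For finite words the definition above can be checked by brute force: the exponent of a factor is its length divided by its smallest period, and the critical exponent is the largest such ratio over all factors. The sketch below is an unoptimised illustration of that computation.

```python
from fractions import Fraction

def critical_exponent(word):
    """Largest |y|/p over all factors y of word, where p is a period of y."""
    best = Fraction(1)
    n = len(word)
    for i in range(n):
        for j in range(i + 1, n + 1):
            y = word[i:j]
            for p in range(1, len(y)):          # candidate periods, smallest first
                if all(y[k] == y[k + p] for k in range(len(y) - p)):
                    best = max(best, Fraction(len(y), p))
                    break                        # smallest period gives the largest exponent
    return best

print(critical_exponent("Mississippi"))  # 7/3, from the factor "ississi"
```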
Critical exponent of a word
Mathematics
539
45,995
https://en.wikipedia.org/wiki/Building
A building or edifice is an enclosed structure with a roof and walls, usually standing permanently in one place, such as a house or factory. Buildings come in a variety of sizes, shapes, and functions, and have been adapted throughout history for numerous factors, from building materials available, to weather conditions, land prices, ground conditions, specific uses, prestige, and aesthetic reasons. To better understand the concept, see Nonbuilding structure for contrast. Buildings serve several societal needs – occupancy, primarily as shelter from weather, security, living space, privacy, to store belongings, and to comfortably live and work. A building as a shelter represents a physical separation of the human habitat (a place of comfort and safety) from the outside (a place that may be harsh and harmful at times). Ever since the first cave paintings, buildings have been objects or canvasses of much artistic expression. In recent years, interest in sustainable planning and building practices has become an intentional part of the design process of many new buildings and other structures, usually green buildings. Definition A building is 'a structure that has a roof and walls and stands more or less permanently in one place'; "there was a three-storey building on the corner"; "it was an imposing edifice". In the broadest interpretation a fence or wall is a building. However, the word structure is used more broadly than building, to include natural and human-made formations and ones that do not have walls; structure is more often used for a fence. Sturgis' Dictionary noted that building "differs from architecture in excluding all idea of artistic treatment; and it differs from construction in the idea of excluding scientific or highly skilful treatment." Structural height in technical usage is the height to the highest architectural detail on the building from street level. Spires and masts may or may not be included in this height, depending on how they are classified. Spires and masts used as antennas are not generally included. The distinction between a low-rise and high-rise building is a matter of debate, but generally three stories or less is considered low-rise. History There is clear evidence of homebuilding from around 18,000 BC. Buildings became common during the Neolithic period. Types Residential Single-family residential buildings are most often called houses or homes. Multi-family residential buildings containing more than one dwelling unit are called duplexes or apartment buildings. Condominiums are apartments that occupants own rather than rent. Houses may be built in pairs (semi-detached) or in terraces, where all but two of the houses have others on either side. Apartments may be built round courtyards or as rectangular blocks surrounded by plots of ground. Houses built as single dwellings may later be divided into apartments or bedsitters, or converted to other uses (e.g., offices or shops). Hotels, especially of the extended-stay variety (apartels), can be classed as residential. Building types may range from huts to multimillion-dollar high-rise apartment blocks able to house thousands of people. Increasing settlement density in buildings (and smaller distances between buildings) is usually a response to high ground prices resulting from the desire of many people to live close to their places of employment or similar attractors. 
Terms for residential buildings reflect such characteristics as function (e.g., holiday cottage (vacation home) or timeshare if occupied seasonally); size (cottage or great house); value (shack or mansion); manner of construction (log home or mobile home); architectural style (castle or Victorian); and proximity to geographical features (earth shelter, stilt house, houseboat, or floating home). For residents in need of special care, or those society considers dangerous enough to deprive of liberty, there are institutions (nursing homes, orphanages, psychiatric hospitals, and prisons) and group housing (barracks and dormitories). Historically, many people lived in communal buildings called longhouses, smaller dwellings called pit-houses, and houses combined with barns, sometimes called housebarns. Common building materials include brick, concrete, stone, and combinations thereof. Buildings are defined to be substantial, permanent structures. Such forms as yurts and motorhomes are therefore considered dwellings but not buildings. Commercial A commercial building is one in which at least one business is based and people do not live. Examples include stores, restaurants, and hotels. Industrial Industrial buildings are those in which heavy industry is done, such as manufacturing. These edifices include warehouses and factories. Agricultural Agricultural buildings are the outbuildings, such as barns, located on farms. Mixed use Some buildings incorporate several different uses, most commonly commercial and residential. Complex Sometimes a group of inter-related (and possibly inter-connected) buildings is referred to as a complex – for example a housing complex, educational complex, hospital complex, etc. Creation The practice of designing, constructing, and operating buildings is most usually a collective effort of different groups of professionals and trades. Depending on the size, complexity, and purpose of a particular building project, the project team may include: A real estate developer who secures funding for the project; One or more financial institutions or other investors that provide the funding Local planning and code authorities A surveyor who performs an ALTA/ACSM and construction surveys throughout the project; Construction managers who coordinate the effort of different groups of project participants; Licensed architects and engineers who provide building design and prepare construction documents; The principal design Engineering disciplines, which would normally include the following professionals: Civil, Structural, Mechanical building services or HVAC (Heating, Ventilation and Air Conditioning), Electrical Building Services, Plumbing and drainage. Also other possible design Engineer specialists may be involved, such as Fire (prevention), Acoustic, façade engineers, building physics, Telecoms, AV (Audio Visual), BMS (Building Management Systems) Automatic controls etc. These design Engineers also prepare construction documents which are issued to specialist contractors to obtain a price for the works and to follow for the installations. Landscape architects; Interior designers; Other consultants; Contractors who provide construction services and install building systems such as climate control, electrical, plumbing, decoration, fire protection, security and telecommunications; Marketing or leasing agents; Facility managers who are responsible for operating the building. 
Regardless of their size or intended use, all buildings in the US must comply with zoning ordinances, building codes and other regulations such as fire codes, life safety codes and related standards. Vehicles—such as trailers, caravans, ships and passenger aircraft—are treated as "buildings" for life safety purposes. Ownership and funding Mortgage loan Real estate developer Environmental impacts Building services Physical plant Any building requires a certain general amount of internal infrastructure to function, which includes such elements like heating / cooling, power and telecommunications, water and wastewater etc. Especially in commercial buildings (such as offices or factories), these can be extremely intricate systems taking up large amounts of space (sometimes located in separate areas or double floors / false ceilings) and constitute a big part of the regular maintenance required. Conveying systems Systems for transport of people within buildings: Elevator Escalator Moving sidewalk (horizontal and inclined) Systems for transport of people between interconnected buildings: Skyway Underground city Building damage Buildings may be damaged during construction or during maintenance. They may be damaged by accidents involving storms, explosions, subsidence caused by mining, water withdrawal or poor foundations and landslides. Buildings may suffer fire damage and flooding. They may become dilapidated through lack of proper maintenance, or alteration work improperly carried out. See also Autonomous building Commercial modular construction Earthquake engineering Float glass Hurricane-proof building List of largest buildings List of tallest buildings Lists of buildings and structures Natural building Natural disaster and earthquake Skyscraper Steel building Tent References External links
Building
Engineering
1,567
23,802,163
https://en.wikipedia.org/wiki/Electrical%20impedance%20myography
Electrical impedance myography, or EIM, is a non-invasive technique for the assessment of muscle health that is based on the measurement of the electrical impedance characteristics of individual muscles or groups of muscles. The technique has been used for the purpose of evaluating neuromuscular diseases both for their diagnosis and for their ongoing assessment of progression or with therapeutic intervention. Muscle composition and microscopic structure change with disease, and EIM measures alterations in impedance that occur as a result of disease pathology. EIM has been specifically recognized for its potential as an ALS biomarker (also known as a biological correlate or surrogate endpoint) by Prize4Life, a 501(c)(3) nonprofit organization dedicated to accelerating the discovery of treatments and cures for ALS. The $1M ALS Biomarker Challenge focused on identifying a biomarker precise and reliable enough to cut Phase II drug trials in half. The prize was awarded to Dr. Seward Rutkove, chief, Division of Neuromuscular Disease, in the Department of Neurology at Beth Israel Deaconess Medical Center and Professor of Neurology at Harvard Medical School, for his work in developing the technique of EIM and its specific application to ALS. It is hoped that EIM as a biomarker will result in the more rapid and efficient identification of new treatments for ALS. EIM has shown sensitivity to disease status in a variety of neuromuscular conditions, including radiculopathy, inflammatory myopathy, Duchenne muscular dystrophy, and spinal muscular atrophy. In addition to the assessment of neuromuscular disease, EIM also has the prospect of serving as a convenient and sensitive measure of muscle condition. Work in aging populations and individuals with orthopedic injuries indicates that EIM is very sensitive to muscle atrophy and disuse and is conversely likely sensitive to muscle conditioning and hypertrophy. Work on mouse and rats models, including a study of mice on board the final Space Shuttle mission (STS-135), has helped to confirm this potential value. Underlying concepts Interest in electrical impedance dates back to the turn of the 20th century, when physiologist Louis Lapicque postulated an elementary circuit to model membranes of nerve cells. Scientists experimented with variations on this model until 1940, when Kenneth Cole developed a circuit model that accounted for the impedance properties of both cell membranes and intracellular fluid. Like all impedance-based methods, EIM hinges on a simplified model of muscle tissue as an RC circuit. This model attributes the resistive component of the circuit to the resistance of extracellular and intracellular fluids, and the reactive component to the capacitive effects of cell membranes. The integrity of individual cell membranes has a significant effect on the tissue's impedance; hence, a muscle's impedance can be used to measure the tissue's degradation in disease progression. In neuromuscular disease, a variety of factors can influence the compositional and micro structural aspects of muscle, including most notably muscle fiber atrophy and disorganization, the deposition of fat and connective tissues, as occurs in muscular dystrophy, and the presence of inflammation, among many other pathologies. EIM captures these changes in the tissue as a whole by measuring its impedance characteristics across multiple frequencies and at multiple angles relative to the major muscle fiber direction. 
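To make the lumped-circuit picture above concrete, the following short Python sketch sweeps a simple two-branch tissue model (an extracellular resistance in parallel with an intracellular resistance in series with a membrane capacitance) across frequency; the element values are invented purely for illustration and are not taken from any published EIM calibration, but the qualitative trend (reported reactance and phase rising to a peak in the tens-of-kilohertz range and then falling) mirrors the frequency dependence described in this article.
import numpy as np
R_e = 400.0   # extracellular fluid resistance in ohms (assumed example value)
R_i = 250.0   # intracellular fluid resistance in ohms (assumed example value)
C_m = 4e-9    # lumped membrane capacitance in farads (assumed example value)
freqs = np.logspace(3, 6, 7)                  # 1 kHz to 1 MHz
omega = 2 * np.pi * freqs
branch = R_i + 1.0 / (1j * omega * C_m)       # intracellular branch impedance
Z = R_e * branch / (R_e + branch)             # parallel combination with the extracellular path
for f, z in zip(freqs, Z):
    r, x = z.real, -z.imag                    # report the capacitive reactance as a positive magnitude
    phase = np.degrees(np.arctan2(x, r))
    print(f"{f / 1e3:8.1f} kHz  R = {r:6.1f} ohm  X = {x:6.1f} ohm  phase = {phase:5.2f} deg")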
In EIM, impedance is separated into resistance and reactance, its real and imaginary components. From this, one can compute the muscle's phase, which represents the time-shift that a sinusoid undergoes when passing through the muscle. For a given resistance (R) and reactance (X), phase (θ) can be calculated. In current work, all three parameters appear to play important roles depending exactly on which diseases are being studied and how the technology is being applied. EIM can also be impacted by the thickness of the skin and subcutaneous fat overlying a region of muscle. However, electrode designs can be created that can circumvent the effect to a large extent and thus still provide primary muscle data. Moreover, the use of multifrequency measurements can also assist with this process of disentangling the effects of fat from those of muscle. From this information, it also becomes possible to infer/calculate the approximate amount of fat overlying a muscle in a given region. Multifrequency measurements Both resistance and reactance depend on the input frequency of the signal. Because changes in frequency shift the relative contributions of resistance (fluid) and reactance (membrane) to impedance, multifrequency EIM may allow a more comprehensive assessment of disease. Resistance, reactance, or phase can be plotted as a function of frequency to demonstrate the differences in frequency dependence between healthy and diseased groups. Diseased muscle exhibits an increase in reactance and phase with increasing frequency, while reactance and phase values of healthy muscle increase with frequency until 50–100 kHz, at which point they begin to decrease as a function of frequency. Frequencies ranging from 500 Hz to 2 MHz are used to determine the frequency spectrum for a given muscle. Muscle anisotropy Electrical impedance of muscle tissue is anisotropic; current flowing parallel to muscle fibers flows differently from current flowing orthogonally across the fibers. Current flowing orthogonally across a muscle encounters more cell membranes, thus increasing resistance, reactance, and phase values. By taking measurements at different angles with respect to muscle fibers, EIM can be used to determine the anisotropy of a given muscle. Anisotropy tends to be shown either as a graph plotting resistance, reactance, or phase as a function of angle with respect to the direction of muscle fibers or as a ratio of transverse (perpendicular to fibers) measurement to longitudinal measurement (parallel to muscle fibers) of a given impedance factor. Muscle anisotropy also changes with neuromuscular disease. EIM has shown a difference between anisotropy profiles of neuromuscular disease patients and healthy controls. In addition, EIM can use anisotropy to discriminate between myopathic and neurogenic disease. Different forms of neuromuscular disease have unique anisotropies. Myopathic disease is characterized by decreased anisotropy. Neurogenic disease produces a less predictable anisotropy. The angle of lowest phase may be shifted from the parallel position, and the anisotropy as a whole is often greater than that of a healthy control. Measurement approaches In general, to apply the technique, a minimum of four surface electrodes are placed over the muscle of interest. A minute alternating current is applied across the outer two electrodes, and voltage signals are recorded by the inner electrodes. 
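As a minimal numerical sketch of the phase calculation mentioned above (the standard relationship θ = arctan(X/R); the resistance and reactance values here are invented for illustration and are not taken from any study):
import math
R = 31.5   # resistance in ohms (assumed example value)
X = 7.8    # reactance in ohms (assumed example value)
theta = math.degrees(math.atan2(X, R))   # phase angle in degrees
Z_mag = math.hypot(R, X)                 # impedance magnitude in ohms
print(f"phase = {theta:.2f} degrees, |Z| = {Z_mag:.2f} ohms")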
The frequency of the applied current and the relationship of the electrode array to the major muscle fiber direction is varied so that a full multifrequency and multidirectional assessment of the muscle can be achieved. EIM has been performed with a number of different impedance analysis devices. Commercially available systems used for bioimpedance analysis, can be calibrated to measure impedance of individual muscles. A suitable impedance analyzer can also be custom built using a lock-in amplifier to produce the signal and a low-capacitance probe, such as the Tektronix P6243, to record voltages from the surface electrodes. Such methods, however, are slow and clumsy to apply given the need for careful electrode positioning over a muscle of interest and the potential for misalignment of electrodes and inaccuracy. Accordingly, an initial hand-held system was constructed using multiple components with an electrode head that could be placed directly on the patient. The device featured an array of electrode plates, which could be selectively activated to perform impedance measurements in arbitrary orientations. The oscilloscopes were programmed to produce a compound sinusoid signal, which could be used to measure the impedance at multiple frequencies simultaneously via a Fast Fourier transform. Since that initial system was created, other handheld commercial systems are being developed, such as Skulpt, for use in both neuromuscular disease assessment and for fitness monitoring, including the calculation of a muscle quality (or MQ) value. This latter value aims to provide an approximate assessment of the relative force-generating capacity of muscle for a given cross-sectional area of tissue. Muscle quality, for example, is a measure used in the assessment of sarcopenia. Comparison with standard bioelectrical impedance analysis Standard bioelectrical impedance analysis (BIA), like EIM, also employs a weak, high frequency electric current to measure characteristics of the human body. In standard BIA, unlike EIM, electric current is passed between electrodes placed on the hands and feet, and the impedance characteristics of the entire current path are measured. Thus, the measured impedance characteristics are relatively nonspecific since they encompass much of the body including the entire length of the extremities, the chest, abdomen and pelvis; accordingly, only summary whole-body measures of lean body mass and % fat can be offered. Moreover, in BIA, current travels the path of least resistance, and thus any factors that alter the current path will cause variability in the data. For example, the expansion of large vessels (e.g., veins) with increasing hydration will offer a low-resistance path, and thus distorting the resulting data. In addition, changes in abdominal contents will similarly alter the data. Body position can also have substantial effects, with joint position contributing to variations in the data. EIM, in contrast, measures only the superficial aspects of individual muscles and is relatively unaffected by body or limb position or hydration status. The differences between EIM and standard BIA were exemplified in one study in amyotrophic lateral sclerosis (ALS) which showed that EIM was effectively able to track progression in 60 ALS patients whereas BIA was not. References Electrophysiology Impedance measurements
Electrical impedance myography
Physics
2,069
23,522,400
https://en.wikipedia.org/wiki/C27H46O
{{DISPLAYTITLE:C27H46O}} The molecular formula C27H46O (molar mass: 386.65 g/mol, exact mass: 386.354866) may refer to: Cholestenol Allocholesterol (Δ-4-Cholestenol) CAS# Cholesterol (Δ-5-Cholestenol) CAS# Epicholesterol (3α-Cholesterol) CAS# Lathosterol (Δ-7-Cholestenol) CAS# Zymostenol (Δ-8-Cholestenol) CAS# Coprostanone CAS# i-cholesterol CAS# Dihydrotachysterol 3 CAS#
C27H46O
Chemistry
156
297,520
https://en.wikipedia.org/wiki/Head%20%28Unix%29
head is a program on Unix and Unix-like operating systems used to display the beginning of a text file or piped data. Syntax The command syntax is: head [options] [file ...] By default, head will print the first 10 lines of its input to the standard output. Option flags Other commands Many early versions of Unix and Plan 9 did not have this command, and documentation and books used sed instead: sed 5q filename The example prints every line (the implicit action) and quits after the fifth. Equivalently, awk may be used to print the first five lines in a file: awk 'NR < 6' filename However, neither sed nor awk was available in early versions of BSD, which were based on Version 6 Unix and included head. Implementations A head command is also part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2. The command has also been ported to the IBM i operating system. See also tail (Unix) dd (Unix) List of Unix commands References External links head manual page from GNU coreutils. FreeBSD documentation for head Unix text processing utilities Unix SUS2008 utilities IBM i Qshell commands
Head (Unix)
Technology
247
54,323,077
https://en.wikipedia.org/wiki/LeDock
LeDock is molecular docking software designed for protein-ligand interactions that is compatible with Linux, macOS, and Windows. The software can run as a standalone programme or from Jupyter Notebook. It supports the Tripos Mol2 file format. Methodology LeDock utilizes a simulated annealing and genetic algorithm approach for facilitating the docking process of ligands with protein targets. The software employs a knowledge-based scoring scheme that is derived from extensive prospective virtual screening campaigns. It is categorized as a flexible docking method. Performance In a study involving 2,002 protein-ligand complexes, LeDock demonstrated a notable level of accuracy in predicting molecular poses. The Linux version contains command line tools to run automated virtual screening of large molecular libraries in the cloud. In a performance evaluation of ten docking programs, LeDock demonstrated strong sampling power when compared against other commercial and academic alternatives. According to a review from 2017, LeDock was noted for its effectiveness in sampling ligand conformational space, identifying near-native binding poses, and having a flexible docking protocol. See also Drug design Macromolecular docking Molecular mechanics Molecular modelling Protein structure Protein design List of software for molecular mechanics modeling List of protein-ligand docking software Molecular design software Lead Finder Virtual screening Scoring functions for docking References External links Official website Medicinal chemistry Drug discovery Molecular modelling software
LeDock
Chemistry,Biology
286
6,151,067
https://en.wikipedia.org/wiki/Herta%20M%C3%BCller
Herta Müller (; born 17 August 1953) is a Romanian-German novelist, poet, essayist and recipient of the 2009 Nobel Prize in Literature. She was born in Nițchidorf (; ), Timiș County in Romania; her native languages are German and Romanian. Since the early 1990s, she has been internationally established, and her works have been translated into more than twenty languages. Müller is noted for her works depicting the effects of violence, cruelty and terror, usually in the setting of the Socialist Republic of Romania under the repressive Nicolae Ceaușescu regime which she has experienced herself. Many of her works are told from the viewpoint of the German minority in Romania and are also a depiction of the modern history of the Germans in the Banat and Transylvania. Her much acclaimed 2009 novel The Hunger Angel (Atemschaukel) portrays the deportation of Romania's German minority to Soviet Gulags during the Soviet occupation of Romania for use as German forced labour. Müller has received more than twenty awards to date, including the Kleist Prize (1994), the Aristeion Prize (1995), the International Dublin Literary Award (1998) and the Franz Werfel Human Rights Award (2009). On 8 October 2009, the Swedish Academy announced that she had been awarded the Nobel Prize in Literature, describing her as a woman "who, with the concentration of poetry and the frankness of prose, depicts the landscape of the dispossessed". Early life Müller was born to Banat Swabian Catholic farmers in Nițchidorf (German: Nitzkydorf; Hungarian: Niczkyfalva), up to the 1980s a German-speaking village in the Romanian Banat in southwestern Romania. Her grandfather had been a wealthy farmer and merchant, but his property was confiscated by the Communist regime. Her father was a member of the Waffen-SS during World War II, and earned a living as a truck driver in Communist Romania. In 1945, her mother, born 1928 as Katarina Gion, then aged 17, was among 100,000 of the German minority deported to forced labour camps in the Soviet Union, from which she was released in 1950. Müller's native language is German; she learned Romanian only in grammar school. She graduated from Nikolaus Lenau High School before becoming a student of German studies and Romanian literature at West University of Timișoara. In 1976, Müller began working as a translator for an engineering factory, but was dismissed in 1979 for her refusal to cooperate with the Securitate, the Communist regime's secret police. After her dismissal, she initially earned a living by teaching in kindergarten and giving private German lessons. Career Müller's first book, Niederungen (Nadirs), was published in Romania in German in 1982, receiving a prize from the Central Committee of the Union of Communist Youth. The book was about a child's view of the German-cultural Banat. Some members of the Banat Swabian community criticized Müller for "fouling her own nest" by her unsympathetic portrayal of village life. Müller was a member of Aktionsgruppe Banat, a group of German-speaking writers in Romania who supported freedom of speech over the censorship they faced under Nicolae Ceaușescu's government, and her works, including The Land of Green Plums, deal with these issues. Radu Tinu, the Securitate officer in charge of her case, denies that she ever suffered any persecutions, a claim that is opposed by Müller's own version of her (ongoing) persecution in an article in the German weekly Die Zeit in July 2009. 
After being refused permission to emigrate to West Germany in 1985, Müller was finally allowed to leave along with her then-husband, novelist Richard Wagner, in 1987, and they settled in West Berlin, where both still live. In the following years, she accepted lectureships at universities in Germany and abroad. Müller was elected to membership in the Deutsche Akademie für Sprache und Dichtung in 1995, and other honorary positions followed. In 1997, she withdrew from the PEN centre of Germany in protest of its merger with the former German Democratic Republic branch. In July 2008, Müller sent a critical open letter to Horia-Roman Patapievici, president of the Romanian Cultural Institute in reaction to the moral and financial support given by the institute to two former informants of the Securitate participating at the Romanian-German Summer School. The critic Denis Scheck described visiting Müller at her home in Berlin and seeing that her desk contained a drawer full of single letters cut from a newspaper she had entirely destroyed in the process. Realising that she used the letters to write texts, he felt he had "entered the workshop of a true poet". The Passport, first published in Germany as Der Mensch ist ein großer Fasan auf der Welt in 1986, is, according to The Times Literary Supplement, couched in the strange code engendered by repression: indecipherable because there is nothing specific to decipher, it is candid, but somehow beside the point, redolent of things unsaid. From odd observations the villagers sometimes make ("Man is nothing but a pheasant in the world"), to chapters titled after unimportant props ("The Pot Hole", "The Needle"), everything points to a strategy of displaced meaning ... Every such incidence of misdirection is the whole book in miniature, for although Ceausescu is never mentioned, he is central to the story, and cannot be forgotten. The resulting sense that anything, indeed everything – whether spoken by the characters or described by the author – is potentially dense with tacit significance means this short novel expands in the mind to occupy an emotional space far beyond its size or the seeming simplicity of its story." 2009 success In 2009, Müller enjoyed the greatest international success of her career. Her novel Atemschaukel (published in English as The Hunger Angel) was nominated for the German Book Prize and won the Franz Werfel Human Rights Award. In this book, Müller describes the journey of a young man to a gulag in the Soviet Union, the fate of many Germans in Transylvania after World War II. It was inspired by the experience of the poet Oskar Pastior, whose memories she had made notes of, and also by what happened to her own mother. In October 2009, the Swedish Academy announced its decision to award that year's Nobel Prize in Literature to Müller "who, with the concentration of poetry and the frankness of prose, depicts the landscape of the dispossessed." The academy compared Müller's style and her use of German as a minority language with Franz Kafka and pointed out the influence of Kafka on Müller. The award coincided with the 20th anniversary of the fall of communism. Michael Krüger, head of Müller's publishing house, said: "By giving the award to Herta Müller, who grew up in a German-speaking minority in Romania, the committee has recognized an author who refuses to let the inhumane side of life under communism be forgotten". 
In 2012, Müller commented on the Nobel Prize for Mo Yan by saying that the Swedish Academy had apparently chosen an author who 'celebrates censorship'. On 6 July 2020 a no longer existing Twitter account published the fake news of Herta Müller's death, which was immediately disclaimed by her publisher. Influences Although Müller has revealed little about the specific people or books that have influenced her, she has acknowledged the importance of her university studies in German and Romanian literature, and particularly of the contrast between the two languages. "The two languages", the writer says, "look differently even at plants. In Romanian, 'snowdrops' are 'little tears', in German they are 'Schneeglöckchen', which is 'little snow bells', which means we're not only speaking about different words, but about different worlds." (However here she confuses snowdrops with lily-of-the-valley, the latter being called 'little tears' in Romanian.) She continues, "Romanians see a falling star and say that someone has died, with the Germans you make a wish when you see the falling star." Romanian folk music is another influence: "When I first heard Maria Tănase she sounded incredible to me, it was for the first time that I really felt what folklore meant. Romanian folk music is connected to existence in a very meaningful way." Müller's work was also shaped by the many experiences she shared with her ex-husband, the novelist and essayist Richard Wagner. Both grew up in Romania as members of the Banat Swabian ethnic group and enrolled in German and Romanian literary studies at Timișoara University. Upon graduating, both worked as German-language teachers, and were members of Aktionsgruppe Banat, a literary society that fought for freedom of speech. Müller's involvement with Aktionsgruppe Banat gave her the courage to write boldly, despite the threats and trouble generated by the Romanian secret police. Although her books are fictional, they are based on real people and experiences. Her 1996 novel, The Land of Green Plums, was written after the deaths of two friends, in which Müller suspected the involvement of the secret police, and one of its characters was based on a close friend from Aktionsgruppe Banat. Letter from Liu Xia Herta Müller wrote the foreword for the first publication of the poetry of Liu Xia, wife of the imprisoned Nobel Peace Prize recipient Liu Xiaobo, in 2015. Müller also translated and read a few of Liu Xia poems in 2014. On 4 December 2017, a photo of the letter to Herta Müller from Liu Xia in a form of poem was posted on Facebook by Chinese dissident Liao Yiwu, where Liu Xia said that she was going mad in her solitary life. On 7 October massacres At the 7 October Forum held in Stockholm on 25 and 26 May 2024, Müller commented on the "unimaginable massacre" committed by Hamas in its "limitless contempt for humanity" in the 7 October attacks and described it comparable to Nazi extermination pogroms. Müller also criticized two progressive groups: Berlin club-goers and Ivy League students. "Hatred of Jews has infiltrated Berlin's nightlife," she said. Furthermore, she stated: "After October 7, the Berlin club scene literally ducked away. Although 364 young people, ravers like them, were massacred at a techno festival, the club association did not comment on it until days later. And even that was just a dull exercise in compulsory action, because antisemitism and Hamas were not even mentioned." 
In the Frankfurter Allgemeine Zeitung, she cast doubt on the veracity of images coming out of Gaza. "Hamas controls the selection of images and orchestrates our emotions," she wrote. "Our feelings are their strongest weapon against Israel’. Works Prose Niederungen, stories, censored version published in Bucharest, 1982; uncensored version published in Germany, 1984. Translated as Nadirs by Sieglinde Lug (University of Nebraska Press, 1999) Drückender Tango ("Oppressive Tango"), stories, Bucharest, 1984 Der Mensch ist ein großer Fasan auf der Welt, Berlin, 1986. Translated as The Passport by Martin Chalmers (Serpent's Tail, 1989) Barfüßiger Februar ("Barefoot February"), Berlin, 1987 Reisende auf einem Bein, Berlin, 1989. Translated as Traveling on One Leg by Valentina Glajar and Andre Lefevere (Hydra Books/Northwestern University Press, 1998) Der Teufel sitzt im Spiegel ("The Devil is Sitting in the Mirror"), Berlin, 1991 Der Fuchs war damals schon der Jäger, Reinbek bei Hamburg, 1992. Translated as The Fox Was Ever the Hunter by Philip Boehm (2016) Eine warme Kartoffel ist ein warmes Bett ("A Warm Potato Is a Warm Bed"), Hamburg, 1992 Der Wächter nimmt seinen Kamm ("The Guard Takes His Comb"), Reinbek bei Hamburg, 1993 Angekommen wie nicht da ("Arrived As If Not There"), Lichtenfels, 1994 Herztier, Reinbek bei Hamburg, 1994. Translated as The Land of Green Plums by Michael Hofmann (Metropolitan Books/Henry Holt and Company, 1996). Reviewed in The New York Times Hunger und Seide ("Hunger and Silk"), essays, Reinbek bei Hamburg, 1995 In der Falle ("In a Trap"), Göttingen 1996 Heute wär ich mir lieber nicht begegnet, Reinbek bei Hamburg, 1997. Translated as The Appointment by Michael Hulse and Philip Boehm (Metropolitan Books/Picador, 2001) Der fremde Blick oder Das Leben ist ein Furz in der Laterne ("The Foreign View, or Life Is a Fart in a Lantern"), Göttingen, 1999 Heimat ist das, was gesprochen wird ("Home Is What Is Spoken There"), Blieskastel, 2001 A Good Person Is Worth as Much as a Piece of Bread, foreword to Kent Klich's Children of Ceausescu, published by Journal, 2001 and Umbrage Editions, 2001. Der König verneigt sich und tötet ("The King Bows and Kills"), essays, Munich (and elsewhere), 2003 Atemschaukel, Munich, 2009. Translated as The Hunger Angel by Philip Boehm (Metropolitan Books, 2012) Immer derselbe Schnee und immer derselbe Onkel, 2011 Lyrics / found poetry Im Haarknoten wohnt eine Dame ("A Lady Lives in the Hair Knot"), Rowohlt, Reinbek bei Hamburg, 2000 Die blassen Herren mit den Mokkatassen ("The Pale Gentlemen with their Espresso Cups"), Carl Hanser Verlag, Munich, 2005 Este sau nu este Ion ("Is He or Isn't He Ion"), collage-poetry written and published in Romanian, Iași, Polirom, 2005 Vater telefoniert mit den Fliegen ("Father is calling the Flies"), Carl Hanser Verlag, Munich, 2012 Father's on the Phone with the Flies: A Selection, Seagull Books, Munich, 2018 (73 collage poems with reproductions of originals) Editor Theodor Kramer: Die Wahrheit ist, man hat mir nichts getan ("The Truth Is No One Did Anything to Me"), Vienna 1999 Die Handtasche ("The Purse"), Künzelsau 2001 Wenn die Katze ein Pferd wäre, könnte man durch die Bäume reiten ("If the Cat Were a Horse, You Could Ride Through the Trees"), Künzelsau 2001 Filmography 1993: Vulpe – vânător (Der Fuchs war damals schon der Jäger), directed by Stere Gulea, starring Oana Pellea, Dorel Vișan, George Alexandru etc. 
Awards and honours 1981 Adam Müller-Guttenbrunn Prize of the Timișoara Literature Circle 1984 Aspekte-Literaturpreis 1985 Rauris Literature Prize 1985 Encouragement Prize of the Literature Award of Bremen 1987 Ricarda-Huch Prize of Darmstadt 1989 Marieluise-Fleißer-Preis of Ingolstadt 1989 German Language Prize, together with Gerhardt Csejka, Helmuth Frauendorfer, Klaus Hensel, Johann Lippet, Werner Söllner, William Totok, Richard Wagner 1990 Roswitha Medal of Knowledge of Bad Gandersheim 1991 Kranichsteiner Literature Prize 1993 Critical Prize for Literature 1994 Kleist Prize 1995 Aristeion Prize 1995/96 Stadtschreiber von Bergen 1997 Literature Prize of Graz 1998 Ida-Dehmel Literature Prize and the International Dublin Literary Award for The Land of Green Plums 2001 Cicero Speaker Prize 2002 Carl-Zuckmayer-Medaille of Rhineland-Palatinate 2003 Joseph-Breitbach-Preis (together with Christoph Meckel and Harald Weinrich) 2004 Literature Prize of Konrad-Adenauer-Stiftung 2005 Berlin Literature Prize 2006 Würth Prize for European Literature und Walter-Hasenclever Literature Prize 2009 Nobel Prize in Literature 2009 Franz Werfel Human Rights Award, in particular for her novel The Hunger Angel 2010 Hoffmann von Fallersleben Prize 2013 Best Translated Book Award, shortlist, The Hunger Angel 2014 Hannelore Greve Literature Prize 2021 Pour le Mérite for Sciences and Arts 2022: Prize for Understanding and Tolerance, Jewish Museum Berlin 2022 Brückepreis See also List of female Nobel laureates List of Nobel laureates in Literature References Further reading Bettina Brandt and Valentina Glajar (Eds.), Herta Müller. Politics and aesthetics. University of Nebraska Press, Lincoln 2013. . pdf (excerpt) Nina Brodbeck, Schreckensbilder, Marburg 2000. Thomas Daum (ed.), Herta Müller, Frankfurt am Main 2003. Norbert Otto Eke (ed.), Die erfundene Wahrnehmung, Paderborn 1991. Valentina Glajar, "The Discourse of Discontent: Politics and Dictatorship in Hert Müller's Herztier." The German Legacy in East Central Europe. As Recorded in Recent German Language Literature Ed. Valentina Glajar. Camden House, Rochester NY 2004. 115–160. Valentina Glajar, "Banat-Swabian, Romanian, and German: Conflicting Identities in Herta Muller's Herztier." Monatshefte 89.4 (Winter 1997): 521–540. Maria S. Grewe, "Imagining the East: Some Thoughts on Contemporary Minority Literature in Germany and Exoticist Discourse in Literary Criticism." Germany and the Imagined East. Ed. Lee Roberts. Cambridge, 2005. Maria S. Grewe, Estranging Poetic: On the Poetic of the Foreign in Select Works by Herta Müller and Yoko Tawada, New York: Columbia UP, 2009. Brigid Haines, '"The Unforgettable Forgotten": The Traces of Trauma in Herta Müller's Reisende auf einem Bein, German Life and Letters, 55.3 (2002), 266–281. Brigid Haines and Margaret Littler, Contemporary German Women's Writing: Changing the Subject, Oxford: Oxford University Press, 2004. Brigid Haines (ed.), Herta Müller. Cardiff 1998. Martin A. Hainz, "Den eigenen Augen blind vertrauen? Über Rumänien." Der Hammer – Die Zeitung der 2 (November 2004): 5–6. Herta Haupt-Cucuiu: Eine Poesie der Sinne [A Poetry of the Senses], Paderborn, 1996. Ralph Köhnen (ed.), Der Druck der Erfahrung treibt die Sprache in die Dichtung: Bildlickeit in Texten Herta Müllers, Frankfurt am Main: Peter Lang, 1997. Lyn Marven, Body and Narrative in Contemporary Literatures in German: Herta Müller, Libuse Moníková, Kerstin Hensel. Oxford: Oxford University Press, 2005. 
Grazziella Predoiu, Faszination und Provokation bei Herta Müller, Frankfurt am Main, 2000. Diana Schuster, Die Banater Autorengruppe: Selbstdarstellung und Rezeption in Rumänien und Deutschland. Konstanz: Hartung-Gorre-Verlag, 2004. Carmen Wagner, Sprache und Identität. Oldenburg, 2002. External links Herta Müller, short biography by Professor of German Beverley Driver Eddy at Dickinson College Herta Müller: Bio, excerpts, interviews and articles in the archives of the Prague Writers' Festival Herta Müller, at complete review List of works, selection of translations, Bibliothèque Nobel Herta Müller , profile by International Literature Festival Berlin. Retrieved on 7 October 2009 Herta Müller interview by Radio Romania International on 17 August 2007. Retrieved on 7 October 2009 "Securitate in all but name", by Herta Müller. About her ongoing fight with the Securitate, August 2009 "Everything I Own I Carry with Me", excerpt from the novel. September 2009 Poetry and Labor Camp: Literature Nobel Laureate Herta Müller Goethe-Institut, December 2009 "The Evil of Banality" – A review of The Appointment by Costica Bradatan, The Globe and Mail, February 2010 "Herta Müller: The 2009 Laureate of the Nobel Prize in Literature", Yemen Times "Half-lives in the shadow of starvation", review by Costica Bradatan of The Hunger Angel, The Australian, February 2013 How could I forgive. An interview with Herta Müller Video by Louisiana Channel including the Nobel Lecture, 7 December 2009 Jedes Wort weiß etwas vom Teufelskreis 1953 births Living people Banat Swabians Danube-Swabian people Romanian emigrants to West Germany German anti-communists German women essayists German Nobel laureates German women poets Kleist Prize winners Knights Commander of the Order of Merit of the Federal Republic of Germany Nobel laureates in Literature People from Timiș County German people of German-Romanian descent Romanian dissidents Romanian Nobel laureates Romanian novelists Romanian writers in German Romanian women poets Romanian schoolteachers Romanian translators Women Nobel laureates 20th-century German novelists 21st-century German novelists 20th-century German writers 20th-century German women writers 21st-century German writers 21st-century German women writers German women novelists 21st-century German poets 20th-century German translators 21st-century translators Members of the German Academy for Language and Literature 20th-century German essayists 21st-century German essayists West University of Timișoara alumni Writers from Berlin Academic staff of the Free University of Berlin
Herta Müller
Technology
4,616
1,196
https://en.wikipedia.org/wiki/Angle
In Euclidean geometry, an angle or plane angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. Two intersecting curves may also define an angle, which is the angle of the rays lying tangent to the respective curves at their point of intersection. Angles are also formed by the intersection of two planes; these are called dihedral angles. In any case, the resulting angle lies in a plane (spanned by the two rays or perpendicular to the line of plane-plane intersection). The magnitude of an angle is called an angular measure or simply "angle". Two different angles may have the same measure, as in an isosceles triangle. "Angle" also denotes the angular sector, the infinite region of the plane bounded by the sides of an angle. Angle of rotation is a measure conventionally defined as the ratio of a circular arc length to its radius, and may be a negative number. In the case of an ordinary angle, the arc is centered at the vertex and delimited by the sides. In the case of an angle of rotation, the arc is centered at the center of the rotation and delimited by any other point and its image after the rotation. History and etymology The word angle comes from the Latin word angulus, meaning "corner". Cognate words include the Greek ἀγκύλος (ankylos) meaning "crooked, curved" and the English word "ankle". Both are connected with the Proto-Indo-European root *ank-, meaning "to bend" or "bow". Euclid defines a plane angle as the inclination to each other, in a plane, of two lines that meet each other and do not lie straight with respect to each other. According to the Neoplatonic metaphysician Proclus, an angle must be either a quality, a quantity, or a relationship. The first concept, angle as quality, was used by Eudemus of Rhodes, who regarded an angle as a deviation from a straight line; the second, angle as quantity, by Carpus of Antioch, who regarded it as the interval or space between the intersecting lines; Euclid adopted the third: angle as a relationship. Identifying angles In mathematical expressions, it is common to use Greek letters (α, β, γ, θ, φ, . . . ) as variables denoting the size of some angle (the symbol π is typically not used for this purpose to avoid confusion with the constant denoted by that symbol). Lower case Roman letters (a, b, c, . . . ) are also used. In contexts where this is not confusing, an angle may be denoted by the upper case Roman letter denoting its vertex. See the figures in this article for examples. The three defining points may also identify angles in geometric figures. For example, the angle with vertex A formed by the rays AB and AC (that is, the half-lines from point A through points B and C) is denoted ∠BAC or ∠CAB. Where there is no risk of confusion, the angle may sometimes be referred to by a single vertex alone (in this case, "angle A"). In other ways, an angle denoted as, say, ∠BAC might refer to any of four angles: the clockwise angle from B to C about A, the anticlockwise angle from B to C about A, the clockwise angle from C to B about A, or the anticlockwise angle from C to B about A, where the direction in which the angle is measured determines its sign (see Signed angles below). However, in many geometrical situations, it is evident from the context that the positive angle less than or equal to 180 degrees is meant, and in these cases, no ambiguity arises.
Otherwise, to avoid ambiguity, specific conventions may be adopted so that, for instance, ∠BAC always refers to the anticlockwise (positive) angle from B to C about A and ∠CAB to the anticlockwise (positive) angle from C to B about A. Types Individual angles There is some common terminology for angles, whose measure is always non-negative (see Signed angles below): An angle equal to 0° or not turned is called a zero angle. An angle smaller than a right angle (less than 90°) is called an acute angle ("acute" meaning "sharp"). An angle equal to 1/4 turn (90° or π/2 radians) is called a right angle. Two lines that form a right angle are said to be normal, orthogonal, or perpendicular. An angle larger than a right angle and smaller than a straight angle (between 90° and 180°) is called an obtuse angle ("obtuse" meaning "blunt"). An angle equal to 1/2 turn (180° or π radians) is called a straight angle. An angle larger than a straight angle but less than 1 turn (between 180° and 360°) is called a reflex angle. An angle equal to 1 turn (360° or 2π radians) is called a full angle, complete angle, round angle or perigon. An angle that is not a multiple of a right angle is called an oblique angle. The names, intervals, and measuring units are shown in the table below: Vertical and angle pairs When two straight lines intersect at a point, four angles are formed. Pairwise, these angles are named according to their location relative to each other. A transversal is a line that intersects a pair of (often parallel) lines and is associated with exterior angles, interior angles, alternate exterior angles, alternate interior angles, corresponding angles, and consecutive interior angles. Combining angle pairs The angle addition postulate states that if B is in the interior of angle AOC, then m∠AOC = m∠AOB + m∠BOC, i.e., the measure of the angle AOC is the sum of the measure of angle AOB and the measure of angle BOC. Three special angle pairs involve the summation of angles: Polygon-related angles An angle that is part of a simple polygon is called an interior angle if it lies on the inside of that simple polygon. A simple concave polygon has at least one interior angle that is a reflex angle. In Euclidean geometry, the measures of the interior angles of a triangle add up to π radians, 180°, or 1/2 turn; the measures of the interior angles of a simple convex quadrilateral add up to 2π radians, 360°, or 1 turn. In general, the measures of the interior angles of a simple convex polygon with n sides add up to (n − 2)π radians, or (n − 2)180 degrees, 2(n − 2) right angles, or (n − 2)/2 turn. The supplement of an interior angle is called an exterior angle; that is, an interior angle and an exterior angle form a linear pair of angles. There are two exterior angles at each vertex of the polygon, each determined by extending one of the two sides of the polygon that meet at the vertex; these two angles are vertical and hence are equal. An exterior angle measures the amount of rotation one must make at a vertex to trace the polygon. If the corresponding interior angle is a reflex angle, the exterior angle should be considered negative. Even in a non-simple polygon, it may be possible to define the exterior angle. Still, one will have to pick an orientation of the plane (or surface) to decide the sign of the exterior angle measure. In Euclidean geometry, the sum of the exterior angles of a simple convex polygon, if only one of the two exterior angles is assumed at each vertex, will be one full turn (360°). The exterior angle here could be called a supplementary exterior angle.
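As a worked example of the interior and exterior angle sums stated above (standard Euclidean facts, written here in LaTeX notation for clarity): for a simple convex polygon with $n$ sides, $\sum_{i=1}^{n}\alpha_i=(n-2)\pi$ radians $=(n-2)\cdot 180^\circ$ for the interior angles and $\sum_{i=1}^{n}\beta_i=2\pi$ radians $=360^\circ$ for the exterior angles (taking one exterior angle per vertex). For a pentagon ($n=5$) the interior sum is $3\cdot 180^\circ=540^\circ$, or $108^\circ$ per angle if the pentagon is regular, while the exterior sum is still $360^\circ$, or $72^\circ$ per exterior angle.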
Exterior angles are commonly used in Logo Turtle programs when drawing regular polygons. In a triangle, the bisectors of two exterior angles and the bisector of the other interior angle are concurrent (meet at a single point). In a triangle, three intersection points, each of an external angle bisector with the opposite extended side, are collinear. In a triangle, three intersection points, two between an interior angle bisector and the opposite side, and the third between the other exterior angle bisector and the opposite side extended, are collinear. Some authors use the name exterior angle of a simple polygon to mean the explement exterior angle (not supplement!) of the interior angle. This conflicts with the above usage. Plane-related angles The angle between two planes (such as two adjacent faces of a polyhedron) is called a dihedral angle. It may be defined as the acute angle between two lines normal to the planes. The angle between a plane and an intersecting straight line is complementary to the angle between the intersecting line and the normal to the plane. Measuring angles The size of a geometric angle is usually characterized by the magnitude of the smallest rotation that maps one of the rays into the other. Angles of the same size are said to be equal, congruent, or equal in measure. In some contexts, such as identifying a point on a circle or describing the orientation of an object in two dimensions relative to a reference orientation, angles that differ by an exact multiple of a full turn are effectively equivalent. In other contexts, such as identifying a point on a spiral curve or describing an object's cumulative rotation in two dimensions relative to a reference orientation, angles that differ by a non-zero multiple of a full turn are not equivalent. To measure an angle θ, a circular arc centered at the vertex of the angle is drawn, e.g., with a pair of compasses. The ratio of the length s of the arc to the radius r of the circle is the number of radians in the angle: θ = s/r. Conventionally, in mathematics and the SI, the radian is treated as being equal to the dimensionless unit 1, thus being normally omitted. The angle expressed in another angular unit may then be obtained by multiplying the angle by a suitable conversion constant of the form k/2π, where k is the measure of a complete turn expressed in the chosen unit (for example, k = 360° for degrees or k = 400 grad for gradians): θ = (k/2π)·(s/r). The value of θ thus defined is independent of the size of the circle: if the length of the radius is changed, then the arc length changes in the same proportion, so the ratio s/r is unaltered. Units Throughout history, angles have been measured in various units. These are known as angular units, with the most contemporary units being the degree ( ° ), the radian (rad), and the gradian (grad), though many others have been used throughout history. Most units of angular measurement are defined such that one turn (i.e., the angle subtended by the circumference of a circle at its centre) is equal to n units, for some whole number n. Two exceptions are the radian (and its decimal submultiples) and the diameter part. In the International System of Quantities, an angle is defined as a dimensionless quantity, and in particular, the radian unit is dimensionless. This convention impacts how angles are treated in dimensional analysis. The following table lists some units used to represent angles.
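As a worked example of the conversion constant $k/2\pi$ described above, using only the standard definitions of the units: $\theta_{\mathrm{deg}}=\frac{360^\circ}{2\pi}\,\theta_{\mathrm{rad}}$, so $1\ \mathrm{rad}=\frac{360^\circ}{2\pi}\approx 57.2958^\circ\approx 63.662\ \mathrm{grad}$, and conversely $90^\circ=\frac{\pi}{2}\ \mathrm{rad}\approx 1.5708\ \mathrm{rad}=100\ \mathrm{grad}=\frac{1}{4}\ \mathrm{turn}$.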
Dimensional analysis Signed angles It is frequently helpful to impose a convention that allows positive and negative angular values to represent orientations and/or rotations in opposite directions or "sense" relative to some reference. In a two-dimensional Cartesian coordinate system, an angle is typically defined by its two sides, with its vertex at the origin. The initial side is on the positive x-axis, while the other side or terminal side is defined by the measure from the initial side in radians, degrees, or turns, with positive angles representing rotations toward the positive y-axis and negative angles representing rotations toward the negative y-axis. When Cartesian coordinates are represented by standard position, defined by the x-axis rightward and the y-axis upward, positive rotations are anticlockwise, and negative rotations are clockwise. In many contexts, an angle of −θ is effectively equivalent to an angle of "one full turn minus θ". For example, an orientation represented as −45° is effectively equal to an orientation defined as 360° − 45° or 315°. Although the final position is the same, a physical rotation (movement) of −45° is not the same as a rotation of 315° (for example, the rotation of a person holding a broom resting on a dusty floor would leave visually different traces of swept regions on the floor). In three-dimensional geometry, "clockwise" and "anticlockwise" have no absolute meaning, so the direction of positive and negative angles must be defined in terms of an orientation, which is typically determined by a normal vector passing through the angle's vertex and perpendicular to the plane in which the rays of the angle lie. In navigation, bearings or azimuth are measured relative to north. By convention, viewed from above, bearing angles are positive clockwise, so a bearing of 45° corresponds to a north-east orientation. Negative bearings are not used in navigation, so a north-west orientation corresponds to a bearing of 315°. Equivalent angles Angles that have the same measure (i.e., the same magnitude) are said to be equal or congruent. An angle is defined by its measure and is not dependent upon the lengths of the sides of the angle (e.g., all right angles are equal in measure). Two angles that share terminal sides, but differ in size by an integer multiple of a turn, are called coterminal angles. The reference angle (sometimes called related angle) for any angle θ in standard position is the positive acute angle between the terminal side of θ and the x-axis (positive or negative). Procedurally, the magnitude of the reference angle for a given angle may be determined by taking the angle's magnitude modulo 1/2 turn, 180°, or π radians, then stopping if the result is acute, otherwise taking the supplementary angle, 180° minus the reduced magnitude. For example, an angle of 30 degrees is already a reference angle, and an angle of 150 degrees also has a reference angle of 30 degrees (180° − 150°). Angles of 210° and 510° correspond to a reference angle of 30 degrees as well (210° mod 180° = 30°, 510° mod 180° = 150°, whose supplementary angle is 30°). Related quantities For an angular unit, it is definitional that the angle addition postulate holds. Some quantities related to angles where the angle addition postulate does not hold include: The slope or gradient is equal to the tangent of the angle; a gradient is often expressed as a percentage.
For very small values (less than 5%), the slope of a line is approximately the measure in radians of its angle with the horizontal direction. The spread between two lines is defined in rational geometry as the square of the sine of the angle between the lines. As the sine of an angle and the sine of its supplementary angle are the same, any angle of rotation that maps one of the lines into the other leads to the same value for the spread between the lines. Although done rarely, one can report the direct results of trigonometric functions, such as the sine of the angle. Angles between curves The angle between a line and a curve (mixed angle) or between two intersecting curves (curvilinear angle) is defined to be the angle between the tangents at the point of intersection. Various names (now rarely, if ever, used) have been given to particular cases:—amphicyrtic (Gr. ἀμφί, on both sides; κυρτός, convex) or cissoidal (Gr. κισσός, ivy), biconvex; xystroidal or sistroidal (Gr. ξυστρίς, a tool for scraping), concavo-convex; amphicoelic (Gr. κοίλη, a hollow) or angulus lunularis, biconcave. Bisecting and trisecting angles The ancient Greek mathematicians knew how to bisect an angle (divide it into two angles of equal measure) using only a compass and straightedge but could only trisect certain angles. In 1837, Pierre Wantzel showed that this construction could not be performed for most angles. Dot product and generalisations In Euclidean space, the angle θ between two Euclidean vectors u and v is related to their dot product and their lengths by the formula u · v = |u| |v| cos(θ), i.e. cos(θ) = (u · v)/(|u| |v|). This formula supplies an easy method to find the angle between two planes (or curved surfaces) from their normal vectors and between skew lines from their vector equations. Inner product To define angles in an abstract real inner product space, we replace the Euclidean dot product ( · ) by the inner product ⟨· , ·⟩, i.e. cos(θ) = ⟨u, v⟩/(|u| |v|). In a complex inner product space, the expression for the cosine above may give non-real values, so it is replaced with cos(θ) = Re(⟨u, v⟩)/(|u| |v|) or, more commonly, using the absolute value, with cos(θ) = |⟨u, v⟩|/(|u| |v|). The latter definition ignores the direction of the vectors. It thus describes the angle between the one-dimensional subspaces span(u) and span(v) spanned by the vectors u and v correspondingly. Angles between subspaces The definition of the angle between one-dimensional subspaces span(u) and span(v) given by |⟨u, v⟩| = |cos(θ)| |u| |v| in a Hilbert space can be extended to subspaces of finite dimensions. Given two subspaces U and W with dim(U) = k ≤ dim(W) = l, this leads to a definition of angles called canonical or principal angles between subspaces. Angles in Riemannian geometry In Riemannian geometry, the metric tensor is used to define the angle between two tangents: where U and V are tangent vectors and g_ij are the components of the metric tensor G, cos(θ) = (g_ij U^i V^j) / √(|g_ij U^i U^j| · |g_ij V^j V^j|). Hyperbolic angle A hyperbolic angle is an argument of a hyperbolic function just as the circular angle is the argument of a circular function. The comparison can be visualized as the size of the openings of a hyperbolic sector and a circular sector since the areas of these sectors correspond to the angle magnitudes in each case. Unlike the circular angle, the hyperbolic angle is unbounded. When the circular and hyperbolic functions are viewed as infinite series in their angle argument, the circular ones are just alternating series forms of the hyperbolic functions. This comparison of the two series corresponding to functions of angles was described by Leonhard Euler in Introduction to the Analysis of the Infinite (1748).
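To connect the dot-product relation above to a quick numerical check, here is a small pure-Python sketch (the example vectors are chosen arbitrarily):
import math
u = (3.0, 4.0)
v = (4.0, 3.0)
dot = sum(a * b for a, b in zip(u, v))        # u · v
norm_u = math.sqrt(sum(a * a for a in u))     # |u|
norm_v = math.sqrt(sum(b * b for b in v))     # |v|
theta = math.acos(dot / (norm_u * norm_v))    # angle in radians
print(math.degrees(theta))                    # approximately 16.26 degrees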
Angles in geography and astronomy In geography, the location of any point on the Earth can be identified using a geographic coordinate system. This system specifies the latitude and longitude of any location in terms of angles subtended at the center of the Earth, using the equator and (usually) the Greenwich meridian as references. In astronomy, a given point on the celestial sphere (that is, the apparent position of an astronomical object) can be identified using any of several astronomical coordinate systems, where the references vary according to the particular system. Astronomers measure the angular separation of two stars by imagining two lines through the center of the Earth, each intersecting one of the stars. The angle between those lines and the angular separation between the two stars can be measured. In both geography and astronomy, a sighting direction can be specified in terms of a vertical angle such as altitude /elevation with respect to the horizon as well as the azimuth with respect to north. Astronomers also measure objects' apparent size as an angular diameter. For example, the full moon has an angular diameter of approximately 0.5° when viewed from Earth. One could say, "The Moon's diameter subtends an angle of half a degree." The small-angle formula can convert such an angular measurement into a distance/size ratio. Other astronomical approximations include: 0.5° is the approximate diameter of the Sun and of the Moon as viewed from Earth. 1° is the approximate width of the little finger at arm's length. 10° is the approximate width of a closed fist at arm's length. 20° is the approximate width of a handspan at arm's length. These measurements depend on the individual subject, and the above should be treated as rough rule of thumb approximations only. In astronomy, right ascension and declination are usually measured in angular units, expressed in terms of time, based on a 24-hour day. See also Angle measuring instrument Angles between flats Angular statistics (mean, standard deviation) Angle bisector Angular acceleration Angular diameter Angular velocity Argument (complex analysis) Astrological aspect Central angle Clock angle problem Decimal degrees Dihedral angle Exterior angle theorem Golden angle Great circle distance Horn angle Inscribed angle Irrational angle Phase (waves) Protractor Solid angle Spherical angle Subtended angle Tangential angle Transcendent angle Trisection Zenith angle Notes References Bibliography . External links
Angle
Physics
4,260
392,874
https://en.wikipedia.org/wiki/Hidden%20node%20problem
In wireless networking, the hidden node problem or hidden terminal problem occurs when a node can communicate with a wireless access point (AP), but cannot directly communicate with other nodes that are communicating with that AP. This leads to difficulties in medium access control sublayer since multiple nodes can send data packets to the AP simultaneously, which creates interference at the AP resulting in no packet getting through. Although some loss of packets is normal in wireless networking, and the higher layers will resend them, if one of the nodes is transferring a lot of large packets over a long period, the other node may get very little goodput. Practical protocol solutions exist to the hidden node problem. For example, Request To Send/Clear To Send (RTS/CTS) mechanisms where nodes send short packets to request permission of the access point to send longer data packets. As responses from the AP are seen by all the nodes, the nodes can synchronize their transmissions to not interfere. However, the mechanism introduces latency, and the overhead can often be greater than the cost, particularly for short data packets. Background Hidden nodes in a wireless network are nodes that are out of range of other nodes or a collection of nodes. Consider a physical star topology with an access point with many nodes surrounding it in a circular fashion: each node is within communication range of the AP, but the nodes cannot communicate with each other. For example, in a wireless network, it is likely that the node at the far edge of the access point's range, which is known as A, can see the access point, but it is unlikely that the same node can communicate with a node on the opposite end of the access point's range, C. These nodes are known as hidden. Another example would be where A and C are either side of an obstacle that reflects or strongly absorbs radio waves, but nevertheless they can both still see the same AP. The problem is when nodes A and C start to send packets simultaneously to the access point B. As the nodes A and C cannot receive each other's signals, so they cannot detect the collision before or while transmitting, carrier-sense multiple access with collision detection (CSMA/CD) does not work, and collisions occur, which then corrupt the data received by the access point. To overcome the hidden node problem, request-to-send/clear-to-send (RTS/CTS) handshaking (IEEE 802.11 RTS/CTS) is implemented at the Access Point in conjunction with the Carrier sense multiple access with collision avoidance (CSMA/CA) scheme. The same problem exists in a mobile ad hoc network (MANET). IEEE 802.11 uses 802.11 RTS/CTS acknowledgment and handshake packets to partly overcome the hidden node problem. RTS/CTS is not a complete solution and may decrease throughput even further, but adaptive acknowledgements from the base station can help too. The comparison with hidden stations shows that RTS/CTS packages in each traffic class are profitable (even with short audio frames, which cause a high overhead on RTS/CTS frames). In the experimental environment following traffic classes are included: data (not time critical), data (time critical), video, audio. Examples for notations: (0|0|0|2) means 2 audio stations; (1|1|2|0) means 1 data station (not time critical), 1 data station (time critical), 2 video stations. 
The other methods that can be employed to solve hidden node problem are : Increase transmitting power from the nodes Use omnidirectional antennas Remove obstacles Move the node Use protocol enhancement software Use antenna diversity Solutions Increasing transmitting power Increasing the transmission power of the nodes can solve the hidden node problem by allowing the cell around each node to increase in size, encompassing all of the other nodes. This configuration enables the non-hidden nodes to detect, or hear, the hidden node. If the non-hidden nodes can hear the hidden node, the hidden node is no longer hidden. As wireless LANs use the CSMA/CA protocol, nodes will wait their turn before communicating with the access point. This solution only works if one increases the transmission power on nodes that are hidden. In the typical case of a WiFi network, increasing transmission power on the access point only will not solve the problem because typically the hidden nodes are the clients (e.g. laptops, mobile devices), not the access point itself, and the clients will still not be able to hear each other. Increasing transmission power on the access point is actually likely to make the problem worse, because it will put new clients in range of the access point and thus add new nodes to the network that are hidden from other clients. Omnidirectional antennas Since nodes using directional antennas are nearly invisible to nodes that are not positioned in the direction the antenna is aimed at, directional antennas should be used only for very small networks (e.g., dedicated point-to-point connections). Use omnidirectional antennas for widespread networks consisting of more than two nodes. Removing obstacles Increasing the power on mobile nodes may not work if, for example, the reason one node is hidden is that there is a concrete or steel wall preventing communication with other nodes. It is doubtful that one would be able to remove such an obstacle, but removal of the obstacle is another method of remedy for the hidden node problem. Moving the node Another method of solving the hidden node problem is moving the nodes so that they can all hear each other. If it is found that the hidden node problem is the result of a user moving his computer to an area that is hidden from the other wireless nodes, it may be necessary to have that user move again. The alternative to forcing users to move is extending the wireless LAN to add proper coverage to the hidden area, perhaps using additional access points. Protocol enhancement There are several software implementations of additional protocols that essentially implement a polling or token passing strategy. Then, a master (typically the access point) dynamically polls clients for data. Clients are not allowed to send data without the master's invitation. This eliminates the hidden node problem at the cost of increased latency and less maximum throughput. The Wi-Fi IEEE 802.11 RTS/CTS is one handshake protocol that is used. Clients that wish to send data send an RTS frame, the access point then sends a CTS frame when it is ready for that particular node. For short packets the overhead is quite large, so short packets do not usually use it, the minimum size is generally configurable. 
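To make the collision mechanism and the RTS/CTS remedy concrete, the following is a minimal toy simulation; the slot-based model, the transmit probability and the node names are illustrative assumptions and not part of the IEEE 802.11 specification:

```python
import random

def simulate(slots=10_000, p_tx=0.3, use_rts_cts=False, seed=0):
    """Toy slot model: hidden nodes A and C both transmit to access point B.

    Without RTS/CTS the nodes cannot hear each other, so simultaneous
    transmissions collide at B and both frames are lost.  With RTS/CTS,
    B grants the medium to one requester per slot, so frames do not collide
    (the handshake overhead itself is not modelled here).
    """
    rng = random.Random(seed)
    delivered = collisions = 0
    for _ in range(slots):
        a_wants = rng.random() < p_tx
        c_wants = rng.random() < p_tx
        if not (a_wants or c_wants):
            continue
        if use_rts_cts or not (a_wants and c_wants):
            delivered += 1          # exactly one frame reaches B this slot
        else:
            collisions += 1         # frames overlap at B and are lost
    return delivered, collisions

print(simulate(use_rts_cts=False))  # many collisions expected
print(simulate(use_rts_cts=True))   # no collisions, at the cost of handshake overhead
```

In this toy model RTS/CTS recovers the slots that would otherwise be lost to collisions; as noted above, in a real network the extra handshake frames add latency and can outweigh the benefit for short packets.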
Cell network With cellular networks the hidden node problem has practical solutions by time domain multiplexing for each given client for a mast, and using spatially diverse transmitters, so that each node is potentially served by any of three masts to greatly minimise issues with obstacles interfering with radio propagation. See also Exposed node problem Hybrid coordination function Point coordination function Wireless LAN References External links Wireless Central Coordinated Protocol (WiCCP), a software solution of the hidden node problem Frottle, a client/server software solution Benchmarks comparing pure CSMA/CA with RTS/CTS and Polling NetEqualizer, a throttling system addressing the hidden node problem Wireless networking H de:Carrier Sense Multiple Access/Collision Avoidance#Hidden-Station-Problem
Hidden node problem
Technology,Engineering
1,516
23,534,876
https://en.wikipedia.org/wiki/SUNGEN%20International%20Limited
SUNGEN International Limited is a subsidiary of Anwell Technologies Limited that trades in and manufactures solar photovoltaic panels and modules. Overview SUNGEN has a thin-film solar module production facility in Henan, China, using Anwell Solar’s manufacturing technologies. The facility was designed to produce 1.1 by 1.4 meter a-Si thin-film photovoltaic modules, at an initial annual capacity of 40 MW in 2010. In March 2010, the Group announced the mass production of thin-film solar modules at SUNGEN’s manufacturing base in Henan, China, using Anwell Solar’s thin-film solar module production line. It produces and markets SUNGEN-branded photovoltaic modules to a portfolio of installers and distributors, as well as architectural and building firms around the world. See also China Sunergy References External links Solar Cell Installation Solar energy companies Thin-film cell manufacturers Photovoltaics manufacturers Hong Kong brands Manufacturing companies of Hong Kong
SUNGEN International Limited
Engineering
196
32,182,729
https://en.wikipedia.org/wiki/Gradient-like%20vector%20field
In differential topology, a mathematical discipline, and more specifically in Morse theory, a gradient-like vector field is a generalization of a gradient vector field. The primary motivation is as a technical tool in the construction of Morse functions, to show that one can construct a function whose critical points are at distinct levels. One first constructs a Morse function, then uses gradient-like vector fields to move around the critical points, yielding a different Morse function. Definition Given a Morse function f on a manifold M, a gradient-like vector field X for the function f is, informally: away from critical points, X points "in the same direction as" the gradient of f, and near a critical point (in the neighborhood of a critical point), it equals the gradient of f, when f is written in the standard form given by the Morse lemma. Formally: away from the critical points, the derivative of f along X is strictly positive; and around every critical point there is a neighborhood on which f is given in the standard form of the Morse lemma (a sum of plus and minus squared coordinates, with the number of minus signs equal to the index of the critical point) and on which X equals the gradient of f in those coordinates. Dynamical system The associated dynamical system of a gradient-like vector field, a gradient-like dynamical system, is a special case of a Morse–Smale system. References An introduction to Morse theory, Yukio Matsumoto, 2002, Section 2.3: Gradient-like vector fields, p. 56–69 Gradient-Like Vector Fields Exist, September 25, 2009 Morse theory Differential topology
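Written out, the two defining conditions take roughly the following form; this is a sketch in standard Morse-theory notation, with the coordinate names and the index λ chosen for illustration rather than quoted from the references above:

```latex
% Sketch: X is a gradient-like vector field for the Morse function f on M if
% (1) X strictly increases f away from the critical points:
\[
  df_p(X_p) > 0 \qquad \text{for every non-critical point } p ,
\]
% (2) near each critical point b of index \lambda there are Morse coordinates
%     (x_1, \dots, x_n) in which f takes the Morse-lemma normal form
\[
  f(x) = f(b) - x_1^2 - \cdots - x_\lambda^2 + x_{\lambda+1}^2 + \cdots + x_n^2 ,
\]
%     and in which X is the Euclidean gradient of f:
\[
  X = \bigl(-2x_1, \dots, -2x_\lambda,\; 2x_{\lambda+1}, \dots, 2x_n\bigr).
\]
```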
Gradient-like vector field
Mathematics
293
45,451,212
https://en.wikipedia.org/wiki/Two-factor%20theory%20of%20intelligence
Charles Spearman developed his two-factor theory of intelligence using factor analysis. His research not only led him to develop the concept of the g factor of general intelligence, but also the s factor of specific intellectual abilities. L. L. Thurstone, Howard Gardner, and Robert Sternberg also researched the structure of intelligence, and in analyzing their data, concluded that a single underlying factor was influencing the general intelligence of individuals. However, Spearman was criticized in 1916 by Godfrey Thomson, who claimed that the evidence was not as crucial as it seemed. Modern research is still expanding this theory by investigating Spearman's law of diminishing returns, and adding connected concepts to the research. Spearman's two-factor theory of intelligence In 1904, Charles Spearman had developed a statistical procedure called factor analysis. In factor analysis, related variables are tested for correlation to each other, then the correlation of the related items are evaluated to find clusters or groups of the variables. Spearman tested how well people performed on various tasks relating to intelligence. Such tasks include: distinguishing pitch, perceiving weight and colors, directions, and mathematics. When analyzing the data he collected, Spearman noted that those that did well in one area also scored higher in other areas. With this data, Spearman concluded that there must be one central factor that influences our cognitive abilities. Spearman termed this general intelligence g. Structure of intelligence debate Due to the controversy of the structure of intelligence, other psychologists also published their relevant research. Other than Charles Spearman, three others developed a hypothesis regarding the structure of intelligence. L. L. Thurstone tested subjects on 56 different abilities; from his data he established seven primary mental abilities relating to intelligence. He categorized them as: spatial ability, numerical ability, word fluency, memory, perceptual speed, verbal comprehension, and inductive reasoning. Other researchers, interested in this new research study, analyzed Thurstone's data, discovering that those scored high in one category often did well in the others. This finding gives support that there is an underlying factor influencing them, namely g. Howard Gardner suggested in his theory of multiple intelligences that intelligence is formed out of multiple abilities. He recognized eight intelligences: linguistic, musical, spatial, intrapersonal, interpersonal, logical-mathematical, bodily-kinesthetic, and naturalist. He also considered the possibility of a ninth intelligent ability, existential intelligence. Gardner proposed that individuals who excelled in one ability would lack in another. Instead, his results showed that each of his eight intelligences correlate positively with each other. After further analysis, Gardner found that logic, spatial abilities, language, and mathematics are all linked in some way, giving support for an underlying g factor that is prominent in almost all intelligence in general. Robert Sternberg agreed with Gardner that there were multiple intelligences, but he narrowed his scope to just three in his triarchic theory of intelligence: analytical, creative, and practical. He classified analytical intelligence as problem-solving skills in tests and academics. Creative intelligence is considered how people react adaptively in new situations, or create novel ideas. 
Practical intelligence is defined as the everyday logic used when multiple solutions or decisions are possible. When Sternberg analyzed his data the relationship between the three intelligences surprised him. The data resembled what the other psychologists had found. All three mental abilities correlated highly with one another, and evidence that one basic factor, g, was the primary influence. Not all psychologists agreed with Spearman and his general intelligence. In 1916, Godfrey Thomson wrote a paper criticizing Spearman's g:The object of this paper is to show that the cases brought forward by Professor Spearman in favor of the existence of General Ability are by no means "crucial." They are it is true not inconsistent with the existence of such a common element but neither are they inconsistent with its non-existence. The essential point about Professor Spearman's hypothesis is the existence of this General Factor. Both he and his opponents are agreed that there are Specific Factors peculiar to individual tests, both he and his opponents agree that there are Group Factors which run through some but not all tests. The difference between them is that Professor Spearman says there is a further single factor which runs through all tests, and that by pooling a few tests the Group Factors can soon be eliminated and a point reached where all the correlations are due to the General Factor alone. (pp. 217) Development of Spearman's theory Experimental Evidence Spearman originally came up with the term General Intelligence, or as he called it, g, to measure intelligence in his Two Theory on Intelligence. Spearman first researched in an experiment with 24 children from a small village school measuring three intellectual measures, based on teachers rankings, to address intellectual and sensory as the two different sets of measure: School Cleverness, Common Sense A and Common Sense B. His results showed the average r between intellectual and sensory measures to be +.38, School Cleverness and Commonsense to be at +0.55, and the three tasks intercorrelated at +0.25. This data was looked at other populations including high school. Spearman proposed that intellectual and sensory measure be combined as assessment of general intelligence. g and s Spearman's two-factor theory proposes that intelligence has two components: general intelligence ("g") and specific ability ("s"). To explain the differences in performance on different tasks, Spearman hypothesized that the "s" component was specific to a certain aspect of intelligence. Regarding g, Spearman saw individuals as having some level of more or less general intelligence, while s varied from person to person based on the specific task. In 1999, behavior geneticist Robert Plomin described g by saying: "g is one of the most reliable and valid measures in the behavioral domain... and it predicts important social outcomes such as educational and occupational levels far better than any other trait." To visualize g, imagine a Venn diagram with four circles overlapping. In the middle of the overlapping circles, would be g, which influences all the specific intelligences, while s is represented by the four circles. Though the specific number of s factors are unknown, a few have been relatively accepted: mechanical, spatial, logical, and arithmetical. Rising interest in the debate on the structure of intelligence prompted Spearman to elaborate and argue for his hypothesis. 
He claimed that g was not made up of one single ability, but rather two genetically influenced, unique abilities working together. He called these abilities "eductive" and "reproductive". He suggested that future understanding of the interaction between these two different abilities would drastically change how individual differences and cognition are understood in psychology, possibly creating the basis for wisdom. Impact on psychology Intelligence testing Many researchers currently use Spearman's form of intelligence testing in their studies. Although not all of these studies use Spearman's exact model of intelligence testing, they add modern concepts to it. Spearman described a functional relationship between intelligence and sensory discriminatory abilities. Recent research has determined that there is an overlap between working memory, general discriminatory abilities, and fluid intelligence. His work has been built on, expanded, and linked to many other factors related to intelligence. Intelligence testing that measures the g factor has recently been used to re-explore Spearman's law of diminishing returns, which predicts that the influence of g on test scores decreases as overall ability increases. Research has investigated whether g scores are made up of scores from the Differential Ability Scales and s factors, and how the observed pattern of diminishing returns compares to Spearman's formulation. Using linear and nonlinear confirmatory factor analysis, this work showed that the nonlinear model best described the data. The nonlinear model suggests that as g increases, the s factor lowers the overall score and inaccurately represents general intelligence. Modern psychology This theory is still very much present in today's psychology. Researchers continue to examine and recreate it in modern research, and the g factor is still frequently studied. For example, a given measure can be used alongside, and compared with, various other similar intelligence measures. Scales such as the Wechsler Intelligence Scale for Children have been compared with Spearman's g, showing a decrease in statistical significance. Research has also been adapted to incorporate modern psychological topics into Spearman's two-factor theory of intelligence. Nature versus nurture is one topic that has been cross-studied with Spearman's g factor. Research shows that environmental factors affect the g factor mainly when they act early in life; in adulthood they have little to no impact. Genetic influence has been documented to strongly affect the g factor of intelligence. References Further reading Myers, D.G. (2009). Psychology: Ninth Edition in Modules. Worth Publishers. Kalat, J.W. (2014). Introduction to Psychology, 10th Edition. Cengage Learning. Weiten, W. (2013). Psychology: Themes and Variations (9th ed.). Thomson Wadsworth Publishing. Personality theories Human development Intelligence
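As a rough illustration of the factor-analytic idea behind g, the following sketch extracts a single common factor from a small made-up score matrix; the data are invented, and using the leading eigenvector of the correlation matrix is a stand-in for a one-factor model rather than Spearman's original 1904 procedure:

```python
import numpy as np

# Invented scores for five people on four tests (rows = people, columns = tests).
scores = np.array([
    [85, 80, 78, 82],
    [60, 62, 58, 61],
    [95, 91, 88, 94],
    [70, 72, 69, 71],
    [55, 50, 52, 54],
], dtype=float)

# The all-positive correlations ("positive manifold") are what suggested a
# single common factor to Spearman.
corr = np.corrcoef(scores, rowvar=False)
print(np.round(corr, 2))

# The leading eigenvector of the correlation matrix gives each test's loading
# on one common factor -- a crude stand-in for g.
eigvals, eigvecs = np.linalg.eigh(corr)
print(np.round(eigvecs[:, -1], 2))                   # loadings on the common factor
print(round(float(eigvals[-1] / eigvals.sum()), 2))  # share of variance it explains
```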
Two-factor theory of intelligence
Biology
1,881
57,693,965
https://en.wikipedia.org/wiki/Beatriz%20Rold%C3%A1n%20Cuenya
Beatriz Roldán Cuenya (born 1976 in Oviedo) is a Spanish physicist working in surface science and catalysis. Since 2017 she has been director of the Department of Interface Science at the Fritz Haber Institute of the Max Planck Society in Berlin, Germany. Since April 2023, she has also been interim director of the Department of Inorganic Chemistry, also at the Fritz Haber Institute. Professional career Roldán Cuenya studied at the University of Oviedo in Spain and received her doctorate degree from the University of Duisburg-Essen in Germany under the supervision of Werner Keune. As a postdoc she worked at the University of California, Santa Barbara in the group of Eric McFarland and subsequently became professor at the University of Central Florida in Orlando (USA). In 2013, she accepted a Chair Faculty position in Solid State Physics at the Ruhr University Bochum in Germany. She holds two roles at the Fritz Haber Institute - since 2017 as Director of the Department of Interface Science, and since April 2023 as interim Director of the Department of Inorganic Chemistry. Her main research interests are the synthesis of nanostructured materials with tunable surface properties and the experimental investigation of structure-reactivity relationships in thermal and electro-catalysis using in situ and operando methods. Applications of her work are in the areas of environmental remediation and energy conversion. Awards and distinctions 2005 NSF-CAREER Award of the American National Science Foundation 2009 Peter Mark Memorial Award, American Vacuum Society 2009 University of Central Florida, Research Incentive Award 2016 Fellow of the Max Planck Society at the Max Planck Institute for Chemical Energy Conversion (Mülheim, Germany) 2016 European Research Council Consolidator Award 2020 Elected Member of the Academia Europaea, the Academy of Europe 2021 ISE-Elsevier Prize for Experimental Electrochemistry of the International Society of Electrochemistry (ISE) 2022 Paul H. Emmett Award from the North American Catalysis Society for Fundamental Catalysis Publications I. Zegkinoglou, A. Zendegani, I. Sinev, S. Kunze, H. Mistry, H. S. Jeon, J. Zhao, M. Hu, E. E. Alp, S. Piontek, M. Smialkowski, U.-P. Apfel, F. Körmann, J. Neugebauer, T. Hickel, B. Roldan Cuenya: Operando phonon studies of the protonation mechanism in highly active hydrogen evolution reaction pentlandite catalysts, JACS 2017, 139, 14360, H. Mistry, Y. Choi, A. Bagger, F. Scholten, C. Bonifacio, I. Sinev, N. J. Divins, I. Zegkinoglou, H. Jeon, K. Kisslinger, E. A. Stach, J. C. Yang, J. Rossmeisl, B. Roldan Cuenya: Enhanced carbon dioxide electroreduction to carbon monoxide over defect rich plasma-activated silver catalysts, Angew. Chem. 2017, 56, 11394, H. Mistry, A. Varela, C. S. Bonifacio, I. Zegkinoglou, I. Sinev, Y.-W. Choi, K. Kisslinger, E. A. Stach, J. C. Yang, P. Strasser, B. Roldan Cuenya, Highly selective plasma-activated copper catalysts for carbon dioxide reduction to ethylene, Nature Commun. 2016, 7, 12123, . References External links Department of Interface Science of the Fritz Haber Institute Living people Max Planck Society people 1976 births Spanish physicists Members of Academia Europaea Members of the German National Academy of Sciences Leopoldina University of Oviedo alumni University of Duisburg-Essen alumni Academic staff of Ruhr University Bochum Max Planck Institute directors People from Oviedo Physical chemists University of Central Florida faculty Women chemical engineers
Beatriz Roldán Cuenya
Chemistry
840
4,047,952
https://en.wikipedia.org/wiki/Mirex
Mirex is an organochloride that was commercialized as an insecticide and later banned because of its impact on the environment. This white crystalline odorless solid is a derivative of both cyclopentadiene and cubane. It was popularized to control fire ants but by virtue of chemical robustness and lipophilicity it was recognized as a bioaccumulative pollutant. The spread of the red imported fire ant was encouraged by the use of mirex, which also kills native ants that are highly competitive with the fire ants. The United States Environmental Protection Agency prohibited its use in 1976. It is prohibited by the Stockholm Convention on Persistent Organic Pollutants. Production and applications Mirex was first synthesized in 1946, but was not used in pesticide formulations until 1955. Mirex was produced by the dimerization of hexachlorocyclopentadiene in the presence of aluminium chloride. Mirex is a stomach insecticide, meaning that it must be ingested by the organism in order to poison it. The insecticidal use was focused on Southeastern United States to control fire ants. Approximately 250,000 kg of mirex were applied to fields between 1962 and 1975 (US NRC, 1978). Most of the mirex was in the form of "4X mirex bait", which consists of 0.3% mirex in 14.7% soybean oil mixed with 85% corncob grits. Application of the 4X bait was designed to give a coverage of 4.2 g mirex/ha and was delivered by aircraft, helicopter or tractor. 1x and 2x bait were also used. Use of mirex as a pesticide was banned in 1978. The Stockholm Convention banned production and use of several persistent organic pollutants, and mirex is one of the "dirty dozen". Degradation Much like other perchlorocarbons such as carbon tetrachloride, mirex does not burn easily; pyrolysis products are expected to include carbon dioxide, carbon monoxide, hydrogen chloride, chlorine, phosgene, and possibly other organochlorine species. Slow oxidation of mirex can be used to produce chlordecone ("Kepone"), a related insecticide that is also banned in most of the western world, but is more readily biodegraded. Sunlight degrades mirex to photomirex (8-monohydromirex) and 2,8-dihydromirex. Mirex is highly resistant to microbiological degradation. It only slowly dechlorinates to a monohydro derivative by anaerobic microbial action in sewage sludge and by enteric bacteria. Degradation by soil microorganisms has not been described. Bioaccumulation and biomagnification Mirex is highly cumulative and amount depends upon the concentration and duration of exposure. There is evidence of accumulation of mirex in aquatic and terrestrial food chains to harmful levels. After 6 applications of mirex bait at 1.4 kg/ha, high mirex levels were found in some species; turtle fat contained 24.8 mg mirex/kg, kingfishers, 1.9 mg/kg, coyote fat, 6 mg/kg, opossum fat, 9.5 mg/kg, and racoon fat, 73.9 mg/kg. In a model ecosystem with a terrestrial-aquatic interface, sorghum seedlings were treated with mirex at 1.1 kg/ha. Caterpillars fed on these seedlings and their faeces contaminated the water which contained algae, snails, Daphnia, mosquito larvae, and fish. After 33 days, the ecological magnification value was 219 for fish and 1165 for snails. Although general environmental levels are low, it is widespread in the biotic and abiotic environment. Being lipophilic, mirex is strongly adsorbed on sediments. Safety Mirex is only moderately toxic in single-dose animal studies (oral values range from 365–3000 mg/kg body weight). 
It can enter the body via inhalation, ingestion, and via the skin. The most sensitive effects of repeated exposure in animals are principally associated with the liver, and these effects have been observed with doses as low as 1.0 mg/kg diet (0.05 mg/kg body weight per day), the lowest dose tested. At higher dose levels, it is fetotoxic (25 mg/kg in diet) and teratogenic (6.0 mg/kg per day). Mirex was not generally active in short-term tests for genetic activity. There is sufficient evidence of its carcinogenicity in mice and rats. Delayed onset of toxic effects and mortality is typical of mirex poisoning. Mirex is toxic for a range of aquatic organisms, with crustacea being particularly sensitive. Mirex induces pervasive chronic physiological and biochemical disorders in various vertebrates. No acceptable daily intake (ADI) for mirex has been advised by FAO/WHO. IARC (1979) evaluated mirex's carcinogenic hazard and concluded that "there is sufficient evidence for its carcinogenicity to mice and rats. In the absence of adequate data in humans, based on above result it can be said, that it has carcinogenic risk to humans". Data on human health effects do not exist . Health effects Per a 1995 ATSDR report mirex caused fatty changes in the livers, hyperexcitability and convulsion, and inhibition of reproduction in animals. It is a potent endocrine disruptor, interfering with estrogen-mediated functions such as ovulation, pregnancy, and endometrial growth. It also induced liver cancer by interaction with estrogen in female rodents. References Further reading International Organization for the Management of Chemicals (IOMC), 1995, POPs Assessment Report, December.1995. Lambrych KL, and JP Hassett. Wavelength-Dependent Photoreactivity of Mirex in Lake Ontario. Environ. Sci. Technol. 2006, 40, 858-863 Mirex Health and Safety Guide. IPCS International Program on Chemical Safety. Health and Safety Guide No.39. 1990 Toxicological Review of Mirex. In support of summary information on the Integrated Risk Information System (IRIS) 2003. U.S. Environmental Protection Agency, Washington DC. Obsolete pesticides Organochloride insecticides IARC Group 2B carcinogens Endocrine disruptors Persistent organic pollutants under the Stockholm Convention Fetotoxicants Teratogens Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution Cyclobutanes Perchlorocarbons
Mirex
Chemistry
1,394
590,995
https://en.wikipedia.org/wiki/Intermodulation
Intermodulation (IM) or intermodulation distortion (IMD) is the amplitude modulation of signals containing two or more different frequencies, caused by nonlinearities or time variance in a system. The intermodulation between frequency components will form additional components at frequencies that are not just at harmonic frequencies (integer multiples) of either, like harmonic distortion, but also at the sum and difference frequencies of the original frequencies and at sums and differences of multiples of those frequencies. Intermodulation is caused by non-linear behaviour of the signal processing (physical equipment or even algorithms) being used. The theoretical outcome of these non-linearities can be calculated by generating a Volterra series of the characteristic, or more approximately by a Taylor series. Practically all audio equipment has some non-linearity, so it will exhibit some amount of IMD, which however may be low enough to be imperceptible by humans. Due to the characteristics of the human auditory system, the same percentage of IMD is perceived as more bothersome when compared to the same amount of harmonic distortion. Intermodulation is also usually undesirable in radio, as it creates unwanted spurious emissions, often in the form of sidebands. For radio transmissions this increases the occupied bandwidth, leading to adjacent channel interference, which can reduce audio clarity or increase spectrum usage. IMD is only distinct from harmonic distortion in that the stimulus signal is different. The same nonlinear system will produce both total harmonic distortion (with a solitary sine wave input) and IMD (with more complex tones). In music, for instance, IMD is intentionally applied to electric guitars using overdriven amplifiers or effects pedals to produce new tones at subharmonics of the tones being played on the instrument. See Power chord#Analysis. IMD is also distinct from intentional modulation (such as a frequency mixer in superheterodyne receivers) where signals to be modulated are presented to an intentional nonlinear element (multiplied). See non-linear mixers such as mixer diodes and even single-transistor oscillator-mixer circuits. However, while the intermodulation products of the received signal with the local oscillator signal are intended, superheterodyne mixers can, at the same time, also produce unwanted intermodulation effects from strong signals near in frequency to the desired signal that fall within the passband of the receiver. Causes of intermodulation A linear time-invariant system cannot produce intermodulation. If the input of a linear time-invariant system is a signal of a single frequency, then the output is a signal of the same frequency; only the amplitude and phase can differ from the input signal. Non-linear systems generate harmonics in response to sinusoidal input, meaning that if the input of a non-linear system is a signal of a single frequency f, then the output is a signal which includes a number of integer multiples of the input frequency (i.e. some of f, 2f, 3f, 4f, and so on). Intermodulation occurs when the input to a non-linear system is composed of two or more frequencies. Consider an input signal that contains three frequency components at f1, f2, and f3, which may be expressed as x(t) = M1 sin(2πf1t + φ1) + M2 sin(2πf2t + φ2) + M3 sin(2πf3t + φ3), where the Mi and φi are the amplitudes and phases of the three components, respectively.
We obtain our output signal, y(t), by passing our input through a non-linear function G, so that y(t) = G(x(t)). The output y(t) will contain the three frequencies of the input signal, f1, f2, and f3 (which are known as the fundamental frequencies), as well as a number of linear combinations of the fundamental frequencies, each in the form k1f1 + k2f2 + k3f3, where k1, k2, and k3 are arbitrary integers which can assume positive or negative values. These are the intermodulation products (or IMPs). In general, each of these frequency components will have a different amplitude and phase, which depends on the specific non-linear function being used, and also on the amplitudes and phases of the original input components. More generally, given an input signal containing an arbitrary number of frequency components f1, f2, ..., fN, the output signal will contain a number of frequency components, each of which may be described by k1f1 + k2f2 + ... + kNfN, where the coefficients k1, k2, ..., kN are arbitrary integer values. Intermodulation order The order of a given intermodulation product is the sum of the absolute values of the coefficients, |k1| + |k2| + ... + |kN|. For example, in our original example above, third-order intermodulation products (IMPs) occur where |k1| + |k2| + |k3| = 3, such as 2f1 − f2, 2f2 − f1, 2f1 − f3, 2f3 − f1, 2f2 − f3, 2f3 − f2, f1 + f2 − f3, f1 + f3 − f2, and f2 + f3 − f1. In many radio and audio applications, odd-order IMPs are of most interest, as they fall within the vicinity of the original frequency components, and may therefore interfere with the desired behaviour. For example, intermodulation distortion from the third order (IMD3) of a circuit can be seen by looking at a signal that is made up of two sine waves, one at f1 and one at f2. When you cube the sum of these sine waves you will get sine waves at various frequencies, including 2f1 − f2 and 2f2 − f1. If f1 and f2 are large but very close together, then 2f1 − f2 and 2f2 − f1 will be very close to f1 and f2. Passive intermodulation (PIM) As explained in a previous section, intermodulation can only occur in non-linear systems. Non-linear systems are generally composed of active components, meaning that the components must be biased with an external power source which is not the input signal (i.e. the active components must be "turned on"). Passive intermodulation (PIM), however, occurs in passive devices (which may include cables, antennas etc.) that are subjected to two or more high power tones. The PIM product is the result of the two (or more) high power tones mixing at device nonlinearities such as junctions of dissimilar metals or metal-oxide junctions, such as loose corroded connectors. The higher the signal amplitudes, the more pronounced the effect of the nonlinearities, and the more prominent the intermodulation that occurs — even though upon initial inspection, the system would appear to be linear and unable to generate intermodulation. The requirement for "two or more high power tones" need not be discrete tones. Passive intermodulation can also occur between different frequencies (i.e. different "tones") within a single broadband carrier. These PIMs would show up as sidebands in a telecommunication signal, which interfere with adjacent channels and impede reception. Passive intermodulation is a major concern in modern communication systems in cases when a single antenna is used for both high power transmission signals and low power receive signals (or when a transmit antenna is in close proximity to a receive antenna). Although the power in the passive intermodulation signal is typically many orders of magnitude lower than the power of the transmit signal, the power in the passive intermodulation signal is often on the same order of magnitude as (and possibly higher than) the power of the receive signal.
Therefore, if a passive intermodulation finds its way to receive path, it cannot be filtered or separated from the receive signal. The receive signal would therefore be clobbered by the passive intermodulation signal. Sources of passive intermodulation Ferromagnetic materials are the most common materials to avoid and include ferrites, nickel, (including nickel plating) and steels (including some stainless steels). These materials exhibit hysteresis when exposed to reversing magnetic fields, resulting in PIM generation. Passive intermodulation can also be generated in components with manufacturing or workmanship defects, such as cold or cracked solder joints or poorly made mechanical contacts. If these defects are exposed to high radio frequency currents, passive intermodulation can be generated. As a result, radio frequency equipment manufacturers perform factory PIM tests on components, to eliminate passive intermodulation caused by these design and manufacturing defects. Passive intermodulation can also be inherent in the design of a high power radio frequency component where radio frequency current is forced to narrow channels or restricted. In the field, passive intermodulation can be caused by components that were damaged in transit to the cell site, installation workmanship issues and by external passive intermodulation sources. Some of these include: Contaminated surfaces or contacts due to dirt, dust, moisture or oxidation. Loose mechanical junctions due to inadequate torque, poor alignment or poorly prepared contact surfaces. Loose mechanical junctions caused during transportation, shock or vibration. Metal flakes or shavings inside radio frequency connections. Inconsistent metal-to-metal contact between radio frequency connector surfaces caused by any of the following: Trapped dielectric materials (adhesives, foam, etc.), cracks or distortions at the end of the outer conductor of coaxial cables, often caused by overtightening the back nut during installation, solid inner conductors distorted in the preparation process, hollow inner conductors excessively enlarged or made oval during the preparation process. Passive intermodulation can also occur in connectors, or when conductors made of two galvanically unmatched metals come in contact with each other. Nearby metallic objects in the direct beam and side lobes of the transmit antenna including rusty bolts, roof flashing, vent pipes, guy wires, etc. Passive intermodulation testing IEC 62037 is the international standard for passive intermodulation testing and gives specific details as to passive intermodulation measurement setups. The standard specifies the use of two +43 dBm (20 W) tones for the test signals for passive intermodulation testing. This power level has been used by radio frequency equipment manufacturers for more than a decade to establish PASS / FAIL specifications for radio frequency components. Intermodulation in electronic circuits Slew-induced distortion (SID) can produce intermodulation distortion (IMD) when the first signal is slewing (changing voltage) at the limit of the amplifier's power bandwidth product. This induces an effective reduction in gain, partially amplitude-modulating the second signal. If SID only occurs for a portion of the signal, it is called "transient" intermodulation distortion. 
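The third-order products discussed above can be reproduced numerically before turning to formal measurement standards. The sketch below passes two closely spaced tones through a memoryless cubic nonlinearity and inspects the spectrum; the tone frequencies, amplitudes and the 0.1 coefficient are arbitrary illustrative choices:

```python
import numpy as np

fs = 48_000                       # sample rate in Hz
t = np.arange(fs) / fs            # one second of samples
f1, f2 = 1_000.0, 1_200.0         # two closely spaced test tones

x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = x + 0.1 * x**3                # memoryless nonlinearity: small cubic term

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=1 / fs)

# The six strongest components are f1 and f2 plus four third-order products:
# 2*f1 - f2 = 800 Hz, 2*f2 - f1 = 1400 Hz, 2*f1 + f2 = 3200 Hz, 2*f2 + f1 = 3400 Hz.
peaks = freqs[np.argsort(spectrum)[-6:]]
print(sorted(peaks.round(1)))
```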
Measurement Intermodulation distortion in audio is usually specified as the root mean square (RMS) value of the various sum-and-difference signals as a percentage of the original signal's root mean square voltage, although it may be specified in terms of individual component strengths, in decibels, as is common with radio frequency work. Audio system measurements (Audio IMD) include SMPTE standard RP120-1994 where two signals (at 60 Hz and 7 kHz, with 4:1 amplitude ratios) are used for the test; many other standards (such as DIN, CCIF) use other frequencies and amplitude ratios. Opinion varies over the ideal ratio of test frequencies (e.g. 3:4, or almost — but not exactly — 3:1 for example). After feeding the equipment under test with low distortion input sinewaves, the output distortion can be measured by using an electronic filter to remove the original frequencies, or spectral analysis may be made using Fourier transformations in software or a dedicated spectrum analyzer, or when determining intermodulation effects in communications equipment, may be made using the receiver under test itself. In radio applications, intermodulation may be measured as adjacent channel power ratio. Hard to test are intermodulation signals in the GHz-range generated from passive devices (PIM: passive intermodulation). Manufacturers of these scalar PIM-instruments are Summitek and Rosenberger. The newest developments are PIM-instruments to measure also the distance to the PIM-source. Anritsu offers a radar-based solution with low accuracy and Heuermann offers a frequency converting vector network analyzer solution with high accuracy. See also Beat (acoustics) Audio system measurements Second-order intercept point (SOI) Third-order intercept point (TOI), a metric of an amplifier or system related to intermodulation Luxemburg–Gorky effect References Further reading Audio amplifier specifications Waves Radio electronics
Intermodulation
Physics,Engineering
2,464
1,300,441
https://en.wikipedia.org/wiki/Multiflow
Multiflow Computer, Inc., founded in April, 1984 near New Haven, Connecticut, USA, was a manufacturer and seller of minisupercomputer hardware and software embodying the VLIW design style. Multiflow, incorporated in Delaware, ended operations in March, 1990, after selling about 125 VLIW minisupercomputers in the United States, Europe, and Japan. While Multiflow's commercial success was small and short-lived, its technical success and the dissemination of its technology and people had a great effect on the future of computer science and the computer industry. Multiflow's computers were arguably the most novel ever to be broadly sold, programmed, and used like conventional computers. (Other novel computers either required novel programming, or represented more incremental steps beyond existing computers.) Along with Cydrome, an attached-VLIW minisupercomputer company that had less commercial success, Multiflow demonstrated that the VLIW design style was practical, a conclusion surprising to many. While still controversial, VLIW has since been a force in high-performance embedded systems, and has been finding slow acceptance in general-purpose computing. Early history Technology roots The VLIW (for Very Long Instruction Word) design style was first proposed by Joseph A. (Josh) Fisher, a Yale University computer science professor, during the period 1979-1981. VLIW was motivated by a compiler scheduling technique, called trace scheduling, that Fisher had developed as a graduate student at the Courant Institute of Mathematical Sciences of New York University in 1978. Trace scheduling, unlike any prior compiler technique, exposed significant quantities of instruction-level parallelism (ILP) in ordinary computer programs, without laborious hand coding. This implied the practicality of processors for which the compiler could be relied upon to find and specify ILP. VLIW was put forward by Fisher as a way to build general-purpose instruction-level parallel processors exploiting ILP to a degree that would have been impractical using what would later be called superscalar control hardware. Instead, the compiler could, in advance, arrange the ILP to be carried out nearly in lock-step by the hardware, commanded by long instructions or a similar mechanism. While there had previously been processors that achieved significant amounts of ILP, they had all relied upon code laboriously hand-parallelized by the user, or upon library routines, and thus were not general-purpose computers and did not fit the VLIW paradigm. The practicality of trace scheduling was demonstrated by a compiler built at Yale by Fisher and three of his graduate students, John Ruttenberg, Alexandru Nicolau, and especially John Ellis, whose doctoral dissertation on the compiler won the ACM Doctoral Dissertation Award in 1985. Encouraged by their compiling progress, Fisher's group started an architecture and hardware design effort called the ELI (Enormously Long Instructions) Project. Business beginnings ELI, which was to have 512-bit instruction words and initiate 10-30 RISC operations per cycle, was never built. Instead, Fisher, Ruttenberg, and John O'Donnell, who had led the ELI hardware project, started Multiflow in 1984 after failing to interest any mainstream computer companies in partnering in the ELI project. 
Originally, Multiflow was to have become a division of the workstation company Apollo Computer, but eventually it sought venture capital funding, closing its first round of financing in January, 1985, when the company already had about 20 employees. Donald E. Eckdahl, a former head of the NCR computer division, joined the company in 1985 as its CEO. Multiflow delivered its first working VLIW minisupercomputers in early 1987 to three beta-sites: Grumman Aircraft, Sikorsky Helicopter, and the Supercomputer Research Center. A Trace 14/200 was demonstrated to the public at a supercomputing conference in May, 1987, in Santa Clara, California. Technology Innovative architecture Multiflow's first computers were called the Trace 7/200 and Trace 14/200. The 7/ in the computer model number signified that the processor could initiate seven operations each cycle, using a 256-bit long instruction composed of 7 32-bit operations and a 32-bit utility field. The seven operations were four integer/memory, two floating, and a branch. The 14/ models had twice as many of each instruction, and thus 512-bit long instruction words. Like many scientific-oriented processors of its day, the Trace had no traditional cache memory. Multiflow also announced a 28/ model at the outset, and eventually these were built and sold to a few customers. The 28/ had 1024-bit instruction words. Having ordinary programs compiled for computers like these was unquestionably revolutionary, as no earlier computer had offered compiled ILP even like that of the 7/ models. The 28/ systems pushed these limits far beyond either academic or industrial conception. While only a few customer programs contained enough ILP to keep a 28/ busy, when they did the performance was remarkable, since the processor would then initiate close to all 28 operations on average. Hardware Each 7/ processor datapath comprised a control unit board, an integer ALU board, and a floating point board. The 14/ added a second integer ALU board and a second floating point board. Before many systems were in the field, faster 3rd party floating-point chips became available, and the /200 family was replaced by the object-code incompatible 7/300 and 14/300, and the 14/300 became by far the company's most popular model. In about 1988, a /100 entry level series was introduced as well, but these were essentially /300 systems with a slower clock. All the processors were built using CMOS gate arrays for the integer ALUs and registers, 3rd-party floating point chips, and medium-scale integrated circuits for the control and other portions. In 1988, the company started development of an ECL /500 family, which was to feature a 14/ that could also be used as a multiprocessor of two 7/ models, but that system was not completed before the company ceased operations. One example Trace system is in storage at the Computer History Museum. Innovative software Multiflow also produced the software tools for the systems it built. The systems ran Berkeley Unix. Probably, at the time the Multiflow systems were delivered, no computer that issued instructions longer than a single operation at a time had ever run a compiled mainstream operating system. Yet the entire Unix operating system and the usual tools all ran, with the usual portions compiled, on all the company's models. The compiler was particularly noteworthy, as could be expected given Multiflow's technology. 
The company built a new compiler, in a similar style to that developed at Yale, but industrial-strength and with the incorporation of much commercially-necessary capability. In addition to implementing aggressive trace scheduling, it was known for its reliability, for its incorporation of state-of-the-art optimization, and for its ability to handle simultaneously many different language variants and all of the different object-code incompatible models of the Multiflow Traces. (While code from a 7/X00 could run correctly on a 14/X00, the nature of the architecture mandated that it would have to be recompiled to run faster than it did on the 7/.) The compiler was generating correct code by 1985, and by 1987 it was producing code that found significant amounts of ILP. After 1987, with the press of customers and prospects, its development emphasized features and functionality, though performance-oriented improvement continued. The compiler was so robust, and so good at exposing ILP independent of the system it was targeted for, that after Multiflow closed, the compiler was licensed by many of the largest computer companies. It has been reported that this included Intel, Hewlett-Packard, Digital Equipment Corporation, Fujitsu, Hughes, HAL Computer Systems, and Silicon Graphics. Other companies known to have licensed the technology include Equator Technologies, Hitachi and NEC. Compilers built starting from that code base were used for advanced development and benchmark reporting for the most important superscalar processors of the 1990s. Descendants of the compiler were still in wide use 20 years after it first started generating correct code (notably, Intel's icc "Proton" compiler and the NEC Earth Simulator compiler), and are often used as benchmark targets for new compiler development. MIT and the University of Washington are among the universities that received and used the compiler for advanced research purposes. The Multiflow compiler was written in C. It pre-dated the popular use of C++ (Multiflow was a beta-site for the language). The compiler designers were strong believers in the object-oriented paradigm, however, and the compiler had a rather idiosyncratic style that encapsulated the structures and operations in it. This caused a steep learning curve for the many developers who used it after Multiflow's demise, but one that was usually considered a good investment because of the unique combination of ambitious compiling and rock-solid engineering the compiler offered. Customers and business history Customers While a few of Multiflow's sales went to organizations wishing to learn more about the new VLIW design style, most systems were used for simulation in product development environments: mechanical, aerodynamic, defense, crash dynamics, chemical, and some electronic. Customers ranged from a major metropolitan air-quality board to a major consumer detergent, food and sundries company, along with the expected heavy industry companies, research laboratories and universities. In 1987, GEI Rechnersysteme GmbH, a division of Daimler-Benz, began distributing Traces in Germany with great success, despite fierce competition from other minisupercomputer companies. In the following three years, Multiflow opened offices or had distributors in most of Western Europe and Japan, and opened offices in many US metropolitan areas. Multiflow's end Multiflow ended operations on March 27, 1990, two days after a large deal contemplated with Digital Equipment Corporation came apart. 
At that point, the board determined that the prospects for successful additional financing, in the amounts necessary to bring Multiflow to maturity, were too unlikely to justify the company's continuation. Multiflow's failure is often blamed anecdotally on “good technology, but bad marketing,” on “good software, but slow, conservative hardware,” on some property of its innovative technology, or even on the isolated location of its headquarters. The more likely cause was that its business plan was incompatible with seismic shifts in the computer industry. Building a full-scale, general-purpose computer company seemed to require many hundreds of millions of dollars (US) by 1990. But the killer micro revolution meant there would be a steady march of ever faster and cheaper competition. The economies inherent in microprocessors were inaccessible to startups in general, and incompatible with VLIWs, which would have required too much silicon for the densities of the time. (The first VLIW microprocessor was the Philips Life, the ancestor of today's TriMedia, delivered several years later.) Since the founding of Sun and SGI in the early 1980s, no new general-purpose computer company has succeeded without building computers for which there was an existing large software base, and none of the many minisupercomputer startup companies of the 1980s eventually succeeded. Corporate culture Multiflow was staffed by engineers, computer scientists, and other computer professionals who were attracted to the combination of a novel and challenging technology, an uphill battle, and the remarkable social experience of working in the most uniformly talented group they were ever likely to be a part of. The system was so novel that its engineering was widely expected to fail. Despite that, even though none of the employees (besides Eckdahl) had ever held senior engineering positions, Trace systems and their software were delivered on time, were robust, and exceeded their promised performance. In great part this was due to the talent level of those attracted to the company, and to the tremendous learning environment it was from the outset. Following Multiflow's closing, its employees went on to have a widespread effect on the industry. The small core group of engineers and scientists, numbering about 20, produced four fellows in major American computer companies (two of whom were Eckert-Mauchly Award winners), several founders of successful startups, and leaders of major development efforts at large companies. The only nontechnical person in the core group, hired out of business school, went on to lead corporate development at a major research lab. As Multiflow grew, it continued the tradition of hiring highly talented people: as one example, the documentation writer became one of the most influential editors in computer publishing. Multiflow's effect on the computer industry was very much its people in addition to its technology. 
External links Book on the history of Multiflow Architecture and implementation of a VLIW supercomputer A VLIW architecture for a trace scheduling compiler The Multiflow trace scheduling compiler Embedded/VLIW book with much Multiflow-related content Very Long Instruction Word architectures and the ELI-512 Parallel processing: a smart compiler and a dumb machine Bulldog: a compiler for vliw architectures 1984 establishments in Connecticut 1990 disestablishments in Connecticut American companies established in 1984 American companies disestablished in 1990 Computer companies established in 1984 Computer companies disestablished in 1990 Defunct computer companies of the United States Defunct computer hardware companies Defunct computer systems companies Defunct software companies of the United States Software companies based in Connecticut Software companies established in 1984 Software companies disestablished in 1990 Supercomputers Technology companies established in 1984 Technology companies disestablished in 1990 Very long instruction word computing
Multiflow
Technology
2,813
76,419,161
https://en.wikipedia.org/wiki/Spiroketals
In chemistry, spiroketals are structural motifs composed of two heterocycles sharing one central carbon atom, which makes them a subclass of spiro compounds. Their structural specificity lies in the presence of one oxygen atom in each ring, alpha to the spiro carbon. Although there are no rules about the size of each ring, the most widely encountered spiroketals are composed of five- and six-membered rings. Occurrence in nature Many natural products of biological interest contain [6,5]- and [6,6]-spiroketal moieties that can adopt various configurations. The first examples of spiroketals in the literature appeared before 1970, such as the triterpenoid saponins and sapogenins. Several later works described the presence of spiroketals in various compounds, such as the diarrheic shellfish poisoning (DSP) class of toxins, which includes okadaic acid and acanthifolicin. Among the most notable naturally occurring spiroketals is the whole range of fruit fly pheromones. Pharmacology interest Owing to its non-planar substructure, the spiroketal motif has gained interest in academic and industrial pharmaceutical research, both in structure-based drug design (SBDD) and in the development of screening libraries. Avermectins, which are found in fungi, are antiparasitic drugs. The avermectins appear to paralyze nematodes and arthropods by potentiating the presynaptic release of gamma-aminobutyric acid, thereby blocking post-synaptic transmission of nerve impulses. Tofogliflozin is an inhibitor of human sodium glucose cotransporter 2 (hSGLT2) and was approved in 2014 in Japan for the treatment of type 2 diabetes. Chemical synthesis Acid-catalyzed spiroketalisation The most widely employed method for closing the spiroketal ring is hydrolysis of the dihydroxy ketal under acidic conditions, but this method does not grant stereocontrol. Thus, several other methods have emerged in order to control the stereoselectivity of the spirocyclisation. Notes References Heterocyclic compounds
Spiroketals
Chemistry
462
58,666,321
https://en.wikipedia.org/wiki/Numerical%20certification
Numerical certification is the process of verifying the correctness of a candidate solution to a system of equations. In (numerical) computational mathematics, such as numerical algebraic geometry, candidate solutions are computed algorithmically, but there is the possibility that errors have corrupted the candidates. For instance, in addition to the inexactness of input data and candidate solutions, numerical errors or errors in the discretization of the problem may result in corrupted candidate solutions. The goal of numerical certification is to provide a certificate which proves which of these candidates are, indeed, approximate solutions. Methods for certification can be divided into two flavors: a priori certification and a posteriori certification. A posteriori certification confirms the correctness of the final answers (regardless of how they are generated), while a priori certification confirms the correctness of each step of a specific computation. A typical example of a posteriori certification is Smale's alpha theory, while a typical example of a priori certification is interval arithmetic. Certificates A certificate for a root is a computational proof of the correctness of a candidate solution. For instance, a certificate may consist of an approximate solution x, a region R containing x, and a proof that R contains exactly one solution to the system of equations. In this context, an a priori numerical certificate is a certificate in the sense of correctness in computer science. On the other hand, an a posteriori numerical certificate operates only on solutions, regardless of how they are computed. Hence, a posteriori certification is different from algorithmic correctness – for an extreme example, an algorithm could randomly generate candidates and attempt to certify them as approximate roots using a posteriori certification. A posteriori certification methods There are a variety of methods for a posteriori certification, including: Alpha theory The cornerstone of Smale's alpha theory is bounding the error for Newton's method. Smale's 1986 work introduced the quantity α(f, x), which quantifies the convergence of Newton's method. More precisely, let f be a system of analytic functions in the variables x, D the derivative operator, and N the Newton operator. The quantities β(f, x) = ||Df(x)^(-1) f(x)|| (the length of the Newton step at x), γ(f, x) = sup over k ≥ 2 of ||Df(x)^(-1) D^k f(x)/k!||^(1/(k-1)), and α(f, x) = β(f, x)γ(f, x) are used to certify a candidate solution x. In particular, if α(f, x) < (13 − 3√17)/4 ≈ 0.1577, then x is an approximate solution for f, i.e., the candidate is in the domain of quadratic convergence for Newton's method. In other words, if this inequality holds, then there is a root x* of f so that the iterates of the Newton operator converge as ||N^k(x) − x*|| ≤ (1/2)^(2^k − 1) ||x − x*||. The software package alphaCertified provides an implementation of the alpha test for polynomials by estimating β and γ. Interval Newton and Krawczyk methods Suppose G is a function whose fixed points correspond to the roots of f. For example, the Newton operator N has this property. Suppose that R is a region; then: If G maps R into itself, i.e., G(R) ⊆ R, then by the Brouwer fixed-point theorem, G has at least one fixed point in R, and, hence, f has at least one root in R. If G is contractive in a region containing R, then there is at most one root in R. There are versions of the following methods over the complex numbers, but both the interval arithmetic and conditions must be adjusted to reflect this case. Interval Newton method In the univariate case, Newton's method can be directly generalized to certify a root over an interval. For an interval J, let m(J) be the midpoint of J.
Then, the interval Newton operator applied to is In practice, any interval containing can be used in this computation. If is a root of , then by the mean value theorem, there is some such that . In other words, . Since contains the inverse of at all points of , it follows that . Therefore, . Furthermore, if , then either is a root of and or . Therefore, is at most half the width of . Therefore, if there is some root of in , the iterative procedure of replacing by will converge to this root. If, on the other hand, there is no root of in , this iterative procedure will eventually produce an empty interval, a witness to the nonexistence of roots. See interval Newton method for higher dimensional analogues of this approach. Krawczyck method Let be any invertible matrix in . Typically, one takes to be an approximation to . Then, define the function We observe that is a fixed of if and only if is a root of . Therefore the approach above can be used to identify roots of . This approach is similar to a multivariate version of Newton's method, replacing the derivative with the fixed matrix . We observe that if is a compact and convex region and , then, for any , there exist such that Let be the Jacobian matrix of evaluated on . In other words, the entry consists of the image of over . It then follows that where the matrix-vector product is computed using interval arithmetic. Then, allowing to vary in , it follows that the image of on satisfies the following containment: where the calculations are, once again, computed using interval arithmetic. Combining this with the formula for , the result is the Krawczyck operator where is the identity matrix. If , then has a fixed point in , i.e., has a root in . On the other hand, if the maximum matrix norm using the supremum norm for vectors of all matrices in is less than , then is contractive within , so has a unique fixed point. A simpler test, when is an axis-aligned parallelepiped, uses , i.e., the midpoint of . In this case, there is a unique root of if where is the length of the longest side of . Miranda test Miranda test (Yap, Vegter, Sharma) A priori certification methods Interval arithmetic (Moore, Arb, Mezzarobba) Condition numbers (Beltran–Leykin) Interval arithmetic Interval arithmetic can be used to provide an a priori numerical certificate by computing intervals containing unique solutions. By using intervals instead of plain numeric types during path tracking, resulting candidates are represented by intervals. The candidate solution-interval is itself the certificate, in the sense that the solution is guaranteed to be inside the interval. Condition numbers Numerical algebraic geometry solves polynomial systems using homotopy continuation and path tracking methods. By monitoring the condition number for a tracked homotopy at every step, and ensuring that no two solution paths ever intersect, one can compute a numerical certificate along with a solution. This scheme is called a priori path tracking. Non-certified numerical path tracking relies on heuristic methods for controlling time step size and precision. In contrast, a priori certified path tracking goes beyond heuristics to provide step size control that guarantees that for every step along the path, the current point is within the domain of quadratic convergence for the current path. References Algebraic geometry Nonlinear algebra
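The univariate interval Newton test described above can be illustrated with a short sketch. The following Python fragment is not alphaCertified or any published package; it uses a deliberately crude enclosure of f'(I) (endpoint and midpoint samples rather than rigorous interval arithmetic with outward rounding), and all function names are invented for the example.

def interval_newton_certify(f, fprime, lo, hi, tol=1e-12, max_iter=100):
    """Certify a root of f in [lo, hi] using the univariate interval Newton operator.

    Returns ("unique", enclosure) once N(I) is contained in I, ("no_root", None)
    if N(I) and I are disjoint, and ("unknown", enclosure) otherwise.
    """
    status = "unknown"
    for _ in range(max_iter):
        m = 0.5 * (lo + hi)                        # midpoint of the current interval
        # Crude enclosure of f'([lo, hi]) from endpoint and midpoint samples; a
        # rigorous tool would evaluate f' in interval arithmetic with outward rounding.
        d = [fprime(lo), fprime(m), fprime(hi)]
        d_lo, d_hi = min(d), max(d)
        if d_lo <= 0.0 <= d_hi:
            break                                  # cannot divide by an interval containing 0
        fm = f(m)
        q_lo, q_hi = sorted((fm / d_lo, fm / d_hi))
        n_lo, n_hi = m - q_hi, m - q_lo            # N(I) = m - f(m) / f'(I)
        if n_hi < lo or n_lo > hi:
            return "no_root", None                 # N(I) and I are disjoint: no root in I
        if lo <= n_lo and n_hi <= hi:
            status = "unique"                      # N(I) is contained in I: exactly one root
        lo, hi = max(lo, n_lo), min(hi, n_hi)      # contract to the intersection of I and N(I)
        if hi - lo < tol:
            break
    return status, (lo, hi)

# Example: certify the root of x^2 - 2 (that is, sqrt(2)) inside [1, 2].
print(interval_newton_certify(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0, 2.0))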
Numerical certification
Mathematics
1,384
67,310
https://en.wikipedia.org/wiki/Disk%20read-and-write%20head
A disk read-and-write head is the small part of a disk drive that moves above the disk platter and transforms the platter's magnetic field into electric current (reads the disk) or, vice versa, transforms electric current into magnetic field (writes the disk). The heads have gone through a number of changes over the years. In a hard drive, the heads fly above the disk surface with clearance of as little as 3 nanometres. The flying height has been decreasing with each new generation of technology to enable higher areal density. The flying height of the head is controlled by the design of an air bearing etched onto the disk-facing surface of the slider. The role of the air bearing is to maintain the flying height constant as the head moves over the surface of the disk. The air bearings are carefully designed to maintain the same height across the entire platter, despite differing speeds depending on the head distance from the center of the platter. If the head hits the disk's surface, a catastrophic head crash can result. The heads often have a diamond-like carbon coating. Inductive heads Inductive heads use the same element for both reading and writing. Traditional head The heads themselves started out similar to the heads in tape recorders—simple devices made out of a tiny C-shaped piece of highly magnetizable material such as permalloy or ferrite wrapped in a fine wire coil. When writing, the coil is energized, a strong magnetic field forms in the gap of the C, and the recording surface adjacent to the gap is magnetized. When reading, the magnetized material rotates past the heads, the ferrite core concentrates the field, and a current is generated in the coil. In the gap the field is very strong and quite narrow. That gap is roughly equal to the thickness of the magnetic media on the recording surface. The gap determines the minimum size of a recorded area on the disk. Ferrite heads are large, and write fairly large features. They must also be flown fairly far from the surface thus requiring stronger fields and larger heads. Metal-in-gap (MIG) heads Metal-in-gap (MIG) heads are ferrite heads with a small piece of metal in the head gap that concentrates the field. This allows smaller features to be read and written. MIG heads were replaced by thin-film heads. Thin-film heads First introduced in 1979 on the IBM 3370 disk drive, thin-film technology uses photolithographic techniques similar to those used on semiconductor devices to fabricate hard drive heads. At the time, these heads had smaller size and greater precision than the ferrite-based heads then in use; they were electronically similar to them and used the same physics. Thin layers of magnetic (Ni–Fe), insulating, and copper coil wiring materials were built on ceramic substrates that were then physically separated into individual read/write heads integrated with their air bearing, significantly reducing the manufacturing cost per unit. Thin-film heads were much smaller than MIG heads and therefore allowed smaller recorded features to be used. Thin-film heads allowed 3.5 inch drives to reach 4 GB storage capacities in 1995. The geometry of the head gap was a compromise between what worked best for reading and what worked best for writing. Magnetoresistive heads (MR heads) The next head improvement in head design was to separate the writing element from the reading element allowing the optimization of a thin-film element for writing and a separate thin-film head element for reading. 
The separate read element uses the magnetoresistive (MR) effect, which changes the resistance of a material in the presence of a magnetic field. These MR heads are able to read very small magnetic features reliably, but cannot be used to create the strong field needed for writing. The term AMR (anisotropic MR) distinguishes it from the later improvements in MR technology, GMR (giant magnetoresistance) and TMR (tunneling magnetoresistance). The transition to perpendicular magnetic recording (PMR) media has major implications for the write process and the write element of the head structure, but less so for the MR read sensor. AMR heads The introduction of the AMR head in 1990 by IBM led to a period of rapid areal density increases of about 100% per year. GMR heads In 1997, GMR (giant magnetoresistive) heads started to replace AMR heads. Since the 1990s, a number of studies have been done on the effects of colossal magnetoresistance (CMR), which may allow for even greater increases in density, but so far it has not led to practical applications because it requires low temperatures and large equipment sizes. TMR heads In 2004, the first drives to use tunneling MR (TMR) heads were introduced by Seagate, allowing 400 GB drives with three disk platters. Seagate introduced TMR heads featuring integrated microscopic heater coils to control the shape of the transducer region of the head during operation. The heater can be activated prior to the start of a write operation to ensure proximity of the write pole to the disk medium. This improves the written magnetic transitions by ensuring that the head's write field fully saturates the magnetic disk medium. The same thermal actuation approach can be used to temporarily decrease the separation between the disk medium and the read sensor during the readback process, thus improving signal strength and resolution. By mid-2006, other manufacturers had begun to use similar approaches in their products. See also Applied Magnetics Corporation, once the largest supplier of disk heads Tape head References External links The PC Guide: Function of the Read/Write Heads IBM Research: GMR introduction, animations Hitachi Global Storage Technologies: Recording Head Materials Computer storage devices Hard disk computer storage Magnetic devices Rotating disc computer storage media
Disk read-and-write head
Technology
1,219
12,422,978
https://en.wikipedia.org/wiki/C4H10O3
{{DISPLAYTITLE:C4H10O3}} The molecular formula C4H10O3 (molar mass: 106.12 g/mol, exact mass: 106.0630 u) may refer to: 1,2,4-Butanetriol Diethyl ether peroxide Diethylene glycol (DEG) Trimethyl orthoformate (TMOF)
C4H10O3
Chemistry
89
6,037,050
https://en.wikipedia.org/wiki/Ground-directed%20bombing
Ground-directed bombing (GDB) is a military tactic for airstrikes by ground-attack aircraft, strategic bombers, and other equipped air vehicles under command guidance from aviation ground support equipment and/or ground personnel (e.g., ground observers). Often used in poor weather and at night (75% of all Vietnam War bombings "were done with GDB"), the tactic was superseded by an airborne computer predicting unguided bomb impact from data provided by precision avionics (e.g., GPS, GPS/INS, etc.) Equipment for radar GDB generally included a combination ground radar/computer/communication system ("Q" system) and aircraft avionics for processing radioed commands. A 21st century variant of ground-directed bombing is the radio command guidance for armed unmanned aerial vehicles to effect ground-directed release of ordnance (e.g., precision-guided munitions for bombing such as the AGM-114 Hellfire). World War II In early 1945, ground-directed bombing was invented by Lt Col Reginald Clizbe, deputy commander of the 47th Bombardment Group (Light), using automatic tracking radar in Northern Italy for A-26C missions (e.g., in the Po Valley). Development was by a team that included Donald H. Falkingham (who was awarded the Air Medal) that modified radar plotting to transmit control commands to the pilot direction indicator (bomb release was eventually automated from the ground radar). Similar to the ground training configuration in the US for bombardiers with the Norden bombsight, in a tent near the SCR-284 radar a bombsight was automatically positioned over a large map by the plotting signals converted from the radar track's spherical coordinates from the SCR-284 ranging and antenna pointing circuits. The guidance signals output from the moving bombsight as it viewed the map were then relayed to the aircraft as if the bombsight were on board (e.g., to a 1945 AN/ARA-17 Release Point Indicator). Post-war, ground radar command guidance was common for missiles designed to bombard ground targets, such as the AN/MSQ-1A with alternating current analog computer initially used for guidance of the MGM-1 Matador (the Republic-Ford JB-2 Loon had used ground radar guidance in 1945, and a few V-2s bombarding England used radio control in 1944.) Korean War Korean War GDB equipment of the United States Marine Corps included the AN/MPQ-14, and GBD in Korea "was first tried on November 28 [1950], when a detachment of the 3903d Radar Bomb Scoring Squadron used truck-mounted AN/MPQ-2 radars [derived from the World War II SCR-584 gun laying set] to guide B–26s against enemy positions in front of the 25th Infantry Division." Three USAF RBS detachments (e.g., Det 5) commanded GDB until the 502nd Tactical Control Group "assumed control of the 3903's three MPQ-2 radar sets" in January 1951, and the radar sites "became full-scale tactical air-direction posts called Tadpoles [code]-named Hillbilly, Beverage, and Chestnut,…about ten miles behind the front lines near the command posts of the I, IX, and X Corps." On February 23, 1951, the 1st Boeing B-29 Superfortress mission controlled by an MPQ-2 was flown, and a new AN/MSQ-1 Close Support Control Set was at Yangu, Korea, by September 1951 (AN/MPS-9 with OA-132 plotting computer & board). The similar AN/MSQ-2 Close Support Control Set also developed by Rome Air Development Center (MPS-9 radar & OA-215) began arriving in 1951 (in October, one GDB detachment that hadn't been provided MSQ-2 Technical Orders mistakenly bombed itself by using MSQ-1 procedures.) 
GDB operations in Korea comprised 2,380 daylight and 204 nighttime raids, of which 900 were flown by USMC Vought F4U Corsairs. Vietnam War Vietnam War GDB equipment included the USMC AN/TPQ-10 "Course Directing Central" and the United States Air Force "Bomb Directing Centrals" with bomb ballistics computers by Reeves Instrument Corporation (AN/MSQ-77, AN/TSQ-81, & AN/TSQ-96) for Combat Skyspot. From 1966 to 1971, ASRTs controlled more than 38,010 AN/TPQ-10 missions, directing more than 121,000 tons of ordnance onto 56,753 targets (e.g., during the USMC "Operation Neutralize" bombing campaign against the North Vietnamese siege of Con Thien). In addition to Arc Light B-52 airstrikes, GDB during the war was used against Cambodian targets of Operation Menu from Bien Hoa Air Base and in Operation Niagara, while Commando Club was used for GDB of the Red River Delta (e.g., Hanoi). Late Cold War Post-Vietnam War GDB Strategic Air Command missions were occasionally flown for training and readiness, e.g., to maintain the proficiency of aircrews and of SAC's GDB-qualified technicians at 1st Combat Evaluation Group RBS sites. A new GDB system developed from the AN/TPB-1C Course Directing Central was the solid-state US Dynamics AN/TPQ-43 Bomb Scoring Set, which included optical tracking. The AN/TPQ-43 ("Seek Score") replaced the AN/MSQ-77, -81, & -96 systems at the end of the Cold War before being decommissioned in 2007, and GDB systems were also designated for use during airdrops as part of the Ground Radar Aerial Delivery System (GRADS). Iraq War References Aerial operations and battles of World War II involving the United States Aerial warfare tactics Cold War tactics Military operations of the Korean War Military operations of the Vietnam War Military radars Science and technology during World War II World War II operations and battles of the Italian Campaign Aerial operations and battles of the Korean War Aerial operations and battles of the Vietnam War
Ground-directed bombing
Technology
1,283
74,760,709
https://en.wikipedia.org/wiki/Purpureocillium%20takamizusanense
Purpureocillium takamizusanense is a species of fungus in the genus Purpureocillium in the order Hypocreales. Biology P. takamizusanense is the holomorph designation of this species. The anamorph form Isaria takamizusanensis and its teleomorph Cordyceps rogamimontana were originally described in the genera Isaria and Cordyceps, respectively, before their connection was shown in 2015. P. takamizusanense has been collected from various cicada species in Japan. These hosts include Hyalessa maculaticollis, from which the holotype was sampled in 1939, and Meimuna opalifera. References Ophiocordycipitaceae Fungi described in 1941 Fungi of Japan Fungus species
Purpureocillium takamizusanense
Biology
175
24,327,287
https://en.wikipedia.org/wiki/Rename%20%28computing%29
In computing, rename refers to the altering of a name of a file. This can be done manually by using a shell command such as ren or mv, or by using batch renaming software that can automate the renaming process. Implementations The C standard library provides a function called rename which does this action. In POSIX, which is extended from the C standard, the rename function will fail if the old and new names are on different mounted file systems. In SQL, renames are performed by using the CHANGE specification in ALTER TABLE statements. Atomic rename In POSIX, a successful call to rename is guaranteed to have been atomic from the point of view of the current host (i.e., another program would only see the file with the old name or the file with the new name, not both or neither of them). This aspect is often used during a file save operation to avoid any possibility of the file contents being lost if the save operation is interrupted. The rename function from the C library in Windows does not implement the POSIX atomic behaviour; instead it fails if the destination file already exists. However, other calls in the Windows API do implement the atomic behaviour. References Computing terminology
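The atomic-rename idiom for safe file saving described above can be sketched as follows. The example uses Python's os.replace, which provides the overwrite-on-rename behaviour on all platforms (unlike the C library rename on Windows); the file name and the minimal error handling are illustrative only.

import os
import tempfile

def atomic_save(path, data):
    """Replace the contents of 'path' so that readers see either the old or the new file."""
    directory = os.path.dirname(os.path.abspath(path))
    # Create the temporary file in the same directory, so that the final rename does
    # not cross a mounted file system (which POSIX rename() does not allow).
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(data)
            tmp.flush()
            os.fsync(tmp.fileno())       # push the new contents to stable storage
        os.replace(tmp_path, path)       # the atomic step: rename over the target
    except BaseException:
        os.unlink(tmp_path)              # discard the temporary file on any failure
        raise

atomic_save("settings.conf", b"volume=11\n")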
Rename (computing)
Technology
250
67,652,132
https://en.wikipedia.org/wiki/Evernic%20acid
Evernic acid is an organic compound and depside with the molecular formula C17H16O7. It was first isolated from the lichen Usnea longissima. Evernic acid is soluble in hot alcohol and poorly soluble in water. It is produced by lichens of the genera Ramalina, Evernia, and Hypogymnia. References Further reading Polyphenols Methoxy compounds Esters Carboxylic acids
Evernic acid
Chemistry
96
48,803,920
https://en.wikipedia.org/wiki/TJ-II
TJ-II is a flexible Heliac installed at Spain's National Fusion Laboratory. Its first plasma run was in 1998, and as of 2024 is still operational. History The flexible Heliac TJ-II was designed on the basis of calculations performed by the team of physicists and engineers of CIEMAT, in collaboration with the Oak Ridge National Laboratory (ORNL, USA) and the Max Planck Institute of Plasma Physics (IPP, Germany). The TJ-II project received preferential support from the European Atomic Energy Community (EURATOM) for phase I (Physics) in 1986 and for phase II (Engineering) in 1990. The construction of this flexible Heliac was carried out in parts according to its constitutive elements, which were commissioned to various European companies, although 60% of the investments reverted to Spanish companies. Precedents TJ-II is the third magnetic confinement device in a series. In 1983, the device TJ-I was taken into operation. The denomination of this device is due to the abbreviation of "Tokamak de la Junta de Energía Nuclear", this being the former denomination of CIEMAT. The abbreviation was maintained for successive devices for administrative reasons. In 1994, the torsatron TJ-IU was taken into operation. This was the first magnetic confinement device entirely built in Spain. Currently, TJ-IU is located at the University of Stuttgart in Germany under the name of TJ-K (the 'K' stands for Kiel, its first location in Germany, before arriving in Stuttgart). Description In TJ-II, the magnetic trap is obtained by means of various sets of coils that completely determine the magnetic surfaces before plasma initiation. The toroidal field is created by 32 coils. The three-dimensional twist of the central axis of the configuration is generated by means of two central coils: one circular and one helical. The vertical position of the plasma is controlled by the vertical field coils. The combined action of these magnetic fields generate bean-shaped magnetic surfaces that guide the particles of the plasma so that they do not collide with the vacuum vessel wall. Parameters It is a four period low magnetic shear stellarator with major radius R = 1.5 m, average minor radius a < 0.22 m, and magnetic field on axis up to 1.2 T. It is 'flexible' because varying the currents in the central circular and helical coils changes the magnetic configuration (iota ≈ 1.28 – 2.24) and plasma shape and sizes (plasma volume ≈ 0.6 – 1.1 m3). It has 32 toroidal coils (in a rounded square shape), and 4 poloidal coils (2 above and 2 below), and 2 helical coils around the 'central conductor'. The central conductor is inside the toroidal coils and the plasma and vacuum vessel forms a helix around it. It can produce a 0.25s pulse every 7 mins. Goals and Research The objective of the experimental program of TJ-II is to investigate the physics of plasma in a device with a helical magnetic axis having a great flexibility in its magnetic configuration, and to contribute to the international effort regarding the study of magnetic confinement devices for fusion. Experiment Transport and magnetic configuration, iota effects Confinement transitions, zonal flows Internal Transport Barriers Plasma Wall Interaction (in 2008 it experimented with lithium wall components). 
Impurity transport Instabilities Turbulence Theory Neoclassical transport Self-Organised Criticality Non-diffusive transport Gyrokinetic simulations Divertor studies for TJ-II Topology and transport (research project funded by the Ministerio de Ciencia e Innovación) References Further reading FusionWiki, jointly hosted by LNF and Fusenet http://wiki.fusenet.eu/wiki/TJ-II External links TJ-II project website Stellarators Plasma physics facilities
TJ-II
Physics
815
51,614,395
https://en.wikipedia.org/wiki/NGC%20214
NGC 214 is a spiral galaxy in the northern constellation of Andromeda, located at a distance of from the Milky Way. It was discovered on September 10, 1784 by William Herschel. The shape of this galaxy is given by its morphological classification of SABbc, which indicates a weak bar-like structure (SAB) at the core and moderate to loosely-wound spiral arms (bc). On July 19, 2005, a magnitude 17.4 supernova was detected at a position west and north of the galactic nucleus. The object was not visible on plates taken July 2, so it likely erupted after that date. Designated SN 2005db, it was determined to be a type IIn supernova based on the spectrum. A second supernova event was spotted from an image taken August 30, 2006, at west and south of the nucleus. It reached magnitude 17.8 and was designated SN 2006ep. This was determined to be a type-Ib/c supernova. References External links Intermediate spiral galaxies 0214 Andromeda (constellation) Astronomical objects discovered in 1784 002479
NGC 214
Astronomy
225
10,904,266
https://en.wikipedia.org/wiki/Coinduction
In computer science, coinduction is a technique for defining and proving properties of systems of concurrent interacting objects. Coinduction is the mathematical dual to structural induction. Coinductively defined data types are known as codata and are typically infinite data structures, such as streams. As a definition or specification, coinduction describes how an object may be "observed", "broken down" or "destructed" into simpler objects. As a proof technique, it may be used to show that an equation is satisfied by all possible implementations of such a specification. To generate and manipulate codata, one typically uses corecursive functions, in conjunction with lazy evaluation. Informally, rather than defining a function by pattern-matching on each of the inductive constructors, one defines each of the "destructors" or "observers" over the function result. In programming, co-logic programming (co-LP for brevity) "is a natural generalization of logic programming and coinductive logic programming, which in turn generalizes other extensions of logic programming, such as infinite trees, lazy predicates, and concurrent communicating predicates. Co-LP has applications to rational trees, verifying infinitary properties, lazy evaluation, concurrent logic programming, model checking, bisimilarity proofs, etc." Experimental implementations of co-LP are available from the University of Texas at Dallas and in the language Logtalk (for examples see ) and SWI-Prolog. Description In a concise statement is given of both the principle of induction and the principle of coinduction. While this article is not primarily concerned with induction, it is useful to consider their somewhat generalized forms at once. In order to state the principles, a few preliminaries are required. Preliminaries Let be a set and be a monotone function , that is: Unless otherwise stated, will be assumed to be monotone. X is F-closed if X is F-consistent if X is a fixed point if These terms can be intuitively understood in the following way. Suppose that is a set of assertions, and is the operation that yields the consequences of . Then is F-closed when you cannot conclude anymore than you've already asserted, while is F-consistent when all of your assertions are supported by other assertions (i.e. there are no "non-F-logical assumptions"). The Knaster–Tarski theorem tells us that the least fixed-point of (denoted ) is given by the intersection of all F-closed sets, while the greatest fixed-point (denoted ) is given by the union of all F-consistent sets. We can now state the principles of induction and coinduction. Definition Principle of induction: If is F-closed, then Principle of coinduction: If is F-consistent, then Discussion The principles, as stated, are somewhat opaque, but can be usefully thought of in the following way. Suppose you wish to prove a property of . By the principle of induction, it suffices to exhibit an F-closed set for which the property holds. Dually, suppose you wish to show that . Then it suffices to exhibit an F-consistent set that is known to be a member of. Examples Defining a set of datatypes Consider the following grammar of datatypes: That is, the set of types includes the "bottom type" , the "top type" , and (non-homogenous) lists. These types can be identified with strings over the alphabet . Let denote all (possibly infinite) strings over . Consider the function : In this context, means "the concatenation of string , the symbol , and string ." 
We should now define our set of datatypes as a fixpoint of , but it matters whether we take the least or greatest fixpoint. Suppose we take as our set of datatypes. Using the principle of induction, we can prove the following claim: All datatypes in are finite To arrive at this conclusion, consider the set of all finite strings over . Clearly cannot produce an infinite string, so it turns out this set is F-closed and the conclusion follows. Now suppose that we take as our set of datatypes. We would like to use the principle of coinduction to prove the following claim: The type Here denotes the infinite list consisting of all . To use the principle of coinduction, consider the set: This set turns out to be F-consistent, and therefore . This depends on the suspicious statement that The formal justification of this is technical and depends on interpreting strings as sequences, i.e. functions from . Intuitively, the argument is similar to the argument that (see Repeating decimal). Coinductive datatypes in programming languages Consider the following definition of a stream: data Stream a = S a (Stream a) -- Stream "destructors" head (S a astream) = a tail (S a astream) = astream This would seem to be a definition that is not well-founded, but it is nonetheless useful in programming and can be reasoned about. In any case, a stream is an infinite list of elements from which you may observe the first element, or place an element in front of to get another stream. Relationship with F-coalgebras Source: Consider the endofunctor in the category of sets: The final F-coalgebra has the following morphism associated with it: This induces another coalgebra with associated morphism . Because is final, there is a unique morphism such that The composition induces another F-coalgebra homomorphism . Since is final, this homomorphism is unique and therefore . Altogether we have: This witnesses the isomorphism , which in categorical terms indicates that is a fixpoint of and justifies the notation. Stream as a final coalgebra We will show that Stream A is the final coalgebra of the functor . Consider the following implementations: out astream = (head astream, tail astream) out' (a, astream) = S a astream These are easily seen to be mutually inverse, witnessing the isomorphism. See the reference for more details. Relationship with mathematical induction We will demonstrate how the principle of induction subsumes mathematical induction. Let be some property of natural numbers. We will take the following definition of mathematical induction: Now consider the function : It should not be difficult to see that . Therefore, by the principle of induction, if we wish to prove some property of , it suffices to show that is F-closed. In detail, we require: That is, This is precisely mathematical induction as stated. See also F-coalgebra Corecursion Bisimulation Anamorphism Total functional programming References Further reading Textbooks Davide Sangiorgi (2012). Introduction to Bisimulation and Coinduction. Cambridge University Press. Davide Sangiorgi and Jan Rutten (2011). Advanced Topics in Bisimulation and Coinduction. Cambridge University Press. Introductory texts Andrew D. Gordon (1994). — mathematically oriented description Bart Jacobs and Jan Rutten (1997). A Tutorial on (Co)Algebras and (Co)Induction (alternate link) — describes induction and coinduction simultaneously Eduardo Giménez and Pierre Castéran (2007). "A Tutorial on [Co-]Inductive Types in Coq" Coinduction — short introduction History Davide Sangiorgi. 
"On the Origins of Bisimulation and Coinduction", ACM Transactions on Programming Languages and Systems, Vol. 31, Nb 4, Mai 2009. Miscellaneous Co-Logic Programming: Extending Logic Programming with Coinduction — describes the co-logic programming paradigm Theoretical computer science Logic programming Functional programming Category theory Mathematical induction
Coinduction
Mathematics
1,597
273,679
https://en.wikipedia.org/wiki/Astronomical%20spectroscopy
Astronomical spectroscopy is the study of astronomy using the techniques of spectroscopy to measure the spectrum of electromagnetic radiation, including visible light, ultraviolet, X-ray, infrared and radio waves that radiate from stars and other celestial objects. A stellar spectrum can reveal many properties of stars, such as their chemical composition, temperature, density, mass, distance and luminosity. Spectroscopy can show the velocity of motion towards or away from the observer by measuring the Doppler shift. Spectroscopy is also used to study the physical properties of many other types of celestial objects such as planets, nebulae, galaxies, and active galactic nuclei. Background Astronomical spectroscopy is used to measure three major bands of radiation in the electromagnetic spectrum: visible light, radio waves, and X-rays. While all spectroscopy looks at specific bands of the spectrum, different methods are required to acquire the signal depending on the frequency. Ozone (O3) and molecular oxygen (O2) absorb light with wavelengths under 300 nm, meaning that X-ray and ultraviolet spectroscopy require the use of a satellite telescope or rocket mounted detectors. Radio signals have much longer wavelengths than optical signals, and require the use of antennas or radio dishes. Infrared light is absorbed by atmospheric water and carbon dioxide, so while the equipment is similar to that used in optical spectroscopy, satellites are required to record much of the infrared spectrum. Optical spectroscopy Physicists have been looking at the solar spectrum since Isaac Newton first used a simple prism to observe the refractive properties of light. In the early 1800s Joseph von Fraunhofer used his skills as a glassmaker to create very pure prisms, which allowed him to observe 574 dark lines in a seemingly continuous spectrum. Soon after this, he combined telescope and prism to observe the spectrum of Venus, the Moon, Mars, and various stars such as Betelgeuse; his company continued to manufacture and sell high-quality refracting telescopes based on his original designs until its closure in 1884. The resolution of a prism is limited by its size; a larger prism will provide a more detailed spectrum, but the increase in mass makes it unsuitable for highly detailed work. This issue was resolved in the early 1900s with the development of high-quality reflection gratings by J.S. Plaskett at the Dominion Observatory in Ottawa, Canada. Light striking a mirror will reflect at the same angle, however a small portion of the light will be refracted at a different angle; this is dependent upon the indices of refraction of the materials and the wavelength of the light. By creating a "blazed" grating which utilizes a large number of parallel mirrors, the small portion of light can be focused and visualized. These new spectroscopes were more detailed than a prism, required less light, and could be focused on a specific region of the spectrum by tilting the grating. The limitation to a blazed grating is the width of the mirrors, which can only be ground a finite amount before focus is lost; the maximum is around 1000 lines/mm. In order to overcome this limitation holographic gratings were developed. Volume phase holographic gratings use a thin film of dichromated gelatin on a glass surface, which is subsequently exposed to a wave pattern created by an interferometer. 
This wave pattern sets up a reflection pattern similar to the blazed gratings but utilizing Bragg diffraction, a process where the angle of reflection is dependent on the arrangement of the atoms in the gelatin. The holographic gratings can have up to 6000 lines/mm and can be up to twice as efficient in collecting light as blazed gratings. Because they are sealed between two sheets of glass, the holographic gratings are very versatile, potentially lasting decades before needing replacement. Light dispersed by the grating or prism in a spectrograph can be recorded by a detector. Historically, photographic plates were widely used to record spectra until electronic detectors were developed, and today optical spectrographs most often employ charge-coupled devices (CCDs). The wavelength scale of a spectrum can be calibrated by observing the spectrum of emission lines of known wavelength from a gas-discharge lamp. The flux scale of a spectrum can be calibrated as a function of wavelength by comparison with an observation of a standard star with corrections for atmospheric absorption of light; this is known as spectrophotometry. Radio spectroscopy Radio astronomy was founded with the work of Karl Jansky in the early 1930s, while working for Bell Labs. He built a radio antenna to look at potential sources of interference for transatlantic radio transmissions. One of the sources of noise discovered came not from Earth, but from the center of the Milky Way, in the constellation Sagittarius. In 1942, JS Hey captured the Sun's radio frequency using military radar receivers. Radio spectroscopy started with the discovery of the 21-centimeter H I line in 1951. Radio interferometry Radio interferometry was pioneered in 1946, when Joseph Lade Pawsey, Ruby Payne-Scott and Lindsay McCready used a single antenna atop a sea cliff to observe 200 MHz solar radiation. Two incident beams, one directly from the sun and the other reflected from the sea surface, generated the necessary interference. The first multi-receiver interferometer was built in the same year by Martin Ryle and Vonberg. In 1960, Ryle and Antony Hewish published the technique of aperture synthesis to analyze interferometer data. The aperture synthesis process, which involves autocorrelating and discrete Fourier transforming the incoming signal, recovers both the spatial and frequency variation in flux. The result is a 3D image whose third axis is frequency. For this work, Ryle and Hewish were jointly awarded the 1974 Nobel Prize in Physics. X-ray spectroscopy Stars and their properties Chemical properties Newton used a prism to split white light into a spectrum of color, and Fraunhofer's high-quality prisms allowed scientists to see dark lines of an unknown origin. In the 1850s, Gustav Kirchhoff and Robert Bunsen described the phenomena behind these dark lines. Hot solid objects produce light with a continuous spectrum, hot gases emit light at specific wavelengths, and hot solid objects surrounded by cooler gases show a near-continuous spectrum with dark lines corresponding to the emission lines of the gases. By comparing the absorption lines of the Sun with emission spectra of known gases, the chemical composition of stars can be determined. The major Fraunhofer lines, and the elements with which they are associated, appear in the following table. Designations from the early Balmer Series are shown in parentheses. Not all of the elements in the Sun were immediately identified. 
Two examples are listed below: In 1868 Norman Lockyer and Pierre Janssen independently observed a line next to the sodium doublet (D1 and D2) which Lockyer determined to be a new element. He named it Helium, but it wasn't until 1895 the element was found on Earth. In 1869 the astronomers Charles Augustus Young and William Harkness independently observed a novel green emission line in the Sun's corona during an eclipse. This "new" element was incorrectly named coronium, as it was only found in the corona. It was not until the 1930s that Walter Grotrian and Bengt Edlén discovered that the spectral line at 530.3 nm was due to highly ionized iron (Fe13+). Other unusual lines in the coronal spectrum are also caused by highly charged ions, such as nickel and calcium, the high ionization being due to the extreme temperature of the solar corona. To date more than 20 000 absorption lines have been listed for the Sun between 293.5 and 877.0 nm, yet only approximately 75% of these lines have been linked to elemental absorption. By analyzing the equivalent width of each spectral line in an emission spectrum, both the elements present in a star and their relative abundances can be determined. Using this information stars can be categorized into stellar populations; Population I stars are the youngest stars and have the highest metal content (the Sun is a Pop I star), while Population III stars are the oldest stars with a very low metal content. Temperature and size In 1860 Gustav Kirchhoff proposed the idea of a black body, a material that emits electromagnetic radiation at all wavelengths. In 1894 Wilhelm Wien derived an expression relating the temperature (T) of a black body to its peak emission wavelength (λmax): b is a constant of proportionality called Wien's displacement constant, equal to This equation is called Wien's Law. By measuring the peak wavelength of a star, the surface temperature can be determined. For example, if the peak wavelength of a star is 502 nm the corresponding temperature will be 5772 kelvins. The luminosity of a star is a measure of the electromagnetic energy output in a given amount of time. Luminosity (L) can be related to the temperature (T) of a star by: , where R is the radius of the star and σ is the Stefan–Boltzmann constant, with a value of Thus, when both luminosity and temperature are known (via direct measurement and calculation) the radius of a star can be determined. Galaxies The spectra of galaxies look similar to stellar spectra, as they consist of the combined light of billions of stars. Doppler shift studies of galaxy clusters by Fritz Zwicky in 1937 found that the galaxies in a cluster were moving much faster than seemed to be possible from the mass of the cluster inferred from the visible light. Zwicky hypothesized that there must be a great deal of non-luminous matter in the galaxy clusters, which became known as dark matter. Since his discovery, astronomers have determined that a large portion of galaxies (and most of the universe) is made up of dark matter. In 2003, however, four galaxies (NGC 821, NGC 3379, NGC 4494, and NGC 4697) were found to have little to no dark matter influencing the motion of the stars contained within them; the reason behind the lack of dark matter is unknown. In the 1950s, strong radio sources were found to be associated with very dim, very red objects. When the first spectrum of one of these objects was taken there were absorption lines at wavelengths where none were expected. 
It was soon realised that what was observed was a normal galactic spectrum, but highly red shifted. These were named quasi-stellar radio sources, or quasars, by Hong-Yee Chiu in 1964. Quasars are now thought to be galaxies formed in the early years of our universe, with their extreme energy output powered by super-massive black holes. The properties of a galaxy can also be determined by analyzing the stars found within them. NGC 4550, a galaxy in the Virgo Cluster, has a large portion of its stars rotating in the opposite direction as the other portion. It is believed that the galaxy is the combination of two smaller galaxies that were rotating in opposite directions to each other. Bright stars in galaxies can also help determine the distance to a galaxy, which may be a more accurate method than parallax or standard candles. Interstellar medium The interstellar medium is matter that occupies the space between star systems in a galaxy. 99% of this matter is gaseous – hydrogen, helium, and smaller quantities of other ionized elements such as oxygen. The other 1% is dust particles, thought to be mainly graphite, silicates, and ices. Clouds of the dust and gas are referred to as nebulae. There are three main types of nebula: absorption, reflection, and emission nebulae. Absorption (or dark) nebulae are made of dust and gas in such quantities that they obscure the starlight behind them, making photometry difficult. Reflection nebulae, as their name suggest, reflect the light of nearby stars. Their spectra are the same as the stars surrounding them, though the light is bluer; shorter wavelengths scatter better than longer wavelengths. Emission nebulae emit light at specific wavelengths depending on their chemical composition. Gaseous emission nebulae In the early years of astronomical spectroscopy, scientists were puzzled by the spectrum of gaseous nebulae. In 1864 William Huggins noticed that many nebulae showed only emission lines rather than a full spectrum like stars. From the work of Kirchhoff, he concluded that nebulae must contain "enormous masses of luminous gas or vapour." However, there were several emission lines that could not be linked to any terrestrial element, brightest among them lines at 495.9 nm and 500.7 nm. These lines were attributed to a new element, nebulium, until Ira Bowen determined in 1927 that the emission lines were from highly ionised oxygen (O+2). These emission lines could not be replicated in a laboratory because they are forbidden lines; the low density of a nebula (one atom per cubic centimetre) allows for metastable ions to decay via forbidden line emission rather than collisions with other atoms. Not all emission nebulae are found around or near stars where solar heating causes ionisation. The majority of gaseous emission nebulae are formed of neutral hydrogen. In the ground state neutral hydrogen has two possible spin states: the electron has either the same spin or the opposite spin of the proton. When the atom transitions between these two states, it releases an emission or absorption line of 21 cm. This line is within the radio range and allows for very precise measurements: Velocity of the cloud can be measured via Doppler shift The intensity of the 21 cm line gives the density and number of atoms in the cloud The temperature of the cloud can be calculated Using this information, the shape of the Milky Way has been determined to be a spiral galaxy, though the exact number and position of the spiral arms is the subject of ongoing research. 
Complex molecules Dust and molecules in the interstellar medium not only obscures photometry, but also causes absorption lines in spectroscopy. Their spectral features are generated by transitions of component electrons between different energy levels, or by rotational or vibrational spectra. Detection usually occurs in radio, microwave, or infrared portions of the spectrum. The chemical reactions that form these molecules can happen in cold, diffuse clouds or in dense regions illuminated with ultraviolet light. Most known compounds in space are organic, ranging from small molecules e.g. acetylene C2H2 and acetone (CH3)2CO; to entire classes of large molecule e.g. fullerenes and polycyclic aromatic hydrocarbons; to solids, such as graphite or other sooty material. Motion in the universe Stars and interstellar gas are bound by gravity to form galaxies, and groups of galaxies can be bound by gravity in galaxy clusters. With the exception of stars in the Milky Way and the galaxies in the Local Group, almost all galaxies are moving away from Earth due to the expansion of the universe. Doppler effect and redshift The motion of stellar objects can be determined by looking at their spectrum. Because of the Doppler effect, objects moving towards someone are blueshifted, and objects moving away are redshifted. The wavelength of redshifted light is longer, appearing redder than the source. Conversely, the wavelength of blueshifted light is shorter, appearing bluer than the source light: where is the emitted wavelength, is the velocity of the object, and is the observed wavelength. Note that v<0 corresponds to λ<λ0, a blueshifted wavelength. A redshifted absorption or emission line will appear more towards the red end of the spectrum than a stationary line. In 1913 Vesto Slipher determined the Andromeda Galaxy was blueshifted, meaning it was moving towards the Milky Way. He recorded the spectra of 20 other galaxies — all but four of which were redshifted — and was able to calculate their velocities relative to the Earth. Edwin Hubble would later use this information, as well as his own observations, to define Hubble's law: The further a galaxy is from the Earth, the faster it is moving away. Hubble's law can be generalised to: where is the velocity (or Hubble Flow), is the Hubble Constant, and is the distance from Earth. Redshift (z) can be expressed by the following equations: In these equations, frequency is denoted by and wavelength by . The larger the value of z, the more redshifted the light and the farther away the object is from the Earth. As of January 2013, the largest galaxy redshift of z~12 was found using the Hubble Ultra-Deep Field, corresponding to an age of over 13 billion years (the universe is approximately 13.82 billion years old). The Doppler effect and Hubble's law can be combined to form the equation , where c is the speed of light. Peculiar motion Objects that are gravitationally bound will rotate around a common center of mass. For stellar bodies, this motion is known as peculiar velocity and can alter the Hubble Flow. Thus, an extra term for the peculiar motion needs to be added to Hubble's law: This motion can cause confusion when looking at a solar or galactic spectrum, because the expected redshift based on the simple Hubble law will be obscured by the peculiar motion. For example, the shape and size of the Virgo Cluster has been a matter of great scientific scrutiny due to the very large peculiar velocities of the galaxies in the cluster. 
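The redshift, Doppler, and Hubble-law relations above can be illustrated numerically. In the following Python sketch the Hubble constant of 70 km/s per megaparsec and the observed wavelength are illustrative values only, and the velocity formula is the low-redshift approximation v ≈ cz.

C_KM_S = 299_792.458          # speed of light in km/s
H0 = 70.0                     # assumed Hubble constant, km/s per megaparsec

def redshift(observed_nm, emitted_nm):
    """z = (lambda_observed - lambda_emitted) / lambda_emitted."""
    return (observed_nm - emitted_nm) / emitted_nm

def recession_velocity(z):
    """Low-redshift approximation v ~ c z (valid only for z much less than 1)."""
    return C_KM_S * z

def hubble_distance_mpc(v_km_s):
    """Hubble's law, v = H0 * D, solved for the distance D in megaparsecs."""
    return v_km_s / H0

# Example: the hydrogen H-alpha line (656.28 nm rest wavelength) observed at 670.0 nm.
z = redshift(670.0, 656.28)
v = recession_velocity(z)
print(f"z = {z:.4f}, v = {v:.0f} km/s, D = {hubble_distance_mpc(v):.0f} Mpc")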
Binary stars Just as planets can be gravitationally bound to stars, pairs of stars can orbit each other. Some binary stars are visual binaries, meaning they can be observed orbiting each other through a telescope. Some binary stars, however, are too close together to be resolved. These two stars, when viewed through a spectrometer, will show a composite spectrum: the spectrum of each star will be added together. This composite spectrum becomes easier to detect when the stars are of similar luminosity and of different spectral class. Spectroscopic binaries can be also detected due to their radial velocity; as they orbit around each other one star may be moving towards the Earth whilst the other moves away, causing a Doppler shift in the composite spectrum. The orbital plane of the system determines the magnitude of the observed shift: if the observer is looking perpendicular to the orbital plane there will be no observed radial velocity. For example, a person looking at a carousel from the side will see the animals moving toward and away from them, whereas if they look from directly above they will only be moving in the horizontal plane. Planets, asteroids, and comets Planets, asteroids, and comets all reflect light from their parent stars and emit their own light. For cooler objects, including Solar System planets and asteroids, most of the emission is at infrared wavelengths we cannot see, but that are routinely measured with spectrometers. For objects surrounded by gas, such as comets and planets with atmospheres, further emission and absorption happens at specific wavelengths in the gas, imprinting the spectrum of the gas on that of the solid object. In the case of worlds with thick atmospheres or complete cloud or haze cover (such as the four giant planets, Venus, and Saturn's satellite Titan), the spectrum is mostly or completely due to the atmosphere alone. Planets The reflected light of a planet contains absorption bands due to minerals in the rocks present for rocky bodies, or due to the elements and molecules present in the atmosphere. To date over 3,500 exoplanets have been discovered. These include so-called Hot Jupiters, as well as Earth-like planets. Using spectroscopy, compounds such as alkali metals, water vapor, carbon monoxide, carbon dioxide, and methane have all been discovered. Asteroids Asteroids can be classified into three major types according to their spectra. The original categories were created by Clark R. Chapman, David Morrison, and Ben Zellner in 1975, and further expanded by David J. Tholen in 1984. In what is now known as the Tholen classification, the C-types are made of carbonaceous material, S-types consist mainly of silicates, and X-types are 'metallic'. There are other classifications for unusual asteroids. C- and S-type asteroids are the most common asteroids. In 2002 the Tholen classification was further "evolved" into the SMASS classification, expanding the number of categories from 14 to 26 to account for more precise spectroscopic analysis of the asteroids. Comets The spectra of comets consist of a reflected solar spectrum from the dusty clouds surrounding the comet, as well as emission lines from gaseous atoms and molecules excited to fluorescence by sunlight and/or chemical reactions. For example, the chemical composition of Comet ISON was determined by spectroscopy due to the prominent emission lines of cyanogen (CN), as well as two- and three-carbon atoms (C2 and C3). 
Nearby comets can even be seen in X-ray as solar wind ions flying to the coma are neutralized. The cometary X-ray spectra therefore reflect the state of the solar wind rather than that of the comet. See also References Spectroscopy Observational astronomy
Astronomical spectroscopy
Physics,Chemistry,Astronomy
4,316
16,993,001
https://en.wikipedia.org/wiki/Moody%20chart
In engineering, the Moody chart or Moody diagram (also Stanton diagram) is a graph in non-dimensional form that relates the Darcy–Weisbach friction factor fD, Reynolds number Re, and surface roughness for fully developed flow in a circular pipe. It can be used to predict pressure drop or flow rate down such a pipe. History In 1944, Lewis Ferry Moody plotted the Darcy–Weisbach friction factor against Reynolds number Re for various values of relative roughness ε / D. This chart became commonly known as the Moody chart or Moody diagram. It adapts the work of Hunter Rouse but uses the more practical choice of coordinates employed by R. J. S. Pigott, whose work was based upon an analysis of some 10,000 experiments from various sources. Measurements of fluid flow in artificially roughened pipes by J. Nikuradse were at the time too recent to include in Pigott's chart. The chart's purpose was to provide a graphical representation of the function of C. F. Colebrook in collaboration with C. M. White, which provided a practical form of transition curve to bridge the transition zone between smooth and rough pipes, the region of incomplete turbulence. Description Moody's team used the available data (including that of Nikuradse) to show that fluid flow in rough pipes could be described by four dimensionless quantities: Reynolds number, pressure loss coefficient, diameter ratio of the pipe and the relative roughness of the pipe. They then produced a single plot which showed that all of these collapsed onto a series of lines, now known as the Moody chart. This dimensionless chart is used to work out pressure drop, (Pa) (or head loss, (m)) and flow rate through pipes. Head loss can be calculated using the Darcy–Weisbach equation in which the Darcy friction factor appears : Pressure drop can then be evaluated as: or directly from where is the density of the fluid, is the average velocity in the pipe, is the friction factor from the Moody chart, is the length of the pipe and is the pipe diameter. The chart plots Darcy–Weisbach friction factor against Reynolds number Re for a variety of relative roughnesses, the ratio of the mean height of roughness of the pipe to the pipe diameter or . The Moody chart can be divided into two regimes of flow: laminar and turbulent. For the laminar flow regime (< ~3000), roughness has no discernible effect, and the Darcy–Weisbach friction factor was determined analytically by Poiseuille: For the turbulent flow regime, the relationship between the friction factor the Reynolds number Re, and the relative roughness is more complex. One model for this relationship is the Colebrook equation (which is an implicit equation in ): Fanning friction factor This formula must not be confused with the Fanning equation, using the Fanning friction factor , equal to one fourth the Darcy-Weisbach friction factor . Here the pressure drop is: References See also Friction loss Darcy friction factor formulae Fluid dynamics Hydraulics Piping
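In practice the chart is often replaced by solving the Colebrook equation numerically. The following Python sketch uses the standard textbook forms of the laminar relation (f = 64/Re), the Colebrook–White equation, and the Darcy–Weisbach pressure drop; the fluid properties and pipe dimensions are illustrative values only.

import math

def darcy_friction_factor(re, rel_roughness):
    """Darcy friction factor f from the Reynolds number and relative roughness eps/D."""
    if re < 3000.0:                       # laminar regime (roughly Re < 3000): f = 64/Re
        return 64.0 / re
    f = 0.02                              # starting guess for the turbulent regime
    for _ in range(50):                   # fixed-point iteration on the Colebrook equation
        rhs = -2.0 * math.log10(rel_roughness / 3.7 + 2.51 / (re * math.sqrt(f)))
        f_new = 1.0 / rhs ** 2
        if abs(f_new - f) < 1e-12:
            break
        f = f_new
    return f

def pressure_drop_pa(f, length_m, diameter_m, density, velocity):
    """Darcy-Weisbach pressure drop: dp = f * (L/D) * rho * V^2 / 2."""
    return f * (length_m / diameter_m) * density * velocity ** 2 / 2.0

# Example: water (density ~998 kg/m^3, kinematic viscosity ~1e-6 m^2/s) at 2 m/s
# in a 0.1 m diameter pipe, 50 m long, with relative roughness eps/D = 0.001.
velocity, diameter = 2.0, 0.1
re = velocity * diameter / 1.0e-6
f = darcy_friction_factor(re, 0.001)
print(f"Re = {re:.0f}, f = {f:.4f}, dp = {pressure_drop_pa(f, 50.0, diameter, 998.0, velocity):.0f} Pa")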
Moody chart
Physics,Chemistry,Engineering
629
151,762
https://en.wikipedia.org/wiki/Zu%20Chongzhi
Zu Chongzhi (; 429 – 500), courtesy name Wenyuan (), was a Chinese astronomer, inventor, mathematician, politician, and writer during the Liu Song and Southern Qi dynasties. He was most notable for calculating pi as between 3.1415926 and 3.1415927, a record in precision which would not be surpassed for nearly 900 years. Life and works Chongzhi's ancestry was from modern Baoding, Hebei. To flee from the ravages of war, Zu's grandfather Zu Chang moved to the Yangtze, as part of the massive population movement during the Eastern Jin. Zu Chang () at one point held the position of Chief Minister for the Palace Buildings () within the Liu Song and was in charge of government construction projects. Zu's father, Zu Shuozhi (), also served the court and was greatly respected for his erudition. Zu was born in Jiankang. His family had historically been involved in astronomical research, and from childhood Zu was exposed to both astronomy and mathematics. When he was only a youth, his talent earned him much repute. When Emperor Xiaowu of Song heard of him, he was sent to the Hualin Xuesheng () academy, and later the Imperial Nanjing University (Zongmingguan) to perform research. In 461 in Nanxu (today Zhenjiang, Jiangsu), he was engaged in work at the office of the local governor. In 464, Zu moved to Louxian (today Songjiang district, Shanghai), there, he compiled the Daming calender and calculated π. Zu Chongzhi, along with his son Zu Gengzhi, wrote a mathematical text entitled Zhui Shu (; "Methods for Interpolation"). It is said that the treatise contained formulas for the volume of a sphere, cubic equations and an accurate value of pi. This book has been lost since the Song dynasty. His mathematical achievements included the Daming calendar () introduced by him in 465. distinguishing the sidereal year and the tropical year. He measured 45 years and 11 months per degree between those two; today we know the difference is 70.7 years per degree. calculating one year as 365.24281481 days, which is very close to 365.24219878 days as we know today. calculating the number of overlaps between sun and moon as 27.21223, which is very close to 27.21222 as we know today; using this number he successfully predicted an eclipse four times during 23 years (from 436 to 459). calculating the Jupiter year as about 11.858 Earth years, which is very close to 11.862 as we know of today. deriving two approximations of pi, (3.1415926535897932...) which held as the most accurate approximation for for over nine hundred years. His best approximation was between 3.1415926 and 3.1415927, with (, milü, close ratio) and (, yuelü, approximate ratio) being the other notable approximations. He obtained the result by approximating a circle with a 24,576 (= 213 × 3) sided polygon. This was an impressive feat for the time, especially considering that the counting rods he used for recording intermediate results were merely a pile of wooden sticks laid out in certain patterns. Japanese mathematician Yoshio Mikami pointed out, " was nothing more than the value obtained several hundred years earlier by the Greek mathematician Archimedes, however milü = could not be found in any Greek, Indian or Arabian manuscripts, not until 1585 Dutch mathematician Adriaan Anthoniszoon obtained this fraction; the Chinese possessed this most extraordinary fraction over a whole millennium earlier than Europe". Hence Mikami strongly urged that the fraction be named after Zu Chongzhi as Zu's fraction. 
In Chinese literature, this fraction is known as "Zu's ratio". Zu's ratio is a best rational approximation to π, and is the closest rational approximation to π among all fractions with denominator less than 16600. finding the volume of a sphere as πD^3/6 where D is the diameter (equivalent to (4/3)πr^3). Astronomy Zu was an accomplished astronomer who calculated time values with unprecedented precision. His methods of interpolation and the use of integration were far ahead of his time. Even the results of the astronomer Yi Xing (who was beginning to utilize foreign knowledge) were not comparable. The Sung dynasty calendar was backwards compared with that of the "Northern barbarians", because the latter were organizing their daily lives with the Da Ming Li. It is said that his methods of calculation were so advanced that the scholars of the Sung dynasty and the Indian-influenced astronomers of the Tang dynasty found them confusing. Mathematics The majority of Zu's great mathematical works are recorded in his lost text the Zhui Shu. Scholars debate the complexity of his work, since traditionally the Chinese had developed mathematics as algebraic and equational. Logically, scholars assume that the Zhui Shu contained methods for solving cubic equations. His works on the accurate value of pi describe the lengthy calculations involved. Zu used Liu Hui's π algorithm, described earlier by Liu Hui, to inscribe a 12,288-gon. Zu's value of pi is precise to six decimal places and for almost nine hundred years thereafter no subsequent mathematician computed a value this precise. Zu also worked on deducing the formula for the volume of a sphere with his son Zu Gengzhi. In their calculation, Zu used the principle that two solids with equal cross-sectional areas at equal heights must also have equal volumes to find the volume of a Steinmetz solid, and then multiplied that volume by π/4 to obtain the volume of a sphere as πd^3/6 (d is the diameter of the sphere); since the Steinmetz solid of two perpendicular cylinders of diameter d has volume 2d^3/3, this gives (π/4)(2d^3/3) = πd^3/6. Inventions and innovations Hammer mills In 488, Zu Chongzhi was responsible for erecting water-powered trip hammer mills, which were inspected by Emperor Wu of Southern Qi during the early 490s. Paddle boats Zu is also credited with inventing Chinese paddle boats or Qianli chuan in the late 5th century AD during the Southern Qi dynasty. The boats made sailing a more reliable form of transportation, and building on the shipbuilding technology of the day, numerous paddle-wheel ships were constructed during the Tang era; these boats could cruise at faster speeds than the existing vessels of the time and cover hundreds of kilometers of distance without the aid of wind. South pointing chariot The south-pointing chariot device was first invented by the Chinese mechanical engineer Ma Jun (c. 200–265 AD). It was a wheeled vehicle that incorporated an early use of differential gears to operate a fixed figurine that would constantly point south, hence enabling one to accurately measure their directional bearings. This effect was achieved not by magnetics (like in a compass), but through intricate mechanics, the same design that allows equal amounts of torque to be applied to wheels rotating at different speeds in the modern automobile. After the Three Kingdoms period, the device fell out of use temporarily.
However, it was Zu Chongzhi who successfully re-invented it in 478, as described in the texts of the Book of Song and the Book of Qi, with a passage from the latter below: When Emperor Wu of Liu Song subdued Guanzhong he obtained the south-pointing carriage of Yao Xing, but it was only the shell with no machinery inside. Whenever it moved it had to have a man inside to turn (the figure). In the Sheng-Ming reign period, Gao Di commissioned Zu Chongzhi to reconstruct it according to the ancient rules. He accordingly made new machinery of bronze, which would turn round about without a hitch and indicate the direction with uniformity. Since Ma Jun's time such a thing had not been. (Book of Qi, 52.905) Literature Zu's paradoxographical work Accounts of Strange Things [] survives. Named after him The fraction 355/113 ≈ π is known as Zu Chongzhi's ratio. The lunar crater Tsu Chung-Chi. 1888 Zu Chong-Zhi is the name of asteroid 1964 VO1. The ZUC stream cipher is a new encryption algorithm. Notes References Needham, Joseph (1986). Science and Civilization in China: Volume 4, Part 2. Cambridge University Press Further reading External links Encyclopædia Britannica's description of Zu Chongzhi Zu Chongzhi at Chinaculture.org Zu Chongzhi at the University of Maine 429 births 500 deaths 5th-century Chinese mathematicians 5th-century Chinese astronomers Ancient Chinese mathematicians Chinese inventors Liu Song government officials Liu Song writers Pi-related people Politicians from Nanjing Scientists from Nanjing Southern Qi government officials Writers from Nanjing Chinese geometers
Zu Chongzhi
Mathematics
1,793
49,867,763
https://en.wikipedia.org/wiki/Tuft%20cell
Tuft cells are chemosensory cells in the epithelial lining of the intestines. Similar tufted cells are found in the respiratory epithelium where they are known as brush cells. The name "tuft" refers to the brush-like microvilli projecting from the cells. Ordinarily there are very few tuft cells present, but they have been shown to greatly increase in number during parasitic infection. Several studies have proposed a role for tuft cells in defense against parasitic infection. In the intestine, tuft cells are the sole source of secreted interleukin 25 (IL-25). ATOH1 is required for tuft cell specification but not for maintenance of a mature differentiated state, and knockdown of Notch results in increased numbers of tuft cells. Human tuft cells Tuft cells are found along the entire length of the human gastrointestinal (GI) tract. These cells are located between the crypts and villi, and DCLK1 is expressed on the basal pole of all of them. They do not have the same morphology as was described in animal studies, but they show an apical brush border of the same thickness. Colocalization of synaptophysin and DCLK1 was found in the duodenum, which suggests that these cells play a neuroendocrine role in this region. A specific marker of intestinal tuft cells is the microtubule kinase doublecortin-like kinase 1 (DCLK1). Tuft cells that are positive for this kinase are important in gastrointestinal chemosensation and inflammation, and can effect repair after injury in the intestine. Function One key to understanding the role of tuft cells is that they share many characteristics with chemosensory cells in taste buds. For instance, they express many taste receptors and the taste signaling apparatus. This might suggest that tuft cells could function as chemoreceptive cells that can sense many chemical signals around them. However, more recent research suggests that tuft cells can also be activated through the taste receptor apparatus, and they can be triggered by different small molecules, such as succinate and aeroallergens. Tuft cells are known to secrete various molecules which are important for biological functions. Because of this, tuft cells act as danger sensors and trigger a secretion of biologically active mediators. The signals they respond to and the mediators they secrete are, however, wholly dependent on context. For example, tuft cells in the urethra respond to bitter compounds through activation of the taste receptor. This results in a rise in intracellular Ca2+ and the release of acetylcholine. It is thought that this then activates various other cells in the vicinity, which leads to the bladder detrusor reflex and a more complete emptying of the bladder. Tuft cells in type-2 immunity It has been discovered that tuft cells in the intestines of mice are activated by parasitic infections. This leads to secretion of IL-25, the key activator of type 2 innate lymphoid cells, which initiates and amplifies the type-2 cytokine response, characterized by secretion of cytokines from ILC2 cells. Tissue remodeling during the type-2 immune response is based on the cytokine interleukin-13 (IL-13). This interleukin is produced mainly by group 2 innate lymphoid cells (ILC2s) and type 2 helper T cells (Th2s) located in the lamina propria. Also, during worm infection the number of tuft cells rises dramatically. Hyperplasia of tuft cells and goblet cells is a hallmark of type 2 infection and is regulated by a feed-forward signalling circuit.
IL-25 produced by tuft cells induces IL-13 production by ILC2s in the lamina propria. IL-13 then interacts with uncommitted epithelial progenitors to bias their lineage selection toward goblet and tuft cells. As a result, IL-13 is responsible for dramatically remodeling an enterocyte-dominated epithelium into one dominated by tuft and goblet cells. Without IL-25 from tuft cells, worm clearance is delayed. The type-2 immune response depends on tuft cells and is severely reduced in their absence, which confirms the important physiological function of these cells during worm infection. Activation of Th2 cells is an important part of this feed-forward loop. The activation of tuft cells in the intestine is connected with the metabolite succinate, which is produced by the parasite and binds to the specific tuft cell receptor Sucnr1 on their surface. Intestinal tuft cells may also be important for local regeneration in the intestine after an infection. Morphology Tuft cells were first identified, on the basis of their distinctive morphology, by electron microscopy in the rodent trachea and gastrointestinal tract. The characteristic tubulovesicular system and the apical bundle of microfilaments, connected to a tuft of long, thick microvilli reaching into the lumen, gave these cells their name and their tufted morphology. The distribution and size of tuft cell microvilli are very different from those of the neighbouring enterocytes. Also, tuft cells, in comparison with enterocytes, do not have a terminal web at the base of the apical microvilli. Other characteristics of tuft cells are: a quite narrow apical membrane, which makes the cells appear pinched at the top; prominent actin microfilaments that extend into the cell and end just above the nucleus; numerous but largely empty apical vesicles forming a tubulovesicular network; a Golgi apparatus on the apical side of the nucleus; a deficiency of rough endoplasmic reticulum; and desmosomes with tight junctions that fix tuft cells to their neighbours. The shape of the tuft cell body varies and depends on the organ. Tuft cells in the intestine are cylindrical and narrow at the apical and basal ends. Alveolar tuft cells are flatter than intestinal ones, and gall bladder tuft cells have a cuboidal shape. Differences in tuft cells can reflect their organ's specific functions. Tuft cells express chemosensory proteins, like TRPM5 and α-gustducin. These proteins indicate that neighbouring neurons can innervate tuft cells. Tuft cells can be identified by staining for cytokeratin 18, neurofilaments, actin filaments, acetylated tubulin, and DCLK1 to differentiate between tuft cells and enterocytes. Tuft cells are found in the intestine and stomach, and as pulmonary brush cells in the respiratory tract, from nose to alveoli. Tuft cells in disease A loss of tolerance to antigens that appear in the environment causes inflammatory bowel disease (IBD) and Crohn's disease (CD) in people who are more genetically susceptible. Helminth colonization induces a type-2 immune response, causes mucosal healing and achieves clinical remission. During an intense infection, tuft cells can drive their own specification, and tuft cell hyperplasia is a key response leading to expulsion of the worm. This suggests that modulation of tuft cell function may be effective in the treatment of Crohn's disease.
Helminth infections Tuft cells have been shown to use taste receptors in the detection of many different helminth species. The clearance of helminths in mice that lacked taste receptor function (Trpm5 or α-gustducin knockouts) or sufficient tuft cells (Pou2f3 knockouts) was impaired compared to that of wild-type mice. This shows that tuft cells play an important protective role during helminth infections. It was observed that IL-25 derived from tuft cells was mediating the protective response, initiating type 2 immune responses. History and distribution Tuft cells were first discovered in the trachea of the rat, and in the mouse stomach. In the late 1920s, Dr. Chlopkov was working on a project on the developmental stages of goblet cells in the intestine. Under the microscope he found a cell with a bundle of unusually long microvilli rising into the intestinal lumen. He thought he had found an early-stage intestinal goblet cell, but it was actually the first report of a new epithelial lineage which we now call the tuft cell. In 1956, two scientists, Rhodin and Dalhamn, described tuft cells in the rat trachea; later the same year Järvi and Keyriläinen found similar cells in the mouse stomach. Tuft cells are generally located in the columnar epithelium of organs derived from endoderm. In rodents, they have definitively been found, for example, in the trachea, the thymus, the glandular stomach, the gall bladder, the small intestine, the colon, the auditory tube, the pancreatic duct and the urethra. Tuft cells are usually isolated cells and make up <1% of the epithelium. In the mouse gall bladder and rat bile and pancreatic duct, the tuft cells are more abundant but still isolated. See also List of human cell types derived from the germ layers References Human cells Stomach Immunology Respiratory physiology
Tuft cell
Biology
2,013
47,331,609
https://en.wikipedia.org/wiki/Pelargonium%20inquinans
Pelargonium inquinans, the scarlet geranium, is a species of plant in the genus Pelargonium (family Geraniaceae). It is a shrub endemic to South Africa, ranging from Mpumalanga to KwaZulu-Natal and Eastern Cape provinces. It is one of the ancestors of the hybrid line of horticultural pelargoniums, referred to as the zonal group. They can easily be propagated by seeds and cuttings. Etymology and history The generic name Pelargonium in scientific Latin derives from the Greek pelargós (πελαργός), meaning stork, the shape of the fruit evoking the beak of that wader. The specific epithet inquinans ("staining, soiling") derives from the Latin verb inquino, "to dirty, to soil", because the leaves leave a brown trace on the fingers when touched. Pelargonium inquinans was grown in the garden of the Bishop of London, Henry Compton, an admirer of exotic plants. In 1713, when he died, Pelargonium inquinans was found in his collection. The first illustration, from 1732, was made from a plant growing in the garden of British botanist James Sherard. Many hybrids have been derived from this species, but the true wild species can be recognized by its red glandular hairs. Description In the wild, Pelargonium inquinans is a small shrub, about 2 m tall, branched, with young succulent twigs becoming woody with age, bearing red glandular hairs. The evergreen leaves, borne on long petioles, are orbicular (like Pelargonium × hortorum but without dark markings), incised into 5 to 7 crenate lobes, with a viscous pubescence giving a cottony appearance to both sides. When touched, the leaves stain the fingers rust brown. The scarlet red flowers, sometimes pink or white, are grouped in pseudo-umbels of 10 to 20. They show bilateral symmetry (zygomorphic), with the 2 upper petals sometimes a little smaller than the 3 lower petals. Stamens and style are exserted. The filaments of the seven fertile stamens join over most of their length. In South Africa flowering is spread throughout the year. The fruit is composed of 5 carpels, each terminated by a long, hairy tail that twists at maturity. Distribution The scarlet-flowered pelargonium grows in the Eastern Cape (Uitenhage, Albany and the historical Kaffraria) and in southern KwaZulu-Natal, South Africa. It grows on clay soils, like Pelargonium × hortorum. Hybrid Pelargonium inquinans and Pelargonium zonale are generally considered the two main wild ancestors of the zonal group of horticultural pelargoniums, commonly referred to as "florist geraniums" or "zoned leaf hybrid pelargoniums". In botany, the name Pelargonium × hortorum L.H. Bailey is accepted. These two species were introduced into the great gardens of Europe at the beginning of the eighteenth century. Uses Indigenous people use crushed leaves for headache and influenza. They are also used as a body deodorant. Gallery References Endemic flora of South Africa Flora of the Cape Provinces Flora of KwaZulu-Natal Flora of the Northern Provinces inquinans Plants described in 1753 Plants used in traditional African medicine Plants that can bloom all year round
Pelargonium inquinans
Biology
720
46,510,192
https://en.wikipedia.org/wiki/Herv%C3%A9%20Moulin
Hervé Moulin (born 1950 in Paris) is a French mathematician who is the Donald J. Robertson Chair of Economics at the Adam Smith Business School at the University of Glasgow. He is known for his research contributions in mathematical economics, in particular in the fields of mechanism design, social choice, game theory and fair division. He has written five books and over 100 peer-reviewed articles. Moulin was the George A. Peterkin Professor of Economics at Rice University (from 1999 to 2013), the James B. Duke Professor of Economics at Duke University (from 1989 to 1999), the University Distinguished Professor at Virginia Tech (from 1987 to 1989), and Academic Supervisor at the Higher School of Economics in St. Petersburg, Russia (from 2015 to 2022). He has been a fellow of the Econometric Society since 1983, and was the president of the Game Theory Society for the 2016–2018 term. He also served as president of the Society for Social Choice and Welfare from 1998 to 1999. He became a Fellow of the Royal Society of Edinburgh in 2015. Moulin's research has been supported in part by seven grants from the US National Science Foundation. He collaborates as an adviser with the fair division website Spliddit, created by Ariel Procaccia. On the occasion of his 65th birthday, the Paris School of Economics and Aix-Marseille University organised a conference in his honor, with Peyton Young, William Thomson, Salvador Barbera, and Moulin himself among the speakers. Biography Moulin obtained his undergraduate degree from the École Normale Supérieure in Paris in 1971 and his doctoral degree in Mathematics at the University of Paris-IX in 1975 with a thesis on zero-sum games, which was published in French in the Mémoires de la Société Mathématique de France and in English in the Journal of Mathematical Analysis and Applications. In 1979, he published a seminal paper in Econometrica introducing the notion of dominance solvable games. Dominance solvability is a solution concept for games which is based on an iterated procedure of deletion of dominated strategies by all participants. Dominance solvability is a stronger concept than Nash equilibrium because it does not require ex-ante coordination. Its only requirement is iterated common knowledge of rationality. His work on this concept was mentioned in Eric Maskin's Nobel Prize Lecture. One year later he proved an interesting result concerning the famous Gibbard-Satterthwaite Theorem, which states that any voting procedure on the universal domain of preferences whose range contains more than two alternatives is either dictatorial or manipulable. Moulin proved that it is possible to define non-dictatorial and non-manipulable social choice functions in the restricted domain of single-peaked preferences, i.e. those in which there is a unique best option, and other options are better as they are closer to the favorite one. Moreover, he provided a characterization of such rules. This paper inspired a whole literature on achieving strategy-proofness and fairness (even in a weak form as non-dictatorial schemes) on restricted domains of preferences. Moulin is also known for his seminal work in cost sharing and assignment problems. In particular, jointly with Anna Bogomolnaia, he proposed the probabilistic-serial procedure as a solution to the fair random assignment problem, which consists of dividing several goods among a number of persons. Probabilistic serial allows each person to simultaneously "eat" probability shares of her favorite available good, hence defining a probabilistic outcome.
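A minimal sketch of this "eating" procedure, assuming unit supply of each good and unit eating speed, is given below; the function name, data layout, and the two-agent example are illustrative assumptions rather than Bogomolnaia and Moulin's own notation.

```python
def probabilistic_serial(prefs):
    """prefs[i] lists agent i's goods from best to worst (hypothetical input format)."""
    goods = {g for ranking in prefs for g in ranking}
    remaining = {g: 1.0 for g in goods}                 # unit supply of each good
    shares = [{g: 0.0 for g in goods} for _ in prefs]
    t, eps = 0.0, 1e-12
    while t < 1.0 - eps:
        # Each agent eats her most-preferred good that still has probability mass left.
        targets = [next((g for g in ranking if remaining[g] > eps), None) for ranking in prefs]
        eaters = {}
        for g in targets:
            if g is not None:
                eaters[g] = eaters.get(g, 0) + 1
        if not eaters:
            break
        # Advance time until some good is exhausted or total eating time reaches 1.
        dt = min(1.0 - t, min(remaining[g] / k for g, k in eaters.items()))
        for agent, g in enumerate(targets):
            if g is not None:
                shares[agent][g] += dt
        for g, k in eaters.items():
            remaining[g] -= dt * k
        t += dt
    return shares

# Two agents who both rank good 'a' above good 'b': each ends up with a 1/2 chance of either good.
print(probabilistic_serial([["a", "b"], ["a", "b"]]))
```

The resulting matrix of shares is the probabilistic outcome referred to above; it can then be decomposed into a lottery over deterministic assignments.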
It always produces an outcome which is unambiguously efficient ex-ante, and thus has a strong claim over the popular random priority. The paper was published in 2001 in the Journal of Economic Theory. By summer of 2016, the article had 395 citations. He has been credited as the first proposer of the famous beauty contest game, also known as the guessing game, which shows that players fail to anticipate strategic behavior from other players. Experiments testing the equilibrium prediction of this game started the field of experimental economics. In July 2018 Moulin was elected Fellow of the British Academy (FBA). Coauthors Moulin has published work jointly with Matthew O. Jackson, Scott Shenker, and Anna Bogomolnaia, among many other academics. See also List of economists References External links Hervé Moulin's Personal Website List of Hervé Moulin's Publications at IDEAS REPEC 1950 births Academics of Adam Smith Business School Living people Game theorists French economists Fellows of the Econometric Society French mathematicians University of Paris alumni French expatriates in Scotland Fellows of the British Academy Fair division researchers Academics of the University of Glasgow
Hervé Moulin
Mathematics
952
44,484,486
https://en.wikipedia.org/wiki/Tylopilus%20costaricensis
Tylopilus costaricensis is a bolete fungus in the family Boletaceae found in Costa Rica, where it grows under oak in montane forest. It was described as new to science in 1991. References External links costaricensis Fungi described in 1991 Fungi of Central America Fungus species
Tylopilus costaricensis
Biology
60
36,746,396
https://en.wikipedia.org/wiki/Bernard%20Price%20Memorial%20Lecture
The Bernard Price Memorial Lecture is the premier annual lecture of the South African Institute of Electrical Engineers. It is of general scientific or engineering interest and is given by an invited guest, often from overseas, at several of the major centres in South Africa. The main lecture and accompanying dinner are usually held at the University of Witwatersrand and it is also presented in the space of one week at other centres, typically Cape Town, Durban, East London and Port Elizabeth. The Lecture is named in memory of the eminent electrical engineer Bernard Price. The first lecture was held in 1951, and it has occurred as an annual event ever since. Lecturers 1951 Basil Schonland 1952 A M Jacobs 1953 H J Van Eck 1954 J M Meek 1955 Frank Nabarro 1956 A L Hales 1957 P G Game 1958 Colin Cherry 1959 Thomas Allibone 1960 M G Say 1961 Willis Jackson 1963 W R Stevens 1964 William Pickering 1965 G H Rawcliffe 1966 Harold Bishop 1967 Eric Eastwood 1968 F J Lane 1969 A H Reeves 1970 Andrew R Cooper 1971 Herbert Haslegrave 1972 W J Bray 1973 R Noser 1974 D Kind 1975 L Kirchmayer 1976 S Jones 1977 J Johnson 1978 T G E Cockbain 1979 A R Hileman 1980 James Redmond 1981 L M Muntzing 1982 K F Raby 1983 R Isermann 1984 M N John 1985 J W L de Villiers 1986 Derek Roberts 1987 Wolfram Boeck 1988 Karl Gehring 1989 Leonard Sagan 1990 GKF Heyner 1991 P S Blythin 1992 P M Neches 1993 P Radley 1994 P R Rosen 1995 F P Sioshansi 1996 J Taylor 1997 M Chamia 1998 C Gellings 1999 M W Kennedy 2000 John Midwinter 2001 Pragasen Pillay 2002 Polina Bayvel 2003 Case Rijsdijk 2004 Frank Larkins 2005 Igor Aleksander 2006 Kevin Warwick 2007 Skip Hatfield 2008 Sami Solanki 2009 William Gruver 2010 Glenn Ricart 2011 Philippe Paelinck 2012 Nick Frydas 2013 Vint Cerf 2014 Ian Jandrell 2015 Saurabh Sinha 2016 Tshilidzi Marwala 2017 Fulufhelo Nelwamondo 2018 Ian Craig 2019 Robert Metcalfe 2020 Roger Price 2021 Saifur Rahman 2022 Stuart J. Russell 2023 Jan Meyer 2024 Vukosi Marivate See also List of engineering awards References Recurring events established in 1951 Price, Bernard Electrical engineering awards 1951 establishments in South Africa Engineering education Annual events in South Africa
Bernard Price Memorial Lecture
Engineering
499
331,755
https://en.wikipedia.org/wiki/Transitional%20fossil
A transitional fossil is any fossilized remains of a life form that exhibits traits common to both an ancestral group and its derived descendant group. This is especially important where the descendant group is sharply differentiated by gross anatomy and mode of living from the ancestral group. These fossils serve as a reminder that taxonomic divisions are human constructs that have been imposed in hindsight on a continuum of variation. Because of the incompleteness of the fossil record, there is usually no way to know exactly how close a transitional fossil is to the point of divergence. Therefore, it cannot be assumed that transitional fossils are direct ancestors of more recent groups, though they are frequently used as models for such ancestors. In 1859, when Charles Darwin's On the Origin of Species was first published, the fossil record was poorly known. Darwin described the perceived lack of transitional fossils as "the most obvious and gravest objection which can be urged against my theory," but he explained it by relating it to the extreme imperfection of the geological record. He noted the limited collections available at the time but described the available information as showing patterns that followed from his theory of descent with modification through natural selection. Indeed, Archaeopteryx was discovered just two years later, in 1861, and represents a classic transitional form between earlier, non-avian dinosaurs and birds. Many more transitional fossils have been discovered since then, and there is now abundant evidence of how all classes of vertebrates are related, including many transitional fossils. Specific examples of class-level transitions are: tetrapods and fish, birds and dinosaurs, and mammals and "mammal-like reptiles". The term "missing link" has been used extensively in popular writings on human evolution to refer to a perceived gap in the hominid evolutionary record. It is most commonly used to refer to any new transitional fossil finds. Scientists, however, do not use the term, as it refers to a pre-evolutionary view of nature. Evolutionary and phylogenetic taxonomy Transitions in phylogenetic nomenclature In evolutionary taxonomy, the prevailing form of taxonomy during much of the 20th century and still used in non-specialist textbooks, taxa based on morphological similarity are often drawn as "bubbles" or "spindles" branching off from each other, forming evolutionary trees. Transitional forms are seen as falling between the various groups in terms of anatomy, having a mixture of characteristics from inside and outside the newly branched clade. With the establishment of cladistics in the 1990s, relationships commonly came to be expressed in cladograms that illustrate the branching of the evolutionary lineages in stick-like figures. The different so-called "natural" or "monophyletic" groups form nested units, and only these are given phylogenetic names. While in traditional classification tetrapods and fish are seen as two different groups, phylogenetically tetrapods are considered a branch of fish. Thus, with cladistics there is no longer a transition between established groups, and the term "transitional fossils" is a misnomer. Differentiation occurs within groups, represented as branches in the cladogram. In a cladistic context, transitional organisms can be seen as representing early examples of a branch, where not all of the traits typical of the previously known descendants on that branch have yet evolved. 
Such early representatives of a group are usually termed "basal taxa" or "sister taxa," depending on whether the fossil organism belongs to the daughter clade or not. Transitional versus ancestral A source of confusion is the notion that a transitional form between two different taxonomic groups must be a direct ancestor of one or both groups. The difficulty is exacerbated by the fact that one of the goals of evolutionary taxonomy is to identify taxa that were ancestors of other taxa. However, because evolution is a branching process that produces a complex bush pattern of related species rather than a linear process producing a ladder-like progression, and because of the incompleteness of the fossil record, it is unlikely that any particular form represented in the fossil record is a direct ancestor of any other. Cladistics deemphasizes the concept of one taxonomic group being an ancestor of another, and instead emphasizes the identification of sister taxa that share a more recent common ancestor with one another than they do with other groups. There are a few exceptional cases, such as some marine plankton microfossils, where the fossil record is complete enough to suggest with confidence that certain fossils represent a population that was actually ancestral to a later population of a different species. But, in general, transitional fossils are considered to have features that illustrate the transitional anatomical features of actual common ancestors of different taxa, rather than to be actual ancestors. Prominent examples Archaeopteryx Archaeopteryx is a genus of theropod dinosaur closely related to the birds. Since the late 19th century, it has been accepted by palaeontologists, and celebrated in lay reference works, as being the oldest known bird, though a study in 2011 has cast doubt on this assessment, suggesting instead that it is a non-avialan dinosaur closely related to the origin of birds. It lived in what is now southern Germany in the Late Jurassic period around 150 million years ago, when Europe was an archipelago in a shallow warm tropical sea, much closer to the equator than it is now. Similar in shape to a European magpie, with the largest individuals possibly attaining the size of a raven, Archaeopteryx could grow to about 0.5 metres (1.6 ft) in length. Despite its small size, broad wings, and inferred ability to fly or glide, Archaeopteryx has more in common with other small Mesozoic dinosaurs than it does with modern birds. In particular, it shares the following features with the deinonychosaurs (dromaeosaurs and troodontids): jaws with sharp teeth, three fingers with claws, a long bony tail, hyperextensible second toes ("killing claw"), feathers (which suggest homeothermy), and various skeletal features. These features make Archaeopteryx a clear candidate for a transitional fossil between dinosaurs and birds, making it important in the study both of dinosaurs and of the origin of birds. The first complete specimen was announced in 1861, and ten more Archaeopteryx fossils have been found since then. Most of the eleven known fossils include impressions of feathers—among the oldest direct evidence of such structures. Moreover, because these feathers take the advanced form of flight feathers, Archaeopteryx fossils are evidence that feathers began to evolve before the Late Jurassic. Australopithecus afarensis The hominid Australopithecus afarensis represents an evolutionary transition between modern bipedal humans and their quadrupedal ape ancestors. 
A number of traits of the A. afarensis skeleton strongly reflect bipedalism, to the extent that some researchers have suggested that bipedality evolved long before A. afarensis. In overall anatomy, the pelvis is far more human-like than ape-like. The iliac blades are short and wide, the sacrum is wide and positioned directly behind the hip joint, and there is clear evidence of a strong attachment for the knee extensors, implying an upright posture. While the pelvis is not entirely like that of a human (being markedly wide, or flared, with laterally orientated iliac blades), these features point to a structure radically remodelled to accommodate a significant degree of bipedalism. The femur angles in toward the knee from the hip. This trait allows the foot to fall closer to the midline of the body, and strongly indicates habitual bipedal locomotion. Present-day humans, orangutans and spider monkeys possess this same feature. The feet feature adducted big toes, making it difficult if not impossible to grasp branches with the hindlimbs. Besides locomotion, A. afarensis also had a slightly larger brain than a modern chimpanzee (the closest living relative of humans) and had teeth that were more human than ape-like. Pakicetids, Ambulocetus The cetaceans (whales, dolphins and porpoises) are marine mammal descendants of land mammals. The pakicetids are an extinct family of hoofed mammals that are the earliest whales, whose closest sister group is Indohyus from the family Raoellidae. They lived in the Early Eocene, around 53 million years ago. Their fossils were first discovered in North Pakistan in 1979, at a river not far from the shores of the former Tethys Sea. Pakicetids could hear under water, using enhanced bone conduction, rather than depending on tympanic membranes like most land mammals. This arrangement does not give directional hearing under water. Ambulocetus natans, which lived about 49 million years ago, was discovered in Pakistan in 1994. It was probably amphibious, and looked like a crocodile. In the Eocene, ambulocetids inhabited the bays and estuaries of the Tethys Ocean in northern Pakistan. The fossils of ambulocetids are always found in near-shore shallow marine deposits associated with abundant marine plant fossils and littoral molluscs. Although they are found only in marine deposits, their oxygen isotope values indicate that they consumed water with a range of degrees of salinity, some specimens showing no evidence of sea water consumption and others none of fresh water consumption at the time when their teeth were fossilized. It is clear that ambulocetids tolerated a wide range of salt concentrations. Their diet probably included land animals that approached water for drinking, or freshwater aquatic organisms that lived in the river. Hence, ambulocetids represent the transition phase of cetacean ancestors between freshwater and marine habitat. Tiktaalik Tiktaalik is a genus of extinct sarcopterygian (lobe-finned fish) from the Late Devonian period, with many features akin to those of tetrapods (four-legged animals). It is one of several lines of ancient sarcopterygians to develop adaptations to the oxygen-poor shallow water habitats of its time—adaptations that led to the evolution of tetrapods. Well-preserved fossils were found in 2004 on Ellesmere Island in Nunavut, Canada. Tiktaalik lived approximately 375 million years ago. 
Paleontologists suggest that it is representative of the transition between non-tetrapod vertebrates such as Panderichthys, known from fossils 380 million years old, and early tetrapods such as Acanthostega and Ichthyostega, known from fossils about 365 million years old. Its mixture of primitive fish and derived tetrapod characteristics led one of its discoverers, Neil Shubin, to characterize Tiktaalik as a "fishapod." Unlike many previous, more fish-like transitional fossils, the "fins" of Tiktaalik have basic wrist bones and simple rays reminiscent of fingers. They may have been weight-bearing. Like all modern tetrapods, it had rib bones, a mobile neck with a separate pectoral girdle, and lungs, though it had the gills, scales, and fins of a fish. However, in a 2008 paper by Boisvert et al., it is noted that Panderichthys, due to its more derived distal portion, might be closer to tetrapods than Tiktaalik, which might have independently developed similarities to tetrapods by convergent evolution. Tetrapod footprints found in Poland and reported in Nature in January 2010 were "securely dated" at 10 million years older than the oldest known elpistostegids (of which Tiktaalik is an example), implying that animals like Tiktaalik, possessing features that evolved around 400 million years ago, were "late-surviving relics rather than direct transitional forms, and they highlight just how little we know of the earliest history of land vertebrates." Amphistium Pleuronectiformes (flatfish) are an order of ray-finned fish. The most obvious characteristic of the modern flatfish is their asymmetry, with both eyes on the same side of the head in the adult fish. In some families the eyes are always on the right side of the body (dextral or right-eyed flatfish) and in others they are always on the left (sinistral or left-eyed flatfish). The primitive spiny turbots include equal numbers of right- and left-eyed individuals, and are generally less asymmetrical than the other families. Other distinguishing features of the order are the presence of protrusible eyes, another adaptation to living on the seabed (benthos), and the extension of the dorsal fin onto the head. Amphistium is a 50-million-year-old fossil fish identified as an early relative of the flatfish, and as a transitional fossil. In Amphistium, the transition from the typical symmetric head of a vertebrate is incomplete, with one eye placed near the top-center of the head. Paleontologists concluded that "the change happened gradually, in a way consistent with evolution via natural selection—not suddenly, as researchers once had little choice but to believe." Amphistium is among the many fossil fish species known from the Monte Bolca Lagerstätte of Lutetian Italy. Heteronectes is a related and very similar fossil from slightly earlier strata of France. Runcaria A Middle Devonian precursor to seed plants has been identified from Belgium, predating the earliest seed plants by about 20 million years. Runcaria, small and radially symmetrical, is an integumented megasporangium surrounded by a cupule. The megasporangium bears an unopened distal extension protruding above the multilobed integument. It is suspected that the extension was involved in anemophilous pollination. Runcaria sheds new light on the sequence of character acquisition leading to the seed, having all the qualities of seed plants except for a solid seed coat and a system to guide the pollen to the seed.
Fossil record Not every transitional form appears in the fossil record, because the fossil record is not complete. Organisms are only rarely preserved as fossils in the best of circumstances, and only a fraction of such fossils have been discovered. Paleontologist Donald Prothero noted that this is illustrated by the fact that the number of species known through the fossil record was less than 5% of the number of known living species, suggesting that the number of species known through fossils must be far less than 1% of all the species that have ever lived. Because of the specialized and rare circumstances required for a biological structure to fossilize, logic dictates that known fossils represent only a small percentage of all life-forms that ever existed—and that each discovery represents only a snapshot of evolution. The transition itself can only be illustrated and corroborated by transitional fossils, which never demonstrate an exact half-way point between clearly divergent forms. The fossil record is very uneven and, with few exceptions, is heavily slanted toward organisms with hard parts, leaving most groups of soft-bodied organisms with little to no fossil record. The groups considered to have a good fossil record, including a number of transitional fossils between traditional groups, are the vertebrates, the echinoderms, the brachiopods and some groups of arthropods. History Post-Darwin The idea that animal and plant species were not constant, but changed over time, was suggested as far back as the 18th century. Darwin's On the Origin of Species, published in 1859, gave it a firm scientific basis. A weakness of Darwin's work, however, was the lack of palaeontological evidence, as pointed out by Darwin himself. While it is easy to imagine natural selection producing the variation seen within genera and families, the transmutation between the higher categories was harder to imagine. The dramatic find of the London specimen of Archaeopteryx in 1861, only two years after the publication of Darwin's work, offered for the first time a link between the class of the highly derived birds, and that of the more basal reptiles. In a letter to Darwin, the palaeontologist Hugh Falconer wrote: Had the Solnhofen quarries been commissioned—by august command—to turn out a strange being à la Darwin—it could not have executed the behest more handsomely—than in the Archaeopteryx. Thus, transitional fossils like Archaeopteryx came to be seen as not only corroborating Darwin's theory, but as icons of evolution in their own right. For example, the Swedish encyclopedic dictionary Nordisk familjebok of 1904 showed an inaccurate Archaeopteryx reconstruction (see illustration) of the fossil, "ett af de betydelsefullaste paleontologiska fynd, som någonsin gjorts" ("one of the most significant paleontological discoveries ever made"). The rise of plants Transitional fossils are not only those of animals. With the increasing mapping of the divisions of plants at the beginning of the 20th century, the search began for the ancestor of the vascular plants. In 1917, Robert Kidston and William Henry Lang found the remains of an extremely primitive plant in the Rhynie chert in Aberdeenshire, Scotland, and named it Rhynia. The Rhynia plant was small and stick-like, with simple dichotomously branching stems without leaves, each tipped by a sporangium. 
The simple form echoes that of the sporophyte of mosses, and it has been shown that Rhynia had an alternation of generations, with a corresponding gametophyte in the form of crowded tufts of diminutive stems only a few millimetres in height. Rhynia thus falls midway between mosses and early vascular plants like ferns and clubmosses. From a carpet of moss-like gametophytes, the larger Rhynia sporophytes grew much like simple clubmosses, spreading by means of horizontal growing stems growing rhizoids that anchored the plant to the substrate. The unusual mix of moss-like and vascular traits and the extreme structural simplicity of the plant had huge implications for botanical understanding. Missing links The idea of all living things being linked through some sort of transmutation process predates Darwin's theory of evolution. Jean-Baptiste Lamarck envisioned that life was generated constantly in the form of the simplest creatures, and strove towards complexity and perfection (i.e. humans) through a progressive series of lower forms. In his view, lower animals were simply newcomers on the evolutionary scene. After On the Origin of Species, the idea of "lower animals" representing earlier stages in evolution lingered, as demonstrated in Ernst Haeckel's figure of the human pedigree. While the vertebrates were then seen as forming a sort of evolutionary sequence, the various classes were distinct, the undiscovered intermediate forms being called "missing links." The term was first used in a scientific context by Charles Lyell in the third edition (1851) of his book Elements of Geology in relation to missing parts of the geological column, but it was popularized in its present meaning by its appearance on page xi of his book Geological Evidences of the Antiquity of Man of 1863. By that time, it was generally thought that the end of the last glacial period marked the first appearance of humanity; Lyell drew on new findings in his Antiquity of Man to put the origin of human beings much further back. Lyell wrote that it remained a profound mystery how the huge gulf between man and beast could be bridged. Lyell's vivid writing fired the public imagination, inspiring Jules Verne's Journey to the Center of the Earth (1864) and Louis Figuier's 1867 second edition of La Terre avant le déluge ("Earth before the Flood"), which included dramatic illustrations of savage men and women wearing animal skins and wielding stone axes, in place of the Garden of Eden shown in the 1863 edition. The search for a fossil showing transitional traits between apes and humans, however, was fruitless until the young Dutch geologist Eugène Dubois found a skullcap, a molar and a femur on the banks of Solo River, Java in 1891. The find combined a low, ape-like skull roof with a brain estimated at around 1000 cc, midway between that of a chimpanzee and an adult human. The single molar was larger than any modern human tooth, but the femur was long and straight, with a knee angle showing that "Java Man" had walked upright. Given the name Pithecanthropus erectus ("erect ape-man"), it became the first in what is now a long list of human evolution fossils. At the time it was hailed by many as the "missing link," helping set the term as primarily used for human fossils, though it is sometimes used for other intermediates, like the dinosaur-bird intermediary Archaeopteryx. While "missing link" is still a popular term, well-recognized by the public and often used in the popular media, the term is avoided in scientific publications. 
Some bloggers have called it "inappropriate"; both because the links are no longer "missing", and because human evolution is no longer believed to have occurred in terms of a single linear progression. Punctuated equilibrium The theory of punctuated equilibrium developed by Stephen Jay Gould and Niles Eldredge and first presented in 1972 is often mistakenly drawn into the discussion of transitional fossils. This theory, however, pertains only to well-documented transitions within taxa or between closely related taxa over a geologically short period of time. These transitions, usually traceable in the same geological outcrop, often show small jumps in morphology between extended periods of morphological stability. To explain these jumps, Gould and Eldredge envisaged comparatively long periods of genetic stability separated by periods of rapid evolution. Gould made the following observation concerning creationist misuse of his work to deny the existence of transitional fossils: See also Crocoduck Evidence of common descent Missing link Speciation References Sources The book is available from The Complete Work of Charles Darwin Online. Retrieved 2015-05-13. External links Evolutionary biology concepts Zoology Phylogenetics
Transitional fossil
Biology
4,613
14,227,786
https://en.wikipedia.org/wiki/Transmissibility%20%28structural%20dynamics%29
Transmissibility, in the context of structural dynamics, can be defined as the ratio of the maximum force (f_max) on the floor as a result of the vibration of a machine to the maximum machine force (F_0): T_f = f_max / F_0 = R_d · sqrt(1 + (2ζr)^2), where ζ is the damping ratio, r is the frequency ratio, and R_d = 1 / sqrt((1 - r^2)^2 + (2ζr)^2) is the ratio of the dynamic to static amplitude. Further reading Vibration Control and Measurement Tech Tip: Spring & Dampers, Episode Four References Structural analysis
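As a quick numeric illustration of the formula above (the function name and the example values ζ = 0.05 and r = 2 are arbitrary assumptions, not taken from the source):

```python
import math

def transmissibility(zeta, r):
    """Force transmissibility T_f = R_d * sqrt(1 + (2*zeta*r)**2)."""
    # R_d is the ratio of dynamic to static amplitude for a damped single-degree-of-freedom system.
    r_d = 1.0 / math.sqrt((1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2)
    return r_d * math.sqrt(1.0 + (2.0 * zeta * r) ** 2)

# A lightly damped machine (5% damping) running at twice the mounting's natural frequency:
print(transmissibility(zeta=0.05, r=2.0))   # ~0.34, i.e. the floor sees about a third of the machine force
```

Consistent with standard vibration-isolation practice, T_f falls below 1 once the frequency ratio exceeds the square root of 2.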
Transmissibility (structural dynamics)
Engineering
90
42,599,642
https://en.wikipedia.org/wiki/Hemihelix
A hemihelix is a curved geometric shape consisting of a series of helices with alternating chirality, connected by a perversion at the reversals. The formation of hemihelices with periodic distributions of perversions in slender structures is understood in terms of competing buckling instabilities generated by in-plane stresses. References External links Helices Geometric shapes Curves Articles containing video clips
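A crude way to visualize the idea is to trace a helix whose handedness flips at regular intervals. The parametrization below is only an illustrative sketch (the radius, pitch, and abrupt sign flip at each reversal are arbitrary choices); the shapes studied in the elastic-rod literature have smooth perversions set by the mechanics of the strip.

```python
import math

def hemihelix_point(t, radius=1.0, pitch=0.2, period=2 * math.pi):
    """Point on a piecewise helix whose chirality alternates every `period` of the parameter t."""
    k = int(t // period)                     # index of the current helical segment
    sign = 1.0 if k % 2 == 0 else -1.0       # alternate handedness segment by segment
    phase = sign * (t - k * period)          # wind one way, then the other
    x = radius * math.cos(phase)
    y = radius * math.sin(phase)
    z = pitch * t                            # rise monotonically along the axis
    return x, y, z

# Sample the curve densely, e.g. for plotting with any 3D plotting tool.
points = [hemihelix_point(0.05 * i) for i in range(500)]
```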
Hemihelix
Mathematics
82
99,468
https://en.wikipedia.org/wiki/Bert%20H%C3%B6lldobler
Berthold Karl Hölldobler BVO (born 25 June 1936) is a German zoologist, sociobiologist and evolutionary biologist who studies evolution and social organization in ants. He is the author of several books, including The Ants, for which he and his co-author, E. O. Wilson, received the Pulitzer Prize for non-fiction writing in 1991. Biography Hölldobler was born June 25, 1936, in Erling-Andechs, Bavaria, Germany; he was the son of Karl and Maria Hölldobler. He studied biology and chemistry at the University of Würzburg. His doctoral thesis was on the social behavior of the male carpenter ant and their role in the organization of carpenter ant societies. He was named professor of zoology at the University of Frankfurt in 1971. From 1973 to 1990, he was professor of biology and the Alexander Agassiz Professor of Zoology at Harvard University in Cambridge, Massachusetts. In 1989, he returned to Germany to accept the chair of behavioral physiology and sociobiology at the Theodor-Boveri-Institute of the University of Würzburg. From 2002 to 2008 Hölldobler was an Andrew D. White Professor at Large at Cornell University in Ithaca, New York. Since his retirement in 2004 Hölldobler has worked as a research professor in the School of Life Sciences at Arizona State University in Tempe, Arizona. There he is one of the founders of the Social Insect Research Group (SIRG) and the Center for Social Dynamics and Complexity. Research fields and publications Hölldobler is one of the world's leading experts in myrmecology. His experimental and theoretical contributions cover sociobiology, behavioral ecology, and chemical ecology. His primary study subjects are social insects and in particular ants. His work has provided valuable insights into mating strategies, regulation of reproduction, the evolution of social parasitism, chemical communications, and the concept of "superorganisms". Publications on these topics include: 1965. with U. Maschwitz, Der Hochzeitsschwarm der Rossameise Camponotus herculeanus L. (Hym. Formicidae). Z. Vergl. Physiol. 50:551-568 1971. Sex pheromone in the ant Xenomyrmex floridanus J. Insect. Physiol. 17:1497-1499 1973. with M. Wüst Ein Sexualpheromon bei der Pharaoameise Monomorium pharaonis (L.) Z. Tierpsychol. 32:1-9 1974. Home range orientation and territoriality in harvesting ants Proc. Natl. Acad. Sci. USA, 71:3274-3277 1976. Recruitment behavior, home range orientation and territoriality in harvester ants, Pogonomyrmex Behav. Ecol. Sociobiol. 1:3-44 1978. with H. Engel, Tergal and sternal glands in ants Psyche 85:285-330 1980. with C. Lumsden, Territorial Strategies in Ants Science 210:732-739 1982. with H. Engel, R.W. Taylor, A New Sternal Gland in Ants and its Function in Chemical Communication Naturwissenschaften 69:90 1983. with E.O. Wilson, Queen Control in Colonies of Weaver Ants (Hymenoptera: Formicidae) Ann. of the Ent. Soc. of America 76:235-238 1984. with N.F. Carlin, Nestmate and Kin Recognition in Interspecific Mixed Colonies of Ants Science 222:1027-1029 1987. with N.F. Carlin, Anonymity and specificity in the chemical communication signals of social insects J. Comp. Physiol. A 161:567-581 1992. with K. Sommer, Coexistence and dominance among queens and mated workers in the ant Pachycondyla tridentata Naturwissenschaften 19:470-472 1998. with M. Obermayer, G.D. Alpert, Chemical trail communication in the amblyoponine species Mystrium rogeri Forel (Hymenoptera, Formicidae, Ponerinae) Chemoecology, 8:119-123 1999 with K. Tsuji, K. 
Egashira, Regulation of worker reproduction by direct physical contact in the ant Diacamma sp. from Japan Animal Behaviour 58:337-343 2003. with J. Gadau, C.P. Strehl, J. Oettler, Determinants of intracolonial relatedness in Pogonomyrmex rugosus (Hymenoptera; Formicidae) – mating frequency and brood raids, Molecular Ecology 12: 1931-1938 1999. with J. Liebig, C. Peeters, Worker policing limits the number of reproductives in a ponerine ant Proc. R. Soc. Lond. B 266:1865-1870 Awards John Simon Guggenheim Fellowship (1980) Gottfried Wilhelm Leibniz Prize of the Deutschen Forschungsgemeinschaft (1990) Pulitzer Prize (1991) for The Ants together with Edward O. Wilson U.S. Senior Scientist Prize of the Alexander von Humboldt Foundation Werner Heisenberg-Medal of the Alexander von Humboldt Foundation Körber Prize for European Science (1996) Karl Ritter von Frisch Medal and Science Prize of the German Zoological Society (1996) Benjamin Franklin-Wilhelm v. Humboldt Prize of the German-American Academic Council (1999) Honorary doctorate in Biology from the University of Konstanz (2000) Order of Merit First Class of the Federal Republic of Germany (2000) Bavarian Maximilian Order for Science and Art (2003) Alfried-Krupp-Wissenschaftspreis (2004) Treviranus-Medal of the Verband deutscher Biologen (vdbiol) (2006) Lichtenberg Medal (2010) Ernst-Jünger-Prize for Entomology Baden-Württemberg (2010) Cothenius Medal in Gold of the Deutschen Akademie der Naturforscher Leopoldina (2011) Exemplar Award by the American Animal Behavior Society (2013) Fabricius medal (2019) (German entomological society) Academic associations Fellow of the American Animal Behavior Society (1992) Bayerischen Akademie der Wissenschaften (corresponding member 1986, full member 1995) American Academy of Arts and Sciences Deutsche Nationale Akademie der Wissenschaften Leopoldina (1975) Fellow of the American Association for the Advancement of Science (1979) Academia Europaea (1994) Berlin-Brandenburgische Akademie der Wissenschaften (1995) National Academy of Sciences (United States) (1998) American Philosophical Society (1997) Fellow of the Entomological Society of America (2019) Documentary films In addition to his published scientific papers and books, Hölldobler's work was the subject of the documentary film Ants - Nature's Secret Power, winner of the 2005 Jackson Hole Wildlife Film Festival's Special Jury Prize. Books Bert Hölldobler, E.O. Wilson: The Ants, Harvard University Press, 1990, Bert Hölldobler, E.O. Wilson: Journey to the Ants: A Story of Scientific Exploration, 1994, Bert Hölldobler, E.O. Wilson: The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies, W.W. Norton, 2008. Bert Hölldobler, E.O. Wilson: The Leafcutter Ants: Civilization by Instinct, W.W. Norton & Company, Inc., 2011, Bert Hölldobler, Christina L. Kwapich: The Guests of Ants: How Myrmecophiles Interact with Their Hosts, Belknap Press of Harvard University Press, 2022, Notes References External links Curriculum vitae Faculty website Darwin Distinguished Lecture Series Jackson Hole Wildlife Film Festival The Social Nature of Nature - Ask A Biologist Audio Interview Web interviews Hölldobler's 2007 interview on the Ask A Biologist podcast program details his early life growing up in Germany as well as his interest in ants and writing. UC Riverside.
Leading Entomologist to Give Talk at UC Riverside on Communication and Cooperation in Ant Societies 1936 births Living people Gottfried Wilhelm Leibniz Prize winners German entomologists Myrmecologists Pulitzer Prize for General Nonfiction winners University of Würzburg alumni Academic staff of the University of Würzburg Academic staff of Goethe University Frankfurt Harvard University faculty Cornell University faculty Arizona State University faculty 20th-century German writers 21st-century German writers 21st-century German male writers Members of the German National Academy of Sciences Leopoldina Members of Academia Europaea Foreign associates of the National Academy of Sciences Members of the United States National Academy of Sciences Fellows of the Entomological Society of America People from Starnberg (district) German expatriates in the United States Chemical ecologists Entomological writers Sociobiologists Members of the American Philosophical Society Recipients of the Cothenius Medal
Bert Hölldobler
Chemistry
1,871
4,926,585
https://en.wikipedia.org/wiki/John%20Muratore
John F. Muratore (born 1956) is a former NASA systems engineer-project manager and launch director at SpaceX. He is well known in aerospace circles for his gregarious and unconventional style and use of rapid spiral development to reduce cost and schedule for introducing technical innovations. Biography Muratore was born in Brooklyn in 1956. He earned his Bachelor of Science in Electrical Engineering in 1979 from Yale University and a Master of Science in Computer Science in 1988 from the University of Houston–Clear Lake. He served in the US Air Force on the Air Force Space Shuttle Program at Vandenberg AFB, CA from 1979 until 1983, where he spent most of his time on assignment at Kennedy Space Center working on the Launch Sequence software. After his tour in the Air Force, Mr. Muratore joined NASA JSC, after which he held progressively responsible leadership positions including Space Shuttle Flight Director, and Chief, Control Center Systems Division in the Mission Operations Directorate; and Associate Director and Deputy Manager, Advance Development Office and Assistant to the Director, Engineering within the Engineering Directorate. He was the 35th flight director in the history of human spaceflight of the United States and had the call sign "Kitty Hawk Flight" in honor of the location of the Wright Brothers' first powered flight. As Chief of the Control Center Division, he led the conversion of Mission Control Center from the Apollo legacy mainframe-based system to a networked Unix workstation-based system to support the Space Shuttle and the International Space Station. From 1996 to 2003, he was the Program Manager of the X-38 program, an unmanned demonstrator which performed a series of successful demonstration flights at Edwards Air Force Base. He gathered a team of young, relatively inexperienced but highly motivated engineers to try to apply the 'faster, better, cheaper' method advocated by Daniel Goldin to human spaceflight, in order for NASA to obtain a Crew Return Vehicle at affordable cost. In addition to serving as the project manager, Muratore served as Mission Director and B-52 Launch Panel Operator for several of the atmospheric drop tests of the vehicle. In 2003, the X-38 program was cancelled due to the International Space Station program's financial woes. Following the Columbia accident, Mr. Muratore was named Manager, Space Shuttle Systems Engineering and Integration Office, where he led the re-certification of the shuttle to enable its return to flight. In April 2006, following his technical opposition to Michael Griffin's decisions regarding the Shuttle return to flight, he was reassigned as Senior Systems Engineer supporting the Shuttle/Station Engineering Office in the Engineering Directorate. In August 2006, as part of NASA's outreach program, Muratore became an Adjunct Lecturer at Rice University in Houston, Texas. He taught graduate-level classes in Aerospace Systems Engineering and Introductory Flight Testing. Also while at Rice, Muratore advised an undergraduate Senior Design group tasked with creating an experiment to be flown on NASA's Weightless Wonder microgravity research aircraft. The purpose of the experiment was to demonstrate the feasibility of using the aircraft as a test-bed for commercial small-scale zero-gravity systems; testing of such systems in a 1 G environment requires costly simulators that cannot completely model micro- and zero-gravity environments.
The Rice team, under the guidance of Muratore, showed that the NASA aircraft indeed was a viable platform for such testing, creating an impressive mock satellite in only two semesters with a very limited budget. Muratore's experiences at Rice University inspired him to teach full-time. He served four years as a research associate professor at the University of Tennessee Space Institute in Tullahoma, TN. His research at UTSI focused on the use of advanced airborne data acquisition networks for aircraft flight testing and airborne science. He instrumented aircraft that supported missions for NASA and NOAA making earth observations and performing atmospheric sampling. A course he developed in Space Systems Engineering at UTSI is hosted by NASA online for free access. In 2011, Muratore returned to space development by joining SpaceX of Hawthorne California. He supported the first commercial Falcon-9/Dragon mission to the International Space Station in May 2012. Mr Muratore served as the Launch Chief Engineer for the Falcon 9-7 launch of the SES-8 satellite in December 2013, the Falcon 9-8 launch of the Thaicom-6 satellite in January 2014 and the Falcon 9-15 launch of the DSCOVR spacecraft in February 2015. Muratore led the conversion of Launch Complex 39a and was Launch Director for the first flight of Falcon 9 at Launch Complex 39a in February 2017. Muratore then worked on the rebuild of SLC-40 which was damaged in the AMOS-6 explosion on 1 Sept 2016. Muratore was Launch Director for the first launch off the rebuilt SLC-40 in December 2017. In 2018 Muratore moved to South Texas to serve as the site director for the SpaceX complex at Boca Chica Texas, Muratore led the development of the site, the Starship build facilities and the launch pad. In 2019 Muratore served as the launch director for the first flight of Starhopper, a prototype of the SpaceX Mars Starship vehicle. Starhopper was the first SpaceX flight test of the Raptor methane-LOX engine. During his time at NASA, Mr Muratore was awarded the NASA Outstanding Leadership Medal, The NASA Exceptional Achievement Medal and the NASA Exceptional Service Medal. In September 2020, Muratore left SpaceX and joined Kairos Power as Senior Director of Special Projects. In this role, Muratore led the construction and operations of the Engineering Test Unit hardware demonstration in Albuquerque New Mexico. Kairos Power's mission is to enable the world’s transition to clean energy, with the ultimate goal of dramatically improving people’s quality of life while protecting the environment. Mr Muratore left Kairos Power in March 2023 at the completion of the ETU-1 build. The ETU-1 build was described in the 7 February 2023 edition of the Atlantic magazine. In April 2023, Muratore re-entered aerospace joining Venturi Astrolab working on the Lunar Terrain Vehicle (LTV) Services contract proposal. In April 2024, Venturi Astrolab won a competitive contract for developing a LTV rover for human and robotic exploration of the lunar South Pole. Muratore is serving as the Program Manager for Astrolab's effort. Muratore is a registered Professional Engineer (PE) in the State of Texas Publications Muratore has written several articles. A selection: John F. Muratore, Troy A. Heindel, Terri B. Murphy, Arthur N. Rasmussen, and Robert Z. McFarland, Space Shuttle Telemetry Monitoring by Expert Systems in Mission Control In Innovative Applications of Artificial Intelligence, H. Schoor and A. Rappaport, Eds., AAAI Press, 1989, pp. 3–14. 1990. John F. Muratore, Troy A. 
Heindel, Terri B. Murphy, Arthur N. Rasmussen, Robert Z. McFarland: "Real-Time Data Acquisition at Mission Control". In: Commun. ACM 33(12): 18-31 (1990) Ricardo Machin, Jenny Stein, John F. Muratore, An overview of the X-38 prototype crew return vehicle development and test program,15th Aerodynamic Decelerator Systems Technology Conference,10.2514/6.1999-1703, https://arc.aiaa.org/doi/abs/10.2514/6.1999-1703 2009 Chapter 7 Emergency Systems of “Safety Design of Space Systems” Gary Eugene Musgrave, Axel (Skip) M. Larsen and Tommaso Sgobba Elsevier References External links University of Tennessee Space Institute (University of Tennessee) NASA Space Grant Course Online - Space Systems Engineering SpaceX spacex-to-restore-upgraded-launch-pad-to-service-with-tuesday-cargo-flight SpaceX old and improved launch pad reopens for business tomorrow Kairos Power howa group of nasa renegades transformed mission control Venturi Astrolab awarded by NASA Moving Humanity Forward 1956 births Living people Systems engineers Yale University alumni University of Houston–Clear Lake alumni Rice University staff
John Muratore
Engineering
1,684
39,153,890
https://en.wikipedia.org/wiki/Mycobacterium%20virus%20L5
Mycobacterium virus L5 is a bacteriophage known to infect bacterial species of the genus Mycobacterium, including Mycobacterium smegmatis and Mycobacterium tuberculosis. The viral effect on these species plays an important role in vaccine development and research on Mycobacteria pathogenic properties. References Siphoviridae Mycobacteriophages
Mycobacterium virus L5
Biology
87
23,281,838
https://en.wikipedia.org/wiki/Beryllium%20sulfate
Beryllium sulfate, normally encountered as the tetrahydrate [Be(H2O)4]SO4, is a white crystalline solid. It was first isolated in 1815 by Jöns Jakob Berzelius. Beryllium sulfate may be prepared by treating an aqueous solution of many beryllium salts with sulfuric acid, followed by evaporation of the solution and crystallization. The hydrated product may be converted to the anhydrous salt by heating at 400 °C. Structure According to X-ray crystallography, the tetrahydrate contains a tetrahedral [Be(OH2)4]2+ unit and sulfate anions. The small size of the Be2+ cation determines the number of water molecules that can be coordinated. In contrast, the analogous magnesium salt, MgSO4·6H2O, contains an octahedral [Mg(OH2)6]2+ unit. The existence of the tetrahedral [Be(OH2)4]2+ ion in aqueous solutions of beryllium nitrate and beryllium chloride has been confirmed by vibrational spectroscopy, as indicated by the totally symmetric BeO4 mode at 531 cm−1. This band is absent in beryllium sulfate, and the sulfate modes are perturbed. The data support the existence of Be(OH2)3OSO3. The anhydrous compound has a structure similar to that of boron phosphate. The structure contains alternating tetrahedrally coordinated Be and S, and each oxygen is 2-coordinate (Be-O-S). The Be-O distance is 156 pm and the S-O distance is 150 pm. A mixture of beryllium and radium sulfate was used as the neutron source in the discovery of nuclear fission. References External links IARC Monograph "Beryllium and Beryllium Compounds" IPCS Health & Safety Guide 44 IPCS CICAD 32 Beryllium compounds Sulfates
Beryllium sulfate
Chemistry
400
12,086,235
https://en.wikipedia.org/wiki/Instituto%20de%20Medicina%20Molecular
The Instituto de Medicina Molecular João Lobo Antunes (Institute of Molecular Medicine), or iMM for short, is an associated research institution of the University of Lisbon, in Lisbon, Portugal. IMM is devoted to human genome research with the aim of contributing to a better understanding of disease mechanisms, developing novel predictive tests, improving diagnostics tools, and developing new therapeutic approaches. History IMM was created in November 2001, as a result from the association of 5 research centres from the University of Lisbon Medical School: the Biology and Molecular Pathology Centre (CEBIP), the Lisbon Neurosciences Centre (CNL), the Microcirculation and Vascular Pathobiology Centre (CMBV), the Gastroenterology Centre (CG), and the Nutrition and Metabolism Centre (CNB). In 2003, the Molecular Pathobiology Research Centre (CIPM) of the Portuguese Institute of Oncology Francisco Gentil (IPOFG) became an associate member of IMM. Historically, IMM benefited from the full integration of academic researchers into the Lisbon Medical School who initiated their academic training and scientific careers at Instituto Gulbenkian de Ciência (IGC), in Oeiras, one of the first national institutions to introduce and make use of state-of-the-art cell and molecular biology techniques. The IMM is now known as Instituto de Medicina Molecular João Lobo Antunes, to honour one of its founders and president (2001-2014), Professor João Lobo Antunes. Maria do Carmo-Fonseca is the current president of IMM, having served before as IMM Executive Director since its creation. The current executive director is the malaria researcher Maria Mota. References External links Official site Members Medical research institutes in Portugal Biotechnology organizations University of Lisbon 2001 establishments in Portugal Organizations established in 2001
Instituto de Medicina Molecular
Engineering,Biology
378
70,417,545
https://en.wikipedia.org/wiki/Vincent%20Bouchiat
Vincent Bouchiat (born 1970) is a French condensed matter physicist and entrepreneur. He was a CNRS research director from 1997 to 2019. In 2019 he co-founded the company Grapheal SAS, of which he is currently CEO. Early life and education Bouchiat was born to Claude Bouchiat and Marie-Anne Bouchiat, both of whom were physicists. Vincent Bouchiat pursued his studies in Paris, partly at the Lycée Henri-IV. In 1993, he received an engineering degree from the School of Industrial Physics and Chemistry of Paris (ESPCI) and a master's degree in solid state physics from the University of Paris, Pierre & Marie Curie. He completed his Ph.D. in the Quantronics group at CEA-Saclay in 1997 under the supervision of Michel Devoret and Daniel Estève. Career Directeur de recherche Bouchiat became a director of research at the French National Centre for Scientific Research (CNRS) in 1997. He was affiliated with the Institut Néel in Grenoble from 2012. Bouchiat also became a visiting professor in 2007 in the Physics Department of the University of California, Berkeley. Grapheal SAS In 2019, Bouchiat co-founded the company Grapheal SAS, where he is currently CEO. It is a startup focusing on the healthcare applications of graphene. Research Bouchiat's PhD thesis is recognized as a pioneering study in the field of quantum computing hardware, showing the quantum superposition of charge states in a single Cooper pair box. This experiment paved the way for the realisation of a charge qubit. Bouchiat's research interests cover a wide range of solid state physics and multidisciplinary investigations, which include quantum information, superconductivity, carbon nanostructures (graphene and carbon nanotubes), bioelectronics and translational research in the medical sciences. Awards Bouchiat has won the following awards: Visiting Miller Professorship Award (2007) from the Miller Institute at the University of California, Berkeley Lee-Hsun Research Award (2017) from the Chinese Academy of Sciences (Institute of Metal Research) Yves Rocard Prize (2023) from the French Physical Society Personal life Vincent has a sister, Hélène Bouchiat, who is also a physicist. References External links LinkedIn profile PhD manuscript file 20th-century French physicists Condensed matter physicists Pierre and Marie Curie University alumni ESPCI Paris alumni 1970 births Living people 21st-century French physicists Research directors of the French National Centre for Scientific Research
Vincent Bouchiat
Physics,Materials_science
522
173,323
https://en.wikipedia.org/wiki/Chorology
Chorology (from Greek χῶρος, khōros, "place, space"; and -λογία, -logia) can mean either the study of the causal relations between geographical phenomena occurring within a particular region, or the study of the spatial distribution of organisms (biogeography). In geography, the term was first used by Strabo. In the twentieth century, Richard Hartshorne worked on that notion again. The term was popularized by Ferdinand von Richthofen. See also Chorography Khôra References Biogeography
Chorology
Biology
106
1,886,355
https://en.wikipedia.org/wiki/Eudoxa
Eudoxa was a Swedish think tank, active from 2000 to 2016. Eudoxa had a transhumanist and liberal political profile, with a focus on promoting dynamism, emerging technologies and harm reduction policy, and on discussing the challenges of the environment and the future. It was independent from political parties and other political and religious interest groups. Eudoxa organized seminars and conferences about these subjects, produced reports for corporations and organizations and promoted public debate. It had a staff consisting of both scientists and humanists, in order to bridge the rift between The Two Cultures in evaluating the effects of emerging technologies and give a better analysis. Its intellectual inspiration derived much from the book The Future and Its Enemies by Virginia Postrel. Eudoxa discussed biotechnology, harm reduction, health care, nanotechnology, RFID, and intellectual property. Eudoxa was the Swedish partner in the International Property Rights Index. The think tank consisted of: Waldemar Ingdahl, Alexander Sanchez, and Anders Sandberg. According to the 2014 Global Go To Think Tank Index Report (Think Tanks and Civil Societies Program, University of Pennsylvania), Eudoxa was rated number 25 (of 45) in the "Top Science and Technology Think Tanks" of the world. The think tank was closed in 2016. References External links Eudoxa Political and economic think tanks based in Europe Politics of Sweden 2000 establishments in Sweden Radio-frequency identification Science and technology think tanks Libertarian think tanks Think tanks based in Sweden Libertarianism in Sweden Think tanks established in 2000
Eudoxa
Engineering
308
25,313,236
https://en.wikipedia.org/wiki/C23H38O4
The molecular formula C23H38O4 (molar mass: 378.54 g/mol, exact mass: 378.2770 u) may refer to: Apocholic acid 2-Arachidonoylglycerol (2-AG) Molecular formulas
C23H38O4
Physics,Chemistry
60
14,762,357
https://en.wikipedia.org/wiki/ST13
Hsc70-interacting protein, also known as suppression of tumorigenicity 13 (ST13), is a protein that in humans is encoded by the ST13 gene. Function The protein encoded by this gene is an adaptor protein that mediates the association of the heat shock proteins HSP70 and HSP90. This protein has been shown to be involved in the assembly process of the glucocorticoid receptor, which requires the assistance of multiple molecular chaperones. The expression of this gene is reported to be downregulated in colorectal carcinoma tissue, suggesting that it is a candidate tumor suppressor gene. References Further reading Co-chaperones
ST13
Chemistry
137
363,293
https://en.wikipedia.org/wiki/List%20of%20craters%20on%20Mercury
This is a list of named craters on Mercury, the innermost planet of the Solar System (for other features, see list of geological features on Mercury). Most Mercurian craters are named after famous writers, artists and composers. According to the rules by IAU's Working Group for Planetary System Nomenclature, all new craters must be named after an artist that was famous for more than fifty years, and dead for more than three years, before the date they are named. Craters larger than 250 km in diameter are referred to as "basins" (also see ). As of 2021, there are 414 named Mercurian craters, a small fraction of the total number of named Solar System craters, most of which are lunar, Martian and Venerian craters. Other, non-planetary bodies with numerous named craters include Callisto (141), Ganymede (131), Rhea (128), Vesta (90), Ceres (90), Dione (73), Iapetus (58), Enceladus (53), Tethys (50) and Europa (41). For a full list, see List of craters in the Solar System. A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Terminology As on the Moon and Mars, sequences of craters and basins of differing relative ages provide the best means of establishing stratigraphic order on Mercury. Overlap relations among many large mercurian craters and basins are clearer than those on the Moon. Therefore, as this map shows, we can build up many local stratigraphic columns involving both crater or basin materials and nearby plains materials. Over all of Mercury, the crispness of crater rims and the morphology of their walls, central peaks, ejecta deposits, and secondary-crater fields have undergone systematic changes with time. The youngest craters or basins in a local stratigraphic sequence have the sharpest, crispest appearance. The oldest craters consist only of shallow depressions with slightly raised, rounded rims, some incomplete. On this basis, five age categories of craters and basins have been mapped; the characteristics of each are listed in the explanation. In addition, secondary crater fields are preserved around proportionally far more craters and basins on Mercury than on the Moon or Mars, and are particularly useful in determining overlap relations and degree of modification. Because only limited photographic evidence was available from Mariner 10s three flybys of the planet, these divisions are often tentative. The five crater groups, from youngest to oldest, are: c5: Fresh-appearing, sharp-rimmed, rayed craters. Highest albedo in map area; haloes and rays may extend many crater diameters from rim crests. Superposed on all other map units. Generally smaller and fewer than older craters. c4: Fresh but slightly modified craters—Similar in morphology to c5 craters but without bright haloes or rays; sharp rim crests; continuous ejecta blankets; very few superposed secondary craters. Floors consist of crater or smooth plains materials. c3: Modified craters—Rim crest continuous but slightly rounded and subdued. Ejecta blanket generally less extensive than those of younger craters of similar size. Superposed craters and rays common; smooth plains and intermediate plains materials cover floors of many craters. Central peaks more common than in c4 craters, probably because of larger average size of c3 craters. c2: Subdued craters—Low-rimmed, relatively shallow craters, many with discontinuous rim crests. Floors covered by smooth plains and intermediate plains materials. Crater density of ejecta blankets similar to that of intermediate plains material. 
c1: Degraded craters—Similar to c2 crater material but more deteriorated; many superposed craters. See also List of geological features on Mercury List of quadrangles on Mercury Note References Batson R.M., Russell J.F. (1994), Gazetteer of Planetary Nomenclature, United States Geological Survey Bulletin 2129 Davies M.E., Dwornik S.E., Gault D.E., Strom R.G. (1978), Atlas of Mercury, NASA Scientific and Technical Information Office External links USGS: Mercury nomenclature USGS: Mercury Nomenclature: Craters Atlas of Mercury Mercury Mercury (planet)-related lists
List of craters on Mercury
Astronomy
908
76,904,389
https://en.wikipedia.org/wiki/Ana%20Passos
Ana Lúcia Silva de Passos (born 30 May 1967) is a politician and biologist. From 2015 to 2021, she was a member of the Assembly of the Republic of Portugal. Biography Ana Passos was born on 30 May 1967. She has a doctorate in genetics and molecular biology, and works as a biologist. Passos belongs to the Socialist Party, and from 4 October 2015 to 4 December 2021, she was a member of the Assembly of the Republic of Portugal, from the constituency of the Faro District. References 1967 births Living people 21st-century Portuguese women politicians Members of the 13th Assembly of the Republic (Portugal) Members of the 14th Assembly of the Republic (Portugal) Women members of the Assembly of the Republic (Portugal) Socialist Party (Portugal) politicians 21st-century Portuguese biologists Women biologists Women geneticists Women molecular biologists Molecular biologists Molecular geneticists Geneticists 20th-century Portuguese biologists 21st-century biologists People from Faro District
Ana Passos
Chemistry,Biology
195
14,054,801
https://en.wikipedia.org/wiki/Koch%20reaction
The Koch reaction is an organic reaction for the synthesis of tertiary carboxylic acids from alcohols or alkenes and carbon monoxide. Some commonly industrially produced Koch acids include pivalic acid, 2,2-dimethylbutyric acid and 2,2-dimethylpentanoic acid. The Koch reaction employs carbon monoxide as a reagent and can therefore be classified as a carbonylation. The carbonylated product is converted to a carboxylic acid, so in this respect the Koch reaction can also be classified as a carboxylation. Substrate scope and applications Pivalic acid is produced from isobutene using the Koch reaction, as are several other branched carboxylic acids. An estimated 150,000 tonnes of "Koch acids" and their derivatives are produced annually. Koch–Haaf-type reactions have been used to carboxylate adamantanes. Conditions The reaction is a strongly acid-catalyzed carbonylation and typically proceeds under pressures of CO and at elevated temperatures. The commercially important synthesis of pivalic acid from isobutene operates near 50 °C and about 5 MPa (roughly 50 atm). Generally the reaction is conducted with strong mineral acids such as sulfuric acid, HF, or phosphoric acid in combination with BF3. Formic acid, which readily decomposes to carbon monoxide in the presence of acids, can be used instead of carbon monoxide. This method is referred to as the Koch–Haaf reaction. This variation allows for reactions at nearly standard room temperature and pressure. Mechanism The mechanism has been intensively scrutinized. It involves generation of a tertiary carbenium ion, which binds carbon monoxide. The resulting acylium ion is then hydrolysed to the tertiary carboxylic acid. The carbenium ion can be produced either by protonation of an alkene or by protonation/elimination of a tertiary alcohol. Catalyst usage and variations Standard acid catalysts are sulfuric acid or a mixture of BF3 and HF. Although the use of acidic ionic liquids for the Koch reaction requires relatively high temperatures and pressures (8 MPa and 430 K in one 2006 study), acidic ionic solutions themselves can be reused with only a very slight decrease in yield, and the reactions can be carried out biphasically to ensure easy separation of products. A large number of transition metal carbonyl cation catalysts have also been investigated for use in Koch-like reactions: Cu(I), Au(I) and Pd(I) carbonyl cation catalysts dissolved in sulfuric acid can allow the reaction to progress at room temperature and atmospheric pressure. Use of a nickel tetracarbonyl catalyst with CO and water as a nucleophile is known as the Reppe carbonylation, and there are many variations on this type of metal-mediated carbonylation used in industry, particularly the Monsanto and Cativa processes, which convert methanol to acetic acid using acid catalysts and carbon monoxide in the presence of metal catalysts. Because of the use of strong mineral acids, industrial implementation of the Koch reaction is complicated by equipment corrosion, separation procedures for products and difficulty in managing large amounts of waste acid. Several acid resins and acidic ionic liquids have been investigated in order to discover whether Koch acids can be synthesized under milder conditions. Side reactions Koch reactions can involve a large number of side products, although high yields are generally possible (Koch and Haaf reported yields of over 80% for several alcohols in their 1958 paper). 
Carbocation rearrangements, etherization (in case an alcohol is used as a substrate instead of an alkene), and occasionally carboxylic acids with one more carbon than the substrate (Cn+1 acids) are observed, due to fragmentation and dimerization of carbon monoxide-derived carbenium ions, especially since each step of the reaction is reversible. Alkyl sulfuric acids are also known to be possible side products, but are usually eliminated by the excess sulfuric acid used. Further reading See also Hydroformylation - related reaction of alkenes and CO to form aldehydes Gattermann–Koch reaction, in which arenes are converted to benzaldehyde derivatives in the presence of CO, AlCl3, and HCl. References Addition reactions Name reactions
Koch reaction
Chemistry
896
12,471,936
https://en.wikipedia.org/wiki/Hadamard%20manifold
In mathematics, a Hadamard manifold, named after Jacques Hadamard — more often called a Cartan–Hadamard manifold, after Élie Cartan — is a Riemannian manifold that is complete and simply connected and has everywhere non-positive sectional curvature. By the Cartan–Hadamard theorem, all Cartan–Hadamard manifolds of dimension n are diffeomorphic to the Euclidean space ℝ^n. Furthermore, it follows from the Hopf–Rinow theorem that every pair of points in a Cartan–Hadamard manifold may be connected by a unique geodesic segment. Thus Cartan–Hadamard manifolds are some of the closest relatives of ℝ^n. Examples The Euclidean space ℝ^n with its usual metric is a Cartan–Hadamard manifold with constant sectional curvature equal to 0. Standard n-dimensional hyperbolic space H^n is a Cartan–Hadamard manifold with constant sectional curvature equal to −1. Properties In Cartan-Hadamard manifolds, the exponential map exp_p : T_pM → M is a diffeomorphism for all points p. See also References Riemannian manifolds
Hadamard manifold
Mathematics
208
10,405,712
https://en.wikipedia.org/wiki/QuickLOAD
QuickLOAD is an internal ballistics predictor computer program for firearms. For computations apart from other parameters, the cartridge the projectile (bullet) the gun barrel length the cartridge overall length the propellant type and quantity must be entered for calculating an estimated maximum chamber gas piezo pressure, muzzle velocity, muzzle pressure and other relevant data. QuickLOAD database QuickLOAD has a default database of predefined bullets, cartridges and propellants. The database of the more recent versions of QuickLOAD also include dimensional technical drawings of the predefined cartridges and for most cartridges photographic images. Data can later be imported or entered by the user to expand the programs database. The default database contains more than 2,500 projectiles, over 1,200 cartridges, over 225 powders and dimensional drawings and photos of many cartridges. The default database however contains some errors, so measuring sizes, weights and case capacities of components intended for use and if appropriate correcting default provided data is wise to avoid surprises and make the predictions more accurate. Some default data is incomplete, since it was not released by the manufacturer or when components that are neither officially registered with nor sanctioned by C.I.P. (Commission Internationale Permanente Pour L'Epreuve Des Armes A Feu Portative) or its American equivalent, SAAMI (Sporting Arms and Ammunition Manufacturers’ Institute) come into play. Such wildcat cartridges have no official dimensions nor other performance related specifications. Cartridge case volume establishment Besides the standard entered information, the actual internal volume or cartridge case capacity of the used cases is an important parameter for QuickLOAD to obtain usable predictions. The internal case volume has to be established by weighing empty once-fired cartridge cases from a production lot, then filling the cases with fresh or distilled water up to the point of overflowing and weighing the water-filled cases. The added weight of the water is then used to establish the liquid volume and hence the case capacity. This liquid volume measurement method can be practically employed to about a 0.01 to 0.02 ml or 0.15 to 0.30 grains of water precision level for firearms cartridge cases. A case capacity establishment should be done by measuring several fired cases from a particular production lot and calculating their average case capacity. This also provides insight into the uniformity of the sampled lot. A water case capacity test measurement of 4 fired .35 Whelen Remington cases resulted in: The case capacity of different cartridge brands of a particular chambering can significantly vary between cartridge case manufacturers and even production lots. The default database of QuickLOAD for example contains 5 different .300 Winchester Magnum case capacities for 5 different cartridge case manufacturers. Practical use and limitations QuickLOAD mainly helps reloaders understand how changing variables can affect barrel harmonics, pressures and muzzle velocities. It can predict the effect of changes in ambient temperature, bullet seating depth, and barrel length. However, QuickLOAD has limitations, as it is merely a computer simulation. It does not account for different brands of primers for example, and its ability to predict the effect of seating bullets into the rifling is crude. 
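The case-capacity averaging procedure described above reduces to simple arithmetic. The following is a minimal illustrative sketch in Python (it is not part of QuickLOAD); the case weights are made-up example values, and the grain-per-gram and water-density constants are ordinary reference figures.

```python
# Illustrative sketch only (not part of QuickLOAD): averaging water-derived
# case capacities. All case weights below are hypothetical example values.
GRAINS_PER_GRAM = 15.4324        # 1 g = 15.4324 gr
WATER_DENSITY_G_PER_ML = 0.998   # approximate density at room temperature

def case_capacity_ml(dry_weight_gr, wet_weight_gr):
    """Capacity of one fired case from its dry and water-filled weights (grains)."""
    water_grams = (wet_weight_gr - dry_weight_gr) / GRAINS_PER_GRAM
    return water_grams / WATER_DENSITY_G_PER_ML

# Hypothetical dry/wet weights for four fired cases from one production lot.
measurements = [(190.1, 262.3), (189.8, 262.6), (190.5, 262.9), (190.0, 262.2)]

capacities = [case_capacity_ml(dry, wet) for dry, wet in measurements]
average = sum(capacities) / len(capacities)
spread = max(capacities) - min(capacities)
print(f"average capacity: {average:.2f} ml, lot spread: {spread:.2f} ml")
```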
A QuickLOAD user most certainly should not just "plug in" a cartridge, bullet and powder and use that load, assuming it is safe. It is good practice to double- or triple-check QuickLOAD's output against reliable load data supplied by the powder producing companies. Of course the best way to check firearms cartridge loads are actual proof test measurements at certified test facilities. QuickTARGET external ballistics predictor computer program The QuickLOAD interior ballistics predictor program also contains the external ballistics predictor computer program QuickTARGET. QuickTARGET is based on the Siacci/Mayevski G1 model and gives the user the possibility to enter several different BC G1 constants for different speed regimes to calculate ballistic predictions that more closely match a bullet's flight behaviour at longer ranges in comparison to calculations that use only one BC constant. In 2008 QuickTARGET Unlimited was introduced as an additional part of the QuickLOAD/QuickTARGET 3.4 version software suite. QuickTARGET Unlimited is an enhanced beta version of QuickTARGET that can take several long range factors into account to make the external ballistic predictions more accurate. For this it can use several drag models; G1, G5, G7, etc. and a custom drag function that uses drag coefficient (Cd) data. In January 2009 the Finnish ammunition manufacturer Lapua published Doppler radar tests derived drag coefficient data for most of their rifle projectiles. The predictive capabilities of the custom mode are based on actual bullet flight data derived from Doppler radar test sessions. With this data engineers can create algorithms that utilize both known mathematical ballistic models as well as test specific, tabular data in unison. Besides the data for Lapua bullets QuickTARGET Ultimate also contains Cd data for some other projectiles that are often used for extended range shooting. Computer requirements QuickLOAD/QuickTARGET 3.6 version and up is compatible only with the Microsoft Windows 7 to Windows 11 operating system. The software suite can be used with metric units and imperial units/United States customary units and was created and is maintained by mechanical engineer Mr. Hartmut G. Brömel in Babenhausen, Germany. QuickLOAD is distributed in the United States, Canada, Mexico, South Africa, Australia and New Zealand a by NECO (Nostalgia Enterprises Company) and Europe except Germany, Czech Republic, Denmark, Finland and Ukraine by JMS Arms. References External links 6mmBR.com QuickLOAD Review & User's Guide RealGuns.com QuickLOAD Review Desktop Data by Craig Boddington, Guns & Ammo Magazine Handloading Ballistics
QuickLOAD
Physics
1,160
26,136,135
https://en.wikipedia.org/wiki/HIP%2079431%20b
HIP 79431 b is an extrasolar planet discovered at the W. M. Keck Observatory in 2010. The planet orbits an M-type dwarf star catalogued as HIP 79431, and is located within the Scorpius constellation approximately 47 light years away from the Earth. Its orbital period is about 111.7 days and its orbit has an eccentricity of 0.29. The planet is the 6th giant planet to be detected in the Doppler surveys of M dwarfs and is considered to be one of the most massive planets found around M dwarf stars. The planet HIP 79431 b is named Barajeel. The name was selected in the NameExoWorlds campaign by the United Arab Emirates, during the 100th anniversary of the IAU. A barajeel is a wind tower used to direct the flow of the wind so that air can be recirculated as a form of air conditioning. Observations HIP 79431 b orbits a star whose metallicity had been challenging to assess due to the uncertainties in the molecular line data, and it is not typical for M dwarfs to show strong emission in the core data lines. This led to the inclusion of the star HIP 79431 in the Keck program in April 2009 as part of the exoplanet studies for low mass stars. During this study, 13 Doppler measurements of the star were made over a 6-month period using the High Resolution Echelle Spectrometer (HIRES). The exposure time used in its observations was 600 seconds, which yielded a signal-to-noise ratio of just under 100. Each program observation required the use of iodine absorption lines in order to model the wavelength scale as well as the instrumental profile of the telescope and spectrometer optics. The radial velocities derived from the Doppler measurements fit a Keplerian model, showing an unambiguous signal with orbital parameters best explained by the gravitational pull of a planet; this revealed the presence of the planet HIP 79431 b. No evidence of any additional planets was found. Low transit possibility The planet HIP 79431 b has a low transit probability, mainly due to the size of its semi-major axis and its orbital orientation. However, its eccentric orbit brings the planet closer to its star at periastron, increasing the probability of a transit, which was estimated at a meager 0.5%. According to the Keck program, if a transit were to occur, the depth would be remarkable, mainly due to the calculated mass of the planet. See also Gliese 179 b References Exoplanets discovered in 2009 Scorpius Giant planets Exoplanets with proper names
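The transit-probability estimate quoted above follows from simple orbital geometry. The sketch below (Python) is a rough illustration only: the stellar radius and semi-major axis are assumed round values for an M-dwarf host and a roughly 112-day orbit, not figures from the discovery paper; only the eccentricity comes from the article.

```python
# Rough illustration (not from the discovery paper) of the geometric transit
# probability P ~ (R_star / a) / (1 - e^2) for an eccentric orbit. The stellar
# radius and semi-major axis below are assumed round values; only the
# eccentricity is taken from the article.
R_SUN_AU = 0.00465047      # solar radius expressed in astronomical units

r_star = 0.5 * R_SUN_AU    # assumed ~0.5 solar-radius M dwarf
a = 0.36                   # assumed semi-major axis in AU
e = 0.29                   # eccentricity quoted in the article

p_circular = r_star / a
p_eccentric = p_circular / (1.0 - e ** 2)

print(f"circular-orbit estimate : {100 * p_circular:.2f}%")
print(f"eccentricity-boosted    : {100 * p_eccentric:.2f}%")
# Both come out at a fraction of a percent, the same order of magnitude as
# the 0.5% quoted above.
```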
HIP 79431 b
Astronomy
551
2,213,768
https://en.wikipedia.org/wiki/Property%20of%20Baire
A subset A of a topological space X has the property of Baire (Baire property, named after René-Louis Baire), or is called an almost open set, if it differs from an open set by a meager set; that is, if there is an open set U such that A △ U is meager (where △ denotes the symmetric difference). Definitions A subset A of a topological space X is called almost open and is said to have the property of Baire or the Baire property if there is an open set U ⊆ X such that A △ U is a meager subset, where △ denotes the symmetric difference. Further, A has the Baire property in the restricted sense if for every subset E of X the intersection A ∩ E has the Baire property relative to E. Properties The family of sets with the property of Baire forms a σ-algebra. That is, the complement of an almost open set is almost open, and any countable union or intersection of almost open sets is again almost open. Since every open set is almost open (the empty set is meager), it follows that every Borel set is almost open. If a subset of a Polish space has the property of Baire, then its corresponding Banach–Mazur game is determined. The converse does not hold; however, if every game in a given adequate pointclass Γ is determined, then every set in Γ has the property of Baire. Therefore, it follows from projective determinacy, which in turn follows from sufficient large cardinals, that every projective set (in a Polish space) has the property of Baire. It follows from the axiom of choice that there are sets of reals without the property of Baire. In particular, a Vitali set does not have the property of Baire. Already weaker versions of choice are sufficient: the Boolean prime ideal theorem implies that there is a nonprincipal ultrafilter on the set of natural numbers; each such ultrafilter induces, via binary representations of reals, a set of reals without the Baire property. See also References External links Springer Encyclopaedia of Mathematics article on Baire property Descriptive set theory Determinacy
Property of Baire
Mathematics
434
15,853,493
https://en.wikipedia.org/wiki/Sectional%20density
Sectional density (often abbreviated SD) is the ratio of an object's mass to its cross sectional area with respect to a given axis. It conveys how well an object's mass is distributed (by its shape) to overcome resistance along that axis. Sectional density is used in gun ballistics. In this context, it is the ratio of a projectile's weight (often in either kilograms, grams, pounds or grains) to its transverse section (often in either square centimeters, square millimeters or square inches), with respect to the axis of motion. It conveys how well an object's mass is distributed (by its shape) to overcome resistance along that axis. For illustration, a nail can penetrate a target medium with its pointed end first with less force than a coin of the same mass lying flat on the target medium. During World War II, bunker-busting Röchling shells were developed by German engineer August Coenders, based on the theory of increasing sectional density to improve penetration. Röchling shells were tested in 1942 and 1943 against the Belgian Fort d'Aubin-Neufchâteau and saw very limited use during World War II. Formula In a general physics context, sectional density is defined as SD = M / A, where SD is the sectional density, M is the mass of the projectile, and A is the cross-sectional area. The SI derived unit for sectional density is kilograms per square meter (kg/m2). The general formula with units then becomes: SDkg/m2 = mkg / Am2, where: SDkg/m2 is the sectional density in kilograms per square meters, mkg is the weight of the object in kilograms, and Am2 is the cross sectional area of the object in square meters. Units conversion table (Values in bold face are exact.) 1 g/mm2 equals exactly 1000 kg/m2. 1 kg/cm2 equals exactly 10000 kg/m2. With the pound and inch legally defined as 0.45359237 kg and 0.0254 m respectively, it follows that the (mass) pounds per square inch is approximately: 1 lb/in2 = 0.45359237 kg/(0.0254 m × 0.0254 m) ≈ 703.07 kg/m2. Use in ballistics The sectional density of a projectile can be employed in two areas of ballistics. Within external ballistics, when the sectional density of a projectile is divided by its coefficient of form (form factor in commercial small arms jargon), it yields the projectile's ballistic coefficient. Sectional density has the same (implied) units as the ballistic coefficient. Within terminal ballistics, the sectional density of a projectile is one of the determining factors for projectile penetration. The interaction between projectile (fragments) and target media is however a complex subject. A study regarding hunting bullets shows that besides sectional density several other parameters determine bullet penetration. If all other factors are equal, the projectile with the greatest amount of sectional density will penetrate the deepest. Metric units When working with ballistics using SI units, it is common to use either grams per square millimeter or kilograms per square centimeter. Their relationship to the base unit kilograms per square meter is shown in the conversion table above. 
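Since the worked examples in the following subsections simply evaluate SD = 4m / (πd²) for a circular cross-section in different unit systems, a short script can make the conversions concrete. This is an illustrative sketch only (Python is assumed; it is not part of the article); the example values match the small-arms bullet used below.

```python
import math

# Minimal sketch of the formula SD = m / A = 4*m / (pi * d**2) for a circular
# cross-section, evaluated in the unit systems discussed below.

def sd_kg_per_m2(mass_kg, diameter_m):
    return 4.0 * mass_kg / (math.pi * diameter_m ** 2)

def sd_g_per_mm2(mass_g, diameter_mm):
    return 4.0 * mass_g / (math.pi * diameter_mm ** 2)

def sd_lb_per_in2(mass_grains, diameter_in):
    return 4.0 * (mass_grains / 7000.0) / (math.pi * diameter_in ** 2)

# Example: a 10.4 g (160 gr) bullet of 6.70 mm (0.264 in) diameter.
print(sd_g_per_mm2(10.4, 6.70))       # ~0.295 g/mm^2
print(sd_lb_per_in2(160.0, 0.264))    # ~0.418 lb/in^2
print(sd_kg_per_m2(0.0104, 0.00670))  # ~295 kg/m^2
```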
Grams per square millimeter Using grams per square millimeter (g/mm2), the formula then becomes: SDg/mm2 = 4·mg / (π·dmm²), where: SDg/mm2 is the sectional density in grams per square millimeters, mg is the mass of the projectile in grams, and dmm is the diameter of the projectile in millimeters. For example, a small arms bullet with a mass of 10.4 g (160 gr) and having a diameter of 6.70 mm (0.264 in) has a sectional density of: 4 · 10.4 / (π · 6.7²) = 0.295 g/mm2. Kilograms per square centimeter Using kilograms per square centimeter (kg/cm2), the formula then becomes: SDkg/cm2 = 4·mkg / (π·dcm²), where: SDkg/cm2 is the sectional density in kilograms per square centimeter, mkg is the mass of the projectile in kilograms, and dcm is the diameter of the projectile in centimeters. For example, an M107 projectile with a mass of 43.2 kg and having a body diameter of 154.71 mm (15.471 cm) has a sectional density of: 4 · 43.2 / (π · 15.471²) = 0.230 kg/cm2. English units In older ballistics literature from English speaking countries, and still to this day, the most commonly used unit for sectional density of circular cross-sections is (mass) pounds per square inch (lbm/in2). The formula then becomes: SDlbm/in2 = 4·(mgr/7000) / (π·din²) = 4·mlb / (π·din²), where: SD is the sectional density in (mass) pounds per square inch, the mass of the projectile is mlb in pounds or mgr in grains, and din is the diameter of the projectile in inches. The sectional density defined this way is usually presented without units. In Europe the derivative unit g/cm2 is also used in literature regarding small arms projectiles to get a number in front of the decimal separator. As an example, a bullet with a mass of 160 grains (10.4 g) and a diameter of 0.264 in (6.7 mm) has a sectional density (SD) of: 4·(160 gr/7000) / (π · 0.264²) = 0.418 lbm/in2. As another example, the M107 projectile mentioned above with a mass of 95.24 lb (43.2 kg) and having a body diameter of 6.0909 in (154.71 mm) has a sectional density of: 4 · 95.24 / (π · 6.0909²) = 3.268 lbm/in2. See also Ballistic coefficient References Projectiles Aerodynamics Ballistics
Sectional density
Physics,Chemistry,Engineering
1,087
53,584,725
https://en.wikipedia.org/wiki/Post-tech
Post-tech (or post-technology, post-digital-technology) is a type of technology that is more concerned with being human than with technology itself. It advocates a design that is not merely focused on efficiency and on exploiting users by increasing their time spent with digital devices and technology itself, but on supporting the user's focus and intent, well-being, and independence (from technology). With this, interstitial spaces could also be created, similar to what Michel Foucault describes as Heterotopia (space). See also Human-centered computing (discipline) Human-computer interaction Attention economy References Human–computer interaction Ubiquitous computing Postmodernism Product design
Post-tech
Technology,Engineering
138
47,447,385
https://en.wikipedia.org/wiki/Cyan%20Engineering
Cyan Engineering was an American computer engineering company located in Grass Valley, California. It was founded by Steve Mayer and Larry Emmons. The company was purchased in 1973 by Atari, Inc. and developed the Atari Video Computer System console, which was released in 1977 and renamed the Atari 2600 in November 1982. It also carried out some robotics research and development work on behalf of Atari, including the Kermit mobile robot, originally conceived as a stand-alone product intended to bring a beer. The company also programmed the original "portrait style" animatronics for Chuck E. Cheese's Pizza Time Theatre pizza chain in 1977. Further reading References Defunct computer companies of the United States Defunct computer hardware companies Defunct robotics companies of the United States 1978 in robotics Atari
Cyan Engineering
Technology
155
44,514,254
https://en.wikipedia.org/wiki/Compact%20finite%20difference
The compact finite difference formulation, or Hermitian formulation, is a numerical method to compute finite difference approximations. Such approximations tend to be more accurate for their stencil size (i.e. their compactness) and, for hyperbolic problems, have favorable dispersive error and dissipative error properties when compared to explicit schemes. A disadvantage is that compact schemes are implicit and require the solution of a banded (typically tridiagonal) matrix system for the evaluation of interpolations or derivatives at all grid points. Due to their excellent stability properties, compact schemes are a popular choice for use in higher-order numerical solvers for the Navier-Stokes equations. Example The classical Padé scheme for the first derivative at a cell with index i reads: (1/4) f′_{i−1} + f′_i + (1/4) f′_{i+1} = (3/2) (f_{i+1} − f_{i−1}) / (2h), where h is the spacing between points and the subscript i denotes the grid index. The equation yields a fourth-order accurate solution for f′_i when supplemented with suitable boundary conditions (typically periodic). When compared to the 4th-order accurate central explicit method, f′_i = (−f_{i+2} + 8 f_{i+1} − 8 f_{i−1} + f_{i−2}) / (12h), the former (implicit) method is compact as it only uses values on a 3-point stencil instead of 5. Derivation of compact schemes Compact schemes are derived using a Taylor series expansion. Say we wish to construct a compact scheme with a three-point stencil (as in the example): From a symmetry argument we deduce , and , resulting in a two-parameter system, We write the expansions around up to a reasonable number of terms and using notation , Each column on the right-hand side gives an equation for the coefficients , We now have two equations for two unknowns and therefore stop checking for higher-order-term equations. which is indeed the scheme from the example. Evaluation of a compact scheme List of compact schemes First derivative 4th order central scheme: 6th order central scheme: References Multivariable calculus
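To make the implicit nature of the scheme concrete, the following is a minimal sketch (an assumed Python/NumPy implementation, not taken from the article) of the fourth-order Padé first-derivative scheme quoted above on a periodic grid; a dense solve stands in for the cyclic tridiagonal solver a production code would use.

```python
import numpy as np

# Minimal sketch of the classical fourth-order Pade scheme
#   (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = (3/2) (f_{i+1} - f_{i-1}) / (2h)
# on a periodic grid. The implicit left-hand side couples all unknowns, so a
# (cyclic) tridiagonal system must be solved; a dense solve keeps the demo short.

def pade_first_derivative(f, h):
    n = len(f)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        im, ip = (i - 1) % n, (i + 1) % n   # periodic neighbours
        A[i, im] = 0.25
        A[i, i] = 1.0
        A[i, ip] = 0.25
        b[i] = 1.5 * (f[ip] - f[im]) / (2.0 * h)
    return np.linalg.solve(A, b)

# Usage: differentiate sin(x) on [0, 2*pi) and compare with cos(x).
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
err = np.max(np.abs(pade_first_derivative(np.sin(x), h) - np.cos(x)))
print(f"max error: {err:.2e}")
```

Doubling n should reduce the reported error by roughly a factor of 16, consistent with fourth-order accuracy.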
Compact finite difference
Mathematics
364
11,680,645
https://en.wikipedia.org/wiki/Interval%20order
In mathematics, especially order theory, the interval order for a collection of intervals on the real line is the partial order corresponding to their left-to-right precedence relation—one interval, I1, being considered less than another, I2, if I1 is completely to the left of I2. More formally, a countable poset P = (X, ≤) is an interval order if and only if there exists a bijection from X to a set of real intervals, x_i ↦ (ℓ_i, r_i), such that for any x_i, x_j ∈ X we have x_i < x_j in P exactly when r_i < ℓ_j. Such posets may be equivalently characterized as those with no induced subposet isomorphic to the pair of two-element chains, in other words as the (2 + 2)-free posets. Fully written out, this means that for any two pairs of elements a > b and c > d, one must have a > d or c > b. The subclass of interval orders obtained by restricting the intervals to those of unit length, so they all have the form (ℓ_i, ℓ_i + 1), is precisely the semiorders. The complement of the comparability graph of an interval order (X, ≤) is the interval graph of the corresponding set of intervals. Interval orders should not be confused with the interval-containment orders, which are the inclusion orders on intervals on the real line (equivalently, the orders of dimension ≤ 2). Interval orders' practical applications include modelling evolution of species and archeological histories of pottery styles. Interval orders and dimension An important parameter of partial orders is order dimension: the dimension of a partial order P is the least number of linear orders whose intersection is P. For interval orders, dimension can be arbitrarily large. And while the problem of determining the dimension of general partial orders is known to be NP-hard, determining the dimension of an interval order remains a problem of unknown computational complexity. A related parameter is interval dimension, which is defined analogously, but in terms of interval orders instead of linear orders. Thus, the interval dimension of a partially ordered set P = (X, ≤) is the least integer k for which there exist interval orders ≼_1, …, ≼_k on X with x ≤ y exactly when x ≼_1 y, …, and x ≼_k y. The interval dimension of an order is never greater than its order dimension. Combinatorics In addition to being isomorphic to (2 + 2)-free posets, unlabeled interval orders on n elements are also in bijection with a subset of fixed-point-free involutions on ordered sets with cardinality 2n. These are the involutions with no so-called left- or right-neighbor nestings, conditions defined in terms of adjacent positions i and i + 1 of the involution. Such involutions, according to semi-length, have ordinary generating function F(t) = Σ_{n ≥ 0} Π_{i=1}^{n} (1 − (1 − t)^i). The coefficient of t^n in the expansion of F(t) gives the number of unlabeled interval orders of size n. The sequence of these numbers begins 1, 2, 5, 15, 53, 217, 1014, 5335, 31240, 201608, 1422074, 10886503, 89903100, 796713190, 7541889195, 75955177642, … Notes References Further reading Order theory Combinatorics
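The generating function quoted above can be expanded mechanically to recover the counting sequence. The sketch below (plain Python, truncated power-series arithmetic; an assumed illustration rather than part of the article) reads off the coefficients of t^n.

```python
# Expand F(t) = sum_{n>=0} prod_{i=1..n} (1 - (1 - t)^i) as a truncated power
# series; the coefficient of t^n counts unlabeled interval orders on n elements.
N = 10  # truncation order

def mul(p, q):
    r = [0] * N
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if i + j < N:
                    r[i + j] += a * b
    return r

def power(p, e):
    r = [1] + [0] * (N - 1)
    for _ in range(e):
        r = mul(r, p)
    return r

one_minus_t = [1, -1] + [0] * (N - 2)

series = [0] * N
term = [1] + [0] * (N - 1)            # empty product, n = 0
for n in range(0, N):
    if n > 0:
        factor = [a - b for a, b in zip([1] + [0] * (N - 1), power(one_minus_t, n))]
        term = mul(term, factor)
    series = [a + b for a, b in zip(series, term)]

print(series)  # [1, 1, 2, 5, 15, 53, 217, 1014, 5335, 31240]; the leading 1 is the empty poset
```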
Interval order
Mathematics
623
34,976,899
https://en.wikipedia.org/wiki/Group%20contraction
In theoretical physics, Eugene Wigner and Erdal İnönü have discussed the possibility to obtain from a given Lie group a different (non-isomorphic) Lie group by a group contraction with respect to a continuous subgroup of it. That amounts to a limiting operation on a parameter of the Lie algebra, altering the structure constants of this Lie algebra in a nontrivial singular manner, under suitable circumstances. For example, the Lie algebra of the 3D rotation group, [J_1, J_2] = iJ_3, [J_2, J_3] = iJ_1, etc., may be rewritten by a change of variables T_1 = εJ_1, T_2 = εJ_2, X_3 = J_3, as [T_1, T_2] = iε²X_3, [X_3, T_1] = iT_2, [T_2, X_3] = iT_1. The contraction limit ε → 0 trivializes the first commutator and thus yields the non-isomorphic algebra of the plane Euclidean group, ISO(2). (This is isomorphic to the cylindrical group, describing motions of a point on the surface of a cylinder. It is the little group, or stabilizer subgroup, of null four-vectors in Minkowski space.) Specifically, the translation generators T_1, T_2 now generate the Abelian normal subgroup of ISO(2) (cf. Group extension), the parabolic Lorentz transformations. Similar limits, of considerable application in physics (cf. correspondence principles), contract the de Sitter group SO(4,1) to the Poincaré group ISO(3,1), as the de Sitter radius diverges: R → ∞; or the super-anti-de Sitter algebra to the super-Poincaré algebra as the AdS radius diverges: R → ∞; or the Poincaré group to the Galilei group, as the speed of light diverges: c → ∞; or the Moyal bracket Lie algebra (equivalent to quantum commutators) to the Poisson bracket Lie algebra, in the classical limit as the Planck constant vanishes: ħ → 0. Notes References Lie algebras Lie groups Mathematical physics Turkish inventions
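The rotation-group contraction sketched above can be checked symbolically. The following sketch (an assumed Python/SymPy illustration, not part of the article) uses the real antisymmetric 3×3 generators of so(3), so no factors of i appear, and verifies the rescaled brackets before taking ε → 0.

```python
import sympy as sp

# Real antisymmetric generators of so(3): [L1, L2] = L3, [L2, L3] = L1, [L3, L1] = L2.
eps = sp.symbols('varepsilon', positive=True)

L1 = sp.Matrix([[0, 0, 0], [0, 0, -1], [0, 1, 0]])
L2 = sp.Matrix([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
L3 = sp.Matrix([[0, -1, 0], [1, 0, 0], [0, 0, 0]])

def comm(a, b):
    return a * b - b * a

# Rescaled generators that become the translations of the plane Euclidean group.
T1, T2 = eps * L1, eps * L2

print(comm(T1, T2) - eps**2 * L3)  # zero matrix: [T1, T2] = eps^2 * L3
print(comm(L3, T1) - T2)           # zero matrix: [L3, T1] = T2
print(comm(T2, L3) - T1)           # zero matrix: [T2, L3] = T1
# As eps -> 0 the first bracket vanishes while the other two survive, so the
# algebra spanned by L3, T1, T2 closes on iso(2), the Lie algebra of E(2).
```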
Group contraction
Physics,Mathematics
343
49,741,712
https://en.wikipedia.org/wiki/Douglas%20Keszler
Douglas A. Keszler is a distinguished professor in the Department of Chemistry at Oregon State University, adjunct professor in the Physics Department at OSU and adjunct professor in the Department of Chemistry at the University of Oregon. He is also the director of the Center for Sustainable Materials Chemistry, and a member of the Oregon Nanoscience and Microtechnologies Institute (ONAMI) leadership team. Career Keszler received his BS at Southwestern Oklahoma State University in 1979. He worked on his PhD at Northwestern University under the supervision of Prof. James A. Ibers and received his degree in 1984. He continued his career as a postdoctoral fellow at Cornell University under the supervision of Prof. Roald Hoffmann in 1984–1985. Keszler joined the faculty of Oregon State University in 1985 as an assistant professor. He became an associate professor in 1990, professor in 1995 and distinguished professor in 2006. Research Early Research Some of Keszler's early work shows the importance of his research in materials science for applications. For example, in 2002, he worked on thin-film electroluminescent devices, which display high-definition monochrome color outputs, and on developing them to display a full range of color. They specifically focused on the phosphor Zn1-3x/2GaxS:Mn and on strontium sulfide codoped with copper and potassium powders, which were observed to have emission properties identical to those of thin films. Essentially, by codoping, the band gap of a material can be tuned so that the color of the light can be adjusted. The light itself is emitted when excited electrons in the conduction band fall back down to the valence band. By manipulating the properties of crystal and defect chemistry, any color can be portrayed for display. Keszler has also developed a convenient method for solid synthesis. In 2001, he demonstrated a hydrothermal dehydration technique for precipitates which avoids the formation of amorphous products that are created through the conventional drying process of heating. Through this method, he showed the formation of Zn2SiO4 and SnSiO3. This technique has allowed for the development of materials such as powders, thin films, and luminescent materials. In 2000, Douglas Keszler and his colleagues worked with non-linear optical materials such as Ca4GdO(BO3)3 (GdCOB). They measured the Raman spectra of Ca4GdO(BO3)3 (GdCOB), which was grown using the Czochralski method. This experiment was done to ultimately understand the spectroscopic features of Yb3+ and Nd3+ by analyzing vibrations of two different types of (BO3)3− groups. Recent Research Keszler's research group focuses on the synthesis and study of inorganic molecules and materials related to next-generation electronic and energy devices. Their discovery and development of water-based chemistries for high-quality films has produced leading results in the field of ultra-small-scale dense nanopatterning and tunneling electronic devices. One of their recent publications, in 2014, focuses on amorphous oxide semiconductor (AOS) thin-film transistors (TFTs), which are widely used in active-matrix organic light-emitting diode (AMOLED) applications, as well as active-matrix liquid crystal display (AMLCD) backplane applications. AMOLED is a display technology used in smartwatches, mobile devices, laptops, and televisions. OLED describes a specific type of thin-film display technology. 
TFT is a special kind of field-effect transistor made by depositing thin films of an active semiconductor layer as well as the dielectric layer and metallic contacts over a supporting but non-conducting substrate. Keszler's group discovered that indium gallium zinc oxide (IGZO) is a material of choice for the replacement of the hydrogenated silicon (a-Si:H) that is currently used in switching TFTs. Compared to hydrogenated silicon, IGZO has considerable advantages in the cost of synthesis. Another aspect of Keszler's research demonstrates the synthesis of functional inorganic materials such as high-quality inorganic films and ordered nanostructures with single-digit nanometer resolution in solution. In 2013, they developed a successful aqueous-based synthesis of ultrathin films of TiO2 and of aqueous-derived Al4O3(PO4)2 (AlPO) films and were able to assemble these materials into nanolaminates. The Keszler group's successful synthesis of the flat cluster [Al13(μ3-OH)6(μ2-OH)18(H2O)24]15+, using an electrochemical method and treating an aqueous aluminum nitrate solution with zinc metal powder at room temperature, demonstrates the importance of his work to the field of water-based material synthesis. From that, they focused more on developing aqueous-based syntheses of several other compounds such as [Sc2(μ-OH)2(H2O)6(NO3)2](NO3)2] from an aqueous scandium nitrate solution. Awards Douglas Keszler has received a number of awards and honors, including the following: Exxon Solid-state Chemistry Award (1988) Alfred P. Sloan Research Fellow (1990) T. T. Sugihara Young Faculty Research Award (1994) F.A. Gilfillan Award (2001) OSU Researcher of the Year Award (2003) SWOSU Alumni Fellow (2005) ACS Award in the Chemistry of Materials (2017) References External links Center for Sustainable Materials Chemistry Keszler Research Group Oregon State University, Department of Chemistry 21st-century American chemists Oregon State University faculty Northwestern University alumni Living people Solid state chemists Southwestern Oklahoma State University alumni Year of birth missing (living people)
Douglas Keszler
Chemistry
1,206
72,539,112
https://en.wikipedia.org/wiki/S/2011%20J%203
S/2011 J 3 is a small outer natural satellite of Jupiter discovered by Scott S. Sheppard on 27 September 2011, using the 6.5-meter Magellan-Baade Telescope at Las Campanas Observatory, Chile. It was announced by the Minor Planet Center 11 years later on 20 December 2022, after observations were collected over a long enough time span to confirm the satellite's orbit. S/2011 J 3 is part of the Himalia group, a tight cluster of prograde irregular moons of Jupiter that follow similar orbits to Himalia at semi-major axes between and inclinations between 26–31°. With an estimated diameter of for an absolute magnitude of 16.3, it is among the smallest known members of the Himalia group. References Himalia group Moons of Jupiter Irregular satellites 20110927 Discoveries by Scott S. Sheppard Moons with a prograde orbit
S/2011 J 3
Astronomy
180