Columns: id (int64, 580 to 79M) · url (stringlengths, 31 to 175) · text (stringlengths, 9 to 245k) · source (stringlengths, 1 to 109) · categories (stringclasses, 160 values) · token_count (int64, 3 to 51.8k)
10,159,361
https://en.wikipedia.org/wiki/Indium%28I%29%20bromide
Indium(I) bromide is a chemical compound of indium and bromine. It is a red crystalline compound that is isostructural with β-TlI and has a distorted rock-salt structure. Indium(I) bromide is generally made by heating indium metal with InBr3. It has been used in the sulfur lamp. In organic chemistry, it has been found to promote the coupling of α,α-dichloroketones to 1-aryl-butane-1,4-diones. Oxidative addition reactions are known, for example with alkyl halides to give alkyl indium halides, and with Ni–Br complexes to give Ni–In bonds. It is unstable in water, decomposing into indium metal and indium tribromide. When indium dibromide is dissolved in water, InBr is produced as a presumably insoluble red precipitate that then rapidly decomposes. See also Indium halides References WebElements Indium(I) compounds Bromides Metal halides
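The preparation and aqueous decomposition described above amount to a comproportionation and its reverse disproportionation; the balanced equations below are inferred from the oxidation states rather than quoted from the article:

2 In + InBr3 → 3 InBr   (preparation: heating indium metal with indium tribromide)
3 InBr → 2 In + InBr3   (decomposition in water)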
Indium(I) bromide
Chemistry
230
2,839,486
https://en.wikipedia.org/wiki/Xiaohan
The traditional Chinese calendar divides a year into 24 solar terms. Xiǎohán, Shōkan, Sohan, or Tiểu hàn (小寒, "minor cold") is the 23rd solar term. It begins when the Sun reaches the celestial longitude of 285° and ends when it reaches the longitude of 300°. More often, it refers in particular to the day when the Sun is exactly at the celestial longitude of 285°. In the Gregorian calendar, it usually begins around 5 January and ends around 20 January. Date and time References 23 Winter time
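A minimal sketch of how the start of Xiaohan can be located numerically: scan early January for the moment the Sun's geocentric ecliptic longitude first reaches 285°. The use of astropy, the year 2024, and the hourly grid are my assumptions for illustration; the article prescribes no particular method.

import numpy as np
from astropy import units as u
from astropy.time import Time
from astropy.coordinates import get_sun, GeocentricTrueEcliptic

# Hourly grid over the first ten days of January (illustrative year).
times = Time("2024-01-01") + np.arange(10 * 24) * u.hour

# Geocentric ecliptic longitude of the Sun, referred to the equinox of date.
lons = np.array([
    get_sun(t).transform_to(GeocentricTrueEcliptic(equinox=t)).lon.deg
    for t in times
])

# First hour at which the longitude has reached 285 degrees.
idx = int(np.argmax(lons >= 285.0))
print(times[idx].iso)  # expected around 5-6 January, matching the text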
Xiaohan
Physics
104
3,138,849
https://en.wikipedia.org/wiki/Cleveland%3A%20Now%21
Cleveland: Now! was a public and private funding program for the rehabilitation of neighborhoods in Cleveland, Ohio, initiated by Mayor Carl B. Stokes on May 1, 1968. Local businesses agreed to cooperate with the Stokes administration on the program "to combat the ills of Cleveland's inner city in order to preserve racial peace." The aim of Cleveland: Now! was to "raise $1.5 billion over 10 years with $177 million projected during the first 2 years to fund youth activities and employment, community centers, health-clinic facilities, housing units, and economic renewal projects." The program's funding aims were quickly met within the first few months of its initiation. However, on July 23, 1968, the Glenville Shootout occurred. Subsequent revelations found that Fred "Ahmed" Evans, one of the major instigators in the incident, had indirectly received some $6,000 in funds from the program. Donations declined. Stokes nevertheless weathered the controversy and was reelected to a second term in 1969. The program continued to operate actively until 1970, when Stokes announced that its "last major commitment would be the funding of 4 new community centers." The organization was not formally dissolved until George V. Voinovich assumed office in 1980. The city donated the remaining $220,000 to the Cleveland Foundation to "use for youth employment and low-income housing." References Further reading External links Carl & Louis Stokes Making History, Western Reserve Historical Society History of Cleveland Urban planning
Cleveland: Now!
Engineering
299
70,884,169
https://en.wikipedia.org/wiki/StatMuse
StatMuse Inc. is an American artificial intelligence company founded in 2014. The company maintains its own eponymous website where it hosts a database of sports statistics. History Friends Adam Elmore and Eli Dawson founded the company in 2014. In email correspondence with the Springfield News-Leader, Elmore explained that he and Dawson, fans of the National Basketball Association (NBA), were compelled to create StatMuse after they realized there was not a place online where they could search "lebron james most points" [sic] and quickly get a result "showing his highest scoring games." As a startup, the company's goal was to utilize a type of artificial intelligence called natural language processing (NLP) for sports. In 2015, the company was part of the second group of startups accepted into the Disney Accelerator program. The company ultimately received the backing of The Walt Disney Company, Techstars, Allen & Company, the NFL Players Association, Greycroft and NBA Commissioner David Stern. As part of their partnership with Disney, StatMuse signed a content deal with ESPN (owned by Disney) to provide stats content on social media and television during the 2015–16 NBA season. Initially, the company only had stats available for the NBA, but it eventually expanded to provide stats for the other major North American sports leagues. The company's initial demographic was players of fantasy sports, but it eventually expanded to target general sports fans as well. StatMuse offers responses to user queries in the voices of sports-related public figures. Dawson told VentureBeat that StatMuse brings people in and records them saying different words and phrases. These celebrity voices were made accessible through Google's Google Assistant service, Microsoft's Cortana virtual assistant, and Amazon's Echo devices. The company launched its phone app in September 2017. Through the app, users can query StatMuse's sports statistics database using their own natural language. Upon the launch of the phone app, Fitz Tepper of TechCrunch wrote: "The technology isn't perfect – some of the pauses between words are a bit awkward – making it clear that some phrases are being stitched together on the fly. But this is the exception, and on the whole most responses sound pretty good." StatMuse plug-ins for Slack and Facebook Messenger were also released, providing text-based sports stats. In 2019, StatMuse received investment from the Google Assistant Investment program. The service launched a premium option dubbed StatMuse+ in May 2023, offering options that had previously been included for free, such as unlimited searches and full results in data tables. The premium version also included early access to new features, a personalized search history, and the removal of ads. It was met with mixed feedback. In January 2024, the service launched a Premier League version of the website dubbed StatMuse FC. More leagues are planned to be introduced on the website. References 2014 establishments in California American companies established in 2014 American sport websites Analytics companies Chatbots Companies based in San Francisco Data companies Internet properties established in 2014 Natural language processing Online companies of the United States Sports records and statistics
StatMuse
Technology
638
22,159,309
https://en.wikipedia.org/wiki/Rigid-band%20model
The Rigid-Band Model (or RBM) is one of the models used to describe the behavior of metal alloys. In some cases the model is even used for non-metal alloys such as Si alloys. According to the RBM, the shape of the constant energy surfaces (and hence of the Fermi surface) and the density of states curve of the alloy are the same as those of the solvent metal under the following conditions: The excess charge of the solute atoms localizes around them. The mean free path of the electrons is much greater than the lattice spacing of the alloy. The electron states of interest in the pure solvent are all in one energy band, which is greatly separated in energy from the other bands. The only effect of the addition of the solute, given that its valence is greater than that of the solvent, is the addition of electrons to the valence band. This results in a swelling of the Fermi surface and a filling of the density of states curve to a higher energy. Theory In a pure metal, because of the periodicity of the lattice, the features of its electronic structure are well known. The single-particle states can be described in terms of Bloch states, and the energy structure is characterized by Brillouin zone boundaries, energy gaps and energy bands. In reality, though, no metal is perfectly pure. When the amount of the foreign element is dilute, the added atoms may be treated as impurities. But when its concentration exceeds several atomic %, an alloy is formed and the interaction among the added atoms can no longer be neglected. Before giving a more mathematical outline of the RBM, it is convenient to give a visualization of what happens to a metal upon alloying it. In a pure metal, taking silver as an example, all lattice sites are occupied by silver atoms. When a different kind of atom is dissolved into it, for example 10% of a divalent metal such as cadmium, some random lattice sites become occupied by the solute atoms. Since silver has a valence of 1 and the solute has a valence of 2, the alloy will now have a valence of 1.1. Most lattice sites, however, are still occupied by silver atoms, and consequently the changes in electronic structure are minimal. Basic concepts behind the Rigid-Band model In a pure metal of valence Z1, all atoms become positive ions with the valence +Z1 by releasing the outermost Z1 electrons per atom to form the valence band. As a result, conduction electrons carrying negative charges are uniformly distributed over any atomic site with equal probability densities and maintain charge neutrality with the array of ions with positive charges. When an impurity atom of valence Z2 is introduced, the periodic potential is disturbed, conduction electrons are scattered, and a screened Coulomb potential of the form $U(r) = -\frac{(Z_2 - Z_1)e^2}{r}\, e^{-\lambda r}$ is formed, where $U(r)$ is the potential felt by the electrons at a distance $r$ from the impurity and $1/\lambda$ is the screening radius. The Fermi surface of the pure metal is constructed under the assumption that the wave vector k of the Bloch electron is a good quantum number. But alloying destroys the periodicity of the lattice potential and thus results in scattering of the Bloch electron. The wave vector k changes upon scattering of the Bloch electron and can no longer be taken as a good quantum number. In spite of such fundamental difficulties, experimental and theoretical works have provided ample evidence that the concept of the Fermi surface and Brillouin zone is still valid even in concentrated crystalline alloys. In an alloy of atoms A and B, an intermetallic compound super-lattice structure tends to be formed.
The chemical bonding between the unlike atoms leads to a very strong potential of the form $V(\mathbf{r}) = \sum_i v_X(\mathbf{r} - \mathbf{R}_i)$, where $v_X(\mathbf{r} - \mathbf{R}_i)$ is the potential at position $\mathbf{r}$ due to ion X, whose position is specified by $\mathbf{R}_i$. X here stands for either A or B, so that $v_A$ indicates the potential of ion A. The RBM assumes $v_A = v_B$, hence ignoring the difference in the potential of ions A and B. Thus, the electronic structure of the pure metal A is assumed to be the same as that of the pure metal B or of any composition in the alloy A–B. The Fermi level is then chosen so as to be consistent with the electron concentration of the alloy. It is convenient to divide the predictions of the rigid-band model into two categories, geometric and density of states. The geometric predictions are those that use only the geometric properties of the constant energy surfaces. The density-of-states predictions are related to those properties which depend on the density of states at the Fermi energy, such as the electronic specific heat. Geometric structure In a pure metal the eigenstates are the Bloch functions Ψk with energies ek. When the periodicity of the pure metal is destroyed by alloying, these Bloch states are no longer eigenstates and their energy becomes complex, $E_k = e_k + \Delta E_k + i\Gamma_k$. The imaginary part Γk shows that the Bloch state in the alloy is no longer an eigenstate but scatters into other states with a lifetime of the order of (2Γk)−1. However, if $\Gamma_k \ll \Delta$, where Δ is the width of the band, then the Bloch states are approximately eigenstates and they can be used to calculate the properties of the alloys. In this case we can ignore Γk. The change in the energy of a Bloch state with alloying is then $\Delta E_k = E_k - e_k$. When the perturbation is fairly localized about the solute site (which is one of the conditions of the RBM), ΔEk depends only on ek and not on k, and thus $\Delta E_k = \Delta E(e_k)$. Therefore, the plot of $e_k + \Delta E(e_k)$ versus k for the alloy will have the same shape of constant energy surfaces as the plot of $e_k$ versus k for the pure solvent. A given energy surface of the alloy will naturally correspond to a different energy value from that of the same shaped surface of the pure solvent, but the shapes will remain exactly the same. Density of states According to the Rigid-Band Model, $\Delta E$ is constant (for a given energy level) and the density of states of the alloy has the same shape as that of the pure solvent, displaced by $\Delta E$. When the concentration of the solute a is small, $\Delta E$ is also small, and the density of states of the alloy at constant a is $n_a(E) = n_0(E - \Delta E)$, where $n_0$ is the density of states of the pure solvent. In the case when $\Delta E$ is constant we get a rigid displacement of the whole curve, meaning that the shape of the density of states will be the same, only displaced by $\Delta E$. References Electronic band structures
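As a concrete illustration of the density-of-states prediction above, here is a minimal numerical sketch. The free-electron-like DOS, the 0.3 eV shift, and the electron counts are illustrative assumptions, not values from the article; the point is only that the alloy curve is the solvent curve rigidly shifted, with the Fermi level re-chosen to match the alloy's electron concentration.

# Rigid-band sketch: alloy DOS = solvent DOS rigidly shifted by dE; the
# Fermi level is chosen so the filled states match the electron count.
# The sqrt(E) DOS and all numbers are illustrative assumptions.
import numpy as np

E = np.linspace(0.0, 10.0, 2001)        # energy grid (eV)
n0 = np.sqrt(E)                         # solvent DOS, free-electron-like

dE = 0.3                                # rigid shift on alloying (assumed)
n_alloy = np.interp(E - dE, E, n0)      # n_a(E) = n0(E - dE)

def fermi_level(dos, energies, n_electrons):
    """Smallest energy below which the integrated DOS holds n_electrons."""
    filled = np.cumsum(dos) * (energies[1] - energies[0])
    return energies[np.searchsorted(filled, n_electrons)]

# A higher-valence solute raises e/a (say from 1.0 to 1.1), swelling the
# Fermi surface: the Fermi level moves up on the shifted curve.
print("solvent E_F:", fermi_level(n0, E, 1.0))
print("alloy   E_F:", fermi_level(n_alloy, E, 1.1))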
Rigid-band model
Physics,Chemistry,Materials_science
1,281
5,544,993
https://en.wikipedia.org/wiki/Florentine%20flask
A florentine flask, also known as a florentine receiver, florentine separator or essencier (from the French), with other shapes called a florentine vase or florentine vessel, is an oil–water separator fed with condensed vapours of a steam distillation in a fragrance extraction process. Description When the raw material is heated with steam from boiling water, volatile fragrant compounds and steam leave the still. The vapours are cooled in the condenser and become liquid. The liquid runs into the florentine receiver, where the water and essential oil phases separate. The essential oil phase separates from the water because the oils have a different density from water and are not water-soluble. There are two main types of florentines in use. One separates essential oils of lower density than water, for example lavender oil, which accumulates in a layer floating on the water. This kind of florentine has to be airtight to reduce the loss of volatile substances. The other type is intended for oils that are denser than water, where the oil accumulates beneath the water phase, for instance cinnamon, wintergreen, vetiver, patchouli or cloves. The floating water phase prevents the loss of volatile compounds from the oil. There are also florentines that are able to accommodate oils that are either denser or less dense than water. The separated water is a herbal distillate and can be fed back into the still or may in some cases be sold as herbal water. Because small droplets of oil are entrained with the water flowing out of the florentine, the yield can sometimes be increased by using more than one florentine in series. For laboratory use, a small glass florentine without a base is called a florentine vase, as it has a slight resemblance to a small amphora. Larger glass receivers with a base are called florentine flasks or essenciers. Glass is normally only used for vessels of up to 15 liters; above this size glass is too fragile, so metal is used for larger capacities. References Essential oils Flavor technology Liquid-liquid separation Distillation
Florentine flask
Chemistry
446
75,030,807
https://en.wikipedia.org/wiki/HD%20204904
HD 204904 (HR 8234; 59 G. Octantis) is a spectroscopic binary located in the southern circumpolar constellation Octans. It has an apparent magnitude of 6.17, placing it near the limit of naked-eye visibility even under ideal conditions. The object is relatively close, at a distance of 212 light-years based on Gaia DR3 parallax measurements, and it is drifting closer, as indicated by its heliocentric radial velocity. At its current distance, HD 204904's brightness is diminished by 0.19 magnitudes due to interstellar extinction, and it has an absolute magnitude of +2.13. HD 204904 has a stellar classification of either F6 IV or F4 IV, indicating that it is a slightly evolved F-type subgiant. It has 1.53 times the mass of the Sun and a slightly enlarged radius, 2.87 times that of the Sun. It radiates 12.1 times the luminosity of the Sun from its photosphere, whose effective temperature gives it the typical yellowish-white hue of an F-type star. HD 204904 is metal-deficient, with an iron abundance of [Fe/H] = −0.20, or 63.1% of the Sun's iron abundance. It is estimated to be 2.56 billion years old and spins modestly, as measured by its projected rotational velocity. In 2014, J. R. De Medeiros and colleagues detected radial velocity variations from the star, indicating that it is a spectroscopic binary. However, the system does not have a defined orbit. References F-type subgiants Spectroscopic binaries Octans Octantis, 59 CD-79 00856 204904 106881 8234 00354944747
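A minimal sketch of the distance-modulus arithmetic behind figures like these. The rounded inputs below are taken from the text and will not exactly reproduce the published +2.13, which presumably derives from the precise Gaia parallax rather than the rounded 212-light-year distance:

# Extinction-corrected absolute magnitude: M = m - 5*log10(d_pc/10) - A.
# Rounded, illustrative inputs; see the caveat in the lead-in above.
import math

m = 6.17               # apparent magnitude
d_pc = 212 / 3.26156   # 212 light-years in parsecs (~65 pc)
A = 0.19               # interstellar extinction, magnitudes

M = m - 5 * math.log10(d_pc / 10) - A
print(f"M ~ {M:+.2f}")  # ~ +1.9 with these rounded inputs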
HD 204904
Astronomy
376
18,611,367
https://en.wikipedia.org/wiki/Spiroxatrine
Spiroxatrine is a drug which acts as a selective antagonist at both the 5-HT1A receptor and the α2C adrenergic receptor. It is an analog of spiperone and also has some dopamine antagonist effects. References 5-HT1A antagonists Abandoned drugs Alpha-2 blockers Benzodioxans Dopamine antagonists Imidazolidinones Spiro compounds
Spiroxatrine
Chemistry
90
60,042,632
https://en.wikipedia.org/wiki/Cloacibacillus
Cloacibacillus is a Gram-negative, anaerobic genus of bacteria in the family Synergistaceae. Cloacibacillus bacteria are pathogenic. See also List of bacterial orders List of bacteria genera References Synergistota Bacteria genera
Cloacibacillus
Biology
57
3,720,589
https://en.wikipedia.org/wiki/Entertainment%20technology
Entertainment technology is the discipline of using manufactured or created components to enhance or make possible any sort of entertainment experience. Because entertainment categories are so broad, and because entertainment models the world in many ways, the types of implemented technology are derived from a variety of sources. Thus, in theatre, for example, entertainment technology practitioners must be able to design and construct scenery, install electrical systems, build clothing, use motors if there is scenery automation, and provide plumbing (if functioning kitchen fixtures are required, or if "singing in the rain"). In this way, the entertainment technology field intersects with most other types of technology. Entertainment technology helps people relax and enjoy their free time. The latest technology has revolutionized daily entertainment. Recording media such as records, tapes, and CDs have made music more accessible across the world. Movies are brought into living rooms through photography, film, and video. With the emergence of computer technology, ways of being entertained have expanded greatly. Many households now have computers, consoles, or other kinds of handheld gaming devices. The diversity and complexity of entertainment technology continue to bring joy and convenience to people's spare time. Traditionally, entertainment technology is derived from theatrical stagecraft, and stagecraft is an important subset of the discipline. However, the rise of new types and venues for entertainment, as well as rapidly advancing technological development, has increased the range and scope of its practice. In animation and game design, the phrase "entertainment technology" refers to entertainment experiences made possible by the advent of primarily computer-mediated digital technologies. History Entertainment technology dates back to at least antiquity, with the development of tools and automatons by Hero of Alexandria which were used to enhance and automate aspects of theatrical performances. Popular entertainment technology began with the invention of the phonograph by Thomas Edison, which was used to record and play back sound. This was followed by other media such as silent films, broadcast media, and different formats of pre-recorded music and other entertainment. This in turn affected society, as this technology became a large part of everyday life and gave people, governments, and organizations a way to communicate their ideas and creations to others. Since the 19th century, the production, regulation, and dissemination of entertainment technology have been at the core of controversies over the flow of information and cultural products. These technologies include video games, virtual worlds, online role-playing games and recreational social networking technologies. In addition, there are two fundamental emphases in the scholarly treatment of entertainment technologies. At the stage of audience consumption and participation, media considered as entertainment technologies can be discussed as means for acquiring information and cultivating attitudes, and as a "space" for interaction. At the "macro" level of production, representation can work to reinforce modes of belonging, identity, and attitudes.
In the 1980s, consumers first adopted digital entertainment in the form of audio CDs; at the beginning of the 1990s the DVD format came into people's lives, and at the same time direct-to-home satellite services had already started to provide customers with digital TV. The satellite TV boxes that many households had at that time may have been their earliest digital entertainment technology. In the United States, analog television broadcasting ended on June 12, 2009. Television broadcasts in most regions of the United States and Europe became digital, with high-definition video and digital sound. Switching to digital was a big challenge at the time. As the millennium approached, portable mobile devices became popular among consumers. The iPod, released by Apple in 2001, is a good example. An icon of the twenty-first century, it was a portable digital music player that started a revolution in mobile devices. The iPod was a very personal belonging, and over time such personal digital devices had the chance to replace personal computers, TVs, DVRs, and older mobile phones. Under some circumstances, consumers came to prefer small-screen portable digital entertainment. Types of entertainment technology Properties Costume Automation Animatronics Computer simulation and virtual reality Augmented reality and interactive environments High dynamic range Light field devices Future developments Video streaming is becoming a huge part of society and is only beginning to expand. Video streaming brought in revenue of $30.29 billion in 2016 and, based on projections by Research and Markets, was expected to reach $70.05 billion in 2021. Challenges for development in the media industry include how to maximize content, brands, and advertising. Consumers drive this field; companies constantly analyze data about consumer preferences, relationships, habits, and locations. According to Ian Falles, CGI (computer-generated imagery) technology has improved in recent years. For example, Peter Cushing, who played Grand Moff Tarkin, died in 1994, so for the 2016 Star Wars prequel Rogue One visual effects artists used motion-capture video of a stand-in reading his lines to reprise the role. Light on skin, hair, micro eye-darts, and blood flow under the skin are all elements that make faces look real, and all are involved in re-creating a likeness. Hologram technology appears much more lifelike with the adoption of Epson projectors with "military-grade lasers". In the future, the details of re-creation will receive more attention, and artificial intelligence will be applied to CGI technology. Computers will have algorithms embedded, and hours of footage will be recorded to generate human face movements. Gaming will continue to grow in the future. Arlington, Texas, was best known for its football stadium, "Jerry World". With the opening of Esports Stadium Arlington, officials hope the city can become a center of esports and produce $1.7 billion in revenue by 2021. The largest such venue in North America covers 100,000 square feet, with an 80-foot-wide stage and seating for 2,000 gamers, whose play is shown on an 85-foot LED screen. Several other esports venues include Esports Arena Las Vegas and Esports Arena Oakland.
These venues are designed not only for championships but also as training centers where gamers can get together and share their skills. Esports will attract more and more young people in the future. Many traditional sports owners, such as the owners of the New England Patriots and the New York Mets, have invested millions of dollars in gaming franchises. They believe that investing in esports is a way to engage with millennials in the future. Customers cannot always recognize slight improvements in picture and sound on TVs or other screens, but two new technologies may change their minds. The first technology is called immersive sound, and it is mostly used in movie theatres. Dolby Atmos is a 3D sound format that differs from traditional surround sound. With "object-based" sound, speakers fire toward the ceiling, so Atmos can make people feel sound flying over their heads when a plane appears on the screen. The technology has been applied to home theater since 2014, but with a very limited number of movies and shows, because most had not had the technology embedded. The movie market has finally caught up with the trend: Atmos-enabled home theatre equipment is selling at lower prices, and Apple's 4K TV, Amazon Prime Video, and Netflix all support 3D sound format streaming. Another technology is associated with the sharpest-ever picture. Many households are changing to 4K LCD TVs, while Samsung is developing new TVs that are much better than the televisions currently on sale and will change our idea of what a TV can be. "The Wall", the name of a giant TV, is 12 feet across and only a few inches thick; it is powered by micro-LED panels, making images look brighter, with deeper blacks, than competing technologies. Moreover, the micro-LED panels are stitched together, so "The Wall" can be built to any required size and shape. Schools that offer programs or degrees in entertainment technology include: Carnegie Mellon University, Entertainment Technology Center New York City College of Technology, Department of Entertainment Technology University of Southern California, Entertainment Technology Center The University of Texas at Austin, School of Design and Creative Technologies Millersville University, Multidisciplinary Studies Tshwane University of Technology, Pretoria, South Africa Entertainment Technology Training, New Zealand Currently, the only university offering a degree specifically in Entertainment Engineering and Design (EED) is the University of Nevada, Las Vegas (UNLV). Because UNLV's program is in its infancy, current entertainment technologists come from a wide variety of educational backgrounds, the most prevalent of which are theater and mechanical technology. The program provides a choice for students who want to get involved in the entertainment industry rather than pure engineering or technical theatre. The program can help students become competitive and successful in their careers. They will be proficient in engineering principles, new materials, and new technologies, while still meeting the artistic demands of the entertainment industry. A bachelor's degree in these areas will typically differ by only a few specialized classes.
Traditionally, people interested in careers in this field either apprenticed within craft unions or attended college programs in theatre technology. Although both are appropriate in limited ways, the growing world of entertainment technology encompasses many more types of performance and display environments than the theatre. To this end, newer opportunities have arisen that provide a wider educational base than these more traditional environments. The article "Rethinking Entertainment Technology Education" by John Huntington describes new teaching philosophies that resonate with the need for a richer and more flexible educational environment. See also Creative technology Game design Special effects References Entertainment Hyperreality Technology in society Technology by type
Entertainment technology
Technology
2,024
23,843,036
https://en.wikipedia.org/wiki/Gymnopilus%20hemipenetrans
Gymnopilus hemipenetrans is a species of mushroom in the family Hymenogastraceae. See also List of Gymnopilus species External links Gymnopilus hemipenetrans at Index Fungorum hemipenetrans Fungus species
Gymnopilus hemipenetrans
Biology
58
11,983,362
https://en.wikipedia.org/wiki/Chicago%20Varnish%20Company%20Building
The Chicago Varnish Company Building is a building built in 1895 as the headquarters of one of the leading varnish manufacturers in the United States, the Chicago Varnish Company. The building is a rare example of Dutch Renaissance Revival-style architecture in Chicago, and is marked by a steeply pitched roof paired with stepped gables of red brick and light stone in contrasting colors. The building was designed by Henry Ives Cobb, a nationally recognized architect whose other significant works include the former Chicago Historical Society Building, the Newberry Library, and the original buildings for the University of Chicago campus. The building was listed on the National Register of Historic Places on June 14, 2001, and was designated a Chicago Landmark on July 25, 2001. After an extensive rehabilitation, including replacement of the multi-gabled clay tile roof and rebuilding the stepped parapets, Harry Caray's Italian Steakhouse opened in the building on October 23, 1987. The restaurant has received numerous awards for its food and service, and features many items of memorabilia, including a "Holy Cow" wearing the trademark Harry Caray eyeglasses that was sourced from Chicago's CowParade. The building is distinctive for its use of the Dutch Renaissance revival style, with its stepped gables, steeply-pitched tile roof, and contrasting brick and stone masonry. The building and its restoration received the Chicago Landmarks Preservation Excellence award in 2006 for its careful restoration of the Ludowici roof. See also Chicago Landmark References External links Chicago Varnish Company Building Commercial buildings on the National Register of Historic Places in Chicago Chicago Landmarks Office buildings completed in 1895 Stepped gables
Chicago Varnish Company Building
Chemistry,Engineering
321
20,500,116
https://en.wikipedia.org/wiki/Marc%20Aaronson%20Memorial%20Lectureship
The Marc Aaronson Memorial Lectureship, also known as the Aaronson Prize, is an award of the University of Arizona Department of Astronomy and Steward Observatory which promotes and recognizes excellence in astronomical research. It is named after astronomer Marc Aaronson, who died in 1987, at age 36, in an accident while making astronomical observations. Background The lectureship and cash prize are awarded every eighteen months to an individual or group who, through passion for research and dedication to excellence, has produced a body of work in observational astronomy which has resulted in a significant deepening of our understanding of the universe. Any living scientist is eligible for this award without consideration of race, sex, or nationality. Fourteen previous Aaronson Prize winners returned to Tucson on April 3–4, 2017, for a scientific symposium in Aaronson's honor. Aaronson came to Steward Observatory as a postdoc after receiving his PhD degree from Harvard in 1977 and became an Associate Professor in 1983. His astronomical research focused on many of the most important problems of observational cosmology: the cosmic distance scale, the age of the Universe, the large-scale motion of matter, and the distribution of invisible mass in the Universe. Aaronson made important contributions to the understanding of stellar populations in the Large Magellanic Cloud. In recognition of his research achievements, Aaronson was awarded the George Van Biesbroeck Award by the University of Arizona in 1981, the Bart J. Bok Prize by Harvard University in 1983, and the Newton Lacy Pierce Prize by the American Astronomical Society in 1984. Recipients Source: University of Arizona See also List of astronomy awards References Aaronson Nomination Solicitation from 2003 University of Arizona University and college lecture series 1989 establishments in Arizona Science lecture series Astronomy education events Recurring events established in 1989
Marc Aaronson Memorial Lectureship
Astronomy
362
58,472,865
https://en.wikipedia.org/wiki/Phosphaethynolate
The phosphaethynolate anion, also referred to as PCO, is the phosphorus-containing analogue of the cyanate anion, with the chemical formula [PCO]− (also written [OCP]−). The anion has a linear geometry and is commonly isolated as a salt. When used as a ligand, the phosphaethynolate anion is ambidentate in nature, meaning it forms complexes by coordinating via either the phosphorus or the oxygen atom. This versatile character of the anion has allowed it to be incorporated into many transition metal and actinide complexes, but the focus of research around phosphaethynolate has now turned to utilising the anion as a synthetic building block for organophosphanes. Synthesis The first reported synthesis and characterisation of phosphaethynolate came from Becker et al. in 1992. They were able to isolate the anion as a lithium salt (in 87% yield) by reacting lithium bis(trimethylsilyl)phosphide with dimethyl carbonate. X-ray crystallographic analysis of the anion determined the P–C and C–O bond lengths, the former being indicative of a phosphorus–carbon triple bond. Similar studies were performed on derivatives of this structure, and the results indicated that dimerisation to form a four-membered lithium-containing ring is favoured by this molecule. Ten years later, in 2002, Westerhausen et al. published the use of Becker's method to make a family of alkaline earth metal salts of PCO; this work involved the synthesis of the magnesium, calcium, strontium and barium bis-phosphaethynolates. Like the salts previously reported by Becker, the alkaline earth metal analogues were unstable to moisture and air and thus had to be stored at low temperatures in dimethoxyethane solutions. It was not until 2011 that the first stable salt of the phosphaethynolate anion was reported, by Grutzmacher and co-workers. They managed to isolate the compound as a brown solid in 28% yield. The structure of the stable sodium salt, formed by carbonylation of sodium phosphide, contains bridging PCO units, in contrast to the terminal anions found in the previously reported structures. The authors noted that this sodium salt could be handled in air as well as water without major decomposition; this emphasises the significance of the accompanying counter-cation in stabilising PCO. Direct carbonylation was also the method employed by Goicoechea in 2013 in order to synthesise a phosphaethynolate anion stabilised by a potassium cation sequestered in 18-crown-6. This method required the carbonylation of phosphide solutions and produced by-products that were readily separated during aqueous work-ups. The use of aqueous work-ups reflects the high stability of the salt in water. The method afforded the PCO anion in a reasonable yield of around 43%. Characterisation of the compound involved infra-red spectroscopy, in which the band indicative of the triple-bond stretch was observed. Ambidentate nature of the anion The phosphaethynolate anion is the heavier isoelectronic congener of the cyanate anion. It has been shown to behave in a similar way to its lighter analogue, as an ambidentate nucleophile. This ambidentate character means that the anion is able to bind via both the phosphorus and oxygen atoms, depending on the nature of the centre being coordinated. Computational studies carried out on the anion, such as Natural Bond Orbital (NBO) and Natural Resonance Theory (NRT) analyses, can go part way to explaining why PCO can react in such a manner.
The two dominant resonance forms of the phosphaethynolate anion localise negative charge on either the phosphorus or the oxygen atom, meaning both are sites of nucleophilicity. The same applies to the cyanate anion, which is why PCO is observed to show similar pseudo-halogenic behaviour. Attack by oxygen Coordination via the oxygen atom is favoured by hard, highly electropositive centres. This is because oxygen is the more electronegative atom and thus prefers to bind via more ionic interactions. Examples of this type of coordination were presented in the work of Arnold et al. from 2015. The group found that actinide complexes of PCO involving uranium and thorium both coordinated through the oxygen. This is the result of the contracted nature of the actinide orbitals, which makes the metal centres more 'core-like', thus favouring ionic interactions. Attack by phosphorus On the other hand, softer, more polarisable centres prefer to coordinate in a more covalent manner, through the phosphorus atom. Examples include complexes accommodating a neutral or sparsely charged transition metal centre. The first example of this mode of PCO binding was published by Grutzmacher and co-workers in 2012. The group's studies used a Re(I) complex, and analysis of its bonding parameters and electronic structure showed that the phosphaethynolate anion coordinated in a bent fashion. This suggested the Re(I)–P bond possessed a highly covalent character, and thus the complex would be best described as a metallaphosphaketene. A second example of this coordination mode was not identified until four years later, in the form of a W(0) pentacarbonyl complex produced by the Goicoechea group. Rearrangement of coordination character One particular reaction studied by Grutzmacher et al. exhibits the rearrangement of the coordination character of PCO. When the anion reacts with triorganyl silicon compounds, it initially binds via the oxygen, forming the kinetic oxyphosphaalkyne product. The thermodynamic silyl phosphaketene product is generated when the kinetic product rearranges to allow PCO to coordinate through phosphorus. The formation of the kinetic product is charge controlled, which explains why it is formed by oxygen coordination: the oxygen atom favours a larger degree of ionic interaction as a result of its greater electronegativity. Contrastingly, the thermodynamic product of the reaction is generated under orbital control. This takes the form of phosphorus coordination, as the largest contribution to the HOMO of the anion resides on the phosphorus atom. Reactivity of the anion Extensive studies involving the phosphaethynolate anion have shown that it can react in a variety of ways. It has documented use in cycloadditions, as a phosphorus transfer agent, as a synthetic building block and as a pseudo-halide ligand (as described above). Phosphorus transfer agents In these reactions, CO is released as the phosphaethynolate anion acts as either a mild nucleophilic source of phosphorus or a Brønsted base. Examples of such reactions involving PCO include work conducted by Grutzmacher and Goicoechea. In 2014, Grutzmacher et al. reported that an imidazolium salt reacts with the phosphaethynolate anion to produce a phosphinidene–carbene adduct. Computational mechanistic studies were conducted on this reaction using density functional theory at the B3LYP/6-31+G* level.
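The two dominant resonance forms referred to above can be written in a standard Lewis-structure shorthand (my sketch, not a scheme reproduced from the article):

$[\,\mathrm{P}{\equiv}\mathrm{C}{-}\mathrm{O}\,]^- \;\longleftrightarrow\; [\,\mathrm{P}{=}\mathrm{C}{=}\mathrm{O}\,]^-$

with the negative charge formally localised on oxygen in the phosphaalkyne-like form (left) and on phosphorus in the phosphaketene-like form (right).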
The results of these investigations suggested that the lowest-energy, and therefore most likely, pathway involves PCO acting as a Brønsted base, initially deprotonating the acidic imidazolium cation to generate the intermediate phosphaketene, HPCO. The highly unstable protonated PCO remains hydrogen-bonded to the newly produced N-heterocyclic carbene prior to rearrangement and formation of the observed product. In this case, PCO does not act as a mild nucleophile, owing to the augmented stability of the starting imidazolium cation. On the other hand, in the work published by Goicoechea and co-workers in 2015, the phosphaethynolate anion can be seen to act as a source of nucleophilic phosphide. The anion was seen to add across the double bond of cyclotrisilene, thus introducing a phosphorus vertex into its scaffold (after undergoing decarbonylation). Cycloaddition reagents After synthesising the potassium salt of the phosphaethynolate anion in 2013, Goicoechea et al. began to look into the potential of PCO in cycloadditions. They found that the anion could react in a [2+2] fashion with a diphenyl ketene to produce the first isolable example of a four-membered monoanionic phosphorus-containing heterocycle. They employed the same method to test other unsaturated substrates, such as carbodiimides, and found that the likelihood of cyclisation depends heavily on the nature of the substituents on the unsaturated substrate. Cycloaddition reactions involving the phosphaethynolate anion have also been shown by Grutzmacher and co-workers to be a viable synthetic route to other heterocycles. One simple example is the reaction between NaPCO and an α-pyrone. This reaction yields the sodium phosphinin-2-olate salt, which is stable to both air and moisture. Synthetic building blocks A large part of the research involving PCO now looks into utilising the anion as a synthetic building block from which to derive phosphorus-containing analogues of small molecules. The first major breakthrough in this area came from Goicoechea et al. in 2013; they published the reaction between the PCO anion and ammonium salts, which yielded the phosphorus-containing analogue of urea in which phosphorus replaces a nitrogen atom. The group predicted that this heavier congener could have applications in new materials, anion sensing and coordination chemistry. Goicoechea and co-workers were also able to isolate the much-sought-after phosphorus-containing analogue of isocyanic acid, HPCO, in 2017. This molecule is thought to be a crucial intermediate in many reactions involving PCO (including P-transfer to an imidazolium cation). Moreover, the most recent addition to this class of small molecules is the phosphorus-containing analogue of N,N-dimethylformamide. This work, in which phosphorus again replaces a nitrogen atom, was published in 2018 by Stephan and co-workers. Generating acylphosphines in this manner is considered a much milder route than other current strategies, which require multi-step syntheses involving toxic, volatile and pyrophoric reagents. Other analogues The other analogues of the phosphaethynolate anion all obey the general formula E–C–X and are made by varying E and X. When changing either atom, clear trends amongst the different analogues become apparent. Varying E As E is varied by descending group 15, there is a clear shift in the weights of the resonance structures towards the ketene-like form [E=C=O]−.
This reflects the decrease in effective orbital overlap between E and C, which in turn disfavours multiple-bond formation. This increasing tendency to form double rather than triple E–C bonds is also reflected in calculated E–C bond lengths, which lengthen down the group in step with the change from a triple to a double bond. In addition, NBO analysis highlights that the greatest electron delocalisation within the anions stems from the donation of an oxygen lone pair into the E–C π antibonding orbital. The energy associated with this donation is seen to increase down the group. This explains the increasing resonance weight of the ketene-like isomer, as populating antibonding orbitals usually suggests the breaking of a bond. The shift towards the ketene isomer also causes an increase in charge density on the element E, making it an increasing source of nucleophilicity. Varying X The simplest analogue that can be formed as X is varied is [PCS]−. This anion was first isolated by Becker et al. by reacting the phosphaethynolate anion with carbon disulphide. Unlike PCO, PCS shows ambidentate nucleophilic tendencies towards the W(0) complex mentioned above. This is the result of a reduced difference in electronegativity between E and X, so that neither atom offers a substantial advantage over the other in terms of providing ionic contributions to bonding. As a result, the average electron density in PCS is spread over the entire anion, whereas in PCO most electron density is localised on the phosphorus atom, the atom which bonds to form the thermodynamically favourable product. References Anions Organophosphorus compounds Physical organic chemistry Substances discovered in the 1990s
Phosphaethynolate
Physics,Chemistry
2,720
16,892,128
https://en.wikipedia.org/wiki/Come%20By%20Chance%20Refinery
Come By Chance Refinery is a renewable diesel refinery operated by Braya Renewable Fuels in Come By Chance, Newfoundland and Labrador, Canada. It has a refining capacity of . History The refinery was built by John Shaheen's Shaheen Resources from 1971 to 1973, with the help of the British company Procon Limited, for $155 million. The refinery operated from December 1973 until it went bankrupt in 1976, with Shaheen Resources owing about $500 million. After four years of inactivity, the refinery was purchased by Petro-Canada for $10 million in 1980; Petro-Canada decided against reactivation and instead sold the refinery to the Bermuda-based Newfoundland Processing Ltd. for $1 in 1986, which reopened it the following year. In August 1994, the Vitol Group purchased the refinery, and the operating company North Atlantic Refining was founded. In 2014, it was acquired by SilverRange Capital Partners, a New York-based alternative asset manager. On May 28, 2020, Irving Oil announced that it was in negotiations to purchase the refinery. On October 5, 2020, the sale to Irving Oil collapsed and it was announced that the Come By Chance refinery would close permanently. In November 2021, the U.S. private equity group Cresta Fund Management purchased a controlling stake in the idled refinery and announced plans to convert the plant to a biofuel operation. As part of the acquisition, the refinery was renamed Braya Renewable Fuels. On September 2, 2022, an explosion at the refinery injured eight workers; one of them, Shawn Peddle of Clarenville (formerly of Hatchet Cove), died in hospital on October 15. The cause of the explosion is under investigation. In late August 2023, the Government of Newfoundland and Labrador awarded exclusive rights to pursue development of the Toqlukuti'k Wind and Hydrogen Ltd. project to Braya partner ABO Wind. Braya had previously issued an exclusive letter of support to ABO Wind for the joint development of green hydrogen production at its refinery in Come By Chance. The Crown lands decision awarded ABO Wind exclusive rights to 267,000 acres, or 417 square miles, in close proximity to the refinery. References External links North Atlantic Refining Oil refineries in Canada Buildings and structures in Newfoundland and Labrador
Come By Chance Refinery
Chemistry
462
11,421,870
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORA73
In molecular biology, the small nucleolar RNA SNORA73 (also called U17/E1 RNA) belongs to the H/ACA class of small nucleolar RNAs (snoRNAs). SNORA73 has functions in mediating the formation of 18S rRNA (an essential component of the ribosome), regulating chromatin function, and facilitating the secretion of proteins by directing specific mRNAs to the signal recognition particle (SRP). SNORA73 has been dubbed a ternary-glue snoRNA (TAG-snoRNA) because of its ability to promote association of mRNAs encoding secreted proteins with the SRP. SNORA73 is one of the most abundant snoRNAs in human cells, and its length in vertebrates ranges from 200 to 230 nucleotides, making it longer than most snoRNAs. There are two near-copies of SNORA73 in the human genome, SNORA73A and SNORA73B, both found in the introns of snoRNA host gene 3 (SNHG3). Formation of 18S rRNA SNORA73 (U17) is essential for the cleavage of pre-rRNA within the 5' external transcribed spacer (ETS). This cleavage leads to the formation of 18S rRNA. Regions of the U17 RNA are complementary to rRNA and act as guides for RNA/RNA interactions, although these regions do not seem to be well conserved between organisms. Involvement in protein secretion SNORA73 promotes protein secretion by directing mRNA containing the sequence GAGGCCCAGC to interact with the signal recognition particle (SRP) complex. SNORA73 has two conserved RNA-binding domains: 1) an mRNA-binding domain (MBD) and 2) a 7SL-binding domain (7BD) that recognizes a 14-bp region of 7SL, the RNA component of the ribonucleoprotein SRP complex. Binding of both domains creates mRNA–SNORA73–7SL RNA interactions, which cause SRP to interact with and bind the ribosome (aided in part by signal peptides present on the growing polypeptide chain). SRP then binds to the SRP receptor on the surface of the endoplasmic reticulum (ER), allowing the peptide to pass through the translocon into the ER lumen, where proteins are processed for secretion or trafficking to the cell membrane. SNORA73 thus acts as a molecular glue that facilitates interactions between mRNA and the SRP, improving the rate of secretion of proteins encoded by the target mRNA. Other functions There is evidence that SNORA73 functions as a regulator of chromatin function. SNORA73 is a chromatin-associated RNA (caRNA) and is stably linked to chromatin. Notably, SNORA73 can bind to PARP1, interacting with the PARP1 DNA-binding domain and leading to the activation of its ADPRylation (PAR) function. In addition, the snoRNA-activated PARP1 ADPRylates DDX21 in cells to promote cell proliferation. See also Small nucleolar SNORD12/SNORD106 References External links Small nuclear RNA
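As a toy illustration of the targeting step described above, the sketch below scans a transcript for the GAGGCCCAGC motif quoted in the article. The example sequence and the helper function are invented for illustration; real analyses would run over transcript FASTA data.

# Toy scan for the SRP-targeting motif quoted above.
MOTIF = "GAGGCCCAGC"

def find_motif(seq, motif=MOTIF):
    """Return 0-based start positions of every (possibly overlapping) match."""
    hits, start = [], 0
    while (pos := seq.find(motif, start)) != -1:
        hits.append(pos)
        start = pos + 1
    return hits

# Invented toy transcript; the motif happens to contain no T/U, so it reads
# the same in DNA and RNA alphabets.
example = "AUGGCC" + MOTIF + "GGUUAA"
print(find_motif(example))  # -> [6]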
Small nucleolar RNA SNORA73
Chemistry
701
4,007,780
https://en.wikipedia.org/wiki/Wohlwill%20process
The Wohlwill process is an industrial-scale chemical procedure used to refine gold to the highest degree of purity (99.999%). The process was invented in 1874 by Emil Wohlwill. This electrochemical process uses a cast gold ingot of 95%+ purity, often called a doré bar, as the anode. Lower percentages of gold in the anode will interfere with the reaction, especially when the contaminating metal is silver or one of the platinum group elements. The cathodes for this reaction are small sheets of pure (24k) gold sheeting or stainless steel. Current is applied to the system, and electricity travels through the electrolyte of chloroauric acid. Gold and other metals are dissolved at the anode, and pure gold (carried through the chloroauric acid by ion transfer) is preferentially plated onto the cathode, while the other metals remain in solution. When the anode is dissolved, the cathode is removed and melted or otherwise processed in the manner required for sale or use. The resulting gold is 99.999% pure, of higher purity than gold produced by the other common refining method, the Miller process, which produces gold of 99.5% purity. For industrial gold production the Wohlwill process is necessary for the highest-purity gold applications. When lower-purity gold is acceptable, refiners often use the Miller process for its relative ease and quicker turnaround times, and because it does not require a large inventory of gold in the form of chloroauric acid. See also Gold parting References Metallurgical processes Gold
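To make the electrolytic bookkeeping concrete, here is a minimal Faraday's-law sketch of the plating step. The current, duration, and the assumption that gold is transported as Au(III) in the chloroauric acid electrolyte are illustrative; the article gives no operating figures.

# Faraday's law for gold plated onto the cathode: m = M * I * t / (z * F).
M_AU = 196.97   # molar mass of gold, g/mol
F = 96485.0     # Faraday constant, C/mol
Z = 3           # electrons per ion, assuming Au(III) transport

def gold_deposited_g(current_a, hours, efficiency=1.0):
    """Mass of gold (g) deposited by a given current over a given time."""
    charge_c = current_a * hours * 3600.0
    return efficiency * M_AU * charge_c / (Z * F)

# Illustrative example: 100 A for 24 h at 100% current efficiency.
print(f"{gold_deposited_g(100, 24) / 1000:.2f} kg")  # about 5.88 kg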
Wohlwill process
Chemistry,Materials_science
343
2,405,677
https://en.wikipedia.org/wiki/Uncinula%20necator
Uncinula necator (syn. Erysiphe necator) is a fungus that causes powdery mildew of grape. It is a common pathogen of Vitis species, including the wine grape, Vitis vinifera. The fungus is believed to have originated in North America. European varieties of Vitis vinifera vary in their susceptibility to this fungus. Uncinula necator infects all green tissue on the grapevine, including leaves and young berries. It can cause crop loss and poor wine quality if untreated. The sexual stage of this pathogen requires free moisture to release ascospores from its cleistothecia in the spring. However, free moisture is not needed for secondary spread via conidia; high atmospheric humidity is sufficient. Its anamorph is called Oidium tuckeri. It produces common odors such as 1-octen-3-one and (Z)-1,5-octadien-3-one. This mildew can be treated with sulfur or fungicides; however, resistance to several chemical classes, such as benomyl, the DMIs, and the strobilurins, has developed. While synthetic fungicides are often recommended as applications around bloom, it is common to include sulfur in a tank mix to help with resistance management. Summary Powdery mildews are generally host-specific, and powdery mildew of grape is caused by a host-specific pathogen named Uncinula necator. Powdery mildew is a polycyclic disease that thrives in warm, moist environments. Its symptoms are widely recognizable and include gray-white fungal growth on the surface of infected plants. A sulfur formulation, fungicides, and limiting the environmental factors that favor the growth of powdery mildews are all practices that can stall or halt its growth. Hosts and symptoms Powdery mildews are generally host-specific. Uncinula necator is the pathogen that causes powdery mildew on grape. The most susceptible hosts of this pathogen are members of the genus Vitis. The signs of powdery mildews are widely recognizable and easily identifiable. The majority of them can be found on the upper sides of the leaves; however, the fungus can also infect the undersides, buds, flowers, young fruit, and young stems. A gray-white, dusty fungal growth consisting of mycelia, conidia and conidiophores coats much of the infected plant. Chasmothecia, the overwintering structures, present themselves as tiny, spherical fruiting structures that turn from white, to yellowish-brown, to black in color, and are about the size of the head of a pin. Symptoms that occur as a result of the infection include necrosis, stunting, leaf curling, and a decrease in the quality of the fruit produced. Disease cycle Powdery mildew is a polycyclic disease (one which produces a secondary inoculum) that initially infects the leaf surface with primary inoculum, which is conidia from mycelium, or secondary inoculum, which comes from an overwintering structure called a chasmothecium. When the disease begins to develop, it looks like a white powdery substance. The primary inoculum process begins with an ascogonium (female) and an antheridium (male) joining to produce an offspring. This offspring, a young chasmothecium, infects the host immediately or overwinters on the host to infect when the timing is right (typically in spring). To infect, it produces a conidiophore that then bears conidia. These conidia move along to a susceptible surface to germinate. Once these spores germinate, they produce a structure called a haustorium, capable of "sucking" nutrients from the plant cells directly under the epidermis of the leaf.
At this point, the fungus can infect leaves, buds and twigs that then reinfect other plants or further infect the current host. From this point, more white powdery signs of powdery mildew appear, and these structures produce secondary inoculum to reinfect the host with mycelium and conidia, or use the mycelium to produce primary inoculum for another plant. For germination to occur using a chasmothecium, the chasmothecium must be exposed to the right environmental conditions to rupture the structure and thereby release spores that may then germinate. Germination of conidia occurs at temperatures between 7 and 31 °C and is inhibited above 33 °C. Germination is greatest at 30–100% relative humidity. Environment Powdery mildew thrives in warm, moist environments and infects younger plant tissues such as fruit, leaves, and green stems and buds. Free water can disrupt conidia; only a humid microclimate is required for infection. Most infection begins when spring rain (2.5 mm) falls and temperatures are approximately 15 °C or higher. Rates of infection decline at temperatures higher than 30 °C, since the evaporation of water occurs readily. Cooler conditions, such as shading and poor aeration, promote infection due to a higher relative humidity, optimally 85% or greater. However, sporulation does occur at levels as low as 40%. Spores are dispersed mostly by wind and rain splash. Young, underdeveloped tissues are most susceptible to infection, primarily leaves and fruit. In warmer climates, buds provide overwintering protection for cultivars of Vitis vinifera and French hybrids during moderate winters. American cultivars are generally less susceptible to infection unless an unusually warm winter fails to kill the chasmothecia in buds. Most chasmothecia survive on the vine, where ample protection is provided in the bark. Management First and foremost, limiting the environmental factors that promote infection is key to managing powdery mildew on grapes. Optimal sites feature full sun on all grape structures and ample aeration to reduce humid microclimates under shading leaves. Pruning vines and clusters, planting on a gentle slope, and orienting rows to run north and south promote full sun and aeration. Dusting leaves and berries with lime and sulfur was effective in the 1850s during the epidemic in Europe. Current organic agricultural practices still use a sulfur formulation as a treatment for powdery mildew. However, some cultivars, such as Concord, are susceptible to phytotoxic damage with sulfur use. Since the fungus grows on tissue surfaces rather than inside epidermal cells, topical applications of oils and other compounds are recommended. Integrated pest management programs are utilized by both organic and conventional agriculture systems, with the latter prescribing the addition of fungicides. Typical applications of fungicides occur during prebloom and for 2–4 weeks post bloom. If the previous year provided a conducive environment for infection or the current year had a warm winter, earlier sprays are recommended due to a potentially higher amount of overwintered chasmothecia. If conditions are warm and humid, conidia are produced every 5–7 days throughout the growing season. To limit powdery mildew resistance, growers alternate treatments by employing multiple modes of action. Importance The disease affects grapes worldwide, leaving all agricultural grape businesses at risk from Uncinula necator.
References External links WD Gubler, MR Rademacher, SJ Vasquez, CS Thomas. 1999. Control of powdery mildew using the UC Davis Powdery Mildew Risk Index. APSnet Feature. https://web.archive.org/web/20100901181048/http://www.agf.gov.bc.ca/cropprot/grapeipm/mildew.htm Fungal grape diseases Erysiphales Fungi described in 1834 Fungus species
Uncinula necator
Biology
1,664
9,671,027
https://en.wikipedia.org/wiki/Lysophosphatidic%20acid
A lysophosphatidic acid (LPA) is a phospholipid derivative that can act as a signaling molecule. Function LPA acts as a potent mitogen due to its activation of three high-affinity G-protein-coupled receptors called LPAR1, LPAR2, and LPAR3 (also known as EDG2, EDG4, and EDG7). Additional, newly identified LPA receptors include LPAR4 (P2RY9, GPR23), LPAR5 (GPR92) and LPAR6 (P2RY5, GPR87). Clinical significance Because of its ability to stimulate cell proliferation, aberrant LPA-signaling has been linked to cancer in numerous ways. Dysregulation of autotaxin or the LPA receptors can lead to hyperproliferation, which may contribute to oncogenesis and metastasis. LPA may be the cause of pruritus (itching) in individuals with cholestatic (impaired bile flow) diseases. GTPase activation Downstream of LPA receptor activation, the small GTPase Rho can be activated, subsequently activating Rho kinase. This can lead to the formation of stress fibers and cell migration through the inhibition of myosin light-chain phosphatase. Metabolism There are a number of potential routes to its biosynthesis, but the most well-characterized is by the action of a lysophospholipase D called autotaxin, which removes the choline group from lysophosphatidylcholine. Lysophosphatidic acids are also intermediates in the synthesis of phosphatidic acids. See also Autotaxin GPR35 Phosphatidic acid Sphingosine-1-phosphate Gintonin References Further reading Phospholipids
Lysophosphatidic acid
Chemistry
394
511,091
https://en.wikipedia.org/wiki/Dipstick
A dipstick is one of several measurement devices. Some dipsticks are dipped into a liquid to perform a chemical test or to provide a measure of the quantity of the liquid. Since the late 20th century, a flatness/levelness measuring device trademarked "Dipstick" has been used to produce concrete and pavement surface profiles and to help establish profile measurement standards in the concrete floor and paving industries. Testing dipstick A testing dipstick is usually made of paper or cardboard and is impregnated with reagents that indicate some feature of the liquid by changing color. In medicine, dipsticks can be used to test a variety of liquids for the presence of a given substance, known as an analyte. For example, urine dipsticks are used to test urine samples for haemoglobin, nitrite (produced by bacteria in a urinary tract infection), protein, glucose and occasionally urobilinogen or ketones. They are usually brightly coloured. Measuring dipstick Dipsticks can also be used to measure the quantity of liquid in an otherwise inaccessible space, by inserting and removing the stick and then checking the extent of it covered by the liquid. The most familiar example is the oil-level dipstick found on most internal combustion engines. Other kinds of dipsticks are used to measure everything from fuel levels to the amount of beer left in an ale cask (firkin). Floor & pavement profiler "Dipstick" is the trade name of a profiling device manufactured by Face Construction Technologies of Norfolk, Virginia, USA. The instrument is used in 66 countries on six continents to measure the flatness and levelness of concrete floor slabs and pavements. The Dipstick measures concrete floor slab flatness/levelness in terms of Face Floor Profile Numbers ("F-Numbers"), a profile measurement system adopted in 1990 by the American Concrete Institute. The Dipstick is 'walked' across sections of the floor between two successive points and the data are collated. This is now regarded by some in the industry as a slow process, with other profiling devices available that offer accurate results in less time. The Dipstick can have a variable sampling interval from 75 mm up to 300 mm. F-Number measurement procedures were established by ASTM Standard E1155. The instrument also measures TR-34 Free Movement (FM); TR-34 Defined Movement (DM); Gap under Sliding Unleveled Straightedge; Gap under Rolling Straightedge; and DIN 18202. The U.S. Federal Highway Administration (FHWA) and the World Bank (with its International Roughness Index, or "IRI") have established measurement procedures using Dipstick profiler data. The American Association of State Highway and Transportation Officials (AASHTO) has established its Standard R 41 (most recently published as R 41-05 (2010)) to, "... manually collect precision profile data utilizing the Face Technologies Dipstick. The instrument measures profiles (relative elevation differences) at a rate and accuracy greater than traditional rod and level surveys. Procedures for measuring both longitudinal and transverse profiles are described." The Dipstick, with a reported accuracy of 0.01 mm (0.0004 in), measures "true" profiles and is the most widely used and accepted Class 1 profiler for the purposes of calibrating other profilers, although most state DOTs now use rolling-inclinometer-based systems for AASHTO R 56 certification procedures, which collect at the same 25 mm sampling interval as highway profilers.
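Because the instrument reads the elevation difference between its two feet at each step, a relative surface profile can be reconstructed by accumulating the successive readings from an arbitrary datum. A minimal sketch of that accumulation in Python, using hypothetical readings at a 300 mm step; this illustrates the principle only and is not Face's software:

```python
from itertools import accumulate

# Hypothetical Dipstick readings: the elevation difference (mm)
# between the instrument's two feet at each 300 mm step.
readings_mm = [0.4, -0.2, 0.1, -0.5, 0.3, 0.2, -0.1]

# Reconstruct the relative elevation profile by summing the
# differences, taking the first point as a 0.0 mm datum.
profile_mm = [0.0] + list(accumulate(readings_mm))

STEP_MM = 300
for i, elev in enumerate(profile_mm):
    print(f"{i * STEP_MM:5d} mm: {elev:+.1f} mm")
```

Summary statistics of exactly such difference sequences are what the F-Number and IRI procedures mentioned here are computed from.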
Dipstick was used to obtain data that were used as ground truth in FHWA evaluations of the repeatability of IRI values as measured by other profilers and in Long-Term Pavement Performance (LTPP) studies conducted by several states. The instrument was similarly used to produce reference measurements by the World Road Association (PIARC) in its 1998 "International Experiment to Harmonise Longitudinal and Transverse Profile Measurement and Reporting Procedures." The PIARC experiment was conducted in the US, Japan, the Netherlands and Germany and included IRI values from airport runways and superhighways to rough unpaved roads. See also Dripstick Litmus Urine test strip References Volumetric instruments
Dipstick
Technology,Engineering
880
1,616,580
https://en.wikipedia.org/wiki/Chilbolton%20Observatory
The Chilbolton Observatory is a facility for atmospheric and radio research located on the edge of the village of Chilbolton near Stockbridge in Hampshire, England. The facilities are run by the STFC Radio Communications Research Unit of the Rutherford Appleton Laboratory and form part of the Science and Technology Facilities Council. Overview The Chilbolton Observatory operates many pieces of research equipment associated with radar propagation and meteorology. These include: an S-band Doppler weather radar with its distinctive, fully steerable, 25-metre (82') parabolic antenna, known as CAMRa (Chilbolton Advanced Meteorological Radar); an L-band clear-air radar; a W-band bistatic zenith radar; a UV Raman lidar; multiple Ka-band radiometers; and multiple rain gauges. The observatory also hosts the UK's LOFAR station. Timeline of projects 1998 - CLARE'98 Cloud Lidar and Radar experiment, which eventually fed into the European Space Agency EarthCARE programme 2001 to 2004 - CLOUDMAP2 project to assist in numerical weather prediction models 2006 - Chilbolton Observatory joined forces with several European Space Agency sites to verify the L-band radio transmissions from the GIOVE-A satellite 2006 - NERC Cirrus and Anvils: European Satellite and Airborne Radiation measurements project 2008 - In-Orbit Test (IOT) performed for GIOVE-B 2008-9 - APPRAISE, during which the CAMRa and lidar were used to direct airborne measurements in mixed-phase clouds 2010 - LOFAR station UK608 constructed History Construction of Chilbolton Observatory started in 1963. It was built partially on the site of RAF Chilbolton, which was decommissioned in 1946. Several sites around the south-east of England were considered for the construction. The site at Chilbolton, on the edge of Salisbury Plain, was chosen in part because of excellent visibility of the horizon and its relative remoteness from major roads, whose cars could cause interference. The facility was opened in April 1967. Within several months of being commissioned, the azimuth bearing of the antenna suffered a catastrophic failure. GEC were contracted to repair the bearing and devised a system to replace the failed part while leaving the 400-tonne dish essentially in place. Originally, the antenna was engaged in Ku-band radio astronomy, but it now operates as an S- and L-band radar. References External links Chilbolton Observatory Facilities retrieved May 17, 2006 CLOUDMAP2 project homepage ESA News 'GIOVE A transmits loud and clear', ESA Portal - Improving Daily Life, March 9, 2006, retrieved May 17, 2006 Astronomical observatories in England Buildings and structures in Hampshire Low-Frequency Array Research institutes in Hampshire Science and Technology Facilities Council Space Situational Awareness Programme Test Valley Weather radars Meteorological observatories
Chilbolton Observatory
Environmental_science
576
60,373,549
https://en.wikipedia.org/wiki/Phase%20separation
Phase separation is the creation of two distinct phases from a single homogeneous mixture. The most common type of phase separation is between two immiscible liquids, such as oil and water. This type of phase separation is known as liquid-liquid equilibrium. Colloids are formed by phase separation, though not all phase separations form colloids - for example oil and water can form separated layers under gravity rather than remaining as microscopic droplets in suspension. A common form of spontaneous phase separation is termed spinodal decomposition; it is described by the Cahn–Hilliard equation. Regions of a phase diagram in which phase separation occurs are called miscibility gaps. There are two boundary curves of note: the binodal coexistence curve and the spinodal curve. On one side of the binodal, mixtures are absolutely stable. In between the binodal and the spinodal, mixtures may be metastable: staying mixed (or unmixed) absent some large disturbance. The region beyond the spinodal curve is absolutely unstable, and (if starting from a mixed state) will spontaneously phase-separate. The upper critical solution temperature (UCST) and the lower critical solution temperature (LCST) are two critical temperatures, above which or below which the components of a mixture are miscible in all proportions. It is rare for systems to have both, but some exist: the nicotine-water system has an LCST of 61 °C, and also a UCST of 210 °C at pressures high enough for liquid water to exist at that temperature. The components are therefore miscible in all proportions below 61 °C and above 210 °C (at high pressure), and partially miscible in the interval from 61 to 210 °C. Physical basis Mixing is governed by the Gibbs free energy, with phase separation or mixing occurring for whichever case lowers the Gibbs free energy. The free energy can be decomposed into two parts: G = H − TS, with H the enthalpy, T the temperature, and S the entropy. Thus, the change of the free energy in mixing is the sum of the enthalpy of mixing and the entropy of mixing: ΔG_mix = ΔH_mix − T·ΔS_mix. The enthalpy of mixing is zero for ideal mixtures, and ideal mixtures are enough to describe many common solutions. Thus, in many cases, mixing (or phase separation) is driven primarily by the entropy of mixing. It is generally the case that the entropy will increase whenever a particle (an atom, a molecule) has a larger space to explore; and thus, the entropy of mixing is generally positive: the components of the mixture can increase their entropy by sharing a larger common volume. Phase separation is then driven by several distinct processes. In one case, the enthalpy of mixing is positive, and the temperature is low: the increase in entropy is insufficient to lower the free energy. In another, considerably more rare case, the entropy of mixing is "unfavorable", that is to say, it is negative. In this case, even if the change in enthalpy is negative, phase separation will occur unless the temperature is low enough. It is this second case which gives rise to the idea of the lower critical solution temperature. Phase separation in cold gases A mixture of two helium isotopes (helium-3 and helium-4) in a certain range of temperatures and concentrations spontaneously separates into helium-4-rich and helium-3-rich regions. Phase separation also exists in ultracold gas systems, and has been shown experimentally in a two-component ultracold Fermi gas, where it can compete with other phenomena such as vortex lattice formation or an exotic Fulde–Ferrell–Larkin–Ovchinnikov phase.
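The enthalpy-entropy competition described above can be made concrete with the regular solution model, in which the free energy of mixing per mole, in units of RT, is x ln x + (1 − x) ln(1 − x) + χx(1 − x) for mole fraction x and a dimensionless interaction parameter χ (proportional to the enthalpy of mixing). A minimal numerical sketch in Python; the model choice and the χ values are illustrative assumptions, not taken from the sources above:

```python
import numpy as np

def g_mix(x, chi):
    """Regular-solution free energy of mixing, in units of RT."""
    return x * np.log(x) + (1 - x) * np.log(1 - x) + chi * x * (1 - x)

x = np.linspace(1e-4, 1 - 1e-4, 2001)
for chi in (1.5, 2.5, 3.0):
    # Curvature d2(G/RT)/dx2 = 1/x + 1/(1-x) - 2*chi; the spinodal
    # region is where the curvature is negative (absolute instability).
    curvature = 1 / x + 1 / (1 - x) - 2 * chi
    unstable = x[curvature < 0]
    print(f"chi={chi}: G_mix(0.5) = {g_mix(0.5, chi):+.3f} RT")
    if unstable.size:
        print(f"  spinodal region: x in [{unstable[0]:.3f}, {unstable[-1]:.3f}]")
    else:
        print("  no spinodal region; mixture stable at all compositions")
```

For χ below the critical value of 2 the curvature is positive everywhere and the mixture is stable at every composition; above it, a spinodal region opens around x = 0.5, which is exactly the picture of binodal and spinodal curves described above.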
See also Biomolecular condensate Colloid Phase diagram Phase rule UNIQUAC References Further reading Equilibrium chemistry Solvents Condensed matter physics
Phase separation
Physics,Chemistry,Materials_science,Engineering
802
25,698,890
https://en.wikipedia.org/wiki/Rann%20of%20Kutch
The Rann of Kutch is a large area of salt marshes that span the border between India and Pakistan. It is located mostly in the Kutch district of the Indian state of Gujarat, with a minor portion extending into the Sindh province of Pakistan. It is divided into the Great Rann and Little Rann. The area was once part of the Arabian Sea; when it dried up, it left behind the salt that forms the Rann of Kutch. The Luni River, which once reached the sea through this area, now terminates in the Rann, which is why the Luni does not flow into the Arabian Sea today. Geography The Rann of Kutch is located mostly in the Indian state of Gujarat, specifically Kutch district, for which it is named. Some parts extend into the Pakistani province of Sindh. The word Rann means "desert" in Gujarati. The Rann of Kutch covers around 26,000 square kilometres (10,000 square miles). The Great Rann of Kutch is the larger portion of the Rann. It extends east and west, with the Thar Desert to the north and the low hills of Kutch to the south. The Indus River Delta lies to the west in southern Pakistan. The Little Rann of Kutch lies southeast of the Great Rann, and extends southwards to the Gulf of Kutch. Many rivers originating in Rajasthan and Gujarat flow into the Rann of Kutch, including the Luni, Bhuki, Bharud, Nara, Kharod, Banas, Saraswati, Rupen, Bambhan, and Machchhu. Kori Creek and Sir Creek, tidal creeks which are part of the Indus River Delta, are located at the western end of the Great Rann. The surface is generally flat and very close to sea level, and most of the Rann floods annually during the monsoon season. There are areas of sandy higher ground, known as bets, which lie two to three metres above flood level. Trees and shrubs grow on the bets, and they provide refuges for wildlife during the annual floods. Climate The climate of the ecoregion is tropical savanna/semi-arid. Temperatures average 44 °C during the hot summer months, and can reach highs of 50 °C. During winter the temperature can drop to or below freezing point. Rainfall is highly seasonal. The Rann of Kutch is dry for most of the year, and rainfall is concentrated in the June to September monsoon season. During the monsoon season, local rainfall and river runoff flood much of the Rann to a depth of 0.5 metres. The waters evaporate during the long dry season, leaving the Rann dry again by the start of the next monsoon season. Ecology The Rann of Kutch is the only large flooded grasslands zone in the Indomalayan realm. The area has desert on one side and the sea on the other, which enables various ecosystems, including mangroves and desert vegetation. Its grassland and deserts are home to forms of wildlife that have adapted to its often harsh conditions. These include endemic and endangered animal and plant species. Flora The predominant vegetation in the Rann of Kutch is grassland and thorn scrub. Common grass species include Apluda aristata, Cenchrus spp., Pennisetum spp., Cymbopogon spp., Eragrostis spp., and Elionurus spp. Trees are rare except on the bets which rise above the flood zone. The non-native tree Prosopis juliflora has become established on the bets, and its seed pods provide year-round food for the wild asses. Fauna The Rann of Kutch is home to about 50 species of mammals.
They include several large herbivores, including the Indian wild ass (Equus hemionus khur), chinkara (Gazella bennettii), nilgai (Boselaphus tragocamelus), and blackbuck (Antilope cervicapra), and large predators like the wolf (Canis lupus), striped hyena (Hyaena hyaena), desert wildcat (Felis lybica), and caracal (Caracal caracal). The Indian wild ass once had a wider distribution but is now limited to the Rann of Kutch. The nilgai and blackbuck are threatened species. There are over 200 bird species in the Rann of Kutch, including the threatened lesser florican (Sypheotides indicus) and houbara bustard (Chlamydotis undulata). The seasonal wetlands provide habitat for many water birds, including the demoiselle crane (Grus virgo) and lesser flamingo (Phoeniconaias minor). History and culture The history of the Rann of Kutch began with early Neolithic settlements. It was later inhabited by the Indus Valley Civilization as well as the Maurya Empire and Gupta Empire of India. Indus Valley period The people of the Indus Civilization appear to have settled in the Rann of Kutch around 3500 BCE. The Indus city of Dholavira, the largest Indus site in India, is located in the Rann of Kutch. This city was built on the Tropic of Cancer, possibly indicating that Dholavira's inhabitants were skilled in astronomy. The Rann of Kutch also contained the industrial site of Khirasara, where a warehouse was found. Many Indologists such as A. S. Gaur and Mani Murali hold the view that the Rann of Kutch was, rather than the salt marsh that it is today, a navigable archipelago at the time of the Indus Civilization. The Indus Civilization was known to have an extensive maritime trade system, so it has been proposed by Gaur et al. that there were perhaps ports in the Rann of Kutch. Imperial Indian period The Rann of Kutch was a part of both the Maurya and Gupta empires of India. Colonial and modern periods The Rann of Kutch came under the control of the British Raj, which imposed a ban on salt harvesting. This ban was protested and overturned by the Indian activist Mahatma Gandhi. More recently, the residents of the Rann of Kutch began holding the Rann Utsav festival, a three-month-long carnival, which marks the peak tourist season. Kadiya dhro in Nakhatrana is a popular place amongst tourists. Conservation and protected areas A 2017 assessment found that 20,946 km2, or 76%, of the ecoregion is in protected areas. They include the Kutch Desert Wildlife Sanctuary (7506.22 km2), which was established in 1986 and covers much of the Great Rann, and the Indian Wild Ass Sanctuary (4953.71 km2), which was established in 1973 and covers much of the Little Rann. Pakistan's Rann of Kutch Wildlife Sanctuary protects the northern portion of the Great Rann and adjacent Thar Desert. References External links Global Species : Ecoregion : Rann of Kutch seasonal salt marsh Indomalayan ecoregions Flooded grasslands and savannas Natural regions of India Ecoregions of Asia Ecoregions of Pakistan Geography of Kutch district Geography of Sindh
Rann of Kutch
Chemistry
1,454
25,525,269
https://en.wikipedia.org/wiki/GJMS%20operator
In the mathematical field of differential geometry, the GJMS operators are a family of differential operators defined on a Riemannian manifold. In an appropriate sense, they depend only on the conformal structure of the manifold. The GJMS operators generalize the Paneitz operator and the conformal Laplacian. The initials GJMS are for its discoverers Graham, Jenne, Mason & Sparling (1992). Properly, the GJMS operator on a conformal manifold of dimension n is a conformally invariant operator between the line bundle of conformal densities of weight k − n/2 and the line bundle of densities of weight −k − n/2, for k a positive integer. The operators have leading symbol given by a power of the Laplace–Beltrami operator, and have lower order correction terms that ensure conformal invariance. The original construction of the GJMS operators used the ambient construction of Charles Fefferman and Robin Graham. A conformal density defines, in a natural way, a function on the null cone in the ambient space. The GJMS operator is defined by taking a density ƒ of the appropriate weight and extending it arbitrarily to a function F off the null cone so that it still retains the same homogeneity. The function Δ^k F, where Δ is the ambient Laplace–Beltrami operator, is then homogeneous of degree −k − n/2, and its restriction to the null cone does not depend on how the original function ƒ was extended to begin with, and so is independent of choices. The GJMS operator also represents the obstruction term to a formal asymptotic solution of the Cauchy problem for extending a function of the given homogeneity off the null cone in the ambient space to a harmonic function in the full ambient space. The most important GJMS operators are the critical GJMS operators. In even dimension n, these are the operators Ln/2 that take a true function on the manifold and produce a multiple of the volume form.
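The lowest-order members of the family make the definition concrete. As an illustrative display in standard notation (sign conventions for Δ and the scalar curvature R vary between authors):

```latex
% k = 1: the conformal Laplacian (Yamabe operator), the first GJMS operator,
% acting between conformal density bundles of weight 1 - n/2 and -1 - n/2:
P_1 \;=\; \Delta + \frac{n-2}{4(n-1)}\,R,
\qquad
P_1 : E\!\left[1 - \tfrac{n}{2}\right] \longrightarrow E\!\left[-1 - \tfrac{n}{2}\right].
% k = 2 gives the fourth-order Paneitz operator, with leading symbol \Delta^2.
```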
References Conformal geometry Differential operators
GJMS operator
Mathematics
401
30,468,790
https://en.wikipedia.org/wiki/Wentworth%20Wooden%20Puzzles
The Wentworth Wooden Jigsaw Company (also known as Wentworth Wooden Puzzles) is a British maker of jigsaw puzzles with whimsically shaped pieces reflecting the theme of the image portrayed on the puzzle. It was founded in 1991 by Kevin Wentworth Preston and is based in the village of Pinkney near Malmesbury, Wiltshire, an area of England known as the Cotswolds. Company history The venture was established on a dairy farm which was forced to diversify into other sources of income when milk production became uneconomical to sustain. Some of the old buildings were converted to industrial use, and the farm became an industrial estate housing many other traders, as well as the new puzzle enterprise. Technology Traditionally, jigsaw puzzles are cut with a thin, flexible, motor-driven blade known as a bandsaw. This method of cutting thin wood requires a degree of manual dexterity and patience to avoid spoiling the work, so an alternative to this labour-intensive way of cutting intricate shapes in wood was sought using modern technology. The advent of the commercial medium-power laser has enabled many industries to cut many different types of material speedily. The puzzle-manufacturing process uses a laser-cutting method invented and perfected by the founder, Kevin Wentworth Preston, in 1994. Wentworth production can now focus on the quality of manufacture and design innovation that this new tooling provides. The high-speed production technique allows the small company to supply in excess of 250,000 puzzles a year to destinations in over 35 countries throughout the world. The design team produces each cutout style individually; most of the designs are unique "whimsy" jigsaw shapes. Whimsies are specially shaped pieces cut into puzzles "on a whim" by hand cutters of the Victorian era, when jigsaw puzzles became a popular pastime. Wentworth retained this older style of manufacture, and is one of the remaining companies still producing puzzles using these Victorian techniques. 'Whimsy pieces' The 'whimsy' laser-cut wooden puzzles feature unique, individual, "whimsical" cut-out shapes that reflect the theme of the image used on the face of the puzzle. All puzzles are supplied in a cotton draw-string bag within a lidded box. These wooden puzzles are cut from 3 mm-thick wooden boards (as opposed to softer cardboard) to ensure they will survive the rigours of use for a very long time. Puzzles are supplied to the customer with the option of an image of the puzzle's subject matter printed on the box. With no reference image there is the added difficulty of assembling the pieces into the correct pattern, and the element of surprise concerning the subject matter when the puzzle's image is reassembled. Manufacture: 3 mm wooden board derived from sustainably managed forests. Features: puzzles rarely include corner pieces or two pieces of the same shape; 'whimsical' shapes reflect the theme of the image in all standard puzzles. Packaging: a cloth bag inside a sturdy box made from recycled material. Styles Traditional puzzles All traditional puzzles include the unique whimsy pieces. Common sizes include 100, 250, 500, 1,000 and 1,500 pieces. Personalised puzzles The ability to use a photograph or image design is a feature that Wentworth's puzzles make available in all the various sizes. Text may be added at the image-creation stage to include such messages as 'Happy Birthday' and 'Happy Anniversary', etc.
Difficult puzzles The Tessellation puzzle range uses jigsaw pieces that are almost all identical in pattern. Some utilise pieces shaped like animals, such as deer. Other subjects include repetitive plant shapes such as ivy and holly cuts. Children's puzzles Puzzle shapes and styles are designed to suit all ages and abilities, including images specially suited to children, which are traditionally constructed with larger, more manageable pieces with simpler pattern and shape design. The company was awarded recognition for its production in this section of the market in 2008. Mini Mindful puzzles On 6 May 2021, during Mental Health Awareness Week, Wentworth Wooden Puzzles released a series of mini jigsaw puzzles to promote the mindfulness and wellbeing that jigsaw puzzles can provide. Money from the sale of puzzles during that week was donated to Mind, a UK mental health charity. Impact of COVID-19 In March 2020, as people moved indoors due to COVID-19 restrictions, the Wiltshire factory saw the number of orders it received increase by nearly 400% compared with before the pandemic, and it struggled to keep up with the demand. In an effort to show gratitude to all those fighting the COVID-19 virus, the company launched a special "say thank you with a rainbow" puzzle. In March 2021 the company announced it had raised nearly £12,000, which was donated to a local coronavirus charity. References External links Wooden toys Toy brands Jigsaw puzzle manufacturers Mechanical puzzles Companies based in Wiltshire British companies established in 1994
Wentworth Wooden Puzzles
Mathematics
1,076
2,380,033
https://en.wikipedia.org/wiki/V838%20Herculis
V838 Herculis, also known as Nova Herculis 1991, was a nova which occurred in the constellation Hercules in 1991. It was discovered by George Alcock of Yaxley, Cambridgeshire, England at 4:35 UT on the morning of 25 March 1991. He found it with 10×50 binoculars, and on that morning its apparent visual magnitude was 5 (making it visible to the naked eye). Palomar Sky Survey plates showed that before the outburst, the star was at photographic magnitude 20.6 (blue light) and 18.25 (red light). V838 Herculis declined from its peak brightness very quickly, fading by 2 magnitudes in less than three days, making it one of the fastest classical novae ever recorded. All novae are binary stars, with a "donor" star orbiting a white dwarf. The two stars are so close to each other that material is transferred from the donor to the white dwarf. Because the distance between the two stars is comparable to the radius of the donor star, novae are often eclipsing binaries, and V838 Herculis does show such eclipses. The eclipses were first detected a few weeks after the nova outburst, and they show the system's orbital period to be 7 hours, 8 minutes and 36 seconds as of 1991. The shape of the eclipse light curve suggests that the white dwarf itself is not being eclipsed by the donor, but rather that the accretion disk surrounding the white dwarf is being partially eclipsed. The depth of the eclipses was initially only 0.1 magnitudes, but grew over the year following the nova event to 0.7 magnitudes, indicating that the accretion disk re-established itself after the nova outburst during that time. The white dwarf in the V838 Herculis system is an oxygen-neon-magnesium white dwarf, with a mass of about 1.35 solar masses, which is near the Chandrasekhar limit for white dwarf masses. The donor star is believed to be a main sequence star.
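The magnitude figures above translate into brightness ratios through the usual relation of a factor of 100 per 5 magnitudes. A quick sketch of the arithmetic in Python, using only values quoted in this article (note that comparing a photographic to a visual magnitude is only approximate):

```python
def flux_ratio(delta_mag: float) -> float:
    """Brightness ratio corresponding to a magnitude difference."""
    return 10 ** (0.4 * delta_mag)

# Outburst amplitude: red photographic magnitude ~18.25 before the
# outburst versus visual magnitude 5 at discovery.
print(f"Rise: roughly {flux_ratio(18.25 - 5):.1e} times brighter")

# Early decline: 2 magnitudes lost in under three days.
print(f"Decline: a factor of {flux_ratio(2):.1f} in brightness")
```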
References External links https://web.archive.org/web/20050915104557/http://www.tsm.toyama.toyama.jp/curators/aroom/var/nova/1990.htm (the page is currently offline) V838 Herculis at the American Association of Variable Star Observers website Novae Hercules (constellation) 1991 in science Astronomical objects discovered in 1991 Herculis, V838
V838 Herculis
Astronomy
510
48,416,325
https://en.wikipedia.org/wiki/Vilcha%2C%20Kharkiv%20Oblast
Vilcha () is a Ukrainian rural settlement in Chuhuiv Raion (prior to 2020 in Vovchansk Raion) in Kharkiv Oblast. It belongs to Vovchansk urban hromada, one of the hromadas of Ukraine. Population: Until 18 July 2020, Vilcha belonged to Vovchansk Raion. The raion was abolished in July 2020 as part of the administrative reform of Ukraine, which reduced the number of raions of Kharkiv Oblast to seven. The area of Vovchansk Raion was merged into Chuhuiv Raion. On 26 January 2024, a new law entered into force which abolished the status of urban-type settlement, and Vilcha became a rural settlement. History The urban-type settlement, sometimes called New Vilcha, was founded in 1993, when the 2,000 residents of Old Vilcha (709 km away, in Kyiv Oblast), located 45 km from the Chernobyl Nuclear Power Plant, moved here in the period 1993-1996. Immediately after the accident of 1986, the "Exclusion Zone" was recognized only in the area within a radius of 30 km from the nuclear plant. The idea of creating a village built up with typical cottage-type houses, where people were settled so that their relatives were next to them ("dachas"), was conceived back in 1989. The Soviet authorities presented Vilcha as a showcase for the concept. The center of the village was to be dedicated to an industry along with a laundromat, a branch of the Kharkov Radio Plant, a hotel, a youth sports and entertainment complex with 400 seats, greenhouses and a swimming pool. Russo-Ukrainian War During the initial eastern campaign of the 2022 Russian invasion of Ukraine, the village was occupied by Russia during the first days of the conflict. It was retaken by Ukrainian forces later that year during the 2022 Kharkiv counteroffensive. Russian forces began combat operations in the area of Vilcha once again on 10 May 2024, during the 2024 Kharkiv offensive. Geography Vilcha is located 6 km south of Vovchansk, not far from the border with the Russian oblast of Belgorod; it is served by the provincial highway T2104 and by Harbuzivka railway station, on the Belgorod-Kupiansk line. The town is 20 km from Bilyi Kolodiaz, 26 from Staryi Saltiv, 52 from Velykyi Burluk, 56 from Belgorod and 71 from Kharkiv. See also Vilcha, Kyiv Oblast References External links Rural settlements in Chuhuiv Raion Populated places established in 1993 Aftermath of the Chernobyl disaster
Vilcha, Kharkiv Oblast
Technology
569
19,518,563
https://en.wikipedia.org/wiki/Smart%20system
Smart systems are systems (usually computer or electronic systems) which are able to incorporate and perform functions of sensing, actuation, and control in order to analyze a situation based on acquired data and make decisions in a predictive or adaptive manner, thereby performing smart actions. In most cases the intelligence or "smartness" of the system can be attributed to autonomous operation based on closed-loop control, resource management, and networking capabilities. Characteristics Smart systems typically consist of diverse components: sensors for signal acquisition; elements transmitting the information to the command-and-control unit; command-and-control units that take decisions and give instructions based on the available information; components transmitting decisions and instructions; and actuators that perform or trigger the required action. Development Many smart systems evolved from microsystems. They combine technologies and components from microsystems technology (miniaturized electric, mechanical, optical, and fluidic devices) with other disciplines like biology, chemistry, nanoscience, or the cognitive sciences. There are three generations of smart systems: First-generation smart systems: object recognition devices, driver status monitoring, and multifunctional devices for minimally invasive surgery Second-generation smart systems: active miniaturized artificial organs like cochlear implants or the artificial pancreas, advanced energy management systems, and environmental sensor networks Third-generation smart systems: combine technical "intelligence" and cognitive functions so that they can provide an interface between the virtual and the physical world Challenges A major challenge in smart systems technology is the integration of a multitude of diverse components, developed and produced with very different technologies and materials. Focus is on the design and manufacturing of completely new marketable products and services for specialized applications (e.g., in medical technologies) and for mass-market applications (e.g., in the automotive industries). In an industrial context, and when emphasizing the combination of components with the aim of merging their functional and technical abilities into an interoperable system, the term "smart systems integration" is used. This term reflects the industrial requirement and particular challenge of integrating different technologies, component sizes, and materials into one system. The systems approach calls for integrated design and manufacturing and has to bring together interdisciplinary technological approaches and solutions (converging technologies). Manufacturing companies as well as research institutes therefore face challenges in terms of specialized technological know-how, skilled labor, design tools, and equipment needed for the research, design and manufacturing of integrated smart systems. Applications area for smart systems Smart systems address environmental, societal, and economic challenges like limited resources, climate change, population ageing, and globalization. They are for that reason increasingly used in a large number of sectors. Key sectors in this context are transportation, healthcare, energy, safety and security, logistics, ICT, and manufacturing.
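The component chain listed under Characteristics amounts to a closed control loop: sense, decide, actuate, repeat. A minimal sketch in Python; the thermostat-style rule and all names here are illustrative assumptions, not any particular product's interface:

```python
import random

def read_sensor() -> float:
    """Stand-in for a real sensor driver; returns a temperature in C."""
    return 20.0 + random.uniform(-5.0, 5.0)

def decide(temp_c: float, setpoint_c: float = 21.0, band_c: float = 0.5) -> str:
    """Command-and-control unit: a simple hysteresis (thermostat) rule."""
    if temp_c < setpoint_c - band_c:
        return "heat_on"
    if temp_c > setpoint_c + band_c:
        return "heat_off"
    return "hold"

def actuate(command: str) -> None:
    """Stand-in for a real actuator driver."""
    print(f"actuator <- {command}")

# The closed loop that makes the system "smart": sense -> decide -> actuate.
for _ in range(3):
    actuate(decide(read_sensor()))
```

Networking and resource management, the other ingredients named above, would sit around this loop, reporting readings upstream and accepting new setpoints.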
Environment In terms of environmental challenges, smart solutions for energy management and distribution, smart control of electrical drives, smart logistics, or energy-efficient facility management could, by 2020, reduce global emissions by 23%, an equivalent of 9.2 Gt of CO2 equivalent. Automotive sector In the automotive sector, smart systems integration will be a key enabler for pre-crash systems and predictive driver-assistance features to reach the goal of the Road Safety Action Plan to halve the number of traffic deaths by 2020. Furthermore, smart systems are considered fundamental for sustainable and energy-efficient mobility, e.g., hybrid and electric traction. Internet of Things Smart systems also contribute considerably to the development of the future Internet of Things, in that they provide smart functionality to everyday objects, e.g., to industrial goods in the supply chain, or to food products in the food supply chain. With the help of active RFID technology, wireless sensors, real-time sense-and-respond capability, energy efficiency, and networking functionality, objects will become smart objects. These smart objects could support the elderly and the disabled. The close tracking and monitoring of food products could improve food supply and quality. Smart industrial goods could store information about their origin, destination, components, and use. And waste disposal could become a truly efficient individual recycling process. Armatix developed a pistol that requires an RFID-active wristwatch in order to function. Healthcare In the healthcare sector, smart systems technology leads to better diagnostic tools and better treatment and quality of life for patients, while simultaneously reducing the costs of public healthcare systems. Key developments in this sector are smart miniaturized devices and artificial organs such as the artificial pancreas or cochlear implants. For example, lab-on-a-chip devices have biochemical sensors that detect specific molecular markers in body fluids or tissue. They can include multiple functionalities such as sample taking, sample preparation and pre-treatment, data processing, and storage. Other examples are implantable systems which can be resorbed by the body after use, non-invasive sensors based on transdermal principles, and devices for responsive administration of medication. In healthcare, smart systems often operate autonomously and within networks, because those systems are able to provide real-time monitoring, diagnosis, interaction with other devices, and communication with the patient or physician.
K.: Smart Structures, Oxford University Press 2005 External links The European Technology Platform on Smart Systems Integration (EPoSS) EPoSS Strategic Research Agenda 2017 Product Showcase Smart Systems Integrated® Smart Systems Integration 2009 - European Conference and Exhibition Smart Systems for Clean, Safe and Shared Road Vehicles – 22nd International Forum on Advanced Microsystems for Automotive Applications (AMAA 2018) Systems engineering
Smart system
Engineering
1,272
827,611
https://en.wikipedia.org/wiki/Castle%20Bravo
Castle Bravo was the first in a series of high-yield thermonuclear weapon design tests conducted by the United States at Bikini Atoll, Marshall Islands, as part of Operation Castle. Detonated on 1 March 1954, the device remains the most powerful nuclear device ever detonated by the United States and the first lithium deuteride-fueled thermonuclear weapon tested using the Teller-Ulam design. Castle Bravo's yield was 15 megatons of TNT, 2.5 times the predicted 6 megatons, due to unforeseen additional reactions involving lithium-7, which led to radioactive contamination in the surrounding area. Fallout, the heaviest of which was in the form of pulverized surface coral from the detonation, fell on residents of Rongelap and Utirik atolls, while finer particulate and gaseous fallout spread around the world. The inhabitants of the islands were evacuated only three days later and suffered radiation sickness. Twenty-three crew members of the Japanese fishing vessel Daigo Fukuryū Maru ("Lucky Dragon No. 5") were also contaminated by the heavy fallout, experiencing acute radiation syndrome, including the death six months later of Kuboyama Aikichi, the boat's chief radioman. The blast incited a strong international reaction over atmospheric thermonuclear testing. The Bravo Crater is located at . The remains of the Castle Bravo causeway are at . Bomb design Primary system The Castle Bravo device was housed in a cylinder that weighed and measured in length and in diameter. The primary device was a COBRA deuterium-tritium gas-boosted atomic bomb made by Los Alamos Scientific Laboratory, a very compact MK 7 device. This boosted fission device had been tested in the Upshot-Knothole Climax event and yielded (out of 50–70 kt expected yield range). It was considered successful enough that the planned operation series Domino, designed to explore the same question about a suitable primary for thermonuclear bombs, could be canceled. The implosion system was quite lightweight at , because it eliminated the aluminum pusher shell around the tamper and used the more compact ring lenses, a design feature shared with the Mark 5, 12, 13 and 18 designs. The explosive material of the inner charges in the MK 7 was changed to the more powerful Cyclotol 75/25, instead of the Composition B used in most stockpiled bombs at that time, as Cyclotol 75/25 was denser than Composition B and thus could generate the same amount of explosive force in a smaller volume (it provided 13 percent more compressive energy than Comp B). The composite uranium-plutonium COBRA core was levitated in a type-D pit. A copper pit liner encased within the weapon-grade plutonium inner capsule prevented DT gas diffusion into the plutonium, a technique first tested in Greenhouse Item. The assembled module weighed , measuring across. It was located at the end of the device, which, as seen in the declassified film, shows a small cone projecting from the ballistic case. This cone is the part of the paraboloid that was used to focus the radiation emanating from the primary into the secondary. Deuterium and lithium The device was called SHRIMP, and had the same basic configuration (radiation implosion) as the Ivy Mike wet device, except with a different type of fusion fuel. SHRIMP used lithium deuteride (LiD), which is solid at room temperature; Ivy Mike used cryogenic liquid deuterium (D2), which required elaborate cooling equipment.
Castle Bravo was the first test by the United States of a practical deliverable fusion bomb, even though the TX-21 as proof-tested in the Bravo event was not weaponized. The successful test rendered obsolete the cryogenic design used by Ivy Mike and its weaponized derivative, the JUGHEAD, which was slated to be tested as the initial Castle Yankee. It also used a 7075 aluminum ballistic case. Aluminum was used to drastically reduce the bomb's weight and simultaneously provided sufficient radiation confinement time to raise yield, a departure from the heavy stainless steel casing (304L or MIM 316L) employed by other weapon projects at the time. The SHRIMP was at least in theory and in many critical aspects identical in geometry to the RUNT and RUNT II devices later proof-fired in Castle Romeo and Castle Yankee respectively. On paper it was a scaled-down version of these devices, and its origins can be traced back to 1953. The United States Air Force indicated the importance of lighter thermonuclear weapons for delivery by the B-47 Stratojet and B-58 Hustler. Los Alamos National Laboratory responded to this indication with a follow-up enriched version of the RUNT scaled down to a 3/4-scale radiation-implosion system called the SHRIMP. The proposed weight reduction (from TX-17's to TX-21's ) would provide the Air Force with a much more versatile deliverable gravity bomb. The final version tested in Castle used partially enriched lithium as its fusion fuel. Natural lithium is a mixture of lithium-6 and lithium-7 isotopes (with 7.5% of the former). The enriched lithium used in Bravo was nominally 40% lithium-6 (the remainder was the much more common lithium-7, which was incorrectly assumed to be inert). The fuel slugs varied in enrichment from 37 to 40% lithium-6, and the slugs with lower enrichment were positioned at the end of the fusion-fuel chamber, away from the primary. The lower levels of lithium enrichment in the fuel slugs, compared with the ALARM CLOCK and many later hydrogen weapons, were due to shortages in enriched lithium at that time, as the first of the Alloy Development Plants (ADP) started production in late 1953. The volume of LiD fuel used was approximately 60% the volume of the fusion fuel filling used in the wet SAUSAGE and dry RUNT I and II devices, or about 500 litres, corresponding to about 390 kg of lithium deuteride (as LiD has a density of 0.78201 g/cm3). The mixture cost about 4.54 USD/g at that time. The fusion burn efficiency was close to 25.1%, the highest attained efficiency of the first thermonuclear weapon generation. This efficiency is well within the figures given in a November 1956 statement, when a DOD official disclosed that thermonuclear devices with efficiencies ranging from 15% up to about 40% had been tested. Hans Bethe reportedly stated independently that the first generation of thermonuclear weapons had (fusion) efficiencies varying from as low as 15% up to about 25%. The thermonuclear burn would produce (like the fission fuel in the primary) pulsations (generations) of high-energy neutrons with an average energy of 14 MeV through Jetter's cycle. Jetter's cycle The Jetter cycle is a combination of reactions involving lithium, deuterium, and tritium. It consumes lithium-6 and deuterium, and in two reactions (with energies of 17.6 MeV and 4.8 MeV, mediated by a neutron and tritium) it produces two alpha particles. The reaction would produce high-energy neutrons with 14 MeV, and its neutronicity was estimated at ≈0.885 (for a Lawson criterion of ≈1.5).
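The cycle's energy bookkeeping can be checked directly from the two reaction energies quoted above (17.6 MeV for the D-T step and 4.783 MeV for the neutron capture on lithium-6). A minimal sketch in Python; the per-kilogram figure uses standard atomic masses and is an illustrative estimate, not a sourced yield calculation:

```python
# Reaction energies quoted in the text, in MeV
Q_DT = 17.6      # d + t -> helium-4 + n
Q_LI6 = 4.783    # n + lithium-6 -> t + helium-4

# One pass of the cycle consumes one deuteron and one lithium-6 nucleus;
# the triton and the neutron are regenerated, so the net release is:
q_cycle_mev = Q_DT + Q_LI6
print(f"Energy per 6Li-D pair: {q_cycle_mev:.1f} MeV")

# Rough energy density of lithium-6 deuteride (illustrative estimate)
MEV_TO_J = 1.602e-13
AMU_TO_KG = 1.661e-27
pair_mass_kg = (6.015 + 2.014) * AMU_TO_KG   # 6Li + D atomic masses
e_per_kg = q_cycle_mev * MEV_TO_J / pair_mass_kg
print(f"~{e_per_kg:.1e} J/kg (~{e_per_kg / 4.184e12:.0f} kt of TNT per kg)")
```

At the roughly 25% burn efficiency and ~390 kg fuel load quoted above, such a figure implies a fusion contribution of several megatons, with the balance of the total yield coming from fission of the tamper and spark plug.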
Possible additional tritium for high-yield As SHRIMP, along with the RUNT I and ALARM CLOCK, were to be high-yield shots required to assure the thermonuclear "emergency capability," their fusion fuel may have been spiked with additional tritium, in the form of LiT. All of the high-energy 14 MeV neutrons would cause fission in the uranium fusion tamper wrapped around the secondary and the spark plug's plutonium rod. The ratio of deuterium (and tritium) atoms burned by 14 MeV neutrons spawned by the burning was expected to vary from 5:1 to 3:1, a standardization derived from Mike, while for these estimations the ratio of 3:1 was predominantly used in ISRINEX. The neutronicity of the fusion reactions harnessed by the fusion tamper would dramatically increase the yield of the device. SHRIMP's indirect drive Attached to the cylindrical ballistic case was a natural-uranium liner, the radiation case, that was about 2.5 cm thick. Its internal surface was lined with copper that was about 240 μm thick, and made from 0.08-μm thick copper foil, to increase the overall albedo of the hohlraum. Copper possesses excellent reflecting properties, and its low cost, compared to other reflecting materials like gold, made it useful for mass-produced hydrogen weapons. Hohlraum albedo is a very important design parameter for any inertial-confinement configuration. A relatively high albedo permits higher interstage coupling due to the more favorable azimuthal and latitudinal angles of reflected radiation. The limiting value of the albedo for high-Z materials is reached when the thickness is 5–10 g/cm2, or 0.5–1.0 mean free paths. Thus, a hohlraum made of uranium much thicker than a mean free path of uranium would be needlessly heavy and costly. At the same time, the angular anisotropy increases as the atomic number of the scatterer material is reduced. Therefore, hohlraum liners require the use of copper (or, as in other devices, gold or aluminium), as the absorption probability increases with the value of Z of the scatterer. There are two sources of X-rays in the hohlraum: the primary's irradiance, which is dominant at the beginning and during the pulse rise; and the wall, which is important during the required radiation temperature's (T) plateau. The primary emits radiation in a manner similar to a flash bulb, and the secondary needs constant T to properly implode. This constant wall temperature is dictated by the ablation pressure requirements to drive compression, which lie on average at about 0.4 keV (out of a range of 0.2 to 2 keV), corresponding to several million kelvins. Wall temperature depended on the temperature of the primary's core, which peaked at about 5.4 keV during boosted fission. The final wall temperature, which corresponds to the energy of the wall-reradiated X-rays onto the secondary's pusher, also drops due to losses from the hohlraum material itself. Natural uranium nails, lined to the top of their head with copper, attached the radiation case to the ballistic case. The nails were bolted in vertical arrays in a double-shear configuration to better distribute the shear loads. This method of attaching the radiation case to the ballistic case was first used successfully in the Ivy Mike device. The radiation case had a parabolic end, which housed the COBRA primary that was employed to create the conditions needed to start the fusion reaction, and its other end was a cylinder, as also seen in Bravo's declassified film.
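The radiation temperatures quoted here in kiloelectronvolts convert to kelvins through the Boltzmann constant, at about 1.16 × 10^7 K per keV, which is what makes 0.4 keV "several million kelvins". A quick check in Python:

```python
K_B_EV_PER_K = 8.617e-5  # Boltzmann constant, eV per kelvin

def kev_to_kelvin(t_kev: float) -> float:
    """Convert a plasma/radiation temperature from keV to kelvins."""
    return t_kev * 1e3 / K_B_EV_PER_K

# Values quoted above: channel walls 0.2-2 keV, primary core ~5.4 keV
for t_kev in (0.2, 0.4, 2.0, 5.4):
    print(f"{t_kev:>4} keV ~= {kev_to_kelvin(t_kev):.2e} K")
```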
The space between the uranium fusion tamper and the case formed a radiation channel to conduct X-rays from the primary to the secondary assembly; the interstage. It is one of the most closely guarded secrets of a multistage thermonuclear weapon. Implosion of the secondary assembly is indirectly driven, and the techniques used in the interstage to smooth the spatial profile (i.e. reduce coherence and nonuniformities) of the primary's irradiance are of utmost importance. This was done with the introduction of the channel filler, an optical element used as a refractive medium, also encountered as the random-phase plate in ICF laser assemblies. This medium was a polystyrene plastic foam filling, extruded or impregnated with a low-molecular-weight hydrocarbon (possibly methane gas), which turned to a low-Z plasma under the X-rays, and along with channeling radiation it modulated the ablation front on the high-Z surfaces; it "tamped" the sputtering effect that would otherwise "choke" radiation from compressing the secondary. The reemitted X-rays from the radiation case must be deposited uniformly on the outer walls of the secondary's tamper and ablate it externally, driving the thermonuclear fuel capsule (increasing the density and temperature of the fusion fuel) to the point needed to sustain a thermonuclear reaction (see Nuclear weapon design). This point is above the threshold where the fusion fuel would turn opaque to its own emitted radiation, as determined from its Rosseland opacity, meaning that the generated energy balances the energy lost to the fuel's vicinity (as radiation and particle losses). After all, for any hydrogen weapon system to work, this energy equilibrium must be maintained through the compression equilibrium between the fusion tamper and the spark plug (see below), hence their name, equilibrium supers. Since the ablative process takes place on both walls of the radiation channel, a numerical estimate made with ISRINEX (a thermonuclear explosion simulation program) suggested that the uranium tamper also had a thickness of 2.5 cm, so that an equal pressure would be applied to both walls of the hohlraum. The rocket effect on the surface of the tamper's wall created by the ablation of its several superficial layers would force an equal mass of uranium that rested in the remainder of the tamper to speed inwards, thus imploding the thermonuclear core. At the same time, the rocket effect on the surface of the hohlraum would force the radiation case to speed outwards. The ballistic case would confine the exploding radiation case for as long as necessary. The fact that the tamper material was uranium enriched in uranium-235 is primarily based on the final fission reaction fragments detected in the radiochemical analysis, which conclusively showed the presence of uranium-237, found by the Japanese in the shot debris. The first-generation thermonuclear weapons (MK-14, 16, 17, 21, 22 and 24) all used uranium tampers enriched to 37.5% uranium-235. The exception to this was the MK-15 ZOMBIE, which used a 93.5% enriched fission jacket. The secondary assembly The secondary assembly was the actual SHRIMP component of the weapon. The weapon, like most contemporary thermonuclear weapons at that time, bore the same codename as the secondary component. The secondary was situated in the cylindrical end of the device, where its end was locked to the radiation case by a type of mortise and tenon joint.
The hohlraum at its cylindrical end had an internal projection, which nested the secondary and had better structural strength to support the secondary assembly, which carried most of the device's mass. One way to visualize the joint is as a cap (the secondary) fitted into a cone (the projection of the radiation case). Any other major supporting structure would interfere with radiation transfer from the primary to the secondary and introduce complex vibrational behavior. With this form of joint bearing most of the structural loads of the secondary, the latter and the hohlraum-ballistic case ensemble behaved as a single mass sharing common eigenmodes. To reduce excessive loading of the joint, especially during deployment of the weapon, the forward section of the secondary (i.e. the thermal blast/heat shield) was anchored to the radiation case by a set of thin wires, which also aligned the center line of the secondary with the primary, as they diminished bending and torsional loads on the secondary, another technique adopted from the SAUSAGE. The secondary assembly was an elongated truncated cone. From its front part (excluding the blast-heat shield) to its aft section it was steeply tapered. Tapering was used for two reasons. First, radiation intensity drops with the square of the distance, hence radiation coupling is relatively poor in the aftmost sections of the secondary. This made the use of a higher mass of the then-scarce fusion fuel in the rear end of the secondary assembly ineffective and the overall design wasteful. This was also the reason why the lower-enriched slugs of fusion fuel were placed far aft of the fuel capsule. Second, as the primary could not illuminate the whole surface of the hohlraum, in part due to the large axial length of the secondary, relatively small solid angles would be effective in compressing the secondary, leading to poor radiation focusing. By tapering the secondary, the hohlraum could be shaped as a cylinder in its aft section, obviating the need to machine the radiation case to a parabola at both ends. This optimized radiation focusing and enabled a streamlined production line, as it was cheaper, faster and easier to manufacture a radiation case with only one parabolic end. The tapering in this design was much steeper than in its cousins, the RUNT and the ALARM CLOCK devices. SHRIMP's tapering and its mounting to the hohlraum apparently made the whole secondary assembly resemble the body of a shrimp. The secondary's length is defined by the two pairs of dark-colored diagnostic hot-spot pipes attached to the middle and left section of the device. These pipe sections were in diameter and long and were butt-welded end-to-end to the ballistic case, leading out to the top of the shot cab. They would carry the initial reaction's light up to the array of 12 mirror towers built in an arc on the artificial shot island created for the event. From those pipes, mirrors would reflect early bomb light from the bomb casing to a series of remote high-speed cameras, so that Los Alamos could determine both the simultaneity of the design (i.e. the time interval between the primary's firing and the secondary's ignition) and the thermonuclear burn rate in these two crucial areas of the secondary device. This secondary assembly device contained the lithium deuteride fusion fuel in a stainless-steel canister. Running down the center of the secondary was a 1.3 cm thick hollow cylindrical rod of plutonium, nested in the steel canister.
This was the spark plug, a tritium-boosted fission device. It was assembled from plutonium rings and had a hollow volume inside that measured about 0.5 cm in diameter. This central volume was lined with copper, which, like the liner in the primary's fissile core, prevented DT gas diffusion into the plutonium. The spark plug's boosting charge contained about 4 grams of tritium and, imploding together with the secondary's compression, was timed to detonate by the first generations of neutrons that arrived from the primary. Timing was defined by the geometric characteristics of the spark plug (its uncompressed annular radius), which detonated when its criticality, or k, exceeded 1. Its purpose was to compress the fusion material around it from its inside, applying pressure equal to that of the tamper. The compression factor of the fusion fuel and its adiabatic compression energy determined the minimal energy required for the spark plug to counteract the compression of the fusion fuel and the tamper's momentum. The spark plug weighed about 18 kg, and its initial firing yielded . Then it would be completely fissioned by the fusion neutrons, contributing about to the total yield. The energy required by the spark plug to counteract the compression of the fusion fuel was lower than the primary's yield because coupling of the primary's energy in the hohlraum is accompanied by losses due to the difference between the X-ray fireball and the hohlraum temperatures. The neutrons entered the assembly through a small hole in the ≈28 cm thick uranium-238 blast-heat shield. It was positioned in front of the secondary assembly, facing the primary. Similar to the tamper-fusion capsule assembly, the shield was shaped as a circular frustum, with its small diameter facing the primary's side, and with its large diameter locked by a type of mortise and tenon joint to the rest of the secondary assembly. The shield-tamper ensemble can be visualized as a circular bifrustum. All parts of the tamper were similarly locked together to provide structural support and rigidity to the secondary assembly. Surrounding the fusion-fuel-spark-plug assembly was the uranium tamper, with a standoff air gap about 0.9 cm wide that was to increase the tamper's momentum, a levitation technique used as early as Operation Sandstone and described by physicist Ted Taylor as hammer-on-the-nail impact. Since there were also technical concerns that high-Z tamper material would mix rapidly with the relatively low-density fusion fuel, leading to unacceptably large radiation losses, the stand-off gap also acted as a buffer to mitigate the unavoidable and undesirable Taylor mixing. Use of boron Boron was used at many locations in this dry system; it has a high cross-section for the absorption of slow neutrons, which fission uranium-235 and plutonium-239, but a low cross-section for the absorption of fast neutrons, which fission uranium-238. Because of this characteristic, boron-10 deposited onto the surface of the secondary stage would prevent pre-detonation of the spark plug by stray neutrons from the primary without interfering with the subsequent fissioning of the uranium-238 of the fusion tamper wrapping the secondary. Boron also played a role in increasing the compressive plasma pressure around the secondary by blocking the sputtering effect, leading to higher thermonuclear efficiency. Because the structural foam holding the secondary in place within the casing was doped with boron-10, the secondary was compressed more highly, at a cost of some radiated neutrons.
(The Castle Koon MORGENSTERN device did not use ¹⁰B in its design; as a result, the intense neutron flux from its RACER IV primary predetonated the spherical fission spark plug, which in turn "cooked" the fusion fuel, leading to an overall poor compression.) The plastic's low molecular weight meant that it was unable to implode the secondary's mass; its plasma pressure is confined to the boiled-off sections of the tamper and the radiation case, so that material from neither of these two walls can enter the radiation channel, which has to remain open for the radiation transit. Detonation The device was mounted in a "shot cab" on an artificial island built on a reef off Namu Island, in Bikini Atoll. A sizable array of diagnostic instruments was trained on it, including high-speed cameras trained through an arc of mirror towers around the shot cab. The detonation took place at 06:45 on 1 March 1954, local time (18:45 on 28 February GMT). When Bravo was detonated, within one second it formed a fireball almost across. This fireball was visible on Kwajalein Atoll over away. The explosion left a crater in diameter and in depth. The mushroom cloud reached a height of and a diameter of in about a minute, a height of and in diameter in less than 10 minutes, and was expanding at more than . As a result of the blast, the cloud contaminated more than of the surrounding Pacific Ocean, including some of the surrounding small islands like Rongerik, Rongelap, and Utirik. In terms of energy released (usually measured in TNT equivalence), Castle Bravo was about 1,000 times more powerful than the atomic bomb that was dropped on Hiroshima during World War II. Castle Bravo is the sixth-largest nuclear explosion in history, exceeded by the Soviet tests of Tsar Bomba at approximately 50 Mt, Test 219 at 24.2 Mt, and three other Soviet tests (Test 147, Test 173 and Test 174) of ≈20 Mt conducted in 1962 at Novaya Zemlya. High yield The yield of 15 (± 5) Mt was triple the 5 Mt predicted by its designers. The cause of the higher yield was an error made by designers of the device at Los Alamos National Laboratory. They considered only the lithium-6 isotope in the lithium deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert. It was expected that the lithium-6 isotope would absorb a neutron from the fissioning plutonium and emit an alpha particle and tritium in the process, of which the latter would then fuse with the deuterium and increase the yield in a predicted manner. Lithium-6 indeed reacted in this manner. It was assumed that the lithium-7 would absorb one neutron, producing lithium-8, which decays (through beta decay into beryllium-8) to a pair of alpha particles on a timescale of nearly a second, vastly longer than the timescale of the nuclear detonation. However, when lithium-7 is bombarded with energetic neutrons with an energy greater than 2.47 MeV, rather than simply absorbing a neutron, it undergoes nuclear fission into an alpha particle, a tritium nucleus, and another neutron. As a result, much more tritium was produced than expected, the extra tritium fusing with deuterium and producing an extra neutron. The extra neutron produced by fusion and the extra neutron released directly by the lithium-7 breakup produced a much larger neutron flux. The result was greatly increased fissioning of the uranium tamper and increased yield. 
Summarizing, the reactions involving lithium-6 result in some combination of the two following net reactions: n + ⁶Li → ³H + ⁴He + 4.783 MeV ⁶Li + ²H → 2 ⁴He + 22.373 MeV But when lithium-7 is present, one also has some amounts of the following two net reactions: ⁷Li + n → ³H + ⁴He + n ⁷Li + ²H → 2 ⁴He + n + 15.123 MeV This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device's explosive output. The test used lithium with a high percentage of lithium-7 only because lithium-6 was then scarce and expensive; the later Castle Union test used almost pure lithium-6. Had sufficient lithium-6 been available, the usability of the common lithium-7 might not have been discovered. The unexpectedly high yield of the device severely damaged many of the permanent buildings on the control-site island on the far side of the atoll. Little of the desired diagnostic data on the shot was collected; many instruments designed to transmit their data back before being destroyed by the blast were instead vaporized instantly, while most of the instruments that were expected to be recovered for data retrieval were destroyed by the blast. In an additional unexpected event, albeit one of far less consequence, X-rays traveling through line-of-sight (LOS) pipes caused a small second fireball at Station 1200 with a yield of . High levels of fallout The fission reactions of the natural uranium tamper were quite dirty, producing a large amount of fallout. That, combined with the larger than expected yield and a major wind shift, produced some very serious consequences for those in the fallout range. In the declassified film Operation Castle, the task force commander Major General Percy Clarkson pointed to a diagram indicating that the wind shift was still in the range of "acceptable fallout", although just barely. The decision to carry out the Bravo test under the prevailing winds was made by Dr. Alvin C. Graves, the Scientific Director of Operation Castle. Graves had total authority over detonating the weapon, above that of the military commander of Operation Castle. Graves appears in the widely available film of the earlier 1952 test "Ivy Mike", which examines the last-minute fallout decisions. The narrator, the western actor Reed Hadley, is filmed aboard the control ship in that film, showing the final conference. Hadley points out that 20,000 people live in the potential area of the fallout. He asks the control panel scientist if the test can be aborted and is told "yes", but it would ruin all their preparations in setting up timed measuring instruments. In Mike, the fallout correctly landed north of the inhabited area but, in the 1954 Bravo test, there was a large amount of wind shear, and the wind that was blowing north the day before the test steadily veered towards the east. Inhabited islands affected Radioactive fallout was spread eastward onto the inhabited Rongelap and Rongerik atolls, which were evacuated 48 hours after the detonation. In 1957, the Atomic Energy Commission deemed Rongelap safe for return and allowed 82 inhabitants to move back to the island. Upon their return, they discovered that their previous staple foods, including arrowroot, makmok, and fish, had either disappeared or gave residents various illnesses, and they were again removed. 
Ultimately, 15 islands and atolls were contaminated, and by 1963 Marshall Islands natives began to suffer from thyroid tumors, including 20 of the 29 children who had been on Rongelap at the time of Bravo, and many birth defects were reported. The islanders received compensation from the U.S. government, relative to how much contamination they received, beginning in 1956; by 1995 the Nuclear Claims Tribunal reported that it had awarded $43.2 million, nearly its entire fund, to 1,196 claimants for 1,311 illnesses. A medical study, named Project 4.1, studied the effects of the fallout on the islanders. Although the atmospheric fallout plume drifted eastward, once fallout landed in the water it was carried in several directions by ocean currents, including northwest and southwest. Fishing boats A Japanese fishing boat, the Daigo Fukuryū Maru (Lucky Dragon No. 5), came in direct contact with the fallout, which caused many of the crew to grow ill due to radiation sickness. One member died six months later of a secondary infection following acute radiation exposure, and another had a child that was stillborn and deformed. This resulted in an international incident and reignited Japanese concerns about radiation, especially as Japanese citizens were once more adversely affected by US nuclear weapons. The official US position had been that the growth in the strength of atomic bombs was not accompanied by an equivalent growth in radioactivity released, and they denied that the crew was affected by radioactive fallout, despite data from the Japanese fishing vessel and other international sources showing the contrary. Sir Joseph Rotblat, working at St Bartholomew's Hospital, London, demonstrated that the contamination caused by the fallout from the test was far greater than that stated officially. Rotblat deduced that the bomb had three stages and showed that the fission phase at the end of the explosion increased the amount of radioactivity a thousand-fold. Rotblat's paper was taken up by the media, and the outcry in Japan reached such a level that diplomatic relations became strained and the incident was even dubbed by some as a "second Hiroshima". Nevertheless, the Japanese and US governments quickly reached a political settlement, with the transfer to Japan of $15.3 million as compensation, the surviving victims each receiving the equivalent of about $5,550 in 1954 dollars. It was also agreed that the victims would not be given Hibakusha status. In 2016, 45 Japanese fishermen from other ships sued their government for not disclosing records about their exposure to Operation Castle fallout. Records released in 2014 acknowledge that the crews of 10 ships were exposed, but at levels below those considered damaging to health. In 2018 the suit was rejected by the Kochi District Court, which acknowledged the fishermen's radiation exposure but could not "conclude that the state persistently gave up providing support and conducting health surveys to hide the radiation exposure". Bomb test personnel take shelter Unanticipated fallout and the radiation emitted by it also affected many of the vessels and personnel involved in the test, in some cases forcing them into bunkers for several hours. 
In contrast to the crew of the Daigo Fukuryū Maru, who did not anticipate the hazard and therefore did not take shelter in the hold of their ship, or refrain from inhaling the fallout dust, the firing crew that triggered the explosion safely sheltered in their firing station on the island of Enyu in Bikini Atoll when they noticed that the wind was carrying the fallout in the unanticipated direction towards them; the crew sheltered in place ("buttoned up") for several hours until outside radiation decayed to safer levels. A dose rate of 25 roentgens per hour was recorded above the bunker. US Navy ships affected The US Navy tanker was at Enewetak Atoll in late February 1954. Patapsco lacked a decontamination washdown system, and was therefore ordered on 27 February to return to Pearl Harbor at the highest possible speed. A breakdown in her engine systems, namely a cracked cylinder liner, slowed Patapsco to one-third of her full speed, and when the Castle Bravo detonation took place, she was still about 180 to 195 nautical miles east of Bikini. Patapsco was in the range of nuclear fallout, which began landing on the ship in the mid-afternoon of 2 March. By this time Patapsco was 565 to 586 nautical miles from ground zero. The fallout was at first thought to be harmless and there were no radiation detectors aboard, so no decontamination measures were taken. Measurements taken after Patapsco had returned to Pearl Harbor suggested an exposure range of 0.18 to 0.62 R/hr. Total exposure estimates range from 3.3 R to 18 R of whole-body radiation, taking into account the effects of natural washdown from rain, and variations between above- and below-deck exposure. International incident The fallout spread traces of radioactive material as far as Australia, India and Japan, and even the United States and parts of Europe. Though organized as a secret test, Castle Bravo quickly became an international incident, prompting calls for a ban on the atmospheric testing of thermonuclear devices. A worldwide network of gummed film stations was established to monitor fallout following Operation Castle. Although meteorological data was poor, a general connection of tropospheric flow patterns with observed fallout was evident. There was a tendency for fallout/debris to remain in tropical latitudes, with incursions into the temperate regions associated with meteorological disturbances of the predominantly zonal flow. Outside of the tropics, the Southwestern United States received the greatest total fallout, about five times that received in Japan. Stratospheric fallout particles of strontium-90 from the test were later captured with balloon-borne air filters used to sample the air at stratospheric altitudes; the research (Project Ashcan) was conducted to better understand the stratosphere and fallout times, and arrive at more accurate meteorological models after hindcasting. The fallout from Castle Bravo and other testing on the atoll also affected islanders who had previously inhabited the atoll, and who returned there some time after the tests. This was due to the presence of radioactive caesium-137 in locally grown coconut milk. Plants and trees absorb potassium as part of the normal biological process, but will also readily absorb caesium if present, being of the same group on the periodic table, and therefore very similar chemically. Islanders consuming contaminated coconut milk were found to have abnormally high concentrations of caesium in their bodies and so had to be evacuated from the atoll a second time. 
The American magazine Consumer Reports warned of the contamination of milk with strontium-90. Impact on US nuclear test policy Following the disaster and higher-than-expected yields throughout Operation Castle (48.2 Mt total yield against the expected 23 Mt), US policy on thermonuclear testing in the Pacific changed. In 1956, Operation Redwing was conducted on an "energy budget", limiting the total testing yield to 20 Mt, and specifically limiting the fission yield, divided between Los Alamos Scientific Laboratory and the University of California Radiation Laboratory at Livermore. While some very "dirty" fission weapons were tested, this also began the usage of the "materials substitution method", where the fission-product-fallout-producing uranium-238 tamper was replaced with a "clean" lead tamper, at the cost of halving the yield. In 1958, during preparation for Operation Hardtack I, the second round of thermonuclear testing in the Pacific since Castle, President Dwight D. Eisenhower established an unwritten rule that no American test could exceed the 15 Mt yield of Castle Bravo. His successor, President John F. Kennedy, adhered to this standard, even following the 50 Mt Soviet Tsar Bomba test in 1961 and pressure from the Department of Defense, Atomic Energy Commission, and the Livermore laboratory. Following the 1963 Partial Nuclear Test Ban Treaty against non-underground tests, American testing continued underground, the largest yield being the 5 Mt 1971 Grommet Cannikin test. Weapon history The Soviet Union had previously used lithium deuteride in its Sloika design (known as the "Joe-4" in the U.S.), in 1953. It was not a true hydrogen bomb; fusion provided only 15–20% of its yield, most coming from boosted fission reactions. Its yield was 400 kilotons, and it could not be scaled indefinitely, as a true thermonuclear device can. The Teller–Ulam-based "Ivy Mike" device had a much greater yield of 10.4 Mt, but most of this also came from fission: 77% of the total came from fast fission of its natural-uranium tamper. Castle Bravo had the greatest yield of any U.S. nuclear test, 15 Mt, though again, a substantial fraction came from fission. In the Teller–Ulam design, the fission and fusion stages were kept physically separate in a reflective cavity. The radiation from the exploding fission primary brought the fuel in the fusion secondary to critical density and pressure, setting off thermonuclear (fusion) chain reactions, which in turn set off a tertiary fissioning of the bomb's ²³⁸U fusion tamper and casing. Consequently, this type of bomb is also known as a "fission-fusion-fission" device. The Soviet researchers, led by Andrei Sakharov, developed and tested their first Teller–Ulam device in 1955. The publication of the Bravo fallout analysis was a militarily sensitive issue, with Joseph Rotblat possibly deducing the staging nature of the Castle Bravo device by studying the ratio and presence of tell-tale isotopes, namely uranium-237, present in the fallout. This information could potentially reveal the means by which megaton-yield nuclear devices achieve their yield. Soviet scientist Andrei Sakharov hit upon what the Soviet Union regarded as "Sakharov's third idea" during the month after the Castle Bravo test, the final piece of the puzzle being the idea that the compression of the secondary could be accomplished by the primary's X-rays before fusion begins. The Shrimp device design later evolved into the Mark 21 nuclear bomb, of which 275 units were produced, weighing and measuring long and in diameter. 
This 18-megaton bomb was produced until July 1956. In 1957, it was converted into the Mark 36 nuclear bomb and entered into production again. Health impacts Following the test, the United States Department of Energy estimated that 253 inhabitants of the Marshall Islands were impacted by the radioactive fallout. This single test exposed the surrounding populations to varying levels of radiation. The fallout levels attributed to the Castle Bravo test are the highest in history. Populations neighboring the test site were exposed to high levels of radiation, resulting in mild radiation sickness in many (nausea, vomiting, diarrhea). The unexpected strength of the detonation, combined with shifting wind patterns, sent some of the radioactive fallout over the inhabited atolls of Rongelap and Utrik. Within 52 hours, the 86 people on Rongelap and 167 on Utrik were evacuated to Kwajalein for medical care. Several weeks later, many people began suffering from alopecia (hair loss) and skin lesions. Exposure to the fallout has been linked to an increased likelihood of several types of cancer, such as leukemia and thyroid cancer. The relationship between iodine-131 levels and thyroid cancer is still being researched. There are also correlations between fallout exposure levels and thyroid diseases such as hypothyroidism. Populations of the Marshall Islands that received significant exposure to radionuclides have a much greater risk of developing cancer. There is a presumed association between radiation levels and functioning of the female reproductive system. In popular culture The Castle Bravo detonation and the subsequent poisoning of the crew aboard Daigo Fukuryū Maru led to an increase in antinuclear protests in Japan. It was compared to the bombings of Hiroshima and Nagasaki, and the Castle Bravo test frequently figured in the plots of Japanese media, especially in relation to Japan's most widely recognized media icon, Godzilla. In the 2019 film Godzilla: King of the Monsters, Castle Bravo becomes the call sign for Monarch Outpost 54 located in the Atlantic Ocean, near Bermuda. The Donald Fagen song "Memorabilia" from his 2012 album Sunken Condos mentions both the Castle Bravo and Ivy King nuclear tests. In 2013, the Defense Threat Reduction Agency published Castle Bravo: Fifty Years of Legend and Lore. The report is a guide to off-site radiation exposures, a narrative history, and a guide to primary historical references concerning the Castle Bravo test. The report focuses on the circumstances that resulted in radioactive exposure of the uninhabited atolls, and makes no attempt to address in detail the effects on or around Bikini Atoll. Gallery See also Chagai-I Chagai-II History of nuclear weapons Operation Ivy Tsar Bomba References Notes Citations Bibliography Chuck Hansen, U. S. Nuclear Weapons: The Secret History (Arlington: AeroFax, 1988) Holly M. Barker, Bravo for the Marshallese: Regaining Control in a Post-Nuclear, Post-Colonial World (Belmont, CA: Wadsworth, 2004) Republic of the Marshall Islands Embassy website External links US tests hydrogen bomb in Bikini (BBC News) First-person article about conducting the test Strategic Air Command History – Development of Atomic Weapons 1956 1950s in the Marshall Islands 1954 disasters in Oceania 1954 in military history 1954 in the environment 1954 in the Trust Territory of the Pacific Islands Explosions in 1954 March 1954 events in Oceania Nuclear accidents and incidents Nuclear testing at Bikini Atoll Ralik Chain
Castle Bravo
Chemistry
8,755
485,457
https://en.wikipedia.org/wiki/Periodogram
In signal processing, a periodogram is an estimate of the spectral density of a signal. The term was coined by Arthur Schuster in 1898. Today, the periodogram is a component of more sophisticated methods (see spectral estimation). It is the most common tool for examining the amplitude vs frequency characteristics of FIR filters and window functions. FFT spectrum analyzers are also implemented as a time-sequence of periodograms. Definition There are at least two different definitions in use today. One of them involves time-averaging, and one does not. Time-averaging is also the purview of other articles (Bartlett's method and Welch's method). This article is not about time-averaging. The definition of interest here is that the power spectral density of a continuous function, x(t), is the Fourier transform of its auto-correlation function (see Cross-correlation theorem, Spectral density, and Wiener–Khinchin theorem): S(f) = ∫ r_xx(τ) e^{−i2πfτ} dτ. Computation For sufficiently small values of the sampling interval T, an arbitrarily accurate approximation for the Fourier transform X(f) can be observed in the region −1/(2T) < f < 1/(2T) of the function: X_{1/T}(f) = Σ_n T·x(nT)·e^{−i2πfnT}, which is precisely determined by the samples x(nT) that span the non-zero duration of x(t) (see Discrete-time Fourier transform). And for sufficiently large values of the parameter N, X_{1/T}(f) can be evaluated at an arbitrarily close frequency by a summation of the form: X_{1/T}(k/(NT)) = Σ_n T·x(nT)·e^{−i2πkn/N}, where k is an integer. The periodicity of e^{−i2πkn/N} allows this to be written very simply in terms of a Discrete Fourier transform: X_{1/T}(k/(NT)) = Σ_{n=0}^{N−1} T·x_N(nT)·e^{−i2πkn/N}, where x_N is a periodic summation: x_N(nT) = Σ_m x((n − mN)T). When evaluated for all integers, k, between 0 and N−1, the array: S(k/(NT)) = |X_{1/T}(k/(NT))|² is a periodogram. Applications When a periodogram is used to examine the detailed characteristics of an FIR filter or window function, the parameter N is chosen to be several multiples of the non-zero duration of the x[n] sequence, which is called zero-padding. When it is used to implement a filter bank, N is several sub-multiples of the non-zero duration of the x[n] sequence. One of the periodogram's deficiencies is that the variance at a given frequency does not decrease as the number of samples used in the computation increases. It does not provide the averaging needed to analyze noiselike signals or even sinusoids at low signal-to-noise ratios. Window functions and filter impulse responses are noiseless, but many other signals require more sophisticated methods of spectral estimation. Two of the alternatives use periodograms as part of the process: The method of averaged periodograms, more commonly known as Welch's method, divides a long x[n] sequence into multiple shorter, and possibly overlapping, subsequences. It computes a windowed periodogram of each one and computes an array average, i.e. an array where each element is an average of the corresponding elements of all the periodograms. For stationary processes, this reduces the noise variance of each element by approximately a factor equal to the reciprocal of the number of periodograms. Smoothing is an averaging technique in frequency, instead of time. The smoothed periodogram is sometimes referred to as a spectral plot. Periodogram-based techniques introduce small biases that are unacceptable in some applications. Other techniques that do not rely on periodograms are presented in the spectral density estimation article. 
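As an illustration of the non-averaged definition above, the following is a minimal numpy sketch (not from the article) that computes a zero-padded periodogram of a finite sequence; the sampling rate, test signal, window, and padding factor are arbitrary choices for the example. Production implementations are available as scipy.signal.periodogram and scipy.signal.welch.

```python
# Minimal periodogram sketch: |DFT|^2 of a zero-padded, finite sequence.
import numpy as np

def periodogram(x, fs, nfft=None):
    """Return frequencies and |DFT|^2 of x, zero-padded to nfft points."""
    n = len(x)
    nfft = nfft or 8 * n                      # several multiples: zero-padding
    X = np.fft.rfft(x, n=nfft)                # DFT of the implicitly padded x
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs) # bin frequencies in Hz
    return freqs, np.abs(X) ** 2

fs = 1000.0
t = np.arange(256) / fs
x = np.sin(2 * np.pi * 125.0 * t) * np.hanning(256)  # windowed sinusoid
f, P = periodogram(x, fs)
print(f[np.argmax(P)])                        # peak near 125 Hz
```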
See also Matched filter Filtered backprojection (Radon transform) Welch's method Bartlett's method Discrete-time Fourier transform Least-squares spectral analysis, for computing periodograms in data that is not equally spaced MUltiple SIgnal Classification (MUSIC), a popular parametric superresolution method SAMV Notes References Further reading Frequency-domain analysis Fourier analysis
Periodogram
Physics
747
47,522,164
https://en.wikipedia.org/wiki/List%20of%20blizzards
This is a list of blizzards, arranged alphabetically by continent. A blizzard is defined as a severe snowstorm characterized by strong sustained winds of at least and lasting for three hours or more. The list covers blizzards in various countries since 1972. Africa Asia Australia Europe North America South America See also List of ice storms List of costly or deadly hailstorms List of dust storms with visibility of 1/4 mile or less, or meters or less List of weather records Lowest temperature recorded on Earth References Weather hazards Severe weather and convection Weather-related lists
List of blizzards
Physics
112
4,237,747
https://en.wikipedia.org/wiki/Simulated%20fluorescence%20process%20algorithm
The Simulated Fluorescence Process (SFP) is a computing algorithm used for scientific visualization of 3D data from, for example, fluorescence microscopes. By modeling a physical light/matter interaction process, an image can be computed which shows the data as it would have appeared in reality when viewed under these conditions. Principle The algorithm considers a virtual light source producing excitation light that illuminates the object. This casts shadows either on parts of the object itself or on other objects below it. The interaction between the excitation light and the object provokes the emission light, which also interacts with the object before it finally reaches the eye of the viewer. See also Computer graphics lighting Rendering (computer graphics) References External links Freeware SFP renderer Computational science Computer graphics algorithms Visualization (graphics) Microscopes Microscopy Fluorescence
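The principle above lends itself to a simple two-pass volume computation. The following numpy sketch is an illustration only, not the published SFP implementation: for simplicity it assumes both the excitation source and the viewer lie along the same axis of the volume, and the absorption coefficients are arbitrary.

```python
# Simplified two-pass "simulated fluorescence" shading of a 3D volume.
import numpy as np

def sfp_render(density, k_exc=0.1, k_emi=0.05):
    """density: 3D fluorophore concentration; light and viewer along axis 0."""
    # Pass 1: excitation reaching each voxel, attenuated by material above it.
    absorb = np.cumsum(density, axis=0) - density   # exclusive running sum
    excitation = np.exp(-k_exc * absorb)
    emitted = density * excitation                  # local emission strength
    # Pass 2: emission attenuated on its way back up toward the viewer.
    out = emitted * np.exp(-k_emi * absorb)
    return out.sum(axis=0)                          # project to a 2D image

vol = np.zeros((32, 64, 64))
vol[8:12, 20:40, 20:40] = 1.0    # a bright slab that casts a shadow
vol[20:24, 20:40, 20:40] = 1.0   # a deeper slab, partly shadowed by the first
img = sfp_render(vol)
print(img.shape, img.max())
```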
Simulated fluorescence process algorithm
Physics,Chemistry,Astronomy,Mathematics,Technology,Engineering
171
23,653
https://en.wikipedia.org/wiki/Pyrimidine
Pyrimidine (; ) is an aromatic, heterocyclic, organic compound similar to pyridine (). One of the three diazines (six-membered heterocyclics with two nitrogen atoms in the ring), it has nitrogen atoms at positions 1 and 3 in the ring. The other diazines are pyrazine (nitrogen atoms at the 1 and 4 positions) and pyridazine (nitrogen atoms at the 1 and 2 positions). In nucleic acids, three types of nucleobases are pyrimidine derivatives: cytosine (C), thymine (T), and uracil (U). Occurrence and history The pyrimidine ring system has wide occurrence in nature as substituted and ring fused compounds and derivatives, including the nucleotides cytosine, thymine and uracil, thiamine (vitamin B1) and alloxan. It is also found in many synthetic compounds such as barbiturates and the HIV drug zidovudine. Although pyrimidine derivatives such as alloxan were known in the early 19th century, a laboratory synthesis of a pyrimidine was not carried out until 1879, when Grimaux reported the preparation of barbituric acid from urea and malonic acid in the presence of phosphorus oxychloride. The systematic study of pyrimidines began in 1884 with Pinner, who synthesized derivatives by condensing ethyl acetoacetate with amidines. Pinner first proposed the name “pyrimidin” in 1885. The parent compound was first prepared by Gabriel and Colman in 1900, by conversion of barbituric acid to 2,4,6-trichloropyrimidine followed by reduction using zinc dust in hot water. Nomenclature The nomenclature of pyrimidines is straightforward. However, like other heterocyclics, tautomeric hydroxyl groups yield complications since they exist primarily in the cyclic amide form. For example, 2-hydroxypyrimidine is more properly named 2-pyrimidone. A partial list of trivial names of various pyrimidines exists. Physical properties Physical properties are shown in the data box. A more extensive discussion, including spectra, can be found in Brown et al. Chemical properties Per the classification by Albert, six-membered heterocycles can be described as π-deficient. Substitution by electronegative groups or additional nitrogen atoms in the ring significantly increase the π-deficiency. These effects also decrease the basicity. Like pyridines, in pyrimidines the π-electron density is decreased to an even greater extent. Therefore, electrophilic aromatic substitution is more difficult while nucleophilic aromatic substitution is facilitated. An example of the last reaction type is the displacement of the amino group in 2-aminopyrimidine by chlorine and its reverse. Electron lone pair availability (basicity) is decreased compared to pyridine. Compared to pyridine, N-alkylation and N-oxidation are more difficult. The pKa value for protonated pyrimidine is 1.23 compared to 5.30 for pyridine. Protonation and other electrophilic additions will occur at only one nitrogen due to further deactivation by the second nitrogen. The 2-, 4-, and 6- positions on the pyrimidine ring are electron deficient analogous to those in pyridine and nitro- and dinitrobenzene. The 5-position is less electron deficient and substituents there are quite stable. However, electrophilic substitution is relatively facile at the 5-position, including nitration and halogenation. Reduction in resonance stabilization of pyrimidines may lead to addition and ring cleavage reactions rather than substitutions. One such manifestation is observed in the Dimroth rearrangement. Pyrimidine is also found in meteorites, but scientists still do not know its origin. 
Pyrimidine also photolytically decomposes into uracil under ultraviolet light. Synthesis Pyrimidine biosynthesis creates derivatives, such as orotate, thymine, cytosine, and uracil, de novo from carbamoyl phosphate and aspartate. As is often the case with parent heterocyclic ring systems, the synthesis of pyrimidine is not that common and is usually performed by removing functional groups from derivatives. Primary syntheses in quantity involving formamide have been reported. As a class, pyrimidines are typically synthesized by the principal synthesis involving cyclization of β-dicarbonyl compounds with N–C–N compounds. Reactions of the former with amidines to give 2-substituted pyrimidines, with urea to give 2-pyrimidinones, and with guanidines to give 2-aminopyrimidines are typical. Pyrimidines can be prepared via the Biginelli reaction and other multicomponent reactions. Many other methods rely on condensation of carbonyls with diamines, for instance the synthesis of 2-thio-6-methyluracil from thiourea and ethyl acetoacetate, or the synthesis of 4-methylpyrimidine from 4,4-dimethoxy-2-butanone and formamide. A novel method is by reaction of N-vinyl and N-aryl amides with carbonitriles under electrophilic activation of the amide with 2-chloro-pyridine and trifluoromethanesulfonic anhydride. Reactions Because of the decreased basicity compared to pyridine, electrophilic substitution of pyrimidine is less facile. Protonation or alkylation typically takes place at only one of the ring nitrogen atoms. Mono-N-oxidation occurs by reaction with peracids. Electrophilic C-substitution of pyrimidine occurs at the 5-position, the least electron-deficient. Nitration, nitrosation, azo coupling, halogenation, sulfonation, formylation, hydroxymethylation, and aminomethylation have been observed with substituted pyrimidines. Nucleophilic C-substitution should be facilitated at the 2-, 4-, and 6-positions, but there are only a few examples. Amination and hydroxylation have been observed for substituted pyrimidines. Reactions with Grignard or alkyllithium reagents yield 4-alkyl- or 4-aryl pyrimidines after aromatization. Free radical attack has been observed for pyrimidine, and photochemical reactions have been observed for substituted pyrimidines. Pyrimidine can be hydrogenated to give tetrahydropyrimidine. Derivatives Nucleotides Three nucleobases found in nucleic acids, cytosine (C), thymine (T), and uracil (U), are pyrimidine derivatives. In DNA and RNA, these bases form hydrogen bonds with their complementary purines. Thus, in DNA, the purines adenine (A) and guanine (G) pair up with the pyrimidines thymine (T) and cytosine (C), respectively. In RNA, the complement of adenine (A) is uracil (U) instead of thymine (T), so the pairs that form are adenine:uracil and guanine:cytosine. Very rarely, thymine can appear in RNA, or uracil in DNA. Besides the three major pyrimidine bases, some minor pyrimidine bases can also occur in nucleic acids. These minor pyrimidines are usually methylated versions of major ones and are postulated to have regulatory functions. These hydrogen bonding modes are for classical Watson–Crick base pairing. Other hydrogen bonding modes ("wobble pairings") are available in both DNA and RNA, although the additional 2′-hydroxyl group of RNA expands the configurations through which RNA can form hydrogen bonds. 
Theoretical aspects In March 2015, NASA Ames scientists reported that, for the first time, complex DNA and RNA organic compounds of life, including uracil, cytosine and thymine, had been formed in the laboratory under outer-space conditions, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), the most carbon-rich chemical found in the universe, may have been formed in red giants or in interstellar dust and gas clouds. Prebiotic synthesis of pyrimidine nucleotides In order to understand how life arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of life under plausible prebiotic conditions. The RNA world hypothesis holds that in the primordial soup there existed free-floating ribonucleotides, the fundamental molecules that combine in series to form RNA. Complex molecules such as RNA must have emerged from relatively small molecules whose reactivity was governed by physico-chemical processes. RNA is composed of pyrimidine and purine nucleotides, both of which are necessary for reliable information transfer, and thus natural selection and Darwinian evolution. Becker et al. showed how pyrimidine nucleosides can be synthesized from small molecules and ribose, driven solely by wet-dry cycles. Purine nucleosides can be synthesized by a similar pathway. 5′-mono- and diphosphates also form selectively from phosphate-containing minerals, allowing concurrent formation of polyribonucleotides with both the pyrimidine and purine bases. Thus a reaction network towards the pyrimidine and purine RNA building blocks can be established starting from simple atmospheric or volcanic molecules. See also ANRORC mechanism Purine Pyrimidine metabolism Simple aromatic rings Transition Transversion References Biomolecules Aromatic bases Simple aromatic rings Substances discovered in the 19th century
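As a quick computational illustration of the diazine nomenclature described earlier (not from the article), the following RDKit sketch builds the three isomers from standard SMILES strings and reports where the ring nitrogens sit; note that RDKit atom indices follow SMILES atom order, not IUPAC ring numbering.

```python
# Illustrative check of the three diazine isomers with RDKit.
from rdkit import Chem

diazines = {
    "pyridazine (1,2)": "c1ccnnc1",
    "pyrimidine (1,3)": "c1cncnc1",
    "pyrazine   (1,4)": "c1cnccn1",
}
for name, smi in diazines.items():
    mol = Chem.MolFromSmiles(smi)
    n_idx = [a.GetIdx() for a in mol.GetAtoms() if a.GetSymbol() == "N"]
    aromatic = all(a.GetIsAromatic() for a in mol.GetAtoms())
    print(name, "ring nitrogens at atom indices", n_idx, "aromatic:", aromatic)
```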
Pyrimidine
Chemistry,Biology
2,151
1,318,086
https://en.wikipedia.org/wiki/Revolutions%20per%20minute
Revolutions per minute (abbreviated rpm, RPM, rev/min, r/min, or r⋅min−1) is a unit of rotational speed (or rotational frequency) for rotating machines. One revolution per minute is equivalent to 1/60 hertz. Standards ISO 80000-3:2019 defines a physical quantity called rotation (or number of revolutions), dimensionless, whose instantaneous rate of change is called rotational frequency (or rate of rotation), with units of reciprocal seconds (s−1). A related but distinct quantity for describing rotation is angular frequency (or angular speed, the magnitude of angular velocity), for which the SI unit is the radian per second (rad/s). Although they have the same dimensions (reciprocal time) and base unit (s−1), the hertz (Hz) and radians per second (rad/s) are special names used to express two different but proportional ISQ quantities: frequency and angular frequency, respectively. The conversions between a frequency f and an angular frequency ω are ω = 2πf and f = ω/(2π). Thus a disc rotating at 60 rpm is said to have an angular speed of 2π rad/s and a rotation frequency of 1 Hz. The International System of Units (SI) does not recognize rpm as a unit. It defines units of angular frequency and angular velocity as rad s−1, and units of frequency as Hz, equal to s−1. Examples For a wheel, a pump, or a crank shaft, the number of times that it completes one full cycle in one minute is given the unit revolution per minute. A revolution is one complete period of motion, whether this be circular, reciprocating or some other periodic motion. On many kinds of disc recording media, the rotational speed of the medium under the read head is a standard given in rpm. Phonograph (gramophone) records, for example, typically rotate steadily at 16⅔ rpm, 33⅓ rpm, 45 rpm or 78 rpm (0.28, 0.55, 0.75, or 1.3, respectively, in Hz). Air turbines rotating up to 1,500,000 rpm (25 kHz) Modern air turbine dental drills can rotate at over 800,000 rpm (13.3 kHz). The second hand of a conventional analog clock rotates at 1 rpm. Audio CD players read their discs at a precise, constant rate (4.3218 Mbit/s of raw physical data for 1.4112 Mbit/s (176.4 KB/s) of usable audio data) and thus must vary the disc's rotational speed from 8 Hz (480 rpm) when reading at the innermost edge to 3.5 Hz (210 rpm) at the outer edge. DVD players also usually read discs at a constant linear rate. The disc's rotational speed varies from 25.5 Hz (1530 rpm) when reading at the innermost edge, to 10.5 Hz (630 rpm) at the outer edge. A washing machine's drum may rotate at 500 rpm to (8 Hz – 46 Hz) during the spin cycles. A baseball thrown by a Major League Baseball pitcher can rotate at over 2,500 rpm (41.7 Hz); faster rotation yields more movement on breaking balls. A power-generation turbine (with a two-pole alternator) rotates at 3000 rpm (50 Hz), 3600 rpm (60 Hz), and over 4000 rpm (66.7 Hz) Modern automobile engines are typically operated around 2,000 rpm – 3,000 rpm (33 Hz – 50 Hz) when cruising, with a minimum (idle) speed around 750 rpm – 900 rpm (12.5 Hz – 15 Hz), and an upper limit anywhere from 4500 rpm up to 10,000 rpm (75 Hz – 166 Hz) for a road car, very rarely reaching up to for certain cars (such as the GMA T.50), or for racing engines such as those in Formula 1 cars (during the season, with the 2.4 L N/A V8 engine configuration; limited to , with the 1.6 L V6 turbo-hybrid engine configuration). The exhaust note of V8, V10, and V12 F1 cars has a much higher pitch than an I4 engine, because each of the cylinders of a four-stroke engine fires once for every two revolutions of the crankshaft. 
Thus an eight-cylinder engine turning 300 times per second will have an exhaust note of 1,200 Hz. A piston aircraft engine typically rotates at a rate between and (42 Hz – 166 Hz). Computer hard drives typically rotate at – (125 Hz – 166 Hz), the most common speeds for the ATA or SATA-based drives in consumer models. High-performance drives (used in fileservers and enthusiast-gaming PCs) rotate at – (160 Hz – 250 Hz), usually with higher-level SATA, SCSI or Fibre Channel interfaces and smaller platters to allow these higher speeds, the reduction in storage capacity and ultimate outer-edge speed paying off in much quicker access time and average transfer speed thanks to the high spin rate. Until recently, lower-end and power-efficient laptop drives could be found with 4,200 rpm or even 3,600 rpm spindle speeds (70 Hz or 60 Hz), but these have fallen out of favour due to their lower performance, improvements in energy efficiency in faster models and the takeup of solid-state drives for use in slimline and ultraportable laptops. Similar to CD and DVD media, the amount of data that can be stored or read for each turn of the disc is greater at the outer edge than near the spindle; however, hard drives keep a constant rotational speed so the effective data rate is faster at the edge (conventionally, the "start" of the disc, opposite to a CD or DVD). Floppy disc drives typically ran at a constant 300 rpm or occasionally 360 rpm (a relatively slow 5 Hz or 6 Hz) with a constant per-revolution data density, which was simple and inexpensive to implement, though inefficient. Some designs such as those used with older Apple computers (Lisa, early Macintosh, later II's) were more complex and used variable rotational speeds and per-track storage density (at a constant read/record rate) to store more data per disc; for example, between 394 rpm (with 12 sectors per track) and 590 rpm (8 sectors) with Mac's 800 kB double-density drive at a constant 39.4 kB/s (max) – versus 300 rpm, 720 kB and 23 kB/s (max) for double-density drives in other machines. A Zippe-type centrifuge for enriching uranium spins at () or faster. Gas turbine engines rotate at tens of thousands of rpm. JetCat model aircraft turbines are capable of over () with the fastest reaching (). A Flywheel energy storage system works in the – (1 kHz – 8.3 kHz) range using a passively magnetically levitated flywheel in a vacuum. The choice of the flywheel material is not the most dense, but the one that pulverises the most safely, at surface speeds about 7 times the speed of sound. A typical 80 mm, 30 CFM computer fan will spin at – (43 Hz – 50 Hz) on 12 V DC power. A millisecond pulsar can have a rotation rate near 50,000 rpm (833 Hz). A turbocharger can reach (16.6 kHz), while 60,000 rpm – 180,000 rpm (1 kHz – 3 kHz) is common. A supercharger can spin at speeds as high as 50,000 rpm – 100,000 rpm (833 Hz – 1,666 Hz). Molecular microbiology – molecular engines. The rotation rates of bacterial flagella have been measured to be 10,200 rpm (170 Hz) for Salmonella typhimurium, 16,200 rpm (270 Hz) for Escherichia coli, and up to () for the polar flagellum of Vibrio alginolyticus, allowing the latter organism to move in simulated natural conditions at a maximum speed of 540 mm/h. See also Constant angular velocity (CAV) – used when referring to the speed of gramophone (phonograph) records Constant linear velocity (CLV) – used when referring to the speed of audio CDs Radian per second Rotational speed Compressor map Turn (geometry) Idle speed Overspeed (engine) Redline Rev limiter RPM gauge Notes References Units of frequency Rotation
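The conversions discussed above reduce to two one-line formulas; the following small Python sketch (plain arithmetic, nothing assumed beyond the definitions given) makes them concrete:

```python
# rpm <-> Hz <-> rad/s conversions, per the definitions above.
import math

def rpm_to_hz(rpm: float) -> float:
    return rpm / 60.0                 # 1 rpm = 1/60 Hz

def rpm_to_rad_s(rpm: float) -> float:
    return rpm * 2.0 * math.pi / 60.0 # omega = 2*pi*f

print(rpm_to_hz(60), rpm_to_rad_s(60))  # 1.0 Hz, 6.283... rad/s
print(rpm_to_hz(7200))                  # a 7200 rpm hard drive: 120 Hz
```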
Revolutions per minute
Physics,Mathematics
1,678
67,944,516
https://en.wikipedia.org/wiki/Physics-informed%20neural%20networks
Physics-informed neural networks (PINNs), also referred to as Theory-Trained Neural Networks (TTNs), are a type of universal function approximator that can embed the knowledge of any physical laws that govern a given data set in the learning process; such laws can typically be described by partial differential equations (PDEs). Low data availability for some biological and engineering problems limits the robustness of conventional machine learning models used for these applications. The prior knowledge of general physical laws acts in the training of neural networks (NNs) as a regularization agent that limits the space of admissible solutions, increasing the generalizability of the function approximation. This way, embedding this prior information into a neural network results in enhancing the information content of the available data, facilitating the learning algorithm to capture the right solution and to generalize well even with a low amount of training examples. Function approximation Most of the physical laws that govern the dynamics of a system can be described by partial differential equations. For example, the Navier–Stokes equations are a set of partial differential equations derived from the conservation laws (i.e., conservation of mass, momentum, and energy) that govern fluid mechanics. The solution of the Navier–Stokes equations with appropriate initial and boundary conditions allows the quantification of flow dynamics in a precisely defined geometry. However, these equations cannot be solved exactly, and therefore numerical methods must be used (such as finite differences, finite elements and finite volumes). In this setting, these governing equations must be solved while accounting for prior assumptions, linearization, and adequate time and space discretization. Recently, solving the governing partial differential equations of physical phenomena using deep learning has emerged as a new field of scientific machine learning (SciML), leveraging the universal approximation theorem and high expressivity of neural networks. In general, deep neural networks could approximate any high-dimensional function given that sufficient training data are supplied. However, such networks do not consider the physical characteristics underlying the problem, and the level of approximation accuracy provided by them is still heavily dependent on careful specifications of the problem geometry as well as the initial and boundary conditions. Without this preliminary information, the solution is not unique and may lose physical correctness. On the other hand, physics-informed neural networks (PINNs) leverage governing physical equations in neural network training. Namely, PINNs are designed to be trained to satisfy the given training data as well as the imposed governing equations. In this fashion, a neural network can be guided with training data that do not necessarily need to be large and complete. Potentially, an accurate solution of partial differential equations can be found without knowing the boundary conditions. Therefore, with some knowledge about the physical characteristics of the problem and some form of training data (even sparse and incomplete), PINN may be used for finding an optimal solution with high fidelity. PINNs allow for addressing a wide range of problems in computational science and represent a pioneering technology leading to the development of new classes of numerical solvers for PDEs. 
PINNs can be thought of as a meshfree alternative to traditional approaches (e.g., CFD for fluid dynamics), and new data-driven approaches for model inversion and system identification. Notably, the trained PINN network can be used for predicting the values on simulation grids of different resolutions without the need to be retrained. In addition, they allow for exploiting automatic differentiation (AD) to compute the required derivatives in the partial differential equations, a new class of differentiation techniques widely used to derive neural networks assessed to be superior to numerical or symbolic differentiation. Modeling and computation A general nonlinear partial differential equation can be written as: u_t + N[u; λ] = 0, x ∈ Ω, t ∈ [0, T], where u(t, x) denotes the solution, N[·; λ] is a nonlinear operator parameterized by λ, and Ω is a subset of ℝ^D. This general form of governing equations summarizes a wide range of problems in mathematical physics, such as conservative laws, diffusion process, advection-diffusion systems, and kinetic equations. Given noisy measurements of a generic dynamic system described by the equation above, PINNs can be designed to solve two classes of problems: data-driven solution; data-driven discovery of partial differential equations. Data-driven solution of partial differential equations The data-driven solution of PDE computes the hidden state u(t, x) of the system given boundary data and/or measurements z, and fixed model parameters λ. We solve: u_t + N[u] = 0, x ∈ Ω, t ∈ [0, T]. By defining the residual f(t, x) as f := u_t + N[u], and approximating u(t, x) with a deep neural network, f(t, x) becomes a physics-informed neural network. This network can be differentiated using automatic differentiation. The parameters of u(t, x) and f(t, x) can then be learned by minimizing the following loss function L_tot: L_tot = L_u + L_f, where L_u is the error between the PINN u(t, x) and the set of boundary conditions and measured data on the set of points where the boundary conditions and data are defined, and L_f is the mean-squared error of the residual function f. This second term encourages the PINN to learn the structural information expressed by the partial differential equation during the training process. This approach has been used to yield computationally efficient physics-informed surrogate models with applications in the forecasting of physical processes, model predictive control, multi-physics and multi-scale modeling, and simulation. It has been shown to converge to the solution of the PDE. Data-driven discovery of partial differential equations Given noisy and incomplete measurements z of the state of the system, the data-driven discovery of PDE results in computing the unknown state u(t, x) and learning the model parameters λ that best describe the observed data, and it reads as follows: u_t + N[u; λ] = 0, x ∈ Ω, t ∈ [0, T]. By defining f(t, x) as f := u_t + N[u; λ], and approximating u(t, x) with a deep neural network, f(t, x) results in a PINN. This network can be derived using automatic differentiation. The parameters of u(t, x) and f(t, x), together with the parameter λ of the differential operator, can then be learned by minimizing the following loss function L_tot: L_tot = L_u + L_f, where L_u = ||u − z||, with u and z denoting state solutions and measurements at sparse locations, respectively, and L_f is the mean-squared error of the residual function. This second term requires the structured information represented by the partial differential equations to be satisfied in the training process. This strategy allows for discovering dynamic models described by nonlinear PDEs assembling computationally efficient and fully differentiable surrogate models that may find application in predictive forecasting, control, and data assimilation. 
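To make the data-driven-solution loss L_tot = L_u + L_f concrete, here is a minimal PyTorch sketch (an illustrative toy, not from the article): a small network is trained so that the boundary term and the PDE residual term of the simple 1D boundary-value problem u''(x) = −π² sin(πx), u(0) = u(1) = 0 are jointly minimized; the exact solution sin(πx) is known, so the result is easy to check. The network size, optimizer settings, and collocation sampling are arbitrary choices.

```python
# Minimal PINN: solve u''(x) = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0.
import math
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def residual(x):
    """PDE residual f = u'' + pi^2 sin(pi x), via automatic differentiation."""
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u + math.pi ** 2 * torch.sin(math.pi * x)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xb = torch.tensor([[0.0], [1.0]])            # boundary points
for step in range(5000):
    xc = torch.rand(128, 1)                  # collocation points in (0, 1)
    loss_f = residual(xc).pow(2).mean()      # residual term L_f
    loss_u = net(xb).pow(2).mean()           # boundary term L_u (u = 0 at ends)
    loss = loss_u + loss_f
    opt.zero_grad(); loss.backward(); opt.step()

x = torch.linspace(0, 1, 5).unsqueeze(1)
print(net(x).detach().squeeze())             # approx. sin(pi x) at the grid
```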
Physics-informed neural networks for piece-wise function approximation PINNs are unable to approximate PDEs that have strong non-linearity or sharp gradients, which commonly occur in practical fluid flow problems. Piece-wise approximation is a long-standing practice in the field of numerical approximation. With their capability of approximating strong non-linearity, extremely lightweight PINNs can be used to solve PDEs over many discrete subdomains, which increases accuracy substantially and decreases computational load as well. DPINN (Distributed physics-informed neural networks) and DPIELM (Distributed physics-informed extreme learning machines) are generalizable space-time domain discretizations for better approximation. DPIELM is an extremely fast and lightweight approximator with competitive accuracy. On top of this, scaling of the domain has an additional effect. Another school of thought is discretization for parallel computation, to leverage usage of available computational resources. XPINNs is a generalized space-time domain decomposition approach for physics-informed neural networks (PINNs) to solve nonlinear partial differential equations on arbitrary complex-geometry domains. XPINN further pushes the boundaries of both PINNs as well as Conservative PINNs (cPINNs), which is a spatial domain decomposition approach in the PINN framework tailored to conservation laws. Compared to PINN, the XPINN method has large representation and parallelization capacity due to the inherent property of deployment of multiple neural networks in the smaller subdomains. Unlike cPINN, XPINN can be extended to any type of PDEs. Moreover, the domain can be decomposed in any arbitrary way (in space and time), which is not possible in cPINN. Thus, XPINN offers both space and time parallelization, thereby reducing the training cost more effectively. XPINN is particularly effective for large-scale problems (involving large data sets) as well as for high-dimensional problems where single-network-based PINN is not adequate. Rigorous bounds on the errors resulting from the approximation of nonlinear PDEs (incompressible Navier–Stokes equations) with PINNs and XPINNs have been proved. However, DPINN debunks the use of residual (flux) matching at the domain interfaces, as it hardly seems to improve the optimization. Physics-informed neural networks and theory of functional connections In the PINN framework, initial and boundary conditions are not analytically satisfied; thus they need to be included in the loss function of the network to be simultaneously learned with the differential equation (DE) unknown functions. Having competing objectives during the network's training can lead to unbalanced gradients while using gradient-based techniques, which causes PINNs to often struggle to accurately learn the underlying DE solution. This drawback is overcome by using functional interpolation techniques such as the Theory of functional connections (TFC)'s constrained expression, in the Deep-TFC framework, which reduces the solution search space of constrained problems to the subspace of neural networks that analytically satisfies the constraints. A further improvement of PINN and the functional interpolation approach is given by the Extreme Theory of Functional Connections (X-TFC) framework, where a single-layer neural network and the extreme learning machine training algorithm are employed. 
X-TFC improves the accuracy and performance of regular PINNs, and its robustness and reliability have been demonstrated for stiff problems, optimal control, aerospace, and rarefied gas dynamics applications. Physics-informed PointNet (PIPN) for multiple sets of irregular geometries Regular PINNs are only able to obtain the solution of a forward or inverse problem on a single geometry. This means that for any new geometry (computational domain), one must retrain the PINN. This limitation of regular PINNs imposes high computational costs, specifically for a comprehensive investigation of geometric parameters in industrial designs. Physics-informed PointNet (PIPN) is fundamentally the result of a combination of PINN's loss function with PointNet. In fact, instead of using a simple fully connected neural network, PIPN uses PointNet as the core of its neural network. PointNet was primarily designed for deep learning of 3D object classification and segmentation by the research group of Leonidas J. Guibas. PointNet extracts geometric features of input computational domains in PIPN. Thus, PIPN is able to solve governing equations on multiple computational domains (rather than only a single domain) with irregular geometries, simultaneously. The effectiveness of PIPN has been shown for incompressible flow, heat transfer, and linear elasticity. Physics-informed neural networks (PINNs) for inverse computations Physics-informed neural networks (PINNs) have proven particularly effective in solving inverse problems within differential equations, demonstrating their applicability across science, engineering, and economics. They have proven useful for solving inverse problems in a variety of fields, including nano-optics, topology optimization/characterization, multiphase flow in porous media, and high-speed fluid flow. PINNs have demonstrated flexibility when dealing with noisy and uncertain observation datasets. They have also demonstrated clear advantages in the inverse calculation of parameters for multi-fidelity datasets, meaning datasets with different quality, quantity, and types of observations. Uncertainties in calculations can be evaluated using ensemble-based or Bayesian-based calculations. Physics-informed neural networks for elasticity problems Surrogate networks are intended for the unknown functions, namely, the components of the strain and the stress tensors as well as the unknown displacement field, respectively. The residual network provides the residuals of the partial differential equations (PDEs) and of the boundary conditions. The computational approach is based on principles of artificial intelligence. Physics-informed neural networks (PINNs) with backward stochastic differential equation The deep backward stochastic differential equation method is a numerical method that combines deep learning with backward stochastic differential equations (BSDEs) to solve high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods like finite difference methods or Monte Carlo simulations, which struggle with the curse of dimensionality. Deep BSDE methods use neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden. 
Additionally, integrating Physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws into the neural network architecture, ensuring solutions adhere to governing stochastic differential equations, resulting in more accurate and reliable solutions. Physics-informed neural networks for biology An extension or adaptation of PINNs are Biologically-informed neural networks (BINNs). BINNs introduce two key adaptations to the typical PINN framework: (i) the mechanistic terms of the governing PDE are replaced by neural networks, and (ii) the loss function is modified to include L_constraint, a term used to incorporate domain-specific knowledge that helps enforce biological applicability. For (i), this adaptation has the advantage of relaxing the need to specify the governing differential equation a priori, either explicitly or by using a library of candidate terms. Additionally, this approach circumvents the potential issue of misspecifying regularization terms in stricter theory-informed cases. A natural example of BINNs can be found in cell dynamics, where the cell density u is governed by a reaction-diffusion equation with diffusion and growth functions D(u) and G(u), respectively: u_t = (D(u) u_x)_x + G(u) u. In this case, a component of L_constraint could be a penalty that punishes values of D(u) falling outside a biologically relevant diffusion range defined by D_min ≤ D(u) ≤ D_max. Furthermore, the BINN architecture, when utilizing multilayer perceptrons (MLPs), would function as follows: an MLP is used to construct a surrogate u_MLP from model inputs (x, t), serving as a surrogate model for the cell density u. This surrogate is then fed into two additional MLPs, D_MLP and G_MLP, which model the diffusion and growth functions. Automatic differentiation can then be applied to compute the necessary derivatives of u_MLP, D_MLP, and G_MLP to form the governing reaction-diffusion equation. Note that since u_MLP is a surrogate for the cell density, it may contain errors, particularly in regions where the PDE is not fully satisfied. Therefore, the reaction-diffusion equation may be solved numerically, for instance using a method-of-lines approach. Limitations Translation and discontinuous behavior are hard to approximate using PINNs. They fail when solving differential equations with even slight advective dominance, whose asymptotic behaviour causes the method to fail. Such PDEs could be solved by scaling variables. This difficulty in training PINNs on advection-dominated PDEs can be explained by the Kolmogorov n–width of the solution. They also fail to solve systems of dynamical equations and hence have not been a success in solving chaotic equations. One of the reasons behind the failure of regular PINNs is the soft-constraining of Dirichlet and Neumann boundary conditions, which poses a multi-objective optimization problem that requires manually weighing the loss terms to be able to optimize. More generally, posing the solution of a PDE as an optimization problem brings with it all the problems that are faced in the world of optimization, the major one being getting stuck in local optima. References External links Physics Informed Neural Network PINN – repository to implement physics-informed neural network in Python XPINN – repository to implement extended physics-informed neural network (XPINN) in Python PIPN – repository to implement physics-informed PointNet (PIPN) in Python Differential equations Deep learning
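A sketch of how the BINN residual described above might be assembled in PyTorch follows. This is illustrative only: the three MLPs, the 1D form u_t = (D(u) u_x)_x + G(u) u, the Softplus output (which keeps D and G positive), and the [D_min, D_max] penalty are all assumptions made for the example, not the published BINN implementation.

```python
# Illustrative BINN-style residual with a biological-range constraint on D(u).
import torch

def mlp(n_in, n_out):
    return torch.nn.Sequential(
        torch.nn.Linear(n_in, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, n_out), torch.nn.Softplus(),  # positive outputs
    )

u_net, D_net, G_net = mlp(2, 1), mlp(1, 1), mlp(1, 1)

def binn_loss(x, t, D_min=0.0, D_max=1.0):
    x = x.requires_grad_(True); t = t.requires_grad_(True)
    u = u_net(torch.cat([x, t], dim=1))       # surrogate cell density u(x, t)
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    D, G = D_net(u), G_net(u)                 # learned mechanistic terms
    flux = D * u_x
    flux_x = torch.autograd.grad(flux, x, torch.ones_like(flux),
                                 create_graph=True)[0]
    pde = (u_t - flux_x - G * u).pow(2).mean()            # PDE residual
    # L_constraint: penalize D(u) outside the assumed biological range.
    constraint = ((D - D_max).clamp(min=0) +
                  (D_min - D).clamp(min=0)).pow(2).mean()
    return pde + constraint
```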
Physics-informed neural networks
Mathematics
3,172
61,774,860
https://en.wikipedia.org/wiki/Sony%20CLI%C3%89%20NR%20Series
The Clié NR was a series of handheld personal digital assistants (PDAs) made by Sony, announced in March 2002. These devices were distinctive due to a folding "Flip-and-Rotate" clamshell design with a vertically rotatable screen. Models PEG-NR70 The Clié PEG-NR70 was a personal digital assistant (PDA) made by Sony. The device ran Palm OS (version 4.1) and featured a color display, a thumb-sized keyboard and MP3/ATRAC3 playback with a built-in speaker, features which were uncommon among other PDAs of its time. Specifications Palm OS: 4.1 CPU: Motorola 66 MHz MC68SZ328 Memory: 16 MB DRAM Display: 320 × 480, 16-bit color Sound: Internal audio amplifier and speaker, headphone out External Connectors: USB Expansion: Memory Stick Wireless: Infrared Battery: Rechargeable Li-Ion Size & Weight: 7 oz Colour: Silver PEG-NR70V Otherwise identical to the NR70, the NR70V added a 0.1 MP (320 × 240 pixel) still camera to the device. See also Sony CLIÉ NX Series: The NX series succeeded the NR series. Nokia N93, a cellphone with a similar form factor References External links Detailed Specifications of the PEG-NR70 SMUP Review of the PEG-NR70 Sony CLIÉ
Sony CLIÉ NR Series
Technology
289
4,665,840
https://en.wikipedia.org/wiki/Reliability-centered%20maintenance
Reliability-centered maintenance (RCM) is a concept of maintenance planning to ensure that systems continue to do what their users require in their present operating context. Successful implementation of RCM leads to increases in cost effectiveness, reliability, machine uptime, and a greater understanding of the level of risk that the organization is managing. Context It is generally used to achieve improvements in fields such as the establishment of safe minimum levels of maintenance, changes to operating procedures and strategies, and the establishment of capital maintenance regimes and plans. John Moubray characterized RCM as a process to establish the safe minimum levels of maintenance. This description echoed statements in the Nowlan and Heap report from United Airlines. RCM is defined by the technical standard SAE JA1011, Evaluation Criteria for RCM Processes, which sets out the minimum criteria that any process should meet before it can be called RCM. The process starts with the seven questions below, worked through in the order that they are listed: 1. What is the item supposed to do, and what are its associated performance standards? 2. In what ways can it fail to provide the required functions? 3. What are the events that cause each failure? 4. What happens when each failure occurs? 5. In what way does each failure matter? 6. What systematic task can be performed proactively to prevent, or to diminish to a satisfactory degree, the consequences of the failure? 7. What must be done if a suitable preventive task cannot be found? Reliability-centered maintenance is an engineering framework that enables the definition of a complete maintenance regimen. It regards maintenance as the means to maintain the functions a user may require of machinery in a defined operating context. As a discipline it enables machinery stakeholders to monitor, assess, predict and generally understand the working of their physical assets. This is embodied in the initial part of the RCM process, which is to identify the operating context of the machinery and write a Failure Mode Effects and Criticality Analysis (FMECA). The second part of the analysis is to apply the "RCM logic", which helps determine the appropriate maintenance tasks for the failure modes identified in the FMECA. Once the logic is complete for all elements in the FMECA, the resulting list of maintenance tasks is "packaged", so that the periodicities of the tasks are rationalised to be called up in work packages; it is important not to destroy the applicability of maintenance in this phase. Lastly, RCM is kept live throughout the "in-service" life of machinery, where the effectiveness of the maintenance is kept under constant review and adjusted in light of the experience gained. RCM can be used to create a cost-effective maintenance strategy to address dominant causes of equipment failure. It is a systematic approach to defining a routine maintenance program composed of cost-effective tasks that preserve important functions. The important functions (of a piece of equipment) to preserve with routine maintenance are identified, their dominant failure modes and causes determined, and the consequences of failure ascertained. Levels of criticality are assigned to the consequences of failure.
Some functions are not critical and are left to "run to failure", while other functions must be preserved at all costs. Maintenance tasks are selected that address the dominant failure causes. This process directly addresses maintenance-preventable failures. Failures caused by unlikely events, non-predictable acts of nature, etc. will usually receive no action provided their risk (combination of severity and frequency) is trivial (or at least tolerable). When the risk of such failures is very high, RCM encourages (and sometimes mandates) the user to consider changing something which will reduce the risk to a tolerable level. The result is a maintenance program that focuses scarce economic resources on those items that would cause the most disruption if they were to fail. RCM emphasizes the use of predictive maintenance (PdM) techniques in addition to traditional preventive measures. Background The term "reliability-centered maintenance" was coined by Tom Matteson, Stanley Nowlan and Howard Heap of United Airlines (UAL) to describe a process used to determine the optimum maintenance requirements for aircraft (having left United Airlines to pursue a consulting career a few months before the publication of the final Nowlan-Heap report, Matteson received no authorial credit for the work). The US Department of Defense (DOD) sponsored the authoring of both a textbook (by UAL) and an evaluation report (by Rand Corporation) on reliability-centered maintenance, both published in 1978. They brought RCM concepts to the attention of a wider audience. The first generation of jet aircraft had a crash rate that would be considered highly alarming today, and both the Federal Aviation Administration (FAA) and the airlines' senior management felt strong pressure to improve matters. In the early 1960s, with FAA approval, the airlines began to conduct a series of intensive engineering studies on in-service aircraft. The studies proved that the fundamental assumption of design engineers and maintenance planners—that every aircraft and every major component thereof (such as its engines) had a specific "lifetime" of reliable service, after which it had to be replaced (or overhauled) in order to prevent failures—was wrong in nearly every specific example in a complex modern jet airliner. This was one of many discoveries that have revolutionized the managerial discipline of physical asset management and have formed the basis of many developments since this seminal work was published. Among the paradigm shifts inspired by RCM were: an understanding that the vast majority of failures are not necessarily linked to the age of the asset; a change from efforts to predict life expectancies to trying to manage the process of failure; an understanding of the difference between the requirements of assets from a user perspective and the design reliability of the asset; an understanding of the importance of managing assets on condition (often referred to as condition monitoring, condition-based maintenance and predictive maintenance); an understanding of four basic routine maintenance tasks; and the linking of levels of tolerable risk to maintenance strategy development. Later RCM was defined in the standard SAE JA1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. This sets out the minimum criteria for what is, and for what is not, able to be defined as RCM. The standard is a watershed event in the ongoing evolution of the discipline of physical asset management.
Prior to the development of the standard, many processes were labeled as RCM even though they were not true to the intentions and the principles in the original report that defined the term publicly. Basic features The RCM process described in the DOD/UAL report recognized three principal risks from equipment failures: threats to safety, to operations, and to the maintenance budget. Modern RCM gives threats to the environment a separate classification, though most forms manage them in the same way as threats to safety. RCM offers five principal options among the risk management strategies: predictive maintenance tasks, preventive restoration or preventive replacement maintenance tasks, detective maintenance tasks, run-to-failure, and one-time changes to the "system" (changes to hardware design, to operations, or to other things). RCM also offers specific criteria to use when selecting a risk management strategy for a system that presents a specific risk when it fails. Some are technical in nature (can the proposed task detect the condition it needs to detect? does the equipment actually wear out with use?). Others are goal-oriented (is it reasonably likely that the proposed task and task frequency will reduce the risk to a tolerable level?). The criteria are often presented in the form of a decision-logic diagram (a simplified sketch of such logic appears below), though this is not intrinsic to the nature of the process. In use After being created by the commercial aviation industry, RCM was adopted by the U.S. military (beginning in the mid-1970s) and by the U.S. commercial nuclear power industry (in the 1980s). Starting in the late 1980s, an independent initiative led by John Moubray corrected some early flaws in the process and adapted it for use in the wider industry. Moubray was also responsible for popularizing the method and for introducing it to much of the industrial community outside of the aviation industry. In the two decades since this approach (called by the author RCM2) was first released, industry has undergone massive change with advances in lean thinking and efficiency methods. During this time many methods sprang up that sought to reduce the rigour of the RCM approach. The result was the propagation of methods that called themselves RCM yet had little in common with the original concepts. In some cases these were misleading and inefficient, while in other cases they were even dangerous. Since each initiative is sponsored by one or more consulting firms eager to help clients use it, there is still considerable disagreement about their relative dangers (or merits). The RCM standard (SAE JA1011, available from http://www.sae.org) provides the minimum criteria that processes must comply with if they are to be called RCM. Although a voluntary standard, it provides a reference for companies looking to implement RCM, to ensure they are getting a process, software package or service that is in line with the original report. The Walt Disney Company introduced RCM to its parks in 1997, led by Paul Pressler and consultants McKinsey & Company, laying off a large number of maintenance workers and saving large amounts of money. Some people blamed the new cost-conscious maintenance culture for some of the Incidents at Disneyland Resort that occurred in the following years.
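As a rough illustration of how such decision logic can be encoded, consider the following toy sketch in Python. The field names, the ordering of the checks and the fall-back rules are simplifying assumptions made for illustration; this is not the SAE JA1011 decision diagram itself, which asks more questions and treats each consequence category in more detail.

from dataclasses import dataclass

# Toy sketch only: a deliberately simplified rendering of the kind of
# task-selection logic described above. All fields are hypothetical.
@dataclass
class FailureMode:
    hidden: bool                    # failure not evident to operators in normal use
    safety_or_environmental: bool   # consequence category
    on_condition_feasible: bool     # a detectable potential-failure condition exists
    age_related: bool               # probability of failure rises with age or usage
    proactive_task_effective: bool  # a proactive task reduces risk to a tolerable level

def select_task(fm: FailureMode) -> str:
    if fm.on_condition_feasible and fm.proactive_task_effective:
        return "predictive (on-condition) maintenance task"
    if fm.age_related and fm.proactive_task_effective:
        return "preventive restoration or preventive replacement task"
    if fm.hidden:
        return "detective maintenance (failure-finding) task"
    if fm.safety_or_environmental:
        return "one-time change (redesign, operational or procedural change)"
    return "no scheduled maintenance (run to failure)"

# Example: an age-related failure with a tolerable, non-safety consequence
print(select_task(FailureMode(False, False, False, True, True)))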
See also Maintenance RAMS Notes References Further reading Standard To Define RCM (Part 1), Dana Netherton, Maintenance Technology (1998) Standard To Define RCM (Part 2), Dana Netherton, Maintenance Technology (1998) Standard RCM Process Requirements, Jesús R. Sifonte, Conscious Reliability (2017) What about RCM-R®? How does it stand when compared with SAE JA1011?, Jesús R. Sifonte, Conscious Reliability (2017) Reliability Centered Maintenance: 9 Principles of a Modern Preventive Maintenance Program, Erik Hupje, Reliability Academy (2020) Maintenance Reliability engineering
Reliability-centered maintenance
Engineering
2,096
1,543,032
https://en.wikipedia.org/wiki/DIY%20audio
DIY audio means "do-it-yourself audio": rather than buying a piece of possibly expensive audio equipment, such as a high-end audio amplifier or speaker, the person practicing DIY audio makes it themselves. Alternatively, a DIYer may take an existing manufactured item of vintage era and update or modify it. The benefits of doing so include the satisfaction of creating something enjoyable, the possibility that the equipment made or updated is of higher quality than commercially available products, and the pleasure of creating a custom-made device for which no exact equivalent is marketed. Other motivations for DIY audio can include getting audio components at a lower cost, the entertainment of using the item, and being able to ensure quality of workmanship. History Audio DIY came to prominence in the 1950s and 1960s, when audio reproduction was relatively new and the technology complex. Audio reproduction equipment, and in particular high-performance equipment, was not generally offered at the retail level. Kits and designs were available for consumers to build their own equipment. Famous vacuum tube kits from Dynaco, Heathkit, and McIntosh, as well as solid-state (transistor) kits from Hafler, allowed consumers to build their own high-fidelity systems. Books and magazines were published which explained new concepts regarding the design and operation of vacuum tube and (later) transistor circuits. While audio equipment has become easily accessible in the current day and age, there still exists an interest in building and repairing one's own equipment, including, but not limited to, preamplifiers, amplifiers, speakers, cables, CD players and turntables. Today, a network of companies, parts vendors, and online communities exists to foster this interest. DIY is especially active in loudspeakers and in tube amplification. Both are relatively simple to design and fabricate without access to sophisticated industrial equipment. Both enable the builder to pick and choose among available parts on matters of price as well as quality, allow for extensive experimentation, and offer the chance to use exotic or highly labor-intensive solutions which would be expensive for a manufacturer to implement but only require personal labor by the DIYer, which is a source of satisfaction to them. Construction issues Since the 1960s, integrated circuits have made construction of DIY audio systems easier, but the proliferation of surface-mount components (which are small and sometimes difficult to solder with a soldering iron) and fine-pitch printed circuit boards (PCBs) can make the physical act of construction more difficult. Nevertheless, surface mounting is often used, as are conventional PCBs and electronic components, while some enthusiasts insist on using old-style perforated board onto which individual components are hardwired and soldered. Test equipment is readily available for purchase and enables convenient testing of parts and systems. Specifications of parts and components are readily accessible through the Internet, including data sheets and equipment designs. It has become easier to make audio components from scratch rather than from kits due to the availability of CAD software for printed circuit board (PCB) layouts and electronic circuit simulation. Such software can be free, and a trial version may also be used. PCB vendors are more accessible than ever, and can manufacture PCBs in small quantities for the do-it-yourselfer.
In fact, kits and chemicals for manufacturing one's own PCBs can be obtained. Electronic parts and components are accessible online or in speciality shops, and various high-end parts vendors exist. On the other hand, a wide variety of kits, designs and premanufactured PCBs are available for almost any type of audio component. Constructing a device takes more than knowledge of circuits; many would argue that the mechanical work on cabinets, cases and chassis is the most time-consuming aspect of audio DIY. Drilling, metalworking and physical measurements are critical to constructing almost any DIY audio project, especially speakers. Measuring equipment such as a Vernier caliper is often essential. Woodworking skills are required to construct wooden enclosures (e.g. for speakers), with some enthusiasts going beyond traditional woodworking to CNC turning, and luxurious veneers and lacquers. Room acoustics solutions are also popular among DIYers, as they can be made with inexpensive and readily available insulating materials, and can be dimensioned to fit each particular room in a precise and aesthetically pleasing way. Many DIY audio people consider themselves audiophiles and use rare and expensive parts and components in their projects. Examples are the use of silver wire, expensive capacitors, non-standard solders of various alloys, and parts that have been cryogenically cooled. Vacuum tube or valve projects are common in audio DIY. While, for mass-market audio components, the vacuum tube has been replaced in modern times by the transistor and the IC, the vacuum tube remains prominent in specialty high-end audio equipment. Thus, interest exists in building components using vacuum tubes, and the vacuum tube is still widely available. A wide variety of tubes is manufactured nowadays, and many tubes on the market are advertised as NOS (new old stock), though not all of them are genuinely NOS. Circuits utilizing tubes are often far less complicated than those utilizing transistors or op-amps. Tube enthusiasts often use transformers, sometimes custom-made ones, or even hand-wind their own transformers using cores and wire of their own choice. Note that vacuum tube projects almost always use dangerously high voltages and should be undertaken with due care. If lead-containing solder is used instead of RoHS-compliant solder, appropriate environmental precautions with regard to lead and lead products should be taken. Tweaking and tweakers DIY audio can also involve tweaking of mass-market components. It is thought that mass-market audio components are compromised by the use of cheap or inferior internal parts that can be easily replaced with high-quality substitutes. As a result, an audio component of improved characteristics may be obtained for relatively low cost. Some common changes include replacing op-amps, replacing capacitors (recapping), or even replacing resistors in order to increase the signal-to-noise ratio. Changing an audio component in this way is similar to what a tweaker or modder does with a personal computer. Circuit bending Circuit bending is the creative customization of the circuits within electronic devices such as low-voltage, battery-powered guitar effects and small digital synthesizers to create new musical or visual instruments and sound generators.
Emphasizing spontaneity and randomness, the techniques of circuit bending have been commonly associated with noise music, though many more conventional contemporary musicians and musical groups have been known to experiment with bent instruments. Circuit bending usually involves dismantling the machine and adding components such as switches and potentiometers that alter the circuit. Cloning and cloners Another common practice in the DIY audio community is to clone, or copy, a pre-existing design or component from a commercial manufacturer. This involves obtaining a lawfully public version of the circuit schematics for the design (or lawfully reverse engineering them), and possibly the publicly available PCB layouts. Such a clone will not be a perfect copy, since different brands and types of parts (often newer parts) will be used, and mechanical aspects of construction will likely differ. However, the circuit and other distinguishing features should be close to the original. There are many reasons for wanting to recreate an existing design. The design might be historically important and/or out of production, so the only way to obtain the component is to build it. The design might be very simple, so copying it is easily done. The commercial product might be very expensive but its design known, so it may be built for far less than it costs to buy. The original design may have some sentimental value to the person building the recreation, and be rebuilt for the memories attached to it. The copy may be made to test or evaluate design concepts or principles in the original. As an example, well-known clones include amplifiers using high-power integrated circuits, such as the National Semiconductor LM3875 and LM3886. The use of a high-power IC as part of a quality audio amplifier was popularized by the 47 Labs Gaincard amplifier, and thus DIY amplifiers using power ICs are often called chipamps or Gainclones. Usually cloning additionally involves improving or tweaking (see above) the original design, potentially by using more modern components (in the case of discontinued designs), higher-quality parts, or a more efficient board layout. Operational amplifier swapping Operational amplifier (op-amp) swapping is the process of replacing an operational amplifier in audio equipment with a different one, in an attempt to improve performance or change the perceived sound quality. Op-amps are used in most audio devices, and most op-amps have the same pinouts, making replacement fairly simple. If the new device's parameters do not match the circuit, however, problems such as high-frequency oscillation can result. References External links DIY audio wiki Audio electronics Do it yourself
DIY audio
Engineering
1,857
27,921,219
https://en.wikipedia.org/wiki/Fort%20de%20Maulde
The Fort de Maulde, also known as Fort de Beurnonville and the Ensemble de Maulde, is located to the south of Maulde, France. It is part of the Fortified Sector of the Escaut, an extension of the Maginot Line. The Séré de Rivières system fort was built in 1881–1884. In 1936–1937 the old fort, commanding high ground behind Maulde, was altered for more modern fortifications. It was evacuated by its garrison in 1940 during the Battle of France as part of the French retreat from the border with Belgium. The site at the top of the Mont de Ligne dominates the plains of the Scarpe and the Escaut, occupying the site of an old military encampment used in 1792 by General Dumouriez against the Austrians. It lies close to the Belgian border. The fort is named for Pierre de Ruel, marquis de Beurnonville, a Marshal of France. Description The original fort is typical of the Séré de Rivières system, with a low wall, surrounded by a ditch, which is in turn defended by two caponiers. The roof of the single-level barracks is concreted and supports an artillery platform, or cavalier. A relatively small fort, it was disarmed in 1912, then rearmed in 1914 with 90 mm guns. In the 1930s the fort was chosen as a site for fortifications associated with the Maginot Line extension around Valenciennes, part of the "New Fronts" program. An observation post and two casemates were built in 1936–1937 within the walls of the fort. The casemates, facing east and west, were each furnished with two 75 mm guns and two automatic rifle ports. To the south, just outside the walls, a north-facing casemate for a 155 mm field gun was built, aligned to cover the bridges at Tournai. None of the casemates or the observatory were connected by underground passages in the manner of a fully developed Maginot fortification. A short distance to the north, a chain of casemates covered an anti-tank ditch. The position is described as an "ensemble" rather than an ouvrage, as a true Maginot fortification would be termed, due to the absence of connecting underground galleries and support facilities. Nevertheless, with four 75 mm guns and a 155 mm gun, the position was heavily armed. Maulde was not constructed by CORF, the Maginot Line's design and construction agency, but by the Army Engineer Service (STG), further removing it from the main Maginot Line; five artillery casemates were planned but only three were built. In 1939 work was undertaken to link the observation post and the 75 mm and 155 mm casemates by underground passages in a manner similar to Maginot works. The excavation was not completed before the German offensive of 1940. Casemates and positions The casemates are named for their relation to the town of Maulde, rather than to the fort.
Casemate de 75 du Fort de Maulde Ouest: two 75 mm gun embrasures and two automatic rifle embrasures (inside the fort) Casemate de 75 du Fort de Maulde Est: two 75 mm gun embrasures and two automatic rifle embrasures (inside the fort) Observatoire du Fort de Maulde: two automatic rifle cloches (GFM-B) (inside the fort) Casemate du 155 du Fort de Maulde: one 155 mm gun Blockhaus de Sud-Ouest de Maulde 1: one GFM-B cloche covering the anti-tank ditch Blockhaus de Sud-Ouest de Maulde 2: one GFM-B cloche covering the anti-tank ditch Blockhaus de Sud-Ouest de Maulde 3: double blockhouse with automatic rifle embrasures Blockhaus de Sud de Maulde 1: one GFM-B cloche covering the anti-tank ditch Blockhaus de la Trinquette: one GFM-B cloche covering the anti-tank ditch History In 1870, France was partly occupied by the Prussian army. As a result of this defeat, the Séré de Rivières system of fortifications was planned and constructed to defend the nation. Valenciennes, located close by the border between France and Belgium, received additional fortifications. Construction on the Fort de Maulde started in 1881 and was completed in 1884, for a garrison of 431 men armed with 27 artillery pieces. In 1890 Valenciennes was declared an open city, and its Vauban-era fortifications in the city center were razed, while the new forts were disarmed. They were hastily rearmed in 1914, but were captured by the Germans in the opening stages of World War I. Before retreating in 1918 the Germans blew up the fort's powder magazine. See Fortified Sector of the Escaut for a broader discussion of the Escaut sector of the Maginot Line. In the 1930s, France invested in the construction of the Maginot Line, which covered the eastern frontiers of France. The frontier with Belgium was regarded as a lesser priority because France's war plan called for the French Army to advance into Belgium and conduct an offensive there. Belatedly, France began construction of a limited series of defenses around Valenciennes in the mid-1930s. These fortifications were individually assaulted and captured in the opening phases of World War II. Following the fall of Maubeuge in May 1940, German forces advanced on Maulde, reaching the fort, commanded by Captain Schwengler, by 20 May. Bombardment began that day, but infantry units did not reach the area until the next day, delayed by fire from the 75 mm guns of the eastern casemate of Maulde. By 26 May, Maulde's peripheral casemates had been captured, and Schwengler was ordered to sabotage the fort and evacuate it during the night. The Germans found the fort abandoned on 27 May. After 1940, the Fort de Maulde was used by the Germans for explosives effects testing, leaving significant damage. After the war it became the property of a tile factory. Present condition The Fort de Maulde was closed to public access in 2009, due to the dangers of falling into pits in the darkness of its interior and of falling masonry. Toxic waste from the tileworks was dumped in the fort until the 1980s. Its subsidiary bunkers remain visible in the surrounding fields. References External links Fort de Maulde Fort de Maulde at Ligne Maginot MAUL Maginot Line Séré de Rivières system
Fort de Maulde
Engineering
1,378
77,481,919
https://en.wikipedia.org/wiki/Thiocyanuric%20acid
Thiocyanuric acid is the organosulfur compound with the formula (HNCS)3, i.e., C3H3N3S3. It is analogous to cyanuric acid, (HNCO)3. Cyanuric acid is white, whereas thiocyanuric acid is yellow. It can also be viewed as a trimeric thioamide. Structure It is a planar molecule, as determined by X-ray crystallography. Like cyanuric acid, thiocyanuric acid forms an extended hydrogen-bonded network, resulting in a sheet-like structure. This arrangement is relevant to the high melting point of the compound. Synthesis, reactions, applications Thiocyanuric acid precipitates from warm, acidic solutions of thiocyanic acid. A modern synthesis begins instead with a preformed ring: cyanuric chloride reacts with sodium hydrosulfide to give sodium chloride and thiocyanuric acid. The compound is mildly acidic, with pKa's of 5.7, 8.4, and 11.4. Various salts of its conjugate bases have been characterized. The compound has an affinity for heavy metals. This attribute has been exploited by applying thiocyanuric acid and its derivatives to the treatment of waste waters. References Isocyanuric acids Thioamides
Thiocyanuric acid
Chemistry
249
33,983,258
https://en.wikipedia.org/wiki/Malala%20Yousafzai
Malala Yousafzai (born 12 July 1997) is a Pakistani female education activist, film and television producer, and the 2014 Nobel Peace Prize laureate, awarded at the age of 17. She is the youngest Nobel Prize laureate in history, the second Pakistani and the only Pashtun to receive a Nobel Prize. Yousafzai is a human rights advocate for the education of women and children in her native homeland, Swat, where the Pakistani Taliban had at times banned girls from attending school. Her advocacy has grown into an international movement, and according to former Prime Minister Shahid Khaqan Abbasi, she has become Pakistan's "most prominent citizen." The daughter of education activist Ziauddin Yousafzai, she was born to a Yusufzai Pashtun family in Swat and was named after the Afghan folk heroine Malalai of Maiwand. Considering Abdul Ghaffar Khan, Barack Obama, and Benazir Bhutto as her role models, she was also inspired by her father's thoughts and humanitarian work. In early 2009, when she was 11, she wrote a blog under the pseudonym Gul Makai for BBC Urdu to detail her life during the Taliban's occupation of Swat. The following summer, journalist Adam B. Ellick made a New York Times documentary about her life as the Pakistan Armed Forces launched Operation Rah-e-Rast against the militants in Swat. In 2011, she received Pakistan's first National Youth Peace Prize. She interned for the Swat Relief Initiative, a foundation supporting schools and clinics, founded by Zebunisa Jilani, a princess of the Royal House of Swat. She rose in prominence, giving interviews in print and on television, and was nominated for the International Children's Peace Prize by activist Desmond Tutu. On 9 October 2012, while on a bus in Swat District after taking an exam, Yousafzai and two other girls were shot by a Taliban gunman in an assassination attempt targeting her for her activism; the gunman fled the scene. She was struck in the head by a bullet and remained unconscious and in critical condition at the Rawalpindi Institute of Cardiology, but her condition later improved enough for her to be transferred to the Queen Elizabeth Hospital in Birmingham, UK. The attempt on her life sparked an international outpouring of support. Deutsche Welle reported in January 2013 that she may have become "the most famous teenager in the world". Weeks after the attempted murder, a group of 50 leading Muslim clerics in Pakistan issued a fatwā against those who tried to kill her. Governments, human rights organizations and feminist groups subsequently condemned the Tehrik-i-Taliban Pakistan. In response, the Taliban further denounced Yousafzai, indicating plans for a possible second assassination attempt, which the Taliban felt was justified as a religious obligation. This sparked another international outcry. After her recovery, Yousafzai became a more prominent activist for the right to education. Based in Birmingham, she co-founded the Malala Fund, a non-profit organisation, with Shiza Shahid. In 2013, she co-authored I Am Malala, an international best seller, and received the Sakharov Prize; in 2014, she was the co-recipient of the Nobel Peace Prize with Kailash Satyarthi of India. Aged 17 at the time, she was the youngest-ever Nobel Prize laureate. In 2015, she was the subject of the Oscar-shortlisted documentary He Named Me Malala. Time magazine featured her as one of the most influential people in the world in its 2013, 2014 and 2015 issues.
In 2017 she was awarded honorary Canadian citizenship and became the youngest person to address the House of Commons of Canada. Yousafzai completed her secondary school education at Edgbaston High School, Birmingham in England from 2013 to 2017. From there she won a place at Lady Margaret Hall, Oxford, and undertook three years of study for a Bachelor of Arts degree in Philosophy, Politics and Economics (PPE), graduating in 2020. She returned in 2023 to become the youngest ever Honorary Fellow at Linacre College, Oxford. Early life Childhood Yousafzai was born on 12 July 1997 in the Swat District of Pakistan's northwestern Khyber Pakhtunkhwa province, into a lower-middle-class family. She is the daughter of Ziauddin Yousafzai and Toor Pekai Yousafzai. Her family is Sunni Muslim of Pashtun ethnicity, belonging to the Yusufzai tribe. The family did not have enough money for a hospital birth and Yousafzai was born at home with the help of neighbours. She was given her first name Malala (meaning "grief-stricken") after Malalai of Maiwand, a famous Pashtun poet and warrior woman from southern Afghanistan. At her house in Mingora, she lived with her two younger brothers, Khushal and Atal, her parents, Ziauddin and Tor Pekai, and two chickens. Fluent in Pashto, Urdu and English, Yousafzai was educated mostly by her father, Ziauddin Yousafzai, a poet, school owner, and an educational activist himself, running a chain of private schools known as the Khushal Public School. In an interview, she once said that she aspired to become a doctor, though later her father encouraged her to become a politician instead. Ziauddin referred to his daughter as something entirely special, allowing her to stay up at night and talk about politics after her two brothers had been sent to bed. Inspired by the twice-elected, assassinated Prime Minister Benazir Bhutto, Yousafzai started speaking about education rights as early as September 2008, when her father took her to Peshawar to speak at the local press club. "How dare the Taliban take away my basic right to education?" she asked in a speech covered by newspapers and television channels throughout the region. In 2009, she began as a trainee and was then a peer educator in the Institute for War and Peace Reporting's Open Minds Pakistan youth programme, which worked in the region's schools to help students engage in constructive discussion on social issues through journalism, public debate and dialogue. As a BBC blogger In late 2008, Aamer Ahmed Khan of the BBC Urdu website and his colleagues came up with a novel way of covering the Pakistani Taliban's growing influence in Swat. They decided to ask a schoolgirl to blog anonymously about her life there. Their correspondent in Peshawar, Abdul Hai Kakar, had been in touch with a local school teacher, Ziauddin Yousafzai, but could not find any students willing to report, as their families considered it too dangerous. Finally, Yousafzai suggested his own daughter, 11-year-old Malala. At the time, Pakistani Taliban militants led by Maulana Fazlullah were taking over the Swat Valley, banning television, music, girls' education, and women from going shopping. Bodies of beheaded policemen were being displayed in town squares. At first, a girl named Aisha from her father's school agreed to write a diary, but her parents stopped her from doing it because they feared Taliban reprisals. The only alternative was Yousafzai, who was four years younger and in seventh grade at the time. 
"We had been covering the violence and politics in Swat in detail but we didn't know much about how ordinary people lived under the Taliban", said Mirza Waheed, former editor of BBC Urdu. Because they were concerned for Yousafzai's safety, the BBC editors insisted she use a pseudonym. Her blog was published under the byline "Gul Makai" ("cornflower" in Pashto), a name taken from a character in a Pashtun folktale. On 3 January 2009, her first entry was posted to the BBC Urdu blog. She hand-wrote notes and passed them to a reporter who scanned and e-mailed them. The blog recorded Yousafzai's thoughts during the First Battle of Swat, as military operations took place, fewer girls show up to school, and finally, her school shut down. That day she wrote: I had a terrible dream yesterday with military helicopters and the Taliban. I have had such dreams since the launch of the military operation in Swat. My mother made me breakfast and I went off to school. I was afraid going to school because the Taliban had issued an edict banning all girls from attending schools. Only 11 out of 27 pupils attended the class because the number decreased because of the Pakistani Taliban's edict. My three friends have shifted to Peshawar, Lahore and Rawalpindi with their families after this edict. In Swat, the Pakistani Taliban had set an edict that no girls could attend school after 15 January 2009. They had already blown up more than 100 girls' schools. The night before the ban took effect was filled with the noise of artillery fire, waking Yousafzai several times. The following day, she also read for the first time excerpts from her blog that were published in a local newspaper. Banned from school Following the edict, the Pakistani Taliban destroyed several more local schools. On 24 January 2009, Yousafzai wrote: "Our annual exams are due after the vacations but this will only be possible if the Pakistani Taliban allow girls to go to school. We were told to prepare certain chapters for the exam but I do not feel like studying." In February 2009, girls' schools were still closed. In solidarity, private schools for boys had decided not to open until 9 February, and notices appeared saying so. On 7 February, Yousafzai and her brother returned to their hometown of Mingora, where the streets were deserted, and there was an "eerie silence". She wrote in her blog: "We went to the supermarket to buy a gift for our mother but it was closed, whereas earlier it used to remain open till late. Many other shops were also closed." Their home had been robbed and their television was stolen. After boys' schools reopened, the Pakistani Taliban lifted restrictions on girls' primary education, where there was co-education. Girls-only schools were still closed. Yousafzai wrote that only 70 pupils attended out of the 700 who were enrolled. On 15 February, gunshots were heard in Mingora's streets, but Yousafzai's father reassured her, saying, "Don't be scared—this is firing for peace." Her father had read in the newspaper that the government and militants were going to sign a peace deal the next day. Later that night, when the Taliban announced the peace deal on their FM Radio studio, another round of stronger firing started outside. Yousafzai spoke out against the Pakistani Taliban on the national current affairs show Capital Talk on 18 February. 
Three days later, Tehreek-e-Nafaz-e-Shariat-e-Mohammadi leader Maulana Fazlullah announced on his FM radio station that he was lifting the ban on women's education, and girls would be allowed to attend school until exams were held on 17 March, but that they had to wear burqas. Girls' schools reopen On 25 February, Yousafzai wrote on her blog that she and her classmates "played a lot in class and enjoyed ourselves like we used to before." Attendance at Yousafzai's class was up to 19 of 27 pupils by 1 March, but the Pakistani Taliban were still active in the area. Shelling continued, and relief goods meant for displaced people were looted. Only two days later, Yousafzai wrote that there was a skirmish between the military and Taliban, and the sounds of mortar shells could be heard: "People are again scared that the peace may not last for long. Some people are saying that the peace agreement is not permanent, it is just a break in fighting." On 9 March, Yousafzai wrote about a science paper that she performed well on, and added that the Taliban were no longer searching vehicles as they once did. Her blog ended on 12 March 2009. As a displaced person After the BBC diary ended, Yousafzai and her father were approached by New York Times reporter Adam B. Ellick about filming a documentary. In May, the Pakistani Army moved into the region to regain control during the Second Battle of Swat (also known as Operation Rah-e-Rast). Mingora was evacuated and Yousafzai's family was displaced and separated. Her father went to Peshawar to protest and lobby for support, while she was sent into the countryside to live with relatives. "I'm really bored because I have no books to read," she is filmed saying in the documentary. That month, after criticising militants at a press conference, Yousafzai's father received a death threat over the radio from a Pakistani Taliban commander. Yousafzai was deeply inspired in her activism by her father. That summer, for the first time, she committed to becoming a politician and not a doctor, as she had once aspired to be. By early July, refugee camps were filled to capacity. The prime minister made a long-awaited announcement saying it was safe to return to the Swat Valley. The Pakistani military had pushed the Taliban out of the cities and into the countryside. Yousafzai's family reunited, and on 24 July 2009 they headed home. They made one stop first—to meet with a group of other grassroots activists that had been invited to see United States President Barack Obama's special representative to Afghanistan and Pakistan, Richard Holbrooke. Yousafzai pleaded with Holbrooke to intervene in the situation, saying, "Respected ambassador, if you can help us in our education, so please help us." When her family finally returned home, they found it had not been damaged, and her school had sustained only light damage. Early activism Following the documentary, Yousafzai was interviewed on the national Pashto-language station AVT Khyber, the Urdu-language Daily Aaj, and Canada's Toronto Star. She made a second appearance on Capital Talk on 19 August 2009. Her identity as the BBC's blogger had been revealed in articles by December 2009. She also began appearing on television to publicly advocate for female education. From 2009 to 2010 she was the chair of the District Child Assembly of the Khpal Kor Foundation.
In 2011, Yousafzai trained with local girls' empowerment organisation, Aware Girls, run by Gulalai Ismail, whose training included advice on women's rights and empowerment to peacefully oppose radicalisation through education. In October 2011, Archbishop Desmond Tutu, a South African activist, nominated Yousafzai for the International Children's Peace Prize of the Dutch international children's advocacy group, KidsRights Foundation. She was the first Pakistani girl to be nominated for the award. The announcement said, "Malala dared to stand up for herself and other girls and used national and international media to let the world know girls should also have the right to go to school." The award was won by Michaela Mycroft of South Africa. Yousafzai's public profile rose even further when she was awarded Pakistan's first National Youth Peace Prize two months later in December. On 19 December 2011, Prime Minister Yousaf Raza Gillani awarded her the National Peace Award for Youth. At the ceremony, she stated she was not a member of any political party, but hoped to found a national party of her own to promote education. The prime minister directed the authorities to set up an IT campus in the Swat Degree College for Women at Yousafzai's request, and a secondary school was renamed in her honour. By 2012, she was planning to organise the Malala Education Foundation, which would help poor girls go to school. In 2012, she attended the International Marxist Tendency National Marxist Summer School. In a television interview the same year, she named Barack Obama, Benazir Bhutto and Abdul Ghaffar Khan (Bacha Khan), a Pashtun leader known for his nonviolent Khudai Khidmatgar resistance movement against the British Raj, as inspirations for her activism. Murder attempt As Yousafzai became more recognised, the dangers facing her increased. Death threats against her were published in newspapers and slipped under her door. On Facebook, where she was an active user, she began to receive threats. Eventually, a Pakistani Taliban spokesman said they were "forced" to act. In a meeting held in the summer of 2012, Taliban leaders unanimously agreed to kill her. On 9 October 2012, a Taliban gunman shot Yousafzai as she rode home on a bus after taking an exam in Pakistan's Swat Valley. Yousafzai was 15 years old at the time. According to reports, a masked gunman shouted: "Which one of you is Malala? Speak up, otherwise I will shoot you all." Upon being identified, Yousafzai was shot with one bullet, which travelled from the side of her left eye, through her neck and landed in her shoulder. Two other girls were also wounded in the shooting: Kainat Riaz and Shazia Ramzan, both of whom were stable enough following the shooting to speak to reporters and provide details of the attack. Medical treatment After the shooting, Yousafzai was airlifted to a military hospital in Peshawar, where doctors were forced to operate after swelling developed in the left portion of her brain, which had been damaged by the bullet when it passed through her head. After a five-hour operation, doctors successfully removed the bullet, which had lodged in her shoulder near her spinal cord. The day following the attack, doctors performed a decompressive craniectomy, in which part of her skull was removed to allow room for swelling. On 11 October 2012, a panel of Pakistani and British doctors decided to move Yousafzai to the Armed Forces Institute of Cardiology in Rawalpindi. Mumtaz Khan, a doctor, said that she had a 70% chance of survival. 
Interior Minister Rehman Malik said that Yousafzai would be moved to Germany, where she could receive the best medical treatment, as soon as she was stable enough to travel. A team of doctors would travel with her, and the government would bear the cost of her treatment. Doctors reduced Yousafzai's sedation on 13 October, and she moved all four limbs. Offers to treat Yousafzai came from around the world. On 15 October, Yousafzai travelled to the United Kingdom for further treatment, approved by both her doctors and family. Her plane landed in Birmingham, England, where she was treated at the Queen Elizabeth Hospital, one of the hospital's specialties being the treatment of military personnel injured in conflict. According to media reports at the time, the UK Government stated that "[t]he Pakistani government is paying all transport, migration, medical, accommodation and subsistence costs for Malala and her party." Yousafzai had come out of her coma by 17 October 2012, was responding well to treatment, and was said to have a good chance of fully recovering without any brain damage. Later updates on 20 and 21 October stated that she was stable, but was still battling an infection. By 8 November, she was photographed sitting up in bed. On 11 November, Yousafzai underwent surgery for eight and a half hours, in order to repair her facial nerve. On 3 January 2013, Yousafzai was discharged from the hospital to continue her rehabilitation at her family's temporary home in the West Midlands, where she had weekly physiotherapy. She underwent a five-hour-long operation on 2 February to reconstruct her skull and restore her hearing with a cochlear implant, after which she was reported to be in stable condition. Yousafzai wrote in July 2014 that her facial nerve had recovered up to 96%. Reaction The murder attempt received worldwide media coverage and produced an outpouring of sympathy and anger. Protests against the shooting were held in several Pakistani cities the day after the attack, and over 2 million people signed the Right to Education campaign's petition, which led to ratification of the first Right to Education Bill in Pakistan. Pakistani officials offered a 10 million rupee (≈US$105,000) reward for information leading to the arrest of the attackers. Responding to concerns about his safety, Yousafzai's father said: "We wouldn't leave our country if my daughter survives or not. We have an ideology that advocates peace. The Taliban cannot stop all independent voices through the force of bullets." Pakistan's president Asif Ali Zardari described the shooting as an attack on "civilized people". UN Secretary-General Ban Ki-moon called it a "heinous and cowardly act". United States President Barack Obama found the attack "reprehensible, disgusting and tragic", while Secretary of State Hillary Clinton said Yousafzai had been "very brave in standing up for the rights of girls" and that the attackers had been "threatened by that kind of empowerment". British Foreign Secretary William Hague called the shooting "barbaric" and said that it had "shocked Pakistan and the world". American singer Madonna dedicated her song "Human Nature" to Yousafzai at a concert in Los Angeles on the day of the attack, and also had a temporary Malala tattoo on her back. American actress Angelina Jolie wrote an article explaining the event to her children and answering questions like "Why did those men think they needed to kill Malala?" Jolie later donated $200,000 to the Malala Fund for girls' education.
Former First Lady of the United States Laura Bush wrote an op-ed piece in The Washington Post in which she compared Yousafzai to Holocaust diarist Anne Frank. Ehsanullah Ehsan, chief spokesman for the Pakistani Taliban, claimed responsibility for the attack, saying that Yousafzai "is the symbol of the infidels and obscenity", adding that if she survived, the group would target her again. In the days following the attack, the Pakistani Taliban reiterated its justification, saying Yousafzai had been brainwashed by her father: "We warned him several times to stop his daughter from using dirty language against us, but he didn't listen and forced us to take this extreme step." The Pakistani Taliban also justified its attack by appeal to religious scripture, stating that the Quran says that "people propagating against Islam and Islamic forces would be killed", going on to say that "Sharia says that even a child can be killed if he is propagating against Islam". On 12 October 2012, a group of Islamic clerics in Pakistan issued a fatwā – a ruling of Islamic law – against the Taliban gunmen who tried to kill Yousafzai. Islamic scholars from the Sunni Ittehad Council publicly denounced attempts by the Pakistani Taliban to mount religious justifications for the shooting of Yousafzai and two of her classmates. Although the attack was roundly condemned in Pakistan, "some fringe Pakistani political parties and extremist outfits" aired conspiracy theories, such as the shooting being staged by the American Central Intelligence Agency to provide an excuse for continuing drone attacks. The Pakistani Taliban and some other pro-Pakistani Taliban elements branded Yousafzai an "American spy". United Nations petition On 15 October 2012, UN Special Envoy for Global Education Gordon Brown, the former British Prime Minister, visited Yousafzai while she was in the hospital, and launched a petition in her name and "in support of what Malala fought for". Using the slogan "I am Malala", the petition's main demand was that there be no child left out of school by 2015, with the hope that "girls like Malala everywhere will soon be going to school". Brown said he would hand the petition to President Zardari in Islamabad in November. The petition contains three demands: We call on Pakistan to agree to a plan to deliver education for every child. We call on all countries to outlaw discrimination against girls. We call on international organisations to ensure the world's 61 million out-of-school children are in education by the end of 2015. Criminal investigation, arrests, and acquittals The day after the shooting, Pakistan's Interior Minister Rehman Malik stated that the Taliban gunman who shot Yousafzai had been identified. Police named 23-year-old Atta Ullah Khan, a graduate student in chemistry, as the gunman in the attack. Khan remained at large, possibly in Afghanistan. The police also arrested six men for involvement in the attack, but they were later released due to lack of evidence. In November 2012, US sources confirmed that Mullah Fazlullah, the cleric who ordered the attack on Yousafzai, was hiding in eastern Afghanistan. He was killed by a U.S.-Afghan air strike in June 2018. On 12 September 2014, ISPR Director Major General Asim Bajwa told a media briefing in Islamabad that the 10 attackers belonged to a militant group called "Shura". General Bajwa said that Israrur Rehman was the first member of the militant group to be identified and apprehended by troops.
Acting upon the information received during his interrogation, the authorities arrested all other members of the militant group. It was an intelligence-based joint operation conducted by the ISI, the police, and the military. In April 2015, it was first reported that the ten men who had been arrested were sentenced to life in prison by Judge Mohammad Amin Kundi, a counterterrorism judge, with eligibility for parole, and possible release, after 25 years. It is not known whether the actual would-be murderers were among the ten sentenced. But in June it was revealed that eight of the ten men, who were tried in camera for the attack and had reportedly confessed to helping plan it, had in fact been acquitted in the secret trial. Insiders revealed that one of the men acquitted and freed had been the mastermind behind the murder bid. It is believed that all the other men involved in the shooting of Yousafzai fled to Afghanistan soon afterwards and were never even captured. The information about the release of suspects came to light after the London Daily Mirror attempted to locate the men in prison. Senior police official Salim Khan and the Pakistan High Commission in London stated that the eight men were released because there was not enough evidence to connect them to the attack. Education From March 2013 to July 2017, Yousafzai was a pupil at the all-girls Edgbaston High School in Birmingham. In August 2015, she received 6 A*s and 4 As at GCSE level. At A-level, she studied Geography, History, Mathematics and Religious Studies. Also applying to Durham University, the University of Warwick and the London School of Economics (LSE), Yousafzai was interviewed at Lady Margaret Hall, Oxford in December 2016 and received a conditional offer of three As in her A-levels; in August 2017, she was accepted to study Philosophy, Politics and Economics (PPE). In February 2020, climate change activist Greta Thunberg travelled to Oxford University to meet Yousafzai. On 19 June 2020, Yousafzai said after passing her final examinations that she had completed her PPE degree at Oxford; she graduated with honours. Continuing activism Yousafzai addressed the United Nations in July 2013, and had an audience with Queen Elizabeth II at Buckingham Palace. In September, she spoke at Harvard University, and in October, she met with US President Barack Obama and his family; during that meeting, she confronted him on his use of drone strikes in Pakistan. In December, she addressed the Oxford Union. In July 2014, Yousafzai spoke at the Girl Summit in London. In October 2014, she donated $50,000 to the UNRWA for reconstruction of schools in the Gaza Strip. Even though she was fighting for women's rights as well as children's rights, Yousafzai did not describe herself as a feminist when asked at the Forbes Under 30 Summit in 2014. In 2015, Yousafzai told Emma Watson she decided to call herself a feminist after hearing Watson's speech at the UN launching the HeForShe campaign. On 12 July 2015, her 18th birthday, Yousafzai opened a school in the Bekaa Valley, Lebanon, near the Syrian border, for Syrian refugees. The school, funded by the not-for-profit Malala Fund, offers education and training to girls aged 14 to 18 years. Yousafzai called on world leaders to invest in "books, not bullets". Yousafzai has repeatedly condemned the Rohingya persecution in Myanmar.
In June 2015, the Malala Fund released a statement in which Yousafzai argued that the Rohingya people deserve "citizenship in the country where they were born and have lived for generations" along with "equal rights and opportunities", and urged world leaders, particularly in Myanmar, to "halt the inhuman persecution of Burma's Muslim minority Rohingya people." In September 2017, speaking in Oxford, Yousafzai said: "This should be a human rights issue. Governments should react to it. People are being displaced, they're facing violence." Yousafzai also posted a statement on Twitter calling for Nobel Peace Prize laureate Aung San Suu Kyi to condemn the treatment of the Rohingya people in Myanmar. Suu Kyi has avoided taking sides in the conflict or condemning violence against the Rohingya people, leading to widespread criticism. In 2014, Yousafzai stated that she wished to return to Pakistan following her education in the UK and that, inspired by Benazir Bhutto, she would consider running for prime minister: "If I can help my country by joining the government or becoming the prime minister, I would definitely be up for this task." She repeated this aim in 2015 and 2016. However, Yousafzai noted in 2018 that her goal had changed, stating that "now that I have met so many presidents and prime ministers around the world, it just seems that things are not simple and there are other ways that I can bring the change that I want to see." In a 2018 interview with David Letterman for Netflix's show My Next Guest Needs No Introduction, Yousafzai was asked: "Would you ever want to hold a political position?" She replied: "Me? No." Representation Former British Prime Minister Gordon Brown arranged for Yousafzai's appearance before the United Nations in July 2013. Brown also requested that McKinsey consultant Shiza Shahid, a friend of the Yousafzai family, chair Yousafzai's charity fund, which had gained the support of Angelina Jolie. Google's vice-president Megan Smith also sits on the fund's board. In November 2012, the consulting firm Edelman began work for Yousafzai on a pro bono basis, which according to the firm "involves providing a press office function for Malala". The office employs five people and is headed by speechwriter Jamie Lundie. McKinsey also continues to provide assistance to Yousafzai. Malala Day On 12 July 2013, her 16th birthday, Yousafzai spoke at the UN to call for worldwide access to education. The UN dubbed the event "Malala Day". Yousafzai wore one of Benazir Bhutto's shawls to the UN. It was her first public speech since the attack, and she led the first-ever Youth Takeover of the UN, with an audience of over 500 young education advocates from around the world. Yousafzai received several standing ovations. Ban Ki-moon, who also spoke at the session, described her as "our hero". Yousafzai also presented the chamber with "The Education We Want", a Youth Resolution of education demands written by Youth for Youth, in a process co-ordinated by the UN Global Education First Youth Advocacy Group. The Pakistani government did not comment on Yousafzai's UN appearance, amid a backlash against her in Pakistan's press and social media. Words from the speech were used as lyrics for "Speak Out", a song by Kate Whitley commissioned by BBC Radio 3 and broadcast on International Women's Day 2017. Jon Stewart interview On 8 October 2013, the 16-year-old Yousafzai appeared on The Daily Show with Jon Stewart, an American television programme, in her first major late-night appearance.
She was there as a guest to promote her book, I Am Malala. On the programme they discussed her assassination attempt, human rights, and women's education. She left Jon Stewart speechless when she described her thoughts after learning that the Pakistani Taliban wanted her dead. Stewart, visibly moved by her words, ended the conversation by saying: "I am humbled to speak with you." Stewart had her as a guest on the show again after the 2015 Charleston church shooting, opening the show without jokes and saying, "our guest is an incredible person who suffered unspeakable violence by extremists and her perseverance and determination through that to continue on is an incredible inspiration and to be quite honest with you, I don't think there's anyone else in the world I would rather talk to tonight than Malala so that's what we'll do and sorry about no jokes." Nobel Peace Prize On 10 October 2014, Yousafzai was announced as the co-recipient of the 2014 Nobel Peace Prize for her struggle against the suppression of children and young people and for the right of all children to education. Having received the prize at the age of 17, Yousafzai is the youngest Nobel laureate. She shared the prize with Kailash Satyarthi, a children's rights activist from India, and is the second Pakistani to receive a Nobel Prize, after 1979 Physics laureate Abdus Salam. After she was awarded the Nobel Peace Prize, there was praise but also some disapproval of the decision. The Norwegian jurist Fredrik Heffermehl commented on the award: "This is not for fine people who have done nice things and are glad to receive it. All of that is irrelevant. What Nobel wanted was a prize that promoted global disarmament." Adán Cortés, a college student from Mexico City and an asylum seeker, interrupted Yousafzai's Nobel Peace Prize award ceremony in protest at the 2014 Iguala mass kidnapping in Mexico but was quickly taken away by security personnel. Yousafzai later sympathised and acknowledged that problems are faced by young people all over the world, saying "there are problems in Mexico, there are problems even in America, even here in Norway, and it is really important that children raise their voices". David Letterman interview In March 2018, Yousafzai was the subject of an interview with David Letterman for his Netflix show My Next Guest Needs No Introduction. Speaking about the Taliban, she opined that their misogyny comes from a superiority complex and is reinforced by finding "excuses" in culture or literature, such as by misinterpreting teachings of Islam. On the topic of her attackers, Yousafzai commented: "I forgive them because that's the best revenge I can have." Pointing out that the person who attacked her was a young boy, she said: "He thought he was doing the right thing". Asked about the presidency of Donald Trump, Yousafzai said: "Some of the things have really disappointed me, like sexual harassment and the ban on Muslims and racism." She also criticised the Trump administration's proposed budget cuts to education, saying that education is the first step to "eradicating extremism and ending poverty". Throughout the episode, clips are shown of Yousafzai acting as a tour guide for prospective students at her college, Lady Margaret Hall, Oxford. Afghanistan In July 2021, amid a major offensive by Taliban insurgents, Yousafzai urged the international community to press for an immediate ceasefire in Afghanistan and to provide humanitarian aid to Afghan civilians.
Following the Taliban takeover of Kabul on 15 August 2021, she expressed concern about the fate of women's rights, fearing that women in Afghanistan would lose the social and educational gains that had been made during the previous Afghan government's two decades. Yousafzai condemned the Taliban's ban on girls' education beyond 6th grade, and said "the Taliban will continue to make excuses to prevent girls from learning beyond primary school." She said the Taliban "want to erase girls and women from all public life in Afghanistan," and asked "leaders around the world to take collective action to hold the Taliban accountable for violating the human rights of millions of women and girls." Women's clothing, marriage Yousafzai had said that she did not understand why people had to marry. After her own marriage in 2021 she said that she had not been against marriage, but had concerns about it related to child marriage and forced marriage, and unequal marriages where "women make more compromises than men". In her own marriage she felt that she had found a person who understood her values. On 7 March 2022, Malala Yousafzai advocated for every woman's right to decide to wear what she likes for herself, from a burqa to a bikini: "Come and talk to us about individual freedom and autonomy, about preventing harm and violence, about education and emancipation. Do not come with your wardrobe notes." According to Yousafzai, "refusing to let girls go to school in their hijabs is horrifying". Personal life On 9 November 2021, Yousafzai married Asser Malik, a manager with the Pakistan Cricket Board, in Birmingham. Yousafzai is a practising Sunni Muslim. In an interview with Muslim Girl, she stated, "[The Islamic] faith has always been a big part of my life – and it continues to be so today." She has also defended her practice of wearing a shayla. Reception Yousafzai's opposition to the policy of Talibanisation made her unpopular in Pakistan among Taliban sympathisers. A Dawn columnist said she was scapegoated by the "failing state government," and a journalist in The Nation wrote Yousafzai was hated by "overzealous patriots" who were keen to deny the oppression of women in Pakistan. Her statements conflicted with the view that militancy in Pakistan was a result of Western interference, and conservatives and Islamic fundamentalists described her ideology as "anti-Pakistan". Many Pakistanis view her as an "agent of the West", due to her Nobel prize, Oxford education and residence in England; however, Yousafzai is seen as courageous by some Pakistanis. Farman Nawaz argued in Daily Outlook Afghanistan that Yousafzai would have gained more fame in Pakistan if she belonged to the province of Punjab. In 2015, the All Pakistan Private Schools Federation (APPSF) banned her autobiographical book, I Am Malala, at all Pakistani private schools, with the APPSF president Mirza Kashif Ali releasing his own book against her, I Am Not Malala. His book accused Yousafzai of attacking the Pakistan Armed Forces under the pretence of female education, described her father as a "double agent" and "traitor", and denounced the Malala Fund's promotion of secular education. However, Ali pointed out that the APPSF had gone on a national strike when Yousafzai was attacked by the Pakistani Taliban. Conspiracy theorists in newspapers and social media alleged that Yousafzai had staged her assassination attempt, or that she was an agent of the US Central Intelligence Agency (CIA). 
Another conspiracy theory alleges that Yousafzai is a Jewish agent. On 29 March 2018, Yousafzai returned to Pakistan for the first time since the shooting. Meeting Prime Minister Shahid Khaqan Abbasi, she gave a speech in which she said it had been her dream to return without any fear. Yousafzai then visited her hometown of Mingora in Swat District, Khyber Pakhtunkhwa. She vowed to return to her country after her studies and, responding to criticism, said "I am proud of my religion and country." Criticism On 7 August 2019, following the Indian revocation of the special status of Jammu and Kashmir, Yousafzai urged the UN to help Kashmiri children go safely back to school in response to the Indian government's lockdown and communications blackout in the Kashmir valley; she expressed her concern about the situation and appealed to the international community to ensure peace in Jammu and Kashmir. People in India accused her of spreading the "Pakistani agenda" over the Kashmir conflict and of being selective in condemning human rights abuses, while in Pakistan she was criticised for responding late. After the start of the Israel–Gaza conflict in October 2023, Yousafzai drew criticism for initially remaining silent over Israel's onslaught on Gaza and for a statement of support that critics called "hypocritical". She was condemned by the Pakistani authors Nida Kirmani and Mehr Tarar over a Broadway musical she co-produced with former US Secretary of State Hillary Clinton, who had rejected calls for a ceasefire in Gaza. After a severe backlash, Yousafzai reaffirmed her support for the people of Gaza and called for a ceasefire. Works Yousafzai's memoir I Am Malala: The Story of the Girl Who Stood Up for Education and Was Shot by the Taliban, co-written with British journalist Christina Lamb, was published in October 2013 by Little, Brown and Company in the US and by Weidenfeld & Nicolson in the UK. Fatima Bhutto, reviewing the book for The Guardian, called it "fearless" and stated that "the haters and conspiracy theorists would do well to read this book", though she criticised "the stiff, know-it-all voice of a foreign correspondent" that is interwoven with Yousafzai's. Marie Arana, writing for The Washington Post, called the book "riveting" and wrote "It is difficult to imagine a chronicle of a war more moving, apart from perhaps the diary of Anne Frank." Tina Jordan in Entertainment Weekly gave the book a "B+", writing "Malala's bravely eager voice can seem a little thin here, in I Am Malala, likely thanks to her co-writer, but her powerful message remains undiluted." A children's edition of the memoir was published in 2014 under the title I Am Malala: How One Girl Stood Up for Education and Changed the World. According to Publishers Weekly, by 2017 the book had sold almost 2 million copies, and there were 750,000 copies of the children's edition in print. Yousafzai was the subject of the 2015 documentary He Named Me Malala, which was shortlisted for the Academy Award for Best Documentary Feature. In 2020, Gul Makai, an Indian Hindi-language biographical film by H. E. Amjad Khan, was released, with Reem Sameer Shaikh portraying her. Yousafzai authored a picture book, Malala's Magic Pencil, which was illustrated by Kerascoët and published on 17 October 2017. By March 2018, The Bookseller reported that the book had sold over 5,000 copies in the UK.
In a review for The Guardian, Imogen Carter describes the book as "enchanting", opining that it "strikes just the right balance" between "heavy-handed" and "heartfelt", and is a "welcome addition to the frustratingly small range of children's books that feature BAME central characters". Rebecca Gurney of The Daily Californian gives the book a grade of 4.5 out of 5, calling it a "beautiful account of a terrifying but inspiring tale" and commenting "Though the story begins with fantasy, it ends starkly grounded in reality." In March 2018, it was announced that Yousafzai's next book, We Are Displaced: True Stories of Refugee Lives, would be published on 4 September 2018 by Little, Brown and Company's Young Readers division. The book is about refugees and includes stories from Yousafzai's own life along with those of people she has met. Speaking about the book, Yousafzai said that "What tends to get lost in the current refugee crisis is the humanity behind the statistics" and "people become refugees when they have no other option. This is never your first choice." Profits from the book go to Yousafzai's charity, the Malala Fund. She visited Australia, criticized its asylum policies, and compared the immigration policies of the US and Europe unfavourably to those of poorer countries and of Pakistan. The book was eventually published on 8 January 2019. On 8 March 2021, a multiyear partnership between Yousafzai and Apple was announced. She will work on programming for Apple's streaming service, Apple TV+. The work will span "dramas, comedies, documentaries, animation, and children's series, and draw on her ability to inspire people around the world." Awards and honours National and international honours, listed by date:
2011: International Children's Peace Prize (nominee)
2011: National Youth Peace Prize
January 2012: Anne Frank Award for Moral Courage
October 2012: Sitara-e-Shujaat, Pakistan's second-highest civilian bravery award
November 2012: Foreign Policy magazine top 100 global thinker
December 2012: Time magazine Person of the Year shortlist for 2012
November 2012: Mother Teresa Awards for Social Justice
December 2012: Rome Prize for Peace and Humanitarian Action
January 2013: Top Name in Annual Survey of Global English in 2012
January 2013: Simone de Beauvoir Prize
March 2013: Memminger Freiheitspreis 1525 (conferred on 7 December 2013 in Oxford)
March 2013: Doughty Street Advocacy award of Index on Censorship
March 2013: Fred and Anne Jarvis Award of the UK National Union of Teachers
April 2013: Vital Voices Global Leadership Awards, Global Trailblazer
April 2013: One of Time's "100 Most Influential People in the World"
May 2013: Premi Internacional Catalunya Award of Catalonia
June 2013: Annual Award for Development of the OPEC Fund for International Development (OFID)
June 2013: International Campaigner of the Year, 2013 Observer Ethical Awards
August 2013: Tipperary International Peace Award for 2012, Ireland Tipperary Peace Convention
2013: Portrait of Yousafzai by Jonathan Yeo displayed at the National Portrait Gallery, London
September 2013: Ambassador of Conscience Award from Amnesty International
2013: International Children's Peace Prize
2013: Clinton Global Citizen Awards from the Clinton Foundation
September 2013: Harvard Foundation's Peter Gomes Humanitarian Award from Harvard University
2013: Anna Politkovskaya Award – Reach All Women in War
2013: Reflections of Hope Award – Oklahoma City National Memorial & Museum
2013: Sakharov Prize for Freedom of Thought – awarded by the European Parliament
2013: Honorary Master of Arts degree awarded by the University of Edinburgh
2013: Pride of Britain (October)
2013: Glamour magazine Woman of the Year
2013: GG2 Hammer Award at the GG2 Leadership Awards (November)
2013: International Prize for Equality and Non-Discrimination
2014: World's Children's Prize, also known as the Children's Nobel Prize
2014: Honorary Life Membership awarded by the PSEU (Ireland)
2014: Skoll Global Treasure Award
2014: Honorary Doctor of Civil Law, University of King's College, Halifax, Nova Scotia, Canada
2014: Nobel Peace Prize, shared with Kailash Satyarthi
2014: Philadelphia Liberty Medal
2014: Asia Game Changer Award
2014: One of Time magazine's "The 25 Most Influential Teens of 2014"
2014: Honorary Canadian citizenship
2015: Asteroid 316201 Malala named in her honour
2015: Grammy Award for Best Children's Album for the audio version of her book I Am Malala
2016: Honorary President of the Students' Union of the University of Sheffield
2016: Order of the Smile
2017: Youngest-ever United Nations Messenger of Peace
2017: Honorary doctorate from the University of Ottawa
2017: Ellis Island International Medal of Honor
2017: Wonk of the Year from American University
2017: Included in Harper's Bazaar's list of "150 of the most influential female leaders in the UK"
2018: Advisor to Princess Zebunisa of Swat, Swat Relief Initiative Foundation, Princeton, New Jersey
2018: Gleitsman Award from the Center for Public Leadership at Harvard Kennedy School
2019: For their first match of March 2019, the players of the United States women's national soccer team each wore a jersey bearing the name of a woman they were honoring; Carli Lloyd chose Yousafzai
2022: Elected World's Children's Prize Decade Child Rights Hero
In popular culture In the 2016 action comedy film Zoolander 2, Yousafzai is depicted as dating and marrying the "next hot model" Derek Zoolander Jr. (portrayed by Cyrus Arnold), who is earlier shown admiring and reading her various autobiographies. In the 2019 coming-of-age comedy film Booksmart, the two main characters, Amy and Molly (portrayed by Beanie Feldstein and Kaitlyn Dever), use "Malala", after Yousafzai, as their code word, meaning that one needs the other to do something, no questions asked; Yousafzai herself loved the film and approved of the reference. In the 2023 animated superhero film Spider-Man: Across the Spider-Verse, Sofia Barclay voices Malala Windsor / Spider-UK (Earth-835), described as a composite of Malala Yousafzai and the House of Windsor and a lieutenant of Miguel O'Hara's Spider-Society. Barclay said of the character: "Who better to model a superhero after than a real-life superhero? A woman famous in real life for her integrity and bravery when faced with dangerous odds: yes please!". The second season of the Channel 4 British sitcom We Are Lady Parts, released in May 2024, contained an episode inspired by and named for Yousafzai, "Malala made me do it", which featured her in her acting debut. See also
Farida Afridi
Bibi Aisha
Muzoon Almellehan
Humaira Bachal
British Pakistanis
Sahar Gul
Aitzaz Hasan
Shenila Khoja-Moolji
List of peace activists
Women's education in Pakistan
Women's rights in 2014
Women's rights in Pakistan
Explanatory notes References External links "Malala: Wars Never End Wars", DAWN, 2013 interview with audio clips of Yousafzai Class Dismissed: Malala's Story, English-language documentary July 2013 United Nations speech in full (with 17 min.
Al Jazeera video) Forging the Ideal Educated Girl by Shenila Khoja-Moolji for academic work on Yousafzai 1997 births 21st-century memoirists 21st-century Pakistani women writers 21st-century Pakistani writers Alumni of Lady Margaret Hall, Oxford Asia Game Changer Award winners BBC people Child writers Conspiracy theories in Pakistan Education activists Incidents of violence against girls Incidents of violence against women Living people Muslim socialists Muslim writers Nobel Peace Prize laureates Nonviolence advocates Pakistani bloggers Pakistani women bloggers Pakistani child activists Pakistani children's rights activists Pakistani educational theorists Pakistani expatriates in England Pakistani feminists Pakistani memoirists Pakistani Nobel laureates Pakistani refugees Pakistani socialists Pakistani Sunni Muslims Pakistani terrorism victims Pakistani women's rights activists Pashtun people People from Swat District People of the insurgency in Khyber Pakhtunkhwa Proponents of Islamic feminism Sakharov Prize laureates Shooting survivors Shorty Award winners Victims of the Tehrik-i-Taliban Pakistan Violence against women in Pakistan Women and education Women memoirists Women Nobel laureates Writers from Birmingham, West Midlands Youth activists Pashtun women Pashtun activists Pashtun women writers Pashtun children United Nations Messengers of Peace Recipients of Sitara-i-Shujaat 21st-century Pakistani politicians 21st-century Pakistani women politicians
Malala Yousafzai
Technology
10,767
52,102,207
https://en.wikipedia.org/wiki/NGC%20317
NGC 317 is a pair of interacting galaxies, consisting of the lenticular galaxy NGC 317A (also designated PGC 3442) and the spiral galaxy NGC 317B (also designated PGC 3445), in the constellation Andromeda. It was discovered on October 1, 1885, by Lewis Swift. Two supernovae have been observed in NGC 317B: SN 1999gl (type II, mag. 16.2) and SN 2014dj (type Ic, mag. 17). References 0317 18851001 Andromeda (constellation) Interacting galaxies 003442
NGC 317
Astronomy
124
52,502,930
https://en.wikipedia.org/wiki/1%2C2-Difluoroethane
1,2-Difluoroethane is a saturated hydrofluorocarbon with one fluorine atom attached to each of its two carbon atoms. The formula can be written CH2FCH2F. It is an isomer of 1,1-difluoroethane. Its HFC designation is HFC-152, with no letter suffix. When cooled to cryogenic temperatures it can adopt different conformers, gauche and trans. In the liquid form these are about equally abundant and easily interconvert; as a gas it exists mostly in the gauche form. In the HFC-152 designation, the final 2 means two fluorine atoms, the 5 means 5−1 or four hydrogen atoms, and the 1 means 1+1 or two carbon atoms. Formation Ethylene reacts explosively with fluorine, yielding a mixture of 1,2-difluoroethane and vinyl fluoride. With solid fluorine it will react when triggered by near-infrared radiation. Properties The critical temperature is 107.5 °C. If a C–H bond is vibrationally over-excited, intramolecular vibrational relaxation takes about 490 picoseconds. The F–C–C–F dihedral angle is about 72°. Natural bond orbital deletion calculations show that 1,2-difluoroethane prefers the gauche conformation due to hyperconjugation effects. Since F is much more electronegative than C, it draws the greater electron density in the carbon–fluorine bonding orbital; the carbon therefore carries the larger share of the σ* orbital, which is stabilized through C–H hyperconjugation. These cis C–H to C–F σ* interactions are significant. The dihedral angle of about 72° results from the balance between decreasing hyperconjugative stabilization and decreasing steric destabilization. Reactions CH2FCH2F reacts with chlorine when treated with light. Two products are formed: CH2FCCl2F and CHClFCHClF. The proportion of each depends on the solvent. Uses 1,2-Difluoroethane is primarily used in refrigerants (39%), foam blowing agents (17%), solvents (14%), fluoropolymers (14%), sterilant gas (2%), aerosol propellants (2%), food freezant (1%), other uses (8%) and exports (3%). Safety 1,2-Difluoroethane is toxic when inhaled or when it comes into direct contact with the skin. Fluorocarbons are 4 to 5 times heavier than air, so the gas tends to concentrate in low-lying areas, which increases the risk of inhalation. 1,2-Difluoroethane is toxic to humans through several mechanisms. First, because of its high density it can displace oxygen in the lungs, causing suffocation. In addition, inhaled fluorocarbons make the myocardium more sensitive to catecholamines, which can result in deadly cardiac arrhythmias. When inhaled by rats, 1,2-difluoroethane is converted by cytochrome P450 to fluoroacetate and then to fluorocitrate, both of which are toxic. A concentration of 100 parts per million in the atmosphere was sufficient to poison rats in 30 minutes and to kill them in four hours. 1,2-Difluoroethane is likely to be similarly toxic to humans. Environmental fate 1,2-Difluoroethane can enter the environment in various ways, one of which is volatilization from rivers and lakes. Henry's law estimates the volatilization half-life as about 2.4 hours from a model river and 3.2 days from a model lake. When 1,2-difluoroethane is released to the environment, it ends up in the atmosphere, where it is degraded by reaction with hydroxyl radicals and oxygen:
CH2FCH2F + OH → CH2FCHF + H2O
CH2FCHF + O2 → CH2FCHFO2 (peroxy radical)
CH2FCHFO2 + NO → CH2FCHFO (alkoxy radical)
When catalysed by chlorine atoms and oxidised by nitrogen oxides, the end product is HCOF, which can decompose further to HF and CO. The half-life in air is between 140 and 180 days. Control 1,2-Difluoroethane is a greenhouse gas when released to the atmosphere.
Its greenhouse warming effect is equivalent to 140 times that of carbon dioxide, and as such it may be controlled by government regulation. The Australian government classifies 1,2-difluoroethane as an exotic synthetic greenhouse gas. References Fluoroalkanes Hydrofluorocarbons Organic compounds with 2 carbon atoms
1,2-Difluoroethane
Chemistry
1,023
40,205,819
https://en.wikipedia.org/wiki/ARQ%20%28journal%29
ARQ is a professional magazine published by the School of Architecture of the Pontifical Catholic University of Chile. It publishes articles on a wide range of architecture-related topics, primarily on issues that are relevant to Chile and South America. Each issue is centered on one theme. References External links Architecture journals Architecture in Chile Pontifical Catholic University of Chile academic journals Academic journals established in 1980 1980 establishments in Chile
ARQ (journal)
Engineering
79
614,794
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Astrophysics
The Max Planck Institute for Astrophysics (MPA) is a research institute located in Garching, just north of Munich, Bavaria, Germany. It is one of many scientific research institutes belonging to the Max Planck Society. The MPA is widely considered to be one of the leading institutions in the world for theoretical astrophysics research. According to Thomson Reuters, from 1999-2009 the Max Planck Society as a whole published more papers and accumulated more citations in the fields of physics and space science than any other research organization in the world. History The Max Planck Society was founded on 26 February 1948. It effectively replaced the Kaiser Wilhelm Society for the Advancement of Science, which was dissolved after World War II. The society is named after Max Planck, one of the founders of quantum theory. The MPA was founded as the Max Planck Institute for Physics and Astrophysics in 1958 and split into the Max Planck Institute for Astrophysics and the Max Planck Institute for Physics in 1991. In 1995, the numerical relativity group moved to the Max Planck Institute for Gravitational Physics. Organization The MPA is one of several Max Planck Institutes that specialize in astronomy and astrophysics. Others are the Max Planck Institute for Extraterrestrial Physics in Garching (located next-door to the MPA), the Max Planck Institute for Astronomy in Heidelberg, the Max Planck Institute for Radio Astronomy in Bonn, the Max Planck Institute for Solar System Research in Göttingen, and the Max Planck Institute for Gravitational Physics (a.k.a. Albert Einstein Institute) in Golm. The institute is located next-door to the MPI for Extraterrestrial Physics, as well as the headquarters of the European Southern Observatory. It also enjoys close working relationships with the Ludwig Maximilian University of Munich and Technical University Munich. At any given time, the institute employs approximately 50 scientists, instructs over 30 PhD students, and hosts about 20 visiting scientists (some 60 visitors stay for longer than 2 weeks in any given year). As of 2021, the four directors of the MPA are Selma de Mink, Guinevere Kauffmann, Eiichiro Komatsu, and Volker Springel. Previous directors include Ludwig Biermann (1958 – 75), Rudolf Kippenhahn (1975 – 91), Simon White (1994 – 2019), Rashid Sunyaev (1995 – 2018), Wolfgang Hillebrandt (1997 – 2009) and Martin Asplund (2007 – 2011). Science Focusing on theoretical investigations, the MPA covers a wide range of topics in astrophysics. These include: Cosmology, in particular galaxy formation and evolution, reionization, and the cosmic microwave background; High-energy astronomy and astrophysics, including supermassive black holes, galaxy clusters, active galactic nuclei and quasars, X-ray binaries, and accretion discs; Stellar physics, including stellar evolution and stellar explosions such as supernovae and gamma-ray bursts. Public outreach The MPA works to explain astrophysical concepts and disseminate its findings to the public. These activities include popular science articles written by MPA scientists, events hosting school groups, events open to the general public, and monthly research highlights written for a general audience. Graduate program The International Max Planck Research School (IMPRS) for Astrophysics is a graduate program offering a PhD in astrophysics. The school is a cooperation with the Ludwig Maximilian University of Munich and Technical University Munich. 
External links Homepage of the Max Planck Institute for Astrophysics Homepage of the International Max Planck Research School (IMPRS) for Astrophysics References Astrophysics Astrophysics research institutes 1958 establishments in West Germany Garching bei München
Max Planck Institute for Astrophysics
Physics
748
70,428,261
https://en.wikipedia.org/wiki/2022%20United%20Nations%20Biodiversity%20Conference
The 2022 United Nations Biodiversity Conference of the Parties (COP15) to the UN Convention on Biological Diversity (CBD) was a conference held in Montreal, Canada, which led to the international agreement to protect 30% of land and oceans by 2030 (30 by 30) and the adoption of the Kunming-Montreal Global Biodiversity Framework. History The conference was originally scheduled to be held in October 2020 but was delayed due to the COVID-19 pandemic. It was rescheduled for April 2022 in Kunming, China, but on 29 March the UN secretariat announced it had been postponed a fourth time, to the third quarter of 2022, because of China's zero-COVID policy. In May 2022, China asked Canada to assume hosting responsibilities. The Canadian Minister of Environment and Climate Change, Steven Guilbeault, met with representatives of the High Ambition Coalition in early June 2022, and these representatives asked Canada to host COP15. The Prime Minister of Canada, Justin Trudeau, approved the proposal. In June 2022, the UN secretariat for the Convention on Biological Diversity and China's environment ministry said in separate statements that the meeting would be held in December 2022 in Montreal, Canada, where the secretariat is based, though China would remain the president of the summit. This arrangement is consistent with previous practice of moving the meeting to a different country, such as the 2017 United Nations Climate Change Conference (Fiji held the presidency while Germany organized the meeting for practical purposes) and the 2019 United Nations Climate Change Conference (Chile retained the presidency despite the meeting being moved to Spain due to political instability in Chile). While the host countries of previous COPs had one to two years to organize the conference, Canada had just five months to prepare for the arrival of 18,000 delegates from 196 CBD member states, non-governmental organizations, industry groups and academia. This was the second time Montreal served as the host city for a UN Conference of the Parties meeting, the first being the COP11 climate change conference in 2005. Montreal had also played host to the negotiations for the Montreal Protocol. Development Lead-up Several cities signed the "Montreal Pledge" in advance of the conference, committing to protect biodiversity in their cities through 15 actions. Negotiations and adoption During the talks, divisions remained on numerous issues as the conference went into its final days, such as disputes over funding for conservation efforts. There was also discussion that protections for marine biodiversity could be dropped completely. An op-ed published in The Guardian in mid-December criticized the proceedings as very slow and lacking urgency. On December 19, almost every country on earth signed onto the agreement, which includes protecting 30% of land and oceans by 2030 (30 by 30) and 22 other targets intended to reduce biodiversity loss. When the agreement was signed, only 17% of land territory and 10% of ocean territory were protected. The agreement includes protecting the rights of indigenous peoples and changing the current subsidy policy to one better suited to biodiversity protection. However, it takes a step backward from the Aichi Targets in protecting species from extinction. Some countries said the agreement does not go far enough to protect biodiversity, and that the process was rushed. Only the United States and the Holy See did not join it.
The absence of the United States' signature weakened the agreement. However, the country helped to reach the agreement, strongly advanced some of the targets mentioned in it, especially 30 by 30, both nationally and internationally, and is a major donor on biodiversity protection issues. Content In addition to protecting 30% of land and oceans by 2030, the agreement also includes restoring 30% of Earth's degraded ecosystems and increasing funding for biodiversity issues. Other targets for the year 2030 include cutting overconsumption and waste, reducing food waste by 50%, and completely halting harm to ecosystems that are critically important for biodiversity. There are also four targets for the year 2050, which include increasing the area of natural ecosystems, restoring their integrity and normal functioning, reducing the human-caused extinction rate tenfold, and protecting traditional knowledge. COP15 adopted a comprehensive package of six items:
L25: Kunming-Montreal Global Biodiversity Framework (GBF)
L26: Monitoring framework for the Kunming-Montreal Global Biodiversity Framework
L27: Mechanisms for planning, monitoring, reporting and review
L28: Capacity-building and development and technical and scientific cooperation
L29: Resource mobilization
L30: Digital sequence information on genetic resources
The advocacy of the UNCBD Women's Caucus and its members led a Rio Convention, for the first time in its 30-year history, to adopt a stand-alone target, Target 23, on gender equality in the Kunming-Montreal Global Biodiversity Framework. See also Global Assessment Report on Biodiversity and Ecosystem Services 2024 United Nations Biodiversity Conference References External links COP15: The UN Biodiversity Conference The 15th Conference of Parties (COP15) to the Convention on Biological Diversity (CBD) official documents Biodiversity History of Kunming United Nations conferences on the environment Convention on Biological Diversity 2022 in the environment Events postponed due to the COVID-19 pandemic 2022 in international relations Events in Montreal Diplomatic conferences in Canada China and the United Nations Canada and the United Nations December 2022 events in Canada 2022 in Quebec
2022 United Nations Biodiversity Conference
Biology
1,065
69,876,148
https://en.wikipedia.org/wiki/Weather%20of%202002
The following is a list of weather events that occurred on Earth in 2002. There were several natural disasters around the world from various types of weather, including blizzards, cold waves, droughts, heat waves, tornadoes, and tropical cyclones. The deadliest disaster was a heat wave in India in May, which killed more than 1,030 people. The costliest event of the year was a flood in Europe in August, which killed 232 people and caused the equivalent of US$27.115 billion in damage. In September, Typhoon Rusa struck South Korea, killing at least 213 people and causing at least ₩5.148 trillion (US$4.2 billion) in damage. Winter storms and cold waves In October, Cyclone Jeanett killed 33 people when it moved across Europe. In December, an ice storm affected North Carolina, killing 24 people. Droughts, heat waves, and wildfires In May, a heat wave in India killed more than 1,030 people. A drought affected much of North America. Floods In February, flash floods affected the Bolivian capital, La Paz, killing 69 people. On March 31, flash floods in the Canary Islands killed eight people and caused damage. In June, floods in northern Chile killed 17 people. In August, widespread floods occurred throughout Europe, killing 232 people and causing the equivalent of US$27.115 billion in damage. Tornadoes There were 934 tornadoes in the United States alone, collectively resulting in 55 deaths. A tornado outbreak in November killed 36 people. Tropical cyclones The year began with Tropical Storm Cyprien developing near Madagascar, Cyclone Bernie developing off northern Australia, Cyclone Waka moving away from Tonga, and a weak tropical depression near the Solomon Islands. There were a further 15 tropical cyclones in the south-west Indian Ocean in the year, including Cyclone Dina, which caused 15 deaths in the Mascarene Islands, and Cyclone Kesiny, which killed 33 people in Madagascar. The year ended with Tropical Storm Delfina moving ashore in Mozambique. In the Australian region, nine tropical cyclones developed in the year after Bernie, including the powerful Cyclone Chris, which struck Western Australia. In the South Pacific, 16 tropical cyclones developed after Waka. The year ended with Cyclone Zoe moving away from Fiji, three days after it became the second-most intense tropical cyclone on record within the Southern Hemisphere. The first storm to develop in the Northern Hemisphere was Tropical Storm Tapah, on January 9, east of the Philippines. There were a total of 36 tropical cyclones that year. Among the storms were Typhoon Rusa, the most powerful typhoon to strike South Korea in 43 years, which caused at least 213 fatalities and ₩5.148 trillion (US$4.2 billion) in damage. Tropical Storm Kammuri killed 153 people in China. Mudslides caused by Typhoon Chataan killed 47 people in the Federated States of Micronesia, making it the deadliest natural disaster in the history of Chuuk State. In the North Indian Ocean, there were seven tropical cyclones, beginning with a cyclonic storm that struck Oman in May. In November, a cyclonic storm struck West Bengal, killing 173 people. There were 19 tropical cyclones in the eastern Pacific, including three Category 5 hurricanes – Elida, Hernan, and Kenna. The last of the three, Kenna, also struck southwestern Mexico. In the Atlantic Ocean, there were 14 tropical cyclones, nine of which formed in September, including hurricanes Isidore and Lili, which moved through the Caribbean and into the southern United States.
References Weather by year Weather-related lists 2002-related lists
Weather of 2002
Physics
733
60,090,599
https://en.wikipedia.org/wiki/ANNOVAR
ANNOVAR (ANNOtate VARiation) is a bioinformatics software tool for the interpretation and prioritization of single nucleotide variants (SNVs), insertions, deletions, and copy number variants (CNVs) in a given genome. It can annotate the human genome builds hg18, hg19 and hg38, as well as model organism genomes such as mouse (Mus musculus), zebrafish (Danio rerio), fruit fly (Drosophila melanogaster), roundworm (Caenorhabditis elegans), yeast (Saccharomyces cerevisiae) and many others. The annotations can be used to determine the functional consequences of mutations on genes and organisms, infer cytogenetic bands, report functional importance scores, and find variants in conserved regions. ANNOVAR, along with SnpEff (SNP effect) and the Variant Effect Predictor (VEP), is one of the three most commonly used variant annotation tools. Background The cost of high-throughput DNA sequencing fell drastically from around $100 million per human genome in 2001 to around $1,000 per human genome in 2017. Due to this increase in accessibility, high-throughput DNA sequencing has become more widely used in research and clinical settings. Areas that use high-throughput DNA sequencing extensively include whole exome sequencing (WES), whole genome sequencing (WGS), and genome-wide association studies (GWAS). A growing number of tools seek to comprehensively manage, analyze and interpret the enormous amount of data generated by high-throughput DNA sequencing. These tools must be efficient and robust enough to analyze large numbers of variants (more than 3 million in a human genome), yet sensitive enough to identify rare, clinically relevant variants that are likely harmful or deleterious. ANNOVAR was developed by Kai Wang in 2010 at the Center for Applied Genomics in the Children's Hospital of Philadelphia. It is a variant annotation tool that compiles deleteriousness prediction scores for genetic variants from resources such as PolyPhen, ClinVar, and CADD, and annotates the SNVs, insertions, deletions, and CNVs of the provided genome. ANNOVAR was one of the first efficient, configurable, extensible and cross-platform-compatible variant annotation tools. In terms of the larger bioinformatics workflow, ANNOVAR fits in near the end, after DNA sequencing reads have been mapped and aligned and variants have been predicted from an alignment file (BAM), a process known as variant calling. Variant calling produces a VCF file, a tab-delimited text file with a tabular structure, containing genetic variants as rows. This file can then be used as input to the ANNOVAR software program, which outputs interpretations of the variants identified by the upstream bioinformatics pipeline. Types of functional annotation of genetic variants Gene-based annotation This approach identifies whether the input variants cause protein-coding changes and which amino acids are affected by the mutations. The input variants may lie in exons, introns, intergenic regions, splice acceptor/donor sites, and 5′/3′ untranslated regions. The focus is on exploring the relationship between non-synonymous mutations (SNPs, indels, or CNVs) and their functional impact on known genes. In particular, gene-based annotation highlights the exact amino acid change if the mutation is in an exonic region, together with the predicted effect on the function of the known gene. This approach is useful for identifying variants in known genes from whole exome sequencing data.
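A representative gene-based run, based on the tool's documented command-line interface, first downloads a gene-definition database and then annotates an input file (the names example.avinput and humandb/ are placeholder conventions, not fixed requirements):

# One-time setup: download the refGene database for the hg19 human genome build
perl annotate_variation.pl -downdb -webfrom annovar -buildver hg19 refGene humandb/
# Gene-based annotation: classify each variant (exonic, intronic, splicing, ...)
perl annotate_variation.pl -geneanno -buildver hg19 example.avinput humandb/

The gene-based run writes one output file classifying every variant by genomic function and a second detailing the amino acid changes for exonic variants.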
Region-based annotation This approach identifies deleterious variants in specific genomic regions based on the genomic elements around the gene. Some of the questions region-based annotation takes into account are:
Is the variant in a known conserved genomic region? Mutations occur during mitosis and meiosis. If there were no selective pressure for specific nucleotide sequences, all areas of a genome would mutate at equal rates. Genomic regions that are highly conserved indicate sequences that are essential to the organism's survival and/or reproductive success. Thus, if a variant disrupts a highly conserved region, it is likely to be highly deleterious.
Is the variant in a predicted transcription factor binding site? DNA is transcribed into messenger RNA (mRNA) by RNA polymerase II. This process can be modulated by transcription factors, which can enhance or inhibit the binding of RNA polymerase II. If the variant disrupts a transcription factor binding site, transcription of the gene could be altered, changing the gene expression level and/or the amount of protein produced. Such changes can cause phenotypic variation.
Is the variant in a predicted miRNA target site? MicroRNA (miRNA) is a type of RNA that binds complementarily to a targeted mRNA sequence to suppress or silence translation of the mRNA. If the variant disrupts the miRNA target site, the miRNA could have altered binding affinity for the corresponding gene transcript, changing the mRNA expression level of the transcript. This could in turn affect protein production levels and cause phenotypic variation.
Is the variant predicted to interrupt a stable RNA secondary structure? RNA can function at the RNA level as non-coding RNA or be translated into protein for downstream processes. RNA secondary structures are extremely important in determining the correct half-life and function of those RNAs. Two RNA species with tightly regulated secondary structures are ribosomal RNA (rRNA) and transfer RNA (tRNA), which are essential in the translation of mRNA to protein. If a variant disrupts the stability of an RNA secondary structure, the half-life of the RNA could be shortened, lowering the concentration of that RNA in the cell.
Non-coding regions encompass 99% of the human genome, and region-based annotation is extremely useful in identifying variants in those regions. This approach can be used on WGS data. Filter-based annotation This approach identifies variants that are documented in specific databases. The variants could be drawn from dbSNP, the 1000 Genomes Project, or a user-supplied list. Additional information can be obtained from the frequency of the variants in the above databases or from predicted deleteriousness scores produced by PolyPhen, CADD, ClinVar and many others. The more infrequently a variant appears in public databases, the more deleterious it is likely to be. Results from different deleteriousness-score prediction tools can be combined by the researcher to make a more accurate call on the variant. Taken together, these approaches complement one another in filtering through the more than 4 million variants in a human genome. Common variants with low deleteriousness scores are eliminated to reveal the rare, high-scoring variants that could be causal for congenital diseases.
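For comparison, region-based and filter-based runs use the same script with a different mode flag and database type; the following is a sketch in which the choice of databases (cytoBand and the 1000 Genomes allele frequencies) is illustrative and assumes those datasets have been downloaded:

# Region-based annotation: report the cytogenetic band overlapping each variant
perl annotate_variation.pl -regionanno -dbtype cytoBand -buildver hg19 example.avinput humandb/
# Filter-based annotation: flag variants already catalogued in the 1000 Genomes Project
perl annotate_variation.pl -filter -dbtype 1000g2015aug_all -buildver hg19 example.avinput humandb/

The filter run separates variants found in the database from those absent from it, which is the mechanism behind the variant-reduction strategy described above.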
Technical information ANNOVAR is a command-line tool written in the Perl programming language and can be run on any operating system that has a Perl interpreter installed. If used for non-commercial purposes, it is available free as an open-source package downloadable through the ANNOVAR website. ANNOVAR can process most next-generation sequencing data that has been run through variant calling software. File formats The ANNOVAR software accepts text-based input files, including VCF (Variant Call Format), the gold standard for describing genetic loci. The program's main annotation script, annotate_variation.pl, requires a custom input file format, the ANNOVAR input format (.avinput). Common file types can be converted to the ANNOVAR input format using a provided script (see below). It is a simple text file in which each line corresponds to a variant, and within each line are tab-delimited columns representing the basic genomic coordinate fields, followed by optional columns. The ANNOVAR input file contains the following basic fields: Chr, Start, End, Ref, Alt. For basic "out-of-the-box" usage, a popular function of the ANNOVAR tool is the table_annovar.pl script, which simplifies the workflow into a single command-line call, given that the data sources for annotation have already been downloaded. File conversion from a VCF file is handled within the function call, followed by annotation and output to an Excel-compatible file. The script takes a number of parameters for annotation and outputs a VCF file with the annotations as key-value pairs inside the INFO column of the VCF file for each genetic variant, e.g. "genomic_function=exonic". Conversion to the ANNOVAR input file format File conversion to the ANNOVAR input format is possible using the provided file format conversion script convert2annovar.pl. The program accepts common file formats output by upstream variant calling tools. Subsequent functional annotation scripts such as annotate_variation.pl use the ANNOVAR input file. File formats accepted by convert2annovar.pl include the following:
Variant Call Format
Samtools genotype-calling pileup format
Illumina export format from GenomeStudio
SOLiD GFF genotype-calling format
Complete Genomics variant format
Generating input files based on specific variants, transcripts, or genomic regions When investigating candidate loci that are linked to diseases, using the above variant calling file formats as input to ANNOVAR is a standard workflow for functional annotation of genetic variants output by an upstream bioinformatics pipeline. ANNOVAR can also be used in other scenarios, such as interrogating a set of genetic variants of interest based on a list of dbSNP identifiers, as well as variants within specific genomic or exonic regions. In the case of dbSNP identifiers, when the convert2annovar.pl script is provided a list of identifiers (e.g. rs41534544, rs4308095, rs12345678) in a text file along with the reference genome of interest as a parameter, ANNOVAR will output an ANNOVAR input file with the genomic coordinate fields for those variants, which can then be used for functional annotation. In the case of genomic regions, one can provide a genomic range of interest (e.g. chr1:2000001-2000003) along with the reference genome of interest, and ANNOVAR will generate an ANNOVAR input file of all the genetic loci spanning that range. In addition, an insertion or deletion size can be specified, in which case the script will select all the genetic loci where an insertion or deletion of the specified size is found. Last, for variants within specific exonic regions, users can generate ANNOVAR input files for all possible variants in exons (including splicing variants) when the convert2annovar.pl script is provided an RNA transcript identifier (e.g. NM_022162) based on the standard HGVS (Human Genome Variation Society) nomenclature.
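A minimal conversion sketch, assuming a VCF file named input.vcf and, for the identifier-based scenario, a locally downloaded dbSNP file (both file names are hypothetical):

# Convert a VCF file into the five-column ANNOVAR input format
perl convert2annovar.pl -format vcf4 input.vcf > input.avinput
# Build an input file from a list of dbSNP identifiers (one rs number per line)
perl convert2annovar.pl -format rsid snplist.txt -dbsnpfile humandb/hg19_snp138.txt > snplist.avinput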
Output file The possible output files are an annotated .avinput file, CSV, TSV, or VCF. Depending on the annotation strategy taken, the input and output files will differ. It is possible to configure the output file type for a given input file by providing the program the appropriate parameter. For example, for the table_annovar.pl program, if the input file is VCF, then the output will also be a VCF file. If the input file is of the ANNOVAR input format type, then the output will be a TSV by default, with the option to output CSV if the -csvout parameter is specified. By choosing CSV or TSV as the output file type, a user can open the files to view the annotations in Excel or another spreadsheet application; this is a popular feature among users. The output file will contain all the data from the original input file, with additional columns for the desired annotations. For example, when annotating variants with characteristics such as (1) genomic function and (2) the functional role of the coding variant, the output file will contain all the columns from the input file, followed by additional columns "genomic_function" (e.g. with values "exonic" or "intronic") and "coding_variant_function" (e.g. with values "synonymous SNV" or "non-synonymous SNV").
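As an illustration, a single table_annovar.pl call can apply several annotation protocols at once; the sketch below assumes that the refGene, cytoBand and exac03 databases have already been downloaded into humandb/ and that the input is a VCF file:

# One-step annotation: g = gene-based, r = region-based, f = filter-based
perl table_annovar.pl input.vcf humandb/ -buildver hg19 -out myanno \
  -remove -protocol refGene,cytoBand,exac03 -operation g,r,f -nastring . -vcfinput

With an .avinput file as input, replacing -vcfinput with -csvout would instead produce a comma-separated table that opens directly in a spreadsheet application.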
System efficiency Benchmarked on a modern desktop computer (3 GHz Intel Xeon CPU, 8 GB memory), for 4.7 million variants, ANNOVAR requires ~4 minutes to perform gene-based functional annotation, or ~15 minutes to perform stepwise "variants reduction". It is said to be practical for performing variant annotation and variant prioritization on hundreds of human genomes in a day. ANNOVAR can be sped up using the -thread argument, which enables multi-threading so that input files can be processed in parallel. Data resources To use ANNOVAR for functional annotation of variants, annotation datasets can be downloaded using the annotate_variation.pl script, which saves them to local disk. Different annotation data sources are used for the three major types of annotation (gene-based, region-based, and filter-based). These are some of the data sources for each annotation type:
Gene-based annotation: UCSC/Ensembl genes, hg38 GENCODE/CCDS
Region-based annotation: ENCODE, and custom-made databases conforming to GFF3 (Generic Feature Format version 3)
Filter-based annotation: given the large number of data sources for filter-based annotation, here are examples of which subsets of the datasets to use for a few of the most common use cases.
For frequency of variants in whole-exome data:
ExAC: with allele frequencies for all ethnic groups
NHLBI-ESP: from 6500 exomes, with three population groupings
gnomAD: with allele frequencies for multiple populations
For disease-specific variants:
ClinVar: with individual columns for each ClinVar field for each variant
COSMIC: somatic mutations from cancer and the frequency of occurrence in each subtype of cancer
ICGC: mutations from the International Cancer Genome Consortium
NCI-60: human tumor cell panel exome sequencing allele frequency data
Example application Using ANNOVAR for prioritization of genetic variants to identify mutations in a rare genetic disease ANNOVAR is one of the common annotation tools for identifying candidate and causal mutations and genes in rare genetic diseases. Using a combination of gene-based and filter-based annotation, followed by variant reduction based on the annotation values of the variants, the causal gene in a rare recessive Mendelian disease called Miller syndrome can be identified. This involves synthesizing a genome-wide data set of ~4.2 million single nucleotide variants (SNVs) and ~0.5 million insertions and deletions (indels); two known causal mutations for Miller syndrome (G152R and G202A in the DHODH gene) are also included. Steps in identifying the causal variants for the disease using ANNOVAR:
Gene-based annotation identifies exonic/splicing variants among the combined SNVs and indels (~4.7 million variants); a total of 24,617 exonic variants are found. Since Miller syndrome is a rare Mendelian disease, only exonic protein-changing variants are of interest, which number 11,166. From these, 4,860 variants are identified that fall in highly conserved genomic regions.
As public databases such as dbSNP and the 1000 Genomes Project archive previously reported variants, which are often common, it is unlikely that they contain the Miller syndrome causal variants, which are rare. Hence, variants found in those data sources are filtered out, and 413 variants remain.
Then, genes are assessed for whether multiple variants exist in the same gene as compound heterozygotes, leaving 23 genes.
Finally, 'dispensable' genes are removed: those with high-frequency nonsense mutations (in greater than 1% of subjects in the 1000 Genomes Project), which are susceptible to sequencing and alignment errors on short-read sequencing platforms. These genes are considered less likely to be causal of a rare Mendelian disease. Three genes are thereby filtered out, and 20 candidate genes remain, including the causal gene DHODH.
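A variant-reduction pipeline along these lines can be expressed with the variants_reduction.pl script. The following is an illustrative sketch only: the protocol and database names shown are assumptions that depend on the ANNOVAR version and the databases installed locally, and the published Miller syndrome analysis used additional filtering steps:

# Stepwise variant reduction: keep non-synonymous/splicing variants (operation g),
# drop variants catalogued in 1000 Genomes and ESP6500 (operations f,f),
# then apply a recessive disease model (operation m)
perl variants_reduction.pl example.avinput humandb/ -buildver hg19 -out reduced \
  -protocol nonsyn_splicing,1000g2015aug_all,esp6500siv2_all,recessive \
  -operation g,f,f,m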
Larger structural variants (SVs) such as chromosomal inversions, translocations, and complex SVs have been shown to cause diseases such as haemophilia A and Alzheimer's disease. However, SVs are often difficult to annotate because it is difficult to assign specific deleterious scores to large mutated genomic regions. Currently, ANNOVAR can only annotate genes contained within deletions or duplications, or small indels of <50 bp; it cannot infer complex SVs or translocations. Alternate variant annotation tools There are two other SNP annotation tools that are similar to ANNOVAR: SnpEff (SNP effect) and the Variant Effect Predictor (VEP). Many of the features of ANNOVAR, SnpEff, and VEP are the same, including the input and output file formats, regulatory region annotations, and known variant annotations. However, the main differences are that ANNOVAR cannot annotate for loss-of-function predictions, whereas both SnpEff and VEP can, and that ANNOVAR cannot annotate microRNA structural binding locations, whereas VEP can. MicroRNA structural binding location predictions can be informative in revealing the role of post-transcriptional mutations in disease pathogenesis. Loss-of-function mutations are changes in the genome that result in the total dysfunction of the gene product; such predictions can therefore be extremely informative for disease diagnosis, especially in rare monogenic diseases. (Feature comparison adapted from McLaren et al. (2016).) References Bioinformatics software Genetics software Genomics techniques
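As a concrete illustration of the input/output behaviour described in the Output file section, a run of table_annovar.pl might be wrapped from Python as below. This is a hedged sketch: the file names, database list, and build version are illustrative choices rather than prescriptions from the ANNOVAR documentation, so the flags should be checked against the installed version before relying on them.

```python
# Hypothetical wrapper around ANNOVAR's table_annovar.pl; paths, build
# version, and protocol/database names are illustrative assumptions.
import subprocess

def run_table_annovar(vcf_path: str, humandb_dir: str, out_prefix: str) -> None:
    """Annotate a VCF with gene-, region- and filter-based databases.

    Because the input is a VCF, table_annovar.pl will also emit a VCF
    (as described above); -csvout would instead apply to .avinput input.
    """
    cmd = [
        "table_annovar.pl", vcf_path, humandb_dir,
        "-buildver", "hg38",      # reference genome build
        "-out", out_prefix,       # prefix for the output files
        "-remove",                # delete intermediate files afterwards
        "-protocol", "refGene,cytoBand,gnomad_genome",  # one database per step
        "-operation", "g,r,f",    # gene-, region-, filter-based, in -protocol order
        "-nastring", ".",         # placeholder for missing annotations
        "-vcfinput",              # input (and hence output) is VCF
    ]
    subprocess.run(cmd, check=True)

run_table_annovar("sample.vcf", "humandb/", "sample_annotated")
```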
ANNOVAR
Chemistry,Biology
4,038
19,594,028
https://en.wikipedia.org/wiki/Theoretical%20physics
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain, and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena. The advancement of science generally depends on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations. For example, while developing special relativity, Albert Einstein was concerned with the Lorentz transformation which left Maxwell's equations invariant, but was apparently uninterested in the Michelson–Morley experiment on Earth's drift through a luminiferous aether. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation. Overview A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory differs from a mathematical theorem in that while both are based on some form of axioms, judgment of mathematical applicability is not based on agreement with any experimental results. A physical theory similarly differs from a mathematical theory, in the sense that the word "theory" has a different meaning in mathematical terms. A physical theory involves one or more relationships between various measurable quantities. Archimedes realized that a ship floats by displacing its mass of water, while Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles and the quantum mechanical idea that (action and) energy are not continuously variable. Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example. For instance: "phenomenologists" might employ (semi-)empirical formulas and heuristics to agree with experimental results, often without deep physical understanding. "Modelers" (also called "model-builders") often appear much like phenomenologists, but try to model speculative theories that have certain desirable features (rather than being guided by experimental data), or apply the techniques of mathematical modeling to physics problems. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated. Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled; e.g., the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics.
Theoretical advances may consist in setting aside old, incorrect paradigms (e.g., aether theory of light propagation, caloric theory of heat, burning consisting of evolving phlogiston, or astronomical bodies revolving around the Earth) or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result. Sometimes though, advances may proceed along different paths. For example, an essentially correct theory may need some conceptual or factual revisions; atomic theory, first postulated millennia ago (by several thinkers in Greece and India), and the two-fluid theory of electricity are two cases in point. However, an exception to all the above is the wave–particle duality, a theory combining aspects of different, opposing models via the Bohr complementarity principle. Physical theories become accepted if they are able to make correct predictions and no (or few) incorrect ones. The theory should have, at least as a secondary objective, a certain economy and elegance (compare to mathematical beauty), a notion sometimes called "Occam's razor" after the medieval English philosopher William of Occam (or Ockham), in which the simpler of two theories that describe the same matter just as adequately is preferred (but conceptual simplicity may mean mathematical complexity). Theories are also more likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method. Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories. History Theoretical physics began at least 2,300 years ago, with Pre-Socratic philosophy, and was continued by Plato and Aristotle, whose views held sway for a millennium. During the rise of medieval universities, the only acknowledged intellectual disciplines were the seven liberal arts: the Trivium (grammar, logic, and rhetoric) and the Quadrivium (arithmetic, geometry, music, and astronomy). During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon. As the Scientific Revolution gathered pace, the concepts of matter, energy, space, time and causality slowly began to acquire the form we know today, and other sciences spun off from the rubric of natural philosophy. Thus began the modern era of theory with the Copernican paradigm shift in astronomy, soon followed by Johannes Kepler's expressions for planetary orbits, which summarized the meticulous observations of Tycho Brahe; the works of these men (alongside Galileo's) can perhaps be considered to constitute the Scientific Revolution. The great push toward the modern concept of explanation started with Galileo, one of the few physicists who was both a consummate theoretician and a great experimentalist. The analytic geometry and mechanics of Descartes were incorporated into the calculus and mechanics of Isaac Newton, another theoretician/experimentalist of the highest order, who wrote the Principia Mathematica. It contained a grand synthesis of the work of Copernicus, Galileo and Kepler, as well as Newton's theories of mechanics and gravitation, which held sway as worldviews until the early 20th century.
Simultaneously, progress was also made in optics (in particular colour theory and the ancient science of geometrical optics), courtesy of Newton, Descartes and the Dutchmen Snell and Huygens. In the 18th and 19th centuries, Joseph-Louis Lagrange, Leonhard Euler and William Rowan Hamilton would extend the theory of classical mechanics considerably. They picked up the interactive intertwining of mathematics and physics begun two millennia earlier by Pythagoras. Among the great conceptual achievements of the 19th and 20th centuries were the consolidation of the idea of energy (as well as its global conservation) by the inclusion of heat, electricity and magnetism, and then light. The laws of thermodynamics, and most importantly the introduction of the singular concept of entropy, began to provide a macroscopic explanation for the properties of matter. Statistical mechanics (followed by statistical physics and quantum statistical mechanics) emerged as an offshoot of thermodynamics late in the 19th century. Another important event in the 19th century was the development of electromagnetic theory, unifying the previously separate phenomena of electricity, magnetism and light. The pillars of modern physics, and perhaps the most revolutionary theories in the history of physics, have been relativity theory and quantum mechanics. Newtonian mechanics was subsumed under special relativity and Newton's gravity was given a kinematic explanation by general relativity. Quantum mechanics led to an understanding of blackbody radiation (which, indeed, was an original motivation for the theory) and of anomalies in the specific heats of solids — and finally to an understanding of the internal structures of atoms and molecules. Quantum mechanics soon gave way to the formulation of quantum field theory (QFT), begun in the late 1920s. In the aftermath of World War II, renewed interest arose in QFT, which had stagnated since the early efforts. The same period also saw fresh attacks on the problems of superconductivity and phase transitions, as well as the first applications of QFT in the area of theoretical condensed matter physics. The 1960s and 70s saw the formulation of the Standard Model of particle physics using QFT and progress in condensed matter physics (theoretical foundations of superconductivity and critical phenomena, among others), in parallel with the applications of relativity to problems in astronomy and cosmology. All of these achievements depended on theoretical physics as a moving force both to suggest experiments and to consolidate results — often by ingenious application of existing mathematics, or, as in the case of Descartes and Newton (with Leibniz), by inventing new mathematics. Fourier's studies of heat conduction led to a new branch of mathematics: infinite, orthogonal series. Modern theoretical physics attempts to unify theories and explain phenomena in further attempts to understand the Universe, from the cosmological to the elementary particle scale. Where experimentation cannot be done, theoretical physics still tries to advance through the use of mathematical models. Mainstream theories Mainstream theories (sometimes referred to as central theories) are the body of knowledge of both factual and scientific views and possess a usual scientific quality of the tests of repeatability, consistency with existing well-established science and experimentation.
There do exist mainstream theories that are generally accepted based solely upon their effects explaining a wide variety of data, although the detection, explanation, and possible composition are subjects of debate. Examples:
- Big Bang
- Chaos theory
- Classical mechanics
- Classical field theory
- Dynamo theory
- Field theory
- Ginzburg–Landau theory
- Kinetic theory of gases
- Classical electromagnetism
- Perturbation theory (quantum mechanics)
- Physical cosmology
- Quantum chromodynamics
- Quantum complexity theory
- Quantum electrodynamics
- Quantum field theory
- Quantum field theory in curved spacetime
- Quantum information theory
- Quantum mechanics
- Quantum thermodynamics
- Relativistic quantum mechanics
- Scattering theory
- Standard Model
- Statistical physics
- Theory of relativity
- Wave–particle duality

Proposed theories The proposed theories of physics are usually relatively new theories which deal with the study of physics, including scientific approaches, means for determining the validity of models, and new types of reasoning used to arrive at the theory. However, some proposed theories have been around for decades and have eluded methods of discovery and testing. Proposed theories can include fringe theories in the process of becoming established (and, sometimes, gaining wider acceptance). Proposed theories usually have not been tested. In addition to theories like those listed below, there are also different interpretations of quantum mechanics, which may or may not be considered different theories, since it is debatable whether they yield different predictions for physical experiments, even in principle. Examples include the AdS/CFT correspondence, Chern–Simons theory, the graviton, the magnetic monopole, string theory, and the theory of everything. Fringe theories Fringe theories include any new area of scientific endeavor in the process of becoming established and some proposed theories. This can include speculative sciences. It includes physics fields and physical theories presented in accordance with known evidence, for which a body of associated predictions has been made according to the theory. Some fringe theories go on to become a widely accepted part of physics. Other fringe theories end up being disproven. Some fringe theories are a form of protoscience and others are a form of pseudoscience. The falsification of the original theory sometimes leads to reformulation of the theory. Examples:
- Aether (classical element)
- Luminiferous aether
- Digital physics
- Electrogravitics
- Stochastic electrodynamics
- Tesla's dynamic theory of gravity

Thought experiments vs real experiments "Thought" experiments are situations created in one's mind, asking a question akin to "suppose you are in this situation, assuming such is true, what would follow?". They are usually created to investigate phenomena that are not readily experienced in everyday situations. Famous examples of such thought experiments are Schrödinger's cat, the EPR thought experiment, simple illustrations of time dilation, and so on. These usually lead to real experiments designed to verify that the conclusion (and therefore the assumptions) of the thought experiments are correct. The EPR thought experiment led to the Bell inequalities, which were then tested to various degrees of rigor, leading to the acceptance of the current formulation of quantum mechanics and probabilism as a working hypothesis.
See also: List of theoretical physicists, Philosophy of physics, Symmetry in quantum mechanics, Timeline of developments in theoretical physics, Double field theory. Notes References Further reading
- Duhem, Pierre. La théorie physique - Son objet, sa structure (in French). 2nd edition, 1914. English translation: The Physical Theory - Its Purpose, Its Structure. Republished by the Joseph Vrin philosophical bookstore (1981).
- Feynman et al. The Feynman Lectures on Physics (3 vols.). First edition: Addison–Wesley (1964, 1966). Bestselling three-volume textbook covering the span of physics; a reference for both the (under)graduate student and the professional researcher alike.
- Landau et al. Course of Theoretical Physics. Famous series of books dealing with theoretical concepts in physics, covering 10 volumes, translated into many languages and reprinted over many editions. Often known simply as "Landau and Lifshitz" or "Landau–Lifshitz" in the literature.
- Longair, M. S. Theoretical Concepts in Physics: An Alternative View of Theoretical Reasoning in Physics. Cambridge University Press; 2nd edition (4 Dec 2003).
- Planck, Max (1909). Eight Lectures on Theoretical Physics. Library of Alexandria. A set of lectures given in 1909 at Columbia University.
- Sommerfeld, Arnold. Vorlesungen über theoretische Physik (Lectures on Theoretical Physics); German, 6 volumes. A series of lessons from a master educator of theoretical physicists.

External links: MIT Center for Theoretical Physics; How to become a GOOD Theoretical Physicist, a website made by Gerard 't Hooft.
Theoretical physics
Physics
2,859
38,900,278
https://en.wikipedia.org/wiki/Tricholoma%20fumosoluteum
Tricholoma fumosoluteum is a mushroom of the agaric genus Tricholoma. First described by Charles Horton Peck in 1875 as Agaricus fumosoluteus, it was transferred to the genus Tricholoma by Pier Andrea Saccardo in 1887. See also List of North American Tricholoma References External links Fungi described in 1875 Fungi of North America fumosoluteum Taxa named by Charles Horton Peck Fungus species
Tricholoma fumosoluteum
Biology
96
45,346
https://en.wikipedia.org/wiki/Theories%20of%20urban%20planning
Planning theory is the body of scientific concepts, definitions, behavioral relationships, and assumptions that define the body of knowledge of urban planning. There is no single unified planning theory; rather, there are various theories. Whittemore identifies nine procedural theories that dominated the field between 1959 and 1983: the Rational-Comprehensive approach, the Incremental approach, the Transformative Incremental (TI) approach, the Transactive approach, the Communicative approach, the Advocacy approach, the Equity approach, the Radical approach, and the Humanist or Phenomenological approach. Background Urban planning can include urban renewal, by adapting urban planning methods to existing cities suffering from decline. Alternatively, it can concern the massive challenges associated with urban growth, particularly in the Global South. All in all, urban planning exists in various forms and addresses many different issues. The modern origins of urban planning lie in the movement for urban reform that arose as a reaction against the disorder of the industrial city in the mid-19th century. Many of the early influencers were inspired by anarchism, which was popular at the turn of the 19th and 20th centuries. The new imagined urban form was meant to go hand-in-hand with a new society, based upon voluntary co-operation within self-governing communities. In the late 20th century, the term sustainable development came to represent an ideal outcome in the sum of all planning goals. Sustainable architecture involves renewable materials and energy sources and is increasing in importance as an environmentally friendly solution. Blueprint planning Since at least the Renaissance and the Age of Enlightenment, urban planning had generally been assumed to be the physical planning and design of human communities. Therefore, it was seen as related to architecture and civil engineering, and thereby to be carried out by such experts. This kind of planning was physicalist and design-orientated, and involved the production of masterplans and blueprints which would show precisely what the 'end-state' of land use should be, similar to architectural and engineering plans. Similarly, the theory of urban planning was mainly interested in visionary planning and design which would demonstrate how the ideal city should be organised spatially. Sanitary movement Although it can be seen as an extension of the sort of civic pragmatism seen in Oglethorpe's plan for Savannah or William Penn's plan for Philadelphia, the roots of the rational planning movement lie in Britain's Sanitary movement (1800–1890). During this period, advocates such as Charles Booth argued for centrally organized, top-down solutions to the problems of industrializing cities. In keeping with the rising power of industry, the sources of planning authority in the Sanitary movement included both traditional governmental offices and private development corporations. In London and its surrounding suburbs, cooperation between these two entities created a network of new communities clustered around the expanding rail system. Garden city movement The Garden city movement was founded by Ebenezer Howard (1850-1928). His ideas were expressed in the book Garden Cities of To-morrow (1898).
His influences included Benjamin Ward Richardson, who had published a pamphlet in 1876 calling for low population density, good housing, wide roads, an underground railway and open space; Thomas Spence, who had supported common ownership of land and the sharing of the rents it would produce; Edward Gibbon Wakefield, who had pioneered the idea of colonizing planned communities to house the poor in Adelaide (including starting new cities, separated by green belts, once a certain size was reached); James Silk Buckingham, who had designed a model town with a central place, radial avenues and industry in the periphery; as well as Alfred Marshall, Peter Kropotkin and the back-to-the-land movement, which had all called for the moving of the masses to the countryside. Howard's vision was to combine the best of both the countryside and the city in a new environment called Town-Country. To make this happen, a group of individuals would establish a limited-dividend company to buy cheap agricultural land, which would then be developed with investment from manufacturers and housing for the workers. No more than 32,000 people would be housed in a settlement, spread over 1,000 acres. Around it would be a permanent green belt of 5,000 acres, with farms and institutions (such as mental institutions) which would benefit from the location. After reaching this limit, a new settlement would be started, connected by an inter-city railway, with the polycentric settlements together forming the "Social City". The lands of the settlements would be jointly owned by the inhabitants, who would use the rents received from them to pay off the mortgage necessary to buy the land and then invest the rest in the community through social security. Actual garden cities were built by Howard at Letchworth and Welwyn Garden City, along with the Brentham Garden Suburb. The movement would also inspire the later New towns movement. Linear city Arturo Soria y Mata's idea of the Linear city (1882) replaced the traditional idea of the city as a centre and a periphery with the idea of constructing linear sections of infrastructure (roads, railways, gas, water, etc.) along an optimal line and then attaching the other components of the city along the length of this line. As compared to the concentric diagrams of Ebenezer Howard and others of the same period, Soria's linear city creates the infrastructure for a controlled process of expansion that joins one growing city to the next in a rational way, instead of letting them both sprawl. The linear city was meant to 'ruralize the city and urbanize the countryside', and to be universally applicable as a ring around existing cities, as a strip connecting two cities, or as an entirely new linear town across an unurbanized region. The idea was later taken up by Nikolay Alexandrovich Milyutin in the planning circles of the 1920s Soviet Union. The Ciudad Lineal was a practical application of the concept. Regional planning movement Patrick Geddes (1864-1932) was the founder of regional planning. His main influences were the geographers Élisée Reclus and Paul Vidal de La Blache, as well as the sociologist Pierre Guillaume Frédéric le Play. From these he took the idea of the natural region. According to Geddes, planning must start by surveying such a region, by crafting a "Valley Section" which shows the general slope from mountains to the sea that can be identified across scales and places in the world, together with the natural environment and the cultural environments produced by it.
This was encapsulated in the motto "Survey before Plan". He saw cities as being changed by technology into more regional settlements, for which he coined the term conurbation. Similar to the garden city movement, he also believed in adding green areas to these urban regions. The Regional Planning Association of America advanced his ideas, coming up with the 'regional city', which would have a variety of urban communities across a green landscape of farms, parks and wilderness with the help of telecommunication and the automobile. This had major influence on the County of London Plan, 1944. City Beautiful movement The City Beautiful movement was inspired by 19th-century European capital cities such as Georges-Eugène Haussmann's Paris or the Vienna Ring Road. An influential figure was Daniel Burnham (1846-1912), who was the chief of construction of the World's Columbian Exposition in 1893. Urban problems such as the 1886 Haymarket affair in Chicago had created a perceived need among some of the elites to reform the morality of the city. Burnham's greatest achievement was the Chicago plan of 1909. His aim was "to restore to the city a lost visual and aesthetic harmony, thereby creating the physical prerequisite for the emergence of a harmonious social order", essentially creating social reform through slum clearance and the creation of new public space, which also earned it the support of the Progressivist movement. This was also believed to be economically advantageous by drawing in tourists and wealthy migrants. Because of this it has been referred to as "trickle-down urban development" and as "centrocentrist" for focusing only on the core of the city. Other major cities planned according to the movement's principles included the British colonial capitals of New Delhi, Harare, Lusaka, Nairobi and Kampala, as well as Canberra in Australia and Albert Speer's plan for the Nazi capital Germania. Towers in the park Le Corbusier (1887–1965) pioneered a new urban form called towers in the park. His approach was based on defining the house as 'a machine to live in'. The Plan Voisin he devised for Paris, which was never fulfilled, would have involved the demolition of much of historic Paris in favour of 18 uniform 700-foot tower blocks. The Ville Contemporaine and the Ville Radieuse formulated his basic principles, including decongesting the city while increasing density and open space by building taller on a smaller footprint. Wide avenues were also to be built to the city centre by demolishing old structures, an approach criticized for its lack of environmental awareness. His generic ethos of planning was based on the rule of experts who would "work out their plans in total freedom from partisan pressures and special interests" and that "once their plans are formulated, they must be implemented without opposition". His influence on the Soviet Union helped inspire the 'urbanists', who wanted to build planned cities full of massive apartment blocks in the Soviet countryside. The only city which he ever actually helped plan was Chandigarh in India. Brasília, planned by Oscar Niemeyer, was also heavily influenced by his thought. Both cities suffered from unplanned settlements growing outside them. Decentralised planning In the United States, Frank Lloyd Wright similarly identified vehicular mobility as a principal planning metric.
Car-based suburbs had already been developed in the Country Club District in 1907-1908 (later including the world's first car-based shopping centre, Country Club Plaza), as well as in Beverly Hills in 1914 and Palos Verdes Estates in 1923. Wright began to idealise this vision in his Broadacre City starting in 1924, with similarities to the garden city and regional planning movements. The fundamental idea was for technology to liberate individuals. In his Usonian vision, he described the city as "spacious, well-landscaped highways, grade crossings eliminated by a new kind of integrated by-passing or over- or under-passing all traffic in cultivated or living areas … Giant roads, themselves great architecture, pass public service stations . . . passing by farm units, roadside markets, garden schools, dwelling places, each on its acres of individually adorned and cultivated ground". This was justified as a democratic ideal: "Democracy is the ideal of reintegrated decentralization … many free units developing strength as they learn by function and grow together in spacious mutual freedom." This vision was, however, criticized by Herbert Muschamp as being contradictory in its call for individualism while relying on the master-architect to design it all. After World War II, suburbs similar to Broadacre City spread throughout the US, but without the social or economic aspects of his ideas. A notable example was Levittown, built from 1947 to 1951. The suburban designs were criticized for their lack of form by Lewis Mumford, as they lacked clear boundaries, and by Ian Nairn, because "Each building is treated in isolation, nothing binds it to the next one". In the Soviet Union too, the so-called deurbanists (such as Moisei Ginzburg and Mikhail Okhitovich) advocated the use of electricity and new transportation technologies (especially the car) to disperse the population from the cities to the countryside, with the ultimate aim of a "townless, fully decentralized, and evenly populated country". However, in 1931 the Communist Party ruled such views forbidden. Opposition to blueprint planning Throughout both the United States and Europe, the rational planning movement declined in the latter half of the 20th century. Key events in the United States include the demolition of the Pruitt-Igoe housing project in St. Louis and the national backlash against urban renewal projects, particularly urban expressway projects. An influential critic of such planning was Jane Jacobs, whose 1961 book The Death and Life of Great American Cities has been claimed to be "one of the most influential books in the short history of city planning". She attacked the garden city movement because its "prescription for saving the city was to do the city in" and because it "conceived of planning also as essentially paternalistic, if not authoritarian". The Corbusians, on the other hand, were claimed to be egoistic. In contrast, she defended dense traditional inner-city neighborhoods like Brooklyn Heights or North Beach, San Francisco, and argued that an urban neighbourhood required about 200-300 people per acre, as well as a high net ground coverage at the expense of open space. She also advocated for a diversity of land uses and building types, with the aim of having a constant churn of people throughout the neighbourhood across the times of the day. This essentially meant defending urban environments as they were before modern planning had aimed to start changing them.
As she believed that such environments were essentially self-organizing, her approach was effectively one of laissez-faire, and it has been criticized for not being able to guarantee "the development of good neighbourhoods". The most radical opposition to blueprint planning was declared in 1969 in a manifesto in the magazine New Society, with the words: The whole concept of planning (the town-and-country kind at least) has gone cockeyed … Somehow, everything must be watched; nothing must be allowed simply to "happen." No house can be allowed to be commonplace in the way that things just are commonplace: each project must be weighed, and planned, and approved, and only then built, and only after that discovered to be commonplace after all. Another form of opposition came from the advocacy planning movement, which opposed traditional top-down and technical planning. Modernist planning Cybernetics and modernism inspired the related theories of rational process and systems approaches to urban planning in the 1960s. They were imported into planning from other disciplines. The systems approach was a reaction to the issues associated with the traditional view of planning, which failed to grasp the social and economic sides of cities and the complexity and interconnectedness of urban life, and which lacked flexibility. The 'quantitative revolution' of the 1960s also created a drive for more scientific and precise thinking, while the rise of ecology made the systems approach seem more natural. Systems theory Systems theory is based on the conception of phenomena as 'systems', which are themselves coherent entities composed of interconnected and interdependent parts. A city can in this way be conceptualised as a system with interrelated parts of different land uses, connected by transport and other communications. The aim of urban planning thereby becomes that of planning and controlling the system. Similar ideas had been put forward by Geddes, who had seen cities and their regions as analogous to organisms, though they did not receive much attention while planning was dominated by architects. The idea of the city as a system meant that it became critical for planners to understand how cities functioned. It also meant that a change to one part of a city would have effects on other parts as well. There were also doubts raised about the goal of producing detailed blueprints of how cities should look in the end, instead suggesting the need for more flexible plans with trajectories instead of fixed futures. Planning should also be an ongoing process of monitoring and taking action in the city, rather than just producing a blueprint at one point in time. The systems approach also necessitated taking into account the economic and social aspects of cities, beyond just the aesthetic and physical ones. Rational process approach The focus on the procedural aspect of planning had already been pioneered by Geddes in his Survey-Analysis-Plan approach. However, this approach had several shortfalls. It did not consider the reasons for doing a survey in the first place. It also suggested that there should be simply a single plan to be considered. Finally, it did not take into account the implementation stage of the plan, nor the further action of monitoring the outcomes of the plan afterwards. The rational process, in contrast, identified five different stages: (1) the definition of problems and aims; (2) the identification of alternatives; (3) the evaluation of alternatives; (4) implementation; (5) monitoring.
This new approach represented a rejection of blueprint planning. Incrementalism Beginning in the late 1950s and early 1960s, critiques of the rational paradigm began to emerge and formed into several different schools of planning thought. The first of these schools is Lindblom's incrementalism. Lindblom describes planning as "muddling through" and thought that practical planning required decisions to be made incrementally. This incremental approach meant choosing from a small number of policy approaches that can have only a small number of consequences and are firmly bounded by reality, constantly adjusting the objectives of the planning process and using multiple analyses and evaluations. Mixed scanning model The mixed scanning model, developed by Etzioni, takes an approach similar to Lindblom's. Etzioni (1968) suggested that organizations plan on two different levels: the tactical and the strategic. He posited that organizations could accomplish this by essentially scanning the environment on multiple levels and then choosing different strategies and tactics to address what they found there. While Lindblom's approach operated only on the functional level, Etzioni argued, the mixed scanning approach would allow planning organizations to work on both the functional and the more big-picture-oriented levels. Political planning In the 1960s, a view emerged of planning as an inherently normative and political activity. Advocates of this approach included Norman Dennis, Martin Meyerson, Edward C. Banfield, Paul Davidoff, and Norton E. Long, the latter remarking that: Plans are policies and policies, in a democracy at any rate, spell politics. The question is not whether planning will reflect politics but whose politics it will reflect. What values and whose values will planners seek to implement? . . . No longer can the planner take refuge in the neutrality of the objectivity of the personally uninvolved scientist. The choice between alternative end points in planning was a key issue which was seen as political. Participatory planning Participatory planning is an urban planning paradigm that emphasizes involving the entire community in the strategic and management processes of urban planning; or, community-level planning processes, urban or rural. It is often considered as part of community development. Participatory planning aims to harmonize views among all of its participants as well as prevent conflict between opposing parties. In addition, marginalized groups have an opportunity to participate in the planning process. Patrick Geddes had first advocated for the "real and active participation" of citizens when working in the British Raj, arguing against the "Dangers of Municipal Government from above", which would cause "detachment from public and popular feeling, and consequently, before long, from public and popular needs and usefulness". Further on, self-build was researched by Raymond Unwin in the 1930s in his Town Planning in Practice. The Italian anarchist architect Giancarlo De Carlo then argued in 1948 that "The housing problem cannot be solved from above. It is a problem of the people, and it will not be solved, or even boldly faced, except by the concrete will and action of the people themselves", and that planning should exist "as the manifestation of communal collaboration". Through the Architectural Association School of Architecture, his ideas reached John Turner, who started working in Peru with Eduardo Neira. He went on to work in Lima from the mid-1950s to the mid-1960s.
There he found that the barrios were not slums, but were rather highly organised and well-functioning. As a result, he came to the conclusion that: "When dwellers control the major decisions and are free to make their own contributions in the design, construction or management of their housing, both this process and the environment produced stimulate individual and social well-being. When people have no control over nor responsibility for key decisions in the housing process, on the other hand, dwelling environments may instead become a barrier to personal fulfillment and a burden on the economy." The role of the government was to provide a framework within which people would be able to work freely, for example by providing them the necessary resources, infrastructure and land. Self-build was later again taken up by Christopher Alexander, who led a project called People Rebuild Berkeley in 1972, with the aim of creating "self-sustaining, self-governing" communities, though it ended up being closer to traditional planning. Synoptic planning After the "fall" of blueprint planning in the late 1950s and early 1960s, the synoptic model began to emerge as a dominant force in planning. Lane (2005) describes synoptic planning as having four central elements: "(1) an enhanced emphasis on the specification of goals and targets; (2) an emphasis on quantitative analysis and prediction of the environment; (3) a concern to identify and evaluate alternative policy options; and (4) the evaluation of means against ends (page 289)." Public participation was first introduced into this model and was generally integrated into the system process described above. However, the problem was that the idea of a single public interest still dominated attitudes, effectively devaluing the importance of participation, because it suggested that the public interest is relatively easy to find and requires only the most minimal form of participation. Transactive planning Transactive planning was a radical break from previous models. Instead of considering public participation as a method that would be used in addition to the normal planning process, participation was a central goal. For the first time, the public was encouraged to take on an active role in the policy-setting process, while the planner took on the role of a distributor of information and a feedback source. Transactive planning focuses on interpersonal dialogue that develops ideas, which will be turned into action. One of the central goals is mutual learning, in which the planner gains more information about the community and the citizens become more educated about planning issues. Advocacy planning Formulated in the 1960s by lawyer and planning scholar Paul Davidoff, the advocacy planning model takes the perspective that there are large inequalities in the political system and in the bargaining process between groups, which leave large numbers of people unorganized and unrepresented in the process. It concerns itself with ensuring that all people are equally represented in the planning process by advocating for the interests of the underprivileged and seeking social change. Again, public participation is a central tenet of this model. A plurality of public interests is assumed, and the role of the planner is essentially that of a facilitator who either advocates directly for underrepresented groups or encourages them to become part of the process.
Radical planning Radical planning is a stream of urban planning which seeks to manage development in an equitable and community-based manner. The seminal text to the radical planning movement is Foundations for a Radical Concept in Planning (1973), by Stephen Grabow and Allen Heskin. Grabow and Heskin provided a critique of planning as elitist, centralizing and change-resistant, and proposed a new paradigm based upon systems change, decentralization, communal society, facilitation of human development and consideration of ecology. Grabow and Heskin were joined by Head of Department of Town Planning from the Polytechnic of the South Bank Shean McConnell, and his 1981 work Theories for Planning. In 1987 John Friedmann entered the fray with Planning in the Public Domain: From Knowledge to Action, promoting a radical planning model based on "decolonization", "democratization", "self-empowerment" and "reaching out". Friedmann described this model as an "Agropolitan development" paradigm, emphasizing the re-localization of primary production and manufacture. In "Toward a Non-Euclidian Mode of Planning" (1993) Friedmann further promoted the urgency of decentralizing planning, advocating a planning paradigm that is normative, innovative, political, transactive and based on a social learning approach to knowledge and policy. Bargaining model The bargaining model views planning as the result of giving and take on the part of a number of interests who are all involved in the process. It argues that this bargaining is the best way to conduct planning within the bounds of legal and political institutions. The most interesting part of this theory of planning is that it makes public participation the central dynamic in the decision-making process. Decisions are made first and foremost by the public, and the planner plays a more minor role. Communicative approach The communicative approach to planning is perhaps the most difficult to explain. It focuses on using communication to help different interests in the process to understand each other. The idea is that each individual will approach a conversation with his or her own subjective experience in mind and that from that conversation shared goals and possibilities will emerge. Again, participation plays a central role in this model. The model seeks to include a broad range of voice to enhance the debate and negotiation that is supposed to form the core of actual plan making. In this model, participation is actually fundamental to the planning process happening. Without the involvement of concerned interests, there is no planning. Looking at each of these models it becomes clear that participation is not only shaped by the public in a given area or by the attitude of the planning organization or planners that work for it. In fact, public participation is largely influenced by how planning is defined, how planning problems are defined, the kinds of knowledge that planners choose to employ and how the planning context is set. Though some might argue that is too difficult to involve the public through transactive, advocacy, bargaining and communicative models because transportation is some ways more technical than other fields, it is important to note that transportation is perhaps unique among planning fields in that its systems depend on the interaction of a number of individuals and organizations. 
Process Changes to the planning process Strategic urban planning over the past decades has witnessed the metamorphosis of the role of the urban planner in the planning process. Growing numbers of citizens calling for democratic planning and development processes have played a huge role in allowing the public to make important decisions as part of the planning process. Community organizers and social workers are now very involved in planning from the grassroots level. The term advocacy planning was coined by Paul Davidoff in his influential 1965 paper, "Advocacy and Pluralism in Planning", which acknowledged the political nature of planning, urged planners to acknowledge that their actions are not value-neutral, and encouraged minority and underrepresented voices to be part of planning decisions. Benveniste argued that planners had a political role to play and had to bend some truth to power if their plans were to be implemented. Developers have also played huge roles in development, particularly by planning projects. Many recent developments were the results of large- and small-scale developers who purchased land, designed the district and constructed the development from scratch. The Melbourne Docklands, for example, was largely an initiative pushed by private developers to redevelop the waterfront into a high-end residential and commercial district. Recent theories of urban planning, espoused, for example, by Salingaros, see the city as an adaptive system that grows according to processes similar to those of plants. They say that urban planning should thus take its cues from such natural processes. Such theories also advocate participation by inhabitants in the design of the urban environment, as opposed to simply leaving all development to large-scale construction firms. In the process of creating an urban plan or urban design, carrier-infill is one mechanism of spatial organization in which the city's figure and ground components are considered separately. The urban figure, namely buildings, is represented as total possible building volumes, which are left to be designed by architects in the following stages. The urban ground, namely the in-between spaces and open areas, is designed to a higher level of detail. The carrier-infill approach is defined by an urban design performing as the carrying structure that creates the shape and scale of the spaces, including future building volumes that are then infilled by architects' designs. The contents of the carrier structure may include the street pattern, landscape architecture, open space, waterways, and other infrastructure. The infill structure may contain zoning, building codes, quality guidelines, and solar access based upon a solar envelope. Carrier-infill urban design is differentiated from complete urban design, such as in the monumental axis of Brasília, in which the urban design and architecture were created together. In carrier-infill urban design or urban planning, the negative space of the city, including landscape, open space, and infrastructure, is designed in detail. The positive space, typically building sites for future construction, is represented only as unresolved volumes. The volumes are representative of the total possible building envelope, which can then be infilled by individual architects.
See also: Index of urban planning articles; Index of urban studies articles; List of planned cities; List of planning journals; List of urban planners; List of urban theorists; MONU – magazine on urbanism; Planetizen; Transition Towns (network); Transportation demand management; Urban acupuncture; Urban vitality. References Notes Bibliography
- (A standard text for many college and graduate courses in city planning in America)
- Dalley, Stephanie, 1989, Myths from Mesopotamia: Creation, the Flood, Gilgamesh, and Others, Oxford World's Classics, London, pp. 39–136
- Hoch, Charles, Linda C. Dalton and Frank S. So, editors (2000). The Practice of Local Government Planning, Intl City County Management Assn; 3rd edition. (The "Green Book")
- Kemp, Roger L. and Carl J. Stephani (2011). "Cities Going Green: A Handbook of Best Practices." McFarland and Co., Inc., Jefferson, NC, USA, and London, England, UK.
- Santamouris, Matheos (2006). Environmental Design of Urban Buildings: An Integrated Approach.
- Shrady, Nicholas, The Last Day: Wrath, Ruin & Reason in The Great Lisbon Earthquake of 1755, Penguin, 2008.
- Tunnard, Christopher and Boris Pushkarev (1963). Man-Made America: Chaos or Control?: An Inquiry into Selected Problems of Design in the Urbanized Landscape, New Haven: Yale University Press. (This book won the National Book Award; strictly America, a time capsule of photography and design approach.)
- Wheeler, Stephen (2004). "Planning Sustainable and Livable Cities", Routledge; 3rd edition.
- Yiftachel, Oren, 1995, "The Dark Side of Modernism: Planning as Control of an Ethnic Minority," in Sophie Watson and Katherine Gibson, eds., Postmodern Cities and Spaces (Oxford and Cambridge, MA: Blackwell), pp. 216–240.
- A Short Introduction to Radical Planning Theory and Practice, Doug Aberley Ph.D. MCIP, Winnipeg Inner City Research Alliance Summer Institute, June 2003
- McConnell, Shean. Theories for Planning, 1981, David & Charles, London.

Further reading
- Urban Planning, 1794–1918: An International Anthology of Articles, Conference Papers, and Reports, Selected, Edited, and Provided with Headnotes by John W. Reps, Professor Emeritus, Cornell University.
- City Planning According to Artistic Principles, Camillo Sitte, 1889
- Missing Middle Housing: Responding to the Demand for Walkable Urban Living by Daniel Parolek of Opticos Design, Inc., 2012
- Tomorrow: A Peaceful Path to Real Reform, Ebenezer Howard, 1898
- The Improvement of Towns and Cities, Charles Mulford Robinson, 1901
- Town Planning in Practice, Raymond Unwin, 1909
- The Principles of Scientific Management, Frederick Winslow Taylor, 1911
- Cities in Evolution, Patrick Geddes, 1915
- The Image of the City, Kevin Lynch, 1960
- The Concise Townscape, Gordon Cullen, 1961
- The Death and Life of Great American Cities, Jane Jacobs, 1961
- The City in History, Lewis Mumford, 1961
- The City is the Frontier, Charles Abrams, Harper & Row Publishing, New York, 1965.
- A Pattern Language, Christopher Alexander, Sara Ishikawa and Murray Silverstein, 1977
- What Do Planners Do?: Power, Politics, and Persuasion, Charles Hoch, American Planning Association, 1994.
- Planning the Twentieth-Century American City, Christopher Silver and Mary Corbin Sies (eds.), Johns Hopkins University Press, 1996
- The City Shaped: Urban Patterns and Meanings Through History, Spiro Kostof, 2nd edition, Thames and Hudson Ltd, 1999
- The American City: A Social and Cultural History, Daniel J. Monti, Jr., Oxford, England and Malden, Massachusetts: Blackwell Publishers, 1999. 391 pp.
- Urban Development: The Logic of Making Plans, Lewis D. Hopkins, Island Press, 2001.
- Readings in Planning Theory, 4th edition, Susan Fainstein and James DeFilippis, Oxford, England and Malden, Massachusetts: Blackwell Publishers, 2016.
- Taylor, Nigel (2007), Urban Planning Theory since 1945, London, Sage.
- Planning for the Unplanned: Recovering from Crises in Megacities, by Aseem Inam (published by Routledge USA, 2005).

External links Environmental social science Urban geography Urban design
Theories of urban planning
Engineering,Environmental_science
6,859
27,953,949
https://en.wikipedia.org/wiki/Yoshimine%20sort
The Yoshimine sort is an algorithm that is used in quantum chemistry to order lists of two-electron repulsion integrals. It is implemented in the IBM Alchemy program suite and in the UK R-matrix package for electron and positron scattering by molecules, which is based on the early versions of the IBM Alchemy program suite. Use of basis set expansions in quantum chemistry In quantum chemistry, it is common practice to represent one-electron functions in terms of an expansion over a basis set, $\{\phi_i\}$. The most common choice for this basis set is Gaussian orbitals (GTOs); however, for linear molecules Slater orbitals (STOs) can be used. The Schrödinger equation, for a system with two or more electrons, includes the Coulomb repulsion operator. In the basis set expansion approach this leads to the requirement to compute two-electron repulsion integrals involving four basis functions. Any given basis set may be ordered so that each function can be assigned a unique index. So, for any given basis set, each two-electron integral can be described by four indices, that is, the indices of the four basis functions involved. It is customary to denote these indices as p, q, r and s and the integral as (pq|rs). Assuming that the $\phi_i$ are real functions, the (pq|rs) are defined by

$$(pq|rs) = \iint \phi_p(\mathbf{r}_1)\,\phi_q(\mathbf{r}_1)\,\frac{1}{r_{12}}\,\phi_r(\mathbf{r}_2)\,\phi_s(\mathbf{r}_2)\,\mathrm{d}\mathbf{r}_1\,\mathrm{d}\mathbf{r}_2.$$

The number of two-electron integrals that must be computed for any basis set depends on the number of functions in the basis set and on the symmetry point group of the molecule being studied. Permutational symmetry of the indices The computed two-electron integrals are real numbers, and this implies certain permutational symmetry properties on the indices p, q, r and s. The exact details depend on whether the part of the basis function representing angular behavior is real or complex. In the case of Gaussian orbitals real spherical harmonics are generally used, whereas for Slater orbitals the complex spherical harmonics are used. In the case of real orbitals, p can be swapped with q without changing the integral value, or independently r with s. In addition, pq as a pair can be swapped with rs as a pair without changing the integral. Putting these interchanges together means that

$$(pq|rs) = (qp|rs) = (pq|sr) = (qp|sr) = (rs|pq) = (rs|qp) = (sr|pq) = (sr|qp),$$

which is eightfold symmetry. If the molecule has no spatial symmetry, in other words it belongs to the point group $C_1$, which has only one irreducible representation, then the permutational symmetry of the integral indices is the only operation which can be applied. On the other hand, if the molecule has some symmetry operations, then further ordering is possible. Point group symmetry of the system The Schrödinger Hamiltonian commutes with the operations of the point symmetry group of the nuclear framework of the molecule. This means that a two-electron integral can be non-zero only if the product of the four functions transforms, or contains a component which transforms, as the totally symmetric irreducible representation of the symmetry point group to which the molecule belongs. This means that a computer program for two-electron integral processing can precompute the list of basis function symmetry combinations (symmetry blocks) for which integrals may be non-zero and ignore all other symmetry combinations. The list of symmetry blocks can also be ordered. Frequently, the totally symmetric irreducible representation is assigned the lowest index in the list, typically 1 in Fortran or 0 in the C programming language.
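The eightfold permutational symmetry is what allows a single canonical index to stand in for all eight equivalent (pq|rs) combinations. The following Python sketch illustrates one standard way to compute such a compound index; it assumes zero-based indices (as in C) and is an illustration of the idea, not code from the Alchemy or R-matrix packages.

```python
# Illustrative mapping of the four indices (p, q, r, s) of a two-electron
# integral to a single canonical index under the eightfold permutational
# symmetry described above.

def pair_index(i: int, j: int) -> int:
    """Canonical index of the unordered pair (i, j), with i >= j enforced."""
    if i < j:
        i, j = j, i
    return i * (i + 1) // 2 + j

def canonical_index(p: int, q: int, r: int, s: int) -> int:
    """Unique index shared by all eight equivalent permutations of (pq|rs)."""
    ij = pair_index(p, q)       # p <-> q leaves the integral unchanged
    kl = pair_index(r, s)       # r <-> s leaves the integral unchanged
    return pair_index(ij, kl)   # (pq) <-> (rs) leaves it unchanged too

# All eight permutations of the same integral map to one index:
assert len({canonical_index(*t) for t in
            [(3, 1, 2, 0), (1, 3, 2, 0), (3, 1, 0, 2), (1, 3, 0, 2),
             (2, 0, 3, 1), (0, 2, 3, 1), (2, 0, 1, 3), (0, 2, 1, 3)]}) == 1
```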
Within any given symmetry block, the permutational symmetry of the integrals still applies and the integrals can be ordered within that block. For example, if the molecule belongs to a point group with two irreducible representations, such as $C_s$ with the representations $a'$ and $a''$, then the integral blocks for the symmetry combinations

$$(a'a'|a'a'),\quad (a''a''|a''a''),\quad (a'a'|a''a''),\quad (a'a''|a'a'')$$

are non-zero, and integral blocks for any other symmetry combination are identically zero by group theory. Thus two types of ordering can be used:
- the non-zero symmetry blocks of two-electron integrals are ordered (the programmer is at liberty to define this order); the dimension of each block can be computed, since the number of basis functions of each symmetry is known;
- within each non-zero block, integrals are ordered according to the above permutational symmetry of the indices.

This means that, given the four indices pqrs defining a two-electron integral, a unique index may be computed. This is the essence of the Yoshimine procedure. Yoshimine's sorting procedure When the integrals are computed by the integrals program, they are written out to a sequential file along with the p, q, r, s indices which define them. The order in which the integrals are computed is defined by the algorithm used in the integration program. The most efficient algorithms do not compute the integrals in order, that is, such that the p, q, r and s indices are ordered. This would not be a problem if all of the integrals could be held in CPU memory simultaneously. In that case each computed integral could be assigned to its position in the array of two-electron integrals by computing the required index from the p, q, r and s indices. In the 1960s it was essentially impossible to hold all of the two-electron integrals in memory simultaneously. Therefore, M. Yoshimine developed a sorting algorithm for two-electron integrals which reads the unordered list of integrals from a file and transforms it into an ordered list, which is then written to another file. A by-product of this is that the file storing the ordered integrals does not need to contain the p, q, r, s indices for each integral. The ordering process uses a direct access file, but the input and output files of integrals are sequential. At the start of the 21st century, computer memory is much larger, and for small molecules and/or small basis sets it is sometimes possible to hold all two-electron integrals in memory. In general, however, the Yoshimine algorithm is still required. References Quantum chemistry
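To make the sorting procedure just described concrete, here is a toy out-of-core sketch in Python. It captures only the general idea (scatter unordered records into range-limited bucket files on disk, then sort each bucket in memory and emit values whose file position encodes the canonical index), and it assumes every canonical index from 0 to n_total−1 occurs exactly once; it is in no way a reconstruction of Yoshimine's original implementation.

```python
# Toy sketch of the out-of-core idea behind the Yoshimine sort: unordered
# (canonical_index, value) records are scattered into range-based bucket
# files, then each bucket (small enough for memory) is sorted and appended
# to the ordered output, which no longer needs to store indices.
import os
import struct
import tempfile

REC = struct.Struct("=qd")  # canonical index (int64) + integral value (float64)

def yoshimine_like_sort(unordered_path: str, ordered_path: str,
                        n_total: int, n_buckets: int = 16) -> None:
    width = -(-n_total // n_buckets)  # canonical indices per bucket (ceil)
    tmp = tempfile.mkdtemp()
    buckets = [open(os.path.join(tmp, f"bin{b}"), "wb") for b in range(n_buckets)]
    with open(unordered_path, "rb") as f:          # pass 1: scatter to buckets
        while chunk := f.read(REC.size):
            idx, _val = REC.unpack(chunk)
            buckets[idx // width].write(chunk)
    for b in buckets:
        b.close()
    with open(ordered_path, "wb") as out:          # pass 2: sort each bucket
        for b in range(n_buckets):
            with open(os.path.join(tmp, f"bin{b}"), "rb") as f:
                recs = [REC.unpack(c) for c in iter(lambda: f.read(REC.size), b"")]
            recs.sort()                            # in-memory order within bucket
            for _idx, val in recs:                 # indices become implicit,
                out.write(struct.pack("=d", val))  # assuming none are missing
```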
Yoshimine sort
Physics,Chemistry
1,208
34,200,915
https://en.wikipedia.org/wiki/NemHandel
NemHandel is a Danish e-invoicing infrastructure, developed by the National IT and Telecom Agency and launched in 2007. NemHandel is based on open standards (including the Universal Business Language, Reliable Asynchronous Secure Profile (RASP), and UDDI), open source components, and digital certificates. It originated in a Danish Government globalisation initiative of 2005 under the auspices of Prime Minister Anders Fogh Rasmussen. The public sector in Denmark receives more than 15 million electronic invoices every year from approximately 150,000 suppliers. Non-electronic invoices sent to a public sector institution will be rejected. There are more than 30,000 public sector e-invoicing end points. An end point can be anything from a municipality to a kindergarten or even a department within a public sector institution. End points are addressed via Global Location Numbers or via Company Registration Numbers (called CVR-numbers in Denmark). History NemHandel was mandated by law in February 2005. The initial version was based on traditional Electronic data interchange (EDI) methods in combination with an early version of Universal Business Language. The current version of NemHandel was launched in 2007 and is based on modern internet technologies. Architecture NemHandel is an example of the 4-Corner Model for interoperability between service providers. This model is best known from the telephony industry, where telco operators interoperate by roaming traffic between each other. The advantage of this model is that any party in a transaction can switch provider seamlessly without having to notify the other parties with whom they exchange business documents. References External links Video with explanation of EasyTrade in English Official EasyTrade website (in Danish) XML-based standards Technical communication Online services Public eProcurement Government software
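The 4-Corner Model can be sketched in a few lines of Python; the following fragment is purely illustrative (the registry contents, endpoint identifier, and function names are invented for the example and are not part of the NemHandel specification). Each party talks only to its own access point, and access points find each other through a shared registry keyed by GLN or CVR endpoint identifiers, which is what lets a party switch provider without notifying its trading partners.

# Hypothetical registry mapping endpoint identifiers to access points.
REGISTRY = {"GLN:5790000000000": "access-point-B"}

def send_invoice(document, receiver_id, sender_ap="access-point-A"):
    # Corner 2 (sender's provider) looks up corner 3 (receiver's provider);
    # the sender never needs to know which provider the receiver uses.
    receiver_ap = REGISTRY[receiver_id]
    print(f"{sender_ap} -> {receiver_ap}: deliver to {receiver_id}")

send_invoice({"type": "UBL Invoice"}, "GLN:5790000000000")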
NemHandel
Technology
372
405,746
https://en.wikipedia.org/wiki/Analog%20signal%20processing
Analog signal processing is a type of signal processing conducted on continuous analog signals by some analog means (as opposed to the discrete digital signal processing where the signal processing is carried out by a digital process). "Analog" indicates something that is mathematically represented as a set of continuous values. This differs from "digital" which uses a series of discrete quantities to represent the signal. Analog values are typically represented as a voltage, electric current, or electric charge around components in electronic devices. An error or noise affecting such physical quantities will result in a corresponding error in the signals represented by such physical quantities. Examples of analog signal processing include crossover filters in loudspeakers, "bass", "treble" and "volume" controls on stereos, and "tint" controls on TVs. Common analog processing elements include capacitors, resistors and inductors (as the passive elements) and transistors or op-amps (as the active elements). Tools used in analog signal processing A system's behavior can be mathematically modeled and is represented in the time domain as h(t) and in the frequency domain as H(s), where s is a complex number in the form of s=a+ib, or s=a+jb in electrical engineering terms (electrical engineers use "j" instead of "i" because current is represented by the variable i). Input signals are usually called x(t) or X(s) and output signals are usually called y(t) or Y(s). Convolution Convolution is the basic concept in signal processing that states an input signal can be combined with the system's function to find the output signal. It is the integral of the product of two waveforms after one has been reversed and shifted; the symbol for convolution is *. (f*g)(t) = ∫ f(τ)g(t − τ) dτ, with the integral taken from a to b. This is the convolution integral and is used to find the convolution of a signal and a system; typically a = -∞ and b = +∞. Consider two waveforms f and g. By calculating the convolution, we determine how much a reversed function g must be shifted along the x-axis to become identical to function f. The convolution function essentially reverses and slides function g along the axis, and calculates the integral of their (f and the reversed and shifted g) product for each possible amount of sliding. When the functions match, the value of (f*g) is maximized. This occurs because when positive areas (peaks) or negative areas (troughs) are multiplied, they contribute to the integral. Fourier transform The Fourier transform is a function that transforms a signal or system in the time domain into the frequency domain, but it only works for certain functions. The constraint on which systems or signals can be transformed by the Fourier transform is that the signal must be absolutely integrable, that is, ∫ |x(t)| dt, taken over all time, must be finite. This is the Fourier transform integral: X(jω) = ∫ x(t)e^(−jωt) dt, taken from −∞ to +∞. Usually the Fourier transform integral isn't used to determine the transform; instead, a table of transform pairs is used to find the Fourier transform of a signal or system. The inverse Fourier transform is used to go from frequency domain to time domain: x(t) = (1/2π) ∫ X(jω)e^(jωt) dω, taken from −∞ to +∞. Each signal or system that can be transformed has a unique Fourier transform. There is only one time signal for any frequency signal, and vice versa. Laplace transform The Laplace transform is a generalized Fourier transform. It allows a transform of any system or signal because it is a transform into the complex plane instead of just the jω line like the Fourier transform. The major difference is that the Laplace transform has a region of convergence for which the transform is valid.
This implies that a signal in frequency may have more than one signal in time; the correct time signal for the transform is determined by the region of convergence. If the region of convergence includes the jω axis, jω can be substituted into the Laplace transform for s and it's the same as the Fourier transform. The Laplace transform is: X(s) = ∫ x(t)e^(−st) dt, taken from 0 to +∞, and the inverse Laplace transform, if all the singularities of X(s) are in the left half of the complex plane, is: x(t) = (1/2πj) ∫ X(s)e^(st) ds, evaluated along a vertical line Re(s) = σ inside the region of convergence. Bode plots Bode plots are plots of magnitude vs. frequency and phase vs. frequency for a system. The magnitude axis is in decibels (dB). The phase axis is in either degrees or radians. The frequency axes are on a logarithmic scale. These are useful because for sinusoidal inputs, the output is the input multiplied by the value of the magnitude plot at the frequency and shifted by the value of the phase plot at the frequency. Domains Time domain This is the domain that most people are familiar with. A plot in the time domain shows the amplitude of the signal with respect to time. Frequency domain A plot in the frequency domain shows either the phase shift or magnitude of a signal at each frequency at which it exists. These can be found by taking the Fourier transform of a time signal and are plotted similarly to a Bode plot. Signals While any signal can be used in analog signal processing, there are many types of signals that are used very frequently. Sinusoids Sinusoids are the building block of analog signal processing. All real world signals can be represented as an infinite sum of sinusoidal functions via a Fourier series. A sinusoidal function can be represented in terms of an exponential by the application of Euler's formula. Impulse An impulse (Dirac delta function) is defined as a signal that has an infinite magnitude and an infinitesimally narrow width with an area under it of one, centered at zero. An impulse can be represented as an infinite sum of sinusoids that includes all possible frequencies. It is not, in reality, possible to generate such a signal, but it can be sufficiently approximated with a large amplitude, narrow pulse, to produce the theoretical impulse response in a network to a high degree of accuracy. The symbol for an impulse is δ(t). If an impulse is used as an input to a system, the output is known as the impulse response. The impulse response defines the system because all possible frequencies are represented in the input. Step A unit step function, also called the Heaviside step function, is a signal that has a magnitude of zero before time zero and a magnitude of one after time zero. The symbol for a unit step is u(t). If a step is used as the input to a system, the output is called the step response. The step response shows how a system responds to a sudden input, similar to turning on a switch. The period before the output stabilizes is called the transient part of a signal. The step response can be multiplied with other signals to show how the system responds when an input is suddenly turned on. The unit step function is related to the Dirac delta function by u(t) = ∫ δ(τ) dτ, taken from −∞ to t. Systems Linear time-invariant (LTI) Linearity means that if you have two inputs and two corresponding outputs, if you take a linear combination of those two inputs you will get the same linear combination of the corresponding outputs. An example of a linear system is a first order low-pass or high-pass filter. Linear systems are made out of analog devices that demonstrate linear properties.
These devices don't have to be entirely linear, but must have a region of operation that is linear. An operational amplifier is a non-linear device, but has a region of operation that is linear, so it can be modeled as linear within that region of operation. Time-invariance means it doesn't matter when you start a system; the same output will result. For example, if you have a system and put an input into it today, you would get the same output if you started the system tomorrow instead. There aren't any real systems that are LTI, but many systems can be modeled as LTI for simplicity in determining what their output will be. All systems have some dependence on things like temperature, signal level or other factors that cause them to be non-linear or non-time-invariant, but most are stable enough to model as LTI. Linearity and time-invariance are important because LTI systems are the only types of systems that can be easily solved using conventional analog signal processing methods. Once a system becomes non-linear or non-time-invariant, it becomes a non-linear differential equations problem, and there are very few of those that can actually be solved. (Haykin & Van Veen 2003) A short numerical sketch illustrating these LTI properties appears after the references below. See also Analog electronics Capacitor Comparison of analog and digital recording Digital signal processing Electrical engineering Electronics Inductor Microwave analog signal processing Resistor Signal Signal processing Transistor circuits RC circuit LC circuit RLC circuit Series and parallel circuits filters Band-pass filter Band-stop filter High-pass filter Low-pass filter References Haykin, Simon, and Barry Van Veen. Signals and Systems. 2nd ed. Hoboken, NJ: John Wiley and Sons, Inc., 2003. McClellan, James H., Ronald W. Schafer, and Mark A. Yoder. Signal Processing First. Upper Saddle River, NJ: Pearson Education, Inc., 2003. Signal processing
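To make the convolution and LTI ideas above concrete, here is a minimal numerical sketch (an illustrative addition, not part of the original article; Python with NumPy is assumed). It simulates a first-order RC low-pass filter, verifies superposition, and checks that convolving the input with the measured impulse response reproduces the system output.

import numpy as np

# Discrete-time approximation of a first-order RC low-pass filter,
# dy/dt = (x - y)/(RC), stepped with forward Euler.
def lowpass(x, dt=1e-4, rc=1e-3):
    y = np.zeros_like(x)
    a = dt / rc
    for n in range(1, len(x)):
        y[n] = y[n - 1] + a * (x[n - 1] - y[n - 1])
    return y

dt = 1e-4
t = np.arange(0, 0.02, dt)
x1 = np.sin(2 * np.pi * 100 * t)   # 100 Hz sinusoid
x2 = np.ones_like(t)               # unit step

# Linearity: the response to 2*x1 + 3*x2 equals 2*y1 + 3*y2.
print(np.allclose(lowpass(2 * x1 + 3 * x2), 2 * lowpass(x1) + 3 * lowpass(x2)))  # True

# Convolution: the output equals the input convolved with the impulse response.
impulse = np.zeros_like(t)
impulse[0] = 1.0 / dt              # unit-area approximation of delta(t)
h = lowpass(impulse)               # approximate impulse response
print(np.allclose(np.convolve(x2, h)[:len(t)] * dt, lowpass(x2)))  # True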
Analog signal processing
Technology,Engineering
1,858
1,730,071
https://en.wikipedia.org/wiki/Chocolate%20chip
Chocolate chips or chocolate morsels are small chunks of sweetened chocolate, used as an ingredient in a number of desserts (notably chocolate chip cookies and muffins), in trail mix and less commonly in some breakfast foods such as pancakes. They are often manufactured as teardrop-shaped volumes with flat circular bases; another variety of chocolate chips has the shape of rectangular or square blocks. They are available in various sizes, usually less than in diameter. Origin Chocolate chips were created with the invention of chocolate chip cookies in 1937, when Ruth Graves Wakefield of the Toll House Inn in the town of Whitman, Massachusetts added cut-up chunks of a semi-sweet Nestlé chocolate bar to a cookie recipe. (The Nestlé Toll House brand of cookies is named for the inn.) The cookies were a huge success, and Wakefield reached an agreement in 1939 with Nestlé to add her recipe to the chocolate bar's packaging in exchange for a lifetime supply of chocolate. Initially, Nestlé included a small chopping tool with the chocolate bars. In 1941, Nestlé and at least one of its competitors started selling the chocolate in "chip" (or "morsel") form. Types Originally, chocolate chips were made of semi-sweet chocolate, but today there are many flavors. These include bittersweet, peanut butter, butterscotch, mint chocolate, white chocolate, dark chocolate, milk chocolate, and white and dark swirled chips. Uses Chocolate chips can be used in cookies, pancakes, waffles, cakes, pudding, muffins, crêpes, pies, hot chocolate, and various pastries. They are also found in many other retail food products such as granola bars, ice cream, and trail mix. Baking and melting Chocolate chips can also be melted and used in sauces and other recipes. The chips melt best at temperatures between . The melting process starts at , when the cocoa butter starts melting in the chips. The cooking temperature must never exceed for milk chocolate and white chocolate, or for dark chocolate, or the chocolate will burn. Although convenient, melted chocolate chips are not always recommended as a substitute for baking chocolate. Because most chocolate chips are designed to retain their shape when baking, they contain less cocoa butter than baking chocolate, and so can be more difficult to work with melted. Availability Chocolate chips are popular as a baking ingredient in the United States. Originating in the US, the chocolate chip cookie is widely available in many parts of the world. Nestlé and the Hershey Company are among the producers of chocolate chips. References Chocolate Food ingredients
Chocolate chip
Technology
517
646,478
https://en.wikipedia.org/wiki/Abiogenic%20petroleum%20origin
The abiogenic petroleum origin hypothesis proposes that most of Earth's petroleum and natural gas deposits were formed inorganically, commonly known as abiotic oil. Scientific evidence overwhelmingly supports a biogenic origin for most of the world's petroleum deposits. Mainstream theories about the formation of hydrocarbons on Earth point to an origin from the decomposition of long-dead organisms, though the existence of hydrocarbons on extraterrestrial bodies like Saturn's moon Titan indicates that hydrocarbons are sometimes naturally produced by inorganic means. A historical overview of theories of the abiogenic origins of hydrocarbons has been published. Thomas Gold's "deep gas hypothesis" proposes that some natural gas deposits were formed out of hydrocarbons deep in the Earth's mantle. Earlier studies of mantle-derived rocks from many places have shown that hydrocarbons from the mantle region can be found widely around the globe. However, the content of such hydrocarbons is in low concentration. While there may be large deposits of abiotic hydrocarbons, globally significant amounts of abiotic hydrocarbons are deemed unlikely. Overview of hypotheses Some abiogenic hypotheses have proposed that oil and gas did not originate from fossil deposits, but have instead originated from deep carbon deposits, present since the formation of the Earth. The abiogenic hypothesis regained some support in 2009 when researchers at the Royal Institute of Technology (KTH) in Stockholm reported they believed they had proven that fossils from animals and plants are not necessary for crude oil and natural gas to be generated. History An abiogenic hypothesis was first proposed by Georgius Agricola in the 16th century and various additional abiogenic hypotheses were proposed in the 19th century, most notably by Prussian geographer Alexander von Humboldt (1804), the Russian chemist Dmitri Mendeleev (1877) and the French chemist Marcellin Berthelot. Abiogenic hypotheses were revived in the last half of the 20th century by Soviet scientists who had little influence outside the Soviet Union because most of their research was published in Russian. The hypothesis was re-defined and made popular in the West by astronomer Thomas Gold, a prominent proponent of the abiogenic hypothesis, who developed his theories from 1979 to 1998 and published his research in English. Abraham Gottlob Werner and the proponents of neptunism in the 18th century regarded basaltic sills as solidified oils or bitumen. While these notions proved unfounded, the basic idea of an association between petroleum and magmatism persisted. Von Humboldt proposed an inorganic abiogenic hypothesis for petroleum formation after he observed petroleum springs in the Bay of Cumaux (Cumaná) on the northeast coast of Venezuela. He is quoted as saying, "the petroleum is the product of a distillation from great depth and issues from the primitive rocks beneath which the forces of all volcanic action lie". Other early prominent proponents of what would become the generalized abiogenic hypothesis included Dmitri Mendeleev and Berthelot. In 1951, the Soviet geologist Nikolai Alexandrovitch Kudryavtsev proposed the modern abiotic hypothesis of petroleum. On the basis of his analysis of the Athabasca Oil Sands in Alberta, Canada, he concluded that no "source rocks" could form the enormous volume of hydrocarbons, and therefore offered abiotic deep petroleum as the most plausible explanation. (Humic coals have since been proposed for the source rocks.)
Others who continued Kudryavtsev's work included Petr N. Kropotkin, Vladimir B. Porfir'ev, Emmanuil B. Chekaliuk, Vladilen A. Krayushkin, Georgi E. Boyko, Georgi I. Voitov, Grygori N. Dolenko, Iona V. Greenberg, Nikolai S. Beskrovny, and Victor F. Linetsky. Following Thomas Gold's death in 2004, Jack Kenney of Gas Resources Corporation has come into prominence as a proponent of the theories, supported by studies by researchers at the Royal Institute of Technology (KTH) in Stockholm, Sweden. Foundations of abiogenic hypotheses Within the mantle, carbon may exist as hydrocarbons—chiefly methane—and as elemental carbon, carbon dioxide, and carbonates. The abiotic hypothesis is that the full suite of hydrocarbons found in petroleum can either be generated in the mantle by abiogenic processes, or by biological processing of those abiogenic hydrocarbons, and that the source-hydrocarbons of abiogenic origin can migrate out of the mantle into the crust until they escape to the surface or are trapped by impermeable strata, forming petroleum reservoirs. Abiogenic hypotheses generally reject the supposition that certain molecules found within petroleum, known as biomarkers, are indicative of the biological origin of petroleum. They contend that these molecules mostly come from microbes feeding on petroleum in its upward migration through the crust, that some of them are found in meteorites, which have presumably never contacted living material, and that some can be generated abiogenically by plausible reactions in petroleum. Some of the evidence used to support abiogenic theories is discussed in the sections below. Recent investigation of abiogenic hypotheses Little research is currently directed towards establishing abiogenic petroleum or methane, although the Carnegie Institution for Science has reported that ethane and heavier hydrocarbons can be synthesized under conditions of the upper mantle. Research mostly related to astrobiology and the deep microbial biosphere and serpentinite reactions, however, continues to provide insight into the contribution of abiogenic hydrocarbons into petroleum accumulations: rock porosity and migration pathways for abiogenic petroleum; mantle peridotite serpentinization reactions and other natural Fischer–Tropsch analogs; primordial hydrocarbons in meteorites, comets, asteroids and the solid bodies of the Solar System; primordial or ancient sources of hydrocarbons or carbon in Earth; primordial hydrocarbons formed from hydrolysis of metal carbides of the iron peak of cosmic elemental abundance (chromium, iron, nickel, vanadium, manganese, cobalt); and isotopic studies of groundwater reservoirs, sedimentary cements, formation gases and the composition of the noble gases and nitrogen in many oil fields. Common criticisms include: If oil was created in the mantle, it would be expected that oil would be most commonly found in fault zones, as that would provide the greatest opportunity for oil to migrate into the crust from the mantle. Additionally, the mantle near subduction zones tends to be more oxidizing than the rest. However, the locations of oil deposits have not been found to be correlated with fault zones, with some exceptions. Proposed mechanisms of abiogenic petroleum Primordial deposits Thomas Gold's work was focused on hydrocarbon deposits of primordial origin. Meteorites are believed to represent the major composition of material from which the Earth was formed. Some meteorites, such as carbonaceous chondrites, contain carbonaceous material.
If a large amount of this material is still within the Earth, it could have been leaking upward for billions of years. The thermodynamic conditions within the mantle would allow many hydrocarbon molecules to be at equilibrium under high pressure and high temperature. Although molecules in these conditions may disassociate, resulting fragments would be reformed due to the pressure. An average equilibrium of various molecules would exist depending upon conditions and the carbon-hydrogen ratio of the material. Creation within the mantle Russian researchers concluded that hydrocarbon mixes would be created within the mantle. Experiments under high temperatures and pressures produced many hydrocarbons—including n-alkanes through C10H22—from iron oxide, calcium carbonate, and water. Because such materials are in the mantle and in subducted crust, there is no requirement that all hydrocarbons be produced from primordial deposits. Hydrogen generation Hydrogen gas and water have been found more than deep in the upper crust in the Siljan Ring boreholes and the Kola Superdeep Borehole. Data from the western United States suggests that aquifers from near the surface may extend to depths of to . Hydrogen gas can be created by water reacting with silicates, quartz, and feldspar at temperatures in the range of to . These minerals are common in crustal rocks such as granite. Hydrogen may react with dissolved carbon compounds in water to form methane and higher carbon compounds. One reaction not involving silicates which can create hydrogen is: Ferrous oxide + water → magnetite + hydrogen The above reaction operates best at low pressures. At pressures greater than almost no hydrogen is created. Thomas Gold reported that hydrocarbons were found in the Siljan Ring borehole and in general increased with depth, although the venture was not a commercial success. However, several geologists analysed the results and said that no hydrocarbon was found. Serpentinite mechanism In 1967, the Soviet scientist Emmanuil B. Chekaliuk proposed that petroleum could be formed at high temperatures and pressures from inorganic carbon in the form of carbon dioxide, hydrogen or methane. This mechanism is supported by several lines of evidence which are accepted by modern scientific literature. This involves synthesis of oil within the crust via catalysis by chemically reductive rocks. A proposed mechanism for the formation of inorganic hydrocarbons is via natural analogs of the Fischer–Tropsch process, known as the serpentinite mechanism or the serpentinite process: (2n+1)H2 + nCO → CnH2n+2 + nH2O. Serpentinites are ideal rocks to host this process as they are formed from peridotites and dunites, rocks which contain greater than 80% olivine and usually a percentage of Fe-Ti spinel minerals. Most olivines also contain high nickel concentrations (up to several percent) and may also contain chromite or chromium as a contaminant in olivine, providing the needed transition metals. However, serpentinite synthesis and spinel cracking reactions require hydrothermal alteration of pristine peridotite-dunite, which is a finite process intrinsically related to metamorphism, and further, requires significant addition of water. Serpentinite is unstable at mantle temperatures and is readily dehydrated to granulite, amphibolite, talc–schist and even eclogite. This suggests that methanogenesis in the presence of serpentinites is restricted in space and time to mid-ocean ridges and upper levels of subduction zones.
However, water has been found as deep as , so water-based reactions are dependent upon the local conditions. Oil being created by this process in intracratonic regions is limited by the materials and temperature. Serpentinite synthesis A chemical basis for the abiotic petroleum process is the serpentinization of peridotite, beginning with methanogenesis via hydrolysis of olivine into serpentine in the presence of carbon dioxide. Olivine, composed of forsterite and fayalite, metamorphoses into serpentine, magnetite and silica by the following reactions, with silica from fayalite decomposition (reaction 1a) feeding into the forsterite reaction (1b). Reaction 1a: Fayalite + water → magnetite + aqueous silica + hydrogen Reaction 1b: Forsterite + aqueous silica → serpentine When this reaction occurs in the presence of dissolved carbon dioxide (carbonic acid) at sufficiently high temperatures, Reaction 2a takes place. Reaction 2a: Olivine + water + carbonic acid → serpentine + magnetite + methane or, in balanced form: 18 Mg2SiO4 + 6 Fe2SiO4 + 26 H2O + CO2 → 12 Mg3Si2O5(OH)4 + 4 Fe3O4 + CH4 However, reaction 2(b) is just as likely, and supported by the presence of abundant talc-carbonate schists and magnesite stringer veins in many serpentinised peridotites; Reaction 2b: Olivine + water + carbonic acid → serpentine + magnetite + magnesite + silica The upgrading of methane to higher n-alkane hydrocarbons is via dehydrogenation of methane in the presence of catalyst transition metals (e.g. Fe, Ni). This can be termed spinel hydrolysis. Spinel polymerization mechanism Magnetite, chromite and ilmenite are Fe-spinel group minerals found in many rocks but rarely as a major component in non-ultramafic rocks. In these rocks, high concentrations of magmatic magnetite, chromite and ilmenite provide a reduced matrix which may allow abiotic cracking of methane to higher hydrocarbons during hydrothermal events. Chemically reduced rocks are required to drive this reaction and high temperatures are required to allow methane to be polymerized to ethane. Note that reaction 1a, above, also creates magnetite. Reaction 3: Methane + magnetite → ethane + hematite Reaction 3 results in n-alkane hydrocarbons, including linear saturated hydrocarbons, alcohols, aldehydes, ketones, aromatics, and cyclic compounds. Carbonate decomposition Calcium carbonate may decompose at high temperature through the following reaction: Reaction 5: Hydrogen + calcium carbonate → methane + calcium oxide + water Note that CaO (lime) is not a mineral species found within natural rocks. Whilst this reaction is possible, it is not plausible. Evidence of abiogenic mechanisms Theoretical calculations by J.F. Kenney using scaled particle theory (a statistical mechanical model) for a simplified perturbed hard-chain predict that methane compressed to or kbar at (conditions in the mantle) is relatively unstable in relation to higher hydrocarbons. However, these calculations do not include methane pyrolysis yielding amorphous carbon and hydrogen, which is recognized as the prevalent reaction at high temperatures. Experiments in diamond anvil high pressure cells have resulted in partial conversion of methane and inorganic carbonates into light hydrocarbons. Biotic (microbial) hydrocarbons The "deep biotic petroleum hypothesis", similar to the abiogenic petroleum origin hypothesis, holds that not all petroleum deposits within the Earth's rocks can be explained purely according to the orthodox view of petroleum geology. Thomas Gold used the term "the deep hot biosphere" to describe the microbes which live underground.
This hypothesis differs from biogenic oil in that the deep-dwelling microbes are held to be a biological source for oil which is not of a sedimentary origin and is not sourced from surface carbon. In the abiogenic view, deep microbial life is only a contaminant of primordial hydrocarbons: parts of microbes yield molecules as biomarkers. Deep biotic oil is considered to be formed as a byproduct of the life cycle of deep microbes. Shallow biotic oil is considered to be formed as a byproduct of the life cycles of shallow microbes. Microbial biomarkers Thomas Gold, in a 1999 book, cited the discovery of thermophile bacteria in the Earth's crust as new support for the postulate that these bacteria could explain the existence of certain biomarkers in extracted petroleum. A rebuttal of biogenic origins based on biomarkers has been offered by Kenney et al. (2001). Isotopic evidence Methane is ubiquitous in crustal fluid and gas. Research continues to attempt to characterise crustal sources of methane as biogenic or abiogenic using carbon isotope fractionation of observed gases (Lollar & Sherwood 2006). There are few clear examples of abiogenic methane-ethane-butane, as the same processes favor enrichment of light isotopes in all chemical reactions, whether organic or inorganic. δ13C of methane overlaps that of inorganic carbonate and graphite in the crust, which are heavily depleted in 12C, and attain this by isotopic fractionation during metamorphic reactions. One argument for abiogenic oil cites the high carbon depletion of methane as stemming from the observed carbon isotope depletion with depth in the crust. However, diamonds, which are definitively of mantle origin, are not as depleted as methane, which implies that methane carbon isotope fractionation is not controlled by mantle values. Commercially extractable concentrations of helium (greater than 0.3%) are present in natural gas from the Panhandle-Hugoton fields in the US, as well as from some Algerian and Russian gas fields. Helium trapped within most petroleum occurrences, such as the occurrence in Texas, is of a distinctly crustal character with an Ra ratio of less than 0.0001 that of the atmosphere. Biomarker chemicals Certain chemicals found in naturally occurring petroleum contain chemical and structural similarities to compounds found within many living organisms. These include terpenoids, terpenes, pristane, phytane, cholestane, chlorins and porphyrins, which are large, chelating molecules in the same family as heme and chlorophyll. Materials suggestive of certain biological processes are also present. The presence of these chemicals in crude oil is a result of the inclusion of biological material in the oil; these chemicals are released by kerogen during the production of hydrocarbon oils, as these are chemicals highly resistant to degradation, and plausible chemical paths have been studied. Abiotic defenders state that biomarkers get into oil on its way up as it comes into contact with ancient fossils. However, a more plausible explanation is that biomarkers are traces of biological molecules from bacteria (archaea) that feed on primordial hydrocarbons and die in that environment. For example, hopanoids are just parts of the bacterial cell wall present in oil as a contaminant. Trace metals Nickel (Ni), vanadium (V), lead (Pb), arsenic (As), cadmium (Cd), mercury (Hg) and other metals frequently occur in oils.
Some heavy crude oils, such as Venezuelan heavy crude, have up to 45% vanadium pentoxide content in their ash, high enough that it is a commercial source for vanadium. Abiotic supporters argue that these metals are common in Earth's mantle, but relatively high contents of nickel, vanadium, lead and arsenic can usually be found in almost all marine sediments. Analysis of 22 trace elements in oils correlates significantly better with chondrite, serpentinized fertile mantle peridotite, and the primitive mantle than with oceanic or continental crust, and shows no correlation with seawater. Reduced carbon Sir Robert Robinson studied the chemical makeup of natural petroleum oils in great detail, and concluded that they were mostly far too hydrogen-rich to be a likely product of the decay of plant debris, assuming a dual origin for Earth hydrocarbons. However, several processes which generate hydrogen could supply kerogen hydrogenation which is compatible with the conventional explanation. Olefins, the unsaturated hydrocarbons, would have been expected to predominate by far in any material that was derived in that way. He also wrote: "Petroleum ... [seems to be] a primordial hydrocarbon mixture into which bio-products have been added." This hypothesis was later demonstrated to rest on a misunderstanding by Robinson, related to the fact that only short duration experiments were available to him. Olefins are thermally very unstable (which is why natural petroleum normally does not contain such compounds) and in laboratory experiments that last more than a few hours, the olefins are no longer present. The presence of low-oxygen and hydroxyl-poor hydrocarbons in natural living media is supported by the presence of natural waxes (n=30+), oils (n=20+) and lipids in both plant matter and animal matter, for instance fats in phytoplankton, zooplankton and so on. These oils and waxes, however, occur in quantities too small to significantly affect the overall hydrogen/carbon ratio of biological materials. However, after the discovery of highly aliphatic biopolymers in algae, and the recognition that oil-generating kerogen essentially represents concentrates of such materials, no theoretical problem remains. Also, the millions of source rock samples that have been analyzed for petroleum yield by the petroleum industry have confirmed the large quantities of petroleum found in sedimentary basins. Empirical evidence Occurrences of abiotic petroleum in commercial amounts in the oil wells in offshore Vietnam are sometimes cited, as well as in the Eugene Island block 330 oil field and the Dnieper-Donets Basin. However, the origins of all these wells can also be explained with the biotic theory. Modern geologists think that commercially profitable deposits of abiotic petroleum could be found, but no current deposit has convincing evidence that it originated from abiotic sources. The Soviet school of thought saw evidence of their hypothesis in the fact that some oil reservoirs exist in non-sedimentary rocks such as granite, metamorphic or porous volcanic rocks. However, opponents noted that non-sedimentary rocks served as reservoirs for biologically originated oil expelled from nearby sedimentary source rock through common migration or re-migration mechanisms.
The following observations have been commonly used to argue for the abiogenic hypothesis, however each observation of actual petroleum can also be fully explained by biotic origin: Lost City hydrothermal vent field The Lost City hydrothermal field was determined to have abiogenic hydrocarbon production. Proskurowski et al. wrote, "Radiocarbon evidence rules out seawater bicarbonate as the carbon source for FTT reactions, suggesting that a mantle-derived inorganic carbon source is leached from the host rocks. Our findings illustrate that the abiotic synthesis of hydrocarbons in nature may occur in the presence of ultramafic rocks, water, and moderate amounts of heat." Siljan Ring crater The Siljan Ring meteorite crater, Sweden, was proposed by Thomas Gold as the most likely place to test the hypothesis because it was one of the few places in the world where the granite basement was cracked sufficiently (by meteorite impact) to allow oil to seep up from the mantle; furthermore it is infilled with a relatively thin veneer of sediment, which was sufficient to trap any abiogenic oil, but was modelled as not having been subjected to the heat and pressure conditions (known as the "oil window") normally required to create biogenic oil. However, some geochemists concluded by geochemical analysis that the oil in the seeps came from the organic-rich Ordovician Tretaspis shale, where it was heated by the meteorite impact. In 1986–1990 The Gravberg-1 borehole was drilled through the deepest rock in the Siljan Ring in which proponents had hoped to find hydrocarbon reservoirs. It stopped at the depth of due to drilling problems, after private investors spent $40 million. Some eighty barrels of magnetite paste and hydrocarbon-bearing sludge were recovered from the well; Gold maintained that the hydrocarbons were chemically different from, and not derived from, those added to the borehole, but analyses showed that the hydrocarbons were derived from the diesel fuel-based drilling fluid used in the drilling. This well also sampled over of methane-bearing inclusions. In 1991–1992, a second borehole, Stenberg-1, was drilled a few miles away to a depth of , finding similar results. Bacterial mats Direct observation of bacterial mats and fracture-fill carbonate and humin of bacterial origin in deep boreholes in Australia are also taken as evidence for the abiogenic origin of petroleum. Examples of proposed abiogenic methane deposits Panhandle-Hugoton field (Anadarko Basin) in the south-central United States is the most important gas field with commercial helium content. Some abiogenic proponents interpret this as evidence that both the helium and the natural gas came from the mantle. The Bạch Hổ oil field in Vietnam has been proposed as an example of abiogenic oil because it is 4,000 m of fractured basement granite, at a depth of 5,000 m. However, others argue that it contains biogenic oil which leaked into the basement horst from conventional source rocks within the Cuu Long basin. A major component of mantle-derived carbon is indicated in commercial gas reservoirs in the Pannonian and Vienna basins of Hungary and Austria. Natural gas pools interpreted as being mantle-derived are the Shengli Field and Songliao Basin, northeastern China. The Chimaera gas seep, near Çıralı, Antalya (southwest Turkey), has been continuously active for millennia and it is known to be the source of the first Olympic fire in the Hellenistic period. 
On the basis of chemical composition and isotopic analysis, the Chimaera gas is said to be about half biogenic and half abiogenic gas, the largest emission of biogenic methane discovered; deep and pressurized gas accumulations necessary to sustain the gas flow for millennia, posited to be from an inorganic source, may be present. The local geology of the Chimaera flames, at the exact position of the flames, reveals contact between serpentinized ophiolite and carbonate rocks. The Fischer–Tropsch process may be a suitable reaction to form hydrocarbon gases. Geological arguments Incidental arguments for abiogenic oil Given the known occurrence of methane and the probable catalysis of methane into higher atomic weight hydrocarbon molecules, various abiogenic theories consider the following to be key observations in support of abiogenic hypotheses: the serpentinite synthesis, graphite synthesis and spinel catalysation models prove the process is viable; the likelihood that abiogenic oil seeping up from the mantle is trapped beneath sediments which effectively seal mantle-tapping faults; outdated mass-balance calculations for supergiant oilfields which argued that the calculated source rock could not have supplied the reservoir with the known accumulation of oil, implying deep recharge; and the presence of hydrocarbons encapsulated in diamonds. The proponents of abiogenic oil also use several arguments which draw on a variety of natural phenomena in order to support the hypothesis: the modeling of some researchers shows the Earth was accreted at relatively low temperature, thereby perhaps preserving primordial carbon deposits within the mantle, to drive abiogenic hydrocarbon production; the presence of methane within the gases and fluids of mid-ocean ridge spreading centre hydrothermal fields; and the presence of diamond within kimberlites and lamproites which sample the mantle depths proposed as being the source region of mantle methane (by Gold et al.). Incidental arguments against abiogenic oil Arguments against chemical reactions, such as the serpentinite mechanism, being a source of hydrocarbon deposits within the crust include: the lack of available pore space within rocks as depth increases (this is contradicted by numerous studies which have documented the existence of hydrologic systems operating over a range of scales and at all depths in the continental crust); the lack of any hydrocarbon within the crystalline shield areas of the major cratons, especially around key deep-seated structures which are predicted to host oil by the abiogenic hypothesis (see Siljan Lake); the lack of conclusive proof that carbon isotope fractionation observed in crustal methane sources is entirely of abiogenic origin (Lollar et al. 2006); the fact that drilling of the Siljan Ring failed to find commercial quantities of oil, thus providing a counterexample to Kudryavtsev's Rule and failing to locate the predicted abiogenic oil; and the fact that helium in the Siljan Gravberg-1 well was depleted in 3He and not consistent with a mantle origin. The Gravberg-1 well only produced of oil, which later was shown to derive from organic additives, lubricants and mud used in the drilling process. Kudryavtsev's Rule has been explained for oil and gas (not coal)—gas deposits which are below oil deposits can be created from that oil or its source rocks. Because natural gas is less dense than oil, as kerogen and hydrocarbons are generating gas the gas fills the top of the available space.
Oil is forced down, and can reach the spill point where oil leaks around the edge(s) of the formation and flows upward. If the original formation becomes completely filled with gas, then all the oil will have leaked above the original location. Ubiquitous diamondoids in natural hydrocarbons such as oil, gas and condensates are composed of carbon from biological sources, unlike the carbon found in normal diamonds. Field test evidence What unites both theories of oil origin is the low success rate in predicting the locations of giant oil/gas fields: according to the statistics, discovering a giant demands drilling 500+ exploration wells. A team of American-Russian scientists (mathematicians, geologists, geophysicists, and computer scientists) developed artificial intelligence software and the appropriate technology for geological applications, and used it to predict the locations of giant oil/gas deposits. In 1986 the team published a prognostic map for discovering giant oil and gas fields in the Andes in South America based on abiogenic petroleum origin theory. The model proposed by Prof. Yury Pikovsky (Moscow State University) assumes that petroleum moves from the mantle to the surface through permeable channels created at the intersection of deep faults. The technology uses 1) maps of morphostructural zoning, which outline the morphostructural nodes (intersections of faults), and 2) a pattern recognition program that identifies nodes containing giant oil/gas fields. It was forecast that eleven nodes, which had not been developed at that time, contained giant oil or gas fields. These 11 sites covered only 8% of the total area of all the Andes basins. Thirty years later (in 2018), the results of comparing the prognosis with reality were published. Since the publication of the prognostic map in 1986, six giant oil/gas fields have been discovered in the Andes region: the Caño Limón oilfield, Cusiana, Capiagua, and Volcanera (Llanos basin, Colombia), Camisea (Ucayali basin, Peru), and Incahuasi (Chaco basin, Bolivia). All discoveries were made in places shown on the 1986 prognostic map as promising areas. During the 1960s, Donald Hings was issued numerous patents for developing practical methods for locating the deep morphological nodes most likely to indicate the presence of abiogenic hydrocarbons. His methods and technologies are used to this day by geophysicists to locate deep hydrocarbon deposits. Extraterrestrial argument The presence of methane on Saturn's moon Titan and in the atmospheres of Jupiter, Saturn, Uranus and Neptune is cited as evidence of the formation of hydrocarbons without biological intermediate forms, for example by Thomas Gold. (Terrestrial natural gas is composed primarily of methane.) Some comets contain massive amounts of organic compounds, the equivalent of cubic kilometers of such mixed with other material; for instance, corresponding hydrocarbons were detected during a probe flyby through the tail of Comet Halley in 1986. Drill samples from the surface of Mars taken in 2015 by the Curiosity rover's Mars Science Laboratory have found organic molecules of benzene and propane in 3-billion-year-old rock samples in Gale Crater. See also Eugene Island block 330 oil field Fischer–Tropsch process Fossil fuel Nikolai Alexandrovitch Kudryavtsev Peak oil Thomas Gold References Bibliography Kudryavtsev N.A., 1959. Geological proof of the deep origin of Petroleum. Trudy Vsesoyuz. Neftyan. Nauch. Issledovatel Geologoraz Vedoch. Inst. No.132, pp.
242–262 External links Deep Carbon Observatory "Geochemist Says Oil Fields May Be Refilled Naturally", New York Times article by Malcolm W. Browne, September 26, 1995 "No Free Lunch, Part 1: A Critique of Thomas Gold's Claims for Abiotic Oil", by Jean Laherrere, in From The Wilderness "No Free Lunch, Part 2: If Abiotic Oil Exists, Where Is It?", by Dale Allen Pfeiffer, in From The Wilderness The Origin of Methane (and Oil) in the Crust of the Earth, Thomas Gold abstracts from AAPG Origin of Petroleum Conference 06/18/05, Calgary, Alberta, Canada Gas Origin Theories to be Studied, Abiogenic Gas Debate 11:2002 (AAPG Explorer) Gas Resources Corporation - J. F. Kenney's collection of documents Peak oil Extremophiles Biological hypotheses Petroleum geology Hypothetical processes Hypotheses
Abiogenic petroleum origin
Chemistry,Biology,Environmental_science
6,665
54,678,642
https://en.wikipedia.org/wiki/Lists%20of%20investigational%20drugs
These are lists of investigational drugs: List of investigational analgesics List of investigational antidepressants List of investigational antipsychotics List of investigational anxiolytics List of investigational attention deficit hyperactivity disorder drugs List of investigational autism and pervasive developmental disorder drugs List of investigational hallucinogens and entactogens List of investigational obsessive–compulsive disorder drugs List of investigational sex-hormonal agents List of investigational sexual dysfunction drugs List of investigational sleep drugs List of investigational social anxiety disorder drugs Drug-related lists Experimental drugs
Lists of investigational drugs
Chemistry
130
2,101,570
https://en.wikipedia.org/wiki/Elephant%20Butte%20Dam
Elephant Butte Dam or Elephant Butte Dike, originally Engle Dam, is a concrete gravity dam on the Rio Grande near Truth or Consequences, New Mexico, in the United States. The dam impounds Elephant Butte Reservoir, which is used mainly for agriculture but also provides for recreation, hydroelectricity, and flood and sediment control. The construction of the dam has reduced the flow of the Rio Grande to a small stream for most of the year, with water being released only during the summer irrigation season or during times of exceptionally heavy snow melt. Etymology Elephant Butte is an exposed volcanic plug in Sierra County, New Mexico. The sides of the volcano have eroded away and left only the solidified butte-shaped core. It is now an island in the lake except at low-water levels, when it is connected to land by an isthmus. The butte was said to have the shape of an elephant lying on its side, and its name has been applied to the area since before the dam's construction. The nearby city of Elephant Butte was named for the rock formation. The original name of the dam was Engle Dam, after the nearby railroad stop at Engle, New Mexico. The stop was named "Engle" after the construction engineer R.L. Engle and was later renamed "Engel" by the Santa Fe Railroad after their company's vice president, Edward Engel. Locals complained to Congress about the name change but were unsuccessful in having the name reverted. Today, the stop bears its original name, "Engle." Another name proposed for the dam and used in at least one publication was "Woodrow Wilson Dam," after the former U.S. president. The proposed name for the reservoir was "Lake B.M. Hall," after Bureau of Reclamation engineer Benjamin Mortimer Hall, who championed the project. History Drought and floods As for many other rivers of the American Southwest, runoff in the Rio Grande basin is limited, varies widely from year to year, and alternates between devastating droughts and destructive floods. In the 1880s, farmers in the region began to complain that they were not receiving a fair appropriation of river water. By the 1890s, water use in the upper basin was so great that the river's flow near El Paso, Texas, had been reduced to "a trickle in dry summers." To resolve those problems, plans were drawn up for a large storage dam at Elephant Butte, about downstream of Albuquerque, New Mexico. The first to propose a dam for the area were Peter E. Kern, E.V. Berrien, John Campbell, R.M. Loomis and Edward Roberts. They had camped in the area where the dam is now; although Kern encouraged the others to consider building a dam there, its construction was delayed by legal battles relating to the final site and to water rights. A site far downriver, at the El Paso narrows (the former site of the ASARCO plant at Smeltertown), was considered for a dam, but it would have flooded much of the lower Mesilla Valley and interfered with railway and other transportation. The site at Elephant Butte was chosen for those reasons and for its mountainous location, which created a natural basin for a reservoir. The river at the site was "too thin to plow, too thick to drink." The proposed dam featured in the 1906 Boundary Waters Convention between the United States and Mexico, which also specified how much water should be delivered to Mexico after the dam's completion. A private dam project backed by British investors was in the works in 1894 just upstream from the dam site; the U.S. Department of the Interior also had plans of its own.
It was eventually blocked by the U.S. Secretary of State "on basis of a technicality that the Rio Grande was arguably a navigable river and permission from the War Department was also needed." Although delayed by legal issues, the injunction against building the dam was lifted in 1897. However, the project failed to proceed, and the investors lost their rights to build the private dam in 1903. The Victorio Land and Cattle Co. owned about three fourths of the site and in 1909 demanded the government pay $17.83 per acre, instead of the substantially lower offer of $1.83 per acre. A lengthy court battle ensued, and the US government condemned 24,730 acres of the company's land and settled on a price of $6.66 per acre for the remaining 30,000 acres. Construction The U.S. Congress passed the Newlands Reclamation Act in 1902, authorizing the Rio Grande Project to provide power and irrigation to south-central New Mexico and western Texas as a Bureau of Reclamation undertaking. For the next two years, surveyors and engineers undertook a comprehensive feasibility study for the project's dams and reservoirs. Construction of the dam was authorized on February 25, 1905, and began in 1911. To accommodate the dam's construction, crews built and improved roads and constructed a Bureau of Reclamation office, water tanks, worker camps, a machine shop, a power plant, and a hospital. A system of three cables, each having a capacity of 15 tons and a span of , was suspended across the canyon over the site. At its peak, around 3,500 workers were housed in two camps. The "Upper Camp" was built upstream and housed influential and skilled workers, such as supervisors and engineers. The "Lower Camp" was downstream of the site, housed the less influential laborers, and was further segregated, with American and Mexican workers living in separate areas of the camp. Upper Camp was inundated by the dam's own reservoir; Lower Camp, although it remained on dry land, has no trace remaining. During its construction, the dam was the largest irrigation dam ever built except for the Aswan Dam, in Egypt, and impounded the world's largest artificial lake. It was expected that the dam would become the property of the local settlers once a water tax had reimbursed the government for the cost of construction. Elephant Butte Dam was designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1976. The dam and Bureau of Reclamation office were listed on the U.S. National Register of Historic Places in 1979. Historic resources from the era of construction of the dam, as well as from the New Deal era development of power generation and recreation facilities in the area, were recognized in the 1997 listing of an area on the National Register as the Elephant Butte Historic District. The historic district listing includes the dam and surrounding historic structures. Characteristics Elephant Butte Dam is 301 feet (91.7 m) high, 1,674 feet (510.2 m) long including the spillway, and is made from 618,785 cubic yards (473,095 m³) of concrete. The width at the top of the dam is 18 feet (5.5 m) and 228 feet (69.5 m) at the base. The reservoir has a capacity of about 2 million acre-feet of water and controls the runoff from 28,900 square miles (74,850 km²). It provides irrigation to 178,000 acres (720 km²) of land. The dam also contains a 28 MW hydroelectric powerplant. The current turbine was installed in 1940 and generates 38,449,061 kWh per year (as of 2005).
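As a back-of-the-envelope check added here for illustration (not part of the original article), the quoted generation and capacity figures imply the plant's capacity factor:

annual_kwh = 38_449_061          # annual generation, as of 2005
capacity_kw = 28_000             # 28 MW installed capacity
hours_per_year = 8_760
print(f"{annual_kwh / (capacity_kw * hours_per_year):.1%}")  # about 15.7%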
See also List of Rio Grande dams and diversions Caballo Dam National Register of Historic Places listings in Sierra County, New Mexico References External links Dams in New Mexico Dams on the Rio Grande Buildings and structures in Sierra County, New Mexico Dams completed in 1916 Dams on the National Register of Historic Places in New Mexico Historic Civil Engineering Landmarks United States Bureau of Reclamation dams National Register of Historic Places in Sierra County, New Mexico 1916 establishments in New Mexico
Elephant Butte Dam
Engineering
1,583
14,117,191
https://en.wikipedia.org/wiki/SIN3A
Paired amphipathic helix protein Sin3a is a protein that in humans is encoded by the SIN3A gene. Function The protein encoded by this gene is a transcriptional regulatory protein. It contains paired amphipathic helix (PAH) domains, which are important for protein-protein interactions and may mediate repression by the Mad-Max complex. Interactions SIN3A has been shown to interact with: CABIN1, HBP1, HDAC1, HDAC9, Histone deacetylase 2, Host cell factor C1, IKZF1, ING1, KLF11, MNT, MXD1, Methyl-CpG-binding domain protein 2, Nuclear receptor co-repressor 2, OGT, PHF12, Promyelocytic leukemia protein, RBBP4, RBBP7, SAP130, SAP30, SMARCA2, SMARCA4, SMARCC1, SUDS3, TAL1, and Zinc finger and BTB domain-containing protein 16. See also Transcription coregulator References Further reading External links Gene expression Transcription coregulators
SIN3A
Chemistry,Biology
233
13,530,805
https://en.wikipedia.org/wiki/TRPM2
Transient receptor potential cation channel, subfamily M, member 2, also known as TRPM2, is a protein that in humans is encoded by the TRPM2 gene. Structure The protein encoded by this gene is a non-selective calcium-permeable cation channel and is part of the Transient Receptor Potential ion channel super family. The closest relative is the cold and menthol activated TRPM8 ion channel. While TRPM2 is not cold sensitive, it is activated by heat. The TRPM2 ion channel is activated by free intracellular ADP-ribose in synergy with free intracellular calcium. ADP-ribose is produced by the enzyme PARP in response to oxidative stress and confers susceptibility to cell death. Several alternatively spliced transcript variants of this gene have been described, but their full-length nature is not known. Function The TRPM2 gene is highly expressed in the brain and was implicated first by genetic linkage studies in families and then by case control or trio allelic association studies in the genetic aetiology of bipolar affective disorder (manic depression). The physiological role of TRPM2 is not well understood. It was shown to be involved in insulin secretion. In immune cells it mediates part of the response to TNF-alpha. A role has been suggested for TRPM2 in activation of the NLRP3 inflammasome, the dysregulation of which is strongly associated with a number of autoinflammatory and metabolic diseases, such as gout, obesity and diabetes. In the brain it is involved in the toxicity of amyloid beta, a protein associated with Alzheimer's disease. In 2016, the TRPM2 channel was strongly implicated in the detection of non-painful warm stimuli. Chun-Hsiang Tan and Peter McNaughton studied the responses of sensory neurons to thermal stimuli, then used an RNA-sequencing strategy to identify TRPM2 as genetically required for warmth detection in the non-noxious range of 33–38 °C. Clinical significance TRPM2 expression and function help preserve cancer cell viability. TRPM2 channels are highly expressed in many cancers, notably neuroblastoma. See also TRPM References Further reading External links Ion channels Biology of bipolar disorder Nudix hydrolases
TRPM2
Chemistry
476
912,904
https://en.wikipedia.org/wiki/Flow%20control%20valve
A flow control valve regulates the flow or pressure of a fluid. Control valves normally respond to signals generated by independent devices such as flow meters or temperature gauges. Operation Control valves are normally fitted with actuators and positioners. Pneumatically-actuated globe valves and diaphragm valves are widely used for control purposes in many industries, although quarter-turn types such as (modified) ball and butterfly valves are also used. Control valves can also work with hydraulic actuators (also known as hydraulic pilots). These types of valves are also known as automatic control valves. The hydraulic actuators respond to changes of pressure or flow and will open/close the valve. Automatic control valves do not require an external power source, meaning that the fluid pressure is enough to open and close them. Automatic control valves include pressure reducing valves, flow control valves, back-pressure sustaining valves, altitude valves, and relief valves. Application Process plants consist of hundreds, or even thousands, of control loops all networked together to produce a product to be offered for sale. Each of these control loops is designed to keep some important process variable, such as pressure, flow, level, or temperature, within a required operating range to ensure the quality of the end product. Each loop receives and internally creates disturbances that detrimentally affect the process variable, and interaction from other loops in the network provides disturbances that influence the process variable. To reduce the effect of these load disturbances, sensors and transmitters collect information about the process variable and its relationship to some desired set point. A controller then processes this information and decides what must be done to get the process variable back to where it should be after a load disturbance occurs. When all the measuring, comparing, and calculating are done, some type of final control element must implement the strategy selected by the controller. The most common final control element in the process control industries is the control valve. The control valve manipulates a flowing fluid, such as gas, steam, water, or chemical compounds, to compensate for the load disturbance and keep the regulated process variable as close as possible to the desired set point. Images See also Ball valve Butterfly valve Check valve Control valve Diaphragm valve Flow limiter Flow measurement Gate valve Globe valve Mass flow controller Needle valve Plastic pressure pipe systems Thermal mass flow meter References Valves
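A minimal sketch of the loop just described — sensor reading in, controller computation, valve movement out — is given below in Python. It is illustrative only: the PID class, tuning constants, units and the toy first-order process model are assumptions for demonstration, not part of the article or any particular product.

class PID:
    """Discrete PID controller (illustrative sketch, not a real product)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement            # deviation of the process variable
        self.integral += error * self.dt          # accumulated offset over time
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, min(100.0, out))          # clamp to 0-100% valve travel

pid = PID(kp=2.0, ki=0.8, kd=0.05, dt=0.1)
flow, setpoint = 0.0, 40.0                        # hypothetical units, e.g. m3/h
for step in range(200):
    t = step * 0.1
    disturbance = -8.0 if t >= 5.0 else 0.0       # load disturbance at t = 5 s
    valve = pid.update(setpoint, flow)            # controller drives the final control element
    flow += 0.1 * (0.6 * valve + disturbance - flow)  # toy first-order process response
print(f"final flow = {flow:.1f} (set point {setpoint})")

In this sketch, the integral term is what returns the process variable to the set point after the simulated load disturbance at t = 5 s — the compensating role the article assigns to the control valve and its controller.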
Flow control valve
Physics,Chemistry
470
6,198,403
https://en.wikipedia.org/wiki/Ferric%20subsulfate%20solution
Ferric subsulfate solution is a styptic or hemostatic agent used after superficial skin biopsies. Ferric subsulfate solution is also known as basic ferric sulfate solution or Monsel's solution. It has a recognised formula published in United States Pharmacopeia 29. Active ingredients Ferric subsulfate solution is prepared from ferrous sulfate, sulfuric acid and nitric acid. It contains, per 100 mL, basic ferric sulfate equivalent to not less than 20 g and not more than 22 g of iron. Storage Ferric subsulfate solution is generally stored in airtight containers at a temperature above 22 degrees Celsius. Crystallization may occur at temperatures below 22 degrees; warming the solution may redissolve the crystals. The solution is typically protected from light. Other uses Ferric subsulfate (also known as Monsel's solution) is often used by Jewish burial societies (chevra kadisha) to stop post-mortem bleeding. Since Jewish burial does not allow any external skin adhesives such as bandages, tape, glue or resin, ferric subsulfate is an effective way to stop post-mortem bleeding. Most post-mortem bleeding stems from surgery, emergency-room procedures, autopsies, or the removal of IV lines during Jewish burial preparation. A piece of cotton or a swab soaked in this solution is pressed against the open wound and held for a few seconds, which is usually enough time for the seal to take effect. In more severe cases, such as arterial lines, the solution can be introduced directly into the line if it is still in place. Brand names AstrinGyn by CooperSurgical is a thickened and specially modified gel formulation. History It was invented in the late 1840s by Leon Monsel (March 13, 1816 – April 15, 1878), a French military pharmacist. His invention soon became a standard in the French army medical corps and saved many lives in battle. References Medical hygiene Iron(III) compounds Sulfates
Ferric subsulfate solution
Chemistry
443
22,126,226
https://en.wikipedia.org/wiki/IMes
IMes is an abbreviation for an organic compound that is a common ligand in organometallic chemistry. It is an N-heterocyclic carbene (NHC). The compound, a white solid, is often not isolated but instead is generated upon attachment to the metal centre. First prepared by Arduengo, the heterocycle is synthesized by condensation of 2,4,6-trimethylaniline and glyoxal to give the diimine. In the presence of acid, the resulting glyoxal-bis(mesitylimine) condenses with formaldehyde to give the dimesitylimidazolium cation. This cation is the conjugate acid of the NHC. Related compounds Bulkier than IMes is the NHC ligand IPr (CAS 244187-81-3). IPr features diisopropylphenyl groups in place of the mesityl substituents. Some variants of IMes and IPr have saturated backbones; two such ligands are SIMes and SIPr. They are prepared by alkylation of substituted anilines with dibromoethane, followed by ring closure and dehydrohalogenation of the dihydroimidazolium salt. References Further reading Organometallic chemistry Carbenes Ligands
IMes
Chemistry
281
33,899,975
https://en.wikipedia.org/wiki/Artin%E2%80%93Verdier%20duality
In mathematics, Artin–Verdier duality is a duality theorem for constructible abelian sheaves over the spectrum of a ring of algebraic numbers, introduced by Michael Artin and Jean-Louis Verdier, that generalizes Tate duality. It shows that, as far as étale (or flat) cohomology is concerned, the ring of integers in a number field behaves like a 3-dimensional mathematical object. Statement Let X be the spectrum of the ring of integers in a totally imaginary number field K, and F a constructible étale abelian sheaf on X. Then the Yoneda pairing \[ \operatorname{Ext}^r_X(F,\mathbf{G}_m) \times H^{3-r}(X,F) \to H^3(X,\mathbf{G}_m) \cong \mathbf{Q}/\mathbf{Z} \] is a non-degenerate pairing of finite abelian groups, for every integer r. Here, H^r(X,F) is the r-th étale cohomology group of the scheme X with values in F, and Ext^r(F,G) is the group of r-extensions of the étale sheaf G by the étale sheaf F in the category of étale abelian sheaves on X. Moreover, \mathbf{G}_m denotes the étale sheaf of units in the structure sheaf of X. Deninger (1986) proved Artin–Verdier duality for constructible, but not necessarily torsion, sheaves. For such a sheaf F, the above pairing induces isomorphisms \[ \operatorname{Ext}^r_X(F,\mathbf{G}_m) \cong H^{3-r}(X,F)^*, \] where A^* = \operatorname{Hom}(A,\mathbf{Q}/\mathbf{Z}) denotes the Pontryagin dual. Finite flat group schemes Let U be an open subscheme of the spectrum of the ring of integers in a number field K, and F a finite flat commutative group scheme over U. Then the cup product defines a non-degenerate pairing of finite abelian groups \[ H^r(U,F^D) \times H^{3-r}_c(U,F) \to H^3_c(U,\mathbf{G}_m) \cong \mathbf{Q}/\mathbf{Z} \] for all integers r. Here F^D denotes the Cartier dual of F, which is another finite flat commutative group scheme over U. Moreover, H^r(U,F) is the r-th flat cohomology group of the scheme U with values in the flat abelian sheaf F, and H^r_c(U,F) is the r-th flat cohomology with compact supports of U with values in the flat abelian sheaf F. The flat cohomology with compact supports is defined to give rise to a long exact sequence \[ \cdots \to H^r_c(U,F) \to H^r(U,F) \to \bigoplus_{v \notin U} H^r(K_v,F) \to H^{r+1}_c(U,F) \to \cdots \] The sum is taken over all places of K which are not in U, including the archimedean ones. The local contribution H^r(K_v,F) is the Galois cohomology of the Henselization K_v of K at the place v, modified à la Tate: \[ H^r(K_v,F) = \hat{H}^r(\operatorname{Gal}(K_v^s/K_v), F(K_v^s)). \] Here K_v^s is a separable closure of K_v. References Theorems in number theory Duality theories
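A hedged worked example (standard homological algebra under the setup above, not a statement taken verbatim from the article): applying Hom(−, G_m) to the short exact sequence 0 → Z → Z → Z/nZ → 0, where the first map is multiplication by n, expresses the Ext groups in the pairing through the étale cohomology of G_m:

\[
\cdots \to H^{r-1}(X,\mathbf{G}_m) \xrightarrow{\,n\,} H^{r-1}(X,\mathbf{G}_m) \to \operatorname{Ext}^r_X(\mathbf{Z}/n\mathbf{Z},\mathbf{G}_m) \to H^{r}(X,\mathbf{G}_m) \xrightarrow{\,n\,} H^{r}(X,\mathbf{G}_m) \to \cdots
\]

Thus Ext^r_X(Z/nZ, G_m) is an extension of the n-torsion subgroup H^r(X, G_m)[n] by the quotient H^{r-1}(X, G_m)/n, and the duality pairing identifies it with the Pontryagin dual of H^{3-r}(X, Z/nZ).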
Artin–Verdier duality
Mathematics
495
4,487,864
https://en.wikipedia.org/wiki/International%20Committee%20for%20the%20Nanking%20Safety%20Zone
The International Committee was established in 1937 to establish and manage the Nanking Safety Zone. Many Westerners were living in the city at that time, conducting trade or on missionary trips. As the Imperial Japanese Army began to approach Nanjing (also known as Nanking), most of them fled the city. A small number of Western businessmen, journalists and missionaries, however, chose to remain behind. The missionaries were primarily Americans from the Episcopal, Disciples of Christ, Presbyterian, and Methodist churches. To coordinate their efforts, the Westerners formed a committee: the International Committee for the Nanking Safety Zone. German businessman John Rabe was elected as its leader, partly because of his status as a member of the Nazi party, and the existence of the German–Japanese bilateral Anti-Comintern Pact. Rabe and the other foreign residents tried to protect the civilians from being killed by the Japanese. The Japanese army did not completely respect the immunity of the Safety Zone, and soldiers would sometimes show up under dubious pretenses to take Chinese women and men into custody. There were also kidnappings of women from the Zone. Those taken into custody were often either summarily executed or taken away and raped. Due to Rabe's efforts, some 250,000 people were protected during the Nanjing Massacre. In February 1938, as violence by the Japanese Army abated, the International Committee for the Nanking Safety Zone was reorganized as the Nanking International Relief Committee, which did humanitarian work in Nanjing until at least 1941. There are no records of any activity by the committee after 1941 and it is believed that it was likely forced to discontinue its operations after the United States entered World War II. Establishment of the Nanking Safety Zone The Westerners who remained behind established the Nanking Safety Zone, a score of refugee camps bordered by roads on all four sides, occupying an area approximately 1.5 times the size of Central Park in New York. Members The fifteen members of the International Committee for the Nanking Safety Zone were as follows: George Ashmore Fitch was general secretary of the "Foreign YMCA" in Shanghai, advisor to OMEA, active in the humanitarian work, named by John Rabe (chairman) to be director of the ICNSZ, and served as acting mayor of Nanjing after Mayor General Ma Shao-chuan turned over to him treasury resources, some police, and food stores. Most lists do not mention him as a formal member, perhaps because he was elected director while he was travelling, before he returned to Nanjing. These individuals are not to be confused with the members of the International Red Cross Committee of Nanking, which did similar work. Its 17 members included Robert O. Wilson, an American doctor at Drum Tower Hospital of Nanking University Hospital, James McCallum, an American missionary at the same institution, and Minnie Vautrin, an American missionary at Ginling Girls' College. Activities When Nanjing fell, the Nanking Safety Zone housed over 250,000 refugees. The committee members of the Zone found ways to provide these refugees with the basic needs of food, shelter, and medical care. Whenever Japanese soldiers entered the Zone, they were closely shadowed by one of the Westerners. The Westerners repeatedly refused to comply with demands made of them by Japanese Army soldiers, placing themselves between Japanese soldiers and Chinese civilians.
Committee members frequently contacted Consul-General Okazaki Katsuo, Second Secretary (later Acting Consul-General) Fukui Kiyoshi and Attaché Fukuda Tokuyasu to deal with the anarchic situation. M. Searle Bates Miner Searle Bates was one of the leaders of the committee and worked to secure the safety of the population of Nanjing. This task was dangerous and his life was put at risk on many occasions, most notably when he was shoved down a flight of stairs by Japanese military police after inquiring about the fate of a student who had been abducted by Japanese soldiers. According to the testimony of Bates before the International Military Tribunal for the Far East, he visited the Japanese embassy daily for the next three weeks after first protesting there against Japanese atrocities. He testified that the Japanese authorities appeared to him to be "honestly trying to do what little they could in a bad situation". However, as Bates testified, the embassy officials were themselves terrified by the military and could do nothing except forward these communications through Shanghai to Tokyo. Robert O. Wilson Along with John Rabe and Minnie Vautrin, Robert O. Wilson was instrumental in the establishment of the Nanking Safety Zone. He was the sole surgeon responsible for treating the victims of the ongoing atrocities. The selfless work of Dr. Wilson and his associates saved the lives of countless civilians and POWs who would have otherwise perished at the hands of the aggressors. Role in documenting the Nanjing Massacre Several eyewitness accounts of the Nanjing Massacre were provided by members of the committee. Protests to the Japanese Consulate The committee sent 61 letters to the Japanese Consulate reporting various incidents which occurred during the period from 13 December 1937 to 9 February 1938. These letters are quoted in H. J. Timperley's book What War Means: Japanese Terror in China (compiled and edited by H. J. Timperley; Victor Gollancz, July 1938). Other documents M. Searle Bates, John Magee and George Ashmore Fitch, the head of the YMCA at Nanjing, actively wrote of the chaotic conditions created by the Japanese troops, mimeographed or retyped their stories over and over and sent them to their friends, government officials, and Christian organizations so as to let the world, especially the American public, know what was going on in the terrorized city. They hoped that the U.S. government would intervene, or at least apply the Neutrality Act of 1937 to the "China Incident," which would have made it illegal for any American business to sell war materials to Japan. For example, a letter of Searle Bates to the American Consul in January 1938 explained how the Safety Zone had been "tenaciously maintained" and needed help "amid dishonor by soldiers, murdering, wounding, wholesale raping, resulting in violent terror." In the United States, the committee on the Far East of the Foreign Missions Conference received scores of letters from missionaries in Nanjing. After weeks of consideration, they decided to release the letters in February 1938 despite the possible adverse effect on the Christian movement in Japan, which led to the eventual publication of their letters in some magazines such as Reader's Digest in mid-1938. Magee films George A. Fitch succeeded in smuggling the films shot by John Magee out of China when he temporarily left the country in January 1938.
That year he traveled throughout the United States, giving speeches about what he witnessed in Nanjing along with the films that showed haunting images of Chinese victims. Testimony before the International Military Tribunal for the Far East Several members of the committee took the witness stand to testify about their experiences and observations during the Nanjing Massacre. These included Robert Wilson, Miner Searle Bates and John Magee. George A. Fitch, Lewis Smythe and James McCallum filed affidavits with their diaries and letters. Historiography During the Korean War (1950–53), the government of the People's Republic of China used records of the International Committee to portray its members as part of a propaganda campaign to arouse patriotic anti-American fervor. As part of this propaganda campaign, the Westerners who remained in Nanjing were characterized as foreigners who sacrificed Chinese lives in order to protect their property, guided the Japanese troops into the city and collaborated with them to round up prisoners of war in the refugee camps. As a result of this anti-American propaganda, a detailed study carried out by the researchers at the University of Nanking in 1962 went so far as to assert that Westerners had assisted the Japanese in executing Chinese in Nanjing. The study harshly criticized those foreigners for not having made any effort to prevent the ongoing atrocities. This erroneous perception of the International Committee was eventually corrected in the 1980s as more historical documents became accessible and more thorough studies were published. Today many of the missionaries' private diaries and letters that meticulously documented the scale and character of the Nanjing Massacre are archived at the Yale Divinity School Library. Timeline 22 November 1937 – The International Committee for the Nanking Safety Zone is organized by a group of foreigners to shelter Chinese refugees. 12 December 1937 – Chinese soldiers are ordered to withdraw from Nanjing 13 December 1937 – Japanese troops capture Nanjing 14 December 1937 – The International Committee for the Nanking Safety Zone lodges the first protest letter against Japanese atrocities with the Japanese Embassy. 19 February 1938 – The last of the 69 protest letters against Japanese atrocities is sent by the Safety Zone Committee to the Japanese Embassy and announces the renaming of the committee as the Nanking International Relief Committee. See also Nanjing International Red Cross Committee Nanking (1937-1945) Sources References External links Nanking Nanking Massacre Project at Yale Divinity School Library. Nanjing Massacre Rescue of Chinese in the Nanjing Massacre
International Committee for the Nanking Safety Zone
Biology
1,841
1,124,021
https://en.wikipedia.org/wiki/Unit%20516
Unit 516 (第五一六部隊) was a top secret Japanese chemical weapons facility, operated by the Kempeitai, in Qiqihar, Japanese-occupied northeast China. The name Unit 516 was a code name (tsūshōgō); the unit was officially called the Kwantung Army Chemical Weapons Section and operated underneath Unit 731. An estimated 700,000 (Japanese estimate) to 2,000,000 (Chinese estimate) Japanese-produced chemical weapons were buried in China. Until 1995, Japan had refused to acknowledge that it dumped chemical weapons in the Nen River between Heilongjiang and Hulunbei'er, leaving huge amounts behind. Chemical weapons Phosgene Hydrogen cyanide Bromobenzyl cyanide and Chloroacetophenone Diphenylcyanoarsine and Diphenylchloroarsine Arsenic trichloride Sulfur Mustard Lewisite At the end of World War II, the Imperial Japanese Army buried some of its chemical weapons in China, but most were confiscated by the Soviet Red Army, the People's Liberation Army and the Kuomintang Army, along with other weapons. The Soviet Union later handed over these weapons to China (ROC), which then buried them. Japanese chemical weapons were later found mixed with Soviet and Chinese chemical weapons. The Japanese National Institute for Defense Studies has a record of Japanese weapons confiscated by the Kuomintang Army, along with a list of the types of chemical weapons. No corresponding confiscation records from the ROC or Russia have been found. However, no country has records of the locations of the buried chemical weapons. China has started gathering these abandoned weapons for destruction and burial; they are currently buried in Haerbaling, in remote Dunhua County, Jilin (吉林) Province. The Chemical Weapons Convention One of the focuses of the Chemical Weapons Convention was to assign responsibility for the destruction of old chemical weapons in China. The convention was signed in 1993 and, according to it, all chemical weapons created after 1925 must be destroyed by the originating country. Under the convention, Japan is building a factory in China to destroy chemical weapons. See also Changde chemical weapon attack War crimes in Manchukuo Japan and weapons of mass destruction Other cases of abandoned chemical weapons Air raid on Bari San Jose Project References Chemical warfare facilities Japanese prisoner of war and internment camps Japanese human subject research Second Sino-Japanese War Second Sino-Japanese War crimes Japanese war crimes in China War crimes in Manchukuo
Unit 516
Chemistry
508
9,014,281
https://en.wikipedia.org/wiki/Middle%20Santiam%20Wilderness
The Middle Santiam Wilderness is a wilderness area located near Mount Washington in the central Cascade Range of Oregon, U.S., within the Willamette National Forest. Topography The Middle Santiam Wilderness ranges from steep slopes, high peaks, and ridges at the higher elevations to gently sloping and bench-like terrain in the lower elevations. The most prominent features include Donaca Lake, the Middle Santiam River, and the 4,965-foot Chimney Peak, a lava plug in the northwestern portion of the Wilderness. Not far to the south of the Middle Santiam Wilderness lies Menagerie Wilderness. Flora and fauna Much of the Middle Santiam Wilderness is forested with mature stands of old growth estimated to be 450 years old. Douglas-fir, western redcedar and western hemlock grow at lower elevations and true firs near the ridgelines. Native fish populations, including Chinook salmon during spawning season, thrive in both the Middle Santiam River and Donaca Lake. See also List of Oregon Wildernesses List of U.S. Wilderness Areas Wilderness Act List of old growth forests References External links Willamette National Forest - Middle Santiam Wilderness Willamette National Forest - Middle Santiam Trail Area Forests and Global Warming - Oregon Wild Cascade Range IUCN Category Ib Old-growth forests Wilderness areas of Oregon Protected areas of Linn County, Oregon Willamette National Forest 1984 establishments in Oregon Protected areas established in 1984
Middle Santiam Wilderness
Biology
287
29,966,316
https://en.wikipedia.org/wiki/Queen%20retinue%20pheromone
Queen retinue pheromones (QRP) are a type of honey bee pheromone, so-called because one of their behavioral effects is to attract a circle of bees (a “retinue”) around the queen. In older literature, the queen pheromone is called queen mandibular pheromone (QMP) because some of its components were first identified from the mandibular glands of queens. Retinue pheromone may be more accurate because the chemical mix in the pheromone comes from several glands. The following compounds have been identified as present in the QRP, of which only coniferyl alcohol is found in the mandibular glands. The combination of the five QMP compounds and the four compounds below helps create the retinue attraction of worker bees around their queen. Methyl oleate Coniferyl alcohol Hexadecan-1-ol Alpha-linolenic acid References External links Semiochemicals of Genus Apis, at pherobase.com Beekeeping Insect pheromones
Queen retinue pheromone
Chemistry
218
6,248,163
https://en.wikipedia.org/wiki/%CE%91-Bungarotoxin
α-Bungarotoxin is one of the bungarotoxins, components of the venom of the elapid Taiwanese banded krait snake (Bungarus multicinctus). It is a type of α-neurotoxin, a neurotoxic protein that is known to bind competitively and in a relatively irreversible manner to the nicotinic acetylcholine receptor found at the neuromuscular junction, causing paralysis, respiratory failure, and death in the victim. It has also been shown to act as an antagonist at the α7 nicotinic acetylcholine receptor in the brain, and as such has numerous applications in neuroscience research. History Bungarotoxins are a group of toxins closely related to the neurotoxic proteins predominantly present in the venom of kraits. These toxins belong to the three-finger toxin superfamily. Among them, α-bungarotoxin (α-BTX) stands out, being a peptide toxin produced by the elapid Taiwanese banded krait snake, also known as the many-banded krait or the Taiwanese or Chinese krait. The Taiwanese banded krait (Bungarus multicinctus) is part of the Elapidae snake family. The krait venom, like the majority of snake venoms, is a combination of proteins that together lead to a remarkable range of neurologic consequences. The elapid snake family is known for its potent α-neurotoxic venom, which has a postsynaptic mechanism of action. These neurotoxins primarily affect the nervous system, blocking nerve impulse transmission and leading to paralysis and potentially death if untreated. The many-banded krait was first described in 1861 by the zoologist Edward Blyth. It is characterized by a distinctive black-and-white banded pattern along its body, with a maximum length of 1.85 m. This very venomous species is found in central and southern China and Southeast Asia. Its venom contains various neurotoxins, α-BTX among them. According to later research on its mechanism of action, α-bungarotoxin binds irreversibly to the postsynaptic nicotinic acetylcholine receptor (nAChR) at the neuromuscular junction. In this way, it inhibits the action of acetylcholine competitively, leading to respiratory failure, paralysis and even death. In South and Southeast Asia, envenomation from the many-banded krait bite is a common and life-threatening medical condition when not promptly treated. Upon the snakebite, the venom is injected into the victim's tissues, from which it diffuses and spreads via the bloodstream. Once the venom is in the circulatory system, it can reach the target organs and tissues. In this case, α-bungarotoxin specifically targets the nervous system, interfering with nerve impulse transmission. Nevertheless, krait bites usually take place at night and do not show any local symptoms, so victims are often not aware of the bite. The resulting delay in medical care is the major cause of mortality associated with krait envenomation. The primary target of the neurotoxins is the neuromuscular junction of skeletal muscles, where the motor nerve terminal and the nicotinic acetylcholine receptor are the major target sites. Their neurotoxic effect is often referred to as resistant neurotoxicity, because the damage caused to nerve terminals leads to acetylcholine depletion at the neuromuscular junction. The regeneration of the synapses can take days, which prolongs the paralysis and recovery process for the victim.
In addition, the severity of the paralysis ranges from mild to life-threatening depending on the degree of envenomation, the venom's composition, and how early therapeutic intervention occurs. Antivenom therapy is the current standard treatment for snake envenoming. In China, the Bungarus multicinctus monovalent antivenom (BMMAV) is produced, and in Taiwan the Neuro bivalent antivenom (NBAV). Both antivenoms are immunoreactive to the neurotoxins found in the venom, including α-BTX, and neutralize the venom's lethality. BMMAV is specifically designed to neutralize the venom of Bungarus multicinctus and is therefore more efficacious than NBAV. On the other hand, NBAV targets the venom of multiple snake species that produce neurotoxic effects, including Bungarus multicinctus. The use of BMMAV or NBAV might differ based on availability, regional protocols and the specific venomous snake present in the area. Structure and available forms α-Bungarotoxin consists of a single 8 kDa polypeptide chain that contains 74 amino acid residues. This polypeptide chain is cross-linked by five disulfide bridges, categorizing α-bungarotoxin as a type II α-neurotoxin within the three-finger toxin family. These disulfide bridges are formed between specific cysteine residues and are important for the stability and function of the toxin. Furthermore, α-bungarotoxin contains ten residues of half-cysteine per molecule. The specific arrangement of disulfide bridges formed by these cysteine residues results in the 11-ring structure within the toxin molecule. This 11-ring structure is essential for the toxin's interactions with its target receptors and for modulation of neurotransmission at the neuromuscular junction. The amino acid sequence of α-bungarotoxin contains a high frequency of homodipeptides, with ten pairs present; the serine and proline dipeptides each occur twice in the sequence. The active site of the toxin is located in the region from position 24 to position 45 within the sequence. Key amino acids commonly found in this region include cysteine, arginine, glycine, lysine and valine. As previously mentioned, cysteine is crucial for the formation of disulfide bridges in proteins. Arginine and lysine can participate in interactions with negatively charged molecules or residues, so they may play a role in the binding to specific receptors or substrates. Glycine may contribute to the flexibility and conformational dynamics of α-bungarotoxin. Lastly, the valine residue may help maintain the hydrophobic core of the toxin. Similar to other α-neurotoxins within the three-finger toxin family, α-bungarotoxin exhibits a tertiary structure that is characterized by three projecting "finger" loops, a C-terminal tail, and a small globular core stabilized by four disulfide bonds. Notably, an additional disulfide bond is present in the second loop, facilitating proper binding through the mobility of the tips of fingers I and II. Furthermore, hydrogen bonds contribute to the formation of an antiparallel β-sheet, maintaining the parallel orientation of the second and third loops. The structural integrity of the three-finger toxin is preserved by four of the disulfide bridges, while the fifth bridge, located on the tip of the second loop, can be reduced without compromising toxicity. The α-bungarotoxin polypeptide chain shows significant sequence homology with other neurotoxins from cobra and sea snake venoms, particularly with the α-toxin from Naja nivea.
Comparison of α-bungarotoxin with these homologous toxins from cobra and sea snake venoms reveals a high degree of conservation in certain residues. For instance, 18 constant residues, including the eight half-cysteines, are observed in all toxin sequences. Therefore, α-bungarotoxin shares common structural motifs with other toxins of the three-fingered family. For example, α-cobratoxin, erabutoxin A, and candoxin contain three adjacent loops emerging from a small, globular, hydrophobic core that is cross-linked by four conserved disulfide bridges. This conservation suggests the presence of essential functional elements that are shared among these neurotoxins. Lastly, the abundance of disulfide bonds and the limited secondary structure observed in α-bungarotoxin explain its exceptional stability, which makes it resistant to denaturation even under extreme conditions such as boiling and exposure to strong acids. Synthesis Chemical synthesis Due to its very large and complex structure, synthesizing α-bungarotoxin has represented a great challenge for synthetic chemists. A study conducted by O. Brun et al. proposed a mechanism for the chemical synthesis of this neurotoxin. It involves a strategy utilizing peptide fragments and native chemical ligation (NCL). Because of the toxin's length, synthesizing the full linear peptide by solid-phase peptide synthesis (SPPS) is not achievable; the synthesis was therefore carried out by choosing three peptide fragments that can subsequently undergo native chemical ligation. This method produces a native peptide bond between two fragments by reacting a C-terminal thioester with an N-terminal cysteine. The synthesis strategy employed proceeded from the C-terminus towards the N-terminus. Firstly, the shorter peptide fragments are synthesized via automated SPPS. The first two peptides have a Trp-Cys ligation point, while the ligation with the last fragment occurs at a Gly-Cys ligation point. Additionally, in this study an alkyne functionality was introduced at the N-terminus of the peptide chain. This allows the conjugation of different molecules, such as fluorophores, via bioorthogonal reactions. By fluorescently labelling the chemically synthesised peptide, it was shown that it has the same effect and functionality on the nicotinic receptors as the naturally occurring α-bungarotoxin. Purification Owing to the challenging chemical synthesis of the neurotoxin, most studies were conducted using a purified form. To investigate the effects of α-bungarotoxin, the toxin has to be isolated from the venom of the elapid snake. The purification of the polypeptide is done via column chromatography. Firstly, the venom is dissolved in ammonium acetate buffer and then loaded onto a CM-Sephadex column. The compound is eluted in two steps with ammonium acetate buffer at a flow rate of 35 nl/h, using two linear buffer gradients of increasing pH. Biosynthesis α-Bungarotoxin is a peptide; it therefore follows the protein synthesis pathway of transcription and translation. The specific genes encoding the protein are transcribed into mRNA, which is then translated by the ribosomes, leading to the synthesis of the prepropeptide. Lastly, post-translational modification and folding occur. The mature peptide is stored in the venom gland until it is released during envenomation.
Mechanism of action The venom of snakes contains numerous proteins and peptide toxins that exhibit high affinity and specificity for a large range of receptors. α-Bungarotoxin is a nicotinic receptor antagonist that binds irreversibly to the receptor, inhibiting the action of acetylcholine at the neuromuscular junction. Nicotinic receptors are one of the two subtypes of cholinergic receptors that respond to the neurotransmitter acetylcholine. Nicotinic acetylcholine receptors (nAChRs) are ligand-gated ion channels and thus ionotropic receptors. When a ligand binds, the receptor regulates excitability by controlling ion flow during the action potential in neurotransmission, primarily through the activation of voltage-gated ion channels upon depolarization of the plasma membrane. The depolarization is induced by an influx of cations, mainly sodium ions. The overall modulation of cellular excitability requires an influx of sodium ions into, and an efflux of potassium ions out of, the cell. α-Bungarotoxin induces paralysis in skeletal muscles, and in the central and peripheral nervous system it binds to the α7 subtype of nicotinic receptors. α-Neurotoxins are known as "curare-mimetic toxins" because their effects resemble those of the arrow poison tubocurarine. A difference between α-neurotoxins and curare alkaloids is that the former bind irreversibly and the latter reversibly. α-Neurotoxins block the action of acetylcholine (ACh) at the postsynaptic membrane by irreversibly inhibiting the ion flow. From the same bungarotoxin (BTX) family, κ-BTX was shown to act postsynaptically on α3 and α4 neuronal nicotinic receptors, with little effect on the muscular nAChRs targeted by α-BTX. In contrast, β- and γ-BTX act presynaptically by reducing ACh release. It is important to note that neurotoxins are named based on the receptor type they target. The nicotinic receptors are made up of five subunits each and contain two binding sites for snake venom neurotoxins. The α7-nAChR is a homopentamer consisting of five identical α7 subunits. The α7 receptor is known to have a higher Ca2+ permeability compared to other nicotinic receptors. Changes in intracellular Ca2+ can activate important cellular pathways such as the STAT pathway or NF-κB signalling. Consistent with experimental data on the amount of toxin per receptor, a single molecule of the toxin is sufficient to inhibit channel opening. Some computational studies of the mechanism of inhibition using normal mode dynamics suggest that a twist-like motion caused by ACh binding may be responsible for pore opening and that this motion is inhibited by toxin binding. Metabolism The following section describes the ADME (absorption, distribution, metabolism and excretion) of α-bungarotoxin. It is important to note that there is limited information available on the pharmacokinetics of this neurotoxin, and more research is needed to fully understand its metabolism inside the body. Absorption: α-bungarotoxin enters the body after envenomation, passing into the bloodstream at the bite site as part of the venom's mixture of proteins and other molecules. Distribution: Once in the bloodstream, α-bungarotoxin circulates throughout the body. Its distribution may be influenced by factors such as blood flow, tissue permeability, and the presence of binding proteins.
Additionally, since it binds to nAChRs, it can be predicted where the neurotoxin will be present: neuromuscular junctions, autonomic ganglia, peripheral nerves, and the adrenal medulla. One of the main locations is also the central nervous system (CNS), including the brain; specific regions such as the hippocampus, cortex, and basal ganglia contain these receptors. Metabolism: The metabolic pathways of this neurotoxin are not yet fully understood; however, it is thought to be metabolised in the liver. Researching venom metabolism is challenging due to the multiple components present in venom. Toxins that are not bound may undergo elimination through opsonization by the reticuloendothelial system, mainly involving the liver and kidneys, or they may undergo degradation through cellular internalization facilitated by lysosomes. Excretion: It is common for proteins and peptides to be excreted via the hepatic and renal pathways. In the liver, the constituent amino acids undergo transamination, by which they are converted into ammonia and keto acids. Lastly, these substances are excreted via the kidney. However, it is important to take into account that α-bungarotoxin binds irreversibly to the receptors, which would result in a very low metabolic and excretion rate, as most of the neurotoxin would remain at the receptor sites. Indications, availability, efficacy, adverse effects Indications α-Bungarotoxin is among the most well-characterized snake toxins, with high affinity and specificity for nicotinic acetylcholine receptors. It is a competitive antagonist at nAChRs, where it irreversibly and competitively blocks the receptor at the acetylcholine binding sites. It binds to the α1 subunit contained in muscle nAChRs, as well as to subsets of neuronal nAChRs such as α7-α10. In addition, it was shown that α-bungarotoxin binds to, and blocks, a subset of GABAA receptors in which β3 subunits interface with each other. With this knowledge in mind, researchers can use α-bungarotoxin as an experimental tool for studying the properties of cholinergic receptors. In addition, by knowing the different and specific binding sites, researchers are able to visualize and track receptor localization and dynamics within cells. This technique has been made straightforward by the use of a 13-amino-acid mimotope (WRYYESSLEPYPD), which forms a high-affinity α-bungarotoxin binding site on the receptors. It has been extensively used in research to study the localization and distribution of these receptors. Techniques like fluorophore or enzyme conjugation, followed by microscopy or immunohistochemical staining respectively, can give insights into the complex organization and function of the nervous system. With the mentioned techniques, researchers can work towards drug development and understand disease mechanisms. They can identify potential drug targets by selectively regulating the activity of certain receptors. By observing how receptors behave in contact with α-bungarotoxin compared to when no toxin is present, researchers can study the mechanism of the toxin. Availability α-Bungarotoxin is available for purchase from multiple biotechnological companies, such as Sigma-Aldrich or Biotium. Researchers may purchase it from there to perform a variety of studies on the toxin. Regarding bioavailability, researchers performed a study in the developing spinal cord of chick embryos.
They found that binding of α-bungarotoxin was specific and saturable within the concentration range of 1-34 mM. That is, as the concentration of α-bungarotoxin increased, available binding sites became increasingly occupied, reaching saturation at 34 mM. Once no binding sites were available, nicotine behaved in a competitive manner and displaced the already-bound α-bungarotoxin. They also found that the dissociation constant (Kd) was 8.0 nM - the concentration of α-bungarotoxin at which half of the binding sites are occupied. Moreover, the maximum binding capacity (Bmax) was found to be 106 +/- 12 fmol/mg - the maximum number of binding sites available per unit of protein. Finally, exogenously administered α-bungarotoxin was shown to penetrate the spinal cord tissue and bind to its specific sites after 7 days. Efficacy The efficacy of α-bungarotoxin can be assessed by analyzing its binding affinity. It affects signal transmission at the skeletal neuromuscular junction by binding to the postsynaptic nAChRs with high affinity. The affinity of the toxin for this receptor is measured with a dissociation constant (Kd) ranging from 10^-11 to 10^-9 M. In addition to binding at skeletal neuromuscular junctions, it can specifically bind to different neuronal subsets, such as α7. This binding affinity is only slightly lower, with Kd measured in the range of 10^-9 to 10^-8 M. Efficacy can also be analyzed through receptor inhibition, specifically inhibition of the action of acetylcholine on nAChRs. One study found that 5 micrograms/ml of the toxin completely blocks the endplate potential and extrajunctional acetylcholine sensitivity of surface fibers within approximately 35 minutes in normal and chronically denervated muscles. They performed a washout period of 6.5 hours, which resulted in a partial recovery of the endplate potential, with an amplitude of 0.72 +/- 0.033 mV in normal muscles. In denervated muscles, a partial recovery of acetylcholine sensitivity was observed, with an amplitude of 41.02 +/- 3.95 mV/nC compared to a control amplitude of 1215 +/- 197 mV/nC. This same study also found a small population of acetylcholine receptors (1% of the total population) that reacts with α-bungarotoxin reversibly. With the toxin, either 20 µM carbamylcholine or decamethonium was used simultaneously in normal muscles. Once the toxin and the drug were washed out, the muscle's twitch was restored to control levels within 2 hours. The susceptibility of different species to the venom of a krait snake, which contains α-bungarotoxin, varies based on their genetic makeup. α-Bungarotoxin binds best to acetylcholine receptor alpha-subunits containing aromatic amino acid residues at positions 187 and 189 - e.g. in shrews, cats and mice. Species like humans and hedgehogs, which have nonaromatic amino acid residues at the same positions, show a decreased binding affinity for α-bungarotoxin. Finally, snakes and mongooses have specific amino acid substitutions at positions 187, 189, and 194 of the alpha-subunit, which abolish binding of the toxin. Adverse effects In humans, exposure to α-bungarotoxin can lead to various symptoms, such as headache, dizziness, unconsciousness, visual and speech disturbances, and occasionally seizures. Severe abdominal pain and muscular paralysis set in within 10 hours and may last for 4 days. Finally, respiratory paralysis can lead to death.
Additionally, it can also lead to mild symptoms like dermatitis and allergic reactions, or stronger symptoms like blood coagulation, disseminated intravascular coagulation, tissue injury, and hemorrhage. Studies have also been done to analyze the effect of α-bungarotoxin in animals. One study showed this toxin causing paralysis in chickens by blocking neuromuscular transmission at the motor end-plate. This led to muscle weakness and ultimately paralysis. Venomous animals were already widespread across the world in ancient times, and folk medicine utilized plant-based and bioactive inhibitor compounds to treat bites from venomous animals like snakes and scorpions. This approach proved successful in preventing envenomation, effectively mitigating the harmful effects of venom on the victims. Today, treatment for krait bites involves antivenom, which can lead to various undesirable and potentially life-threatening side effects, such as nausea, urticaria, hypotension, cyanosis, and severe allergic reactions. Toxicity α-Bungarotoxin belongs to a group of bungarotoxins, which are a type of poisonous protein found in the venom of kraits - among the six most deadly snakes in Asia. Their bite can lead to respiratory paralysis and death. α-Bungarotoxin irreversibly and competitively binds to muscular and neuronal acetylcholine receptors. The paralysis happens due to the neuromuscular transmission at the postsynaptic site being blocked. LD50 values, representing the dose required to cause death in 50% of animals, were studied in mice using different routes of administration. Subcutaneous administration showed that 0.108 mg/kg was needed to kill 50% of mice. Intravenous administration resulted in a slightly higher LD50 value of 0.113 mg/kg. However, when it was administered intraperitoneally, the LD50 value was 0.08 mg/kg. These values can aid in risk assessment of the toxin. See also β-Bungarotoxin κ-Bungarotoxin References External links Structure at GenBank Ion channel toxins Nicotinic antagonists Snake toxins Bungarus Neurotoxins
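As a hedged illustration of the binding figures quoted in the availability and efficacy sections above (Kd = 8.0 nM, Bmax = 106 fmol/mg), the short Python sketch below evaluates the standard one-site saturation-binding model B = Bmax·[L] / (Kd + [L]). The model choice, function name and test concentrations are assumptions for demonstration, not values or code taken from the original studies.

KD_NM = 8.0     # dissociation constant in nM (figure quoted in the text)
BMAX = 106.0    # maximal specific binding in fmol/mg (figure quoted in the text)

def specific_binding(ligand_nm):
    """Bound toxin (fmol/mg) at a free toxin concentration given in nM."""
    return BMAX * ligand_nm / (KD_NM + ligand_nm)

for conc in (1.0, 8.0, 34.0, 100.0):       # hypothetical test concentrations, nM
    occupancy = conc / (KD_NM + conc)       # fractional receptor occupancy
    print(f"[L] = {conc:5.1f} nM -> {specific_binding(conc):5.1f} fmol/mg "
          f"({100 * occupancy:4.1f}% occupancy)")

At [L] = Kd the model returns 50% receptor occupancy by construction, which is precisely the operational meaning of the quoted dissociation constant.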
Α-Bungarotoxin
Chemistry
5,150
47,242,273
https://en.wikipedia.org/wiki/Canebrake
A canebrake or canebreak is a thicket of any of a variety of Arundinaria grasses: A. gigantea, A. tecta and A. appalachiana. As bamboos, these giant grasses grow in tall thickets. A. gigantea is generally found in stream valleys and ravines throughout the southeastern United States. A. tecta is a smaller stature species found on the Atlantic and Gulf Coastal Plains. Finally, A. appalachiana is found in more upland areas at the southern end of the Appalachian Mountains. Cane does not do well on sites that meet wetland classification. Instead, canebrakes are characteristic of moist lowland, floodplain areas that are not as saturated as true wetlands. History Canebrakes were formerly widespread in the Southern United States. The presence of canebrakes signaled to Native Americans and to early European settlers that an area was fertile and ecologically rich. The canebrakes were a striking feature of the landscape to the earliest European explorers, who remarked upon how densely the cane grew and how difficult it was to travel through. For example, in 1728 William Byrd described hacking through a "forest" of cane "more than a furlong [220 yards] in depth" as he blazed a trail along the border of Virginia and Carolina. Likewise, William Bartram described "the most extensive Canebreak that is to be seen on the face of the whole earth," writing that the canes grew 10-12 feet tall and so close together they were completely impassable without hacking a trail through them. However, as European settlers came, the cane gradually disappeared, as it provided high-quality forage for livestock that was available all year round. Pigs in particular destroyed canebrakes rapidly by rooting up their underground rhizomes, and the settlers intentionally used pigs to clear out canebrakes so they could be converted to agricultural land. The absence of the controlled burns used by Native Americans to maintain the canebrakes, conversion to agricultural land, and grazing by livestock have almost eliminated the canebrakes. Destruction of habitat for development and construction is also implicated as a cause of the decline by the Choctaw Nation. Ecology This destruction has impacted a number of species. The survival of the Florida panther (Puma concolor couguar) has been challenged, and Bachman's warbler (Vermivora bachmanii) has probably become extinct. The extinct Carolina parakeet (Conuropsis carolinensis) also depended on canebrakes, and their demise may have been hastened by the canebrakes' decline. Other species considered canebrake specialists include as many as seven moth species and five known butterfly species dependent on Arundinaria bamboos as a host plant, and Swainson's warbler (Limnothlypis swainsonii). Swainson's warbler has recently been found to use pine plantations (widespread across the Southeastern United States) of a particular age, as they may provide the structural features and prey base that the species seeks. Contrary to the characterization of canebrakes as homogenous, they host a great diversity of species, including globally rare species. A survey of canebrakes in the Carolinas found 330 taxa living in the canebrake habitat, a number that would likely increase with more study. Canebrakes provide habitat for the critically endangered Alabama canebrake pitcher plant (Sarracenia alabamensis), which is only found in 11 sites in just two counties of the state of Alabama.
Cane can propagate itself rapidly through asexual reproduction, allowing it to persist quietly in the shade of a forest for years and rapidly take advantage of disturbance such as wildfire. Historically, canebrakes were maintained by Native Americans using controlled burns. The fire would burn the aboveground part of the plant but leave the underground rhizomes unharmed. Canebrakes have been identified as important ecosystems for supporting over 70 wildlife species, possibly ideal candidates for mitigating nitrate pollution in groundwater, and crucial to the material cultures of Southeastern Native American nations, but relatively little study has been devoted to them, partially because virtually all canebrakes that still exist are isolated and fragmentary. Canebrakes are unlikely to be reestablished significantly under current methods of land management, but there is interest in finding out how to restore them. Rare plant species Canebrakes have been found to provide habitat for the following rare plants: Lysimachia asperulifolia, rough-leaved loosestrife Lilium pyrophilum, sandhills lily Eupatorium resinosum, pine barrens thoroughwort Dionaea muscipula, Venus flytrap Sarracenia alabamensis, Alabama canebrake pitcher plant Carex austrodeflexa, canebrake sedge Conservation Canebrakes are considered a critically endangered ecosystem by many biologists, but they have been studied very little. Southern Illinois University conducts some ongoing research on restoring canebrakes. There is also a great interest among the Eastern Band of Cherokee Indians and other regional tribal nations to restore canebrakes, to preserve the crucial roles the plant plays in Cherokee culture and to stop the art of river cane basket weaving from dying out. The various insect species that are Arundinaria specialists are at risk due to their sensitivity to habitat fragmentation. A major obstacle to restoring canebrakes is the reproductive habits of Arundinaria bamboos; Arundinaria typically reproduces asexually using rhizomes, forming clonal colonies that spread outward. The plant only flowers every few decades, and usually dies after flowering; additionally, seeds are often not viable. Therefore, propagating the plant must usually be done by dividing existing colonies or growing rhizome cuttings. Conducting studies has been challenging; experimental plantings of cane in a study conducted by researchers at Mississippi State University to test the erosion mitigation potential of canebrakes yielded no results because only 1.2% of seedlings survived the following year. Southern Illinois University researchers have located 140 patches of giant cane and collaborate with many conservation organizations and the U.S. Department of Agriculture Forest Service in the effort to replant 15 acres of cane per year. In South Carolina, the Chattooga Conservancy has formed a collaboration with the Eastern Band of the Cherokee and the USDA Forest Service to restore 29 acres of canebrake. Revitalization of Traditional Cherokee Artisan Resources (RTCAR) has also coordinated the restoration of river cane on a 109-acre site in North Carolina. This restoration area will include educational signage in the Cherokee and English languages. In 2023, the United Keetoowah Band of Cherokee Indians in Oklahoma co-hosted the first Rivercane Gathering in Tahlequah, Oklahoma with the U.S. 
Forest Service, an educational event to unite traditional tribal experts and artisans with various researchers and landholders for the continued preservation of canebrakes. References Grasses Habitats Riparian zone Environmental conservation Plant communities of the Eastern United States Plant communities of Kentucky Plant communities of Alabama Plant communities of Tennessee Plant communities of Georgia (U.S. state) Plant communities of Mississippi Plant communities of South Carolina Plant communities of North Carolina Plants by habitat
Canebrake
Biology,Environmental_science
1,482
61,054,835
https://en.wikipedia.org/wiki/QFAB%20Bioinformatics
QFAB Bioinformatics is a Queensland-based organisation concerned with the provision of resources in bioinformatics, biostatistics and specialised computing platforms. QFAB operates Australia-wide and is a key contributor to the EMBL Australia Bioinformatics Resource. History QFAB was established in 2007, with funding from the Queensland Government's National and International Research Alliances Program, as a joint venture between The University of Queensland, Queensland University of Technology, Griffith University, CSIRO’s Australian eHealth Research Centre and the Queensland Government’s Department of Agriculture, Fisheries and Forestry. Mark Ragan from the Institute for Molecular Bioscience (IMB) and Anthony Maeder from the Australian eHealth Research Centre led QFAB's establishment and appointed Jeremy Barker as CEO (2007–2014) to address three critical issues then facing bioinformatics in Queensland: integrated data and high-performance computing in a secure environment; affordable network bandwidth; and access to expert personnel. In 2015, Dominique Gorse became CEO of QFAB and led the strategic alliance with QCIF, the Queensland Cyber Infrastructure Foundation; the two organisations merged in April 2016. QCIF operates significant high-performance computing, cloud computing and data storage resources and is part of the national eResearch infrastructure. Queensland Cyber Infrastructure Foundation QFAB Bioinformatics is a unit of the Queensland Cyber Infrastructure Foundation (QCIF), a not-for-profit member-based organisation. Members Central Queensland University Griffith University James Cook University Queensland University of Technology The University of Queensland University of Southern Queensland Affiliate member University of the Sunshine Coast Galaxy Australia QFAB and QCIF, together with the University of Melbourne's Melbourne Bioinformatics, and the University of Queensland's Research Computing Centre jointly built and operate Galaxy Australia, which is a major feature of the Genomics Virtual Laboratory, based on the Galaxy scientific workflow system. References Bioinformatics organizations 2007 establishments in Australia
QFAB Bioinformatics
Biology
403
18,303,699
https://en.wikipedia.org/wiki/Vertical%20boiler
A vertical boiler is a type of fire-tube or water-tube boiler where the boiler barrel is oriented vertically instead of the more common horizontal orientation. Vertical boilers were used for a variety of steam-powered vehicles and other mobile machines, including early steam locomotives. Design considerations Tube arrangements Many different tube arrangements have been used. Examples include: Fire tubes Vertical fire-tube boiler Vertical boiler with horizontal fire-tubes Water tubes Vertical cross-tube boiler Field-tube boiler Thimble tube boiler Spiral watertube boiler Advantages The main advantages of a vertical boiler are: Small footprint – where width and length constraints are critical, use of a vertical boiler permits design of a smaller machine. Water-level tolerance – The water level in a horizontal boiler must be maintained above the crown (top) of the firebox at all times, or the crownplate could overheat and buckle, causing a boiler explosion. For a vehicle application expected to traverse hills, such as a railway locomotive or steam wagon, maintaining the correct water level when the vehicle itself is not level is a skilled task, and one that occupies much of the fireman's time. In a vertical boiler, all of the water sits on top of the firebox, and the boiler would need to be extremely low on water before a gradient could cause a risk by uncovering the firebox top. Simpler (major) maintenance – A vertical boiler is usually mounted on a frame on the vehicle, allowing easy replacement. Horizontal boilers, such as those on railway locomotives and traction engines, form an integral part of the vehicle – the vehicle is literally built around the boiler – and hence replacement requires the dismantling of the entire vehicle. Disadvantages The main disadvantages of a vertical boiler are: Size – The benefits of a small footprint are compromised by the much greater height required. The presence of over-bridges limits the height of steam vehicles, and this in turn restricts the size (and hence steam production) of the boiler. Grate area – This is limited to the footprint of the boiler, thus restricting the amount of steam that may be produced. Short tubes – Boiler tubes must be kept short to minimise height. As a result, much of the available heat is lost through the chimney, as the hot gas has too little time to heat the tubes. Sediment – Sediment may settle on the bottom tube sheet (the plate above the firebox), insulating the water from the heat and allowing the sheet to burn out. Applications Railway locomotives Several manufacturers produced a significant number of vertical boiler locomotives. Notable amongst these were: Alexander Chaplin & Co. of Glasgow, who produced a range of steam-powered industrial products which included steam cranes, hoists, locomotives, pumping and winding engines, ship's deck engines and sea water distilling apparatus. Between 1860 and 1899, it delivered 135 vertical boiler locomotives similar to the East London Harbour 0-4-0VB to customers around the world. De Winton of Caernarfon, who produced at least 34 narrow gauge locomotives, mainly for use in the slate quarries of Wales. Sentinel Waggon Works of Shrewsbury, who produced a large number of shunters using their high-pressure vertical boilers. These were mainly used on industrial railways in Britain. Société anonyme John Cockerill produced 891 standard gauge shunting locomotives between 1867 and 1942 using a standard design with five sizes.
Steam lorries The Sentinel Waggon Works also produced a range of road lorries (steam wagons) based on their high-pressure vertical boilers. Steam tractors The Best Manufacturing Company of San Leandro in California produced a range of steam tractors that used vertical boilers. Steam rollers Certain designs of steam roller departed from the conventional traction engine style of a horizontal boiler with an engine mounted above. Vertical-boilered rollers were built around a substantial girder frame chassis, with the boiler being mounted low down between the front and rear rolls. Such designs were not common in the UK. Steam donkeys The traditional form of steam donkey (as a mobile winch used in the logging industry) married a vertical boiler with a steam engine on a rigid base fitted with skids for mobility. Since the ground to be traversed would be rough and rarely level, the water-level-tolerant design of the vertical boiler was an obvious choice. Steam shovels and cranes Construction equipment such as steam cranes and steam shovels used vertical boilers to good effect. On a rotating base, the weight of the boiler would help to counterbalance the load suspended from the shovel bucket or crane jib, mounted on the opposite side of the pivot from the boiler. The compact boiler footprint permitted smaller designs than would have been the case for a horizontal type, thus allowing use on smaller worksites; the extra height of a vertical boiler being less critical for such a generally tall machine. Marine applications Some steam boats, particularly smaller types such as river launches, were designed around a vertical boiler. The small footprint of the boiler permitted smaller, more space-efficient designs, with less of the usable vessel being occupied by the means of propulsion rather than by the payload. Stationary applications Vertical types such as the Cochran boiler provided useful small-footprint package solutions for many stationary applications, including process and space heating. Notes References Vertical boilers Steam boilers Boilers
Vertical boiler
Chemistry
1,063
19,363,582
https://en.wikipedia.org/wiki/RightScale
RightScale was a company that sold software as a service for cloud computing management for multiple providers. The company was based in Santa Barbara, California. It was acquired by Flexera Software in 2018. History Thorsten von Eicken, a former professor of computer science at Cornell University, left to manage systems architecture for Expertcity, the startup company that became Citrix Online. He was joined by RightScale CEO Michael Crandell, and RightScale Vice President of Engineering Rafael H. Saavedra. RightScale received $4.5 million in venture capital in April 2008, $13 million in December 2008, and $25 million in September 2010 at a valuation of $100-$125 million. On November 5, 2012, RightScale announced it was expanding its existing relationship with cloud hosting provider Rackspace to integrate with OpenStack. On July 18, 2012, RightScale announced its acquisition of the Scotland-based PlanForCloud.com (formerly ShopForCloud.com), which provides a free cloud cost forecasting service. In February 2013, RightScale became the first cloud management company to resell Google Compute Engine public cloud services. RightScale introduced the Cloud Maturity Model with the release of its second annual State of the Cloud Report on April 25, 2013. The report findings are based on a RightScale survey of 625 IT decision makers and categorized according to the Cloud Maturity Model, which is an analysis and segmentation of companies based on their varying degrees of cloud adoption. On September 26, 2018, Flexera Software acquired RightScale for an undisclosed amount. References Notes External links Official site Cloud applications Cloud infrastructure Cloud computing providers Software companies established in 2006 Companies based in Santa Barbara County, California 2006 establishments in California American companies disestablished in 2018 American companies established in 2006 Software companies disestablished in 2018 2018 disestablishments in California Defunct software companies of the United States
RightScale
Technology
394
27,880,679
https://en.wikipedia.org/wiki/Co-fermentation
Co-fermentation is the practice in winemaking of fermenting two or more fruits at the same time when producing a wine. This differs from the more common practice of blending separate wine components into a cuvée after fermentation. While co-fermentation in principle could be practiced for any mixture of grape varieties or other fruits, it is today more common for red wines produced from a mixture of red grape varieties and a smaller proportion of white grape varieties. Co-fermentation is an old technique, going back to the now uncommon use of field blends (mixed plantations of varieties) in vineyards, and to the former practice in some regions (such as Rioja and Tuscany) of using a small proportion of white grapes to "soften" red wines, which tended to have harsh tannins when produced with the winemaking methods of the time. It is believed that the practice may also have been adopted because it was found empirically to give deeper and better colour to wines, which is due to improved co-pigmentation resulting from some components in white grapes. Use today The only classical Old World wine region where co-fermentation is still widely practiced is now the Côte-Rôtie appellation of northern Rhône, while the use of white varieties in red Rioja and Tuscany wine has more or less disappeared. In Côte-Rôtie, the red variety Syrah and the aromatic white variety Viognier (up to 20% is allowed, but 5–10% is more common) must be co-fermented if Viognier is used. The reason why Viognier has been kept in Côte-Rôtie (while for example the white grapes Marsanne and Roussanne are hardly found any more in red Hermitage or other red Rhône wines where they are allowed) is that it adds signature floral aromas to the wines. The popularity of Côte-Rôtie has led to New World interpretations of this blend, most notably Australian Shiraz-Viognier blends, which are also produced by co-fermentation. The reason why co-fermentation is not more widely practiced is that it "locks in" a certain blend at the start of the fermentation, which leaves the winemaker less scope to adjust the blend after fermentation. Co-fermentation is also performed where the varieties of a field blend are indistinguishable from each other and thus cannot be fermented separately. References Fermentation in food processing Winemaking
Co-fermentation
Chemistry
517
4,947,029
https://en.wikipedia.org/wiki/Roy%20and%20Silo
Roy and Silo (born 1987) were two male chinstrap penguins in New York City's Central Park Zoo. They were noted by staff at the zoo in 1998 to be performing mating rituals, and one of them in 1999 attempted to hatch a rock as if it were an egg. This inspired zoo keepers to give them an egg from a pair of penguins, which could not hatch it, resulting in both of them raising a chick that was named Tango. Tango herself was viewed in a similar situation with another female penguin. Roy and Silo drifted apart after several years, and in 2005, Silo paired with a female penguin called Scrappy. Roy and Silo's story has been made into a children's book and featured in a play. The practice of allowing pairs of male penguin couples to adopt eggs has been repeated in other zoos around the world. Both Tango and Roy have since died. History Roy and Silo met at the zoo and they began their relationship in 1998. They were observed conducting mating rituals typical of their species including entwining their necks and mating calls. In 1999 the pair were observed trying to hatch a rock as if it were an egg. They also attempted to steal eggs from other penguin couples. When the zoo staff realized that Roy and Silo were both male, they tested them further by replacing the rock with a dummy egg made of stone and plaster. As it was "incubated real well", it occurred to the zoo keepers to give them the second egg of a penguin couple, a couple which previously had been unable to successfully hatch two eggs at a time. Roy and Silo incubated the egg for 34 days and spent two and a half months raising the healthy young chick, a female named "Tango". When she reached breeding age, Tango paired with another female penguin called Tanuzi. As of 2005, the two had paired for two mating seasons. Shortly after their story broke in the press, Roy and Silo began to separate after a more aggressive pair of penguins forced them out of their nest. In 2005, Silo found another partner, a female called Scrappy, which had been brought from SeaWorld Orlando in 2002, while Roy paired with another male penguin named Blue. Both Tango and Roy have since died. Impact Roy and Silo were not the first same-sex male penguin couple to be known in New York, as a pairing of two penguins named Wendell and Cass at New York Aquarium was reported in 2002. However, attention was first brought to Roy and Silo after The New York Times published a story about them in May 2004. The article described them as "gay penguins", and listed two other pairs of penguins in New York that showed similar behavior. Roy and Silo's story became the basis for two children's books, And Tango Makes Three, by Justin Richardson and Peter Parnell and illustrated by Henry Cole, and the German-language Zwei Papas für Tango (Two Daddies for Tango) by Edith Schrieber-Wicke and Carola Holland. And Tango Makes Three itself became controversial, being listed as one of the top ten most challenged books in public libraries and schools across America for five years in a row, but became a bestseller. Roy and Silo have also been featured as characters in theatrical works, including the play Birds of a Feather, a character-driven piece about both gay and straight relationships, which made its début in Fairfax, Virginia in July 2011. And Then Came Tango, a play/ballet for young audiences by Emily Freeman, was premiered during the March 2011 Cohen New Works Festival at The University of Texas at Austin. 
The Austin Chronicle recognized the production with an Honorable Mention in its "Top 10 Theatrical Wonders of 2011." The breakup of the pair was well-received by certain groups. Warren Throckmorton said through the Christian right organization Focus on the Family: "For those who have pointed to Roy and Silo as models for us all, these developments must be disappointing. Some gay activists might actually be angry." A spokesperson for the National Gay and Lesbian Task Force responded by explaining that the actions of two penguins are not a good way of answering the question of whether sexual orientation is a choice or inborn. A 2010 study by France's Centre for Functional and Evolutionary Ecology found that homosexual pairings in penguins are widespread, but such pairings do not usually last more than a few years. The publicity on the subject caused public outcry among gay and lesbian communities when stories were published about zoo keepers forcibly splitting up same-sex penguin couples. Dwindling numbers of some species of penguins contributed to those decisions. The act of allowing a same-sex pair of penguins to adopt either an egg or a chick in the same manner as Roy and Silo has been repeated more than once. In 2009, German zookeepers gave an egg to a male same-sex pair of Humboldt penguins named Z and Vielpunkt, which hatched the egg and raised the chick. In 2011, Chinese zoo keepers gave a chick to a male same-sex pair of penguins to look after, once it became apparent that the chick's natural parents could not look after two chicks. In 2018, Sealife Sydney in Australia saw two male Gentoo penguins successfully hatch an egg after they had been observed incubating a dummy egg. In 2020 they hatched a second egg, and their first chick also had her own chick. The Central Park Zoo has had other same-sex couples, with both an all-male couple (named Squawk and Milo) and an all-female couple (named Georgey and Mickey) conducting courtship behavior. In 2014, zookeepers at Wingham Wildlife Park, in Kent, UK, gave an egg, abandoned by its mother after the father refused to help incubate it, to a male same-sex pair of Humboldt penguins called Jumbs and Kermit. The park owner stated in a BBC interview, "These two have so far proven to be two of the best penguin parents we have had yet." See also List of individual birds Sphen and Magic Homosexual behavior in penguins List of animals displaying homosexual behavior References Individual penguins 1987 animal births Animal sexuality Individual birds in the United States Bird duos
Roy and Silo
Biology
1,268
52,774,987
https://en.wikipedia.org/wiki/NGC%205050
NGC 5050 is a lenticular galaxy in the constellation Virgo. It was discovered by the German astronomer Albert Marth on April 30, 1864. It is also known as CGCG 44-43, MCG 1-34-12, PGC 46138 and UGC 8329. Marth discovered it in Malta with the help of William Lassell's 48-inch reflector. It is faint, small and stellar in appearance, with an apparent magnitude of about 14. See also New General Catalogue References External links 5050 Lenticular galaxies Virgo (constellation)
NGC 5050
Astronomy
116
70,524,446
https://en.wikipedia.org/wiki/N-Butylmercuric%20chloride
n-Butylmercuric chloride is an organic mercury salt that is used as a catalyst and a precursor to other organomercuric compounds. Preparation n-Butylmercuric chloride is made by reacting n-butylmagnesium bromide with mercury(II) chloride. It can also be prepared by reacting 1-butene with mercury(II) acetate. References Mercury(II) compounds Organomercury compounds
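Schematically, the Grignard route amounts to the transmetalation below (a standard way of writing such reactions, added here for illustration; the magnesium byproduct is conventionally written as the mixed halide):

n-C4H9MgBr + HgCl2 → n-C4H9HgCl + MgBrCl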
N-Butylmercuric chloride
Chemistry
87
220,589
https://en.wikipedia.org/wiki/Theory%20of%20imputation
The theory of imputation is based on the so-called theory of factors of production proposed by the French economist Jean-Baptiste Say and elaborated by the American economist John Bates Clark in his work The Distribution of Wealth (1899; Russian translation, 1934). The proponents of the theory of imputation see its main task as elucidating which parts of wealth may be attributed (imputed) to labor and capital, respectively. Principles In economics, the theory of imputation, first expounded by Carl Menger, maintains that factor prices are determined by output prices; that is, the value of the factors of production derives from their individual contributions to the final product, but each factor is valued at the value of the last unit it contributes to the final product (its marginal utility before the Pareto-optimal point is reached). Thus, Friedrich von Wieser identified a flaw in the theory of imputation as expounded by his teacher, Carl Menger: overvaluation may occur if one is confronted with economies where profits jump (maximums and minimums in the utility function, where its first derivative equals 0). Wieser thus suggested as an alternative the simultaneous solution of a system of industrial equations: Industry 1: X + Y = 300 Industry 2: 6X + Z = 900 Industry 3: 4Y + 3Z = 1700 ⇒ X = 100, Y = 200, Z = 300. Given that a factor is used in the production of a range of first-order goods, its value is determined by the good that is worth the least among all the goods in the range. This value is determined at the margin, the marginal utility of the last unit of the least valuable good produced by the factor. In connection with his concept of opportunity cost, the value so derived represents an opportunity cost across all industries, and the values of the factors of production and goods are determined in the whole system. Thus, supply and demand do not develop into the determinants of value; the determinant of value is marginal utility. That is the opposite of the labor theory of value, maintained by classical economists such as Adam Smith and David Ricardo. See also Implicit cost Imputed income Imputed rent References Costs Economics and time Theory of value (economics) Austrian School
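Wieser's three industry equations form a small linear system, and solving it numerically reproduces his factor values. The snippet below is a minimal illustrative sketch in Python with NumPy, added here as a check; it is not part of Wieser's or Clark's original expositions.

```python
import numpy as np

# Wieser's system:  X + Y = 300,  6X + Z = 900,  4Y + 3Z = 1700
A = np.array([[1.0, 1.0, 0.0],
              [6.0, 0.0, 1.0],
              [0.0, 4.0, 3.0]])
b = np.array([300.0, 900.0, 1700.0])

X, Y, Z = np.linalg.solve(A, b)
print(X, Y, Z)  # 100.0 200.0 300.0
```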
Theory of imputation
Physics
457
38,061,403
https://en.wikipedia.org/wiki/Pilea%20cavernicola
Pilea cavernicola is a herbaceous plant about 0.5 meters tall, native to China. A sciophyte, it grows in very low light conditions in caves in Fengshan County, Guangxi, China. References cavernicola Cave organisms
Pilea cavernicola
Biology
55
27,802,151
https://en.wikipedia.org/wiki/Refinaria%20do%20Planalto%20Paulista
Refinaria de Paulínia or simply REPLAN is a petroleum refinery located in the city of Paulínia in the São Paulo state, in Brazil. REPLAN is the largest refinery of the Brazilian company Petrobras, with a capacity of about , which accounts for about 20% of Brazilian overall petroleum refining capacity. About 80% of the processed petroleum is produced in Brazil, mostly from the Campos Basin. In popular culture The REPLAN refinery was destroyed in the Tom Clancy novel Dead or Alive, by terrorists hoping to undermine a deal between the United States and Petrobras to ship oil to the United States at sub-OPEC prices. External links Refinarias Petrobrás Oil refineries in Brazil Petrobras Paulínia Buildings and structures in São Paulo (state)
Refinaria do Planalto Paulista
Chemistry
163
14,799,759
https://en.wikipedia.org/wiki/Sp4%20transcription%20factor
Transcription factor Sp4 is a protein that in humans is encoded by the SP4 gene. Interactions Sp4 transcription factor has been shown to interact with E2F1. References Further reading External links Transcription factors
Sp4 transcription factor
Chemistry,Biology
43
51,270,205
https://en.wikipedia.org/wiki/Ramsay%20grease
Ramsay grease is a vacuum grease, used as a lubricant and a sealant of ground glass joints and cocks on laboratory glassware, e.g. burettes. It is usable to about 10⁻² mbar (about 1 Pa) and about 30 °C. Its vapor pressure at 20 °C is about 10⁻⁴ mbar (0.01 Pa). It is named after Sir William Ramsay. Different grades exist (e.g. thick or viscous, soft). The viscous one is used for standard stopcocks and ground joints. The soft grade is for large stopcocks and ground joints, desiccators, and for lower temperature use. Ramsay grease consists of paraffin wax, petroleum jelly, and crude natural rubber, in a ratio of 1:3:7 to 1:8:16. Due to the rubber content it has less tendency to flow. One recipe for a grease usable up to 25 °C consists of 6 parts of petroleum jelly, 1 part of paraffin wax, and 6 parts of Pará rubber. The dropping point of Leybold-brand Ramsay grease is 56 °C; its maximum service temperature is 25–30 °C. Its vapor pressure at 25 °C is 10⁻⁷ torr (0.013 mPa), at 38 °C it is 10⁻⁴ torr (13 mPa). An equivalent of Ramsay grease can be made by cooking lanolin with natural rubber extracted from golf balls. References Greases Vacuum systems Lubricants
Ramsay grease
Physics,Engineering
306
7,481,381
https://en.wikipedia.org/wiki/Dosage%20form
Dosage forms (also called unit doses) are pharmaceutical drug products presented in a specific form for use. They contain a mixture of active ingredients and inactive components (excipients), configured in a particular way (such as a capsule shell) and apportioned into a specific dose. For example, two products may both be amoxicillin, but one may come in 500 mg capsules, while another may be in 250 mg chewable tablets. The term unit dose can also refer to non-reusable packaging, particularly when each drug product is individually packaged. However, the FDA differentiates this by referring to it as unit-dose "packaging" or "dispensing". Depending on the context, multi(ple) unit dose may refer to multiple distinct drug products packaged together or a single product containing multiple drugs and/or doses. Formulations The term dosage form may also sometimes refer only to the pharmaceutical formulation of a drug product's constituent substances, without considering its final configuration as a consumable product (e.g., capsule, patch, etc.). Due to the somewhat ambiguous nature and overlap of these terms within the pharmaceutical industry, caution is advisable when discussing them with others who may interpret the terminology differently. Types Dosage forms vary depending on the method/route of administration, which can include many types of liquid, solid, and semisolid forms. Common dosage forms include tablets, capsules, drinks, and syrups, among others. A combination drug (or fixed-dose combination; FDC) is a product that contains more than one active ingredient (e.g., one tablet, one capsule, or one syrup with multiple drugs). In naturopathy, dosages can take the form of decoctions and herbal teas, in addition to the more conventional methods mentioned above. Route of administration The route of administration (ROA) for drug delivery depends on the dosage form of the substance. Different dosage forms may be available for a particular drug, especially if certain conditions restrict the ROA. For example, if a patient is unconscious or experiencing persistent nausea and vomiting, oral administration may not be feasible, necessitating the use of alternative routes, such as inhalational, buccal, sublingual, nasal, suppository, or parenteral. A specific dosage form may also be required due to issues such as chemical stability or pharmacokinetic properties. For instance, insulin cannot be given orally because it is extensively metabolized in the gastrointestinal tract (GIT) before it reaches the bloodstream, preventing it from reaching therapeutic target destinations. Similarly, the oral and intravenous doses of a drug like paracetamol differ for the same reason. Oral Pills, i.e. 
tablets or capsules Liquids such as syrups, solutions, elixirs, emulsions, and tinctures Liquids such as decoctions and herbal teas Orally disintegrating tablets Lozenges or candy (electuaries) Thin films (e.g., Listerine Pocketpaks, nitroglycerin) to be placed on top of or underneath the tongue as well as against the cheek Powders or effervescent powder or tablets, often instructed to be mixed into a food item Plants or seeds prepared in various ways such as a cannabis edible Pastes such as high-fluoride toothpastes Gases such as oxygen (can also be delivered through the nose) Ophthalmic Eye drops Lotions Ointments Emulsions Inhalation Aerosolized medication Dry-powder inhalers or metered-dose inhalers Nebulizer-administered medication Smoking Vaporizer-administered medication Unintended ingredients Talc is an excipient often used in pharmaceutical tablets that may end up being crushed to a powder against medical advice or for recreational use. Also, illicit drugs that occur as white powder in their pure form are often cut with cheap talc. Natural talc is cheap but contains asbestos, while asbestos-free talc is more expensive. Talc that contains asbestos is generally accepted as being able to cause lung cancer if it is inhaled. The evidence about asbestos-free talc is less clear, according to the American Cancer Society. Injection Parenteral Intradermally-administered (ID) Subcutaneously-administered (SC) Intramuscularly-administered (IM) Intraosseous administration (IO) Intraperitoneally-administered (IP) Intravenously-administered (IV) Intracavernously-administered (ICI) These are usually solutions and suspensions. Unintended ingredients Safe Eye drops (normal saline in disposable packages) are distributed to syringe users by needle exchange programs. Unsafe The injection of talc from crushed pills has been associated with pulmonary talcosis in intravenous drug users. Topical Creams, liniments, balms (such as lip balm or antiperspirants and deodorants), lotions, or ointments, etc. Gels and hydrogels Ear drops Transdermal and dermal patches to be applied to the skin Powders Unintended use It is not safe to calculate divided doses by cutting and weighing medical skin patches, because there is no guarantee that the substance is evenly distributed on the patch surface. For example, fentanyl transdermal patches are designed to slowly release the substance over 3 days. It is well known that cut fentanyl transdermal patches consumed orally have caused overdoses and deaths. The substance on single blotting papers for illicit drugs, dissolved in solvents and injected with syringes, may also be unevenly distributed across the surface. Other Intravaginal administration Vaginal rings Capsules and tablets Suppositories Rectal administration (enteral) Suppositories Suspensions and solutions in the form of enemas Gels Urethral Nasal sprays See also Classification of Pharmaco-Therapeutic Referrals Drug delivery Route of administration Pharmaceutical packaging References External links Dosage Form Development Pharmacokinetics
Dosage form
Chemistry
1,253
19,801,254
https://en.wikipedia.org/wiki/Nocturnal%20house
A nocturnal house, sometimes called a nocturama, is a building in a zoo or research establishment where nocturnal animals are kept and viewable by the public. The unique feature of buildings of this type is that the lighting within is isolated from the outside and reversed; i.e. it is dark during the day and lit at night. This is to enable visitors and researchers to more conveniently study nocturnal animals during daylight hours. Internally, a building usually consists of several glass-walled enclosures containing a replica of the animals' normal environments. In the case of burrowing animals, often their tunnels are 'half-glassed' so the animals can be observed while underground. Notable nocturnal houses Current USA Kingdoms of The Night, Omaha's Henry Doorly Zoo & Aquarium (Nebraska) Nocturnal Building and Aviary, Columbus Zoo & Aquarium (Ohio) Animals of The Night, Memphis Zoo (Tennessee) Bat House in Jaguar Jungle, Audubon Zoo (Louisiana) Brazos by Night, Cameron Park Zoo (Texas) Mouse House, Bronx Zoo (New York) Desert's Edge and Clouded leopard Rain Forest, Brookfield Zoo (Illinois) Night Hunters, Cincinnati Zoo and Botanical Garden (Ohio) Mexico Guadalajara Zoo United Kingdom Nightlife, ZSL London Zoo Fruit Bat Forest, Chester Zoo Europe Berlin Zoological Garden Frankfurt Zoological Garden Moscow Zoo Prague Zoo Plzen Zoo Budapest Zoological and Botanical Garden Australasia Taronga Zoo Wild Life Sydney Adelaide Zoo Perth Zoo David Fleay Wildlife Park Auckland Zoo India Nandankanan Zoological Park Former USA World of Darkness, Bronx Zoo (New York) - closed 2009 The Night Exhibit, Woodland Park Zoo (Seattle, WA) - closed 2010 References External links Queensland Environmental Protection Agency Goodzoos.com Zoos Nocturnal animals
Nocturnal house
Biology
354
62,556,833
https://en.wikipedia.org/wiki/Impulse%20vector
An impulse vector, also known as Kang vector, is a mathematical tool used to graphically design and analyze input shapers that can suppress residual vibration. The impulse vector can be applied to both undamped and underdamped systems, as well as to both positive and negative impulses in a unified manner. The impulse vector makes it easy to obtain impulse time and magnitude of the input shaper graphically. A vector concept for an input shaper was first introduced by W. Singhose for undamped systems with positive impulses. Building on this idea, C.-G. Kang introduced the impulse vector (or Kang vector) to generalize Singhose's idea to undamped and underdamped systems with positive and negative impulses. Definition For a vibratory second-order system with undamped natural frequency ωn and damping ratio ζ, the magnitude I and angle θ of an impulse vector (or Kang vector) corresponding to an impulse function A δ(t − t0) are defined in a 2-dimensional polar coordinate system as I = |A| e^(ζωn t0), θ = ωd t0, where A implies the magnitude of the impulse function, t0 implies the time location of the impulse function, and ωd implies the damped natural frequency ωn√(1 − ζ²). For a positive impulse function with A > 0, the initial point of the impulse vector is located at the origin of the polar coordinate system, while for a negative impulse function with A < 0, the terminal point of the impulse vector is located at the origin. □ In this definition, the magnitude I is the product of |A| and a scaling factor e^(ζωn t0) for damping during the time interval [0, t0], which represents the magnitude before being damped; the angle θ is the product of the impulse time t0 and the damped natural frequency ωd. δ(t − t0) represents the Dirac delta function with impulse time at t = t0. Note that an impulse function is a purely mathematical quantity, while the impulse vector includes physical quantities (that is, ωn and ζ of a second-order system) as well as a mathematical impulse function. Representing more than two impulse vectors in the same polar coordinate system makes an impulse vector diagram. The impulse vector diagram is a graphical representation of an impulse sequence. Consider two impulse vectors I1 and I2 in the figure on the right-hand side, in which I1 is an impulse vector with magnitude I and angle θ corresponding to a positive impulse with A1 > 0, and I2 is an impulse vector with magnitude I and angle θ + π corresponding to a negative impulse with A2 < 0. Since the two time-responses corresponding to I1 and I2 are exactly the same after the final impulse time as shown in the figure, the two impulse vectors I1 and I2 can be regarded as the same vector for vector addition and subtraction. Impulse vectors satisfy the commutative and associative laws, as well as the distributive law for scalar multiplication. The magnitude of the impulse vector determines the magnitude of the impulse, and the angle of the impulse vector determines the time location of the impulse. One rotation, angle 2π, on an impulse vector diagram corresponds to one (damped) period of the corresponding impulse response. If it is an undamped system (ζ = 0), the magnitude and angle of the impulse vector become I = |A| and θ = ωn t0. Properties Property 1: Resultant of two impulse vectors. The impulse response of a second-order system corresponding to the resultant of two impulse vectors is the same as the time response of the system with a two-impulse input corresponding to the two impulse vectors after the final impulse time, regardless of whether the system is undamped or underdamped. □ Property 2: Zero resultant of impulse vectors.
If the resultant of impulse vectors is zero, the time response of a second-order system for the input of the impulse sequence corresponding to the impulse vectors also becomes zero after the final impulse time, regardless of whether the system is undamped or underdamped. □ Consider an underdamped second-order system with the transfer function G(s) = ωn²/(s² + 2ζωn s + ωn²), which has undamped natural frequency ωn and damping ratio ζ. For given impulse vectors I1 and I2 as shown in the figure, the resultant IR can be represented in two ways, IR1 and IR2, in which IR1 corresponds to a negative impulse and IR2 corresponds to a positive impulse; note that IR1 and IR2 represent the same resultant. The impulse responses corresponding to IR1 and IR2 are exactly the same as the response to the original pair I1 and I2 after each impulse time location, as shown in the green lines of figure (b). Now, place an impulse vector I3 = −IR on the impulse vector diagram to cancel the resultant IR, as shown in the figure. When the impulse sequence corresponding to the three impulse vectors I1, I2 and I3 is applied to a second-order system as an input, the resulting time response causes no residual vibration after the final impulse time, as shown in the red line of the bottom figure (b). Of course, another canceling vector can exist, which is the impulse vector with the same magnitude as I3 but with an opposite arrow direction. However, this canceling vector has a longer impulse time, which can be as much as a half period longer than that of I3. Applications: Design of input shapers using impulse vectors ZVDn shaper Using impulse vectors, we can redesign known input shapers such as zero vibration (ZV), zero vibration and derivative (ZVD), and ZVDn shapers. In the following, K = e^(−πζ/√(1 − ζ²)) denotes the damping constant and Td = 2π/ωd the damped period. The ZV shaper is composed of two impulse vectors, in which the first impulse vector is located at 0° and the second impulse vector with the same magnitude is located at 180°, that is, at θ2 = ωd t2 = π. Then, from the impulse vector diagram of the ZV shaper on the right-hand side, I1 = I2, so the impulse magnitudes satisfy A2 = A1 K. Therefore, t2 = π/ωd = 0.5 Td. Since the normalization constraint A1 + A2 = 1 must hold, A1 = 1/(1 + K) and A2 = K/(1 + K). Thus, the ZV shaper is given by impulse times (t1, t2) = (0, 0.5 Td) and magnitudes (A1, A2) = (1/(1 + K), K/(1 + K)). The ZVD shaper is composed of three impulse vectors, in which the first impulse vector is located at 0 rad, the second vector at π rad, and the third vector at 2π rad, and the magnitude ratio is I1 : I2 : I3 = 1 : 2 : 1. Then the impulse times are (t1, t2, t3) = (0, 0.5 Td, Td). From the impulse vector diagram, the impulse magnitudes satisfy A1 : A2 : A3 = 1 : 2K : K². Since the normalization constraint A1 + A2 + A3 = 1 must hold, the ZVD shaper is given by (t1, t2, t3) = (0, 0.5 Td, Td) and (A1, A2, A3) = (1/(1 + K)², 2K/(1 + K)², K²/(1 + K)²). The ZVD2 shaper is composed of four impulse vectors, in which the first impulse vector is located at 0 rad, the second vector at π rad, the third vector at 2π rad, and the fourth vector at 3π rad, and the magnitude ratio is I1 : I2 : I3 : I4 = 1 : 3 : 3 : 1. Then the impulse times are (t1, t2, t3, t4) = (0, 0.5 Td, Td, 1.5 Td), and, with the normalization constraint, the ZVD2 shaper is given by (A1, A2, A3, A4) = (1/(1 + K)³, 3K/(1 + K)³, 3K²/(1 + K)³, K³/(1 + K)³). Similarly, the ZVD3 shaper with five impulse vectors can be obtained, in which the first vector is located at 0 rad, the second vector at π rad, the third vector at 2π rad, the fourth vector at 3π rad, and the fifth vector at 4π rad, and the magnitude ratio is I1 : I2 : I3 : I4 : I5 = 1 : 4 : 6 : 4 : 1. In general, for the ZVDn shaper, the i-th impulse vector is located at (i − 1)π rad, and the magnitude ratio is given by the binomial coefficients nC(i−1), where nCi implies a mathematical combination. ETM shaper Now, consider equal shaping-time and magnitudes (ETM) shapers, with the same magnitude of impulse vectors and with the same angle between impulse vectors. The ETMn shaper satisfies the conditions θi = 2π(i − 1)/(n − 1) for i = 1, 2, …, n, with I2 = I3 = ⋯ = In−1 and I1 = In = I2/2. Thus, the resultant of the impulse vectors of the ETMn shaper always becomes zero for all n ≥ 3.
One merit of the ETMn shaper is that, unlike the ZVDn or extra insensitive (EI) shapers, the shaping time is always one (damped) period of the time response even if n increases. The ETM4 shaper with four impulse vectors is obtained from the above conditions together with the impulse vector definitions as (t1, t2, t3, t4) = (0, Td/3, 2Td/3, Td) and (A1, A2, A3, A4) = (1, 2K^(2/3), 2K^(4/3), K²)/M with M = 1 + 2K^(2/3) + 2K^(4/3) + K². The ETM5 shaper with five impulse vectors is obtained similarly as (t1, t2, t3, t4, t5) = (0, Td/4, Td/2, 3Td/4, Td) and (A1, A2, A3, A4, A5) = (1, 2K^(1/2), 2K, 2K^(3/2), K²)/M with M = 1 + 2K^(1/2) + 2K + 2K^(3/2) + K². In the same way, the ETMn shaper with n > 5 can be obtained easily. In general, ETM shapers are less sensitive to modeling errors than ZVDn shapers in a large positive error range. Note that the ZVD shaper is an ETM3 shaper with the magnitude ratio 1 : 2 : 1. NMe shaper Moreover, impulse vectors can be applied to design input shapers with negative impulses. Consider a negative equal-magnitude (NMe) shaper, in which the magnitudes of the three impulse vectors are I1 = I2 = I3 and the angles are (θ1, θ2, θ3) = (0, π/3, 2π/3), with the second impulse negative. Then the resultant of the three impulse vectors becomes zero, and thus the residual vibration is suppressed. The impulse times of the NMe shaper are obtained as (t1, t2, t3) = (0, Td/6, Td/3), and the impulse magnitudes are obtained easily by solving the simultaneous equations A1 = −A2 e^(ζωn t2) = A3 e^(ζωn t3) and A1 + A2 + A3 = 1. The resulting NMe shaper is (t1, t2, t3) = (0, Td/6, Td/3) with (A1, A2, A3) = (1, −K^(1/3), K^(2/3))/M, where M = 1 − K^(1/3) + K^(2/3). The NMe shaper has a faster rise time than the ZVD shaper, but it is more sensitive to modeling error than the ZVD shaper. Note that the NMe shaper is the same as the UM (unity-magnitude) shaper if the system is undamped (ζ = 0). Figure (a) on the right side shows a typical block diagram of an input-shaping control system, and figure (b) shows residual vibration suppression in unit-step responses by the ZV, ZVD, ETM4 and NMe shapers. Refer to the reference for sensitivity curves of the above input shapers, which represent the robustness to modeling errors in ωn and ζ. References Dynamics (mechanics) Control theory Mechanical vibrations
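The ZV and ZVD formulas above are easy to check numerically: the residual vibration after the final impulse is proportional to the magnitude of the impulse-vector resultant. The sketch below is a minimal Python/NumPy illustration written for this summary (it is not code from the cited references); it constructs both shapers for an assumed 1 Hz, 10%-damped mode and verifies that the resultant vanishes.

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Zero Vibration shaper: impulse times and magnitudes."""
    wd = wn * np.sqrt(1.0 - zeta**2)                    # damped natural frequency
    K = np.exp(-np.pi * zeta / np.sqrt(1.0 - zeta**2))
    return np.array([0.0, np.pi / wd]), np.array([1.0, K]) / (1.0 + K)

def zvd_shaper(wn, zeta):
    """Zero Vibration and Derivative shaper."""
    wd = wn * np.sqrt(1.0 - zeta**2)
    K = np.exp(-np.pi * zeta / np.sqrt(1.0 - zeta**2))
    t = np.array([0.0, np.pi / wd, 2.0 * np.pi / wd])
    return t, np.array([1.0, 2.0 * K, K**2]) / (1.0 + K)**2

def resultant(t, A, wn, zeta):
    """Magnitude of the impulse-vector resultant; zero means no residual vibration."""
    wd = wn * np.sqrt(1.0 - zeta**2)
    return abs(np.sum(A * np.exp(zeta * wn * t) * np.exp(1j * wd * t)))

wn, zeta = 2.0 * np.pi, 0.1          # assumed example mode: 1 Hz, 10% damping
for name, (t, A) in [("ZV", zv_shaper(wn, zeta)), ("ZVD", zvd_shaper(wn, zeta))]:
    print(name, t, A, resultant(t, A, wn, zeta))   # resultant ~ 1e-16
```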
Impulse vector
Physics,Mathematics,Engineering
1,830
15,806,309
https://en.wikipedia.org/wiki/Ink%20Serialized%20Format
Ink Serialized Format (ISF) is a Microsoft format to store written ink information. The format is mainly used for mobile devices like personal digital assistants, tablet PCs and Ultra-Mobile PCs to store data entered with a stylus. An ink object is simply a sequence of strokes, where each stroke is a sequence of points, and the points are X and Y coordinates. Many newer mobile devices can also provide information such as pressure and angle. In addition, the format can be used to store custom information along with the ink data. Availability Its specification is freely available for download. Microsoft has added the ISF format to its technologies available under the Open Specification Promise, making the ISF-related patent claims available to everybody who uses or implements ISF. This allows the ISF format to be used even together with open-source software licenses like the GPLv2. See also InkML External links Integrating Ink on mobile devices – Ink article on MSDN Computer file formats Microsoft Tablet PC
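The stroke model described above maps naturally onto a simple in-memory structure. The sketch below is illustrative Python with hypothetical type names; it is not the actual binary ISF encoding, which is defined in Microsoft's downloadable specification.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InkPoint:
    x: int                          # X coordinate in ink space
    y: int                          # Y coordinate in ink space
    pressure: Optional[int] = None  # optional channel reported by some digitizers

@dataclass
class Stroke:
    points: List[InkPoint] = field(default_factory=list)

@dataclass
class InkObject:
    strokes: List[Stroke] = field(default_factory=list)
    custom: dict = field(default_factory=dict)  # ISF also allows custom application data

# One short stroke sampled at two points, with pen pressure
ink = InkObject(strokes=[Stroke([InkPoint(0, 0, 120), InkPoint(5, 3, 140)])])
print(len(ink.strokes[0].points))  # 2
```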
Ink Serialized Format
Technology
198
2,740,949
https://en.wikipedia.org/wiki/Mott%20insulator
Mott insulators are a class of materials that are expected to conduct electricity according to conventional band theories, but turn out to be insulators (particularly at low temperatures). These insulators fail to be correctly described by band theories of solids due to their strong electron–electron interactions, which are not considered in conventional band theory. A Mott transition is a transition from a metal to an insulator, driven by the strong interactions between electrons. One of the simplest models that can capture the Mott transition is the Hubbard model. The band gap in a Mott insulator exists between bands of like character, such as 3d electron bands, whereas the band gap in charge-transfer insulators exists between anion and cation states. History Although the band theory of solids had been very successful in describing various electrical properties of materials, in 1937 Jan Hendrik de Boer and Evert Johannes Willem Verwey pointed out that a variety of transition metal oxides predicted to be conductors by band theory are insulators. With an odd number of electrons per unit cell, the valence band is only partially filled, so the Fermi level lies within the band. From the band theory, this implies that such a material has to be a metal. This conclusion fails for several cases, e.g. CoO, one of the strongest insulators known. Nevill Mott and Rudolf Peierls, also in 1937, predicted that the failure of band theory could be explained by including interactions between electrons. In 1949, in particular, Mott proposed a model for NiO as an insulator, where conduction is based on the formula (Ni²⁺O²⁻)₂ → Ni³⁺O²⁻ + Ni¹⁺O²⁻. In this situation, the formation of an energy gap preventing conduction can be understood as the competition between the Coulomb potential U between 3d electrons and the transfer integral t of 3d electrons between neighboring atoms (the transfer integral is a part of the tight binding approximation). The total energy gap is then Egap = U − 2zt, where z is the number of nearest-neighbor atoms. In general, Mott insulators occur when the repulsive Coulomb potential U is large enough to create an energy gap. One of the simplest theories of Mott insulators is the 1963 Hubbard model. The crossover from a metal to a Mott insulator as U is increased can be predicted within the so-called dynamical mean field theory. Mott reviewed the subject (with a good overview) in 1968. The subject has been thoroughly reviewed in a comprehensive paper by Masatoshi Imada, Atsushi Fujimori, and Yoshinori Tokura. A recent proposal of a "Griffiths-like phase close to the Mott transition" has been reported in the literature. Mott criterion The Mott criterion describes the critical point of the metal–insulator transition. The criterion can be written as n^(−1/3) = C·aB, where n is the electron density of the material and aB the effective Bohr radius; the left-hand side is the mean spacing between electrons. The constant C, according to various estimates, is 2.0, 2.78, 4.0, or 4.2. If the criterion is satisfied (i.e. if the density of electrons is sufficiently high that the mean electron spacing falls below C·aB) the material becomes conductive (metal) and otherwise it will be an insulator. Mottness Mottism denotes the additional ingredient, aside from antiferromagnetic ordering, which is necessary to fully describe a Mott insulator. In other words, we might write: antiferromagnetic order + mottism = Mott insulator. Thus, mottism accounts for all of the properties of Mott insulators that cannot be attributed simply to antiferromagnetism.
There are a number of properties of Mott insulators, derived from both experimental and theoretical observations, which cannot be attributed to antiferromagnetic ordering and thus constitute mottism. These properties include: Spectral weight transfer on the Mott scale Vanishing of the single particle Green function along a connected surface in momentum space in the first Brillouin zone Two sign changes of the Hall coefficient as electron doping goes from n = 0 to n = 2 (band insulators have only one sign change at n = 1) The presence of a charge 2e boson (with e the charge of an electron) at low energies A pseudogap away from half-filling (n ≠ 1) Mott transition A Mott transition is a metal-insulator transition in condensed matter. Due to electric field screening the potential energy becomes much more sharply (exponentially) peaked around the equilibrium position of the atom and electrons become localized and can no longer conduct a current. It is named after physicist Nevill Francis Mott. Conceptual explanation In a semiconductor at low temperatures, each 'site' (atom or group of atoms) contains a certain number of electrons and is electrically neutral. For an electron to move away from a site, it requires a certain amount of energy, as the electron is normally pulled back toward the (now positively charged) site by Coulomb forces. If the temperature is high enough that an energy of the order of this binding energy is available per site, the Boltzmann distribution predicts that a significant fraction of electrons will have enough energy to escape their site, leaving an electron hole behind and becoming conduction electrons that conduct current. The result is that at low temperatures a material is insulating, and at high temperatures the material conducts. While the conduction in an n- (p-) type doped semiconductor sets in at high temperatures because the conduction (valence) band is partially filled with electrons (holes) with the original band structure being unchanged, the situation is different in the case of the Mott transition, where the band structure itself changes. Mott argued that the transition must be sudden, occurring when the density of free electrons N and the Bohr radius aB satisfy N^(1/3)·aB ≈ 0.25. Simply put, a Mott transition is a change in a material's behavior from insulating to metallic due to various factors. This transition is known to exist in various systems: mercury metal vapor-liquid, metal–NH3 solutions, transition metal chalcogenides and transition metal oxides. In the case of transition metal oxides, the material typically switches from being a good electrical insulator to a good electrical conductor. The insulator-metal transition can also be modified by changes in temperature, pressure or composition (doping). As observed by Nevill Francis Mott in his 1949 publication on Ni-oxide, the origin of this behavior is correlations between electrons and the close relationship this phenomenon has to magnetism. The physical origin of the Mott transition is the interplay between the Coulomb repulsion of electrons and their degree of localization (band width). Once the carrier density becomes too high (e.g. due to doping), the energy of the system can be lowered by the localization of the formerly conducting electrons (band width reduction), leading to the formation of a band gap, e.g. by pressure (i.e. a semiconductor/insulator). In a semiconductor, the doping level also affects the Mott transition.
It has been observed that higher dopant concentrations in a semiconductor create internal stresses that increase the free energy (acting as a change in pressure) of the system, thus reducing the ionization energy. The reduced barrier causes easier transfer by tunneling or by thermal emission from a donor to an adjacent donor. The effect is enhanced when pressure is applied, for the reason stated previously. When the transport of carriers overcomes a minimum activation energy, the semiconductor has undergone a Mott transition and become metallic. The Mott transition is usually first order, and involves discontinuous changes of physical properties. Theoretical studies of the Mott transition in the limit of large dimension find a first order transition. However, in low dimensions and when the lattice geometry leads to frustration of magnetic ordering, it may be only weakly first order or even continuous (i.e. second order). Weakly first order Mott transitions are seen in some quasi-two dimensional organic materials. Continuous Mott transitions have been reported in semiconductor moiré materials. A theory of a continuous Mott transition is available if the Mott insulating phase is a quantum spin liquid with an emergent Fermi surface of neutral fermions. Applications Mott insulators are of growing interest in advanced physics research, and are not yet fully understood. They have applications in thin-film magnetic heterostructures and the strongly correlated phenomena in high-temperature superconductivity, for example. This kind of insulator can become a conductor by changing some parameters, which may be composition, pressure, strain, voltage, or magnetic field. The effect is known as a Mott transition and can be used to build smaller field-effect transistors, switches and memory devices than is possible with conventional materials. See also (Mott) Notes References Correlated electrons Quantum phases Electric current Phase transitions
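To put rough numbers on the criterion, the sketch below (an illustrative Python calculation; the value C = 4.0 is just one of the estimates quoted above, and the example Bohr radius is an assumed order of magnitude for a shallow donor) computes the critical carrier density at which a doped semiconductor would cross the Mott transition.

```python
def mott_critical_density(a_b: float, c: float = 4.0) -> float:
    """Critical electron density in m^-3 from n^(-1/3) = C * a_B (a_b in metres)."""
    return (c * a_b) ** -3

n_c = mott_critical_density(3e-9)                    # assumed effective Bohr radius of 3 nm
print(f"{n_c:.2e} m^-3 = {n_c * 1e-6:.2e} cm^-3")    # ~5.8e23 m^-3, i.e. ~5.8e17 cm^-3
```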
Mott insulator
Physics,Chemistry,Materials_science
1,811
64,416,024
https://en.wikipedia.org/wiki/International%20Journal%20of%20Digital%20Earth
The International Journal of Digital Earth is an academic journal about Digital Earth published by Taylor & Francis on behalf of the International Society for Digital Earth. It focuses on concepts such as "Earth observation, geographic information systems and [geographic information] science". Its editor-in-chief is Guo Huadong; its 2018 impact factor is 3.985. References Geographic information systems Geography journals Remote sensing journals Taylor & Francis academic journals Academic journals associated with international learned and professional societies
International Journal of Digital Earth
Technology
95
24,406,965
https://en.wikipedia.org/wiki/C12H18N2
The molecular formula C12H18N2 (molar mass: 190.29 g/mol, exact mass: 190.1470 u) may refer to: Methylbenzylpiperazine (MBZP), or 1-methyl-4-benzylpiperazine 3-Methylbenzylpiperazine (3-Me-BZP)
C12H18N2
Chemistry
94
20,193,762
https://en.wikipedia.org/wiki/Mechanical%20powder%20press
A mechanical powder press is a machine press designed to compress powders, commonly used in the production of technical ceramics. Function principles of mechanical presses in the field of technical ceramics Mechanical press machines differ in how the main movement of the upper punch is produced: by cams, spindle and friction drives, eccentrics, knuckle joints, or by the round-table principle. This classification is independent of whether the die or lower-punch movement is realized by cam or eccentric systems or by other combined mechanical and hydraulic systems. The execution of auxiliary movements is likewise not decisive for the classification; these auxiliary movements can also be based on pneumatic or hydraulic principles. In comparison to hydraulic press machines, the maximum compaction forces of mechanical powder presses are limited, lying in the range of about 5000 kN and below. For the requirements of wet- and dry-pressing techniques in technical ceramics, cam, eccentric and knuckle-joint presses as well as round-table presses have proved themselves, with cam presses used especially for wet pressing of pourable materials. The compaction force of mechanical presses for technical-ceramic products is below 2500 kN, a consequence of the lower density of ceramic materials. Normally the upper-punch, lower-punch and die systems of mechanical presses do not work with multiply subdivided punches. Application advantages in the field of technical ceramics Adaptable cam profiles, which can be tailored individually to the products concerned, meet especially the requirements of wet pressing of pourable compounds, while eccentric, knuckle-joint and round-table presses relate particularly to dry pressing of compounds with good gliding characteristics. The sinusoidal movement of eccentric presses offers advantages especially for larger strokes at rates below 100 strokes per minute, and the distorted sinusoidal movement of the knuckle joint is used especially for products that need a longer time for de-airing during compaction and a longer decompression time after the bottom dead centre. For this kind of compaction technology the rate is limited to fewer than 35 strokes per minute; in contrast, the compaction speed of round-table presses for small, simply shaped products is especially high, and quantities of up to 30,000 pieces per minute can be realized. Context between ceramic compaction technology and design principles of the mechanical presses The unit of product, technology and press machine/tooling characterizes the relationship between ceramic compaction technology and the design principles of the mechanical presses. Differences between the die-withdrawal and ejection techniques of uniaxial dry and wet pressing in technical ceramics influence the achievable quality of the compacts, depending on the geometric shape of the products and on the selected type of press. Because a speed-proportional compaction relation is very difficult to realize with purely mechanical presses and would require considerable mechanical investment, combined mechanical-hydraulic function principles based on subdivided punches are gaining significance, because they better meet the requirements of high-accuracy pressing.
The design principles of mechanical powder presses in particular, explained in a dissertation by Dr. B. Froherz, offer a basic comparison of technology- and press-related advantages and disadvantages, especially for tool designers. See also Eccentric (mechanism) Cold compaction Ceramic engineering Powder metallurgy Ceramic engineering Powders
Mechanical powder press
Physics,Engineering
701
490,528
https://en.wikipedia.org/wiki/VLC%20media%20player
VLC media player (previously the VideoLAN Client and commonly known as simply VLC) is a free and open-source, portable, cross-platform media player software and streaming media server developed by the VideoLAN project. VLC is available for desktop operating systems and mobile platforms, such as Android, iOS and iPadOS. VLC is also available on digital distribution platforms such as Apple's App Store, Google Play, and Microsoft Store. VLC supports many audio and video compression methods and file formats, including DVD-Video, Video CD, and streaming protocols. It is able to stream media over computer networks and can transcode multimedia files. The default distribution of VLC includes many free decoding and encoding libraries, avoiding the need for finding/calibrating proprietary plugins. The libavcodec library from the FFmpeg project provides many of VLC's codecs, but the player mainly uses its own muxers and demuxers. It also has its own protocol implementations. It also gained distinction as the first player to support playback of encrypted DVDs on Linux and macOS by using the libdvdcss DVD decryption library; however, this library is legally controversial and is not included in many software repositories of Linux distributions as a result. It is available on iOS under the MPLv2. History The VideoLAN software originated as a French academic project in 1996. VLC used to stand for "VideoLAN Client" when VLC was a client of the VideoLAN project. Since VLC is no longer merely a client, that initialism no longer applies. It was intended to consist of a client and server to stream videos from satellite dishes across a campus network. Originally developed by students at the École Centrale Paris, it is now developed by contributors worldwide and is coordinated by VideoLAN, a non-profit organization. Rewritten from scratch in 1998, it was released under GNU General Public License on February 1, 2001, with authorization from the headmaster of the École Centrale Paris. The functionality of the server program, VideoLAN Server (VLS), has mostly been subsumed into VLC and has been deprecated. The project name has been changed to VLC media player because there is no longer a client/server infrastructure. The cone icon used in VLC is a reference to the traffic cones collected by École Centrale's Networking Students' Association. The cone icon design was changed from a hand-drawn low-resolution icon to a higher-resolution CGI-rendered version in 2005, illustrated by Richard Øiestad. In 2007 the VLC project decided, for license compatibility reasons, not to upgrade to the just-released GPLv3. After 13 years of development, version 1.0.0 of VLC media player was released on July 7, 2009. Work began on VLC for Android in 2010 and it has been available for Android devices on the Google Play store since 2011. In September 2010, a company named "Applidium" developed a VLC port for iOS under GPLv2 with the endorsement of the VLC project, which was accepted by Apple for their App Store. In January 2011, after VLC developer Rémi Denis-Courmont's complaint to Apple about the licensing conflict between VLC's GPLv2 and the App Store's policies, VLC was withdrawn from the Apple App Store by Apple. Subsequently, in October 2011 the VLC authors began to relicense the engine parts of VLC from the GPL-2.0-or-later to the LGPL-2.1-or-later to achieve better license compatibility, for instance with the Apple App Store.
In July 2013 the VLC application could be resubmitted to the iOS App Store under the MPL-2.0. Version 2.0.0 of VLC media player was released on February 18, 2012. The version for the Windows Store was released on March 13, 2014. Support for Windows RT, Windows Phone and Xbox One was added later. VLC ranks third in the sourceforge.net overall download count, and there have been more than 6 billion downloads. Version 3.0 had been in development for Windows, Linux and macOS since June 2016 and was released in February 2018. It contains many new features including Chromecast output support (except subtitles), hardware-accelerated decoding enabled by default, 4K and 8K playback, 10-bit and HDR playback, 360° video and 3D audio, audio passthrough for HD audio codecs, BD-J menu support, and local network drive browsing. In December 2017 the European Parliament approved a budget that funds a bug bounty program for VLC to improve the EU's IT infrastructure. Release history Starting with version 1.1.0, VLC release codenames refer to characters from Terry Pratchett's Discworld novels; an exception is release 2.2.1, which came out shortly after Pratchett's death on March 12, 2015, and which was codenamed Terry Pratchett in honor of the author himself. Design principles Modular design VLC, like most multimedia frameworks, has a very modular design which makes it easier to include modules/plugins for new file formats, codecs, interfaces, or streaming methods. VLC 1.0.0 has more than 380 modules. The VLC core creates its own graph of modules dynamically, depending on the situation: input protocol, input file format, input codec, video card capabilities and other parameters. In VLC, almost everything is a module, like interfaces, video and audio outputs, controls, scalers, codecs, and audio/video filters. Interfaces The default GUI is based on Be API on BeOS, Cocoa for macOS, and Qt 5 for Linux and Windows, but all give a similar standard interface. The old default GUI was based on wxWidgets on Linux and Windows. VLC supports highly customizable skins through the skins2 interface, and also supports Winamp 2 and XMMS skins. Skins are not supported in the macOS version. VLC has ncurses, remote control, and telnet console interfaces. There is also an HTTP interface, as well as interfaces for mouse gestures and keyboard hotkeys. Features Effects (desktop version) The desktop version of VLC media player has some filters that can distort, rotate, split, deinterlace, and mirror videos as well as create display walls or add a logo overlay during playback. It can also output video as ASCII art. An interactive zoom feature allows magnifying the video during playback. Still images can be extracted from video at original resolution, and individual frames can be stepped through, although only in the forward direction. Playback can be gamified by splitting the picture inside the viewport into draggable puzzle pieces, where the row and column count can be set as desired. For audio playback, VLC includes an equalizer and other filters that help customize sound quality. Formats Because VLC is a packet-based media player it plays almost all video content. Even some damaged, incomplete, or unfinished files can be played, such as those still downloading via a peer-to-peer (P2P) network. It also plays m2t MPEG transport stream (.TS) files while they are still being digitized from an HDV camera via a FireWire cable, making it possible to monitor the video as it is being recorded.
The player can also use libcdio to access .iso files so that users can play files on a disk image, even if the user's operating system cannot work directly with .iso images. VLC supports all audio and video formats supported by libavcodec and libavformat. This means that VLC can play back H.264 or MPEG-4 Part 2 video as well as support FLV or MXF file formats "out of the box" using FFmpeg's libraries. Alternatively, VLC has modules for codecs that are not based on FFmpeg's libraries. VLC is one of the free software DVD players that ignore DVD region coding on RPC-1 firmware drives, making it a region-free player. It does not do the same on RPC-2 firmware drives, as in these cases the region coding is enforced by the drive itself; it can, however, still brute-force the CSS encryption to play a foreign-region DVD on an RPC-2 drive. VLC media player can play high-definition recordings of D-VHS tapes duplicated to a computer. This offers another way to archive D-VHS tapes that carry the DRM "copy freely" tag. Using a FireWire connection from cable boxes to computers, VLC can stream live, unencrypted content to a monitor or HDTV. VLC media player can display the playing video as the desktop wallpaper, like Windows DreamScene, by using DirectX; this feature is only available on Windows operating systems. VLC media player can record the desktop and save the stream as a file, allowing the user to create screencasts. On Microsoft Windows, VLC also supports the Direct Media Object (DMO) framework and can thus make use of some third-party DLLs (dynamic-link libraries). On most platforms, VLC can tune into and view DVB-C, DVB-T, and DVB-S channels. On macOS the separate EyeTV plugin is required; on Windows, the card's BDA drivers are required. VLC can be installed or run directly from a USB flash drive or other external drive. VLC can be extended through scripting; it uses the Lua scripting language. VLC can play videos in the AVCHD format, a highly compressed format used in recent HD camcorders. VLC can generate a number of music visualization displays. The program is able to convert media files into various supported formats. Both desktop and mobile releases are equipped with an audio equalizer. Christmas logo A red Santa hat appears on top of VLC's traffic-cone logo during the Christmas season. Keyboard shortcuts VLC has single-key shortcuts that do not require the Ctrl or Alt keys. For example, pressing the F and G keys while a video file is playing shifts the file's audio/video synchronization by 50 milliseconds per keypress. This is useful for fixing sound that runs ahead of or lags behind the video. Operating system compatibility VLC media player is cross-platform, with versions for Windows, macOS, Linux, iOS, Android, tvOS, ChromeOS, Windows Phone, various BSD-based systems, Solaris, BeOS, OS/2, and Syllable. However, forward and backward compatibility between versions of VLC media player and different versions of OSes is not maintained over more than a few generations. 64-bit builds are available for 64-bit Windows, starting with version 2.0.1. Windows 8 and 10 support The VLC port for Windows 8 and Windows 10 was backed by a crowdfunding campaign on Kickstarter to fund a new GUI, based on Microsoft's Metro design language, that runs on the Windows Runtime. All the existing features, including video filters, subtitle support, and an equalizer, are present in the Windows 8 version. A beta version of VLC for Windows 8 was released to the Microsoft Store on March 13, 2014. 
A universal app was created for Windows 8, 8.1, 10, Windows Phone 8, 8.1 and Windows 10 Mobile. Android support In May 2012, the VLC team stated that a version of VLC for Android was being developed. The stable release version 1.0 was made available on Google Play on December 8, 2014. Use of VLC with other programs Bindings Several APIs can connect to VLC and use its functionality: libVLC API – the VLC core, for C and C++; VLCKit – an Objective-C framework for macOS; LibVLCSharp – cross-platform .NET bindings to libVLC (C#/F#/VB); JavaScript API – the evolution of the ActiveX API and Firefox integration; D-Bus controls; Go bindings; Python controls; Java API; DirectShow filters; Delphi/Pascal API: PasLibVlc by Robert Jędrzejczyk; Free Pascal bindings and an OOP wrapper component, via the libvlc.pp and vlc.pp units, which come standard with the Free Pascal Compiler as of November 6, 2012. The Phonon multimedia API for Qt and KDE applications can optionally use VLC as a backend. A short playback sketch using the Python bindings appears after the list of input formats below. Applications that use libVLC VLC can handle some incomplete files and in some cases can be used to preview files being downloaded. Several programs make use of this, including eMule and KCeasy. The free/open-source Internet television application Miro also uses VLC code. HandBrake, an open-source video encoder, used to load libdvdcss from VLC media player. Easy Subtitles Synchronizer, a freeware subtitle editing program for Windows, uses VLC to preview the video with the edited subtitles. Format support Input formats VLC can read many formats, depending on the operating system it is running on, including: Container formats: 3GP, ASF, AVI, DVR-MS, FLV, Matroska (MKV), MIDI, QuickTime File Format, MP4, Ogg, OGM, WAV, MPEG-2 (ES, PS, TS, PVA, MP3), AIFF, Raw audio, Raw DV, MXF, VOB, RM, Blu-ray, DVD-Video, VCD, SVCD, CD-DA, DVB, HEIF, AVIF Audio coding formats: AAC, AC3, ALAC, AMR, DTS, DV Audio, XM, FLAC, IT, MACE, MOD, Monkey's Audio, MP3, Opus, PLS, QCP, QDM2/QDMC, RealAudio, Speex, Scream Tracker 3/S3M, TTA, Vorbis, WavPack, WMA (WMA 1/2, WMA 3 partially). Capture devices: Video4Linux (on Linux), DirectShow (on Windows), Desktop (screencast), Digital TV (DVB-C, DVB-S, DVB-T, DVB-S2, DVB-T2, ATSC, Clear QAM) Network protocols: FTP, HTTP, MMS, RSS/Atom, RTMP, RTP (unicast or multicast), RTSP, UDP, Sat-IP, Smooth Streaming Network streaming formats: Apple HLS, Flash RTMP, MPEG-DASH, MPEG Transport Stream, RTP/RTSP ISMA/3GPP PSS, Windows Media MMS Subtitles: Advanced SubStation Alpha, Closed Captions, DVB, DVD-Video, MPEG-4 Timed Text, MPL2, OGM, SubStation Alpha, SubRip, SVCD, Teletext, Text file, VobSub, WebVTT, TTML Video coding formats: Cinepak, Dirac, DV, H.263, H.264/MPEG-4 AVC, H.265/MPEG HEVC, AV1, HuffYUV, Indeo 3, MJPEG, MPEG-1, MPEG-2, MPEG-4 Part 2, RealVideo 3&4, Sorenson, Theora, VC-1, VP5, VP6, VP8, VP9, DNxHD, ProRes and some WMV. Digital Camcorder formats: MOD and TOD via USB. 
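As a brief illustration of the bindings listed above, the third-party python-vlc package (one of the "Python controls" wrappers around libVLC) can drive playback in a few lines. This is a minimal sketch rather than a definitive usage guide; the file name is a placeholder, not something referenced by this article:

import time
import vlc  # the python-vlc package, a ctypes wrapper around libVLC

# MediaPlayer accepts a file path or network URL ("example.mp4" is made up).
player = vlc.MediaPlayer("example.mp4")
player.play()
time.sleep(1)  # give libVLC a moment to open and parse the media
print("duration (ms):", player.get_length())

The same libVLC engine is exposed to the other languages above through equivalent calls.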
Output formats VLC can transcode or stream audio and video into several formats depending on the operating system, including: Container formats: ASF, AVI, FLAC, FLV, Fraps, Matroska, MP4, MPJPEG, MPEG-2 (ES, MP3), Ogg, PS, PVA, QuickTime File Format, TS, WAV, WebM Audio coding formats: AAC, AC-3, DV Audio, FLAC, MP3, Speex, Vorbis Streaming protocols: HTTP, MMS, RTSP, RTP, UDP Video coding formats: Dirac, DV, H.263, H.264/MPEG-4 AVC, H.265/MPEG-H HEVC, MJPEG, MPEG-1, MPEG-2, MPEG-4 Part 2, Theora, VP5, VP6, VP8, VP9 Legality The VLC media player software installers for the macOS platform and the Windows platform include the libdvdcss DVD decryption library, even though this library may be legally restricted in certain jurisdictions. India In May 2022, it was reported by MediaNama that VLC was banned in India and its website was inaccessible from India under the provisions of the Information Technology Act, 2000. Neither the developers nor the Indian government offered any explanation for the ban, according to India Today. The official VideoLAN Twitter account stated in August that the website had been blocked in India since 13 February 2022. A report by Hindustan Times indicated that the ban could be due to links with China. India had in 2020 banned over 200 Chinese apps following the 2020–2022 China–India skirmishes. Another Hindustan Times report from April, quoting Symantec, said that Chinese hackers were relying on VLC to launch malware they had previously installed on Windows machines. The technique they used is called DLL side-loading, in which an external library that a legitimate program loads at runtime is substituted with a modified version containing the malware. VideoLAN president and lead developer Jean-Baptiste Kempf said that the block was most likely a result of a misunderstanding of the Chinese security issue, although the Indian government did not provide a reason for the block. In October 2022, VideoLAN, with assistance from the Indian digital rights organization Internet Freedom Foundation, sent a legal notice to the Indian government asking for an explanation for the block order, following which the Ministry of Electronics and Information Technology removed the ban in November 2022. United States The VLC media player software is able to read audio and video data from DVDs that incorporate Content Scramble System (CSS) encryption, even though the VLC media player software lacks a CSS decryption license. The unauthorized decryption of CSS-encrypted DVD content or unauthorized distribution of CSS decryption tools may violate the US Digital Millennium Copyright Act. Decryption of CSS-encrypted DVD content has been temporarily authorized for certain purposes (such as documentary filmmaking that uses short portions of DVD content for criticism or commentary) under the Digital Millennium Copyright Act anticircumvention exemptions that were issued by the US Copyright Office in 2010. However, these exemptions do not change the DMCA's ban on the distribution of CSS decryption tools, including those distributed with VLC. 
See also Comparison of video player software List of codecs List of music software Animated ASCII art - VLC can output ASCII animation through the Libcaca module Explanatory notes References External links 2001 software Amiga media players Applications using D-Bus Audio software with JACK support BeOS software BSD software Cross-platform free software Free and open-source Android software Free media players Free video software Linux DVD players Linux media players Lua (programming language)-scriptable software Multimedia frameworks MacOS media players Portable software Software DVD players Software that uses FFmpeg Free software that uses ncurses Software that was ported from wxWidgets to Qt Software using the GNU Lesser General Public License Solaris media players Spoken articles Streaming media systems Streaming software Video software that uses Qt Webcams Windows media players Universal Windows Platform apps Xbox One software Free screencasting software Software Blu-ray players TvOS software
VLC media player
Technology
4,229
416,612
https://en.wikipedia.org/wiki/Cross-validation%20%28statistics%29
Cross-validation, sometimes called rotation estimation or out-of-sample testing, is any of various similar model validation techniques for assessing how the results of a statistical analysis will generalize to an independent data set. Cross-validation includes resampling and sample splitting methods that use different portions of the data to test and train a model on different iterations. It is often used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. It can also be used to assess the quality of a fitted model and the stability of its parameters. In a prediction problem, a model is usually given a dataset of known data on which training is run (the training dataset), and a dataset of unknown data (or first-seen data) against which the model is tested (called the validation dataset or testing set). The goal of cross-validation is to test the model's ability to predict new data that was not used in estimating it, in order to flag problems like overfitting or selection bias and to give insight into how the model will generalize to an independent dataset (i.e., an unknown dataset, for instance from a real problem). One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or testing set). To reduce variability, in most methods multiple rounds of cross-validation are performed using different partitions, and the validation results are combined (e.g. averaged) over the rounds to give an estimate of the model's predictive performance. In summary, cross-validation combines (averages) measures of fitness in prediction to derive a more accurate estimate of model prediction performance. Motivation Assume a model with one or more unknown parameters, and a data set to which the model can be fit (the training data set). The fitting process optimizes the model parameters to make the model fit the training data as well as possible. If an independent sample of validation data is taken from the same population as the training data, it will generally turn out that the model does not fit the validation data as well as it fits the training data. The size of this difference is likely to be large especially when the size of the training data set is small, or when the number of parameters in the model is large. Cross-validation is a way to estimate the size of this effect. Example: linear regression In linear regression, there exist real response values y1, ..., yn, and n p-dimensional vector covariates x1, ..., xn. The components of the vector xi are denoted xi1, ..., xip. If least squares is used to fit a function in the form of a hyperplane ŷ = a + βTx to the data (xi, yi) 1 ≤ i ≤ n, then the fit can be assessed using the mean squared error (MSE). The MSE for given estimated parameter values a and β on the training set (xi, yi) 1 ≤ i ≤ n is defined as: MSE = (1/n) Σ (yi − a − βTxi)², where the sum runs over i = 1, ..., n. If the model is correctly specified, it can be shown under mild assumptions that the expected value of the MSE for the training set is (n − p − 1)/(n + p + 1) < 1 times the expected value of the MSE for the validation set (the expected value is taken over the distribution of training sets). Thus, a fitted model and computed MSE on the training set will result in an optimistically biased assessment of how well the model will fit an independent data set. 
This biased estimate is called the in-sample estimate of the fit, whereas the cross-validation estimate is an out-of-sample estimate. Since in linear regression it is possible to directly compute the factor (n − p − 1)/(n + p + 1) by which the training MSE underestimates the validation MSE under the assumption that the model specification is valid, cross-validation can be used for checking whether the model has been overfitted, in which case the MSE in the validation set will substantially exceed its anticipated value. (Cross-validation in the context of linear regression is also useful in that it can be used to select an optimally regularized cost function.) General case In most other regression procedures (e.g. logistic regression), there is no simple formula to compute the expected out-of-sample fit. Cross-validation is, thus, a generally applicable way to predict the performance of a model on unavailable data using numerical computation in place of theoretical analysis. Types Two types of cross-validation can be distinguished: exhaustive and non-exhaustive cross-validation. Exhaustive cross-validation Exhaustive cross-validation methods are cross-validation methods which learn and test on all possible ways to divide the original sample into a training and a validation set. Leave-p-out cross-validation Leave-p-out cross-validation (LpO CV) involves using p observations as the validation set and the remaining observations as the training set. This is repeated on all ways to cut the original sample into a validation set of p observations and a training set. LpO cross-validation requires training and validating the model C(n, p) times, where n is the number of observations in the original sample and C(n, p) is the binomial coefficient. For p > 1 and for even moderately large n, LpO CV can become computationally infeasible. For example, with n = 100 and p = 30, C(100, 30) ≈ 3 × 10^25. A variant of LpO cross-validation with p = 2 known as leave-pair-out cross-validation has been recommended as a nearly unbiased method for estimating the area under the ROC curve of binary classifiers. Leave-one-out cross-validation Leave-one-out cross-validation (LOOCV) is a particular case of leave-p-out cross-validation with p = 1. The process looks similar to jackknife; however, with cross-validation one computes a statistic on the left-out sample(s), while with jackknifing one computes a statistic from the kept samples only. LOO cross-validation requires less computation time than LpO cross-validation because there are only n passes rather than C(n, p). However, n passes may still require quite a large computation time, in which case other approaches such as k-fold cross-validation may be more appropriate. Pseudo-code algorithm:
Input:
 x, {vector of length N with x-values of incoming points}
 y, {vector of length N with y-values of the expected result}
 interpolate(x_in, y_in, x_out), {returns the estimation for point x_out after the model is trained with x_in–y_in pairs}
Output:
 err, {estimate for the prediction error}
Steps:
 err ← 0
 for i ← 1, ..., N do
  // define the cross-validation subsets
  x_in ← (x[1], ..., x[i − 1], x[i + 1], ..., x[N])
  y_in ← (y[1], ..., y[i − 1], y[i + 1], ..., y[N])
  x_out ← x[i]
  y_out ← interpolate(x_in, y_in, x_out)
  err ← err + (y[i] − y_out)^2
 end for
 err ← err/N
A runnable Python version of this pseudo-code is given below. Non-exhaustive cross-validation Non-exhaustive cross-validation methods do not compute all ways of splitting the original sample. These methods are approximations of leave-p-out cross-validation. 
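A direct Python rendering of the leave-one-out pseudo-code above might look like the following sketch; the caller-supplied fit_predict function stands in for the article's generic interpolate routine, and all names here are illustrative rather than taken from any particular library.

import numpy as np

def loocv_error(x, y, fit_predict):
    # Leave-one-out estimate of the mean squared prediction error.
    # fit_predict(x_in, y_in, x_out) must train on (x_in, y_in) and
    # return a prediction at x_out, mirroring interpolate() above.
    n = len(x)
    err = 0.0
    for i in range(n):
        x_in = np.delete(x, i, axis=0)  # hold out observation i
        y_in = np.delete(y, i)
        y_out = fit_predict(x_in, y_in, x[i])
        err += (y[i] - y_out) ** 2
    return err / n

For instance, fit_predict could wrap numpy.polyfit and numpy.polyval to cross-validate a polynomial regression.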
k-fold cross-validation In k-fold cross-validation, the original sample is randomly partitioned into k equal-sized subsamples, often referred to as "folds". Of the k subsamples, a single subsample is retained as the validation data for testing the model, and the remaining k − 1 subsamples are used as training data. The cross-validation process is then repeated k times, with each of the k subsamples used exactly once as the validation data. The k results can then be averaged to produce a single estimation. The advantage of this method over repeated random sub-sampling (see below) is that all observations are used for both training and validation, and each observation is used for validation exactly once. 10-fold cross-validation is commonly used, but in general k remains an unfixed parameter. For example, setting k = 2 results in 2-fold cross-validation. In 2-fold cross-validation, we randomly shuffle the dataset into two sets d0 and d1, so that both sets are of equal size (this is usually implemented by shuffling the data array and then splitting it in two). We then train on d0 and validate on d1, followed by training on d1 and validating on d0. When k = n (the number of observations), k-fold cross-validation is equivalent to leave-one-out cross-validation. In stratified k-fold cross-validation, the partitions are selected so that the mean response value is approximately equal in all the partitions. In the case of binary classification, this means that each partition contains roughly the same proportions of the two types of class labels. In repeated cross-validation the data is randomly split into k partitions several times. The performance of the model can thereby be averaged over several runs, but this is rarely desirable in practice. When many different statistical or machine learning models are being considered, greedy k-fold cross-validation can be used to quickly identify the most promising candidate models. Holdout method In the holdout method, we randomly assign data points to two sets d0 and d1, usually called the training set and the test set, respectively. The size of each of the sets is arbitrary, although typically the test set is smaller than the training set. We then train (build a model) on d0 and test (evaluate its performance) on d1. In typical cross-validation, results of multiple runs of model-testing are averaged together; in contrast, the holdout method, in isolation, involves a single run. It should be used with caution because without such averaging of multiple runs, one may achieve highly misleading results. One's indicator of predictive accuracy (F*) will tend to be unstable since it will not be smoothed out by multiple iterations (see below). Similarly, indicators of the specific role played by various predictor variables (e.g., values of regression coefficients) will tend to be unstable. While the holdout method can be framed as "the simplest kind of cross-validation", many sources instead classify holdout as a type of simple validation, rather than a simple or degenerate form of cross-validation. Repeated random sub-sampling validation This method, also known as Monte Carlo cross-validation, creates multiple random splits of the dataset into training and validation data. For each such split, the model is fit to the training data, and predictive accuracy is assessed using the validation data. The results are then averaged over the splits. 
The advantage of this method (over k-fold cross-validation) is that the proportion of the training/validation split is not dependent on the number of iterations (i.e., the number of partitions). The disadvantage of this method is that some observations may never be selected in the validation subsample, whereas others may be selected more than once. In other words, validation subsets may overlap. This method also exhibits Monte Carlo variation, meaning that the results will vary if the analysis is repeated with different random splits. As the number of random splits approaches infinity, the result of repeated random sub-sampling validation tends towards that of leave-p-out cross-validation. In a stratified variant of this approach, the random samples are generated in such a way that the mean response value (i.e. the dependent variable in the regression) is equal in the training and testing sets. This is particularly useful if the responses are dichotomous with an unbalanced representation of the two response values in the data. A method that applies repeated random sub-sampling is RANSAC. Nested cross-validation When cross-validation is used simultaneously for selection of the best set of hyperparameters and for error estimation (and assessment of generalization capacity), a nested cross-validation is required. Many variants exist; at least two can be distinguished: k*l-fold cross-validation This is a truly nested variant which contains an outer loop of k sets and an inner loop of l sets. The total data set is split into k sets. One by one, a set is selected as the (outer) test set and the k - 1 other sets are combined into the corresponding outer training set. This is repeated for each of the k sets. Each outer training set is further sub-divided into l sets. One by one, a set is selected as the inner test (validation) set and the l - 1 other sets are combined into the corresponding inner training set. This is repeated for each of the l sets. The inner training sets are used to fit model parameters, while the outer test set is used as a validation set to provide an unbiased evaluation of the model fit. Typically, this is repeated for many different hyperparameters (or even different model types) and the validation set is used to determine the best hyperparameter set (and model type) for this inner training set. After this, a new model is fit on the entire outer training set, using the best set of hyperparameters from the inner cross-validation. The performance of this model is then evaluated using the outer test set. k-fold cross-validation with validation and test set This is a type of k*l-fold cross-validation when l = k - 1. A single k-fold cross-validation is used with both a validation and test set. The total data set is split into k sets. One by one, a set is selected as the test set. Then, one by one, one of the remaining sets is used as a validation set and the other k - 2 sets are used as training sets until all possible combinations have been evaluated. Similar to k*l-fold cross-validation, the training set is used for model fitting and the validation set is used for model evaluation for each of the hyperparameter sets. Finally, for the selected parameter set, the test set is used to evaluate the model with the best parameter set. Here, two variants are possible: either evaluating the model that was trained on the training set or evaluating a new model that was fit on the combination of the training and the validation set. 
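The fold-based procedures described above are implemented in common machine-learning libraries. The following sketch uses scikit-learn (an assumption for illustration; the article itself prescribes no software): plain k-fold scoring first, then a nested (k*l-fold) loop in which an inner grid search selects a hyperparameter before the outer loop estimates generalization performance.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Plain k-fold: each of the k folds serves exactly once as the validation set.
outer = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=outer)
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

# Nested (k*l-fold) cross-validation: the inner loop picks the hyperparameter
# C on each outer training set; the outer loop then scores the whole
# selection-plus-fit procedure on data it never saw.
inner = KFold(n_splits=4, shuffle=True, random_state=1)
search = GridSearchCV(SVC(kernel="rbf"), {"C": [0.1, 1, 10]}, cv=inner)
nested_scores = cross_val_score(search, X, y, cv=outer)
print("nested CV accuracy: %.3f" % nested_scores.mean())

The nested estimate is the one to report, since the plain inner-loop score of the selected model is optimistically biased in exactly the sense discussed above.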
Measures of fit The goal of cross-validation is to estimate the expected level of fit of a model to a data set that is independent of the data that were used to train the model. It can be used to estimate any quantitative measure of fit that is appropriate for the data and model. For example, for binary classification problems, each case in the validation set is either predicted correctly or incorrectly. In this situation the misclassification error rate can be used to summarize the fit, although other measures derived from information (e.g., counts, frequency) contained within a contingency table or confusion matrix could also be used. When the value being predicted is continuously distributed, the mean squared error, root mean squared error or median absolute deviation could be used to summarize the errors. Using prior information When users apply cross-validation to select a good configuration λ, then they might want to balance the cross-validated choice with their own estimate of the configuration. In this way, they can attempt to counter the volatility of cross-validation when the sample size is small and include relevant information from previous research. In a forecasting combination exercise, for instance, cross-validation can be applied to estimate the weights that are assigned to each forecast. Since a simple equal-weighted forecast is difficult to beat, a penalty can be added for deviating from equal weights. Or, if cross-validation is applied to assign individual weights to observations, then one can penalize deviations from equal weights to avoid wasting potentially relevant information. Hoornweg (2018) shows how a tuning parameter γ can be defined so that a user can intuitively balance between the accuracy of cross-validation and the simplicity of sticking to a reference parameter λR that is defined by the user. If λi denotes the i-th candidate configuration that might be selected, then the loss function that is to be minimized can be defined as L(λi) = (1 − γ) · RelativeAccuracy(λi) + γ · RelativeSimplicity(λi). Relative accuracy can be quantified as MSE(λi)/MSE(λR), so that the mean squared error of a candidate λi is made relative to that of the user-specified λR. The relative simplicity term measures the amount that λi deviates from λR relative to the maximum amount of deviation from λR. Accordingly, relative simplicity can be specified as (λi − λR)² / (λmax − λR)², where λmax corresponds to the λ value with the highest permissible deviation from λR. With γ ∈ [0, 1], the user determines how high the influence of the reference parameter is relative to cross-validation. One can add relative simplicity terms for multiple configurations by including one such weighted simplicity term per configuration in the loss function. Hoornweg (2018) shows that a loss function with such an accuracy-simplicity tradeoff can also be used to intuitively define shrinkage estimators like the (adaptive) lasso and Bayesian / ridge regression. Statistical properties Suppose we choose a measure of fit F, and use cross-validation to produce an estimate F* of the expected fit EF of a model to an independent data set drawn from the same population as the training data. If we imagine sampling multiple independent training sets following the same distribution, the resulting values for F* will vary. The statistical properties of F* result from this variation. The variance of F* can be large. For this reason, if two statistical procedures are compared based on the results of cross-validation, the procedure with the better estimated performance may not actually be the better of the two procedures (i.e. it may not have the better value of EF). 
Some progress has been made on constructing confidence intervals around cross-validation estimates, but this is considered a difficult problem. Computational issues Most forms of cross-validation are straightforward to implement as long as an implementation of the prediction method being studied is available. In particular, the prediction method can be a "black box" – there is no need to have access to the internals of its implementation. If the prediction method is expensive to train, cross-validation can be very slow since the training must be carried out repeatedly. In some cases such as least squares and kernel regression, cross-validation can be sped up significantly by pre-computing certain values that are needed repeatedly in the training, or by using fast "updating rules" such as the Sherman–Morrison formula. However, one must be careful to preserve the "total blinding" of the validation set from the training procedure, otherwise bias may result. An extreme example of accelerating cross-validation occurs in linear regression, where the results of cross-validation have a closed-form expression known as the prediction residual error sum of squares (PRESS). Limitations and misuse Cross-validation only yields meaningful results if the validation set and training set are drawn from the same population and only if human biases are controlled. In many applications of predictive modeling, the structure of the system being studied evolves over time (i.e. it is "non-stationary"). Both of these can introduce systematic differences between the training and validation sets. For example, if a model for prediction of trend changes in financial quotations is trained on data for a certain five-year period, it is unrealistic to treat the subsequent five-year period as a draw from the same population. As another example, suppose a model is developed to predict an individual's risk for being diagnosed with a particular disease within the next year. If the model is trained using data from a study involving only a specific population group (e.g. young people or males), but is then applied to the general population, the cross-validation results from the training set could differ greatly from the actual predictive performance. In many applications, models also may be incorrectly specified and vary as a function of modeler biases and/or arbitrary choices. When this occurs, there may be an illusion that the system changes in external samples, whereas the reason is that the model has missed a critical predictor and/or included a confounded predictor. There is new evidence that cross-validation by itself is not very predictive of external validity, whereas a form of experimental validation known as swap sampling, which does control for human bias, can be much more predictive of external validity. As defined by the large MAQC-II study across 30,000 models, swap sampling incorporates cross-validation in the sense that predictions are tested across independent training and validation samples. Yet, models are also developed across these independent samples and by modelers who are blinded to one another. When there is a mismatch between the models developed across these swapped training and validation samples, as happens quite frequently, MAQC-II shows that this is much more predictive of poor external validity than traditional cross-validation. The reason for the success of swapped sampling is a built-in control for human biases in model building. 
In addition to placing too much faith in predictions that may vary across modelers and lead to poor external validity due to these confounding modeler effects, these are some other ways that cross-validation can be misused: By performing an initial analysis to identify the most informative features using the entire data set – if feature selection or model tuning is required by the modeling procedure, this must be repeated on every training set. Otherwise, predictions will certainly be upwardly biased. If cross-validation is used to decide which features to use, an inner cross-validation to carry out the feature selection on every training set must be performed. By performing mean-centering, rescaling, dimensionality reduction, outlier removal or any other data-dependent preprocessing using the entire data set. While very common in practice, this has been shown to introduce biases into the cross-validation estimates. By allowing some of the training data to also be included in the test set – this can happen due to "twinning" in the data set, whereby some exactly identical or nearly identical samples are present in the data set; see pseudoreplication. To some extent twinning always takes place even in perfectly independent training and validation samples. This is because some of the training sample observations will have nearly identical values of predictors as validation sample observations. And some of these will correlate with a target at better-than-chance levels in the same direction in both training and validation when they are actually driven by confounded predictors with poor external validity. If such a cross-validated model is selected from a k-fold set, human confirmation bias will be at work and determine that such a model has been validated. This is why traditional cross-validation needs to be supplemented with controls for human bias and confounded model specification like swap sampling and prospective studies. Cross-validation for time-series models Due to temporal correlations, cross-validation with random splits might be problematic for time-series models (if we are more interested in evaluating extrapolation, rather than interpolation). A more appropriate approach might be to use rolling cross-validation. However, if performance is described by a single summary statistic, it is possible that the approach described by Politis and Romano as a stationary bootstrap will work. The statistic of the bootstrap needs to accept an interval of the time series and return the summary statistic on it. The call to the stationary bootstrap needs to specify an appropriate mean interval length. Applications Cross-validation can be used to compare the performances of different predictive modeling procedures. For example, suppose we are interested in optical character recognition, and we are considering using either a support vector machine (SVM) or k-nearest neighbors (KNN) to predict the true character from an image of a handwritten character. Using cross-validation, we can obtain empirical estimates comparing these two methods in terms of their respective fractions of misclassified characters. In contrast, the in-sample estimate will not represent the quantity of interest (i.e. the generalization error). Cross-validation can also be used in variable selection. Suppose we are using the expression levels of 20 proteins to predict whether a cancer patient will respond to a drug. A practical goal would be to determine which subset of the 20 features should be used to produce the best predictive model. 
For most modeling procedures, if we compare feature subsets using the in-sample error rates, the best performance will occur when all 20 features are used. However, under cross-validation, the model with the best fit will generally include only a subset of the features that are deemed truly informative. A recent development in medical statistics is its use in meta-analysis. It forms the basis of the validation statistic Vn, which is used to test the statistical validity of meta-analysis summary estimates. It has also been used in a more conventional sense in meta-analysis to estimate the likely prediction error of meta-analysis results. See also Boosting (machine learning) Bootstrap aggregating (bagging) Out-of-bag error Bootstrapping (statistics) Leakage (machine learning) Model selection Stability (learning theory) Validity (statistics) Notes and references Further reading Model selection Regression variable selection Machine learning
Cross-validation (statistics)
Engineering
5,319
48,512,364
https://en.wikipedia.org/wiki/Pople%20diagram
A Pople diagram or Pople's diagram is a diagram which describes the relationship between various calculation methods in computational chemistry. It was initially introduced in January 1965 by Sir John Pople during the Symposium on Atomic and Molecular Quantum Theory in Florida. The Pople diagram can be either 2-dimensional or 3-dimensional, with the axes representing ab initio methods, basis sets and treatment of relativity. The diagram attempts to balance calculations by giving all aspects of a computation equal weight. History John Pople first introduced the Pople diagram during the Symposium on Atomic and Molecular Quantum Theory held on Sanibel Island, Florida, in January 1965. He called it a "hyperbola of quantum chemistry", which illustrates the inverse relationship between the sophistication of a calculational method and the number of electrons in a molecule that can be studied by that method. Alternative (reversed) arrangements of the vertical axis, or an interchange of the two axes, are also possible. Three-Dimensional Pople Diagrams The 2-dimensional Pople diagram describes the convergence of the quantum-mechanical nonrelativistic electronic energy with the size of the basis set and the level of electron correlation included in the wavefunction. In order to reproduce accurate experimental thermochemical properties, secondary energetic contributions have to be considered. The third dimension of the Pople diagram consists of such energetic contributions. These contributions may include: spin–orbit interaction, scalar relativistic effects, zero-point vibrational energy, and deviations from the Born–Oppenheimer approximation. The three-dimensional Pople diagram (also known as the Csaszar cube) describes the energy contributions involved in quantum chemistry composite methods. See also Electronic correlation References External links Introduction to Computational Chemistry Introduction to Quantum and Computational Chemistry Quantum chemistry Computational chemistry Theoretical chemistry Molecular modelling Electronic structure methods
Pople diagram
Physics,Chemistry
372
1,348,709
https://en.wikipedia.org/wiki/Cetology
Cetology (from Greek κῆτος, kētos, "whale"; and -λογία, -logia) or whalelore (also known as whaleology) is the branch of marine mammal science that studies the approximately eighty species of whales, dolphins, and porpoises in the scientific infraorder Cetacea. Cetologists, or those who practice cetology, seek to understand and explain cetacean evolution, distribution, morphology, behavior, community dynamics, and other topics. History Observations about Cetacea have been recorded since at least classical times. Ancient Greek fishermen created an artificial notch on the dorsal fin of dolphins entangled in nets so that they could tell them apart years later. Approximately 2,300 years ago, Aristotle carefully took notes on cetaceans while traveling on boats with fishermen in the Aegean Sea. In his book Historia animalium (History of Animals), Aristotle was careful enough to distinguish between the baleen whales and toothed whales, a taxonomical separation still used today. He also described the sperm whale and the common dolphin, stating that they can live for at least 25 or 30 years. His achievement was remarkable for its time, because even today it is very difficult to estimate the life-span of advanced marine animals. After Aristotle's death, much of the knowledge he had gained about cetaceans was lost, only to be re-discovered during the Renaissance. Many of the medieval texts on cetaceans come from Scandinavia and Iceland; most appeared around the mid-13th century. One of the better known is the Speculum Regale. This text describes various species that lived around the island of Iceland. It mentions orcs that had dog-like teeth and would demonstrate the same kind of aggression towards other cetaceans as wild dogs would to other terrestrial animals. The text even illustrated the hunting technique of orcs, which are now called orcas. The Speculum Regale describes other cetaceans, including the sperm whale and narwhal. Many times they were seen as terrible monsters, such as killers of men and destroyers of ships. They even bore odd names such as "pig whale", "horse whale", and "red whale". But not all creatures described were said to be fierce. Some were seen to be good, such as whales that drove shoals of herring towards the shore, which was seen as very helpful to fishermen. Many of the early studies were based on dead specimens and myth. The little information that was gathered was usually about length and rough outer body anatomy. Because these animals live in water their entire lives, early scientists did not have the technology to study them further. It was not until the 16th century that things would begin to change, when cetaceans would be shown to be mammals rather than fish. Aristotle had argued they were mammals, but Pliny the Elder stated that they were fish, and his view was followed by many naturalists. However, Pierre Belon (1517–1575) and G. Rondelet (1507–1566) persisted in believing they were mammals, arguing that the animals had lungs and a uterus, just like mammals. Not until 1758, when Swedish botanist Carl Linnaeus (1707–1778) published the tenth edition of Systema Naturae, were they formally classified as mammals. Only decades later, French zoologist and paleontologist Baron Georges Cuvier (1769–1832) described the animals as mammals without any hind legs. Skeletons were assembled and displayed in the first natural history museums, and closer examination and comparison with the fossils of other extinct animals led zoologists to conclude that cetaceans came from a family of ancient land mammals. 
Between the 16th and 20th centuries, much of our information on cetaceans came from whalers. Whalers were the most knowledgeable about the animals, but their information concerned mainly migration routes and outer anatomy, with only a little on behavior. During the 1960s, people began studying the animals intensively, often in dedicated research institutes. The Tethys Institute of Milan, founded in 1986, compiled an extensive cetology database of the Mediterranean. This interest came both from concern about wild populations and from the capture of larger animals such as the orca, together with the growing popularity of dolphin shows in marine parks. Studying cetaceans Studying cetaceans presents numerous challenges. Cetaceans only spend 10% of their time on the surface, and all they do at the surface is breathe. Very little behavior can be seen at the surface. It is also generally impossible to find signs that an animal has been in an area, as cetaceans do not leave tracks that can be followed. However, the dung of whales often floats and can be collected, yielding important information about their diet and about the role they have in the environment. Often cetology involves waiting and paying close attention. Cetologists use equipment including hydrophones to listen to calls of communicating animals, binoculars and other optical devices for scanning the horizon, cameras, notes, and a few other devices and tools. An alternative method of studying cetaceans is through examination of dead carcasses that wash up on the shore. If properly collected and stored, these carcasses can provide important information that is difficult to obtain in field studies. Identifying individuals In recent decades, methods of identifying individual cetaceans have enabled accurate population counts and insights into the life cycles and social structures of various species. One such successful system is photo-identification. This system was popularized by Michael Bigg, a pioneer in modern orca (killer whale) research. During the mid-1970s, Bigg and Graeme Ellis photographed local orcas in the British Columbian seas. After examining the photos, they realized they could recognize certain individual whales by looking at the shape and condition of the dorsal fin, as well as the shape of the saddle patch. These are as unique as a human fingerprint; no one animal's appearance is exactly like another's. Once Bigg and Ellis found they could recognize certain individuals, they realized that the animals travel in stable groups called pods. Researchers use photo identification to identify specific individuals and pods. The photographic system has also worked well in humpback whale studies. Researchers use the color of the pectoral fins and the color and scarring of the fluke to identify individuals. Scars from orca attacks found on the flukes of humpbacks are also used in identification. Related journals Mammal Review Cetology See also Cetologists Cetology of Moby-Dick Notes References Whales: Giants of the Sea, 2000 Transients: Mammal-Hunting Killer Whales, by John K.B. Ford and Graeme M. Ellis, 1999 External links Dolphins in Greek Mythology Whale Trackers - A Documentary Series about Whales, Dolphins and Porpoises. Cetacean research and conservation Mammalogy Marine biology
Cetology
Biology
1,395
58,622,254
https://en.wikipedia.org/wiki/Aspergillus%20tsurutae
Aspergillus tsurutae is a species of fungus in the genus Aspergillus. It belongs to the section Fumigati. Several fungi from this section produce heat-resistant ascospores, and isolates from this section are frequently obtained from locations where natural fires have previously occurred. The species was first described in 2003. Growth and morphology A. tsurutae has been cultivated on both yeast extract sucrose agar (YES) plates and Malt Extract Agar Oxoid® (MEAOX) plates. References tsurutae Fungi described in 2003 Fungus species
Aspergillus tsurutae
Biology
140
49,107,439
https://en.wikipedia.org/wiki/Coherency%20%28homotopy%20theory%29
In mathematics, specifically in homotopy theory and (higher) category theory, coherency is the standard that equalities or diagrams must satisfy when they hold "up to homotopy" or "up to isomorphism". Adjectives such as "pseudo-" and "lax-" are used to indicate that equalities are weakened in coherent ways; e.g., pseudo-functor, pseudoalgebra. Coherent isomorphism In some situations, isomorphisms need to be chosen in a coherent way. Often, this can be achieved by choosing canonical isomorphisms. But in some cases, such as prestacks, there can be several canonical isomorphisms and there might not be an obvious choice among them. In practice, coherent isomorphisms arise by weakening equalities; e.g., strict associativity may be replaced by associativity via coherent isomorphisms. For example, via this process, one gets the notion of a weak 2-category from that of a strict 2-category. Replacing coherent isomorphisms by equalities is usually called strictification or rectification. Coherence theorem Mac Lane's coherence theorem states, roughly, that if diagrams of certain types commute, then diagrams of all types commute. A simple proof of that theorem can be obtained using the permutoassociahedron, a polytope whose combinatorial structure appears implicitly in Mac Lane's proof. There are several generalizations of Mac Lane's coherence theorem. Each of them has the rough form that "every weak structure of some sort is equivalent to a stricter one". Homotopy coherence See also Coherence condition Canonical isomorphism Notes References Further reading Saunders Mac Lane, Topology and Logic as a Source of Algebra (Retiring Presidential Address), Bulletin of the AMS 82:1, January 1976. External links https://ncatlab.org/nlab/show/homotopy+coherent+diagram https://unapologetic.wordpress.com/2007/07/01/the-strictification-theorem/ Homotopy theory
Coherency (homotopy theory)
Mathematics
459
1,436,317
https://en.wikipedia.org/wiki/DNA%20construct
A DNA construct is an artificially designed segment of DNA borne on a vector that can be used to incorporate genetic material into a target tissue or cell. A DNA construct contains a DNA insert, called a transgene, delivered via a transformation vector which allows the insert sequence to be replicated and/or expressed in the target cell. This gene can be cloned from a naturally occurring gene, or synthetically constructed. The vector can be delivered using physical, chemical or viral methods. Typically, the vectors used in DNA constructs contain an origin of replication, a multiple cloning site, and a selectable marker. Certain vectors can carry additional regulatory elements based on the expression system involved. DNA constructs can be as small as a few thousand base pairs (kbp) of DNA carrying a single gene, using vectors such as plasmids or bacteriophages, or as large as hundreds of kbp for large-scale genomic studies using an artificial chromosome. A DNA construct may express wildtype protein, prevent the expression of certain genes by expressing competitors or inhibitors, or express mutant proteins, such as deletion mutations or missense mutations. DNA constructs are widely adapted in molecular biology research for techniques such as DNA sequencing, protein expression, and RNA studies. History The first standardized vector, pBR322, was designed in 1977 by researchers in Herbert Boyer's lab. The plasmid contains various restriction enzyme sites and a stable antibiotic-resistance gene free from transposon activity. In 1982, Jeffrey Vieira and Joachim Messing described the development of M13mp7-derived pUC vectors, which contain a multiple cloning site and allow for more efficient sequencing and cloning using a set of universal M13 primers. Three years later, the currently popular pUC19 plasmid was engineered by the same scientists. Construction The gene on a DNA sequence of interest can either be cloned from an existing sequence or developed synthetically. To clone a naturally occurring sequence from an organism, the organism's DNA is first cut with restriction enzymes, which recognize specific DNA sequences around the target gene and cut there. The gene can then be amplified using the polymerase chain reaction (PCR). Typically, this process includes using short sequences known as primers to initially hybridize to the target sequence; in addition, point mutations can be introduced in the primer sequences and then copied in each cycle in order to modify the target sequence. It is also possible to synthesize a target DNA strand for a DNA construct. Short strands of DNA known as oligonucleotides can be developed using column-based synthesis, in which bases are added one at a time to a strand of DNA attached to a solid phase. Each base carries a protecting group, which prevents incorrect linkage and is not removed until the next base is ready to be added, ensuring that the bases are linked in the correct sequence. Oligonucleotides can also be synthesized on a microarray, which allows for tens of thousands of sequences to be synthesized at once, in order to reduce cost. To synthesize a larger gene, oligonucleotides are developed with overlapping sequences on the ends and then joined together. The most common method is called polymerase cycling assembly (PCA): fragments hybridize at the overlapping regions and are extended, and larger fragments are created in each cycle. Once a sequence has been isolated, it must be inserted into a vector. 
The easiest way to do this is to cut the vector DNA using restriction enzymes; if the same enzymes were used to isolate the target sequence, then the same "overhang" sequences will be created on each end, allowing for hybridization. Once the target gene has hybridized to the vector DNA, they can be joined using a DNA ligase. An alternative strategy uses recombination between homologous sites on the target gene and the vector sequence, eliminating the need for restriction enzymes. A short software sketch of the restriction-digest step follows the list of construct types below. Modes of delivery There are three general categories of DNA construct delivery: physical, chemical, and viral. Physical methods, which deliver the DNA by physically penetrating the cell, include microinjection, electroporation, and biolistics. Chemical methods rely on chemical reactions to deliver the DNA and include transformation of cells made competent using calcium phosphate, as well as delivery via lipid nanoparticles. Viral methods use a variety of viral vectors to deliver the DNA, including adenovirus, lentivirus, and herpes simplex virus. Vector structure In addition to the target gene, there are three important elements in a vector: an origin of replication, a selectable marker, and a multiple cloning site. An origin of replication is a DNA sequence that starts the process of DNA replication, allowing the vector to clone itself. A multiple cloning site contains binding sites for several restriction enzymes, making it easier to insert different DNA sequences into the vector. A selectable marker confers some trait that can be easily selected for in a host cell, so that it can be determined whether transformation was successful. The most common selectable markers are genes for antibiotic resistance, so that host cells without the construct will die off when exposed to the antibiotic and only host cells with the construct will remain. Types of DNA constructs Bacterial plasmids are circular sections of DNA that naturally replicate in bacteria. Plasmids are capable of holding inserts up to approximately 20 kbp in length. These types of constructs typically contain a gene offering antibiotic resistance, an origin of replication, regulatory elements such as Lac inhibitors, a polylinker, and a protein tag which facilitates protein purification. Bacteriophage vectors are viruses that can infect bacteria and replicate their own DNA. Artificial chromosomes are commonly used in genome project studies due to their ability to hold inserts up to 350 kbp. These vectors are derived from the F plasmid, taking advantage of the high stability and conjugational ability introduced by the F factor. Fosmids are a hybrid between bacterial F plasmids and λ phage cloning techniques. Inserts are pre-packaged into phage particles and then inserted into the host cell; fosmids can hold inserts of roughly 45 kbp. They are typically used to generate a DNA library due to their increased stability. 
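To make the restriction-enzyme step above concrete, here is a minimal sketch using the Biopython library (an assumption for illustration; the article names no software, and the sequence below is made up):

from Bio.Seq import Seq
from Bio.Restriction import EcoRI

# A made-up insert flanked by two EcoRI recognition sites (GAATTC).
seq = Seq("AAGAATTCTTGGCCAACGTTGAATTCAA")

print(EcoRI.site)           # the recognition sequence: GAATTC
print(EcoRI.search(seq))    # cut positions within seq (1-based)
print(EcoRI.catalyse(seq))  # the fragments a digest would produce

Cutting the vector and the insert with the same enzyme leaves complementary overhangs, which is what permits the hybridization and ligation described above.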
DNA construct
Engineering,Biology
1,380
3,428,935
https://en.wikipedia.org/wiki/Density%20on%20a%20manifold
In mathematics, and specifically differential geometry, a density is a spatially varying quantity on a differentiable manifold that can be integrated in an intrinsic manner. Abstractly, a density is a section of a certain line bundle, called the density bundle. An element of the density bundle at x is a function that assigns a volume to the parallelotope spanned by the n given tangent vectors at x. From the operational point of view, a density is a collection of functions on coordinate charts which become multiplied by the absolute value of the Jacobian determinant in the change of coordinates. Densities can be generalized into s-densities, whose coordinate representations become multiplied by the s-th power of the absolute value of the Jacobian determinant. On an oriented manifold, 1-densities can be canonically identified with the n-forms on M. On non-orientable manifolds this identification cannot be made, since the density bundle is the tensor product of the orientation bundle of M and the n-th exterior product bundle of TM (see pseudotensor). Motivation (densities in vector spaces) In general, there does not exist a natural concept of a "volume" for a parallelotope generated by vectors v1, ..., vn in an n-dimensional vector space V. However, if one wishes to define a function μ : V × ... × V → R that assigns a volume to any such parallelotope, it should satisfy the following properties: If any of the vectors vk is multiplied by λ ∈ R, the volume should be multiplied by |λ|. If any linear combination of the vectors v1, ..., vj−1, vj+1, ..., vn is added to the vector vj, the volume should stay invariant. These conditions are equivalent to the statement that μ is given by a translation-invariant measure on V, and they can be rephrased as μ(Av1, ..., Avn) = |det A| μ(v1, ..., vn), for all A ∈ GL(V). Any such mapping is called a density on the vector space V. Note that if (v1, ..., vn) is any basis for V, then fixing μ(v1, ..., vn) will fix μ entirely; it follows that the set Vol(V) of all densities on V forms a one-dimensional vector space. Any n-form ω on V defines a density |ω| on V by |ω|(v1, ..., vn) = |ω(v1, ..., vn)|. Orientations on a vector space The set Or(V) of all functions o : V × ... × V → R that satisfy o(Av1, ..., Avn) = sign(det A) o(v1, ..., vn) if v1, ..., vn are linearly independent and o(v1, ..., vn) = 0 otherwise forms a one-dimensional vector space, and an orientation on V is one of the two elements o ∈ Or(V) such that |o(v1, ..., vn)| = 1 for any linearly independent v1, ..., vn. Any non-zero n-form ω on V defines an orientation o ∈ Or(V) such that o(v1, ..., vn) |ω|(v1, ..., vn) = ω(v1, ..., vn), and vice versa, any o ∈ Or(V) and any density μ ∈ Vol(V) define an n-form ω on V by ω(v1, ..., vn) = o(v1, ..., vn) μ(v1, ..., vn). In terms of tensor product spaces, Or(V) ⊗ Vol(V) = Λ^n V*. s-densities on a vector space The s-densities on V are functions μ : V × ... × V → R such that μ(Av1, ..., Avn) = |det A|^s μ(v1, ..., vn) for all A ∈ GL(V). Just like densities, s-densities form a one-dimensional vector space Vol^s(V), and any n-form ω on V defines an s-density |ω|^s on V by |ω|^s(v1, ..., vn) = |ω(v1, ..., vn)|^s. The product of an s1-density μ1 and an s2-density μ2 forms an (s1+s2)-density μ by μ(v1, ..., vn) = μ1(v1, ..., vn) μ2(v1, ..., vn). In terms of tensor product spaces this fact can be stated as Vol^{s1}(V) ⊗ Vol^{s2}(V) = Vol^{s1+s2}(V). Definition Formally, the s-density bundle Vol^s(M) of a differentiable manifold M is obtained by an associated bundle construction, intertwining the one-dimensional group representation ρ(A) = |det A|^{−s} of the general linear group with the frame bundle of M. The resulting line bundle is known as the bundle of s-densities, and is denoted Vol^s(M). A 1-density is also referred to simply as a density. More generally, the associated bundle construction also allows densities to be constructed from any vector bundle E on M. 
In detail, if (Uα, φα) is an atlas of coordinate charts on M, then there is an associated local trivialization of Vol_s(M) subordinate to the open cover Uα such that the associated GL(1)-cocycle satisfies t_αβ = |det J_αβ|^(−s), where J_αβ denotes the Jacobian matrix of the transition function φ_α ∘ φ_β^(−1). Integration Densities play a significant role in the theory of integration on manifolds. Indeed, the definition of a density is motivated by how a measure dx changes under a change of coordinates. Given a 1-density f supported in a coordinate chart Uα, the integral is defined by ∫_Uα f = ∫_φα(Uα) (f ∘ φα^(−1))(x) dx, where the latter integral is with respect to the Lebesgue measure on R^n. The transformation law for 1-densities together with the Jacobian change of variables ensures compatibility on the overlaps of different coordinate charts, and so the integral of a general compactly supported 1-density can be defined by a partition of unity argument. Thus 1-densities are a generalization of the notion of a volume form that does not necessarily require the manifold to be oriented or even orientable. One can more generally develop a general theory of Radon measures as distributional sections of Vol(M) using the Riesz–Markov–Kakutani representation theorem. The set of 1/p-densities μ such that ∫_M |μ|^p < ∞ is a normed linear space whose completion is called the intrinsic Lp space of M. Conventions In some areas, particularly conformal geometry, a different weighting convention is used: the bundle of s-densities is instead associated with the character χ(A) = |det A|^(−s/n). With this convention, for instance, one integrates n-densities (rather than 1-densities). Also in these conventions, a conformal metric is identified with a tensor density of weight 2. Properties The dual vector bundle of Vol_s(M) is Vol_{−s}(M). Tensor densities are sections of the tensor product of a density bundle with a tensor bundle. References Differential geometry Manifolds Lp spaces
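The chart-overlap compatibility invoked in the integration discussion above can be made explicit with a short computation (a sketch; the transition map T and the local representatives f_α, f_β are names introduced here): if the local representatives obey the 1-density transformation law, the two chart integrals agree by the classical change-of-variables formula.

```latex
% Sketch: chart-overlap compatibility of the integral of a 1-density.
% Write T = phi_beta o phi_alpha^{-1} for the transition map on an
% overlap U, and let the local representatives obey the 1-density law
\[
  f_\alpha(x) = f_\beta\bigl(T(x)\bigr)\,\bigl|\det DT(x)\bigr| .
\]
% Change of variables y = T(x), dy = |det DT(x)| dx, then gives
\[
  \int_{\varphi_\beta(U)} f_\beta(y)\, dy
    = \int_{\varphi_\alpha(U)} f_\beta\bigl(T(x)\bigr)\,\bigl|\det DT(x)\bigr|\, dx
    = \int_{\varphi_\alpha(U)} f_\alpha(x)\, dx ,
\]
% so the value of the integral does not depend on the chart used.
```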
Density on a manifold
Mathematics
1,132
9,969,312
https://en.wikipedia.org/wiki/Hydrogen-oxidizing%20bacteria
Hydrogen-oxidizing bacteria are a group of facultative autotrophs that can use hydrogen as an electron donor. They can be divided into aerobes and anaerobes. The former use hydrogen as an electron donor and oxygen as the acceptor, while the latter use electron acceptors such as sulfate or nitrate. Species of both types have been isolated from a variety of environments, including fresh waters, sediments, soils, activated sludge, hot springs, hydrothermal vents and percolating water. These bacteria are able to exploit the special properties of molecular hydrogen (for instance its redox potential and diffusion coefficient) thanks to the presence of hydrogenases. The aerobic hydrogen-oxidizing bacteria are facultative autotrophs, but they can also show mixotrophic or completely heterotrophic growth, and most of them grow better on organic substrates. The use of hydrogen as an electron donor, coupled with the ability to synthesize organic matter through the reductive assimilation of CO2, characterizes the hydrogen-oxidizing bacteria. Among the most represented genera of these organisms are Caminibacter, Aquifex, Ralstonia and Paracoccus. Sources of hydrogen Hydrogen is the most widespread element in the universe, making up roughly three-quarters of all ordinary matter by mass. In the atmosphere, the concentration of molecular hydrogen (H2) gas is about 0.5–0.6 ppm, making it the second-most-abundant trace gas after methane. H2 can be used as an energy source in biological processes because it has a highly negative redox potential (E0′ = −0.414 V). Its oxidation can be coupled with the reduction of O2 in oxidative respiration (2H2 + O2 → 2H2O), or with the reduction of oxidized compounds such as carbon dioxide or sulfate. In an ecosystem, hydrogen can be produced through abiotic and biological processes. The abiotic processes are mainly geothermal production and serpentinization. In geothermal processes, hydrogen is usually present as a gas and may be generated by different reactions: 1. Water may react with silicon radicals at high temperature: Si· + H2O → SiOH + H· followed by H· + H· → H2 2. A proposed reaction between iron oxides and water may occur at temperatures higher than 800 °C: 2FeO + H2O → Fe2O3 + H2 and 2Fe3O4 + H2O → 3Fe2O3 + H2 Occurring at ambient temperature, serpentinization is an exothermic geochemical process that takes place when ultramafic rocks from deep in the Earth rise and encounter water. It can produce large quantities of H2, as well as methane and organic substances. The main biotic mechanisms that lead to the formation of hydrogen are nitrogen fixation and fermentation. The first occurs in bacteria, such as cyanobacteria, that have a specialized enzyme, nitrogenase, which catalyzes the reduction of N2 to NH4+. These microorganisms also have another enzyme, hydrogenase, that oxidizes the H2 released as a by-product. If nitrogen-fixing bacteria have low amounts of hydrogenase, excess H2 can be released into the environment; the amount released depends on the ratio between H2 production and consumption. The second mechanism, fermentation, is performed by some anaerobic heterotrophic bacteria, in particular Clostridia, that degrade organic molecules and produce hydrogen as one of the products. This type of metabolism mainly occurs in anoxic sites, such as lake sediments, deep-sea hydrothermal vents and the animal gut. The ocean is supersaturated with hydrogen, presumably as a result of these biotic processes.
Nitrogen fixation is thought to be the major mechanism involved in the production of H2 in the oceans. The release of hydrogen in the oceans depends on solar radiation, with a daily peak at noon. The highest concentrations occur in the first few metres below the surface, decreasing down to the thermocline and reaching a minimum in the deep ocean. Globally, tropical and subtropical oceans have the greatest abundance of H2. Examples Hydrothermal vents H2 is an important electron donor in hydrothermal vents. In this environment hydrogen oxidation represents a significant source of energy, sufficient to drive ATP synthesis and autotrophic CO2 fixation, so hydrogen-oxidizing bacteria form an important part of the ecosystem in deep-sea habitats. Among the main chemosynthetic reactions that take place in hydrothermal vents, the oxidation of sulfide and hydrogen holds a central role. For autotrophic carbon fixation in particular, hydrogen oxidation is energetically more favourable than sulfide or thiosulfate oxidation, even though less energy is released per mole of substrate (only −237 kJ/mol compared to −797 kJ/mol): fixing one mole of carbon by hydrogen oxidation requires only about one-third of the energy needed when sulfide is oxidized, because hydrogen has a more negative redox potential than NAD(P)H. Depending on the relative amounts of sulfide, hydrogen and other species, energy production by the oxidation of hydrogen can be as much as 10–18 times higher than by the oxidation of sulfide. Knallgas bacteria Aerobic hydrogen-oxidizing bacteria, sometimes called knallgas bacteria, are bacteria that oxidize hydrogen with oxygen as the final electron acceptor. These bacteria include Hydrogenobacter thermophilus, Cupriavidus necator, and Hydrogenovibrio marinus. There are both Gram-positive and Gram-negative knallgas bacteria. Most grow best under microaerobic conditions, because the hydrogenase enzyme is inhibited by oxygen, yet oxygen is still needed as the terminal electron acceptor in energy metabolism. The word Knallgas means "oxyhydrogen" (a mixture of hydrogen and oxygen, literally "bang-gas") in German. Strain MH-110 Ocean surface water is characterized by a relatively high concentration of hydrogen. In 1989, an aerobic hydrogen-oxidizing bacterium was isolated from sea water. The MH-110 strain (also known as DSM 11271, the type strain of Hydrogenovibrio marinus) is able to grow under normal temperature conditions and in an atmosphere (under a continuous gas-flow system) with an oxygen saturation of 40%; comparable conditions are found in the fairly aerated surface water from which the bacterium was isolated. This differs from the usual behaviour of hydrogen-oxidizing bacteria, which in general thrive under microaerophilic conditions (<10% O2 saturation). This strain is also capable of coupling hydrogen oxidation with the reduction of sulfur compounds such as thiosulfate and tetrathionate. Metabolism Knallgas bacteria are able to fix carbon dioxide using H2 as their chemical energy source, which sets them apart from other hydrogen-oxidizing bacteria that use H2 as an energy source but cannot fix CO2. This aerobic hydrogen oxidation (2H2 + O2 → 2H2O), also known as the knallgas reaction, releases a considerable amount of energy, with a ΔG° of −237 kJ/mol. The energy is captured as a proton motive force for use by the cell.
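The "one-third of the energy" comparison quoted above is simple arithmetic on the two free-energy values given in the text; a quick check (the variable names are chosen here for illustration):

```python
# Quick arithmetic check of the energy comparison quoted above.
# Free energies of reaction as given in the text (kJ per mol):
dG_hydrogen = -237.0  # hydrogen oxidation
dG_sulfide = -797.0   # sulfide oxidation

# Energy released by H2 oxidation relative to sulfide oxidation:
ratio = dG_hydrogen / dG_sulfide
print(f"H2 oxidation releases {ratio:.0%} of the energy of sulfide oxidation")
# -> roughly 30%, consistent with the statement that fixing a mole of
#    carbon via hydrogen uses about one-third of the energy budget
#    required when sulfide is the electron donor.
```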
The key enzymes involved in this reaction are the hydrogenases, which cleave molecular hydrogen and feed its electrons into the electron transport chain, where they are carried to the final acceptor, O2, with energy extracted along the way. The hydrogen is ultimately oxidized to water, the end product. Hydrogenases are divided into three categories according to the type of metal present in the active site. These enzymes were first found in Pseudomonas saccharophila, Alcaligenes ruhlandii and Alcaligenes eutrophus, in which there are two types of hydrogenases: cytoplasmic and membrane-bound. While the first enzyme takes up hydrogen and reduces NAD+ to NADH for carbon fixation, the second is involved in generating the proton motive force. In most knallgas bacteria only the second is found. Although these microorganisms are facultative autotrophs, some are also able to live heterotrophically, using organic substances as electron donors; in this case, hydrogenase activity is less important or completely absent. When growing as chemolithoautotrophs, however, knallgas bacteria can assimilate CO2 and produce, through the Calvin–Benson cycle, the biomolecules necessary for the cell: 6H2 + 2O2 + CO2 → (CH2O) + 5H2O A study of Alcaligenes eutrophus, a representative knallgas bacterium, found that at low concentrations of O2 (about 10 mol %), and consequently at a low ΔH2/ΔCO2 molar ratio (3.3), the energy efficiency of CO2 fixation increases to 50%. Once assimilated, some of the carbon may be stored as polyhydroxybutyrate. Uses Given enough nutrients, H2, O2 and CO2, many knallgas bacteria can be grown quickly in vats using only a small amount of land area. This makes it possible to cultivate them as an environmentally sustainable source of food and other products. For example, the polyhydroxybutyrate the bacteria produce can be used as a feedstock for biodegradable plastics in various eco-sustainable applications. Solar Foods is a startup that has sought to commercialize knallgas bacteria for food production, using renewable energy to produce hydrogen by splitting water, and growing a neutral-tasting, protein-rich food source for use in products such as artificial meat. Research studies have suggested that knallgas cultivation is more environmentally friendly than traditional agriculture. References Bacteria Hydrogen Lithotrophs
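Both overall equations above (the knallgas reaction and the chemolithoautotrophic fixation equation) can be verified by a simple atom count. Below is a small, self-contained sketch; the minimal formula parser handles only the plain formulas used here.

```python
import re
from collections import Counter

def atoms(formula: str) -> Counter:
    """Count atoms in a simple formula such as 'H2O' or 'CO2'."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return counts

def side(terms) -> Counter:
    """Sum atom counts over (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in terms:
        for element, n in atoms(formula).items():
            total[element] += coeff * n
    return total

# Knallgas reaction: 2H2 + O2 -> 2H2O
assert side([(2, "H2"), (1, "O2")]) == side([(2, "H2O")])

# Fixation: 6H2 + 2O2 + CO2 -> (CH2O) + 5H2O
assert side([(6, "H2"), (2, "O2"), (1, "CO2")]) == \
       side([(1, "CH2O"), (5, "H2O")])

print("both equations are atom-balanced")
```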
Hydrogen-oxidizing bacteria
Chemistry,Biology
2,085
1,350,910
https://en.wikipedia.org/wiki/Pavers%20%28flooring%29
A paver is a paving stone, tile, brick or brick-like piece of concrete commonly used as exterior flooring. Pavers are generally placed on top of a foundation made of layers of compacted stone and sand. The pavers are laid in the desired pattern, and the joints between them (created by each paver's integrated spacer bars) are then filled with concrete sand or a polymeric sand. Apart from edging, no adhesive or retaining method is used other than the weight of the pavers themselves. Pavers can be used to make roads, driveways, patios, walkways and other outdoor platforms. In a factory, concrete pavers are made from a mixture of sand, stone, cement and iron oxide pigments that is formed in a mold and then cured prior to packaging. Block paving Block paving, also known as brick paving, is a commonly used decorative method of creating a pavement or hardstanding. The main benefit of bricks over other materials is that individual bricks can later be lifted up and replaced. This allows remedial work to be carried out under the surface of the paving without leaving a lasting mark once the paving bricks have been replaced. Typical areas of use are driveways, pavements, patios, town centres, pedestrian precincts and, increasingly, road surfacing. Bricks are typically made of concrete or clay, though other composite materials are also used. Each has its own means of construction; the biggest difference is the way they set hard ready for use. A clay brick has to be fired in a kiln to bake it hard, whereas a concrete brick has to be allowed to set. Concrete paving bricks are a porous form of brick formed by mixing small stone hardcore, dyes, cement, sand and other materials in various proportions. Many block-paving manufacturing methods now allow the use of recycled materials in the construction of the paving bricks, such as crushed glass and crushed building rubble. There are many different laying patterns that can be achieved using block paving. The most common of these is the herringbone pattern. This pattern is the strongest of the block-paving bonds, as it offers the most interlock, making it a good choice for driveways and road surfacing. A herringbone pattern can be created by setting the blocks at either 45 or 90 degrees to the perpendicular. Other popular patterns include stretcher bond and basketweave, with the latter better suited to paved areas that will receive only light foot traffic, due to its weaker bond. A commonly used base is 'cracker dust', also known as crushed bluemetal. Its advantage in residential use is that it compacts much harder than yellow brickies' sand, which prevents weeds and ants from coming through. Concrete pavers Pavers come in a number of styles, shapes and tones. Pavers manufactured from concrete go well with flagstone, brick and concrete walkways or patios. Concrete pavers may be used where winter temperatures dip below freezing. They are available in hole, x-shape, y-shape, pentagon, polygon and fan styles. An interlocking concrete paver, also known as a segmental paver, has emerged over the last few decades as a very popular alternative to brick, clay or poured concrete. An interlocking paver is a concrete block paver designed so that it locks in with the neighbouring pavers. The locking effect makes for a stronger connection between pavers, and because of it the paving is resistant to movement under traffic.
Segmental pavers have been used for thousands of years; the Romans built roads with them that are still in place. It was not until the mid-1940s, however, that pavers began to be produced from concrete. This started in the Netherlands, where roads must be flexible because much of the country is below sea level and the ground shifts, moves and sinks; poured concrete is not an option because it will crack, and individual units laid in sand rather than set in concrete perform far better. Before the paver was made from concrete, either real stone or a clay product was used. The first production of concrete pavers in North America was in Canada, in 1973. Due to their success, paving-stone manufacturing plants began to open throughout the United States, working their way from east to west. The first concrete pavers were shaped just like a brick, and they were called Holland Stones. These units turned out to be economical to produce and exceedingly strong. In addition to being economical, interlocking concrete pavers are widely available in water-permeable designs, which have added ecological benefits. By allowing water to drain through the pavers in a way that mimics natural absorption, builders and landscapers can limit surface runoff and prevent soil erosion or the build-up of standing water in the surrounding area. Some permeable paver installations are designed to harvest rainwater, which can then be repurposed for uses such as irrigation or washing a car. Permeable paver applications have also been found to help filter contaminants out of the water being captured. Modern installation method Pavers must have a strong base, a flat bedding and an edge restraint. Base To prevent the soil from absorbing the base layer above it, there should be a compacted sub-base (the naturally occurring soil) and a layer of landscape fabric, although landscape fabric is not required in every application. All compaction is usually performed with a plate compactor or hand tamper, and all sand-containing materials (e.g., concrete sand, rock dust, or minus crushed rock) must be wetted for effective compaction. The base layer should be 6" deep for walkways, or 12" deep for driveways. The base material should be either 3/4" crushed stone (to allow water to drain through it) under a 1/4" crushed-stone bedding, or 3/4" minus crushed stone (to prevent sand from sinking through it) under a concrete-sand bedding. The base should be compacted in 6" lifts. If the base layer is deeper than 6", biaxial geogrid should be added every 6", spaced evenly throughout the base, to maintain stability. If sound concrete is already installed, it can be used as the base layer. Bedding Above the base layer there should be a 1" bedding layer. A 1/4" crushed-stone bedding is favored over concrete sand for walkways because it drains better (mitigating freeze-thaw shifting), compacts more easily (especially on rainy days), and supports less weed growth. A concrete-sand bedding (specifically ASTM C33) is preferable for driveways because it allows tighter joints (i.e., thinner cracks): the sand is small enough to be forced up into the joints when the pavers are compacted, and this raised concrete sand helps lock the pavers in place so that they can bear more weight. Concrete sand is also a more suitable bedding layer than rock dust. Because rock dust retains rather than drains water, it prevents polymeric sand from drying and curing.
Moreover, when the water in the rock dust eventually evaporates, it carries salts up through the pavers, which deposit on the surface and stain them with efflorescence build-up; and compacting the pavers levels them more easily on sand than on rock dust. The bedding layer must be flattened by "screeding" it: scrape a straightedge (such as a level) along the top of the bedding. To guide the straightedge, it is common to place parallel metal rails on top of the bedding, or to lay 1" PVC pipes on the base so that they reach the top of the bedding; if pipes are used, the indentations they leave must be filled with bedding material once they are removed. A slight slope towards a drain is usually built in. The pavers are tapped with a mallet when placed, to help them settle and prevent them from wobbling. Edge restraint and sealing Edge restraints prevent the pavers from spreading apart and preserve the integrity of the pavement system, allowing it to move uniformly through freeze-thaw cycles. An edge restraint can be a concrete slope no steeper than 45 degrees that meets the edge pavers halfway between their top and bottom surfaces so that it can be buried. Alternatively, commercial plastic edge restraints can be anchored into the ground with steel spikes. After the pavers have been laid and cleaned with a pressure washer, they must be left to dry according to the recommendations for the particular polymeric sand (usually for at least one hot summer day). Once they are dry, sweep the polymeric sand into the joints, compact the pavers to help the sand sink in (often with a wood panel between the pavers and the compactor to avoid chipping the pavers), and then rinse according to the polymeric sand's instructions. This sand prevents weeds from growing between the pavers and helps lock them into place. Do not sweep the polymeric sand more than 10 feet from where it was poured, because doing so sifts out necessary additives. Note that different types of polymeric sand can handle different joint widths and depths, and they often require slightly different rinsing methods. Applying paver sealer or concrete sealer bi-annually helps prevent the pavers from staining. Stone pavers A stone paver is another type of paver, used widely in building and landscaping and prized for its beauty, strength and durability. Stone pavers are made of many materials, including limestone, bluestone, basalt (such as that from The Palisades used in New York City), sandstone and granite. Travertine is a durable, low-porosity stone that stays cool in direct sunlight, making it a popular choice for poolsides, patios, walkways and outdoor entertainment areas; it is salt tolerant and has low sunlight reflection. Granite pavers have high integral strength and density, making them easy to maintain and hard-wearing in outdoor use. Limestone pavers are cut from natural limestone blocks, a sedimentary rock found in mountainous areas and on ocean seabeds; limestone tends to have unique natural colour variations. Sandstone pavers are derived from natural stone and tend to be used for sidewalks, patios and backyards. See also Cool pavement Hoggin Macadam Pavement (York) Permeable paving Whitetopping References External links Building materials Concrete Floors Garden features
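The depths given in the installation section (a 6" base for walkways, 12" for driveways, plus a 1" bedding layer) translate directly into material volumes. The sketch below is a rough estimator; the 25% compaction allowance is a common rule of thumb assumed here, not a figure from this article.

```python
# Rough estimator for paver base and bedding material, based on the
# depths given above: 6" base for walkways, 12" for driveways, 1" bedding.
CUBIC_FT_PER_CUBIC_YD = 27

def material_volumes(area_sqft: float, use: str = "walkway",
                     compaction_allowance: float = 1.25) -> dict:
    """Return loose material volumes in cubic yards.

    compaction_allowance: extra material ordered because crushed stone
    compacts when tamped; 1.2-1.3 is a common rule of thumb (an
    assumption here, not a figure from the article).
    """
    base_depth_ft = {"walkway": 6 / 12, "driveway": 12 / 12}[use]
    bedding_depth_ft = 1 / 12
    base_cy = (area_sqft * base_depth_ft * compaction_allowance
               / CUBIC_FT_PER_CUBIC_YD)
    bedding_cy = area_sqft * bedding_depth_ft / CUBIC_FT_PER_CUBIC_YD
    return {"base_cubic_yards": round(base_cy, 2),
            "bedding_cubic_yards": round(bedding_cy, 2)}

# Example: a 200 sq ft walkway
print(material_volumes(200, "walkway"))
# -> {'base_cubic_yards': 4.63, 'bedding_cubic_yards': 0.62}
```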
Pavers (flooring)
Physics,Engineering
2,177