id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
73,084,636
https://en.wikipedia.org/wiki/Prototype%20pollution
Prototype pollution is a class of vulnerabilities in JavaScript runtimes that allows attackers to overwrite arbitrary properties in an object's prototype. In a prototype pollution attack, attackers inject properties into the prototypes of existing JavaScript constructs in an attempt to compromise the application. References External links Prototype Pollution Prevention Cheat Sheet - OWASP Web security exploits Servers (computing) JavaScript
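The attack pattern described above can be sketched with a deliberately vulnerable recursive merge. This is a minimal illustration, not code from any real library; the `merge` helper and the `isAdmin` flag are hypothetical names.

```typescript
// Deliberately vulnerable deep merge (hypothetical helper, for illustration).
function merge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value !== null && typeof value === "object") {
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {};
      }
      merge(target[key], value); // recursing into "__proto__" reaches Object.prototype
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse creates an own "__proto__" property, so Object.keys sees it.
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
merge({}, payload);

// A completely unrelated object now inherits the injected property.
const victim: any = {};
console.log(victim.isAdmin); // true - Object.prototype was polluted
```

Typical defenses include rejecting the special keys `__proto__`, `constructor` and `prototype` during merges, freezing `Object.prototype`, or holding attacker-influenced data in prototype-less objects (`Object.create(null)`) or a `Map`.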
Prototype pollution
[ "Technology" ]
77
[ "Computer security stubs", "Computing stubs", "Computer security exploits", "Web security exploits" ]
73,088,800
https://en.wikipedia.org/wiki/Conjugated%20oligoelectrolytes
Conjugated oligoelectrolytes, or COEs, are a class of synthetic antimicrobials designed to prevent and circumvent antimicrobial resistance via a different mechanism of action than traditional antibiotics. COEs insert into cell membranes and can function as electron transporters, but were also found to inhibit bacterial growth. They can also be used for tracking the progress of tumor growth. References Bactericides
Conjugated oligoelectrolytes
[ "Biology" ]
88
[ "Bactericides", "Biotechnology stubs", "Biocides" ]
73,088,985
https://en.wikipedia.org/wiki/IRIS%C2%B2
IRIS² (Infrastructure for Resilience, Interconnectivity and Security by Satellite) is a planned multi-orbit satellite internet constellation to be deployed by the European Union by 2027. It is intended to provide secure communications, location tracking and security surveillance services to governmental agencies, directly comparable to the US SpaceX Starshield project (though not to the commercial Starlink). The total cost of the programme was initially estimated at €6 billion, to which the European Union itself would contribute €2.4 billion from 2022 until 2027. When the contract was signed in December 2024, the estimate had risen to €10.5 billion, of which €6.5 billion are public funds. IRIS² is part of an overall EU space strategy that will include the forthcoming EU Space Strategy for Security and Defence. History The project was first announced by the Council of the EU in November 2022. A single multi-national industrial consortium, including Airbus Defence and Space, Thales Alenia Space and Arianespace among others, was expected to carry it out. The constellation is expected to be launched by European rockets such as the Ariane 6. The latter's first launch, initially scheduled for the end of 2022 but subsequently delayed several times, finally took place on 9 July 2024. In case of further delays, foreign contractors, namely SpaceX, may be considered. In January 2024, it was reported that European space companies were putting the final touches on a common proposal for the sovereign broadband constellation amid the looming mid-February deadline to submit their best and final offer to the European Commission. The contract was originally scheduled to be awarded by the end of March, but the European Commission appeared to have put it on hold. 
At a meeting of an EU parliamentary committee on 9 April 2024, the EU commissioner for the internal market, Thierry Breton, stated that the commission was still working on finalizing the contract, without providing an estimate of when it would be completed. In October 2024, the European Commission (EC) announced that the concession contract to develop, deploy and operate IRIS² had been awarded to SpaceRISE, a consortium of three European satellite operators (SES, Eutelsat and Hispasat), which would rely on a core team of eight European space and telecommunications companies as subcontractors: Thales Alenia Space, OHB, Airbus Defence and Space, Telespazio, Deutsche Telekom, Orange, Hisdesat and Thales SIX. The European Commission stated that IRIS² would be funded by the EU, the European Space Agency and private financing, and that the satellite constellation will comprise 290 satellites in multiple orbits, with the first satellites expected to enter service in 2030. The contract with SpaceRISE was signed in Brussels on 16 December 2024. See also Galileo (satellite navigation), the EU's satellite navigation constellation Satellite internet constellation SpaceX Starshield Satellite internet References Communications satellite constellations European Union and science and technology
IRIS²
[ "Astronomy" ]
606
[ "Outer space stubs", "Outer space", "Astronomy stubs" ]
73,089,691
https://en.wikipedia.org/wiki/Eolouka
Eolouka is a paraphyletic phylum of protists placed in the clade Discoba. It contains two lineages: Jakobea and Tsukubea, the latter containing only one genus, Tsukubamonas. History of classification In 1999 Cavalier-Smith proposed a new paraphyletic phylum of flagellates called Loukozoa, containing only the jakobids. In 2013 it was modified to contain three subphyla: Eolouka, containing the classes Jakobea and Tsukubea; Metamonada, containing the infraphyla Trichozoa and Anaeromonada; and Neolouka, containing the single class Malawimonadea. However, in later years these three groups were raised to the rank of phylum. References Protista
Eolouka
[ "Biology" ]
168
[ "Eukaryotes", "Eukaryote stubs" ]
73,090,137
https://en.wikipedia.org/wiki/Charles%20A.%20Birnbaum
Charles A. Birnbaum (born 1961) is a nationally recognized advocate for the study of American landscapes. He is the President and CEO of The Cultural Landscape Foundation (TCLF) in Washington, DC. Education and early career Charles Alan Birnbaum graduated from the State University of New York College of Environmental Science and Forestry, receiving a Bachelor in Landscape Architecture in 1983. Before founding TCLF, Birnbaum spent fifteen years as the coordinator of the National Park Service Historic Landscape Initiative (HLI). In that position, he developed and helped implement the Secretary of the Interior's Standards for the Treatment of Historic Properties + Guidelines for the Treatment of Cultural Landscapes. His experience includes ten years of private practice in New York City with a focus on landscape and urban design. Advocacy and education Birnbaum is known for his advocacy to identify and protect significant landscapes threatened by erasure, neglect or alteration. He has led efforts to save more than 50 parks, landmarks and gardens through open dialogue and education. TCLF identifies at-risk properties each year as part of its Landslide initiative. "Sites can be at-risk for numerous reasons, ranging from an imminent threat of demolition to an accumulation of factors — storm damage, lack of resources for needed maintenance, vandalism, etc." Among the projects that Birnbaum and TCLF have recently flagged is the A. M. E. Zion Church in Rossville, Staten Island. Another way that Birnbaum achieves consensus is through workshops where Birnbaum gathers members of the landscape community and the public together to discuss the interaction of the natural and the built environment. His first "Bridging the Nature Culture Divide Conference" was held in Rye, New York at the Jay Estate in 2011 and resulted in progress to reimagine historic gardens at the site in a sustainable manner. 
Subsequent forums were co-sponsored by the Central Park Conservancy (2012) and the Presidio Trust (2015). Birnbaum's "What's Out There" program has also received attention. It has advanced from a website database of over 1350 landscapes to a sophisticated interface technology that can be accessed via a mobile phone. In 2013, Birnbaum galvanized a group of advocates to stop the demolition of Peavey Plaza in Minneapolis and helped negotiate terms for the rehabilitation of the public space. Birnbaum garnered widespread attention when he and others condemned plans by the Frick Museum to eliminate what the museum termed a temporary 1977 Russell Page garden as part of their expansion plans in 2015. The garden was saved when Birnbaum produced a contradictory original document issued by the Frick clearly identifying the space as a permanent garden. Through TCLF, he has brought the names of forgotten landscape designers and stewards, particularly women like Ruth Shellhorn, an advisor to Walt Disney, and Harlem Renaissance poet Anne Spencer, to the forefront. "Women have literally shaped the American landscape and continue to today, but their names and contributions are largely unknown." In 2022, for the bicentennial of Frederick Law Olmsted's birth, Birnbaum issued an alert about the threats to a dozen of Olmsted's iconic green spaces noting the cumulatively negative impacts posed by climate change, coastal erosion and construction. Birnbaum has made numerous media appearances, including several segments on Fox News opposing the Obama Presidential Center in Chicago. Publications, oral history modules and film Birnbaum has written and edited numerous publications. His book Pioneers of American Landscape Design (2000), co-researched with Robin Karson and Laura Byers was called "the first of its kind in America." It is an encyclopedic catalog of landscape architects and public space planners and engineers. 
Other Birnbaum works include Shaping the American Landscape, Design with Culture: Claiming America's Landscape Heritage, Preserving Modern Landscape Architecture and its follow-up publication, Making Post-War Landscapes Visible. He has produced numerous oral history modules about practitioners in his field along with profiles of notable modern landscape architects like James van Sweden, Cornelia Hahn Oberlander, Lawrence Halprin and Paul Friedberg. Fellowships and awards Birnbaum is a Fellow of the American Society of Landscape Architects (ASLA) and the American Academy in Rome. In 2008, he was the visiting Glimcher Distinguished Professor at Ohio State's Austin E. Knowlton School of Architecture and was also awarded the Alfred B. LaGasse Medal from the ASLA. Birnbaum is also the recipient of the 2017 ASLA Medal and a Garden Club of America awardee in 2020. Birnbaum teaches at the Harvard Graduate School of Design, where he was a Loeb Fellow (1997–98), and has been a visiting professor at Columbia Graduate School of Architecture, Planning and Preservation (GSAPP). See also Conservation and restoration of immovable cultural property References External links The Cultural Landscape Foundation American landscape and garden designers Living people People from New York (state) Landscape architecture 1961 births State University of New York College of Environmental Science and Forestry alumni
Charles A. Birnbaum
[ "Engineering" ]
1,024
[ "Landscape architecture", "Architecture" ]
73,090,783
https://en.wikipedia.org/wiki/Vintage%20computer
A vintage computer is an older computer system that is largely regarded as obsolete. The personal computer has been around since approximately 1971, and in that time numerous technological revolutions have left generations of obsolete computing equipment on the junk heap. Nevertheless, these otherwise useless computers have spawned a sub-culture of vintage computer collectors, who often spend large sums to acquire the rarest of these items, not only to display but also to restore them to their fully functioning glory, including active software development and adaptation to modern uses. This often includes homebrew developers and hackers who add on, update and create hybrid composites from new and old computers for uses for which they were never intended. Ethernet interfaces have been designed for many vintage 8-bit machines to allow limited connectivity to the Internet, where users can access user groups, bulletin boards, and databases of software. Most of this hobby centers on computers manufactured after 1960, though some collectors specialize in pre-1960 computers as well. The Vintage Computer Festival, an event held by the Vintage Computer Federation for the exhibition and celebration of vintage computers, has been held annually since 1997 and has expanded internationally. By platform MITS Inc. Micro Instrumentation and Telemetry Systems (MITS) produced the Altair 8800 in 1975. According to Harry Garland, the Altair 8800 was the product that catalyzed the microcomputer revolution of the 1970s. IMSAI IMSAI produced a machine similar to the Altair 8800. It was introduced in 1975, first as a kit, and later as an assembled system. The list price was $591 for a kit, and $931 assembled. Processor Technology Processor Technology produced the Sol-20. This was one of the first machines to have a case that included a keyboard, a design feature copied by many later "home computers". 
SWTPC Southwest Technical Products Corporation (SWTPC) produced the 8-bit SWTPC 6800 and later the SWTPC 6809 kits that employed the Motorola 68xx series microprocessors. Apple Inc. The earliest Apple Inc. personal computers, using MOS Technology 6502 processors, are among the most collectible. They are relatively easy to maintain in an operational state thanks to Apple's use of readily available off-the-shelf parts. Apple I (1976): The Apple-1 was Apple's first product and has brought some of the highest prices ever paid for a microcomputer at auction. Apple II (1977): The Apple II series of computers are some of the easiest to adapt, thanks to the original expansion architecture designed for them. New peripheral cards are still being designed by an avid, thriving community, thanks to the longevity of this platform, manufactured from 1977 through 1993. Numerous websites exist to support not only legacy users but new adopters who weren't even born when the Apple II was discontinued by Apple. Macintosh (1984): The original Macintosh used a 32-bit Motorola 68000 processor running at 7.8336 MHz and came with 128 K of RAM. The list price was $2495. Perhaps because of its friendly design and first commercially successful graphical user interface, as well as its enduring Finder application that persists on the most current Macs, the Macintosh is one of the most collected and used vintage computers. With dozens of websites around the world, old Macintosh hardware and software are put into daily use. The Macintosh had a strong presence in many early computer labs, creating a nostalgia factor for former students who recall their first computing experiences. RCA The COSMAC ELF in 1976 was an inexpensive (about $100) single-board computer that was easily built by hobbyists. Many people who could not afford an Altair could afford an ELF, which was based on the RCA 1802 chip. 
Because the chips are still available from other sources, modern recreations of the ELF are fairly common and there are several fan websites. IBM The IBM 1130 (1965) was a desk-sized small computer. Often the first computer used by college students, it still has a following of interested users. Most of the remaining 1130 systems in 2023 are in museums, but an emulator is available for users who don't have access to a physical 1130. The 5100 also has an avid collector and fan base. The PC series (5150 PC, 5155 Portable PC, 5160 PC/XT, 5170 PC/AT) has become very popular in recent years, with the earliest models (PC) being considered the most collectible. Acorn BBC & Archimedes The Acorn BBC Micro was a very popular British computer in the 1980s with home and educational users and enjoyed near-universal usage in British schools into the mid-1990s. It could use 100K -inch disks, and it had many expansion ports. The Archimedes series, the de facto successor to the BBC Micro, has also enjoyed a following in recent years, thanks to its status as the first computer to be based around ARM's RISC microprocessor. Tandy/Radio Shack The Tandy/RadioShack Model 100 is still widely collected and used as one of the earliest examples of a truly portable computer. Other Tandy offerings, such as the TRS-80 line, are also very popular, and early systems, like the Model I, in good condition can command premium prices on the vintage computer market. Sinclair The Sinclair ZX81 and ZX Spectrum series were the most popular British home computers of the early 1980s, with a wide choice of emulators available for both platforms. The Spectrum in particular enjoys a cult following due to its popularity as a games platform, with new games titles still being developed even today. Original "rubber key" Spectrums fetch the highest prices on the second-hand market, with the later Amstrad-built models attracting less of a following. 
The earlier ZX81 is not as popular in original hardware form due to its monochrome display and limited abilities next to the Spectrum, but unassembled ZX81 kits still appear on eBay occasionally. MSX Although nearly nonexistent in the United States, the MSX architecture has strong communities of fans and hobbyists worldwide, particularly in Japan (where the standard was conceived and developed), South Korea (the only country that had an MSX-based game console, the Zemmix), the Netherlands, Spain, Brazil, Argentina, Russia, Chile, the Middle East, and others. New hardware and software are being actively developed to this day as well. One of the latest fundamental (from hardware and software perspectives) revivals of the MSX is the GR8BIT. Robotron The Robotron Z1013 was an East German home computer produced by VEB Robotron. It had a U880 processor, 16 KB RAM, and a membrane keyboard. The KC 85 series of computers was a modular 8-bit computer system used in East German schools. Commodore Collectible Commodore machines include the VIC-20, Commodore 64, PET, and Amiga. Xerox The Xerox Alto, designed and manufactured by Xerox PARC and released in 1973, was the first personal computer equipped with a graphical user interface. In 1979, Steve Jobs of Apple Inc. arranged for his engineers to visit Xerox in order to see the Alto. The design concepts of the Alto soon appeared in the Apple Lisa and Macintosh systems. The Xerox Star, also known as the 8010/40, was made available in 1981 and followed on from the Alto. Like the Alto, this machine was expensive and was only intended for corporate office usage. Being out of the price range of the average user, it achieved little market penetration. Silicon Graphics The SGI Indy, introduced in 1993 by Silicon Graphics, has a history of usage in the development of the Nintendo 64 as well as various CGI projects throughout the 1990s and early 2000s. The Indy and other machines in the SGI lineup have remained cult classics. 
See also List of home computers by video hardware Living Computers: Museum + Labs References Computer hardware History of computing Nostalgia
Vintage computer
[ "Technology", "Engineering" ]
1,657
[ "Computing culture", "Computer engineering", "Computer hardware", "Computer systems", "Computer science", "Computing and society", "Computers", "History of computing" ]
73,090,850
https://en.wikipedia.org/wiki/Cadusafos
Cadusafos (2-[butan-2-ylsulfanyl(ethoxy)phosphoryl]sulfanylbutane) is a chemical insecticide and nematicide often used against parasitic nematode populations. The compound acts as an acetylcholinesterase inhibitor. It belongs to the chemical class of synthetic organic thiophosphates and is a volatile and persistent clear liquid. It is used on food crops such as tomatoes, bananas and chickpeas. It is currently not approved by the European Commission for use in the EU. Exposure can occur through inhalation, ingestion or contact with the skin. The compound is highly toxic to nematodes, earthworms and birds but poses no carcinogenic risk to humans. History A patent application for Cadusafos was first filed in Europe on August 13, 1982 by FMC Corporation, an American chemical company which originated as an insecticide producer. In the patent application, they claimed that the compound should preferably be used to "control nematodes and soil insects, but may also control some insects which feed on the above ground portions of the plant." The patent has expired, meaning that the compound is commercially available from chemical vendors such as Sigma-Aldrich. However, the pesticide is not approved for use in Europe due to the lack of information on consumer exposure and the risk to groundwater. Structure and reactivity Cadusafos is a synthetic organic thiophosphate compound observed as a volatile and persistent clear liquid. The toxin is an organothiophosphate insecticide. Organothiophosphorus compounds are identified as compounds which contain carbon-phosphorus bonds where the phosphorus atom is also bound to sulphur. Many of these compounds serve as insecticides and cholinergic agents. In Cadusafos the phosphorus atom is bound to two sulphur atoms, each attached to a sec-butyl substituent. The phosphorus is also connected to oxygen by a double bond and is bound to an ethyl ether group. 
The exact reactivity of Cadusafos, as well as that of organothiophosphate compounds in general, is as yet unknown. However, the cholinesterase-inhibition mechanism of action of these compounds works similarly to that of other organophosphates. Examples of organophosphates include nerve gases such as sarin and VX as well as pesticides like malathion. Synthesis The synthesis of Cadusafos can be performed via the substitution reaction of O-ethyl phosphoric dichloride and two equivalents of 2-butanethiol. Mechanism of action Cadusafos is an inhibitor of the enzyme acetylcholinesterase. This enzyme binds to acetylcholine and cleaves it into choline and acetate. Acetylcholine is a neurotransmitter which is used by neurons to pass on a neural stimulus. Cadusafos inhibits the function of acetylcholinesterase by occupying the active site of the enzyme, which is then no longer able to function properly, resulting in the accumulation of acetylcholine. This can result in excessive nervous stimulation, respiratory failure and death. Cadusafos is an organothiophosphate, a subclass of the organophosphates, which inhibit acetylcholinesterase through a known mechanism. The active site of acetylcholinesterase contains an anionic site and an esteratic site. The esteratic site contains a serine at position 200, which usually binds acetylcholine. Organophosphate inhibitors can phosphorylate this serine and thereby inhibit the enzyme. Metabolism and biotransformation In a study, 14C-radiolabeled Cadusafos was administered orally to rats. The excretion via feces, urine and CO2 was monitored for seven days. This showed that Cadusafos is readily absorbed (90-100%) and mainly eliminated via urine (around 75%), followed by elimination via expired air (10-15%) and via feces (5-15%). Over 90% of the administered dose was eliminated within 48 hours after administration. 
Analysis of tissue and blood samples collected after seven days showed a remaining radioactivity of 1-3%. The majority of this radioactivity was found in fat, liver, kidney and lung tissue, and no evidence of accumulation was found. A different study was performed in order to identify the metabolites formed in rats after receiving either an oral or intravenous dose of Cadusafos. The metabolic products were analyzed using several analysis methods (HPLC, TLC, GC-MS, 1H-NMR and liquid scintillation). This indicated the presence of the parent compound, Cadusafos, as well as 10 other metabolites. The main pathway of metabolism involves the cleavage of the thio-(sec-butyl) group, forming two primary products: sec-butyl mercaptan and O-ethyl-S-(2-butyl) phosphorothioic acid (OSPA). These intermediate compounds are then degraded further into several metabolites. The major metabolites were hydroxysulfones, followed by phosphorothionic acids and sulfonic acids, which then form conjugates. Toxicity A study has been conducted by the Joint FAO/WHO Meeting on Pesticide Residues (JMPR) on rats in which the lethal dose of Cadusafos was investigated. The researchers found a median lethal dose via the oral pathway of 68.4 mg/kg bodyweight (bw) in male rats and 82.1 mg/kg bw in female rats. The rats died of typical symptoms of acetylcholinesterase inhibition. Via the dermal pathway, lower median lethal doses were found: mg/kg bw in males and 41.8 mg/kg bw in females. Considering the toxicity in humans, there is no data available yet regarding the median lethal dose for a human. The United States Environmental Protection Agency (EPA) did publish a report on the safety concerns of Cadusafos used as a pesticide on bananas and concluded that "Potential acute and chronic dietary exposures from eating bananas treated with Cadusafos are below the level of concern for the entire U.S. 
population, including infants and children." Effects on animals Cadusafos has been shown to be toxic to fish, aquatic invertebrates, bees, earthworms and other arthropods. Further research was conducted on terrestrial vertebrates, and it is expected to have toxic effects on mammals. Besides its direct toxicity to multiple species, Cadusafos also has a potential to bioaccumulate, so secondary poisoning of earthworm-eating mammals and birds should also be taken into consideration. The estimated risk to bees and aquatic organisms is low due to the manner in which Cadusafos is applied, even though its toxicity to bees is high. The compound is also estimated to be highly toxic to earthworms and birds. A multigeneration study in rats has established a No Observed Adverse Effect Level (NOAEL) of 0.03 mg/kg bw per day for the inhibition of cholinesterase activity in plasma and erythrocytes. There has been no adequate evidence that Cadusafos is a genotoxic compound. Due to this, and additional research on mice and rats which showed Cadusafos to be non-carcinogenic, it can be concluded that Cadusafos is non-carcinogenic for humans. Efficacy Cadusafos has proved to be very effective against parasitic nematode populations such as Rotylenchulus reniformis and Meloidogyne incognita. It was shown to be more effective against endoparasitic nematodes than ectoparasitic nematodes, and when compared to other nematicides like triazophos, methyl bromide, aldicarb, carbofuran and phorate, Cadusafos proved to be the most effective. The effectiveness of Cadusafos improves when increasing the dosage or the exposure time. Efficacy after application for several successive cropping seasons remained the same for up to four seasons. However, use for more than four consecutive seasons can cause a linear decrease in efficacy. References Nematicides Ethyl esters Phosphorodithioates Insecticides Thioesters Sec-Butyl compounds
Cadusafos
[ "Chemistry" ]
1,796
[ "Thioesters", "Functional groups", "Phosphorodithioates" ]
73,091,262
https://en.wikipedia.org/wiki/Environmental%20Science%20Center
The Environmental Science Center is a research center at Qatar University, established in 1980 to promote environmental studies across the state of Qatar with a main focus on marine, atmospheric and biological sciences. For the past 18 years, the ESC has monitored and studied Hawksbill turtle nesting sites in Qatar. History In 1980 it was established as the Scientific and Applied Research Center (SARC). In 2005 it was restructured and renamed the Environmental Studies Center (ESC). In 2015, the name was changed to Environmental Science Center (ESC) to better reflect its research-driven objectives. Research clusters The ESC has three major research clusters that cover areas of strategic importance to Qatar: the Atmospheric sciences cluster; the Earth sciences cluster; and the Marine sciences cluster, with two majors: Terrestrial Ecology, and Physical and Chemical Oceanography. UNESCO Chair in marine sciences The first of its kind in the Arabian Gulf region, the United Nations Educational, Scientific and Cultural Organization (UNESCO) has announced the establishment of the UNESCO Chair in marine sciences at QU's Environmental Science Center. The chair aims to promote a sustainable marine environment in the Arabian Gulf and the protection of marine ecosystems. Inventions Marine clutch technology. Mushroom artificial reef technology (mushroom forest). Accreditation The ESC labs have been granted ISO/IEC 17025 accreditation by the American Association for Laboratory Accreditation (A2LA), affirming their status as world-class facilities operating to best practice. Facilities The ESC is home to a wide range of facilities. The most notable is the mobile labs on board the JANAN research vessel. JANAN is a 42.80 m multipurpose research vessel named after the island located off the western coast of the Qatari peninsula. It was donated to Qatar University by H.H. Sheikh Tamim bin Hamad Al Thani, the Amir of Qatar. 
JANAN is used extensively in studying the state of the marine environment in the Exclusive Economic Zone (EEZ) of the State of Qatar and to advance critical marine environmental studies and research in Qatar and the wider Gulf. The center also has 12 labs equipped with state-of-the-art instruments. See also Qatar University Qatar University Library Mariam Al Maadeed Center for Advanced Materials (CAM) External links Research and Graduate Studies Office at Qatar University Qatar University Newsroom References 1980 establishments in Qatar Organisations based in Doha Research institutes in Qatar Educational institutions established in 1980 Qatar University Education by subject Human impact on the environment Oceanographic organizations Fishing and the environment Earth science research institutes Biological research institutes Environmental research institutes
Environmental Science Center
[ "Environmental_science" ]
496
[ "Environmental research institutes", "Environmental research" ]
44,450,362
https://en.wikipedia.org/wiki/Network%20medicine
Network medicine is the application of network science towards identifying, preventing, and treating diseases. This field focuses on using network topology and network dynamics to identify diseases and develop medical drugs. Biological networks, such as protein-protein interactions and metabolic pathways, are utilized by network medicine. Disease networks, which map relationships between diseases and biological factors, also play an important role in the field. Epidemiology is extensively studied using network science as well; social networks and transportation networks are used to model the spreading of disease across populations. Network medicine is a medically focused area of systems biology. Background The term "network medicine" was introduced by Albert-László Barabási in the article "Network Medicine – From Obesity to the 'Diseasome'", published in The New England Journal of Medicine in 2007. Barabási states that biological systems, similarly to social and technological systems, contain many components that are connected in complicated relationships but are organized by simple principles. Relying on the tools and principles of network theory, these organizing principles can be analyzed by representing systems as complex networks, which are collections of nodes linked together by a particular biological or molecular relationship. For networks pertaining to medicine, nodes represent biological factors (biomolecules, diseases, phenotypes, etc.) and links (edges) represent their relationships (physical interactions, shared metabolic pathway, shared gene, shared trait, etc.). Barabási suggested that understanding human disease requires us to focus on three key networks: the metabolic network, the disease network, and the social network. 
Network medicine is based on the idea that understanding the complexity of gene regulation, metabolic reactions, and protein-protein interactions, and representing these as complex networks, will shed light on the causes and mechanisms of diseases. It is possible, for example, to infer a bipartite graph representing the connections of diseases to their associated genes using the OMIM database. The projection onto the diseases, called the human disease network (HDN), is a network of diseases connected to each other if they share a common gene. Using the HDN, diseases can be classified and analyzed through the genetic relationships between them. Network medicine has proven to be a valuable tool in analyzing big biomedical data. Research areas Interactome The whole set of molecular interactions in the human cell, also known as the interactome, can be used for disease identification and prevention. These networks have been technically classified as scale-free, disassortative, small-world networks, having a high betweenness centrality. Protein-protein interactions have been mapped, using proteins as nodes and the interactions between them as links. These maps utilize databases such as BioGRID and the Human Protein Reference Database. The metabolic network encompasses the biochemical reactions in metabolic pathways, connecting two metabolites if they are in the same pathway. Researchers have used databases such as KEGG to map these networks. Other networks include cell signaling networks, gene regulatory networks, and RNA networks. Using interactome networks, one can discover and classify diseases, as well as develop treatments through knowledge of their associations and their role in the networks. One observation is that diseases can be classified not by their principal phenotypes (pathophenotypes) but by their disease module, which is a neighborhood or group of components in the interactome that, if disrupted, results in a specific pathophenotype. 
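The bipartite projection described above can be sketched with a toy disease-gene table; the associations below are invented stand-ins for OMIM records.

```typescript
// Hypothetical disease-gene associations (stand-ins for OMIM records).
const diseaseGenes: Record<string, string[]> = {
  "disease A": ["BRCA1", "TP53"],
  "disease B": ["TP53", "EGFR"],
  "disease C": ["EGFR"],
  "disease D": ["MYH7"],
};

// Project the bipartite disease-gene graph onto its disease nodes:
// two diseases are linked in the HDN if they share at least one gene.
function humanDiseaseNetwork(
  assoc: Record<string, string[]>,
): [string, string][] {
  const edges: [string, string][] = [];
  const diseases = Object.keys(assoc);
  for (let i = 0; i < diseases.length; i++) {
    const genes = new Set(assoc[diseases[i]]);
    for (let j = i + 1; j < diseases.length; j++) {
      if (assoc[diseases[j]].some((g) => genes.has(g))) {
        edges.push([diseases[i], diseases[j]]);
      }
    }
  }
  return edges;
}

console.log(humanDiseaseNetwork(diseaseGenes));
// [ [ 'disease A', 'disease B' ], [ 'disease B', 'disease C' ] ]
```

Here "disease D" ends up isolated in the projection: it shares no gene with the others, so the HDN carries no genetic link for it.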
Disease modules can be used in a variety of ways, such as predicting disease genes that have not yet been discovered. Therefore, network medicine looks to identify the disease module for a specific pathophenotype using clustering algorithms. Diseasome Human disease networks, also called the diseasome, are networks in which the nodes are diseases and the links represent the strength of correlation between them. This correlation is commonly quantified based on associated cellular components that two diseases share. The first-published human disease network (HDN) looked at genes, finding that many of the disease-associated genes are non-essential genes, as these are the genes that do not completely disrupt the network and are able to be passed down generations. Metabolic disease networks (MDN), in which two diseases are connected by a shared metabolite or metabolic pathway, have also been extensively studied and are especially relevant in the case of metabolic disorders. Three representations of the diseasome are: Shared gene formalism states that if a gene is linked to two different disease phenotypes, then the two diseases likely have a common genetic origin (genetic disorders). Shared metabolic pathway formalism states that if a metabolic pathway is linked to two different diseases, then the two diseases likely have a shared metabolic origin (metabolic disorders). Disease comorbidity formalism uses phenotypic disease networks (PDN), where two diseases are linked if the observed comorbidity between their phenotypes exceeds a predefined threshold. This does not look at the mechanism of action of diseases, but captures disease progression and how highly connected diseases correlate with higher mortality rates. Some disease networks connect diseases to associated factors outside the human cell. 
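The disease comorbidity formalism can be illustrated with a small sketch. One common comorbidity measure is relative risk — observed co-occurrence of two diagnoses versus the expectation if they were independent — and the patient records here are entirely hypothetical:

```python
from itertools import combinations

# Hypothetical patient records: the set of diseases diagnosed in each patient
patients = [
    {"diabetes", "hypertension"},
    {"diabetes", "hypertension", "obesity"},
    {"diabetes"},
    {"asthma"},
    {"hypertension"},
    {"asthma", "obesity"},
]

def phenotypic_disease_network(patients, threshold=1.0):
    """Link two diseases when their comorbidity, measured as relative risk
    (observed co-occurrence vs. expectation under independence),
    exceeds the given threshold."""
    n = len(patients)
    diseases = set().union(*patients)
    prevalence = {d: sum(d in p for p in patients) for d in diseases}
    edges = {}
    for d1, d2 in combinations(sorted(diseases), 2):
        co = sum(d1 in p and d2 in p for p in patients)
        rr = (co * n) / (prevalence[d1] * prevalence[d2]) if co else 0.0
        if rr > threshold:
            edges[(d1, d2)] = round(rr, 2)
    return edges

print(phenotypic_disease_network(patients))
```

Only pairs that co-occur more often than chance (relative risk above the threshold) receive a PDN edge; raising the threshold sparsifies the network, matching the "predefined threshold" in the formalism above.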
Networks of environmental and genetic etiological factors linked with shared diseases, called the "etiome", can also be used to assess the clustering of environmental factors in these networks and understand the role of the environment in the interactome. The human symptom-disease network (HSDN), published in June 2014, showed that the symptoms of disease and disease-associated cellular components were strongly correlated and that diseases of the same categories tend to form highly connected communities with respect to their symptoms. Pharmacology Network pharmacology is a developing field based in systems pharmacology that looks at the effect of drugs on both the interactome and the diseasome. The topology of a biochemical reaction network determines the shape of the drug dose-response curve as well as the type of drug-drug interactions, and thus can help design efficient and safe therapeutic strategies. In addition, the drug-target network (DTN) can play an important role in understanding the mechanisms of action of approved and experimental drugs. The network theory view of pharmaceuticals is based on the effect of the drug in the interactome, especially the region that the drug target occupies. Combination therapy for a complex disease (polypharmacology) is suggested in this field, since one active pharmaceutical ingredient (API) aimed at one target may not affect the entire disease module. The concept of disease modules can be used to aid in drug discovery, drug design, and the development of biomarkers for disease detection. There are a variety of ways to identify drugs using network pharmacology; a simple example is the "guilt by association" method. This states that if two diseases are treated by the same drug, a drug that treats one disease may treat the other. Drug repurposing, drug-drug interactions, and drug side-effects have also been studied in this field. 
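The "guilt by association" idea can be sketched directly from a drug-indication map. The drug-disease pairs below are illustrative examples, not a curated pharmacological dataset:

```python
# Hypothetical drug -> diseases-treated map (illustrative only)
drug_treats = {
    "metformin": {"type 2 diabetes"},
    "pioglitazone": {"type 2 diabetes"},
    "lithium": {"bipolar disorder"},
    "valproate": {"bipolar disorder", "epilepsy"},
}

def repurposing_candidates(drug_treats, target):
    """Guilt by association: diseases that share a drug with the target
    disease suggest their other drugs as repurposing candidates."""
    # Step 1: diseases linked to the target via at least one shared drug
    linked = set()
    for diseases in drug_treats.values():
        if target in diseases:
            linked |= diseases - {target}
    # Step 2: drugs treating a linked disease but not yet the target
    return {drug for drug, diseases in drug_treats.items()
            if diseases & linked and target not in diseases}

print(repurposing_candidates(drug_treats, "epilepsy"))
```

Here valproate links epilepsy to bipolar disorder, so lithium (which treats bipolar disorder but not epilepsy) is flagged as a candidate — the same inference the method makes at scale on real drug-disease networks.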
The next iteration of network pharmacology used an entirely different definition of disease: dysfunction in signaling modules derived from protein-protein interaction modules. The latter, as well as the interactome, had many conceptual shortcomings; for example, each protein appears only once in the interactome, whereas in reality one protein can occur in different contexts and different cellular locations. Such signaling modules are therapeutically best targeted at several sites, which is now the new and clinically applied definition of network pharmacology. To achieve higher than current precision, patients must not be selected solely on descriptive phenotypes but also based on diagnostics that detect the module dysregulation. Moreover, such mechanism-based network pharmacology has the advantage that the drugs used within one module are highly synergistic, which allows the dose of each drug to be reduced, in turn reducing the potential of these drugs acting on other proteins outside the module and hence the chance of unwanted side effects. Network epidemics The field of network epidemics has been built by applying network science to existing epidemic models, as many transportation networks and social networks play a role in the spread of disease. Social networks have been used to assess the role of social ties in the spread of obesity in populations. Epidemic models and concepts, such as spreading and contact tracing, have been adapted for use in network analysis. These models can be used in public health policies to implement strategies such as targeted immunization, and have recently been used to model the spread of the Ebola virus epidemic in West Africa across countries and continents. Drug prescription networks (DPNs) Recently, some researchers have represented medication use in the form of networks. The nodes in these networks represent medications and the edges represent some sort of relationship between these medications. Cavallo et al. 
(2013) described the topology of a co-prescription network to demonstrate which drug classes are most co-prescribed. Bazzoni et al. (2015) concluded that the DPNs of co-prescribed medications are dense, highly clustered, modular, and assortative. Askar et al. (2021) created a network of severe drug-drug interactions (DDIs), showing that it consisted of many clusters. Other networks The development of organs and other biological systems can be modelled as network structures where the clinical (e.g., radiographic, functional) characteristics can be represented as nodes and the relationships between these characteristics are represented as the links among such nodes. Therefore, it is possible to use networks to model how organ systems dynamically interact. Educational and clinical implementation The Channing Division of Network Medicine at Brigham and Women's Hospital was created in 2012 to study, reclassify, and develop treatments for complex diseases using network science and systems biology. It currently involves more than 80 Harvard Medical School (HMS) faculty and focuses on three areas: Chronic Disease Epidemiology uses genomics and metabolomics in large, long-term epidemiology studies, such as the Nurses' Health Study. Systems Genetics & Genomics focuses on complex respiratory diseases, specifically COPD and asthma, in smaller population studies. Systems Pathology uses multidisciplinary approaches, including control theory, dynamical systems, and combinatorial optimization, to understand complex diseases and guide biomarker design. Massachusetts Institute of Technology offers an undergraduate course called "Network Medicine: Using Systems Biology and Signaling Networks to Create Novel Cancer Therapeutics". Also, Harvard Catalyst (The Harvard Clinical and Translational Science Center) offers a three-day course entitled "Introduction to Network Medicine", open to clinical and science professionals with doctorate degrees. 
Current worldwide efforts in network medicine are coordinated by the Network Medicine Institute and Global Alliance, representing 33 leading universities and institutions around the world committed to improving global health. See also Biological network Biological network inference Bioinformatics Complex network Glossary of graph theory Graph theory Graphical models Human disease network Interactome Metabolic network Network dynamics Network science Network theory Network topology Pharmacology Systems biology Systems pharmacology Targeted immunization strategies References Network theory
Network medicine
[ "Mathematics" ]
2,238
[ "Network theory", "Mathematical relations", "Graph theory" ]
44,450,755
https://en.wikipedia.org/wiki/Lysophosphatidylinositol
Lysophosphatidylinositol (LPI, lysoPI), or L-α-lysophosphatidylinositol, is an endogenous lysophospholipid and endocannabinoid neurotransmitter. LPI, along with its 2-arachidonoyl- derivative, 2-arachidonoyl lysophosphatidylinositol (2-ALPI), have been proposed as the endogenous ligands of GPR55. See also Phosphatidylinositol Cannabinoid receptor References Endocannabinoids Neurotransmitters Phospholipids
Lysophosphatidylinositol
[ "Chemistry", "Biology" ]
151
[ "Phospholipids", "Inositol", "Biotechnology stubs", "Neurotransmitters", "Signal transduction", "Biochemistry stubs", "Biochemistry", "Neurochemistry" ]
44,453,027
https://en.wikipedia.org/wiki/Aero-engined%20car
An aero-engined car is an automobile powered by an engine designed for aircraft use. Most such cars have been built for racing, and many have attempted to set world land speed records. While the practice of fitting cars with aircraft engines predates World War I by a few years, it was most popular in the interwar period, when military-surplus aircraft engines were readily available and used to power numerous high-performance racing cars. Initially powered by piston aircraft engines, a number of post-World War II aero-engined cars have been powered by aviation turbine and jet engines instead. Piston-engined, turbine-engined, and jet-engined cars have all set world land speed records. There have also been some non-racing automotive applications for aircraft engines, including production vehicles such as the Tucker 48 and prototypes such as the Chrysler Turbine Car, Fiat Turbina, and General Motors Firebirds. In the late 20th century and into the 21st century, there has also been a revival of interest in piston-powered aero-engined racing cars. Background In the early 20th century, automotive engines were fairly limited in terms of revolutions per minute (rpm), with 3,000 rpm constituting an upper limit. This meant that the easiest way to increase the power output of an engine was to increase its displacement. In the decade of the 1900s, engine construction necessitated extremely large displacements in order to simply reach the mark. Furthermore, while it was difficult to fit such a large engine into a car, it was very much possible, and the fact that most of the aircraft engines of the period were liquid-cooled made them more adaptable for automotive use. Racing Pre-World War I A number of early European automobile manufacturers experimented with the automotive use of aircraft engines, including Hispano-Suiza, Renault, and Rolls-Royce, although it was Fiat that made perhaps the first true aero-engined car when it created the Tipo S76 in 1910. 
Nicknamed "The Beast of Turin", the vehicle consisted of a 1907–08 Fiat production chassis mated to a four-cylinder Tipo S76DA airship engine that had a displacement of and developed at 1,500 rpm. Daryl Murphy speculates that the car was built to capture the world land speed record, which at the time stood at after the Blitzen Benz had established the mark at the English track Brooklands in 1909. While the Tipo S76 did race at Brooklands, it never exceeded more than about . It later returned to continental Europe and ultimately disappeared during World War I. Sunbeam also manufactured aircraft engines before World War I, and at the suggestion of chief designer Louis Coatalen it decided to install one of its flathead V12 engines (which would later be developed into the Sunbeam Mohawk) into an automobile chassis in 1913. Nicknamed "Toodles", the car achieved at Brooklands before it was shipped to the United States, where it was raced by Ralph DePalma. DePalma later sold Toodles to the Packard Motor Car Company, which used the car's engine as the inspiration for its Twin Six, which became the world's first production 12-cylinder engine in 1916, as well as a , V12 aircraft engine in 1917. Sunbeam also developed a second aero-engined car before World War I, which began life as an Indianapolis 500 racing car before Warwick Wright augmented it with a V8 Sunbeam Sirdar airship engine. The car developed at 2,200 rpm, which enabled it to achieve a top speed of approximately . By 1923, this Sunbeam was listed for sale for £1,000. Interwar period 1920s By the 1920s, after the end of World War I, interest in and development of aero-engined cars reached a new level. Coatalen built another aero-engined racing car, the Sunbeam 350HP, which featured a Sunbeam Manitou engine that had been designed to power Royal Naval Air Service flying boats. With an engine displacement of and the ability to generate at 2,100 rpm, the 350HP achieved a top speed of in 1922. 
In 1923, Ernest Eldridge began building the Mefistofele, which consisted of a Fiat SB4 chassis and a Fiat A.12 bis aircraft engine that produced at 1,800 rpm. On 12 July 1924, Eldridge drove the car to a world-record speed of on public roads in Arpajon, France, which marked the last time that a land speed record would be set on public roads. The car's name was bestowed upon it by the press due to the tremendous amount of noise and smoke generated by its engine. Argentine racer Adolfo Scandroglio built his Fiat Botafogo Special in the image of the Mefistofele, using a 1917 Fiat chassis and the same Fiat A.12 engine that had been chosen by Eldridge. The car, which was named after a famed racehorse, was capable of producing at 1,800 rpm. In 1949, Scandroglio was killed while racing the Botafogo Special, and the car was presumed to have been lost before its engine was rediscovered in the 1990s. After its rediscovery, the Argentine company Pur Sang, which is noted for creating exact replicas of Alfa Romeo 8C 2300s and Bugatti Type 35s, reconstructed the Botafogo Special. In 2011, the rebuilt car was purchased from Pur Sang by Jay Leno. In 1923, the Sunbeam 350HP was purchased by Malcolm Campbell, who made modifications to the coachwork as well as the engine in his endeavor to increase its speed. He also renamed the car Blue Bird, and on 25 September 1924 used it to set the official world land speed record with a speed of at Pendine Sands in Wales. The next year, on 21 July 1925, Campbell returned to Pendine, where he became the first person to exceed as he set a new record of . Perhaps the most well-known aero-engined cars of the interwar period were the series of amateur, chain-driven creations of Louis Zborowski that were each known as Chitty Bang Bang. They later attained fame as the namesake for the children's book Chitty-Chitty-Bang-Bang, written by Ian Fleming, as well as the film and musical of the same title. 
Although the origin of the name is unknown, it is thought to derive from either a lewd World War I soldier's song or simply the sound of the aircraft engines that powered the cars. The first car, Chitty 1, featured a customized pre-war Mercedes chassis and a , six-cylinder Maybach airplane engine that had powered a Gotha G.V bomber before it was surrendered by Germany as a war reparation. The engine could produce at a relatively modest 1,500 rpm. Chitty 1 achieved celebrity status at Brooklands in 1921, where it won races at speeds in excess of . In 1922, Zborowski returned to Brooklands to achieve his highest ever speed in the car, , although that autumn Chitty 1 was destroyed in a racing accident. Zborowski began working on a second car of the same name, Chitty 2, in 1921. While its use of an older model Mercedes for its chassis made it similar to its predecessor, this iteration of Chitty Bang Bang was powered by an Benz Bz.IV engine that manufactured . Chitty 2 placed second in its only race at Brooklands, although it did record a speed of over . In 1922, Zborowski and his wife took the car on a lengthy excursion across France and Algeria, all the way to the edge of the Sahara, where a dearth of sufficient radiator water caused such substantial engine damage that he was forced to retire the car from racing. Zborowski himself was killed at Monza while competing in the 1924 Italian Grand Prix, and Chitty 2 passed through a series of owners (including Arthur Conan Doyle) before being acquired by the Crawford Auto-Aviation Collection in Cleveland. The third of Zborowski's cars, Chitty 3, was also built around a modified Mercedes chassis, this time mated to a six-cylinder Mercedes aircraft engine originally rated at that had been tuned to develop . Once again, this car raced at Brooklands, where it achieved a top speed of . Zborowski's fourth and final aero-engined car was the Higham Special, which he named in a nod to his manor, the Higham House. 
Created in 1924 for the purpose of making an attempt on the land speed record, the car was powered by a World War I V12 Liberty L-12 engine with a displacement of , which made it the largest-capacity engine to ever race at Brooklands. With an engine producing and the gearbox and chain-drive of a pre-war Blitzen Benz, the Higham Special achieved a speed of with Zborowski at the wheel. After Zborowski's death at Monza, racing enthusiast J. G. Parry-Thomas bought the car and, after streamlining the body and modifying the engine, rechristened it "Babs". In 1926, Parry-Thomas took the car back to Brooklands, where he set a new world record with a speed of . He then took Babs to Pendine, where he achieved on the sands. After Malcolm Campbell took back the record with a run in his Blue Bird, Parry-Thomas returned to Pendine in 1927 with a more streamlined Babs. However, on his first run, he was killed in a crash. Parry-Thomas' crew buried Babs in the sand, where it remained until Owen Wyn Owen began excavating it in 1969. Wyn Owen ultimately restored the car to working order by 1985. In 2013, Babs was placed on display at the National Waterfront Museum in Swansea. In 1927, Henry Segrave broke Campbell's world speed record with a run of at Daytona Beach, Florida, in his Sunbeam 1000 hp, which was powered by two V12 Sunbeam Matabele aircraft engines. The new record made him the first person to surpass the mark. The following year, Campbell raced at Daytona to retake the record with a speed of , only to have it eclipsed just two months later by Ray Keech and his Triplex Special, which was powered by three V12 Liberty engines. On 11 March 1929, Segrave captured the world record once more at Daytona with a speed of in his Golden Arrow, which was powered by a W12 Napier Lion aircraft engine with a displacement of that manufactured at 3,300 rpm. 
The very next day, while attempting to re-take the record with the Triplex Special, driver Lee Bible lost his life in a fatal crash that also killed a film cameraman. 1930s In 1931, Campbell returned to competition with an upgraded Blue Bird that was sleeker and lower than its predecessor. Fitted with a Napier Lion engine, the car successfully set a new land speed record with a run of . By 1933, Campbell had created another Blue Bird that was powered by a Rolls-Royce R, which had achieved fame as the engine that helped the Supermarine S.6B seaplane win the Schneider Trophy. With this engine, which produced and had a displacement of , Blue Bird achieved a speed in excess of at Daytona. However, as performance continued to increase, the relatively limited area of Daytona Beach began to prevent cars from reaching their true top speeds. In September 1935, Campbell took Blue Bird to Utah's Bonneville Salt Flats, where it exceeded . Ab Jenkins, who in October 1935 had set speed records for one hour and for 24 hours in a factory-modified Duesenberg SJ on a circuit marked out in the Bonneville Salt Flats, realized that it was no longer possible for a modified production car to compete against aero-engined cars for long-distance speed records. Jenkins had his SJ special further modified, replacing the modified SJ engine with a Curtiss Conqueror engine. The Conqueror-engined special was named "Mormon Meteor" by a contest held by the Deseret News. In 1936 the Mormon Meteor set the record at (breaking a record set by George Eyston), the 24‑hour record at , and the 48‑hour record at . The Mormon Meteor set another 24‑hour record in 1937, averaging . Jenkins then commissioned August Duesenberg to build a chassis that was better able to handle the weight, power, and torque of the Conqueror engine. The result was the Mormon Meteor III, which broke the 12‑hour record in 1939 and set a 24‑hour record of in 1940. 
In 1937, Eyston brought his Thunderbolt to Bonneville, where its twin Rolls-Royce R engines powered it to a world-record speed of . That year on the salt flats, something of a rivalry developed between Eyston and John Cobb, who had previously raced the Napier-Railton at Brooklands as well as at Bonneville. For 1937, Cobb had built the teardrop-shaped, streamlined Railton Special, which featured four-wheel drive and two Napier Lion engines. Over the span of just a few weeks, Eyston and his Thunderbolt set a new record of , which Cobb and his Railton Special answered with a run of just over , before Eyston retook the title by achieving . The following year, 1938, Cobb returned to Bonneville and set a new world record of , which would stand until 1947 due in part to the hiatus of competition caused by the outbreak of World War II. By 1939, the Mercedes-Benz T80 emerged as the result of a three-year collaboration between German auto racer Hans Stuck, Mercedes-Benz, and Adolf Hitler, the latter of whom had a strong interest in motorsport and was committed to subsidizing German racing endeavors in an effort to showcase his country's technological superiority on the world stage. Costing an astounding 600,000 Reichsmarks, the six-wheeled, streamlined T80 was largely designed and developed by Ferdinand Porsche. The T80 was powered by the Daimler-Benz DB 603, an inverted V12 aviation engine that boasted a displacement of and was capable of producing , which had been derived from the Daimler-Benz DB 601 that powered the Messerschmitt Bf 109 fighter aircraft. The T80's engine ran on a fuel mixture that consisted mostly of methyl alcohol (63%), as well as smaller percentages of benzene, ethanol, acetone, nitrobenzene, avgas, and ether. After initially being set at , the car's targeted top speed was ultimately increased to by late 1939. 
A world speed record attempt was planned for January 1940 on the Dessauer Rennstrecke segment of the Reichsautobahn Berlin-Halle/Leipzig, with Stuck at the controls, although the outbreak of World War II prevented the run from ever happening. After surviving the war in storage in Carinthia, Austria, the T80 was ultimately acquired by the Mercedes-Benz Museum in Stuttgart. Post-World War II Piston-engined cars After the conclusion of World War II, John Cobb returned to Utah in 1947, where he improved upon his own world record by achieving an official speed of in his rebuilt Railton Mobil Special. On one of the requisite two-way runs, Cobb exceeded . Cobb's record would stand for 16 years, and would mark the last time that a piston-engined car would hold the world land speed record. In 1951, hot rod and drag racing enthusiast Art Arfons began building a series of aero-engined racing cars each known as the Green Monster. The first was a two-ton Ford truck chassis mated to an Allison V-1710 piston engine that was altogether capable of a record in a quarter-mile drag race. Arfons went on to build 12 more piston-engined Green Monsters before he began experimenting with jet engines. Turbine-engined cars First raced in 1960, the Bluebird-Proteus CN7 was built at a cost of £1 million and powered by a Bristol-Siddeley Proteus turboshaft gas turbine engine. The engine, which was rated at , drove all four wheels. After a serious crash at Bonneville, a tail fin was added to the original design before the Bluebird-Proteus CN7 made another run at the world record at Lake Eyre, South Australia. There, on 17 July 1964, Donald Campbell piloted the car to a new world record speed of . A number of other turbine-engined racing cars have been built, including two designed to compete for the world land speed record: Pioneer 2M and the Renault Étoile Filante. 
Turbine-engined cars have also raced in other types of motorsports, including both open-wheel racing (Lotus 56 and STP-Paxton Turbocar) as well as sports car racing (Howmet TX and Rover-BRM). Jet-engined cars In 1952, Soviet aircraft designer Aleksey Smolin developed the GAZ-TR, which was powered by a turbojet. Built in 1954, it was designed to reach , but due to the lack of adequate tires and an insufficiently long track it failed to exceed during a test run on November 14, 1954. The GAZ-TR crashed during testing, injuring driver MA Meteleva and leading to the cancellation of the program. Wreckage from the car is on display at the GAZ factory museum. In 1962, jet engines made their first appearances at Bonneville in three different cars that were each based around the General Electric J47 engine, which also powered the North American F-86 Sabre jet fighter. One of the cars was the Flying Caduceus, which was driven to a speed of by Nathan Ostich, a physician who built the first jet car. The second was piloted by Glenn Leasher, who approached the mark before he was killed in a crash. The third was the needle-nosed Spirit of America, designed and raced by drag racer Craig Breedlove. Breedlove also contended for the speed record that year, although he did not capture the title until he recorded a speed of in 1963. In 1964, brothers Art and Walt Arfons arrived at Bonneville with jet cars of their own. Walt had acquired a Westinghouse J46 jet engine, which had been designed for the Vought F7U Cutlass, that he used to power his Wingfoot Express. Art had opted for a General Electric J79, the same engine that powered the Lockheed F-104 Starfighter and the Convair B-58 Hustler bomber, and built a new, jet-powered Green Monster. After Walt Arfons crashed and suffered a heart attack while testing the Wingfoot Express, designer Tom Green was selected to drive the car. Despite never having driven over before, on 2 October 1964 he piloted the car to a world-record speed of . 
The record stood for just three days, however, before it was broken by Art Arfons and his Green Monster with a speed of . Just one week after the Green Monster's record run, Breedlove broke the barrier before surviving a high-speed crash. The 1964 season ended with Art Arfons retaking the speed title when he made a run at after making modifications to his engine. In 1965, Breedlove returned to the Bonneville Salt Flats with his new Spirit of America - Sonic I, which was powered by a GE J79 engine. Challenged by Walt Arfons and his modified, JATO-assisted Wingfoot Express, Breedlove recorded a speed of in his new car. While Walt was unable to match Breedlove's speed, his brother Art surpassed it just a week later with a run of , despite shredding a tire in the process. Ultimately it was Breedlove, immortalized by the Beach Boys in the song "Spirit of America", who emerged victorious as he posted a speed of on 15 November 1965. In 1970, Gary Gabelich piloted the rocket-powered Blue Flame to a new world record at Bonneville with a speed of . In 1983, this record was eclipsed by Thrust2, which was powered by a Rolls-Royce Avon jet engine and driven by Richard Noble to a speed of . In 1997, the world land speed record was bested once more by ThrustSSC, which achieved a speed of in the Black Rock Desert with Andy Green at the controls. The car, which was powered by two Rolls-Royce Spey jet engines that manufacture a combined and of thrust, became the first vehicle to break the sound barrier on land. Jet-powered drag racing cars have also appeared in National Hot Rod Association (NHRA) events since the 1970s. Jet cars were first sanctioned by the NHRA in 1974, and in 1980 official approval was granted for jet-powered Funny Cars. In 1975, drag racer Phillip "Al" Eierdam created Emergency 1, a jet car powered by a Westinghouse J34 engine and stylized to mimic a fire engine. 
In the 1980s, Eierdam built and raced the rocket-engined Invader, often against his friend Sammy Miller and his rocket-powered Funny Car, Vanishing Point. The two contested the first side-by-side drag races between rocket-powered cars at Santa Pod Raceway in England. By 1989, Roger Gustin had built more jet cars than anyone else in drag racing and had won the Jet Car Nationals on five separate occasions. In the 2010s, jet cars have continued to be major attractions at NHRA events, participating in exhibitions such as four-wide races and achieving speeds in excess of . During the 2012 season, Elaine Larsen and Marisha Falk both drove jet dragsters powered by General Electric J85 engines capable of producing . Non-racing applications Although rare, aircraft engines have occasionally been chosen as the powerplant for road-going cars. One prime example is the Tucker 48, which was produced in 1947 and 1948 and powered by a flat-six Franklin O-335 helicopter engine. With a displacement of , the engine produced at 3,200 rpm and produced a maximum of of torque at 2,000 rpm, yet due largely to its all-alloy construction only weighed . The engine enabled the Tucker 48 to reach a top speed of approximately and to accelerate from 0 to in 10 seconds. While the original O-335 helicopter engine was air-cooled, Tucker engineers modified it to water cooling, which helped improve the powerplant's durability while also giving the car the automotive industry's first fully sealed water-cooling system. In the 1960s, British engineer Paul Jameson and transmission specialist John Dodd collaborated to build The Beast, a road car fitted with a Rolls-Royce Merlin engine. Using a General Motors Turbo-Hydramatic gearbox, the back axle from a Jaguar XJ12, doors cast from a Ford Cortina Mk III, and a fiberglass body reminiscent of a Ford Capri, the finished car had a "phallically long front end" that measured 10 feet. 
The Beast's engine produced approximately at 2,500 rpm which propelled it to a top speed in excess of . The car averages less than . Once listed by the Guinness Book of Records as the world's most powerful road car, by 2012 The Beast had been located in Málaga. Turbine engines have also been utilized in concept and prototype road cars, such as the three General Motors Firebirds, the Fiat Turbina, and the Chrysler Turbine Car. In 1953, the General Motors XP-21 Firebird I became the first car powered by a gas turbine engine to be built in the United States. Never intended for production, the car was purely a design exercise to determine the feasibility of turbine-powered road cars. The car's body, which was made of plastic reinforced by fiberglass, was designed by Harley J. Earl, while its Whirlfire Turbo-Power engine was developed by Charles L. McCuen and the GM Research Laboratories Division. Driving the rear wheels of the car via a conventional transmission, the engine was able to produce at 13,000 rpm. Its successor, Firebird II, debuted at General Motors Motorama in 1956. In addition to its regenerative gas turbine, the car featured a titanium body, fully independent suspension, power disc brakes, electric gear selection, and air conditioning that could be individually controlled. The last of GM's Firebirds, Firebird III, was built in 1958. It was the only Firebird to influence any GM production cars; both the 1959 and 1961 Cadillac lineups took styling cues from it. Noted for its extravagant tailfins, Firebird III also broke a number of Earl's styling rules with its very reserved use of chrome and lack of parallel lines. While GM planned a Firebird IV, it never came to fruition, although the three Firebirds did ultimately become the namesake of the Pontiac Firebird pony cars that debuted in 1967. In 1954, Fiat introduced its own experimental turbine-engined prototype, the Turbina. 
The car was powered by a two-stage turbine that powered the wheels through a geared reduction unit, while its body was streamlined based on the results of wind tunnel testing. The Turbina's engine enabled it to achieve a top speed of as well as to produce at 22,000 rpm. Introduced to the public in 1963, the Chrysler Turbine Car was powered by a turbine that produced and of torque, which made its output roughly equivalent to a V8 engine. The turbine engine offered numerous advantages in a road car, including less need for maintenance due to fewer moving parts, general operating smoothness, greater dependability of starting in cold weather, lack of a need for antifreeze, minimal oil consumption, and the ability to run on almost any combustible liquid; the car is claimed to have run on fuels as diverse as peanut oil, Chanel No. 5 perfume, and tequila. However, there were also significant drawbacks with using a turbine engine in the car, namely high internal heat, lack of inherent engine braking, and high emissions of . Furthermore, the engine was better suited to the relatively continuous operation and constant speeds of aviation use than it was to the more disruptive, stop-and-go conditions of automotive use. On the highway, the car could achieve , but because the engine idled at 22,000 rpm it was less efficient in city driving. In addition to being less fuel efficient than a comparable V8-engined car, the Turbine Car was also substantially more expensive; Jay Leno estimates that the car would have cost around $16,000 if it was ever sold to the public, compared to about $5,000 for a piston-engined car of comparable performance. Revival Even after the period in which they were competitive in the quest for the world land speed record, there has been continued and renewed interest in piston-driven aero-engined cars. 
One of the earliest cars created during this revival era is the Napier-Bentley, which was built by Peter Morley and David Llewellyn in 1972 in the spirit of the aero-engined cars that raced at Brooklands. The Napier-Bentley consists of a 1929 Bentley chassis and a Napier Sea Lion aircraft engine, which produces and of torque. The car has been raced regularly, and was once involved in a crash that hospitalized Morley for a few weeks. In 1998, the Napier-Bentley was sold to Chris Williams. Williams has also designed and built the Packard-Bentley, which he envisioned as a tribute to the interwar aero-engined racing cars that competed at Brooklands. Built over a period of seven years, the car, which is nicknamed "Mavis", made its debut at the 2010 Cholmondeley Pageant of Power. The Packard-Bentley is made up of a Bentley 8 Litre chassis and a , V12 Packard engine taken from an American World War II torpedo boat. The engine gives the car at 2,400 rpm, while allowing it to achieve a top speed of approximately and a fuel efficiency of per minute. The Packard-Bentley is valued at around £350,000. Aero-engined cars also made an appearance on the British television program Top Gear on 4 March 2012, during the sixth episode of Season 18, when both the BMW-engined "Brutus" and Rolls-Royce-engined "Meteor" were featured. The Brutus was built in Germany shortly after World War II, when a 1908 American LaFrance car was fitted with a V12 BMW aircraft engine that dates to 1925. The car was created over several years at a workshop at the Auto & Technik Museum in Sinsheim, Germany, which still owns it. According to the Museum, the Brutus can produce at 1,500 rpm, while its fuel efficiency averages . After driving the car on Top Gear, presenter Jeremy Clarkson described the experience as akin to "doing a crossword while being eaten by a tiger". 
The Meteor that appeared during the same episode has the chassis from a 1930s Rolls-Royce Phantom and a World War II-vintage, Rolls-Royce Meteor engine. The engine produces , which allows the car to achieve a top speed of and a fuel efficiency of roughly . In 2013, the Meteor went on sale for a price "in excess of £500,000". See also Aircraft engine Vehicles powered by Napier Lion engines Blastolene Special (custom-built car powered by a Continental AV1790-5B tank engine) References * Cars by engine
In the context of complex networks, attack tolerance is the network's robustness: its ability to maintain overall connectivity and diameter as nodes are removed. Several graph metrics have been proposed to predict network robustness; among them, algebraic connectivity has been shown to be the best predictor. Attack types If an attack were to be mounted on a network, it would not target random nodes but those most significant to the network. Different ranking methods are used to determine a node's priority in the network. Average node degree This form of attack prioritizes the most connected nodes as the most important ones. It takes into account the network (represented by graph ) changing over time, by analyzing the network as a series of snapshots (indexed by ); we denote the snapshot at time by . The average of the degree of a node, labeled , within a given snapshot , throughout a time interval (a sequence of snapshots) , is given by: Node persistence This form of attack prioritizes nodes that occur most frequently over a period of time. The equation below calculates the frequency with which a node (i) occurs in a time interval . When the node is present during the snapshot, the indicator is equal to 1; if the node is not present, it is equal to 0. Temporal closeness This form of attack prioritizes nodes by the sum of temporal distances from one node to all other nodes over a period of time. The equation below calculates the temporal distance of a node (i) by averaging the sum of all the temporal distances over the interval [t1,tn]. Network model tolerances Not all networks are the same, so it is no surprise that attacks on different networks have different results. The common method for measuring change in the network is through the average size of all the isolated clusters, <s>, and the fraction of the nodes contained in the largest cluster, S.
When no nodes have been attacked, both S and <s> equal 1. Erdős–Rényi model In the ER model, the network generated is homogeneous, meaning each node has approximately the same number of links. This is considered to be an exponential network. When comparing the connectivity of the ER model under random failures versus directed attacks, it can be seen that the exponential network reacts the same way to a random failure as it does to a directed attack. This is due to the homogeneity of the network, which makes it irrelevant whether a random node is selected or one is specifically targeted. All the nodes are on average the same in degree, so attacking one should not cause any more damage than attacking another. As the number of attacks goes up and more nodes are removed, we observe that S decreases non-linearly and acts as if a threshold exists when a fraction of the nodes (f) has been removed (f ≈ 0.28). At this point, S goes to zero. The average size of the isolated clusters behaves in the opposite way, increasing exponentially to <s> = 2 as it approaches the threshold f ≈ 0.28, before decreasing back to 1 afterwards. This model was tested for a large range of nodes and shown to maintain the same pattern. Scale-free model In the scale-free model, the network is defined by its degree distribution following the power law, which means that each node has no set number of links, unlike the exponential network. This makes the scale-free model more vulnerable because some nodes are more important than others, and if these nodes were to be deliberately attacked the network would break down. However, this inhomogeneous network has its strengths when it comes to random failures. Due to the power law there are many more nodes in the system that have very few links, and these are the nodes most likely to be hit by a random failure (because there are more of them).
Severing these smaller nodes will not affect the network as a whole and therefore allows the structure of the network to stay approximately the same. When the scale-free model undergoes random failures, S slowly decreases with no threshold-like behavior and <s> remains approximately 1. This indicates that the network is being broken apart one by one and not by large clusters. However, when the scale-free model undergoes deliberate attack the system behaves similarly to an exponential system, except it breaks down much quicker. As the number of attacks increases, S decreases with a threshold close to f=0.05, and <s> increases to the same threshold and then decreases back to one. The speed at which this type of network breaks down shows the vulnerability of common networks that are used everyday, such as the Internet. References Network theory
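The degree-targeted attack described above can be sketched in a short simulation. The code below is illustrative only (the adjacency-dict representation and function names are assumptions, not from the literature): it removes nodes from highest to lowest degree and tracks S, the fraction of nodes in the largest surviving cluster, as the removed fraction f grows.

```python
from collections import deque

def largest_component_fraction(adj, removed, n_total):
    """Fraction of all nodes contained in the largest surviving cluster (S)."""
    seen = set()
    best = 0
    for start in adj:
        if start in removed or start in seen:
            continue
        # BFS over the surviving nodes to size this component
        queue = deque([start])
        seen.add(start)
        comp = 0
        while queue:
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v not in removed and v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, comp)
    return best / n_total

def degree_attack(adj):
    """Remove nodes from highest to lowest degree, yielding (f, S) pairs."""
    n = len(adj)
    order = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    removed = set()
    yield 0.0, largest_component_fraction(adj, removed, n)
    for i, u in enumerate(order, 1):
        removed.add(u)
        yield i / n, largest_component_fraction(adj, removed, n)
```

On a star graph, for example, removing the hub first immediately fragments the network, mirroring the vulnerability of scale-free networks to directed attacks.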
The Louvain method for community detection is a greedy optimization method intended to extract non-overlapping communities from large networks, created by Blondel et al. from the University of Louvain (the source of this method's name). Modularity optimization The inspiration for this method of community detection is the optimization of modularity as the algorithm progresses. Modularity is a scale value between −1 (non-modular clustering) and 1 (fully modular clustering) that measures the relative density of edges inside communities with respect to edges outside communities. Optimizing this value theoretically results in the best possible grouping of the nodes of a given network. But because going through all possible assignments of the nodes into groups is impractical, heuristic algorithms are used. In the Louvain method of community detection, first small communities are found by optimizing modularity locally on all nodes, then each small community is grouped into one node and the first step is repeated. The method is similar to the earlier method by Clauset, Newman and Moore that connects communities whose amalgamation produces the largest increase in modularity. The Louvain algorithm was shown to correctly identify the community structure when it exists, in particular in the stochastic block model. Algorithm Description Modularity The value to be optimized is modularity, a value that measures the density of links inside communities compared to links between communities.
For a weighted graph, modularity is defined as:

Q = \frac{1}{2m} \sum_{i,j} \left[ A_{ij} - \frac{k_i k_j}{2m} \right] \delta(c_i, c_j)

where:
A_{ij} represents the edge weight between nodes i and j; see Adjacency matrix;
k_i and k_j are the sums of the weights of the edges attached to nodes i and j, respectively;
m is the sum of all of the edge weights in the graph;
n is the total number of nodes in the graph;
c_i and c_j are the communities to which the nodes i and j belong; and
\delta is the Kronecker delta function: \delta(x, y) = 1 if x = y, and 0 otherwise.

Based on the above equation, the modularity Q_c of a community c can be calculated as:

Q_c = \frac{\Sigma_{in}}{2m} - \left( \frac{\Sigma_{tot}}{2m} \right)^2

where \Sigma_{in} is the sum of edge weights between nodes within the community (each edge is considered twice); and \Sigma_{tot} is the sum of all edge weights for nodes within the community (including edges which link to other communities).

As nodes in different communities do not contribute to the modularity Q, it can be written as:

Q = \sum_{c} \left[ \frac{\Sigma_{in}}{2m} - \left( \frac{\Sigma_{tot}}{2m} \right)^2 \right]

The Louvain Method Algorithm The Louvain method works by repeating two phases. In phase one, nodes are sorted into communities based on how the modularity of the graph changes when a node moves communities. In phase two, the graph is reinterpreted so that communities are seen as individual nodes. A detailed explanation is provided below. Phase 1 Each node in the network is assigned to its own community. The Louvain method begins by considering each node in a graph to be its own community. This can be seen in Figure 1, where each dot (representing nodes) is a unique color (representing which community the node belongs to). Nodes are grouped into communities For each node v, we consider how moving v from its current community into a neighboring community will affect the modularity of the graph partition. In the pseudo-code below, this happens in the for-loop. We select the community C' with the greatest change in modularity, and if the change is positive, we move v into C'; otherwise we leave it where it is. This continues until the modularity stops improving.
function moveNodes(Graph G, Partition P):
    do
        old_modularity <- current_modularity_of_partition
        for v in V(G), do
            # find the community that causes the largest increase in
            # modularity when v is moved into it
            C' <- argmax(delta_Q)  # delta_Q is the change in modularity
            if delta_Q > 0, then
                move v into C'
            end if
        end for
        update current_modularity_of_partition
    while current_modularity_of_partition > old_modularity
    return P
end function

This process is applied repeatedly and sequentially to all nodes until no modularity increase can occur. Once this local maximum of modularity is hit, the first phase has ended. Figure 2 shows how the graph in Figure 1 might look after one iteration of phase 1. Phase 2 Communities are reduced to a single node For each community in our graph's partition, the individual nodes making up that community are combined and the community itself becomes a node. The edges connecting distinct communities are used to weight the new edges connecting our aggregate nodes. This process is modeled in the pseudo-code, where the function aggregateGraph returns a new graph whose vertices are the partition of the old graph, and whose edges are calculated using the old graph. This function does not show the edges being weighted, but a simple modification would allow for that information to be tracked.

function aggregateGraph(Graph G, Partition P):
    V <- P
    E <- [(A,B) | (x,y) is in E(G), x is in A and A is in P, y is in B and B is in P]
    return Graph(V,E)
end function

Figure 3 shows what the graph from Figure 2 would look like after being aggregated. This graph is analogous to the graph in Figure 1 in the sense that each node is assigned to a single community. From here, the process can be repeated so that more nodes are moved into existing communities until an optimal level of modularity is reached. The pseudo-code below shows how the previous two functions work together to complete the process.
function louvain(Graph G, Partition P):
    do
        P <- moveNodes(G, P)
        # every community is a single node, despite running moveNodes
        done <- length(P) == length(V(G))
        if not done, then:
            G <- aggregateGraph(G, P)
            P <- singletonPartition(G)
        end if
    while not done
end function

function singletonPartition(Graph G):
    return [{v} | v is in V(G)]  # each node is placed in its own community
end function

Time Complexity Generally, the Louvain method is assumed to have a time complexity of . Vincent Blondel, co-author of the paper that originally published the Louvain method, seems to support this notion, but other sources claim the time complexity is "essentially linear in the number of links in the graph," meaning the time complexity would instead be , where is the number of edges in the graph. Unfortunately, no source has published an analysis of the Louvain method's time complexity so one is attempted here. In the pseudo-code above, the function louvain controls the execution of the algorithm. It is clear that inside of louvain, moveNodes will be repeated until it is no longer possible to combine nodes into communities. This depends on two factors: how much the modularity of the graph can improve and, in the worst case, if the modularity can improve with every iteration of moveNodes, how quickly aggregateGraph will reduce the graph down to a single node. If, in each iteration, moveNodes is only able to move one node into a community, then aggregateGraph will only be able to reduce the size of the graph by one. This would cause louvain to repeat on the order of n times. Since moveNodes iterates through all nodes in a graph, this would result in a time complexity of , where n is the number of nodes. It is unclear if this situation is possible, so the above result should be considered a loose bound. Blondel et al. state in their original publication that most of the run time is spent in the early iterations of the algorithm because "the number of communities decreases drastically after just a few passes."
This can be understood by considering a scenario where moveNodes is able to move each node so that every community has two nodes. In this case, aggregateGraph would return a graph half the size of the original. If this continued, then the Louvain method would have a runtime of , although it is unclear if this would be the worst case, best case, average case, or none of those. Additionally, there is no guarantee the size of the graph would be reduced by the same factor with each iteration, and so no single logarithm function can perfectly describe the time complexity. Previous uses Twitter social network (2.4 million nodes, 38 million links) by Josep Pujol, Vijay Erramilli, and Pablo Rodriguez: The authors explore the problem of partitioning online social networks onto different machines. Mobile phone network (4 million nodes, 100 million links) by Derek Greene, Donal Doyle, and Padraig Cunningham: Community-tracking strategies for identifying dynamic communities of different dynamic social networks. Detecting species in a network-based dynamical model. Disadvantages Louvain produces only non-overlapping communities, which means that each node can belong to at most one community. This is highly unrealistic in many real-world applications. For example, in social networks, most people belong to multiple communities: their family, their friends, their co-workers, old school buddies, etc. In biological networks, most genes or proteins belong to more than one pathway or complex. Furthermore, Louvain has been shown to sometimes produce arbitrarily badly connected communities, and has been effectively superseded (at least in the non-overlapping case) by the Leiden algorithm. These badly connected communities arise when a node that had been acting as a "bridge" between two groups of nodes in its community is moved to a new community, leaving the old one disconnected.
The remaining nodes in the old community may also be relocated, but if their connection to the community is strong enough despite the removal of the "bridge" node, they will instead remain in place. For an example of this, see the image to the right; note how the removal of the bridge node, node 0, caused the red community to be split into two disjoint subgroups. While this is the worst-case scenario, there are other, more subtle problems with the Louvain algorithm that can also lead to arbitrarily badly connected communities, such as the formation of communities using nodes that are only weakly connected. Another common issue with the Louvain algorithm is the resolution limit of modularity - that is, multiple small communities being grouped together into a larger community. This causes the smaller communities to be hidden; for an example of this, see the visual depiction of the resolution limit to the right. Note how, when the green community is absorbed into the blue community to increase the graph's modularity, the smaller group of nodes that it represented is lost. There is no longer a way to differentiate those nodes from the nodes that were already in the blue community. Conversely, the nodes that were already in the blue community no longer appear distinct from those that were in the green community; in other words, whatever difference caused them to initially be placed in separate communities has been obscured. Both the resolution limit of modularity and the arbitrarily badly connected community problem are further exacerbated by each iteration of the algorithm. Ultimately, the only thing the Louvain algorithm guarantees is that the resulting communities cannot be merged further; in other words, they're well-separated.
To avoid the problems that arise from arbitrarily badly connected communities and the resolution limit of modularity, it is recommended to use the Leiden algorithm instead, as its refinement phase and other various adjustments have corrected these issues. Comparison to other methods of non-overlapping community detection When comparing modularity optimization methods, the two measures of importance are the speed and the resulting modularity value. A higher speed is better as it shows a method is more efficient than others, and a higher modularity value is desirable as it points to having better-defined communities. The compared methods are the algorithms of Clauset, Newman, and Moore; Pons and Latapy; and Wakita and Tsurumi. An entry of -/- in the table refers to a method that took over 24 hours to run. The comparison table (not reproduced here) shows that the Louvain method outperforms many similar modularity optimization methods in both the modularity and the time categories. See also Leiden algorithm Modularity (networks) Community structure Network science K-means clustering References "The Louvain method for community detection in large networks" Vincent Blondel http://perso.uclouvain.be/vincent.blondel/research/louvain.html Network theory
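The modularity value the method optimizes can be computed directly from its per-community form, Q = Σ_c [Σ_in/2m − (Σ_tot/2m)²]. The sketch below is illustrative only (plain Python; the function name and input format are assumptions, not part of the original method's code):

```python
def modularity(edges, community):
    """Compute Q for a partition.

    edges: list of (u, v, weight) tuples, each undirected edge listed once.
    community: dict mapping node -> community id.
    """
    k = {}        # weighted degree of each node
    two_m = 0.0   # 2m = total degree = twice the sum of edge weights
    for u, v, w in edges:
        k[u] = k.get(u, 0.0) + w
        k[v] = k.get(v, 0.0) + w
        two_m += 2 * w

    # Sigma_in per community (each internal edge counted twice)
    sigma_in = {}
    for u, v, w in edges:
        if community[u] == community[v]:
            c = community[u]
            sigma_in[c] = sigma_in.get(c, 0.0) + 2 * w

    # Sigma_tot per community: sum of degrees of its members
    sigma_tot = {}
    for node, deg in k.items():
        c = community[node]
        sigma_tot[c] = sigma_tot.get(c, 0.0) + deg

    return sum(sigma_in.get(c, 0.0) / two_m - (sigma_tot[c] / two_m) ** 2
               for c in sigma_tot)
```

For two triangles joined by a single bridge edge, placing each triangle in its own community yields a higher Q than one community for the whole graph, which is exactly the kind of improvement the local moves in phase 1 seek out.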
Megazol (CL 64855) is a nitroimidazole-based drug that cures some protozoan infections. A study of nitroimidazoles found the drug extremely effective against T. cruzi and T. brucei, which cause Chagas disease and African sleeping sickness, respectively. The drug is considerably more effective than benznidazole therapy, the gold standard for Chagas disease. This is despite the fact that other nitroimidazoles have proved ineffective against these pathogens. References Antiprotozoal agents Nitroimidazole antibiotics Thiadiazoles
Automation engineering is the provision of automated solutions to physical activities and industries. Automation engineer Automation engineers are experts who have the knowledge and ability to design, create, develop and manage machines and systems, for example, factory automation, process automation and warehouse automation. Automation technicians are also involved. Scope Automation engineering is the integration of standard engineering fields. It applies automatic control to various systems and machines, reducing human effort and time while increasing accuracy. Automation engineers design and service electromechanical devices and systems for high-speed robotics and programmable logic controllers (PLCs). Work and career after graduation Graduates can work for both government and private sector entities such as industrial production, companies that create and use automation systems, for example the paper industry, automotive industry, metallurgical industry, food and agricultural industry, water treatment, and oil & gas sectors such as refineries, rolling mills and power plants. Job description Automation engineers can design, program, simulate and test automated machinery and processes, and are usually employed in industries such as the energy sector, car manufacturing facilities, food processing plants, and robotics. Automation engineers are responsible for creating detailed design specifications and other documents; developing automation based on specific requirements for the process involved; conforming to international standards like IEC 61508, local standards, and other process-specific guidelines and specifications; and simulating, testing and commissioning electronic equipment for automation. See also Automation Artificial intelligence Control engineering Mechatronics engineering References Engineering disciplines Knowledge economy Automation software
The Royal Society of Chemistry grants a number of medals and awards. All those named "prize" (except the Beilby Medal and Prize) are awarded with a £5,000 bursary. The Chemistry World Entrepreneur of the Year award has one of £4,000. As of 2014, these are:

Applied Catalysis Award
Applied Inorganic Chemistry Award
Apprentice of the Year Award
Bader Award
Geoffrey Barker Medal
Barrer Award
Sir Derek Barton Gold Medal
Beilby Medal and Prize
Ronald Belcher Award
Anne Bennett Memorial Award for Distinguished Service
Becquerel Medal
Bill Newton Award
Bioinorganic Chemistry Award
Bioorganic Chemistry Award
Materials for Industry – Derek Birchall Award
Joseph Black Award
Bourke Award
Bourke–Liversidge Award
Robert Boyle Prize for Analytical Science
S F Boys–A Rahman Award
Catalysis in Organic Chemistry Award
Centenary Prize
Joseph Chatt Award
Chemical Dynamics Award
Chemistry of Transition Metals Award
Chemistry World Entrepreneur of the Year
Corday–Morgan Prize
Rita and John Cornforth Award
Creativity in Industry Prize
Dalton Young Researchers Award
Peter Day Award
De Gennes Prize
Education Award
Environment Prize
Environment, Sustainability and Energy Division Early Career Award
Faraday Lectureship Prize
Faraday Medal (electrochemistry)
Frankland Award
Sir Edward Frankland Fellowship
Gibson–Fawcett Award
John B. Goodenough Award
Green Chemistry Award
Harrison–Meldola Memorial Prizes
Haworth Memorial Lectureship
Norman Heatley Award
Hickinbottom Award
Higher Education Teaching Award
Homogeneous Catalysis Award
Industrial Analytical Science Award
Inorganic Mechanisms Awards
Inspiration and Industry
Interdisciplinary Prizes
John Jeyes Award
Khorana Prize
Jeremy Knowles Award
Lord Lewis Prize
Liversidge Award
Longstaff Prize
Main Group Chemistry Award
Marlow Award
Merck Award
Ludwig Mond Award
Natural Product Chemistry Award
Nyholm Prize for Education
Nyholm Prize for Inorganic Chemistry
Organic Industrial Chemistry Award
Organic Stereochemistry Award
Organometallic Chemistry Award
Pedler Award
Perkin Prize for Organic Chemistry
Physical Organic Chemistry Award
Theophilus Redwood Award
Radiochemistry Group Young Researcher's Award
Charles Rees Award
Robert Robinson Award
Schools Education Award
Soft Matter and Biophysical Chemistry Award
George and Christine Sosnovsky Award in Cancer Therapy
Sir George Stokes Award
Supramolecular Chemistry Award
Surfaces and Interfaces Award
Sustainable Energy Award
Sustainable Water Award
Synthetic Organic Chemistry Award
Teamwork in Innovation
Technician of the Year Award (Higher Education and Research)
Technician of the Year Award (Industry)
Tilden Prizes
Toxicology Award
Rising Star in Industry Award

Discontinued awards (incomplete list)
Hugo Müller Lectureship, discontinued 2008
Solid State Chemistry Award, discontinued 2008

See also Honorary Fellows of the Royal Society of Chemistry References History of the chemical industry
In distributed computing, a conflict-free replicated data type (CRDT) is a data structure that is replicated across multiple computers in a network, with the following features: The application can update any replica independently, concurrently and without coordinating with other replicas. An algorithm (itself part of the data type) automatically resolves any inconsistencies that might occur. Although replicas may have different state at any particular point in time, they are guaranteed to eventually converge. The CRDT concept was formally defined in 2011 by Marc Shapiro, Nuno Preguiça, Carlos Baquero and Marek Zawirski. Development was initially motivated by collaborative text editing and mobile computing. CRDTs have also been used in online chat systems, online gambling, and in the SoundCloud audio distribution platform. The NoSQL distributed databases Redis, Riak and Cosmos DB have CRDT data types. Background Concurrent updates to multiple replicas of the same data, without coordination between the computers hosting the replicas, can result in inconsistencies between the replicas, which in the general case may not be resolvable. Restoring consistency and data integrity when there are conflicts between updates may require some or all of the updates to be entirely or partially dropped. Accordingly, much of distributed computing focuses on the problem of how to prevent concurrent updates to replicated data. But another possible approach is optimistic replication, where all concurrent updates are allowed to go through, with inconsistencies possibly created, and the results are merged or "resolved" later. In this approach, consistency between the replicas is eventually re-established via "merges" of differing replicas. 
While optimistic replication might not work in the general case, there is a significant and practically useful class of data structures, CRDTs, where it does work — where it is always possible to merge or resolve concurrent updates on different replicas of the data structure without conflicts. This makes CRDTs ideal for optimistic replication. As an example, a one-way Boolean event flag is a trivial CRDT: one bit, with a value of true or false. True means some particular event has occurred at least once. False means the event has not occurred. Once set to true, the flag cannot be set back to false (an event having occurred cannot un-occur). The resolution method is "true wins": when merging a replica where the flag is true (that replica has observed the event), and another one where the flag is false (that replica hasn't observed the event), the resolved result is true — the event has been observed. Types of CRDTs There are two approaches to CRDTs, both of which can provide strong eventual consistency: state-based CRDTs and operation-based CRDTs. State-based CRDTs State-based CRDTs (also called convergent replicated data types, or CvRDTs) are defined by two types, a type for local states and a type for actions on the state, together with three functions: A function to produce an initial state, a merge function of states, and a function to apply an action to update a state. State-based CRDTs simply send their full local state to other replicas on every update, where the received new state is then merged into the local state. To ensure eventual convergence the functions should fulfill the following properties: The merge function should compute the join for any pair of replica states, and should form a semilattice with the initial state as the neutral element. In particular this means, that the merge function must be commutative, associative, and idempotent. 
The intuition behind commutativity, associativity and idempotence is that these properties are used to make the CRDT invariant under package re-ordering and duplication. Furthermore, the update function must be monotone with regard to the partial order defined by said semilattice. Delta state CRDTs (or simply Delta CRDTs) are optimized state-based CRDTs where only recently applied changes to a state are disseminated instead of the entire state. Operation-based CRDTs Operation-based CRDTs (also called commutative replicated data types, or CmRDTs) are defined without a merge function. Instead of transmitting states, the update actions are transmitted directly to replicas and applied. For example, an operation-based CRDT of a single integer might broadcast the operations (+10) or (−20). The application of operations should still be commutative and associative. However, instead of requiring that application of operations is idempotent, stronger assumptions on the communications infrastructure are expected -- all operations must be delivered to the other replicas without duplication. Pure operation-based CRDTs are a variant of operation-based CRDTs that reduces the metadata size. Comparison The two alternatives are theoretically equivalent, as each can emulate the other. However, there are practical differences. State-based CRDTs are often simpler to design and to implement; their only requirement from the communication substrate is some kind of gossip protocol. Their drawback is that the entire state of every CRDT must be transmitted eventually to every other replica, which may be costly. In contrast, operation-based CRDTs transmit only the update operations, which are typically small. However, operation-based CRDTs require guarantees from the communication middleware; that the operations are not dropped or duplicated when transmitted to the other replicas, and that they are delivered in causal order. 
While operation-based CRDTs place more requirements on the protocol for transmitting operations between replicas, they use less bandwidth than state-based CRDTs when the number of transactions is small in comparison to the size of internal state. However, since the state-based CRDT merge function is associative, merging with the state of some replica yields all previous updates to that replica. Gossip protocols work well for propagating state-based CRDT state to other replicas while reducing network use and handling topology changes. Some lower bounds on the storage complexity of state-based CRDTs are known. Known CRDTs G-Counter (Grow-only Counter) This state-based CRDT implements a counter for a cluster of n nodes. Each node in the cluster is assigned an ID from 0 to n - 1, which is retrieved with a call to myId(). Thus each node is assigned its own slot in the array P, which it increments locally. Updates are propagated in the background, and merged by taking the max() of every element in P. The compare function is included to illustrate a partial order on the states. The merge function is commutative, associative, and idempotent. The update function monotonically increases the internal state according to the compare function. This is thus a correctly defined state-based CRDT and will provide strong eventual consistency. The operation-based CRDT equivalent broadcasts increment operations as they are received. PN-Counter (Positive-Negative Counter) A common strategy in CRDT development is to combine multiple CRDTs to make a more complex CRDT. In this case, two G-Counters are combined to create a data type supporting both increment and decrement operations. The "P" G-Counter counts increments, and the "N" G-Counter counts decrements. The value of the PN-Counter is the value of the P counter minus the value of the N counter. Merge is handled by letting the merged P counter be the merge of the two P G-Counters, and similarly for N counters.
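The G-Counter and PN-Counter described above can be sketched in Python as follows (an illustrative implementation; the class names are ours):

```python
class GCounter:
    """Grow-only counter for a cluster of n nodes (state-based CRDT)."""

    def __init__(self, n, my_id):
        self.P = [0] * n     # one slot per node, all initially zero
        self.my_id = my_id   # this node's index, i.e. the result of myId()

    def increment(self):
        self.P[self.my_id] += 1  # each node increments only its own slot

    def value(self):
        return sum(self.P)

    def compare(self, other):
        """Partial order on states: element-wise <=."""
        return all(x <= y for x, y in zip(self.P, other.P))

    def merge(self, other):
        """Element-wise max: commutative, associative, idempotent."""
        m = GCounter(len(self.P), self.my_id)
        m.P = [max(x, y) for x, y in zip(self.P, other.P)]
        return m


class PNCounter:
    """Two G-Counters combined: P counts increments, N counts decrements."""

    def __init__(self, n, my_id):
        self.P = GCounter(n, my_id)
        self.N = GCounter(n, my_id)

    def increment(self):
        self.P.increment()

    def decrement(self):
        self.N.increment()

    def value(self):
        return self.P.value() - self.N.value()

    def merge(self, other):
        m = PNCounter(len(self.P.P), self.P.my_id)
        m.P = self.P.merge(other.P)
        m.N = self.N.merge(other.N)
        return m


a, b = PNCounter(2, 0), PNCounter(2, 1)
a.increment()
a.increment()
b.decrement()
assert a.merge(b).value() == b.merge(a).value() == 1
```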
Note that the CRDT's internal state must increase monotonically, even though its external state as exposed through query can return to previous values. G-Set (Grow-only Set) The G-Set (grow-only set) is a set which only allows adds. An element, once added, cannot be removed. The merger of two G-Sets is their union. 2P-Set (Two-Phase Set) Two G-Sets (grow-only sets) are combined to create the 2P-set. With the addition of a remove set (called the "tombstone" set), elements can be added and also removed. Once removed, an element cannot be re-added; that is, once an element e is in the tombstone set, query will never again return True for that element. The 2P-set uses "remove-wins" semantics, so remove(e) takes precedence over add(e). LWW-Element-Set (Last-Write-Wins-Element-Set) LWW-Element-Set is similar to 2P-Set in that it consists of an "add set" and a "remove set", with a timestamp for each element. Elements are added to an LWW-Element-Set by inserting the element into the add set, with a timestamp. Elements are removed from the LWW-Element-Set by being added to the remove set, again with a timestamp. An element is a member of the LWW-Element-Set if it is in the add set, and either not in the remove set, or in the remove set but with an earlier timestamp than the latest timestamp in the add set. Merging two replicas of the LWW-Element-Set consists of taking the union of the add sets and the union of the remove sets. When timestamps are equal, the "bias" of the LWW-Element-Set comes into play. An LWW-Element-Set can be biased towards adds or removals. The advantage of LWW-Element-Set over 2P-Set is that, unlike 2P-Set, LWW-Element-Set allows an element to be reinserted after having been removed. OR-Set (Observed-Remove Set) OR-Set resembles LWW-Element-Set, but uses unique tags instead of timestamps. For each element in the set, a list of add-tags and a list of remove-tags are maintained.
An element is inserted into the OR-Set by having a new unique tag generated and added to the add-tag list for the element. Elements are removed from the OR-Set by having all the tags in the element's add-tag list added to the element's remove-tag (tombstone) list. To merge two OR-Sets, for each element, let its add-tag list be the union of the two add-tag lists, and likewise for the two remove-tag lists. An element is a member of the set if and only if the add-tag list less the remove-tag list is nonempty. An optimization that eliminates the need for maintaining a tombstone set is possible; this avoids the potentially unbounded growth of the tombstone set. The optimization is achieved by maintaining a vector of timestamps for each replica. Sequence CRDTs A sequence, list, or ordered set CRDT can be used to build a collaborative real-time editor, as an alternative to operational transformation (OT). Some known Sequence CRDTs are Treedoc, RGA, Woot, Logoot, and LSEQ. CRATE is a decentralized real-time editor built on top of LSEQSplit (an extension of LSEQ) and runnable on a network of browsers using WebRTC. LogootSplit was proposed as an extension of Logoot in order to reduce the metadata for sequence CRDTs. MUTE is an online web-based peer-to-peer real-time collaborative editor relying on the LogootSplit algorithm. Industrial sequence CRDTs, including open-source ones, are known to outperform academic implementations due to optimizations and a more realistic testing methodology. The main popular example is Yjs CRDT, a pioneer in using a plain list instead of a tree (à la Kleppmann's Automerge). Industry use Fluid Framework is an open-source collaborative platform built by Microsoft that provides both server reference implementations and client-side SDKs for creating modern real-time web applications using CRDTs. Nimbus Note is a collaborative note-taking application that uses the Yjs CRDT for collaborative editing.
Redis is a distributed, highly available, and scalable in-memory database with a "CRDT-enabled database" feature. SoundCloud open-sourced Roshi, an LWW-element-set CRDT for the SoundCloud stream implemented on top of Redis. Riak is a distributed NoSQL key-value data store based on CRDTs. League of Legends uses the Riak CRDT implementation for its in-game chat system, which handles 7.5 million concurrent users and 11,000 messages per second. Bet365 stores hundreds of megabytes of data in the Riak implementation of OR-Set. TomTom employs CRDTs to synchronize navigation data between the devices of a user. Phoenix, a web framework written in Elixir, uses CRDTs to support real-time multi-node information sharing in version 1.2. Facebook implements CRDTs in their Apollo low-latency "consistency at scale" database. Facebook uses CRDTs in their FlightTracker system for managing the Facebook graph internally. Teletype for Atom employs CRDTs to enable developers to share their workspace with team members and collaborate on code in real time. Apple implements CRDTs in the Notes app for syncing offline edits between multiple devices. Novell, Inc. introduced a state-based CRDT with "loosely consistent" directory replication (NetWare Directory Services), included in NetWare 4.0 in 1995. The successor product, eDirectory, delivered improvements to the replication process. See also Data synchronization Collaborative real-time editors Consistency models Optimistic replication Operational transformation Self-stabilizing algorithms References External links A collection of resources and papers on CRDTs "Strong Eventual Consistency and Conflict-free Replicated Data Types" (A talk on CRDTs) by Marc Shapiro Readings in conflict-free replicated data types by Christopher Meiklejohn CAP theorem and CRDTs: CAP 12 years later. How the rules have changed by Eric Brewer Distributed data structures Distributed algorithms Fault-tolerant computer systems
Conflict-free replicated data type
[ "Technology", "Engineering" ]
2,993
[ "Fault-tolerant computer systems", "Reliability engineering", "Computer systems" ]
44,456,093
https://en.wikipedia.org/wiki/Cohomological%20descent
In algebraic geometry, a cohomological descent is, roughly, a "derived" version of a fully faithful descent in the classical descent theory. This point is made precise as follows: in an appropriate setting, given a map a from a simplicial space X to a space S, the following are equivalent: the pullback functor a^* is fully faithful; the natural transformation id → Ra_*a^* is an isomorphism. The map a is then said to be a morphism of cohomological descent. The treatment in SGA uses a lot of topos theory. Conrad's notes give a more down-to-earth exposition. See also hypercovering, of which a cohomological descent is a generalization References SGA4 Vbis P. Deligne, Théorie de Hodge, III, Publ. Math. IHÉS 44 (1975), pp. 6–77. External links http://ncatlab.org/nlab/show/cohomological+descent Algebraic geometry
Cohomological descent
[ "Mathematics" ]
202
[ "Topology stubs", "Fields of abstract algebra", "Topology", "Algebraic geometry" ]
44,457,082
https://en.wikipedia.org/wiki/Robustness%20of%20complex%20networks
Robustness, the ability to withstand failures and perturbations, is a critical attribute of many complex systems including complex networks. The study of robustness in complex networks is important for many fields. In ecology, robustness is an important attribute of ecosystems, and can give insight into the reaction to disturbances such as the extinction of species. For biologists, network robustness can help the study of diseases and mutations, and how to recover from some mutations. In economics, network robustness principles can help understanding of the stability and risks of banking systems. And in engineering, network robustness can help to evaluate the resilience of infrastructure networks such as the Internet or power grids. Percolation theory The focus of robustness in complex networks is the response of the network to the removal of nodes or links. The mathematical model of such a process can be thought of as an inverse percolation process. Percolation theory models the process of randomly placing pebbles on an n-dimensional lattice with probability p, and predicts the sudden formation of a single large cluster at a critical probability p_c. In percolation theory this cluster is named the percolating cluster. This phenomenon is quantified in percolation theory by a number of quantities, for example the average cluster size ⟨s⟩. This quantity represents the average size of all finite clusters and is given by ⟨s⟩ ∼ |p − p_c|^(−γ_p). We can see the average cluster size suddenly diverges around the critical probability, indicating the formation of a single large cluster. It is also important to note that the exponent γ_p is universal for all lattices, while p_c is not. This is important as it indicates a universal phase transition behavior, at a point dependent on the topology. The problem of robustness in complex networks can be seen as starting with the percolating cluster, and removing a critical fraction of the pebbles for the cluster to break down.
Analogous to the formation of the percolation cluster in percolation theory, the breaking down of a complex network happens abruptly during a phase transition at some critical fraction of nodes removed. Critical threshold for random failures The mathematical derivation for the threshold at which a complex network will lose its giant component is based on the Molloy–Reed criterion. The Molloy–Reed criterion is derived from the basic principle that in order for a giant component to exist, on average each node in the network must have at least two links. This is analogous to each person holding two others' hands in order to form a chain. Using this criterion and an involved mathematical proof, one can derive a critical threshold for the fraction of nodes needed to be removed for the breakdown of the giant component of a complex network: f_c = 1 − 1/(κ − 1), where κ = ⟨k²⟩/⟨k⟩ is the ratio of the second to the first moment of the degree distribution. An important property of this finding is that the critical threshold is only dependent on the first and second moment of the degree distribution and is valid for an arbitrary degree distribution. Random network Using ⟨k²⟩ = ⟨k⟩(⟨k⟩ + 1) for an Erdős–Rényi (ER) random graph, one can re-express the critical point for a random network as f_c = 1 − 1/⟨k⟩. As a random network gets denser, the critical threshold increases, meaning a higher fraction of the nodes must be removed to disconnect the giant component. Scale-free network By re-expressing the critical threshold as a function of the gamma exponent for a scale-free network, we can draw a couple of important conclusions regarding scale-free network robustness. For γ > 3, the critical threshold only depends on γ and the minimum degree, and in this regime the network acts like a random network breaking when a finite fraction of its nodes are removed. For 2 < γ < 3, κ diverges in the limit as N tends toward infinity. In this case, for large scale-free networks, the critical threshold approaches 1.
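As a numerical check, the Molloy–Reed result f_c = 1 − 1/(κ − 1), with κ = ⟨k²⟩/⟨k⟩, can be evaluated for the cases above. This is a sketch; the function name is ours:

```python
def critical_threshold(k_avg, k2_avg):
    """Molloy-Reed critical fraction of randomly removed nodes at which
    the giant component breaks down: f_c = 1 - 1/(kappa - 1), where
    kappa = <k^2>/<k> depends only on the first two moments of the
    degree distribution."""
    kappa = k2_avg / k_avg
    return 1.0 - 1.0 / (kappa - 1.0)


# Erdos-Renyi random network: <k^2> = <k>(<k> + 1), hence f_c = 1 - 1/<k>.
k = 4.0
assert abs(critical_threshold(k, k * (k + 1)) - (1 - 1 / k)) < 1e-12

# Scale-free regime 2 < gamma < 3: <k^2> grows without bound as the
# network grows, so kappa diverges and f_c approaches 1.
for k2 in (1e2, 1e4, 1e6):
    print(f"<k^2> = {k2:g}: f_c = {critical_threshold(k, k2):.4f}")
```

Raising ⟨k²⟩ at fixed ⟨k⟩ pushes f_c toward 1, which is the robustness of large scale-free networks against random failures discussed in the text.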
This essentially means almost all nodes must be removed in order to destroy the giant component, and large scale-free networks are very robust with regard to random failures. One can make intuitive sense of this conclusion by thinking about the heterogeneity of scale-free networks and of the hubs in particular. Because there are relatively few hubs, they are less likely to be removed through random failures while small low-degree nodes are more likely to be removed. Because the low-degree nodes are of little importance in connecting the giant component, their removal has little impact. Targeted attacks on scale-free networks Although scale-free networks are resilient to random failures, we might imagine them being quite vulnerable to targeted hub removal. In this case we consider the robustness of scale-free networks in response to targeted attacks, performed with thorough prior knowledge of the network topology. By considering the changes induced by the removal of a hub, specifically the change in the maximum degree and the degrees of the connected nodes, we can derive another formula for the critical threshold considering targeted attacks on a scale-free network. This equation cannot be solved analytically, but can be graphed numerically. To summarize the important points, when gamma is large, the network acts as a random network, and attack robustness becomes similar to random failure robustness of a random network. However, when gamma is smaller, the critical threshold for attacks on scale-free networks becomes relatively small, indicating a weakness to targeted attacks. For more detailed information on the attack tolerance of complex networks please see the attack tolerance page. Cascading failures An important aspect of failures in many networks is that a single failure in one node might induce failures in neighboring nodes.
When a small number of failures induces more failures, resulting in a large number of failures relative to the network size, a cascading failure has occurred. There are many models for cascading failures. These models differ in many details, and model different physical propagation phenomena from power failures to information flow over Twitter, but have some shared principles. Each model focuses on some sort of propagation or cascade; there is some threshold determining when a node will fail or activate and contribute towards propagation; and there is some mechanism defined by which propagation will be directed when nodes fail or activate. All of these models predict some critical state, in which the distribution of the size of potential cascades matches a power law, and the exponent is uniquely determined by the degree exponent of the underlying network. Because of the differences in the models and the consensus of this result, we are led to believe the underlying phenomenon is universal and model-independent. For more detailed information on modeling cascading failures, see the global cascades model page. References Network theory Reliability analysis
Robustness of complex networks
[ "Mathematics", "Engineering" ]
1,297
[ "Reliability analysis", "Reliability engineering", "Graph theory", "Network theory", "Mathematical relations" ]
44,457,499
https://en.wikipedia.org/wiki/Wolfgang%20Kautek
Wolfgang Kautek is an Austrian physical chemist and the head of the Physical Chemistry Department at the University of Vienna. He is the President of the Erwin Schrödinger Society for Nanosciences (ESG) and the Chairman of the Research Group "Physical Chemistry" at the Austrian Chemical Society (GÖCh). References Physical chemists Austrian chemists Academic staff of the University of Vienna Living people Austrian physical chemists Laser researchers 1953 births
Wolfgang Kautek
[ "Chemistry" ]
91
[ "Physical chemists" ]
44,457,563
https://en.wikipedia.org/wiki/O-GlcNAc
O-GlcNAc (short for O-linked GlcNAc or O-linked β-N-acetylglucosamine) is a reversible enzymatic post-translational modification that is found on serine and threonine residues of nucleocytoplasmic proteins. The modification is characterized by a β-glycosidic bond between the hydroxyl group of serine or threonine side chains and N-acetylglucosamine (GlcNAc). O-GlcNAc differs from other forms of protein glycosylation: (i) O-GlcNAc is not elongated or modified to form more complex glycan structures, (ii) O-GlcNAc is almost exclusively found on nuclear and cytoplasmic proteins rather than membrane proteins and secretory proteins, and (iii) O-GlcNAc is a highly dynamic modification that turns over more rapidly than the proteins which it modifies. O-GlcNAc is conserved across metazoans. Due to the dynamic nature of O-GlcNAc and its presence on serine and threonine residues, O-GlcNAcylation is similar to protein phosphorylation in some respects. While there are roughly 500 kinases and 150 phosphatases that regulate protein phosphorylation in humans, there are only 2 enzymes that regulate the cycling of O-GlcNAc: O-GlcNAc transferase (OGT) and O-GlcNAcase (OGA) catalyze the addition and removal of O-GlcNAc, respectively. OGT utilizes UDP-GlcNAc as the donor sugar for sugar transfer. First reported in 1984, this post-translational modification has since been identified on over 9,000 proteins in H. sapiens. Numerous functional roles for O-GlcNAcylation have been reported including crosstalking with serine/threonine phosphorylation, regulating protein-protein interactions, altering protein structure or enzyme activity, changing protein subcellular localization, and modulating protein stability and degradation. Numerous components of the cell's transcription machinery have been identified as being modified by O-GlcNAc, and many studies have reported links between O-GlcNAc, transcription, and epigenetics. 
Many other cellular processes are influenced by O-GlcNAc such as apoptosis, the cell cycle, and stress responses. As UDP-GlcNAc is the final product of the hexosamine biosynthetic pathway, which integrates amino acid, carbohydrate, fatty acid, and nucleotide metabolism, it has been suggested that O-GlcNAc acts as a "nutrient sensor" and responds to the cell's metabolic status. Dysregulation of O-GlcNAc has been implicated in many pathologies including Alzheimer's disease, cancer, diabetes, and neurodegenerative disorders. Discovery In 1984, the Hart lab was probing for terminal GlcNAc residues on the surfaces of thymocytes and lymphocytes. Bovine milk β-1,4-galactosyltransferase, which reacts with terminal GlcNAc residues, was used to perform radiolabeling with UDP-[3H]galactose. β-elimination of serine and threonine residues demonstrated that most of the [3H]galactose was attached to proteins O-glycosidically; chromatography revealed that the major β-elimination product was Galβ1-4GlcNAcitol. Insensitivity to peptide N-glycosidase treatment provided additional evidence for O-linked GlcNAc. Permeabilizing cells with detergent prior to radiolabeling greatly increased the amount of [3H]galactose incorporated into Galβ1-4GlcNAcitol, leading the authors to conclude that most of the O-linked GlcNAc monosaccharide residues were intracellular. Mechanism O-GlcNAc is generally a dynamic modification that can be cycled on and off various proteins. Some residues are thought to be constitutively modified by O-GlcNAc. The O-GlcNAc modification is installed by OGT in a sequential bi-bi mechanism where the donor sugar, UDP-GlcNAc, binds to OGT first followed by the substrate protein. The O-GlcNAc modification is removed by OGA in a hydrolysis mechanism involving anchimeric assistance (substrate-assisted catalysis) to yield the unmodified protein and GlcNAc. 
While crystal structures have been reported for both OGT and OGA, the exact mechanisms by which OGT and OGA recognize substrates have not been completely elucidated. Unlike N-linked glycosylation, for which glycosylation occurs in a specific consensus sequence (Asn-X-Ser/Thr, where X is any amino acid except Pro), no definitive consensus sequence has been identified for O-GlcNAc. Consequently, predicting sites of O-GlcNAc modification is challenging, and identifying modification sites generally requires mass spectrometry methods. For OGT, studies have shown that substrate recognition is regulated by a number of factors including aspartate and asparagine ladder motifs in the lumen of the superhelical TPR domain, active site residues, and adaptor proteins. As crystal structures have shown that OGT requires its substrate to be in an extended conformation, it has been proposed that OGT has a preference for flexible substrates. In in vitro kinetic experiments measuring OGT and OGA activity on a panel of protein substrates, kinetic parameters for OGT were shown to be variable between various proteins while kinetic parameters for OGA were relatively constant between various proteins. This result suggested that OGT is the "senior partner" in regulating O-GlcNAc and OGA primarily recognizes substrates via the presence of O-GlcNAc rather than the identity of the modified protein. Detection and characterization Several methods exist to detect the presence of O-GlcNAc and characterize the specific residues modified. Lectins Wheat germ agglutinin, a plant lectin, is able to recognize terminal GlcNAc residues and is thus often used for detection of O-GlcNAc. This lectin has been applied in lectin affinity chromatography for the enrichment and detection of O-GlcNAc. Antibodies Pan-O-GlcNAc antibodies that recognize the O-GlcNAc modification largely irrespective of the modified protein's identity are commonly used. 
These include RL2, an IgG antibody raised against O-GlcNAcylated nuclear pore complex proteins, and CTD110.6, an IgM antibody raised against an immunogenic peptide with a single serine O-GlcNAc modification. Other O-GlcNAc-specific antibodies have been reported and demonstrated to have some dependence on the identity of the modified protein. Metabolic labeling Many metabolic chemical reporters have been developed to identify O-GlcNAc. Metabolic chemical reporters are generally sugar analogues that bear an additional chemical moiety allowing for additional reactivity. For example, peracetylated GlcNAc (Ac4GlcNAz) is a cell-permeable azido sugar that is de-esterified intracellularly by esterases to GlcNAz and converted to UDP-GlcNAz in the hexosamine salvage pathway. UDP-GlcNAz can be utilized as a sugar donor by OGT to yield the O-GlcNAz modification. The presence of the azido sugar can then be visualized via alkyne-containing bioorthogonal chemical probes in an azide-alkyne cycloaddition reaction. These probes can incorporate easily identifiable tags such as the FLAG peptide, biotin, and dye molecules. Mass tags based on polyethylene glycol (PEG) have also been used to measure O-GlcNAc stoichiometry. Conjugation of 5 kDa PEG molecules leads to a mass shift for modified proteins - more heavily O-GlcNAcylated proteins will have multiple PEG molecules and thus migrate more slowly in gel electrophoresis. Other metabolic chemical reporters bearing azides or alkynes (generally at the 2 or 6 positions) have been reported. Instead of GlcNAc analogues, GalNAc analogues may be used as well as UDP-GalNAc is in equilibrium with UDP-GlcNAc in cells due to the action of UDP-galactose-4'-epimerase (GALE). Ac4GalNAz shows enhanced labeling of O-GlcNAc versus Ac4GlcNAz, possibly due to a bottleneck in UDP-GlcNAc pyrophosphorylase processing of GlcNAz-1-P to UDP-GlcNAz. 
Ac3GlcN-β-Ala-NBD-α-1-P(Ac-SATE)2, a metabolic chemical reporter that is processed intracellularly to a fluorophore-labeled UDP-GlcNAc analogue, has been shown to achieve one-step fluorescent labeling of O-GlcNAc in live cells. Metabolic labeling may also be used to identify binding partners of O-GlcNAcylated proteins. The N-acetyl group may be elongated to incorporate a diazirine moiety. Treatment of cells with peracetylated, phosphate-protected Ac3GlcNDAz-1-P(Ac-SATE)2 leads to modification of proteins with O-GlcNDAz. UV irradiation then induces photocrosslinking between proteins bearing the O-GlcNDAz modification and interacting proteins. Some issues have been identified with various metabolic chemical reporters, e.g., their use may inhibit the hexosamine biosynthetic pathway, they may not be recognized by OGA and therefore are not able to capture O-GlcNAc cycling, or they may be incorporated into glycosylation modifications besides O-GlcNAc as seen in secreted proteins. Metabolic chemical reporters with chemical handles at the N-acetyl position may also label acetylated proteins as the acetyl group may be hydrolyzed into acetate analogues that can be utilized for protein acetylation. Additionally, per-O-acetylated monosaccharides have been identified to react with cysteines leading to artificial S-glycosylation via an elimination-addition mechanism. Next-generation metabolic chemical reporters have been developed to overcome this off-target reactivity. Chemoenzymatic labeling Chemoenzymatic labeling provides an alternative strategy to incorporate handles for click chemistry. The Click-IT O-GlcNAc Enzymatic Labeling System, developed by the Hsieh-Wilson group and subsequently commercialized by Invitrogen, utilizes a mutant GalT Y289L enzyme that is able to transfer azidogalactose (GalNAz) onto O-GlcNAc.
The presence of GalNAz (and therefore also O-GlcNAc) can be detected with various alkyne-containing probes with identifiable tags such as biotin, dye molecules, and PEG. Förster resonance energy transfer biosensor An engineered protein biosensor has been developed that can detect changes in O-GlcNAc levels using Förster resonance energy transfer. This sensor consists of four components linked together in the following order: cyan fluorescent protein (CFP), an O-GlcNAc binding domain (based on GafD, a lectin sensitive for terminal β-O-GlcNAc), a CKII peptide that is a known OGT substrate, and yellow fluorescent protein (YFP). Upon O-GlcNAcylation of the CKII peptide, the GafD domain binds the O-GlcNAc moiety, bringing the CFP and YFP domains into close proximity and generating a FRET signal. Generation of this signal is reversible and can be used to monitor O-GlcNAc dynamics in response to various treatments. This sensor may be genetically encoded and used in cells. Addition of a localization sequence allows for targeting of this O-GlcNAc sensor to the nucleus, cytoplasm, or plasma membrane. Mass spectrometry Biochemical approaches such as Western blotting may provide supporting evidence that a protein is modified by O-GlcNAc; mass spectrometry (MS) is able to provide definitive evidence as to the presence of O-GlcNAc. Glycoproteomic studies applying MS have contributed to the identification of proteins modified by O-GlcNAc. As O-GlcNAc is substoichiometric and ion suppression occurs in the presence of unmodified peptides, an enrichment step is usually performed prior to mass spectrometry analysis. This may be accomplished using lectins, antibodies, or chemical tagging. The O-GlcNAc modification is labile under collision-induced fragmentation methods such as collision-induced dissociation (CID) and higher-energy collisional dissociation (HCD), so these methods in isolation are not readily applicable for O-GlcNAc site mapping. 
HCD generates fragment ions characteristic of N-acetylhexosamines that can be used to determine O-GlcNAcylation status. In order to facilitate site mapping with HCD, β-elimination followed by Michael addition with dithiothreitol (BEMAD) may be used to convert the labile O-GlcNAc modification into a more stable mass tag. For BEMAD mapping of O-GlcNAc, the sample must be treated with phosphatase; otherwise, other serine/threonine post-translational modifications such as phosphorylation may be detected. Electron-transfer dissociation (ETD) is used for site mapping as ETD causes peptide backbone cleavage while leaving post-translational modifications such as O-GlcNAc intact. Traditional proteomic studies perform tandem MS on the most abundant species in the full-scan mass spectra, prohibiting full characterization of lower-abundance species. One modern strategy for targeted proteomics uses isotopic labels, e.g., dibromide, to tag O-GlcNAcylated proteins. This method allows for algorithmic detection of low-abundance species, which are then sequenced by tandem MS. Directed tandem MS and targeted glycopeptide assignment allow for identification of O-GlcNAcylated peptide sequences. One example probe consists of a biotin affinity tag, an acid-cleavable silane, an isotopic recoding motif, and an alkyne. Unambiguous site mapping is possible for peptides with only one serine/threonine residue.
The general procedure for this isotope-targeted glycoproteomics (IsoTaG) method is the following:
1. Metabolically label O-GlcNAc to install O-GlcNAz onto proteins
2. Use click chemistry to link IsoTaG probe to O-GlcNAz
3. Use streptavidin beads to enrich for tagged proteins
4. Treat beads with trypsin to release non-modified peptides
5. Cleave isotopically recoded glycopeptides from beads using mild acid
6. Obtain a full-scan mass spectrum from isotopically recoded glycopeptides
7. Apply algorithm to detect unique isotope signature from probe
8. Perform tandem MS on the isotopically recoded species to obtain glycopeptide amino acid sequences
9. Search protein database for identified sequences
Other methodologies have been developed for quantitative profiling of O-GlcNAc using differential isotopic labeling. Example probes generally consist of a biotin affinity tag, a cleavable linker (acid- or photo-cleavable), a heavy or light isotopic tag, and an alkyne. O-GlcNAc modification has also been recently reported on tyrosine residues, though these represent roughly 5% of all O-GlcNAc modifications. Strategies for manipulating O-GlcNAc Various chemical and genetic strategies have been developed to manipulate O-GlcNAc, both on a proteome-wide basis and on specific proteins. Chemical methods Small molecule inhibitors have been reported for both OGT and OGA that function in cells or in vivo. OGT inhibitors result in a global decrease of O-GlcNAc while OGA inhibitors result in a global increase of O-GlcNAc; these inhibitors are not able to modulate O-GlcNAc on specific proteins. Inhibition of the hexosamine biosynthetic pathway is also able to decrease O-GlcNAc levels. For instance, glutamine analogues azaserine and 6-diazo-5-oxo-L-norleucine (DON) can inhibit GFAT, though these molecules may also non-specifically affect other pathways. Protein synthesis Expressed protein ligation has been used to prepare O-GlcNAc-modified proteins in a site-specific manner.
Methods exist for solid-phase peptide synthesis incorporation of GlcNAc-modified serine, threonine, or cysteine. Genetic methods Site-directed mutagenesis Site-directed mutagenesis of O-GlcNAc-modified serine or threonine residues to alanine may be used to evaluate the function of O-GlcNAc at specific residues. As alanine's side chain is a methyl group and is thus not able to act as an O-GlcNAc site, this mutation effectively permanently removes O-GlcNAc at a specific residue. While serine/threonine phosphorylation may be modeled by mutagenesis to aspartate or glutamate, which have negatively charged carboxylate side chains, none of the 20 canonical amino acids sufficiently recapitulate the properties of O-GlcNAc. Mutagenesis to tryptophan has been used to mimic the steric bulk of O-GlcNAc, though tryptophan is much more hydrophobic than O-GlcNAc. Mutagenesis may also perturb other post-translational modifications, e.g., if a serine is alternatively phosphorylated or O-GlcNAcylated, alanine mutagenesis permanently eliminates the possibilities of both phosphorylation and O-GlcNAcylation. S-GlcNAc Mass spectrometry identified S-GlcNAc as a post-translational modification found on cysteine residues. In vitro experiments demonstrated that OGT could catalyze the formation of S-GlcNAc and that OGA is incapable of hydrolyzing S-GlcNAc. Though a previous report suggested that OGA is capable of hydrolyzing thioglycosides, this was only demonstrated on the aryl thioglycoside para-nitrophenol-S-GlcNAc; para-nitrothiophenol is a more activated leaving group than a cysteine residue. Recent studies have supported the use of S-GlcNAc as an enzymatically stable structural model of O-GlcNAc that can be incorporated through solid-phase peptide synthesis or site-directed mutagenesis. Engineered OGT Fusion constructs of a nanobody and TPR-truncated OGT allow for proximity-induced protein-specific O-GlcNAcylation in cells. 
The nanobody may be directed towards protein tags, e.g., GFP, that are fused to the target protein, or towards endogenous proteins. For example, a nanobody recognizing a C-terminal EPEA sequence can direct OGT enzymatic activity to α-synuclein.

Functions of O-GlcNAc

Apoptosis
Apoptosis, a form of controlled cell death, has been suggested to be regulated by O-GlcNAc. In various cancers, elevated O-GlcNAc levels have been reported to suppress apoptosis. Caspase-3, caspase-8, and caspase-9 have been reported to be modified by O-GlcNAc. Caspase-8 is modified near its cleavage/activation sites; O-GlcNAc modification may block caspase-8 cleavage and activation by steric hindrance. Pharmacological lowering of O-GlcNAc with 5S-GlcNAc accelerated caspase activation, while pharmacological raising of O-GlcNAc with thiamet-G inhibited caspase activation.

Epigenetics

Writers and Erasers
The proteins that regulate epigenetics are often categorized as writers, readers, and erasers, i.e., enzymes that install epigenetic modifications, proteins that recognize these modifications, and enzymes that remove these modifications. To date, O-GlcNAc has been identified on writer and eraser enzymes. O-GlcNAc is found in multiple locations on EZH2, the catalytic methyltransferase subunit of PRC2, and is thought to stabilize EZH2 prior to PRC2 complex formation and to regulate di- and tri-methyltransferase activity. All three members of the ten-eleven translocation (TET) family of dioxygenases (TET1, TET2, and TET3) are known to be modified by O-GlcNAc. O-GlcNAc has been suggested to cause nuclear export of TET3, reducing its enzymatic activity by depleting it from the nucleus. O-GlcNAcylation of HDAC1 is associated with elevated activating phosphorylation of HDAC1.

Histone O-GlcNAcylation
Histone proteins, the primary protein component of chromatin, have been reported to be modified by O-GlcNAc, though other studies have not been able to detect histone O-GlcNAc.
The presence of O-GlcNAc on histones has been suggested to affect gene transcription as well as other histone marks such as acetylation and monoubiquitination. TET2 has been reported to interact with the TPR domain of OGT and facilitate recruitment of OGT to histones. Phosphorylation of OGT T444 via AMPK has been found to inhibit OGT-chromatin association and downregulate H2B S112 O-GlcNAc.

Nutrient sensing
The hexosamine biosynthetic pathway's product, UDP-GlcNAc, is utilized by OGT to catalyze the addition of O-GlcNAc. This pathway integrates information about the concentrations of various metabolites, including amino acids, carbohydrates, fatty acids, and nucleotides. Consequently, UDP-GlcNAc levels are sensitive to cellular metabolite levels. OGT activity is in part regulated by UDP-GlcNAc concentration, making a link between cellular nutrient status and O-GlcNAc. Glucose deprivation causes a decline in UDP-GlcNAc levels and an initial decline in O-GlcNAc, but counterintuitively, O-GlcNAc is later significantly upregulated. This later increase has been shown to be dependent on AMPK and p38 MAPK activation, and this effect is partially due to increases in OGT mRNA and protein levels. It has also been suggested that this effect is dependent on calcium and CaMKII. Activated p38 is able to recruit OGT to specific protein targets, including neurofilament H; O-GlcNAc modification of neurofilament H enhances its solubility. During glucose deprivation, glycogen synthase is modified by O-GlcNAc, which inhibits its activity.

Oxidative stress
NRF2, a transcription factor associated with the cellular response to oxidative stress, has been found to be indirectly regulated by O-GlcNAc. KEAP1, an adaptor protein for the cullin 3-dependent E3 ubiquitin ligase complex, mediates the degradation of NRF2; oxidative stress leads to conformational changes in KEAP1 that repress degradation of NRF2.
O-GlcNAc modification of KEAP1 at S104 is required for efficient ubiquitination and subsequent degradation of NRF2, linking O-GlcNAc to oxidative stress. Glucose deprivation leads to a reduction in O-GlcNAc and reduces NRF2 degradation. Cells expressing a KEAP1 S104A mutant are resistant to erastin-induced ferroptosis, consistent with higher NRF2 levels upon removal of S104 O-GlcNAc.
Elevated O-GlcNAc levels have been associated with diminished synthesis of hepatic glutathione, an important cellular antioxidant. Acetaminophen overdose leads to accumulation of the strongly oxidizing metabolite NAPQI in the liver, which is detoxified by glutathione. In mice, OGT knockout has a protective effect against acetaminophen-induced liver injury, while OGA inhibition with thiamet-G exacerbates acetaminophen-induced liver injury.

Protein aggregation
O-GlcNAc has been found to slow protein aggregation, though the generality of this phenomenon is unknown. Solid-phase peptide synthesis was used to prepare full-length α-synuclein with an O-GlcNAc modification at T72. Thioflavin T aggregation assays and transmission electron microscopy demonstrated that this modified α-synuclein does not readily form aggregates. Treatment of JNPL3 tau transgenic mice with an OGA inhibitor was shown to increase microtubule-associated protein tau O-GlcNAcylation. Immunohistochemistry analysis of the brainstem revealed decreased formation of neurofibrillary tangles. Recombinant O-GlcNAcylated tau was shown to aggregate more slowly than unmodified tau in an in vitro thioflavin S aggregation assay. Similar results were obtained for a recombinantly prepared O-GlcNAcylated TAB1 construct versus its unmodified form.

Protein phosphorylation

Crosstalk
Many known phosphorylation sites and O-GlcNAcylation sites are near each other or overlapping. As protein O-GlcNAcylation and phosphorylation both occur on serine and threonine residues, these post-translational modifications can regulate each other.
For example, in CKIIα, S347 O-GlcNAc has been shown to antagonize T344 phosphorylation. Reciprocal inhibition, i.e., phosphorylation inhibiting O-GlcNAcylation and O-GlcNAcylation inhibiting phosphorylation, has been observed on other proteins including murine estrogen receptor β, RNA Pol II, tau, p53, CaMKIV, p65, β-catenin, and α-synuclein. Positive cooperativity has also been observed between these two post-translational modifications, i.e., phosphorylation induces O-GlcNAcylation or O-GlcNAcylation induces phosphorylation; this has been demonstrated on MeCP2 and HDAC1. In other proteins, e.g., cofilin, phosphorylation and O-GlcNAcylation appear to occur independently of each other. In some cases, therapeutic strategies are under investigation to modulate O-GlcNAcylation to have a downstream effect on phosphorylation. For instance, elevating tau O-GlcNAcylation may offer therapeutic benefit by inhibiting pathological tau hyperphosphorylation. Besides phosphorylation, O-GlcNAc has been found to influence other post-translational modifications such as lysine acetylation and monoubiquitination.

Kinases
Protein kinases are the enzymes responsible for phosphorylation of serine and threonine residues. O-GlcNAc has been identified on over 100 kinases (~20% of the human kinome), and this modification is often associated with alterations in kinase activity or kinase substrate scope. O-GlcNAc may have diverse functional consequences on kinases, such as interfering with ATP binding, altering substrate recognition, or regulating other PTMs on kinases. Complex cross-talk relations can also exist where OGT and a kinase, e.g., AMPK, modify each other.

Phosphatases
Protein phosphatase 1 subunits PP1β and PP1γ have been shown to form functional complexes with OGT. A synthetic phosphopeptide was able to be dephosphorylated and O-GlcNAcylated by an OGT immunoprecipitate.
This complex has been referred to as a "yin-yang complex" as it replaces a phosphate modification with an O-GlcNAc modification. PP1γ also exists in a heterotrimer with OGT and URI under high glucose conditions. MYPT1 is another protein phosphatase subunit that forms complexes with OGT and is itself O-GlcNAcylated. MYPT1 appears to have a role in directing OGT towards specific substrates.

Protein-protein interactions
O-GlcNAcylation of a protein can alter its interactome. As O-GlcNAc is highly hydrophilic, its presence may disrupt hydrophobic protein-protein interactions. For example, O-GlcNAc disrupts Sp1 interaction with TAFII110, and O-GlcNAc disrupts CREB interaction with TAFII130 and CRTC. Some studies have also identified instances where protein-protein interactions are induced by O-GlcNAc. Metabolic labeling with the diazirine-containing O-GlcNDAz has been applied to identify protein-protein interactions induced by O-GlcNAc. Using a bait glycopeptide based roughly on a consensus sequence for O-GlcNAc, α-enolase, EBP1, and 14-3-3 were identified as potential O-GlcNAc readers. X-ray crystallography showed that 14-3-3 recognized O-GlcNAc through an amphipathic groove that also binds phosphorylated ligands. Hsp70 has also been proposed to act as a lectin to recognize O-GlcNAc. It has been suggested that O-GlcNAc plays a role in the interaction of α-catenin and β-catenin.

Protein stability and degradation
Co-translational O-GlcNAc has been identified on Sp1 and Nup62. This modification suppresses co-translational ubiquitination and thus protects nascent polypeptides from proteasomal degradation. Similar protective effects of O-GlcNAc on full-length Sp1 have been observed. It is unknown if this pattern is universal or only applicable to specific proteins. Protein phosphorylation is often used as a mark for subsequent degradation. Tumor suppressor protein p53 is targeted for proteasomal degradation via COP9 signalosome-mediated phosphorylation of T155.
O-GlcNAcylation of p53 S149 has been associated with decreased T155 phosphorylation and protection of p53 from degradation. β-catenin O-GlcNAcylation competes with T41 phosphorylation, which signals β-catenin for degradation, thereby stabilizing the protein. O-GlcNAcylation of the Rpt2 ATPase subunit of the 26S proteasome has been shown to inhibit proteasome activity. Testing various peptide sequences revealed that this modification slows proteasomal degradation of hydrophobic peptides; degradation of hydrophilic peptides does not appear to be affected. This modification has been shown to suppress other pathways that activate the proteasome, such as Rpt6 phosphorylation by cAMP-dependent protein kinase. OGA-S localizes to lipid droplets and has been proposed to locally activate the proteasome to promote remodeling of lipid droplet surface proteins.

Stress response
Various cellular stress stimuli have been associated with changes in O-GlcNAc. Treatment with hydrogen peroxide, cobalt(II) chloride, UVB light, ethanol, sodium chloride, heat shock, or sodium arsenite all results in elevated O-GlcNAc. Knockout of OGT sensitizes cells to thermal stress. Elevated O-GlcNAc has been associated with expression of Hsp40 and Hsp70.

Therapeutic relevance

Neurodegeneration
Pathological protein aggregation is a major hallmark of multiple neurodegenerative diseases. O-GlcNAc on various proteins has been found to play roles in suppressing protein aggregation, motivating clinical efforts to inhibit OGA and elevate cellular O-GlcNAc levels. This strategy is being evaluated by companies for Alzheimer's disease, Parkinson's disease, progressive supranuclear palsy, and amyotrophic lateral sclerosis (ALS). Multiple companies have advanced OGA inhibitors into the clinic, including Alectos Therapeutics, Asceneuron, Biogen, Eli Lilly, and Merck.

Alzheimer's disease
Numerous studies have identified aberrant phosphorylation of tau as a hallmark of Alzheimer's disease.
O-GlcNAcylation of bovine tau was first characterized in 1996. A subsequent report in 2004 demonstrated that human brain tau is also modified by O-GlcNAc. O-GlcNAcylation of tau was demonstrated to regulate tau phosphorylation, with hyperphosphorylation of tau observed in the brains of mice lacking OGT; this hyperphosphorylation has been associated with the formation of neurofibrillary tangles. Analysis of brain samples showed that protein O-GlcNAcylation is compromised in Alzheimer's disease and that paired helical filament-tau was not recognized by traditional O-GlcNAc detection methods, suggesting that pathological tau has impaired O-GlcNAcylation relative to tau isolated from control brain samples. Elevating tau O-GlcNAcylation was therefore proposed as a therapeutic strategy for reducing tau phosphorylation. To test this therapeutic hypothesis, a selective, blood-brain barrier-permeable OGA inhibitor, thiamet-G, was developed. Thiamet-G treatment was able to increase tau O-GlcNAcylation and suppress tau phosphorylation in cell culture and in vivo in healthy Sprague-Dawley rats. A subsequent study showed that thiamet-G treatment also increased tau O-GlcNAcylation in a JNPL3 tau transgenic mouse model. In this model, tau phosphorylation was not significantly affected by thiamet-G treatment, though decreased numbers of neurofibrillary tangles and slower motor neuron loss were observed. Additionally, O-GlcNAcylation of tau was noted to slow tau aggregation in vitro. OGA inhibition with MK-8719 is being investigated in clinical trials as a potential treatment strategy for Alzheimer's disease and other tauopathies, including progressive supranuclear palsy.

Parkinson's disease
Parkinson's disease is associated with aggregation of α-synuclein. As O-GlcNAc modification of α-synuclein has been found to inhibit its aggregation, elevating α-synuclein O-GlcNAc is being explored as a therapeutic strategy to treat Parkinson's disease.
Cancer
Dysregulation of O-GlcNAc is associated with cancer cell proliferation and tumor growth. O-GlcNAcylation of the glycolytic enzyme PFK1 at S529 has been found to inhibit PFK1 enzymatic activity, reducing glycolytic flux and redirecting glucose towards the pentose phosphate pathway. Structural modeling and biochemical experiments suggested that O-GlcNAc at S529 would inhibit PFK1 allosteric activation by fructose 2,6-bisphosphate and oligomerization into active forms. In a mouse model, mice injected with cells expressing a PFK1 S529A mutant showed lower tumor growth than mice injected with cells expressing wild-type PFK1. Additionally, OGT overexpression enhanced tumor growth in the latter system but had no significant effect in the system with mutant PFK1. Hypoxia induces PFK1 S529 O-GlcNAc and increases flux through the pentose phosphate pathway to generate more NADPH, which maintains glutathione levels and detoxifies reactive oxygen species, imparting a growth advantage to cancer cells. PFK1 was found to be glycosylated in human breast and lung tumor tissues.
OGT has also been reported to positively regulate HIF-1α. HIF-1α is normally degraded under normoxic conditions by prolyl hydroxylases that utilize α-ketoglutarate as a co-substrate. OGT suppresses α-ketoglutarate levels, protecting HIF-1α from proteasomal degradation by pVHL and promoting aerobic glycolysis. In contrast with the previous study on PFK1, this study found that elevating OGT or O-GlcNAc upregulated PFK1, though the two studies are consistent in finding that O-GlcNAc levels are positively associated with flux through the pentose phosphate pathway. This study also found that decreasing O-GlcNAc selectively killed cancer cells via ER stress-induced apoptosis.
Human pancreatic ductal adenocarcinoma (PDAC) cell lines have higher O-GlcNAc levels than human pancreatic duct epithelial (HPDE) cells.
PDAC cells have some dependency upon O-GlcNAc for survival, as OGT knockdown selectively inhibited PDAC cell proliferation (OGT knockdown did not significantly affect HPDE cell proliferation), and inhibition of OGT with 5S-GlcNAc showed the same result. Hyper-O-GlcNAcylation in PDAC cells appeared to be anti-apoptotic, inhibiting cleavage and activation of caspase-3 and caspase-9. Numerous sites on the p65 subunit of NF-κB were found to be modified by O-GlcNAc in a dynamic manner; O-GlcNAc at p65 T305 and S319 in turn positively regulates other modifications associated with NF-κB activation, such as p300-mediated K310 acetylation and IKK-mediated S536 phosphorylation. These results suggested that NF-κB is constitutively activated by O-GlcNAc in pancreatic cancer.
OGT stabilization of EZH2 in various breast cancer cell lines has been found to inhibit expression of tumor suppressor genes. In hepatocellular carcinoma models, O-GlcNAc is associated with activating phosphorylation of HDAC1, which in turn regulates expression of the cell cycle regulator p21Waf1/Cip1 and the cell motility regulator E-cadherin.
OGT has been found to stabilize SREBP-1 and activate lipogenesis in breast cancer cell lines. This stabilization was dependent on the proteasome and AMPK. OGT knockdown resulted in decreased nuclear SREBP-1, but proteasomal inhibition with MG132 blocked this effect. OGT knockdown also increased the interaction between SREBP-1 and the E3 ubiquitin ligase FBW7. AMPK is activated by T172 phosphorylation upon OGT knockdown, and AMPK phosphorylates SREBP-1 S372 to inhibit its cleavage and maturation. OGT knockdown had a diminished effect on SREBP-1 levels in AMPK-null cell lines. In a mouse model, OGT knockdown inhibited tumor growth, but SREBP-1 overexpression partly rescued this effect. These results contrast with those of a previous study, which found that OGT knockdown/inhibition inhibited AMPK T172 phosphorylation and increased lipogenesis.
In breast and prostate cancer cell lines, high levels of OGT and O-GlcNAc have been associated both in vitro and in vivo with processes associated with disease progression, e.g., angiogenesis, invasion, and metastasis. OGT knockdown or inhibition was found to downregulate the transcription factor FoxM1 and upregulate the cell-cycle inhibitor p27Kip1 (which is regulated by FoxM1-dependent expression of the E3 ubiquitin ligase component Skp2), causing G1 cell cycle arrest. This appeared to be dependent on proteasomal degradation of FoxM1, as expression of a FoxM1 mutant lacking a degron rescued the effects of OGT knockdown. FoxM1 was found not to be directly modified by O-GlcNAc, suggesting that hyper-O-GlcNAcylation of FoxM1 regulators impairs FoxM1 degradation. Targeting OGT also lowered levels of FoxM1-regulated proteins associated with cancer invasion and metastasis (MMP-2 and MMP-9) and angiogenesis (VEGF). O-GlcNAc modification of cofilin S108 has also been reported to be important for breast cancer cell invasion by regulating cofilin subcellular localization in invadopodia.

Diabetes
Dysregulation of O-GlcNAc has been associated with diabetes and associated diabetic complications. In general, elevated O-GlcNAc is associated with an insulin resistance phenotype. Pancreatic β cells synthesize and secrete insulin to regulate blood glucose levels. One study found that inhibition of OGA with streptozotocin followed by glucosamine treatment resulted in O-GlcNAc accumulation and apoptosis in β cells; a subsequent study showed that a galactose-based analogue of streptozotocin was unable to inhibit OGA but still resulted in apoptosis, suggesting that the apoptotic effects of streptozotocin are not directly due to OGA inhibition.
O-GlcNAc has been suggested to attenuate insulin signaling. In 3T3-L1 adipocytes, OGA inhibition with PUGNAc inhibited insulin-mediated glucose uptake.
PUGNAc treatment also inhibited insulin-stimulated Akt T308 phosphorylation and downstream GSK3β S9 phosphorylation. In a later study, insulin stimulation of COS-7 cells caused OGT to localize to the plasma membrane. Inhibition of PI3K with wortmannin reversed this effect, suggesting dependence on phosphatidylinositol (3,4,5)-trisphosphate. Increasing O-GlcNAc levels by subjecting cells to high glucose conditions or PUGNAc treatment inhibited insulin-stimulated phosphorylation of Akt T308 and Akt activity. IRS1 phosphorylation at S307 and S632/S635, which is associated with attenuated insulin signaling, was enhanced. Subsequent experiments in mice with adenoviral delivery of OGT showed that OGT overexpression negatively regulates insulin signaling in vivo. Many components of the insulin signaling pathway, including β-catenin, IR-β, IRS1, Akt, PDK1, and the p110α subunit of PI3K, were found to be directly modified by O-GlcNAc. Insulin signaling has also been reported to lead to OGT tyrosine phosphorylation and OGT activation, resulting in increased O-GlcNAc levels.
As PUGNAc also inhibits lysosomal β-hexosaminidases, the OGA-selective inhibitor NButGT was developed to further probe the relationship between O-GlcNAc and insulin signaling in 3T3-L1 adipocytes. This study also found that PUGNAc resulted in impaired insulin signaling but that NButGT did not, as measured by changes in phosphorylation of Akt T308, suggesting that the effects observed with PUGNAc may be due to off-target effects other than OGA inhibition.

Infectious disease

Bacterial
Treatment of macrophages with lipopolysaccharide (LPS), a major component of the Gram-negative bacterial outer membrane, results in elevated O-GlcNAc in cellular and mouse models. During infection, cytosolic OGT was de-S-nitrosylated and activated. Suppressing O-GlcNAc with DON inhibited the O-GlcNAcylation and nuclear translocation of NF-κB, as well as downstream induction of inducible nitric oxide synthase and IL-1β production.
DON treatment also improved cell survival during LPS treatment.

Viral
O-GlcNAc has been implicated in influenza A virus (IAV)-induced cytokine storm. Specifically, O-GlcNAcylation of S430 on interferon regulatory factor-5 (IRF5) has been shown to promote its interaction with TNF receptor-associated factor 6 (TRAF6) in cellular and mouse models. TRAF6 mediates K63-linked ubiquitination of IRF5, which is necessary for IRF5 activity and subsequent cytokine production. Analysis of clinical samples showed that blood glucose levels were elevated in IAV-infected patients compared to healthy individuals. In IAV-infected patients, blood glucose levels positively correlated with IL-6 and IL-8 levels. O-GlcNAcylation of IRF5 was also relatively higher in peripheral blood mononuclear cells of IAV-infected patients.

Other applications
Peptide therapeutics are attractive for their high specificity and potency, but they often have poor pharmacokinetic profiles due to their degradation by serum proteases. Though O-GlcNAc is generally associated with intracellular proteins, it has been found that engineered peptide therapeutics modified by O-GlcNAc have enhanced serum stability in a mouse model and have similar structure and activity compared to the respective unmodified peptides. This method has been applied to engineer GLP-1 and PTH peptides.

See also
O-GlcNAc transferase (OGT)
O-GlcNAcase (OGA)
O-linked glycosylation

References

Further reading
Zachara, Natasha; Akimoto, Yoshihiro; Hart, Gerald W. (2015), Varki, Ajit; Cummings, Richard D.; Esko, Jeffrey D.; Stanley, Pamela (eds.), "The O-GlcNAc Modification", Essentials of Glycobiology (3rd ed.), Cold Spring Harbor Laboratory Press, PMID 28876858.

External links

Post-translational modification
Carbohydrates
Biochemistry
Cell signaling
Cell biology
Signal transduction
O-GlcNAc
[ "Chemistry", "Biology" ]
9,957
[ "Biomolecules by chemical classification", "Carbohydrates", "Cell biology", "Gene expression", "Biochemical reactions", "Signal transduction", "Organic compounds", "Post-translational modification", "Carbohydrate chemistry", "Biochemistry", "Neurochemistry" ]
44,458,017
https://en.wikipedia.org/wiki/CERAWeek
CERAWeek is an annual energy conference organized by the information and insights company S&P Global in Houston, Texas. The conference provides a platform for discussion on a range of energy-related topics; CERAWeek 2019 featured sessions on the world economic outlook, geopolitics, energy policy and regulation, climate change and technological innovation, among other topics. The conference features prominent speakers from energy, policy, technology, and financial industries, and is chaired by Pulitzer Prize winner Daniel Yergin, vice-chairman, IHS Markit, and Jamey Rosenfield, vice chair, CERAWeek, senior vice president, IHS Markit. Both are co-founders of Cambridge Energy Research Associates. The 39th annual CERAWeek conference, scheduled for March 9 to 13, 2020, in Houston, Texas, was canceled.

Speakers and Attendees
CERAWeek attracts executives, government officials and thought leaders from the energy, policy, technology, and financial industries to Houston each year. In 2019, there were over 5,500 delegates from over 1,000 organizations representing 85 countries. These included over 650 CEOs and chairmen, over 1,400 C-suite executives and over 90 ministers and government representatives. Participants encompass all regions and industry segments: oil, natural gas, electric power, coal, nuclear and renewables, as well as technology, finance, mobility and more. Recent speakers at CERAWeek have included:
President Bill Clinton
President George W. Bush
Prime Minister of Canada Justin Trudeau
Prime Minister of India Narendra Modi
Henry Kissinger, former US Secretary of State
Nizar Al-Adsani, chairman and managing director, Kuwait Petroleum Corporation
Vagit Alekperov, president and CEO, LUKOIL
Ben Bernanke, former chairman of the US Federal Reserve
Bob Dudley, former chief executive, BP
Khalid Al-Falih, chief executive, Saudi Aramco
Bill Gates, co-chair, Gates Foundation
John Hess, chairman and CEO, Hess Corporation
Berat Albayrak, Minister of Energy and Natural Resources of the Republic of Turkey
Hon. John Hickenlooper, Governor of Colorado
Walter Isaacson, CEO, Aspen Institute
Jeffrey Immelt, chairman and CEO, General Electric
Joe Kaeser, CEO, Siemens
Hon. John Kasich, Governor of Ohio
Fred Krupp, president, Environmental Defense Fund
Ryan Lance, CEO, ConocoPhillips
Andrew Liveris, chairman and CEO, The Dow Chemical Company
Bernard Looney, CEO, BP
Emilio Lozoya, CEO, Pemex
Helge Lund, president and CEO, Statoil
Christophe de Margerie, chairman and CEO, Total
Gina McCarthy, administrator, US EPA
Ernie Moniz, U.S. Secretary of Energy
Admiral Mike Mullen, former chairman, US Joint Chiefs
Ali Naimi, Minister of Petroleum, Saudi Arabia
Marvin Odum, president, Shell Oil
Igor Sechin, executive chairman, Rosneft
Jeffery Smisek, president and CEO, United Airlines
Rex Tillerson, chairman and CEO, ExxonMobil
Peter Voser, chief executive officer, Royal Dutch Shell
John Watson, chairman and CEO, Chevron
Andrew Wheeler, administrator, US EPA
Baosen Zheng, managing director and group head, State Grid Corporation of China
Jiping Zhou, president, China National Petroleum Co.

CERAWeek’s Energy Innovation Pioneers
Each year at CERAWeek, the Energy Innovation Pioneer program recognizes companies and entrepreneurs whose technologies and business plans have the potential to transform the energy industry's future.
Companies are selected based on several criteria, including the feasibility of their plan and scalability of their technology, and are presented at an Energy Insight Breakfast session during the conference. In 2019, CERAWeek recognized 8 pioneers. In 2014, CERAWeek recognized 24 pioneers.

Media Coverage
CERAWeek has been widely covered by the media, including the following news outlets: The Wall Street Journal, The New York Times, Business Week, and Forbes.

References

External links

Conferences in the United States
Business conferences
Energy organizations
CERAWeek
[ "Engineering" ]
813
[ "Energy organizations" ]
44,459,119
https://en.wikipedia.org/wiki/Detekt
Detekt is a discontinued free tool by Amnesty International, Digitale Gesellschaft, EFF, and Privacy International to scan for surveillance software on Microsoft Windows. It was intended for use by activists and journalists to scan for known spyware.

The tool
Detekt was available for free download. The tool did not guarantee detection of all spyware, nor was it meant to give a false sense of security, and it was meant to be used alongside other methods to combat malware and spyware. In 2014, the Coalition Against Unlawful Surveillance Exports estimated that the global trade in surveillance technologies was worth more than 3 billion GBP annually. Detekt was available in Amharic, Arabic, English, German, Italian, and Spanish.

Technical
The tool required no installation and was designed to scan for surveillance software on Windows PCs, from XP to Windows 8.1. The tool scanned for current surveillance software, and after scanning, it would display a summary indicating whether any spyware was found, and generate a log file containing the details. The tool did not guarantee absolute protection from surveillance software, as it scanned only for spyware known at the time of release, which could be modified to circumvent detection, or superseded by new software. Therefore, a clean bill of health didn't necessarily mean that the PC was free of surveillance software. The website instructed the user to disconnect the internet connection and close all applications before running the tool, and not to turn the connection back on if any spyware was found. Detekt was released under the GPLv3 free license. Detekt was developed by Claudio Guarnieri with the help of Bill Marczak, Morgan Marquis-Boire, Eva Galperin, Tanya O'Carroll, Andre Meister, Jillian York, Michael Ligh, and Endalkachew Chala. It was provided with patterns for the following malware: DarkComet RAT, XtremeRAT, BlackShades RAT, njRAT, FinFisher FinSpy, HackingTeam RCS, ShadowTech RAT, and Gh0st RAT.
See also
Computer and network surveillance
Computer surveillance in the workplace
Internet censorship
Internet privacy
Freedom of information
Tor (anonymity network)
2013 mass surveillance disclosures

References

External links

Computer forensics
Computer surveillance
Internet security
Detekt
[ "Engineering" ]
473
[ "Cybersecurity engineering", "Computer forensics" ]
44,459,435
https://en.wikipedia.org/wiki/Cinazepam
Cinazepam (BD-798, sold under the brand name Levana) is an atypical benzodiazepine derivative. It produces pronounced hypnotic, sedative, and anxiolytic effects with minimal myorelaxant side effects. In addition, unlike many other benzodiazepine and nonbenzodiazepine hypnotics such as diazepam, flunitrazepam, and zopiclone, cinazepam does not disrupt sleep architecture, and the continuity of slow-wave sleep and REM sleep is proportionally increased. As such, cinazepam produces a sleep state close to physiological sleep and, for that reason, may be advantageous compared to other, related drugs in the treatment of insomnia and other sleep disorders. Cinazepam has an order of magnitude lower affinity for the benzodiazepine receptor of the GABAA complex relative to other well-known hypnotic benzodiazepines such as nitrazepam and phenazepam. Moreover, in mice, it is rapidly metabolized, with only 5% of the parent compound remaining within 30 minutes of administration. As such, cinazepam is considered to be a benzodiazepine prodrug, specifically of 3-hydroxyphenazepam, its main active metabolite.

Synthesis
The reaction between 2-amino-5-bromo-2'-chlorobenzophenone [60773-49-1] (1) and bromoacetyl bromide [598-21-0] gives 5-bromo-2'-chloro-2-bromoacetamido-benzophenone, PC33695403 (2). A Finkelstein reaction with sodium iodide gives PC11375008 (3). Reaction with hydroxylamine preferentially causes alkylation by displacement of the leaving group rather than oxime formation; hence, the product of this step is PC129780422 (4). Ring closure in acid leads to phenazepam 4-oxide [1177751-52-8] (5). Treatment with acetic anhydride and a Polonovski rearrangement gives PC630731 (6). Saponification of the ester yields 3-hydroxyphenazepam [70030-11-4] (7). Treatment with succinic anhydride completes the synthesis of cinazepam (8).

See also
Gidazepam
Cloxazolam

References

Abandoned drugs
Anxiolytics
Benzodiazepines
2-Chlorophenyl compounds
Hypnotics
Bromobenzene derivatives
Carboxylic acids
Prodrugs
Cinazepam
[ "Chemistry", "Biology" ]
597
[ "Hypnotics", "Behavior", "Sleep", "Carboxylic acids", "Functional groups", "Drug safety", "Prodrugs", "Chemicals in medicine", "Abandoned drugs" ]
44,459,690
https://en.wikipedia.org/wiki/3-Hydroxyphenazepam
3-Hydroxyphenazepam is a benzodiazepine with hypnotic, sedative, anxiolytic, and anticonvulsant properties. It is an active metabolite of phenazepam, as well as the active metabolite of the benzodiazepine prodrug cinazepam. Relative to phenazepam, 3-hydroxyphenazepam has diminished myorelaxant properties, but is about equivalent in most other regards. Like other benzodiazepines, 3-hydroxyphenazepam behaves as a positive allosteric modulator of the benzodiazepine site of the GABAA receptor with an EC50 value of 10.3 nM. It has been sold as a designer drug. See also Lorazepam, licensed medication Nifoxipam Nitemazepam References Hypnotics Anticonvulsants Sedatives Anxiolytics Benzodiazepines 2-Chlorophenyl compounds Bromoarenes Human drug metabolites Designer drugs Lactims
3-Hydroxyphenazepam
[ "Chemistry", "Biology" ]
237
[ "Hypnotics", "Behavior", "Sleep", "Human drug metabolites", "Chemicals in medicine" ]
44,459,743
https://en.wikipedia.org/wiki/Jordan%20map
In theoretical physics, the Jordan map, often also called the Jordan–Schwinger map, is a map from matrices to bilinear expressions of quantum oscillators which expedites computation of representations of Lie algebras occurring in physics. It was introduced by Pascual Jordan in 1935 and was utilized by Julian Schwinger in 1952 to rework the theory of quantum angular momentum efficiently, given that map's ease of organizing the (symmetric) representations of su(2) in Fock space. The map utilizes several creation and annihilation operators, \(a_i^\dagger\) and \(a_i\), of routine use in quantum field theories and many-body problems, each pair representing a quantum harmonic oscillator. The commutation relations of creation and annihilation operators in a multiple-boson system are
\[ [a_i, a_j^\dagger] = \delta_{ij}, \qquad [a_i, a_j] = [a_i^\dagger, a_j^\dagger] = 0, \]
where \([\,\cdot\,,\,\cdot\,]\) is the commutator and \(\delta_{ij}\) is the Kronecker delta. These operators change the eigenvalues of the number operator, \(N = \sum_i n_i = \sum_i a_i^\dagger a_i\), by one, as for multidimensional quantum harmonic oscillators. The Jordan map from a set of matrices \(M\) to Fock space bilinear operators \(\hat{M}\),
\[ M \mapsto \hat{M} \equiv \sum_{i,j} a_i^\dagger M_{ij} a_j, \]
is clearly a Lie algebra isomorphism, i.e. the operators \(\hat{M}\) satisfy the same commutation relations as the matrices \(M\),
\[ [\hat{M}, \hat{N}] = \widehat{[M, N]}. \]
The example of angular momentum For example, the image of the Pauli matrices of SU(2) in this map,
\[ \vec{J} \equiv a^\dagger \, \frac{\vec{\sigma}}{2} \, a, \]
for two-vector \(a^\dagger\)s, and \(a\)s, satisfy the same commutation relations of SU(2) as well,
\[ [J_i, J_j] = i \epsilon_{ijk} J_k, \]
and moreover, by reliance on the completeness relation for Pauli matrices,
\[ J^2 = \frac{N}{2}\left(\frac{N}{2} + 1\right), \qquad N \equiv a^\dagger \cdot a. \]
This is the starting point of Schwinger's treatment of the theory of quantum angular momentum, predicated on the action of these operators on Fock states built of arbitrary higher powers of such operators. For instance, acting on an (unnormalized) Fock eigenstate,
\[ J_z \,(a_1^\dagger)^{k} (a_2^\dagger)^{n} |0\rangle = \frac{k - n}{2}\,(a_1^\dagger)^{k} (a_2^\dagger)^{n} |0\rangle, \]
while
\[ J^2 \,(a_1^\dagger)^{k} (a_2^\dagger)^{n} |0\rangle = \frac{k + n}{2}\left(\frac{k + n}{2} + 1\right)(a_1^\dagger)^{k} (a_2^\dagger)^{n} |0\rangle, \]
so that, for \(j = (k+n)/2\) and \(m = (k-n)/2\), this is proportional to the eigenstate \(|j, m\rangle\),
\[ |j, m\rangle = \frac{(a_1^\dagger)^{j+m}\,(a_2^\dagger)^{j-m}}{\sqrt{(j+m)!\,(j-m)!}}\,|0\rangle. \]
Observe \(J_+ = a_1^\dagger a_2\) and \(J_- = a_2^\dagger a_1\), as well as \(J_z = \tfrac{1}{2}(a_1^\dagger a_1 - a_2^\dagger a_2)\). Fermions Antisymmetric representations of Lie algebras can further be accommodated by use of the fermionic operators \(b_i\) and \(b_i^\dagger\), as also suggested by Jordan. For fermions, the commutator is replaced by the anticommutator \(\{\,\cdot\,,\,\cdot\,\}\),
\[ \{b_i, b_j^\dagger\} = \delta_{ij}, \qquad \{b_i, b_j\} = \{b_i^\dagger, b_j^\dagger\} = 0. \]
Therefore, exchanging disjoint (i.e. \(i \neq j\)) operators in a product of creation and annihilation operators will reverse the sign in fermion systems, but not in boson systems. This formalism has been used by A. A. Abrikosov in the theory of the Kondo effect to represent the localized spin-1/2, and is called Abrikosov fermions in the solid-state physics literature. See also Borel-Weil-Bott Theorem Current algebra Angular momentum operator Klein transformation Bogoliubov transformation Holstein–Primakoff transformation Jordan–Wigner transformation Clebsch–Gordan coefficients for SU(3)#Symmetry group of the 3D oscillator Hamiltonian operator References Representation theory of Lie algebras Mathematical physics
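The boson algebra above can be checked numerically in a truncated two-mode Fock space; because the bilinears \(a^\dagger M a\) conserve total particle number, the truncation is exact on states whose total occupancy stays below the cutoff. A minimal sketch (illustrative code, not from the source):

```python
import numpy as np

d = 6                                     # levels kept per oscillator mode
a = np.diag(np.sqrt(np.arange(1, d)), 1)  # truncated annihilation operator
I = np.eye(d)
a1, a2 = np.kron(a, I), np.kron(I, a)     # two-mode Fock space (dim d*d)

def jordan(M):
    """Jordan map: 2x2 matrix M -> bilinear sum_ij a_i^dag M_ij a_j."""
    ops = [a1, a2]
    return sum(M[i, j] * ops[i].conj().T @ ops[j]
               for i in range(2) for j in range(2))

# Pauli matrices and their images J_k = a^dag (sigma_k / 2) a
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Jx, Jy, Jz = (jordan(s) / 2 for s in (sx, sy, sz))

# Fock states |n1, n2> with n1 + n2 < d: the bilinears conserve n1 + n2,
# so the truncated operators act exactly on this subspace.
keep = [n1 * d + n2 for n1 in range(d) for n2 in range(d) if n1 + n2 < d]

def restrict(O):
    return O[np.ix_(keep, keep)]

# su(2) commutator [Jx, Jy] = i Jz holds on the protected subspace
comm = Jx @ Jy - Jy @ Jx - 1j * Jz
assert np.linalg.norm(restrict(comm)) < 1e-9

# The map is a Lie algebra homomorphism for arbitrary matrices too
rng = np.random.default_rng(0)
M, N = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
lhs = jordan(M) @ jordan(N) - jordan(N) @ jordan(M)
rhs = jordan(M @ N - N @ M)
assert np.linalg.norm(restrict(lhs - rhs)) < 1e-9
```

The restriction step matters: at the very top of the truncated ladder the canonical commutation relations fail, so only the number sectors that fit entirely below the cutoff reproduce the exact algebra.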
Jordan map
[ "Physics", "Mathematics" ]
608
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
44,459,872
https://en.wikipedia.org/wiki/Computer%20Engineer%20Barbie
Computer Engineer Barbie is the 126th career version of Mattel's Barbie doll. In response to poll results indicating strong support for computer engineers, the doll set was created and introduced in 2010. In 2014, Mattel apologized for the accompanying book, I Can Be a Computer Engineer, after complaints that it represented Barbie as incompetent in the field, needing the help of men. Description The doll has a pink laptop and a pink smartphone, and is wearing geometric pink glasses, a pink watch, black leggings, a T-shirt decorated with "Barbie" spelled in binary code, a fitted vest with saddle-stitch detailing, pink wedges, and a Bluetooth headset. The packaging included a code to unlock exclusive game content on the Barbie website. Female engineers including Betty Shanahan, CEO of the Society of Women Engineers, and Alice Agogino of the National Academy of Engineering were consulted on her wardrobe and work environment. They suggested that for authenticity she needed "a Coke can and a bag of Doritos" on her desk; she has a coffee cup. One mockup also included a Linux penguin; Barbie is running Linux on her dual-monitor set-up. History In 2010, Mattel invited people to vote for Barbie's 126th career, the first instance of this in the company's history. Voters were able to choose between five choices: computer engineer, architect, environmentalist, news anchor, and surgeon. Although girls preferred news anchor, computer engineer was the most popular choice in online polling, partly because of promotion by the Society of Women Engineers. The two dolls were launched together at the 2010 American International Toy Fair. Reception Many writers for tech publications and other reviewers were encouraged by the choice of career, hoping it would encourage girls to consider careers in computer science. However, the amount of pink, the hairstyle, and the stylish clothes struck some women as unrealistic and stereotyped. 
The accompanying book, I Can Be a Computer Engineer, was issued in 2013 together with I Can Be an Actress. The book received extensive criticism, especially beginning in November 2014, for depicting Barbie as relying on two male friends to program the game she is designing. In addition, they need to help her after she accidentally infects her and her sister Skipper's computer with a virus (via the pink heart-shaped USB stick she wears around her neck), after ignoring advice from her (female) computer teacher. A website was created to enable people to replace segments of the book's text with their own, and Mattel pulled the title from Amazon after many critical reviews. The publisher stated it was being discontinued. A Mattel spokesperson said that the book had first been published in 2010 and was outdated, and the company apologized. The book's Barbie says she's "only creating the design ideas" and that her two male friends will have to do the coding; the author, who proclaimed herself a feminist, said her assignment had been to portray Barbie as a designer and "regrets that she may have let stereotypes slip into the book". In response to the complaints about the book, Mattel posted an apology on their official Facebook page for Barbie, stating that the "portrayal of Barbie in this specific story doesn't reflect the Brand's vision for what Barbie stands for." References 2010s toys Barbie Women in computing Women in science and technology Toy controversies 2010 introductions
Computer Engineer Barbie
[ "Technology" ]
694
[ "Women in science and technology" ]
44,459,944
https://en.wikipedia.org/wiki/Sarcomyxa%20serotina
Sarcomyxa serotina is a species of fungus in the family Sarcomyxaceae. Its recommended English name in the UK is olive oysterling. In North America it is known as late fall oyster or late oyster mushroom. Fruit bodies grow as greenish, overlapping fan- or oyster-shaped caps on the wood of both coniferous and deciduous trees. The gills on the underside are closely spaced, bright orange yellow, and have an adnate attachment to the stipe. It produces a yellow spore print; spores are smooth, amyloid, and measure 4–6 by 1–2 μm. The species is considered to be either edible or inedible, with the taste ranging from mild to bitter. Research has revealed that two separate species exist, Sarcomyxa serotina and Sarcomyxa edulis (unknown in Europe). The latter is cultivated for food in China and Japan. References Fungi described in 1793 Fungi of Asia Fungi of Europe Fungi of North America Fungus species
Sarcomyxa serotina
[ "Biology" ]
205
[ "Fungi", "Fungus species" ]
44,460,166
https://en.wikipedia.org/wiki/Staggered%20tuning
Staggered tuning is a technique used in the design of multi-stage tuned amplifiers whereby each stage is tuned to a slightly different frequency. In comparison to synchronous tuning (where each stage is tuned identically) it produces a wider bandwidth at the expense of reduced gain. It also produces a sharper transition from the passband to the stopband. Both staggered tuning and synchronous tuning circuits are easier to tune and manufacture than many other filter types. The function of stagger-tuned circuits can be expressed as a rational function and hence they can be designed to any of the major filter responses such as Butterworth and Chebyshev. The poles of the circuit are easy to manipulate to achieve the desired response because of the amplifier buffering between stages. Applications include television IF amplifiers (mostly 20th century receivers) and wireless LAN. Rationale Staggered tuning improves the bandwidth of a multi-stage tuned amplifier at the expense of the overall gain. Staggered tuning also increases the steepness of passband skirts and hence improves selectivity. The value of staggered tuning is best explained by first looking at the shortcomings of tuning every stage identically. This method is called synchronous tuning. Each stage of the amplifier will reduce the bandwidth. In an amplifier with multiple identical stages, the 3 dB points of the response after the first stage will become the 6 dB points of the second stage. Each successive stage will add a further 3 dB to what was the band edge of the first stage. Thus the bandwidth becomes progressively narrower with each additional stage. As an example, a four-stage amplifier will have its 3 dB points at the 0.75 dB points of an individual stage. The fractional bandwidth of an LC circuit is given by
\[ B = \frac{\Delta\omega}{\omega_0} = \frac{\sqrt{m-1}}{Q}, \]
where m is the power ratio of the power at resonance to that at the band edge frequency (equal to 2 for the 3 dB point and 1.19 for the 0.75 dB point) and Q is the quality factor. The bandwidth is thus reduced by a factor of \(\sqrt{m-1}\). In terms of the number of stages n, this factor is \(\sqrt{2^{1/n}-1}\). Thus, the four-stage synchronously tuned amplifier will have a bandwidth of only 44% of that of a single stage. Even in a two-stage amplifier the bandwidth is reduced to 64% of the original. Staggered tuning allows the bandwidth to be widened at the expense of overall gain. The overall gain is reduced because when any one stage is at resonance (and thus maximum gain) the others are not, unlike synchronous tuning where all stages are at maximum gain at the same frequency. A two-stage stagger-tuned amplifier will have a gain 3 dB less than a synchronously tuned amplifier. Even in a design that is intended to be synchronously tuned, some staggered tuning effect is inevitable because of the practical impossibility of keeping all tuned circuits perfectly in step and because of feedback effects. This can be a problem in very narrow band applications where essentially only one spot frequency is of interest, such as a local oscillator feed or a wave trap. The overall gain of a synchronously tuned amplifier will always be less than the theoretical maximum because of this. Both synchronously tuned and stagger-tuned schemes have a number of advantages over schemes that place all the tuning components in a single aggregated filter circuit separate from the amplifier such as ladder networks or coupled resonators. One advantage is that they are easy to tune. Each resonator is buffered from the others by the amplifier stages, and so they have little effect on each other. The resonators in aggregated circuits, on the other hand, will all interact with each other, particularly their nearest neighbours. Another advantage is that the components need not be close to ideal. Every LC resonator is directly working into a resistor which lowers the Q anyway so any losses in the L and C components can be absorbed into this resistor in the design. Aggregated designs usually require high Q resonators.
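The synchronous-tuning bandwidth shrinkage can be checked numerically by cascading identical single-tuned stages and measuring the overall 3 dB bandwidth (a sketch with assumed, illustrative values of ω0 and Q):

```python
import numpy as np

w0, Q = 1.0, 20.0            # illustrative resonant frequency and stage Q

def stage_power(w):
    """Normalised power response of one single-tuned stage."""
    return 1.0 / (1.0 + Q**2 * (w / w0 - w0 / w) ** 2)

def bandwidth(n):
    """Overall -3 dB bandwidth of n identical cascaded stages (bisection)."""
    def f(w):                # positive inside the band, negative outside
        return stage_power(w) ** n - 0.5
    lo, hi = w0, 2.0 * w0    # bracket the upper band edge
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    upper = 0.5 * (lo + hi)
    return upper - w0**2 / upper  # band edges are geometrically symmetric

b1 = bandwidth(1)
for n in (2, 4):
    shrink = bandwidth(n) / b1
    expect = np.sqrt(2 ** (1.0 / n) - 1)  # ~0.64 for n = 2, ~0.44 for n = 4
    assert abs(shrink - expect) < 1e-6
```

For this stage response the agreement is exact, because the per-stage power ratio at the overall band edge is 2^(1/n) and the single-stage bandwidth scales as √(m − 1)/Q.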
Also, stagger-tuned circuits have resonator components with values that are quite close to each other and in synchronously tuned circuits they can be identical. The spread of component values is thus less in stagger-tuned circuits than in aggregated circuits. Design Tuned amplifiers such as the one illustrated at the beginning of this article can be more generically depicted as a chain of transconductance amplifiers each loaded with a tuned circuit. where for each stage (omitting the suffixes) gm is the amplifier transconductance C is the tuned circuit capacitance L is the tuned circuit inductance G is the sum of the amplifier output conductance and the input conductance of the next amplifier. Stage gain The gain A(s), of one stage of this amplifier is given by
\[ A(s) = \frac{g_m}{C} \cdot \frac{s}{s^2 + s\,\dfrac{G}{C} + \dfrac{1}{LC}}, \]
where s is the complex frequency operator. This can be written in a more generic form, that is, not assuming that the resonators are the LC type, with the following substitutions,
\[ \omega_0 = \frac{1}{\sqrt{LC}} \] (the resonant frequency)
\[ A_0 = \frac{g_m}{G} \] (the gain at resonance)
\[ Q_0 = \frac{\omega_0 C}{G} \] (the stage quality factor)
Resulting in,
\[ A(s) = A_0\,\frac{\omega_0}{Q_0} \cdot \frac{s}{s^2 + \dfrac{\omega_0}{Q_0}\,s + \omega_0^2}. \]
Stage bandwidth The gain expression can be given as a function of (angular) frequency by making the substitution \(s = i\omega\), where i is the imaginary unit and ω is the angular frequency,
\[ A(i\omega) = A_0\,\frac{\omega_0}{Q_0} \cdot \frac{i\omega}{\omega_0^2 - \omega^2 + i\,\dfrac{\omega_0}{Q_0}\,\omega}. \]
The frequency at the band edges, ωc, can be found from this expression by equating the value of the gain at the band edge to the magnitude of the expression,
\[ \frac{A_0}{\sqrt{m}} = |A(i\omega_c)|, \]
where m is defined as above and equal to two if the 3 dB points are desired. Solving this for ωc and taking the difference between the two positive solutions finds the bandwidth Δω,
\[ \Delta\omega = \sqrt{m-1}\;\frac{\omega_0}{Q_0}, \]
and the fractional bandwidth B,
\[ B = \frac{\Delta\omega}{\omega_0} = \frac{\sqrt{m-1}}{Q_0}. \]
Overall response The overall response of the amplifier is given by the product of the individual stages,
\[ A(s) = \prod_{k=1}^{n} A_k(s). \]
It is desirable to be able to design the filter from a standard low-pass prototype filter of the required specification. Frequently, a smooth Butterworth response will be chosen but other polynomial functions can be used that allow ripple in the response.
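With assumed, illustrative component values, the band-edge condition above can be checked directly against the closed-form bandwidth:

```python
import numpy as np

gm, C, L, G = 5e-3, 1e-9, 1e-6, 1e-3        # illustrative stage values
w0 = 1.0 / np.sqrt(L * C)                   # resonant frequency
A0 = gm / G                                 # gain at resonance
Q0 = w0 * C / G                             # stage quality factor

def gain(w):
    """Stage gain A(i*w) from the component-value expression."""
    s = 1j * w
    return (gm / C) * s / (s * s + s * G / C + 1.0 / (L * C))

assert abs(abs(gain(w0)) - A0) < 1e-9 * A0  # |A(i w0)| = A0 at resonance

def band_edge(lo, hi, m=2.0):
    """Bisect |A(iw)| = A0/sqrt(m) inside the bracket [lo, hi]."""
    target = A0 / np.sqrt(m)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (abs(gain(mid)) - target) * (abs(gain(lo)) - target) <= 0:
            hi = mid        # sign change between lo and mid: root is there
        else:
            lo = mid
    return 0.5 * (lo + hi)

m = 2.0
wl = band_edge(0.5 * w0, w0)   # lower 3 dB edge
wu = band_edge(w0, 2.0 * w0)   # upper 3 dB edge
dw = wu - wl
assert abs(dw - np.sqrt(m - 1) * w0 / Q0) < 1e-6 * w0
```

The numerically located band edges reproduce Δω = √(m − 1) ω0/Q0, which for m = 2 is simply G/C.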
A popular choice for a polynomial with ripple is the Chebyshev response for its steep skirt. For the purpose of transformation, the stage gain expression can be rewritten in the more suggestive form,
\[ A(s) = \frac{A_0}{1 + Q_0 \left( \dfrac{s}{\omega_0} + \dfrac{\omega_0}{s} \right)}. \]
This can be transformed into a low-pass prototype filter with the transform
\[ \frac{s'}{\omega'_c} = Q_0 \left( \frac{s}{\omega_0} + \frac{\omega_0}{s} \right), \]
where ω′c is the cutoff frequency of the low-pass prototype. This can be done straightforwardly for the complete filter in the case of synchronously tuned amplifiers where every stage has the same ω0 but for a stagger-tuned amplifier there is no simple analytical solution to the transform. Stagger-tuned designs can be approached instead by calculating the poles of a low-pass prototype of the desired form (e.g. Butterworth) and then transforming those poles to a band-pass response. The poles so calculated can then be used to define the tuned circuits of the individual stages. Poles The stage gain can be rewritten in terms of the poles by factorising the denominator;
\[ A(s) = A_0\,\frac{\omega_0}{Q_0} \cdot \frac{s}{(s - p)(s - p^*)}, \]
where p, p* are a complex conjugate pair of poles, and the overall response is,
\[ A(s) = \prod_{k=1}^{n} \frac{a_k\,s}{(s - p_k)(s - p_k^*)}, \]
where the ak = A0kω0k/Q0k. From the band-pass to low-pass transform given above, an expression can be found for the poles in terms of the poles of the low-pass prototype, qk,
\[ p_k = \omega_{0B} \left[ \frac{q_k}{2 Q_\mathrm{eff}\,\omega'_c} \pm i \sqrt{1 - \left( \frac{q_k}{2 Q_\mathrm{eff}\,\omega'_c} \right)^{2}} \right], \]
where ω0B is the desired band-pass centre frequency and Qeff is the effective Q of the overall circuit. Each pole in the prototype transforms to a complex conjugate pair of poles in the band-pass and corresponds to one stage of the amplifier. This expression is greatly simplified if the cutoff frequency of the prototype, ω′c, is set to the final filter bandwidth ω0B/Qeff,
\[ p_k = \frac{q_k}{2} \pm i\,\omega_{0B} \sqrt{1 - \left( \frac{q_k}{2\,\omega_{0B}} \right)^{2}}. \]
In the case of a narrowband design \(\omega_{0B} \gg |q_k|\), which can be used to make a further simplification with the approximation
\[ p_k \approx \frac{q_k}{2} \pm i\,\omega_{0B}. \]
These poles can be inserted into the stage gain expression in terms of poles. By comparing with the stage gain expression in terms of component values, those component values can then be calculated. Applications Staggered tuning is of most benefit in wideband applications.
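The pole-placement recipe can be sketched for a two-stage Butterworth design (the centre frequency and effective Q below are assumptions for illustration): each low-pass prototype pole is shifted to the band centre, and each stage's ω0k and Q0k are read off from the resulting conjugate pair.

```python
import numpy as np

w0B, Qeff = 1.0, 10.0                  # assumed band centre and overall Q
wc = w0B / Qeff                        # prototype cutoff = overall bandwidth

# n = 2 Butterworth low-pass prototype poles (left half-plane)
q = wc * np.exp(1j * np.array([3, 5]) * np.pi / 4)

# Narrowband pole placement: p_k ~ q_k/2 + i*w0B
p = q / 2 + 1j * w0B

# Read each stage's resonant frequency and Q from (s - p)(s - p*)
w0k = np.abs(p)                        # stage resonant frequencies
Q0k = w0k / (-2 * p.real)              # stage quality factors

def H(w):
    """Cascade of the two tuned stages (unit gain at each resonance)."""
    s = 1j * w
    out = 1.0 + 0j
    for wk, qk in zip(w0k, Q0k):
        out *= (wk / qk) * s / (s * s + (wk / qk) * s + wk * wk)
    return out

# The stages are detuned either side of the band centre ("staggered")
assert w0k.min() < w0B < w0k.max()

# Measure the overall -3 dB bandwidth on a dense grid
w = np.linspace(0.8 * w0B, 1.2 * w0B, 40001)
mag = np.abs([H(x) for x in w])
inband = w[mag >= mag.max() / np.sqrt(2)]
bw = inband.max() - inband.min()
assert abs(bw - w0B / Qeff) < 0.01 * w0B   # close to the designed bandwidth
```

The measured bandwidth comes out within about 1% of ω0B/Qeff; the residual error is the cost of the narrowband approximation used for the pole placement.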
It was formerly commonly used in television receiver IF amplifiers. However, SAW filters are more likely to be used in that role nowadays. Staggered tuning has advantages in VLSI for radio applications such as wireless LAN. The low spread of component values makes it much easier to implement in integrated circuits than traditional ladder networks. See also Double-tuned amplifier References Bibliography Chattopadhyay, D., Electronics: Fundamentals and Applications, New Age International, 2006 . Gulati, R. R., Modern Television Practice Principles, Technology and Servicing, New Age International, 2002 . Iniewski, Krzysztof, CMOS Nanoelectronics: Analog and RF VLSI Circuits, McGraw Hill Professional, 2011 . Maheswari, L. K.; Anand, M. M. S., Analog Electronics, PHI Learning, 2009 . Moxon, L. A., Recent Advances in Radio Receivers, Cambridge University Press, 1949 . Pederson, Donald O.; Mayaram, Kartikeya, Analog Integrated Circuits for Communication, Springer, 2007 . Sedha, R. S., A Textbook of Electronic Circuits, S. Chand, 2008 . Wiser, Robert, Tunable Bandpass RF Filters for CMOS Wireless Transmitters, ProQuest, 2008 . Electronic amplifiers Signal processing filter
Staggered tuning
[ "Chemistry", "Technology" ]
1,888
[ "Amplifiers", "Electronic amplifiers", "Filters", "Signal processing filter" ]
44,460,457
https://en.wikipedia.org/wiki/Incidence%20poset
In mathematics, an incidence poset or incidence order is a type of partially ordered set that represents the incidence relation between vertices and edges of an undirected graph. The incidence poset of a graph G has an element for each vertex or edge in G; in this poset, there is an order relation x ≤ y if and only if either x = y or x is a vertex, y is an edge, and x is an endpoint of y. Example As an example, a zigzag poset or fence with an odd number of elements, with alternating order relations a < b > c < d... is an incidence poset of a path graph. Properties Every incidence poset of a non-empty graph has height two. Its width equals the number of edges plus the number of acyclic connected components. Incidence posets have been particularly studied with respect to their order dimension, and its relation to the properties of the underlying graph. The incidence poset of a connected graph G has order dimension at most two if and only if G is a path graph, and has order dimension at most three if and only if G is planar (Schnyder's theorem). However, graphs whose incidence posets have order dimension 4 may be dense and may have unbounded chromatic number. Every complete graph on n vertices, and by extension every graph on n vertices, has an incidence poset with order dimension O(log log n). If an incidence poset has high dimension then it must contain copies of the incidence posets of all small trees either as sub-orders or as the duals of sub-orders. See also Line graph, a related construction References Graph theory Order theory
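These structural claims are easy to verify by brute force on small graphs; the following sketch (illustrative code, not from the source) builds an incidence poset and checks the height and width formulas for a path and a triangle:

```python
from itertools import combinations

def incidence_poset(vertices, edges):
    """Elements are vertices and edges; v <= e iff v is an endpoint of e."""
    elements = list(vertices) + [frozenset(e) for e in edges]
    def leq(x, y):
        return x == y or (not isinstance(x, frozenset)
                          and isinstance(y, frozenset) and x in y)
    return elements, leq

def height(elements, leq):
    """Longest chain; in an incidence poset a chain is at most v < e."""
    nontrivial = any(x != y and leq(x, y)
                     for x in elements for y in elements)
    return 2 if nontrivial else 1

def width(elements, leq):
    """Size of the largest antichain (brute force, small posets only)."""
    for k in range(len(elements), 0, -1):
        for sub in combinations(elements, k):
            if all(not leq(x, y) and not leq(y, x)
                   for x, y in combinations(sub, 2)):
                return k
    return 0

# Path a-b-c: 2 edges, one acyclic component -> width 2 + 1 = 3
els, leq = incidence_poset("abc", [("a", "b"), ("b", "c")])
assert height(els, leq) == 2 and width(els, leq) == 3

# Triangle: 3 edges, no acyclic component -> width 3 + 0 = 3
els, leq = incidence_poset("abc", [("a", "b"), ("b", "c"), ("c", "a")])
assert height(els, leq) == 2 and width(els, leq) == 3
```

For the path, the maximum antichain is the three mutually incomparable vertices; for the triangle it is either all three vertices or all three edges.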
Incidence poset
[ "Mathematics" ]
355
[ "Discrete mathematics", "Graph theory", "Combinatorics", "Mathematical relations", "Order theory" ]
44,462,309
https://en.wikipedia.org/wiki/Alexander%20Tuzhilin
Alexander Sergei Tuzhilin (born 1957) is a Professor of Data Science and Information Systems and the Leonard N. Stern Endowed Professor of Business at New York University's Stern School of Business. He also serves as the Dean of Computer Science at the University of the People on a pro bono basis. Professor Tuzhilin is known for his work on personalization, recommender systems, machine learning and AI, where he has made several contributions, including being instrumental in developing the area of Context-Aware Recommender Systems (CARS), proposing novel methods of providing unexpected and cross-domain recommendations based on the principles of deep learning, developing novel approaches to customer segmentation, and discovery of unexpected patterns in data. Education Tuzhilin received his B.A. in Mathematics from New York University in 1980, M.S. in Engineering Economics from the Department of Management Science and Engineering at Stanford University in 1981, and Ph.D. in computer science from NYU's Courant Institute of Mathematical Sciences in 1989, his doctoral advisor being Zvi Kedem. Career Tuzhilin joined the faculty at the New York University Stern School of Business in 1989 as an Assistant Professor of Information Systems. He is currently the Leonard N. Stern Professor of Business. He is also the Dean of Computer Science at the University of the People. Research Tuzhilin researches data mining in databases, personalization, recommender systems, and customer relationship management. In 2006, Tuzhilin was hired by Google and given access to its monitoring systems to do a study on click fraud. This was part of a class-action settlement requiring Google to offer advertisers up to $60 million in refunds. Tuzhilin concluded that defining and tracking click fraud will be difficult, because it is often not possible to decipher whether Web surfers were clicking on an advertising link out of malice or as part of an innocent online excursion.
Patents In 2001, Tuzhilin patented a method of building customer profiles and using them to recommend products and services. Tuzhilin said of the patent, 'It's very broad and very general, and occupies some prime real estate in this space. It essentially covers technologies that are crucial for implementation of customer relationship management.' He added that the patent was careful not to stipulate that the technology was designed for Internet applications. Others pointed out that there were legal exceptions to business methods patents. Any individuals or companies that can show they have been engaged in a business practice for at least a year before a patent application for that practice was filed may be able to circumvent the patent. In March 2012, Yahoo sued Facebook for violating 10 of its patents. Facebook countersued Yahoo, claiming that it violated Facebook patents that covered 80% of the Yahoo's 2011 revenues. Three of Facebook's patents were originally granted to Tuzhilin. References External links New York University Stern School of Business faculty Living people 1957 births University of the People faculty Courant Institute of Mathematical Sciences alumni American information theorists American data scientists Data miners Information systems researchers 20th-century American scientists 21st-century American scientists Stanford University School of Engineering alumni American university and college faculty deans
Alexander Tuzhilin
[ "Technology" ]
643
[ "Information systems", "Information systems researchers" ]
62,094,228
https://en.wikipedia.org/wiki/Tank%20leak%20detection
Tank leak detection is implemented to alert the operator to a suspected release from any part of a storage tank system, which helps prevent soil contamination and loss of product. In many countries, regulated underground storage tanks (USTs) are required to have an approved leak detection method so that leaks are discovered quickly and the release is stopped in time. Leak detection standards in Europe The European Committee for Standardization standard EN 13160 defines five different classes (technical methods) of leak detection systems to be used on tanks and pipes. The number of the class indicates the effectiveness of the installed leak detection system, class 1 being the highest and class 5 being the lowest level. Class 1 The system is inherently safe: a leak is detected before any liquid enters the environment. These systems detect a leak above or below the liquid level of a double-wall system. Once a leak is detected, fuel can be removed from the tank before any product enters the environment. Class 2 The system monitors the pressure of a liquid filling the interstitial space of a double-wall system and alarms on any leak. However, once the tank is breached, the liquid either contaminates the product or flows into the ground; in both situations contamination cannot be prevented. Class 3 Liquid/vapour sensors are placed at the lowest point in a system and detect the presence of liquid or hydrocarbon vapour within the interstitial space. Once a leak is detected an alarm will sound. The sensors cannot detect the failure of the outer wall, so the product may enter the environment. Class 4 The system analyses rates of change in tank contents (i.e. leakage into or out of the tank). If a leak occurs in a single-wall system, the product will always have been released to the environment before the leak is detected. For tanks there are two subclasses of the system.
4a Systems based on fuel reconciliation (measurement of the amount sold through the dispenser against the amount that leaves the tank according to the tank gauge). Any discrepancy triggers an alarm. 4b Detection of tank leaks during quiet periods (the liquid level changes while the tank is not dispensing fuel). Class 5 In this system, monitoring wells with installed sensors are located around the tank site. The sensors detect a leak from the installation. As in the case of class 4, the product will always be released to the environment before the leak is detected. Leak detection standards in the USA In the USA, the Environmental Protection Agency (EPA) requires owners and operators to detect releases from their UST systems. EPA allows three categories of release detection: interstitial, internal, and external. These three categories include seven release detection methods. Interstitial method – secondary containment with interstitial monitoring; secondary containment and under-dispenser containment Internal methods – automatic tank gauging (ATG) systems; statistical inventory reconciliation (SIR); continuous in-tank leak detection External method – monitoring for vapors in the soil; monitoring for liquids on the groundwater Leak detection methods Automatic Tank Gauging (ATG) – the basic function of the system is to monitor the fuel level in tanks continuously to see if the tank is leaking. A probe installed in the tank is linked electronically to a nearby control device where the received data (product level and temperature) are recorded and automatically analyzed. These systems automatically calculate the changes in product volume that can indicate a leaking tank. The ATG must be operated in one of the following modes: Inventory mode – activities of an in-service tank, together with deliveries, are recorded. Test mode – the test is performed when the tank is shut down and there is no dispensing or delivery. The product level and temperature are measured for at least one hour.
However, some systems, known as continuous ATGs, do not require the tank to be taken out of service to perform a test. There are methods combining automatic tank gauges with statistical inventory reconciliation, where the gauge provides liquid level and temperature data to a computer running SIR software, which performs the analysis to detect leaks. Statistical Inventory Reconciliation (SIR) SIR originated in the early 1980s. In SIR methods, statistical techniques are applied to inventory, delivery, and dispensing data collected over time and are used to determine whether or not a tank system is leaking. On a regular basis, information about the current tank level and complete records of withdrawals from and deliveries to the UST are processed by a computer program that performs a statistical analysis of the received data. Replacing simple arithmetic with appropriate statistical procedures allows the leak detection capability of inventory reconciliation to be considerably improved. SIR vendors must demonstrate that they can detect leaks of 0.2 gallons per hour in order to be acceptable as a monthly leak detection method. Such a solution can reveal not only tank leakage but also possible theft, over-dispensing, or short deliveries. Vapour Monitoring Vapour monitoring detects fumes from leaked product in the soil around the leaking tank. It can be categorised into two types: active monitoring, in which a special tracer chemical added to the UST is detected, and passive monitoring, which measures product vapours in the soil around the UST. Special monitoring wells or sampling points must be placed in the tank backfill. A minimum of two wells is recommended for a single tank excavation. Three or more wells are recommended for an excavation with two or more tanks. The equipment used can either analyse the gathered vapour immediately or only collect a sample, which is then analysed in a laboratory.
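The statistical idea behind SIR can be illustrated with a toy reconciliation: accumulate the daily difference between the gauged stock and the book stock, then test whether that cumulative variance trends downward faster than the 0.2 gal/hr threshold. All quantities below are invented for illustration; real SIR products use considerably more sophisticated statistics:

```python
import numpy as np

GAL_PER_DAY_THRESHOLD = 0.2 * 24        # EPA's 0.2 gal/hr as a daily rate

def cumulative_variance(start, deliveries, sales, measured):
    """Daily (measured - book) stock; book = start + deliveries - sales."""
    book = start + np.cumsum(deliveries) - np.cumsum(sales)
    return np.asarray(measured) - book

def leak_suspected(variance, threshold=GAL_PER_DAY_THRESHOLD):
    """Fit a trend line; flag if stock vanishes faster than the threshold."""
    days = np.arange(len(variance))
    slope = np.polyfit(days, variance, 1)[0]   # gallons per day
    return slope < -threshold

rng = np.random.default_rng(1)
days = 30
deliveries = np.zeros(days); deliveries[::7] = 2000.0   # weekly delivery
sales = rng.uniform(250.0, 350.0, days)                 # daily dispensing
start = 6000.0
true_book = start + np.cumsum(deliveries) - np.cumsum(sales)
noise = rng.normal(0.0, 3.0, days)                      # gauge noise

tight = cumulative_variance(start, deliveries, sales, true_book + noise)
leaking = cumulative_variance(start, deliveries, sales,
                              true_book - 6.0 * np.arange(1, days + 1) + noise)

assert not leak_suspected(tight)       # tight tank: no flag
assert leak_suspected(leaking)         # ~6 gal/day loss: flagged
```

The trend-fit step is the key improvement over simple day-by-day arithmetic: gauge noise averages out over the record, while a real leak accumulates.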
The system is not inherently safe: by the time the vapor sensors go into alarm, the contamination has likely already occurred. Interstitial Monitoring The method requires secondary containment; this can be a double wall of the UST, where the outer tank wall provides a barrier between the inner tank and the environment. Interstitial methods include the use of hydrocarbon-sensitive sensor cables or probes connected to a monitoring console. Once hydrocarbons are detected, an alarm goes off. Another method is vacuum monitoring, where a sensor monitors the vacuum in the interstitial space of the tank; in case of a leak, the vacuum in the space begins to change. It is also possible to partially fill the interstitial space of the tank with a monitoring fluid (brine or glycol solutions). Once the level of the fluid changes, a leak may be present. Monitoring for Contamination in Groundwater Monitoring wells are placed close to the UST and allow continuous measurements for leaked product. This method detects the presence of liquid product floating on the groundwater. The wells can be monitored periodically (at least once every 30 days) with hand-held equipment or with the use of permanently installed monitoring devices. This method cannot be used at sites where groundwater is more than 20 feet below the surface or where the subsurface soil or backfill material (or both) consists of gravels, coarse to medium sands, or other similarly permeable materials. A minimum of two wells is recommended for a single tank excavation. Three or more wells are recommended for an excavation with two or more tanks. Product is released to the environment before a leak is detected. Manual Tank Gauging The method requires keeping the tank undisturbed (no liquid is added or subtracted) for a designated period (e.g. 36 hours). The length of the testing period depends on the size of the tank and whether the method is used alone or in combination with tank tightness testing.
During this period the contents of the tank are measured manually twice, at the beginning and at the end of the period. Significant changes in the volume of the tank’s contents over the test period can indicate a possible leak. References Fuels
Tank leak detection
[ "Chemistry" ]
1,544
[ "Fuels", "Chemical energy sources" ]
62,094,909
https://en.wikipedia.org/wiki/Brewer%27s%20spent%20grain
Brewer's spent grain (BSG) or draff is a food waste that is a byproduct of the brewing industry that makes up 85 percent of brewing waste. BSG is obtained as a mostly solid residue after wort production in the brewing process. The product is initially wet, with a short shelf-life, but can be dried and processed in various ways to preserve it. Because spent grain is widely available wherever beer is consumed and is frequently available at a low cost, many potential uses for this waste have been suggested and studied as a means of reducing its environmental impact, such as use as a food additive, animal feed, fertilizer or paper. Composition The majority of BSG is composed of barley malt grain husks in combination with parts of the pericarp and seed coat layers of the barley. Though the composition of BSG can vary, depending on the type of barley used, the way it was grown, and other factors, BSG is usually rich in cellulose, hemicelluloses, lignin, and protein. BSG is also naturally high in fiber, making it of great interest as a food additive, replacing low-fiber ingredients. Food additive The high protein and fiber content of BSG makes it an obvious target to add to human foods. BSG can be ground and then sifted into a powder that can increase fiber and protein content while decreasing caloric content, possibly replacing flour in baked goods and other foods, such as breadsticks, cookies, and even hot dogs. Some breweries that also operate restaurants re-use their BSG in recipes, such as in pizza crust. Grainstone is an Australian-based company that has developed a modular, energy-efficient process to convert BSG into a premium specialty flour, and a process to produce nutraceuticals, including protein isolate, soluble dietary fibre, and antioxidants. Livestock feed The low cost and high availability of BSG has led to its use as livestock feed. BSG can be fed to livestock immediately in its wet state or once processed and dried.
The high protein content in BSG offers a wide variety of amino acids essential in the diet of livestock. In fact, supplementing BSG in cow diets may increase milk yield, milk total solids content, and milk fat yield, when compared to maize. Fertilizer BSG may be an effective, affordable soil amendment for agricultural purposes. Its high protein content translates to high nitrogen availability in soils, which could be ideal for many common crops such as beets, spinach, kale, and onions. In combination with compost, BSG may improve germination rate and the availability of organic matter in soil. Studies have shown that BSG in addition to compost has a stronger, positive effect on germination than compost alone. An additional study has shown that the inclusion of BSG in soil is more effective at sodic soil reclamation and corn seed germination than gypsum, which is traditionally used in sodic soils. Paper BSG can be used to make recycled paper, known as Craft Beer Paper or Beer Paper. Because of the BSG, the paper shows a subtle beer hue. It has been made into postcards, coasters, gift boxes, and A4 and B5-sized beer paper sheets that can be printed just like regular white printing paper. Beer Paper has obtained FSC® certification. Ceramics Brewer's spent grains can be recycled through their incorporation in a traditional ceramic paste used in the manufacture of common bricks. This incorporation affects the mechanical strength, porosity, and thermal conductivity of the ceramic material. Mushroom Substrate Both completely dried BSG and fresh BSG (with an average ~79% water content) have successfully been utilized in small home and small commercial mushroom substrate production. Additional alternative sources of lignin from coffee, softwood pine/Douglas fir sawdust, and harder wood sawdust (in pellet form as well) have been shown to help in mushroom production block manufacturing. 
These industrial byproducts may offer less expensive material costs in block production as well as a higher level of waste reuse in mushroom production. The highest average yield (many growing factors are involved) was found with a fuel pellet : BSG : hot tap water ratio of 450 : 725 : 825 (all measured in grams in the study, per T3 filter patch bag used for the production blocks). Testing for field capacity serves as a final check when fine-tuning these ratios: experienced block producers assess proper hydration by hand-compression testing of a hydrated, mixed batch of substrate. See also Dietary fiber References Biological waste Brewing
Brewer's spent grain
[ "Biology" ]
965
[ "nan" ]
62,095,046
https://en.wikipedia.org/wiki/Taxonomic%20boundary%20paradox
The term boundary paradox refers to the conflict between traditional, rank-based classification of life and evolutionary thinking. In the hierarchy of ranked categories it is implicitly assumed that the morphological gap grows with increasing rank: two species from the same genus are more similar than two other species from different genera in the same family, these latter two species are more similar than any two species from different families of the same order, and so on. However, this requirement may only be satisfied for the classification of contemporary organisms; difficulties arise if we wish to classify descendants together with their ancestors. Theoretically, such a classification necessarily involves segmentation of the spatio-temporal continuum of populations into groups with crisp boundaries. However, the problem is not only that many parent populations would separate at species level from their offspring. The truly paradoxical situation is that some between-species boundaries would necessarily coincide with between-genus boundaries, and a few between-genus boundaries with borders between families, and so on. This apparent ambiguity cannot be resolved in Linnaean systems; resolution is only possible if classification is cladistic (see below). Historical background Jean-Baptiste Lamarck, in Philosophie zoologique (1809), was the first to question the objectivity of rank-based classification of life. Half a century later, Charles Darwin explained that the sharp separation of groups of organisms observed at present becomes less obvious if we go back into the past. In his book on orchids, Darwin also warned that the system of ranks would not work if we knew more details about past life. Finally, Richard Dawkins has recently argued along the same lines. Illustrative models The paradox may be best illustrated by model diagrams similar to Darwin's single evolutionary tree in On the Origin of Species. 
In these tree graphs, dots represent populations and edges correspond to parent-offspring relations. The trees are placed into a coordinate system which is one-dimensional (time) for a single lineage, and two-dimensional (differentiation vs. time) for cladogenesis or evolution with divergence. In the single lineage model we now consider a sequence of populations along an extremely long time axis, say several hundred million years, with the last dot representing an extant population. In the figure there is space for only a few dots, and edges between adjacent populations are hidden. We could use a second axis to express differentiation, but it is not necessary for our purposes. Here we assume that there is no extinction and all branching events are disregarded (if there were no branches at all, then the changes would correspond to a typical anagenesis). Classification of organisms along this sequence into species is shown by small ellipses. If the differences between certain species are judged to be large enough to justify classification into distinct genera, then generic separators must each coincide with a between-species boundary. If differences reach family-level differentiation, which is easy to imagine over the very long time we consider here, the consequence is that a family-level border must overlap with a between-genus and, in turn, a between-species border (gray arrow in the figure). One cannot imagine, however, that a parent and its offspring are so distinct that they should be classified into different families, or even genera – that would be paradoxical. This illustrates Dawkins's argument above on human ancestry at the level of genera, Homo and Australopithecus. Darwin placed emphasis on divergence, that is, when a parent population splits and these offspring populations diverge gradually, each following their own anagenetic sequence, potentially with further divergence events. 
In this case, evolutionary (say morphological) divergence is expressed on a new, horizontal axis and time becomes the vertical axis. At time point 1 an imaginary taxonomist judges populations A and B to belong to different species, but within the same genus. Their respective descendants, C and D, are observed at time 2, and considered to represent two separate genera because their morphological difference is large. The paradox is that while A and C, as well as B and D, remain within generic limits, C and D do not, so that ancestors cannot be meaningfully classified together with their descendants in a Linnaean system. This figure illustrates the problem Darwin discussed in the fish and reptile example. Let us consider a hypothetical evolutionary tree with four recent species, A to D, classified into two genera that are fairly distant from each other morphologically. We assume, further, that from the fossil record we only know their common ancestor, E, representing yet another genus for a taxonomist because it takes an "intermediate" position between the other two – yet is considerably different from both. All other forms went extinct; therefore we have a classification of these five species into three genera, which would be illogical if more fossils were known. This illustrates Darwin's and Dawkins's examples on the role of gaps in the fossil record in classification – and nomenclature. Resolution As demonstrated, given a Darwinian evolutionary model, descendants and their ancestors cannot be classified together within the system of Linnaean ranks. A solution is provided by cladistic classification, in which each group is composed of an ancestor and all of its descendant populations, a condition called monophyly. In the above models monophyletic groups may be obtained by cutting a branch (subtree) from the tree at places where, for instance, new apomorphic (evolutionarily derived) characters appear. 
For these groups there is no need to consider how much change occurred between members of one group as compared to those of the other. See also Clade Sister group Temporal paradox (paleontology) References Biological classification Phylogenetics History of biology
Taxonomic boundary paradox
[ "Biology" ]
1,148
[ "Bioinformatics", "Phylogenetics", "Taxonomy (biology)", "nan" ]
62,095,297
https://en.wikipedia.org/wiki/S%C3%B3nia%20Rocha
Sónia Maria Campos Soares da Rocha, usually referred to as Professor Sónia Rocha, is a Portuguese cell biologist who holds a personal chair in biochemistry at the University of Liverpool, where she is the head of the Department of Biochemistry. Rocha runs an active multidisciplinary cell signaling research group studying hypoxia, focused on transcription factors such as hypoxia-inducible factors and NF-κB. Her laboratory is currently based in the Institute of Integrative Biology. Early life and education Rocha was born in Vila Nova de Gaia, Portugal, and was educated at the University of Porto, where she received the equivalent of a UK first-class honours degree in biology from the Faculty of Science. She subsequently studied for a PhD at the Swiss Federal Institute of Technology ETH Zurich in Zurich, Switzerland, graduating in 2000 after working alongside Martin Pruschy and K. H. Winterhalter. Career and research highlights Following her PhD, Rocha took up a postdoctoral research position in the Centre for Gene Regulation and Expression, in the School of Life Sciences at the University of Dundee, where she was supervised by Neil Perkins. In 2005, she was awarded an Independent RCUK Fellowship to continue her work on the molecular basis of transcription, taking up a position as an RCUK fellow and tenure-track principal investigator. She then became a principal investigator in 2011, and in the same year was awarded a prestigious Cancer Research UK senior research fellowship, which was taken up in the Centre for Gene Regulation and Expression in Dundee between 2011 and 2017. She was deputy director of the Centre for Gene Regulation and Expression between 2012 and 2017, and was then promoted to professor of molecular and cellular biology in 2016. 
In July 2017, she took up the position of head of the Biochemistry Department in the Institute of Integrative Biology, and since 2020 has served as executive dean of the Institute of Systems, Molecular & Integrative Biology at the University of Liverpool. Rocha is an experienced undergraduate and postgraduate teacher and convenor, and has delivered courses on cell signalling, genes and cancer, and genes and proteins, and was a lecturer in gene regulation and expression modules at the University of Dundee, including module coordination. As of 2019, she has published >60 peer-reviewed research articles, co-authored 12 peer-reviewed research reviews and contributed to one book chapter. Amongst her most cited work is the discovery that Hypoxia-Inducible Factor is regulated by NF-κB and a recent paper in Science that demonstrates that hypoxia modulates histone methylation and reprograms chromatin. This paper was published back-to-back with a study co-authored by 2019 Nobel Prize in Physiology or Medicine winner William Kaelin Jr. These papers were highlighted in an independent editorial. Other relevant publications include a chemical biology approach to analysing hypoxic signalling, analysis of the targeted degradation of the oxygen-regulated prolyl hydroxylase enzyme, also known as Procollagen-proline dioxygenase, and molecular analysis of the regulation of PHD1 by protein phosphorylation. Research networks, scientific service and scientific outreach Rocha's funding portfolio includes a Wellcome Trust Collaborative Award in Science, for which she is lead (2017–2022), and significant further PI funding from CRUK, the MRC, the AICR, the Royal Society, and BBSRC equipment funding as part of multi-disciplinary applications. Through her international research group, Rocha has supported the research training of many staff and students, graduating 11 PhD students as of 2019. 
She is active in the promotion of science and research to policy makers and the wider scientific community, as evidenced by her position on several committees, including: co-chair of the organising committee for the Genes and Cancer meeting (2019–present); chair of the organising committee for the UK and Ireland NF-kappaB and IKK workshop 2019; chair of the LIFE_16 Molecular and Cellular Biology panel, Finnish Science Academy (2019–present); external reviewer for Newcastle University Tenure Track Fellows, Faculty of Medicine (2018–present); member of the Biomedicines editorial board (2018–present); member of the evaluation team for "Fundacion La Caixa" grants (2018–present); member of the editorial advisory board for FEBS Journal (2017–present); member of the North West Cancer Research Centre executive committee (2017–present); and member of the Health and Science, Molecular and Cellular Biology panel, Finnish Science Academy (2017–2018). Since 2016, she has been a member of the Henry Dale Fellowship panel, Wellcome Trust; she was a member of the CRUK travel award panel (2016–2019); and since 2015 she has been a member of the academic editorial board of Scientific Reports. Between 2015 and 2018 she was a member of the Breast Cancer Now scientific advisory board, and between 2013 and 2017 she was the head of the SLS University of Dundee MRC PhD programme. Since 2013 she has been a member of the academic editorial board of the open-access journal PLOS One. Since 2008, she has been a member of the editorial advisory board of the Biochemical Journal. Rocha has given over 50 invited seminars at universities across the world. She has also acted as an external examiner at a number of universities, including Belfast University, the University of Oulu, Finland, the University of Barcelona, Imperial College London, the University of Cambridge and Newcastle University. 
Awards and honours Rocha is an invited member of the Research Excellence Framework 2021 sub-panel for UoA5, and an elected Fellow of the Royal Society of Biology. She was the recipient of both an Independent Cancer Research UK senior fellowship and an Independent RCUK Junior Fellowship. In 2011, her 2008 Biochemical Journal paper was awarded 'paper of the year' in the Biochemical Journal-Gene section, and in 2009, her citation classic on the subject of gene regulation by hypoxia was the most cited paper of the year in the Biochemical Journal. In 2008, Rocha also received the outstanding achievement award at both the 13th World Congress on Advances in Oncology and the 11th International Symposium on Molecular Medicine, Greece. References People from Vila Nova de Gaia Cell biologists Academics of the University of Liverpool University of Porto alumni Alumni of the University of Dundee Living people Portuguese women scientists Portuguese biochemists Women biochemists Year of birth missing (living people)
Sónia Rocha
[ "Chemistry" ]
1,314
[ "Biochemists", "Women biochemists" ]
62,095,796
https://en.wikipedia.org/wiki/27-Norcholestane
27-Norcholestane is a chemical compound with the formula , that is a steroid derivative. 27-Norcholestane is used as a biomarker to constrain the source age of sediments and petroleum through the ratio between 24-norcholestane and 27-norcholestane (norcholestane ratio, NCR), especially when used with other age-diagnostic biomarkers, like oleanane. See also Biomarker Cholestane Nor- References Steroids Biomarkers
27-Norcholestane
[ "Biology" ]
114
[ "Biomarkers" ]
62,096,300
https://en.wikipedia.org/wiki/Journal%20of%20Construction%20Engineering%20%26%20Management
The Journal of Construction Engineering and Management is a monthly peer-reviewed scientific journal published by the American Society of Civil Engineers covering construction material handling, equipment, production planning, scheduling, estimating, labor productivity, contract administration, and construction management. Abstracting and indexing The journal is abstracted and indexed in Ei Compendex, ProQuest databases, Civil Engineering Database, Inspec, Scopus, and EBSCO databases. References External links Civil engineering journals American Society of Civil Engineers academic journals Academic journals established in 1956 English-language journals Monthly journals
Journal of Construction Engineering & Management
[ "Engineering" ]
115
[ "Civil engineering journals", "Civil engineering" ]
62,096,452
https://en.wikipedia.org/wiki/Marta%20Catellani
Marta Catellani is an Italian chemist known for her discovery of the eponymous Catellani reaction in 1997. She was elected to the European Academy of Sciences in 2016. Catellani earned her Ph.D. in chemistry in 1971 from the University of Parma, where, as of 2019, she is a professor and chairs the Department of Organic Chemistry. Catellani completed her postdoctoral education at the University of Chicago. She has served as a visiting professor at Moscow State University (1992), Beijing Institute of Technology (2004), and University of Xi'an (2004). She was awarded a fellowship at the Japan Society for the Promotion of Science in 2012. Her research focuses on palladium as a catalyst for multistep organic reactions. The Catellani Reaction In chemistry, synthesis is the process of forming complex chemical compounds from simpler ones. These complex compounds are desirable for their wide-ranging properties. In order to produce the complex compounds, the simpler ones must "cooperate" in a specific way. This can be very difficult and requires patience, because of the time needed to form the bonds before their uses and properties can be tested. There was a need to optimize this process in order to speed up the development and testing of new compounds. In 1997, Catellani and her team found such a method. Catellani discovered a chain reaction process that simplified synthesis and increased yields of desirable complex compounds. Among the bonds the Catellani reaction is heavily used to create are carbon–carbon bonds, which are desirable for their stability and strength. These qualities make the bonds very useful in the makeup of more complex compounds. Since its discovery, the Catellani reaction has opened the door to other discoveries and improvements in chemistry. 
Specifically in the world of pharmaceuticals, the Catellani reaction has been a useful tool for synthesizing drugs more efficiently, aiding their development. Linoxepin is an example of a complex compound now much easier to obtain thanks to the Catellani reaction. This compound belongs to a group of compounds known as lignans, which are useful for relieving pain and may provide benefits to cancer patients. These examples show the vast and indirect benefits of the discovery of Catellani reactions. To chemists, the Catellani reaction is a tool that optimizes the process of making new compounds, and these new compounds are pivotal for advancing what is possible through chemistry. As new scientists study and build upon the Catellani reaction, it is worth remembering who provided the first understanding that opened up this new world of opportunity. References Living people 21st-century Italian chemists Italian women chemists Italian women scientists Members of the European Academy of Sciences and Arts Academic staff of the University of Parma Organic chemists Year of birth missing (living people) 20th-century Italian chemists
Marta Catellani
[ "Chemistry" ]
595
[ "Organic chemists" ]
62,096,516
https://en.wikipedia.org/wiki/RAB2B
Ras-related protein Rab-2B is a protein that in humans is encoded by the RAB2B gene. RAB2B is required for protein transport from the endoplasmic reticulum to the Golgi complex. It belongs to the small GTPase superfamily, specifically to the RAB protein family. Small GTPases are hydrolase enzymes that bind GTP and hydrolyze it to GDP. This makes small GTPases active when bound to GTP and inactive when bound to GDP. Within the small GTPase superfamily is the RAS family, which is divided into five groups: Ras, Rho, Ran, Rab and Arf GTPases. RAB2B's main function is regulating vesicle transport and membrane fusion. Structure RAB2B is a human protein whose gene is located on chromosome 14. Its core is made of basic elements such as oxygen, carbon, nitrogen and phosphorus. Its secondary structure contains eight alpha helices and six beta strands. Moreover, it has a magnesium ion and a GDP molecule attached. Mature RAB2B contains three post-translational modifications: a phosphoserine is found at position 202 instead of a normal serine, and two lipidations are found at positions 215–216. It has a motif domain between amino acids 35 and 43. Due to alternative splicing, two isoforms of this protein exist. Isoform 1 is the canonical sequence, meaning it is the most common one, with a molecular weight of 24,214 Da. Isoform 2 consists of just 151 amino acids, with a mass of 16,667 Da. Function Small GTPases of the RAB superfamily are recognized as key players of the protein machinery involved in vesicular transport and organelle dynamics in eukaryotic cells. RAB2B follows mainly exocytic pathways, from the endoplasmic reticulum to the Golgi complex. RAB proteins are involved in docking and fusion of transport vesicles with their target membranes. These proteins associate with effector proteins (GARIL4 and GARIL5) to create complexes. 
To perform its biological function, RAB2B has to switch from the GDP-bound form to the GTP-bound form, which is catalyzed by GEF proteins (guanine nucleotide exchange factors). On the other hand, when RAB2B is inactive in the cytosol, the GDP-bound form is maintained through interaction with GDI proteins (GDP dissociation inhibitors). Studies have also shown that RAB2B affects the IFN antiviral response induced by cytosolic DNA. Because of this, RAB2B deficiency allows vaccinia virus to replicate. After DNA stimulation, RAB2B attaches itself to the Golgi apparatus, along with stimulator of interferon genes (STING), the downstream signal mediator of cGAS. RAB2B's GTP-binding activity is required for its adherence to the Golgi apparatus, as well as for the recruitment of GARIL5, so that the RAB2B-GARIL5 complex can be formed. It has also been shown that GARIL5 deficiency affects the IFN antiviral response, indicating that the entire RAB2B-GARIL5 complex regulates the cGAS-STING signalling axis, thus promoting IFN responses against cytosolic DNA (DNA viruses). RAB2B isoform knockdown affects the morphology of the Golgi complex in mammals, inducing its fragmentation. Even though these RAB family proteins are highly homologous to each other (RAB2A and RAB2B have 85.8% amino acid identity), the knockdown of any of them (from RAB1A to RAB8A) causes the Golgi complex to disperse through the cell's cytoplasm. Because of this, the RAB2B-GARIL5 complex stops functioning properly, affecting the IFN response and enhancing the replication of many viruses, as seen previously. Tissue distribution The expression pattern of the human RAB2B gene reveals a transcript in kidney, prostate, lung, thymus, and colon, and a lower expression level in placenta, pancreas and skeletal muscle. Moreover, the transcript is over-expressed in colon adenocarcinoma, as well as pancreatic cancer. 
This observation suggests that the protein could have a close relationship with colon tumours. References External links G proteins Human proteins Genes on human chromosome 14
RAB2B
[ "Chemistry" ]
972
[ "G proteins", "Signal transduction" ]
62,096,903
https://en.wikipedia.org/wiki/Winnie%20Wong-Ng
Winnie Kwai-Wah Wong-Ng () is a Chinese-American physical chemist. She is a research chemist in the ceramics division at the National Institute of Standards and Technology. Her research includes energy applications, crystallography, thermoelectric standards, metrology, and data, sorbent materials for sustainability, and a high-throughput combinatorial approach for novel materials discovery and property optimization for energy conversion applications. She is a fellow of the International Centre for Diffraction Data, American Ceramic Society, American Crystallographic Association, and the American Association for the Advancement of Science. Wong-Ng was twice awarded the Department of Commerce Bronze Medal. Education Wong-Ng completed a B.Sc. in chemistry and physics at the Chinese University of Hong Kong in 1969. She earned a Ph.D. in inorganic and physical chemistry at Louisiana State University in 1974. Career and research Wong-Ng was a research associate and lecturer in the chemistry department at the University of Toronto. From 1981 to 1985, she was a critical review scientist at the International Centre for Diffraction Data. Wong-Ng was a research scientist in the chemistry department at the University of Maryland, College Park and a research associate in the ceramics division at the National Bureau of Standards from 1985 to 1988. Since 1988, Wong-Ng has worked as a research chemist in the ceramics division at the National Institute of Standards and Technology. She served as president of the Association of NIST Asian Pacific Americans from 2000 to 2003. Wong-Ng's research interests include materials for energy applications; thermoelectric standards, metrology, and data; sorbent materials for sustainability; and a high-throughput combinatorial approach for novel materials discovery and property optimization for energy conversion applications. 
She also researches crystallography, phase equilibria, and crystal chemistry of energy materials to understand their structure-property relationships. Structural studies involve synchrotron X-ray and neutron diffraction techniques. Awards and honors In 2000, Wong-Ng became a fellow of the International Centre for Diffraction Data (ICDD). She was made a fellow of the American Ceramic Society in 2002. In 2002 and 2008, she won the Department of Commerce Bronze Medal. In 2014, Wong-Ng was made a fellow of the American Crystallographic Association. In 2012, she became a distinguished fellow of the ICDD and a Fellow of the American Association for the Advancement of Science. She became an academician of the World Academy of Ceramics in 2018. References Living people 20th-century American chemists 20th-century Chinese scientists 20th-century American women scientists 21st-century American chemists 21st-century Chinese chemists 21st-century American women scientists Alumni of the Chinese University of Hong Kong American physical chemists American women chemists Chinese physical chemists American crystallographers Chinese expatriate academics in the United States Fellows of the American Association for the Advancement of Science Hong Kong emigrants to the United States Hong Kong women scientists Inorganic chemists Louisiana State University alumni National Institute of Standards and Technology people University of Maryland, College Park faculty Academic staff of the University of Toronto Women physical chemists Place of birth missing (living people) Year of birth missing (living people) Fellows of the American Ceramic Society 21st-century Chinese women scientists
Winnie Wong-Ng
[ "Chemistry" ]
657
[ "Inorganic chemists", "Women physical chemists", "Physical chemists" ]
62,097,245
https://en.wikipedia.org/wiki/DFDT
Difluorodiphenyltrichloroethane (DFDT) is a chemical compound. Its composition is the same as that of the insecticide DDT, except that two of DDT's chlorine atoms are replaced by two fluorine atoms. DFDT was developed as an insecticide by German scientists during World War II. It is possible that Hoechst wanted to avoid paying license fees for DDT to Schering or to the original developer J. R. Geigy (later Ciba-Geigy). It was documented by Allied military intelligence, but in the United States it remained in obscurity after the war. In 2019, New York University chemists reported that DFDT and a mono-fluorinated derivative, MFDT, might be more effective insecticides than DDT, and might therefore be used to combat malaria with less environmental impact. A later study of DFDT found it to be encumbered by the same resistance as DDT while being less effective in Drosophila melanogaster, and "unlikely to be a viable public health vector control insecticide". References DDT Fluoroarenes Endocrine disruptors Organochloride insecticides Trichloromethyl compounds
DFDT
[ "Chemistry" ]
263
[ "Endocrine disruptors" ]
62,098,551
https://en.wikipedia.org/wiki/Democratic%20Tsunami
Democratic Tsunami (, ) is a Catalan protest group advocating a self-determination referendum in Catalonia, formed and organized in the lead-up to the final judgement in the Trial of the Catalonia independence leaders. It organizes supporters of the Catalan independence movement through the use of social media, apps and other online resources. It used a 'bespoke' Android app, along with a Telegram account with over 410,000 followers, in order to mobilize and organize demonstrations during the 2019 Catalan protests. Distributed outside of the official market for Android apps, the application (making use of overseas servers) circumvents European data protection legislation with regard to geolocation. Goals As stated in press notes and interviews, their objectives are the freedom of prisoners, exiles and those facing reprisals; the defense of fundamental rights; and the self-determination of Catalonia. In a statement following the judgement in the trial of the Catalonia independence leaders, read by Pep Guardiola, they defended the rights of assembly and demonstration, freedom of expression and the right to a fair trial. They also called for an independence referendum similar to those of Quebec or Scotland, and called on the international community to position itself for a "conflict resolution based on dialogue and respect." They defend civil disobedience and nonviolence. History Activity In one of its first actions, the group managed to organize a large protest at Barcelona Airport, which led to major disruption and the cancellation of over 100 flights. The group endorses nonviolence and has supported the occupation of government buildings and other protest acts, which were condemned by the Spanish government. The group's actions appear to mimic those of the 2019–20 Hong Kong protests, in which protesters also occupied a key airport. The group also used similar language to the Hong Kong protesters, urging supporters to "add up like drops of water". 
The group managed to assert itself as one of the main organizers of the 2019 Catalan protests. Origin and identity The group's identity is unknown, as none of the group's members has publicly revealed their identity as of October 2019. The group, however, insists that it has no links to other pro-independence groups or political parties in Catalonia, stating that its name was derived from an expression used by pardoned Catalan independence leader Jordi Cuixart. It added that it followed a doctrine of strict non-violence, instead advocating "mass civil disobedience". Swiss publication Le Temps, along with the Spanish press, alleged that the movement was founded in late August 2019 after a meeting in the Geneva countryside. The meeting was allegedly attended by leading Catalan independence figures, including Catalan President Quim Torra and his predecessor Carles Puigdemont, as well as two Swiss politicians who supported the idea of Catalan independence. The Associated Press, on the other hand, pegged the group's creation under similar conditions in July 2019, with the endorsement of top pro-independence officials. Protest app The movement created a mobile app, which was released as a sideload for devices running Android. The app organizes and mobilizes small, localized groups of supporters to carry out protest acts across the entire territory of Catalonia. It allows the Democratic Tsunami to monitor and give directions to individual protesters or groups of protesters, while claiming that the user's location is approximated and obfuscated to avoid police tracking. It also required the user to activate it by scanning a QR code, a measure intended to limit activation to "stages" in order to avoid infiltration by government authorities. For the same reason, users are allowed to invite only one other person to the app, and even successfully invited and activated users can only see protests within their immediate vicinity. 
The group announced that it had 15,000 successful QR code activations as of 17 October, 3 days after the beginning of the protests. Prosecution by the Spanish Government On 18 October 2019, a Spanish judge ordered the closure of several web pages belonging to the group. The group immediately migrated its homepage to a new address. It later published instructions on how to "avoid Spanish censorship". Spain's interior minister, Fernando Grande-Marlaska, stated that the Spanish authorities had launched an investigation aiming to discover the individuals behind the group. The Spanish government was reportedly looking into whether or not Carles Puigdemont was behind the group. The group refused to comment on the allegation, while Puigdemont denied it and stated that he did not know who the organizers were. Spain sent a takedown request to GitHub, demanding that the Tsunami Democràtic app be removed and defining the organization as "a criminal organization driving people to commit terrorist attacks". GitHub complied but published the takedown request in one of their public repositories. See also Trial of Catalonia independence leaders References Anonymity Information society Internet-based activism Internet culture Internet vigilantism Organizations established in 2019 Anonymity pseudonyms Advocacy groups Catalan independence movement Protests in Catalonia
Democratic Tsunami
[ "Technology" ]
1,001
[ "Computing and society", "Information society" ]
62,098,761
https://en.wikipedia.org/wiki/Latino%20urbanism
Latino urbanism is a field of study that examines urban planning and urbanism from the perspective of Latino studies. It aims to highlight the contributions of Latinos to the making of American cities, and the theoretical interventions that Latino studies scholarship has generated in response to urban scholars' lack of engagement with Latino populations. Scholars have attributed this lack of attention to disciplinary boundaries between urban studies and ethnic studies. Latino urbanism as a field is inherently interdisciplinary and includes scholars working in literature, history, anthropology, urban planning, American studies, and more. A key characteristic is its attention to the ways communities act on the built environment, and how they in turn develop "barrio urbanisms," or new knowledges and interventions about the use and organization of urban space. The work of urban planner James Rojas provides an example of the field's attention to Latinos as actors, agents of change and innovators. His art-making workshops draw on communities' vernacular knowledge to develop urban planning solutions. Some scholars champion the Chicano practice of Rasquachismo to suggest "placekeeping" as an inventive, make-do, popular strategy that can help advance racial justice goals by expanding definitions of urbanism. This scholarship views grassroots interventions into space as strategic and resourceful. See also Urban vitality References Urban planning Latin American studies
Latino urbanism
[ "Engineering" ]
266
[ "Urban planning", "Architecture" ]
62,099,756
https://en.wikipedia.org/wiki/List%20of%20Easter%20eggs%20in%20Tesla%20products
Tesla products include a significant number of software and hardware Easter eggs, among other notable and unique software features such as a suite of video games, doggy mode, emissions testing mode, "caraoke", and romance mode. Back to the Future phone app Touching the battery icon inside the Tesla mobile app with the vehicle at exactly 121 miles (or 121 km) of range was discovered to launch a Back to the Future Easter egg. All aspects of this Easter egg were observed to occur within the mobile app. A pop-up message displays "Time Circuits Off" and "Be sure to reset your clock to account for temporal displacement". The name of the vehicle changes to "OUTATIME" within the app. "Charging scheduled" changes to "Time Circuits On". "121 miles" changes to "1.21 GW". The "Charging" tab changes to "Fuel Chamber". Below the renamed "Fuel Chamber" tab, a line reads "Current Output: 300R", which may refer to the number of Back to the Future replica cars being made per year by the DeLorean Motor Company. The vehicle location display changes to "1600 S Azusa Ave Rowland Heights", one of the movie's filming locations, and a service appointment appears to be scheduled for November 5, 1955, an important day within the film. Voice commands Rick and Morty – sentry mode voice activation The voice commands "Keep Tesla Safe" and "Keep Summer Safe" were discovered to activate sentry mode. Sentry mode, a Tesla security feature, was originally depicted on the in-vehicle display as HAL 9000 from 2001: A Space Odyssey but was later replaced by what appears to be the eye of a sentient sentry turret from the Valve video game Portal; it can be toggled on or off using the standard voice commands "Enable/Disable Sentry Mode" or "Turn Sentry Mode On/Off" in addition to the two Easter egg phrases. 
The extra commands are a reference to a scene from season 2, episode 6 of Rick and Morty, entitled "The Ricks Must Be Crazy", where Rick instructs his vehicle to keep Morty's sister, Summer, safe while Rick and Morty venture into Rick's microverse car battery. Elon Musk notably wore a Butterbot T-shirt to the 2018 Tesla Annual Shareholder's Meeting, indicating his interest in the show. Charge-port alternative voice commands Voice commands "open butthole" and "close butthole" open and close the charge port, but may open the trunk instead in some cases. "Open bunghole" and "close bunghole" also work and may be a reference to Beavis and Butt-Head. Seat heater alternative voice command Voice command "my balls are cold" turns on the seat heaters. Voice command "eject X seat", where X = Driver or Passenger, will turn that seat's heater to max. Voice command "turn on X seat bacon", where X = Driver or Passenger, will turn on that seat's heater. Alternatively, "Turn on 1, 2, or 3 seat bacons" will activate the seat heater to low, medium, or high respectively. Climate control alternative voice command Voice command "enable/disable life support" turns the climate control on or off. Santa mode Voice commands "Ho Ho Ho" or "Ho Ho Ho Not Funny" will activate the Santa Mode Easter egg. If voice command "Ho Ho Ho" is used, Run Rudolph Run by Chuck Berry will play inside of the vehicle. If voice command "Ho Ho Ho Not Funny" is used, Grandma Got Run Over by a Reindeer will play inside of the vehicle instead. Otherwise, the two commands activate the same Santa Mode Easter egg. While driving or in park, a snow effect appears above the depiction of the vehicle. When parked, the image of the car is replaced by Santa Claus on his sleigh. Using the turn signal will result in the sound of sleigh bells in addition to the normal turn signal sound. 
In previous versions, the vehicle was depicted as Santa Claus on his sleigh while driving as well as in park, computer vision showed the road as ice, and other cars were depicted as reindeer while driving. Mars, Mars rover and Starship Tesla vehicles incorporate a Mars, Mars rover and Starship themed Easter egg. Upon activation, the GPS map on the touchscreen display shows the surface of Mars instead of the surface of the Earth. The surface moves and turns as the car travels just as the normal GPS would. The arrow representing the vehicle on the GPS map is replaced by a depiction of a Mars rover. Finally, the "About Your Tesla" menu, previously available by pressing the Tesla "T" icon at the top left of the touchscreen display, shows the SpaceX Interplanetary Spaceship design, which was presented in 2016 as part of the Interplanetary Transport System. The vehicle has since been redesigned and renamed as Big Falcon Rocket and then Starship. SpaceX and Tesla, Inc. are linked in a number of ways other than the depiction of the Starship. Elon Musk is CEO of both companies and the two companies collaborate often. In early 2018, Elon Musk's Tesla Roadster was used as a payload for the Falcon Heavy test flight. It was originally planned that Musk's roadster, which carries Starman and a number of its own Easter eggs (to confuse the aliens), would end up in orbit around Mars. Instead, the roadster ended up in an orbit around the Sun, as it was more important to demonstrate the full capacity of Falcon Heavy. Mario Kart: Rainbow Road and Don't Fear the Reaper/SNL: More Cowbell Autosteer-capable vehicles with autosteer engaged can activate an Easter egg involving the in-car audio and a change in the on-screen animation. 
If autopilot is activated four times in quick succession, the computer vision generated road that the car is driving on, denoted by two lines, will change into a rainbow similar to that of Rainbow Road, the final level in each version of the video game Mario Kart. At the same time, the song (Don't Fear) The Reaper by Blue Öyster Cult plays in the cabin of the vehicle. Notably, the version of the song that plays is taken from the Saturday Night Live skit More Cowbell, in which music producer "The Bruce Dickinson", played by host Christopher Walken, encourages Gene Frenkle, played by then cast member Will Ferrell, to play his instrument, the cowbell, with zeal. As part of the Easter egg, Christopher Walken can be heard stating his lines from the skit, that he "has a fever" and that the "only prescription is more cowbell". Rainbow chargeport light While the Tesla vehicle is plugged in, pushing the charge port control button on the charger handle 10 times quickly will activate the Easter egg. The charge port light will cycle through all of the colors of the rainbow. Monty Python Tesla vehicles may be assigned a name within the settings available on the touchscreen. Naming the car either "Patsy", "Rabbit of Caerbannog", "Mr. Creosote" (with or without the period), "Flying Circus", "Biggus Dickus" or "Unladen Swallow" will activate the Monty Python Easter egg. Once this is done, The Foot of Cupid will immediately drop down the length of the screen. The Foot of Cupid is a trademark recurring gag in the Monty Python series Monty Python's Flying Circus. The foot is accompanied by the sound of flatulence. The foot will disappear, and upon opening Theater Mode, a new Monty Python option will appear. This option is essentially the same as YouTube except that it opens directly to the Monty Python channel. 
The first to discover the Easter egg was Iwan Eberhart, a Model 3 owner in Switzerland who named his Model 3 "The Rabbit of Caerbannog" with no foreknowledge of the Easter egg. This is not the first time that Monty Python has been purposefully added to Tesla vehicles. Model X Light Show and Trans-Siberian Orchestra A Model X-exclusive holiday light show is initiated by activating this Easter egg. The light show utilizes the headlights, fog lights and turn signals. Wizards in Winter by the Trans-Siberian Orchestra plays, and portions of the vehicle, including the front doors and the falcon wing back doors, will open and close autonomously in time with the music. The rear view mirrors will also retract in time with the music. James Bond – Lotus Esprit submarine This Easter egg applies only to vehicles with the air suspension package. In the controls menu, under the suspension tab, the usual image of the Tesla is replaced by the submarine version of the Lotus Esprit that James Bond drove off a pier into the ocean in the movie The Spy Who Loved Me. A new "Depth (Leagues)" drop-down menu appears next to the Esprit. The air suspension will raise and lower depending on the selected depth. Activating the Easter egg a second time will result in the submarine fins being replaced by wheels. Once again, adjusting the "Depth" will adjust the air suspension, changing the position of the Esprit with respect to the wheels. Tesla, Inc., Elon Musk, and 007 have other notable connections besides the Easter egg. In 2013, Musk won an auction and took possession of the original Bond submersible used in the film. In 2019, Elon Musk announced at a shareholder meeting that Tesla had a design for a real, electric submarine car. Superbottle and Octovalve Under the frunk of the Model 3, a component called the "superbottle" has been used to control many heating and cooling functions in the vehicle. 
The superbottle's engineering is notable for condensing multiple functions into a highly efficient device. To paraphrase Sandy Munro from Munro and Associates on episode 447 of Autoline Detroit, the superbottle shows Tesla's ability to innovate by crossing traditionally difficult design boundaries in the car industry. Hidden on the superbottle is a caricature of a bottle as a superhero with a cape, muscular arms and legs, and a Tesla "T" logo on its front. "SUPERBOTTLE" in all caps also appears on the component. During his teardown of the Tesla Model Y, Sandy Munro found a component that has been referred to as the "octovalve", which appears to be the next iteration of the superbottle component used in the Model 3. In the same way that a cape-wearing superhero is depicted on the superbottle, an octopus with a snowflake on its head is embossed on the surface of the octovalve component. The Model Y uses a heat pump, and the octovalve is believed to support it as part of the car's thermal management system. Backgammon Lost Reference In the entertainment menu, above the "play game" button for backgammon, there is text that reads "Two players, two sides. One is light, one is dark." This is a quote from one of the early episodes of Lost in which John Locke teaches Walt how to play backgammon. The quote is believed by some to have significant meaning in the series. Once a game of Backgammon is started on the Tesla in-vehicle display, the lower right corner of the backgammon board can be observed to display the numbers "4 8 15 16 23 42". These numbers are part of the Mythology of Lost and recur throughout the series. Whenever a game score matches one of the numbers, the game score turns green rather than the usual grey. Sketchpad In previous software versions, a sketchpad could be accessed by quickly tapping the Tesla "T" at the top of the touchscreen display three times. 
Activating the sketchpad turns the in-vehicle display into a sandbox where one can draw a picture and submit the result to Tesla. Drawing options include a marker and eraser, control of color, control of marker width, the ability to undo errors, and a "fill" option. When the "submit" button is used, a text box pops up and asks "Are you sure you want Tesla to critique your artistic masterpiece?" As a nod to The Matrix, a metaphorical red pill and blue pill option is given in the form of two buttons at the bottom of the same text box. A blue button reads "No, the world isn't ready for my art" while a red button exclaims "Yes, I am an artist!" Elon Musk has featured sketchpad submissions on his Twitter feed in the past. The sketchpad may be upgraded to include animation support. Performance mode In previous software versions, the performance mode Easter egg added a drop-down menu to the "About Your Tesla" menu. The performance mode Easter egg allowed the driver to choose any version of the car. Performance mode did not appear to modify any features of the car. This Easter egg was accessed by holding the Tesla "T" icon at the top of the touchscreen display for five seconds. Once the "T" was released, a text box was revealed along with a keyboard for entering text. The text box read "please enter access code" with a text entry field below and button options "OK" and "Cancel" below that. This text box was mainly used by service centers and showrooms for purposes such as service mode and showroom mode, but also allowed access to certain Easter eggs. Entering the word "Performance" and pressing "OK" activated the Performance Mode Easter egg. Spaceballs and Ludicrous+ This Easter egg is included only in vehicles that feature the ludicrous mode option. The Easter egg is activated from the controls menu, by switching the software-controlled acceleration from "sport" to "ludicrous" and then tapping and holding the "ludicrous" text for 5 seconds. 
The screen will go black for a short time. A star field will swiftly appear and zoom forward, closely resembling a jump to ludicrous speed from the movie Spaceballs. In subsequent updates, this Easter egg was co-opted to activate a genuine "ludicrous+" performance enhancement beyond the normal ludicrous mode. When the Easter egg is activated, the star field zooms forward until the entire screen is momentarily white. When the flash fades, a text box is revealed which asks "Are you sure you want to push the limits? This will cause accelerated wear of the motor, gearbox and battery". As a nod to The Matrix, a metaphorical red pill and blue pill option is given in the form of two buttons at the bottom of the text box. A blue button reads "No, I want my Mommy" while a red button exclaims "Yes, bring it on!". If "Yes, bring it on!" is chosen, the car may prepare itself by heating the battery. The smaller display in front of the driver changes to give a purple indicator for battery temperature, to show the front and rear motors on the car graphic, and to give a table with values including peak longitudinal acceleration. In the movie Spaceballs, there is only one speed which exceeds ludicrous. As a continuation of Tesla's use of Spaceballs terminology, future versions of the Model S and Model X, as well as the Tesla Roadster (2020), will include a new mode of acceleration which is even faster than Ludicrous+. This new mode is called "Plaid". The Hitchhiker's Guide to the Galaxy Entering the number "42" as the name of the Tesla vehicle activates The Hitchhiker's Guide to the Galaxy Easter egg. The name of the car is changed to "Life, the Universe, and Everything". In Douglas Adams' science fiction comedy The Hitchhiker's Guide to the Galaxy, "42" is determined to be the "Ultimate Answer to Life, the Universe, and Everything". There is some difficulty, however, in determining the corresponding Ultimate Question. 
Spinal Tap As a nod to the movie This is Spinal Tap, volume and climate control fan settings go up to 11. Cybertruck in Camp Mode screensaver After the official Cybertruck reveal on November 22, 2019, Tesla vehicles including the S, X and 3 received an update which included camp mode. When in camp mode, Tesla vehicles are able to maintain airflow, temperature and interior lighting, play music, and power devices for an extended period of time while the car is in park. When camp mode is enabled for more than 10 minutes, an animated screensaver of a campground appears on the screen. Several months after the release of camp mode, a partly obscured Cybertruck was noticed in the background. Easter eggs in products that are not for sale Website – Starman The background for the login page of the Tesla Inc. website is a picture of the inside of the Tesla Semi cabin. Clicking on the background toggles it to an image of Elon Musk's Tesla Roadster, which was transported to space by SpaceX. Elon Musk is CEO of both companies and the two companies collaborate often. In early 2018, Elon Musk's Tesla Roadster was used as a payload for the Falcon Heavy test flight. It was originally planned that Musk's roadster, which carries Starman and a number of its own Easter eggs (to confuse the aliens), would end up in orbit around Mars. Instead, the roadster ended up in an orbit around the Sun, as it was more important to demonstrate the full capacity of Falcon Heavy. S3XY At the top of the Tesla website are links to available Tesla models: the Model S, the Model 3, the Model X, and the Model Y. Since the number 3 is similar to the letter E, this menu of links appears to spell out "SEX" ("S3X"), and with the Y included, "SEXY" ("S3XY"). The chronology of the cars is out of order, since the Model X began sales well before the Model 3. Based on the comments of Elon Musk, it is well documented that the hidden message was purposeful. 
At the reveal event for the Model Y, Musk discussed the naming of the Tesla vehicles, including the joke. On multiple occasions, Musk has said that the intended name for the "Model 3" would have been the "Model E"; however, Ford, having the rights to the name "Model E", would not allow Tesla to use it. According to Musk, "Ford killed SEX". Nice Try – Model Y teaser image Before the reveal of the Tesla Model Y, a teaser image was released. YouTuber Marques Brownlee (MKBHD) put the image into an image editor to see if increasing the brightness would reveal more of the highly anticipated vehicle's exterior. No other exterior hints were forthcoming. Instead, the result was an Easter egg showing that Tesla had anticipated this approach by fans. Where the license plate would be on the Model Y, a message read "NICE TRY." Hidden Tesla Tequila Bottle After Tesla Tequila was made available for sale, the product image was noticed hidden in a "Power Everything" poster that had been used in Tesla sales and service centers and online for some time. Some believed that the hidden image of the Tesla Tequila bottle had been there, unnoticed, for several years. It has since been revealed that the poster may have been altered after the Tesla Tequila product launch. Tesla Model W As an April Fool's joke on April 1, 2015, Tesla announced the Tesla Model W watch. Many were fooled by the announcement, and Tesla's stock jumped within the first minute of the news. Removed and unreleased Easter eggs Model S Team Photo This is the first Tesla Easter egg discovered. In previous software versions an "About your Tesla" menu could be accessed by tapping the Tesla "T" at the top of the touchscreen display. By tapping and holding the bottom right corner of the "About your Tesla" menu (the model designation number), the depicted Model S would zoom away and be replaced with a picture of the vehicle development team. After a 2020 update the photo is no longer accessible. 
Model 3 team photo and silhouette In previous software versions, an "About Your Tesla" menu could be accessed by tapping on the Tesla "T" icon at the top left of the touchscreen display. On the "About Your Tesla" menu, pressing and holding the "3" of "Model 3" for about 10 seconds would bring up a picture of the Model 3 development team. After some time, the Tesla team picture was removed by over-the-air software update. Instead, pressing the "3" for 10 seconds would cause the Model 3 depiction in the "About Your Tesla" menu to zoom away and be replaced by a black line silhouette of the Model 3. Neither the team photo nor the silhouette can be accessed currently. Marilyn Monroe On March 7, 2018, Tesla CEO Elon Musk stated on Twitter that the Model X would "do a cover of Happy Birthday by Marilyn Monroe". The Easter egg was not released. Notable omissions Tesla Easter eggs often involve popular media of which Tesla CEO and Product Architect Elon Musk is known to be a fan. An example would be the Rick and Morty Easter egg, where the voice commands "Keep Tesla Safe" and "Keep Summer Safe" both activate Sentry Mode. Musk notably wore a Butterbot T-shirt to the 2018 Tesla Annual Shareholder's Meeting, indicating his interest in the show before the Easter egg was found. Musk has also shown that he is a fan of Monty Python, James Bond, Spaceballs, and The Hitchhiker's Guide to the Galaxy. Each is represented as an Easter egg in Tesla products. Elon Musk has expressed interest in a wide variety of other media for which no Easter eggs have been found in Tesla products. Some examples include the Foundation series by Isaac Asimov, Star Wars, and The Lord of the Rings. 
Musk has been known to hold Star Trek in high regard, referring to it as a rare example of a depiction of a positive future for humanity. Notably, the Star Trek franchise also complimented Musk in the fourth episode of Star Trek: Discovery, comparing him to the Wright Brothers and the fictional inventor of the warp drive, Zefram Cochrane. Hoax Easter eggs The Tesla beating heart This Easter egg was a hoax and cannot be activated. On November 17, 2018, Joel Paglione made a post to Imgur claiming that if both charge port-like panels are pushed at the same time, which looks like one is awkwardly hugging the back of the car, a second charge port opens up on the right side of the car and a pulsing/beating red heart light is displayed. This Easter egg was determined to be a complete fabrication. See also List of Easter eggs in Microsoft products List of Google Easter eggs Tesla, Inc. Easter egg (media) Elon Musk Tesla Model S Tesla Model 3 Tesla Model X Tesla Model Y Tesla Cybertruck Tesla Semi Tesla Roadster (first generation) Tesla Roadster (second generation) References Easter eggs Elon Musk In-jokes Tesla, Inc.-related lists Lists of Easter eggs
List of Easter eggs in Tesla products
[ "Technology" ]
4,815
[ "Computing-related lists", "Lists of Easter eggs" ]
62,099,854
https://en.wikipedia.org/wiki/Diazaborine
Diazaborine is a chemical compound with properties intermediate between benzene and borazine. It resembles a benzene ring in which three carbons are replaced by two nitrogen atoms and one boron atom, giving a ring of composition C3BN2. Notable molecules containing this moiety include diazaborine B. References Organoboron compounds Boron heterocycles Nitrogen heterocycles Six-membered rings Boron–nitrogen compounds Simple aromatic rings
Diazaborine
[ "Chemistry" ]
94
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
62,100,048
https://en.wikipedia.org/wiki/Carborazine
Carborazine is a six-membered aromatic ring with two carbon atoms, two nitrogen atoms and two boron atoms in opposing pairs. See also 1,2-Dihydro-1,2-azaborine — an aromatic chemical compound with properties intermediate between benzene and borazine. Borazine References Aromatic compounds Boron heterocycles Nitrogen heterocycles Six-membered rings Boron–nitrogen compounds
Carborazine
[ "Chemistry" ]
92
[ "Organic compounds", "Aromatic compounds" ]
62,100,056
https://en.wikipedia.org/wiki/Low-FODMAP%20diet
A low-FODMAP diet is a person's global restriction of consumption of all fermentable carbohydrates (FODMAPs), recommended only for a short time. A low-FODMAP diet is recommended for managing patients with irritable bowel syndrome (IBS) and can reduce digestive symptoms of IBS, including bloating and flatulence. If the problem lies with indigestible fiber instead, the patient may be directed to a low-residue diet. Effectiveness and risks A low-FODMAP diet might help to improve short-term digestive symptoms in adults with functional abdominal bloating and irritable bowel syndrome, but its long-term use can have negative effects because of its detrimental impact on the gut microbiota and metabolome. It should only be used for short periods of time and under the advice of a specialist. More studies are needed to evaluate its effectiveness in children with irritable bowel syndrome. There is only limited evidence of its effectiveness in treating functional symptoms in inflammatory bowel disease, drawn from small studies that are susceptible to bias. More studies are needed to assess the true impact of this diet on health. In addition, the use of a low-FODMAP diet without medical advice can lead to serious health risks, including nutritional deficiencies and misdiagnosis, so it is advisable to conduct a complete medical evaluation before starting a low-FODMAP diet to ensure a correct diagnosis and that the appropriate therapy may be undertaken. Since the consumption of gluten is suppressed or reduced with a low-FODMAP diet, the improvement of digestive symptoms on this diet may be due not to the withdrawal of FODMAPs but to that of gluten. This can mask an unrecognized celiac disease, preventing its diagnosis and correct treatment, with the consequent risk of several serious health complications, including various types of cancer. 
A low-FODMAP diet is highly restrictive in various groups of nutrients, can be impractical to follow in the long-term and may add an unnecessary financial burden. Suggested foods Below are low-FODMAP foods categorized by group according to the Monash University "Low-FODMAP Diet". Vegetables: alfalfa, bean sprouts, green beans, bok choy, capsicum (bell pepper), carrot, chives, fresh herbs, choy sum, cucumber, lettuce, tomato, zucchini, the green parts of leeks and spring onions Fruits: orange, grapes, honeydew melon (not watermelon) Protein: meats, fish, chicken, eggs, tofu (not silken), tempeh Dairy: lactose-free milk, lactose-free yoghurts, hard cheese Breads and cereals: rice, crisped rice, maize or corn, potatoes, quinoa, and breads made with their flours alone; however, oats and spelt are relatively low in FODMAPs Biscuits (cookies) and snacks: made with flour of cereals listed above, without high FODMAP ingredients added (such as onion, pear, honey, or polyol artificial sweeteners) Nuts and seeds: almonds (no more than ten nuts per serving), pumpkin seeds; not cashews or pistachios Beverage options: water, coffee, tea Other sources confirm the suitability of these and suggest some additional foods. History The basis of many functional gastrointestinal disorders (FGIDs) is distension of the intestinal lumen. Such luminal distension may induce pain, a sensation of bloating, abdominal distension and motility disorders. Therapeutic approaches seek to reduce factors that lead to distension, particularly of the distal small and proximal large intestine. Food substances that can induce distension are those that are poorly absorbed in the proximal small intestine, osmotically active, and fermented by intestinal bacteria with hydrogen (as opposed to methane) production. The small molecule FODMAPs exhibit these characteristics. 
Over many years, there have been multiple observations that ingestion of certain short-chain carbohydrates, including lactose, fructose and sorbitol, fructans and galactooligosaccharides, can induce gastrointestinal discomfort similar to that of people with irritable bowel syndrome. These studies also showed that dietary restriction of short-chain carbohydrates was associated with symptoms improvement. These short-chain carbohydrates (lactose, fructose and sorbitol, fructans and GOS) behave similarly in the intestine. Firstly, being small molecules and either poorly absorbed or not absorbed at all, they drag water into the intestine via osmosis. Secondly, these molecules are readily fermented by colonic bacteria, so upon malabsorption in the small intestine they enter the large intestine where they generate gases (hydrogen, carbon dioxide and methane). The dual actions of these carbohydrates cause an expansion in volume of intestinal contents, which stretches the intestinal wall and stimulates nerves in the gut. It is this 'stretching' that triggers the sensations of pain and discomfort that are commonly experienced by people with IBS. The FODMAP concept was first published in 2005 as part of a hypothesis paper. In this paper, it was proposed that a collective reduction in the dietary intake of all indigestible or slowly absorbed, short-chain carbohydrates would minimise stretching of the intestinal wall. This was proposed to reduce stimulation of the gut's nervous system and provide the best chance of reducing symptom generation in people with IBS (see below). At the time, there was no collective term for indigestible or slowly absorbed, short-chain carbohydrates, so the term 'FODMAP' was created to improve understanding and facilitate communication of the concept. The low FODMAP diet was originally developed by a research team at Monash University in Melbourne, Australia. 
The Monash team undertook the first research to investigate whether a low FODMAP diet improved symptom control in patients with IBS and established the mechanism by which the diet exerted its effect. Monash University also established a rigorous food analysis program to measure the FODMAP content of a wide selection of Australian and international foods. The FODMAP composition data generated by Monash University updated previous data that was based on limited literature, with guesses (sometimes wrong) made where there was little information. References External links Diets Gastroenterology Carbohydrates
Low-FODMAP diet
[ "Chemistry" ]
1,423
[ "Organic compounds", "Biomolecules by chemical classification", "Carbohydrates", "Carbohydrate chemistry" ]
62,103,363
https://en.wikipedia.org/wiki/List%20of%20telescopes%20of%20Australia
The list below is split between telescopes located in Australia and telescopes sponsored by Australia, such as a space telescope or a foreign installation. Australia offers access to the southern skies, which became a popular target in the 20th century (many telescopes had been built for the northern hemisphere). The third largest optical telescope in the world in 1974 was the Anglo-Australian Telescope, one of the largest telescopes of its time, built in Australia. There are also several radio telescopes, and Sydney Observatory has taken observations for over a century. One of the largest telescopes of the 19th century was the Great Melbourne Telescope, one of the last big metal-mirror reflecting telescopes before silver-on-glass designs came to predominate; it was purchased with money from an Australian gold boom. In country optical telescopes Anglo-Australian Telescope (3.9m, 1974-) Automated Patrol Telescope (0.5m, 1989-2008) Faulkes Telescope South (2m, 2004-) SkyMapper (1.35m) UTas H127 (1.27m) Great Melbourne Telescope (48 inches/ ~1.22m, 1868) Siding Spring 2.3 m Telescope (2.3 m) Sydney Observatory instruments Mt. Kent Observatory - Shared Skies (0.7m) Penrith Observatory (0.6m) Perth-Lowell Telescope (0.6m) Mt. Kent Observatory - Shared Skies (0.5m) Uppsala Southern Schmidt Telescope (0.5m) Radio telescopes See also Lists of telescopes List of largest optical telescopes in the British Isles List of largest optical telescopes in North America External links 10 of the best Australian observatories Astronomy in Australia Lists of telescopes
List of telescopes of Australia
[ "Astronomy" ]
340
[ "Astronomy-related lists", "Lists of telescopes" ]
62,103,981
https://en.wikipedia.org/wiki/G%C3%A9rard%20Iooss
Gérard Iooss (born 14 June 1944 in Charbonnier-les-Mines, Puy-de-Dôme) is a French mathematician, specializing in dynamical systems and mathematical problems of hydrodynamics. Education and career Iooss attended school in Clermont-Ferrand and studied at the École Polytechnique from 1964 to 1966. From 1967 to 1972 he was with the Office National d'Etudes et de Recherches Aérospatiales (ONERA). In 1971 he received his doctorate from the Pierre and Marie Curie University (Paris 6) with the thesis Théorie non linéaire de la stabilité des écoulements laminaires under the supervision of Jean-Pierre Guiraud. Iooss was a professor from 1972 to 1974 at the University of Paris-Sud in Orsay, and from 1974 at the University of Nice Sophia-Antipolis, where he remained until his retirement in 2007. From 1994 to 2004 he was at the Institut Universitaire de France. He is today at the Laboratoire J. A. Dieudonné of the University of Nice. (The Laboratoire J. A. Dieudonné is a unité mixte de recherche (UMR) of the CNRS.) From 1970 to 1985 he was also Maître de conférences at the École Polytechnique. He was a visiting professor at the University of Minnesota (1977/78), at the University of California, Berkeley (1978), and at the University of Stuttgart (1990, 1995, 1997), where he collaborated with Klaus Kirchgässner on reversible dynamical systems. Iooss's research deals with functional analysis of the Navier-Stokes equation, nonlinear hydrodynamic stability theory and water waves of different kinds, and general behavior (such as symmetry breaking and normal forms) of bifurcations (branching of solutions) in dynamical systems. In 1971, independently of David H. Sattinger, he treated the Hopf bifurcation in solutions of the Navier-Stokes equation as an infinite dimensional dynamical system. He studied in particular the Couette flow (Taylor–Couette flow) and theoretically discovered several waveforms there, which were later confirmed experimentally.
He collaborated with Alain Chenciner on bifurcation of invariant tori. Iooss, with Pierre Coullet, classified the instabilities of spatially periodic patterns in translation-invariant and mirror-symmetric systems. Iooss was elected in 1990 a corresponding member of the Académie des sciences. In 1993 he received the . He received the Prix Henri de Parville of the Académie des sciences in 1978 and the Prix Ampère in 2008. In 1998 he was an Invited Speaker with the talk Traveling water waves as a paradigm for bifurcations in reversible infinite dimensional dynamical systems at the International Congress of Mathematicians in Berlin. Selected publications Articles 1979 A.Chenciner, G.Iooss. Bifurcations de tores invariants. Arch. Rat. Mech. Anal. 69, 2, 109-198. 1987 C.Elphick, E.Tirapegui, M.Brachet, P.Coullet, G.Iooss. A simple global characterization for normal forms of singular vector fields. Physica 29D, 95-127. 1990 P.Coullet, G.Iooss. Instabilities of one-dimensional cellular patterns. Phys. Rev. Lett. 64, 8, 866-869 1993 G.Iooss, M.C.Pérouème. Perturbed homoclinic solutions in reversible 1:1 resonance vector fields. J.Diff. Equ. 102, 1, 62-88. 2000 G.Iooss, K.Kirchgässner. Travelling waves in a chain of coupled nonlinear oscillators. Com. Math. Phys. 211, 439-464. 2003 F.Dias, G.Iooss. Water-waves as a spatial dynamical system. Handbook of Mathematical Fluid Dynamics, chapter 10, 443-499. S.Friedlander, D.Serre, eds., Elsevier. 2005 G.Iooss, P.Plotnikov, J.F.Toland. Standing waves on an infinitely deep perfect fluid under gravity. Arch. Rat. Mech. Anal. 177, 3, 367-478. 2005 G.Iooss, E.Lombardi. Polynomial normal forms with exponentially small remainder for analytical vector fields. J.Diff. Equ. 212, 1-61. 2011 G.Iooss, P.Plotnikov. Asymmetrical three-dimensional travelling gravity waves. (91p.) Arch. Rat. Mech. Anal. 200, 3 (2011), 789-880. 2019 B.Braaksma, G.Iooss. Existence of bifurcating quasipatterns in steady Bénard-Rayleigh convection. Arch. Rat. Mech. Anal.
231, 3 (2019), 1917-1981. Books Bifurcation of Maps and Applications, North Holland Math Studies 36, 1979 Elementary Stability and Bifurcation Theory, with D. Joseph, Springer Verlag, Undergraduate Texts in Mathematics, 1980, 2nd edition 1990, 2013 pbk reprint The Couette-Taylor Problem, Applied Mathematics Series 102, with P. Chossat, Springer Verlag 1994. 2012 pbk reprint Topics in Bifurcation Theory and Applications, Advanced Series in Nonlinear Dynamics, with Moritz Adelmeyer, World Scientific 1992, 2nd edition 1999 Local bifurcations, center manifolds, and normal forms in infinite dimensional dynamical systems, with M. Haragus, EDP Sciences/Springer Verlag 2011 References 1944 births Living people 20th-century French mathematicians 21st-century French mathematicians École Polytechnique alumni University of Paris alumni Academic staff of Côte d'Azur University Dynamical systems theorists
Gérard Iooss
[ "Mathematics" ]
1,218
[ "Dynamical systems theorists", "Dynamical systems" ]
74,508,938
https://en.wikipedia.org/wiki/Ministers%27%20Wings
The Ministers' wings are outbuildings of the Palace of Versailles located in the Cour d'Honneur; the south wing now houses the Princes' bookshop and the ticket office, while the north wing is used to welcome groups of visitors. History Four pavilions were built for the Secretaries of State in 1671. Jules Hardouin-Mansart had the Ministers' wings built on the basis of these pavilions in 1679. The soberly ornamented Ministers' Wings, attached to the château, mark the end of the era of all-powerful ministers such as Fouquet, who defied the king with the construction of his château at Vaux-le-Vicomte. Each of the four secretaries of state occupied half a wing, and had access to all floors. The ground floor was devoted to work and reception areas, the second floor housed their apartments, their families were accommodated on the third floor, and the attic was used for clerks. The two pavilions overlooking the Place d'Armes, at the end of the Ministers' wings, served under the Ancien Régime as guardhouses for the French and Swiss Guards, responsible for the castle's external protection. The French Guards occupied the end of the south wing, while the Swiss Guards occupied the north pavilion. Their officers had bedrooms on the upper floor of the guardhouse; they also had their own dining room and an "assembly room", where they could play tric-trac. From 1958 onwards, the Ministers' wings housed the official residences and reception rooms for the presidents of the assemblies and the quaestors. The premises were returned to the Palace of Versailles in 2005 at the suggestion of National Assembly President Jean-Louis Debré. The northern ministers' wing houses the lecturers' entrance and the school locker room, while the southern ministers' wing houses the princes' bookshop and the château's ticket office. References Palace of Versailles Architecture by city
Ministers' Wings
[ "Engineering" ]
395
[ "Architecture by city", "Architecture" ]
74,509,192
https://en.wikipedia.org/wiki/Chirality-induced%20spin%20selectivity
Chirality-induced spin selectivity (CISS) refers to multiple phenomena in which the handedness of a chiral chemical compound influences the spin of transmitted or emitted electrons. The effect was discovered by Ron Naaman and co-workers. Experiments have demonstrated the effect in the form of polarization of electrons scattered from chiral molecules, spin-dependent transmission probabilities through layers of chiral molecules, spin selectivity of electron transport in a chiral medium, and enantioselectivity in chemical reactions induced by spin-polarized electrons. Theoretical models can qualitatively explain the effect using spin-orbit coupling (SOC), but the predicted magnitude has always been orders of magnitude smaller than what is measured in experiments. The mechanism underlying CISS is not completely understood. References Stereochemistry
Chirality-induced spin selectivity
[ "Physics", "Chemistry" ]
171
[ "Atomic, molecular, and optical physics stubs", "Stereochemistry", "Space", "Spacetime", "Physical chemistry stubs", "Atomic, molecular, and optical physics" ]
74,510,963
https://en.wikipedia.org/wiki/Spheroidene
Spheroidene is a carotenoid pigment. It is a component of the photosynthetic reaction center of certain purple bacteria of the Rhodospirillaceae family, including Rhodobacter sphaeroides and Rhodopseudomonas sphaeroides. Like other carotenoids, it is a tetraterpenoid. In purified form, it is a brick-red solid soluble in benzene. Spheroidene was discovered by microbiologist C. B. van Niel, who named it "pigment Y". It was renamed by Basil Weedon, who in the mid-1960s was the first to prepare it synthetically and to determine its structure. Function Spheroidene is bound to the type II photosynthetic reaction center of purple bacteria, and together with the bacteriochlorophyll forms part of the light-harvesting complex. Spheroidene has two major functions in the complex. First, it absorbs visible light in the blue-green part of the visible spectrum (320–500 nm), where bacteriochlorophyll has little absorbance. It then transfers energy to the bacteriochlorophyll via singlet–singlet energy transfer. In this manner the reaction center is able to harness more of the visible light spectrum than would be possible with bacteriochlorophyll alone. Second, spheroidene quenches excited singlet states of bacteriochlorophyll by forming a stable triplet state. This quenching helps to prevent the formation of harmful singlet oxygen. Other functions of spheroidene may include scavenging of singlet oxygen, nonradiative dissipation of excess light energy, and structural stabilization of the photosystem proteins. Spheroidene is thought to exist as the 15,15'-cis isomer, and not the all-trans isomer commonly shown in the literature, in native photosynthetic reaction centers. Biosynthesis The proteins involved in spheroidene biosynthesis are encoded by a gene cluster. Geranylgeranyl pyrophosphate (GGPP) is the precursor to spheroidene and the other carotenoids; two molecules of GGPP condense to form the symmetric tetraterpene phytoene.
This molecule then undergoes three desaturations to form neurosporene, which is then hydroxylated, desaturated again, and methoxylated to produce spheroidene. In some species, spheroidene is further oxygenated to produce the ketone spheroidenone. See also Photosynthesis Förster resonance energy transfer Antioxidant References Carotenoids Photosynthetic pigments Methoxy compounds
Spheroidene
[ "Chemistry", "Biology" ]
595
[ "Biomarkers", "Photosynthetic pigments", "Photosynthesis", "Carotenoids" ]
74,512,327
https://en.wikipedia.org/wiki/Stauroteuthis%20kengrahami
Stauroteuthis kengrahami is a species of small pelagic cirrate octopus. It is currently only known from off eastern Australia (Tasman Sea). Description Stauroteuthis kengrahami is generally similar to the other species in the genus. It is principally distinguished by having the cirri (long finger-like projections flanking the suckers) terminating at a much more distal sucker, but there are also differences in its V-shaped shell and digestive system. It is only known from a female specimen, and the suckers of this specimen are much smaller than in S. gilchristi. Distribution Stauroteuthis kengrahami is known from a single specimen collected off the coast at Batemans Bay, New South Wales, Australia, at a depth of . References Octopuses Cephalopods described in 2023 Cephalopods of Australia Species known from a single specimen
Stauroteuthis kengrahami
[ "Biology" ]
193
[ "Individual organisms", "Species known from a single specimen" ]
74,514,095
https://en.wikipedia.org/wiki/Robert%20Ramage%20%28chemist%29
Robert 'Bob' Ramage FRS (4 October 1935 – 16 October 2019) was an organic chemist, born in Glasgow, who specialised in the synthesis and biosynthesis of natural products, peptides, and proteins. Following his undergraduate degree in chemistry at the University of Glasgow, he stayed on for a PhD in organic chemistry. After his time at Glasgow, he followed his interest in natural products synthesis to Harvard and then Basel, before taking up a lectureship in organic chemistry at the University of Liverpool, where his attention was drawn to peptides. His peptide synthesis research continued at the University of Manchester Institute of Science and Technology (UMIST), where he also served as head of department. He returned to Scotland in 1984, taking up the Forbes chair of organic chemistry at the University of Edinburgh, where he remained until retirement in 2000. Outside of academia, in 1994 he founded the company Albachem, which utilised his work with peptides. He was elected Fellow of the Royal Society of Chemistry (1977), the Royal Society of Edinburgh (1986), and the Royal Society (1992). References 1935 births 2019 deaths Scottish chemists Organic chemists Scientists from Glasgow Alumni of the University of Glasgow
Robert Ramage (chemist)
[ "Chemistry" ]
243
[ "Organic chemists" ]
74,514,146
https://en.wikipedia.org/wiki/UP%20Diliman%20Department%20of%20Chemical%20Engineering
The Department of Chemical Engineering (DChE) is an academic department operating under the College of Engineering of the University of the Philippines Diliman. The department was established in 1956 and has an overall 90% passing rate in the licensure examinations held in the Philippines. It also contributes about 10% to 60% of the total number of new chemical engineers in the Philippines every year. Course offerings The department offers undergraduate and graduate programs leading to the degree of chemical engineering: Bachelor of Science in Chemical Engineering (BS ChE) — five-year program leading to the understanding of transport processes, chemical engineering thermodynamics and their applications to unit operations design, thermodynamics and reaction kinetics. Master of Science in Chemical Engineering (MS ChE) — 24-unit coursework that includes core and elective courses related to chemical engineering and six units of master's thesis. Doctor of Philosophy in Chemical Engineering (PhD ChE) Research laboratories The department consists of thirteen (13) research laboratories in different fields of chemical engineering and allied fields, and also hosts the Chemical Engineering Analytical Laboratory (CEAL), which offers analytical services to the university and industry. CEAL houses a Scanning Electron Microscope (SEM), a Fourier-Transform Infrared (FTIR) Spectroscope, a Universal Testing Machine (UTM); gas chromatographs (FID, TCD, MS), Ion Chromatographs, and high-performance liquid chromatograph (HPLC); the Department also has a Kjeldahl apparatus, a Karl Fischer apparatus, and an atomic absorption spectrophotometer (AAS). There is a real-time PCR, and digital gradient electrophoresis, shaking incubators and refrigerated incubators for biological studies. 
The thirteen (13) research laboratories are the following: Advanced Materials and Organic Synthesis Laboratory Bioprocess Engineering Laboratory Catalysis Research Laboratory Chemical Engineering Intelligence Learning Laboratory Environmental Process Engineering Laboratory Fuels, Energy and Thermal Systems Laboratory Green Materials Laboratory Inorganic Synthesis Laboratory Laboratory of Electrochemical Engineering Molecular Modelling Laboratory Nanotechnology Research Laboratory Process Systems Engineering Laboratory Sustainable Production & Responsible Consumption Laboratory References External links Official website Facebook page UP Diliman College of Engineering Chemical engineering organizations
UP Diliman Department of Chemical Engineering
[ "Chemistry", "Engineering" ]
455
[ "Chemical engineering", "Chemical engineering organizations" ]
74,515,823
https://en.wikipedia.org/wiki/Melike%20Lakadamyali
Melike Lakadamyali is a Cypriot physicist and a Full Professor of Physiology and of Cell and Developmental Biology (secondary) at the University of Pennsylvania in Philadelphia, renowned for her work in super-resolution microscopy and single-molecule biophysics. She is the Group Leader of the Lakadamyali Lab. Education From 1997 to 2001, Lakadamyali studied Physics at the University of Texas at Austin, USA. Towards the end of her university studies, she gained practical experience by working in the labs of Prof. Ken Shih and Prof. Josef A. Käs. From 2001 to 2006, she earned her Ph.D. in Physics at Harvard University, Cambridge, MA, USA, advised by Prof. Xiaowei Zhuang, focusing on the visualization of viral infection and intracellular transport in live cells. Career and research Lakadamyali then worked as a postdoctoral researcher under Prof. Jeff Lichtman at the Center for Brain Science at Harvard University, MA, USA. Between 2010 and 2016, Lakadamyali was a Group Leader at ICFO - The Institute of Photonic Sciences in Barcelona, Spain, holding a Junior (2010-2015) and Senior (2015-2016) Group Leader position, respectively. Her group's super-resolution microscopy study investigating the genome gained widespread attention: it revealed differences in how the genome is packaged and linked these packaging differences to stem cell state. In 2017, Lakadamyali returned to the United States to work as an Assistant Professor of Physiology and of Cell and Developmental Biology (secondary) at the University of Pennsylvania in Philadelphia. In 2020, Lakadamyali was promoted to Associate Professor of Physiology and in 2024 she was promoted to Full Professor. Lakadamyali's research focuses on examining biology at the level of its macromolecular machines. She aims to obtain a quantitative and biophysical comprehension of how these machines propel critical cell biological processes.
Hence, she is also involved in designing sophisticated microscopy techniques that strive to surmount the current limitations of existing methods, thereby enabling them to observe the macromolecular machinery of the cell in motion with superior spatiotemporal resolution. Lakadamyali is a well-known microscopist and biophysicist. Hence, she is frequently invited to speak at workshops and conferences in the field. Since 2019, Lakadamyali has been a Reviewing Editor (Cell Biology) at eLife. Awards and honours 1997 Cyprus-America-Scholarship Program and Fulbright Commission scholar 2013 EMBO Young Investigator Award 2013 ERC Starting Grant - MOTORS Grant 2017 Profiled in “Cell Scientist to Watch,” Journal of Cell Science 2017 Profiled in “Author File,” Nature Methods References External links Capturing life's processes with light: Melike Lakadamyali at TEDxBarcelona CEMB Faculty Feature: Melike Lakadamyali Living people Women in optics Microscopists Optical physicists European Research Council grantees University of Pennsylvania faculty University of Texas at Austin alumni Harvard University alumni Year of birth missing (living people) 21st-century women physicists
Melike Lakadamyali
[ "Chemistry" ]
644
[ "Microscopists", "Microscopy" ]
74,515,933
https://en.wikipedia.org/wiki/Acoustic%20circulator
In acoustical engineering, an acoustic circulator is a non-reciprocal three-port device that couples airborne sound waves only to an adjacent port in the direction of circulation. Compared to radio frequency (RF) and microwave circulators, acoustic circulators are for airborne sound waves rather than for RF and microwave electromagnetic signals. In 2014, Fleury et al. reported and experimentally demonstrated an acoustic Y-circulator by exploiting the acoustic analogue of the Zeeman effect: the structure is composed of a ring cavity with a circulating fluid that facilitates the nonreciprocal transmission of sound waves between acoustic waveguides. Similar circulator designs based on temporal modulation of the effective acoustic index and natural convection were later reported. References Audio engineering Acoustics
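The port-coupling behavior described above can be summarized by a scattering matrix. A minimal sketch, assuming an ideal lossless three-port circulator (real acoustic circulators only approximate this):

```python
import numpy as np

# Scattering matrix of an ideal lossless three-port circulator:
# a wave entering port j exits only at the next port in the
# direction of circulation (1 -> 2 -> 3 -> 1).
S = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

# Lossless: S is unitary, so all incident power is transmitted.
assert np.allclose(S.conj().T @ S, np.eye(3))

# Nonreciprocal: S is not symmetric, so transmission from
# port 1 to port 2 differs from transmission from port 2 to port 1.
assert not np.allclose(S, S.T)
print(S[1, 0], S[0, 1])  # 1.0 forward, 0.0 backward
```

The asymmetry of S (S21 = 1 but S12 = 0) is precisely what distinguishes a circulator from any reciprocal three-port device.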
Acoustic circulator
[ "Physics", "Engineering" ]
159
[ "Electrical engineering", "Audio engineering", "Classical mechanics", "Acoustics" ]
74,517,421
https://en.wikipedia.org/wiki/Potassium%20trithiocarbonate
Potassium trithiocarbonate is the inorganic compound with the chemical formula . It is the potassium salt of trithiocarbonic acid. It consists of two potassium cations and the trigonal planar trithiocarbonate dianion . It is a white solid, although impure samples often appear brown. It is prepared by the reaction of potassium sulfide or potassium hydrosulfide with carbon disulfide. Potassium trithiocarbonate reacts with alkylating agents to give trithiocarbonate esters: (X = halogen, R = monovalent organyl group) References Inorganic carbon compounds Inorganic sulfur compounds Thiocarbonyl compounds
Potassium trithiocarbonate
[ "Chemistry" ]
135
[ "Inorganic carbon compounds", "Inorganic compounds", "Inorganic sulfur compounds" ]
74,518,773
https://en.wikipedia.org/wiki/S/2020%20S%209
S/2020 S 9 is a natural satellite of Saturn. Its discovery was announced by Edward Ashton, Brett J. Gladman, Jean-Marc Petit and Mike Alexandersen on May 15, 2023 from observations taken between August 23, 2019 and August 16, 2020. S/2020 S 9 is about 4 kilometers in diameter, and orbits Saturn at a distance of 25.434 Gm in 1,534.97 days, at an inclination of 161.4° in a retrograde direction, with an eccentricity of 0.531. S/2020 S 9 belongs to the Norse group and is one of the most distant moons from Saturn, along with S/2004 S 26, S/2004 S 52 and S/2019 S 21. References Norse group Moons of Saturn Astronomical objects discovered in 2020 Irregular satellites Astronomical objects discovered in 2023 Moons with a retrograde orbit
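The quoted period is roughly self-consistent with the quoted semi-major axis under Kepler's third law. A quick sketch (Saturn's GM here is an assumed standard value; exact agreement is not expected, since the mean elements of distant irregular moons are strongly perturbed by the Sun):

```python
import math

GM_SATURN = 3.7931e16   # m^3/s^2, Saturn's gravitational parameter (assumed standard value)
a = 25.434e9            # semi-major axis in metres (25.434 Gm)

# Kepler's third law: T = 2*pi*sqrt(a^3 / GM)
T_seconds = 2 * math.pi * math.sqrt(a**3 / GM_SATURN)
T_days = T_seconds / 86400
print(round(T_days))    # 1515, within ~1.3% of the quoted 1,534.97 days
```

The two-body value comes out slightly shorter than the quoted mean period, consistent with solar perturbation of such a distant orbit.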
S/2020 S 9
[ "Astronomy" ]
175
[ "Astronomy stubs", "Planetary science stubs" ]
74,520,478
https://en.wikipedia.org/wiki/Gregor%20Sch%C3%B6ner
Gregor Schöner (born 1958 in Sindelfingen) is a German computational neuroscientist. He is professor for the theory of cognitive systems at the Ruhr University Bochum, as well as the director of the Institute for Neuroinformatics located there. Life and work From 1983 to 1985 Gregor Schöner studied physics and mathematics at Saarland University. In 1985, he received his PhD in theoretical physics at the University of Stuttgart under Hermann Haken. For the next four years, he devoted himself to applications of the theory of stochastic dynamical systems to the coordination of biological motion under J. A. Scott Kelso at Florida Atlantic University. From 1989 to 1994, he led his first research group at the Institute of Neuroinformatics at Ruhr University in Bochum. In that time, he and his group extended the application of dynamical systems to models of perception, motion, and autonomous robotics. After a six-year stay at the Centre de Recherche en Neurosciences Cognitives in Marseille, Gregor Schöner returned to the institute in 2001. He took over its leadership in 2003, succeeding Christoph von der Malsburg, and has held this position ever since. Since September 2022, he has additionally been chairman of the Society for Cognitive Science in Germany. Gregor Schöner and his research group are known for their scientific development, applications, and software packages on Dynamic Field Theory (DFT). DFT provides a neurally plausible framework for the mathematical modeling of human cognition according to the theories of embodied cognition. The theory builds upon the continuous attractor network models of Hugh R. Wilson and Jack D. Cowan (the "Wilson-Cowan model") and Shun'ichi Amari (the "neural field model"), which describe the interaction between excitatory and inhibitory coupled populations of cortical neurons. Schöner's research group publishes on visual search, spatial and relational language, and autonomous robotics.
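The Amari-type neural field underlying DFT can be sketched in a few lines. A minimal, illustrative 1-D simulation; all parameter values and the kernel shape are assumptions chosen for demonstration, not taken from Schöner's publications:

```python
import numpy as np

def simulate_field(steps=300, n=101, tau=10.0, h=-5.0):
    """Minimal 1-D Amari neural field of the kind used in DFT.

    tau * du/dt = -u + h + S + w * f(u), with a Mexican-hat kernel w
    (local excitation, broader inhibition) and sigmoid output f."""
    x = np.arange(n)
    d = x - n // 2
    # interaction kernel: narrow excitatory Gaussian minus wider inhibitory one
    w = 15.0 * np.exp(-d**2 / (2 * 3.0**2)) - 5.0 * np.exp(-d**2 / (2 * 10.0**2))
    # localized external stimulus centred at x = 30
    S = 8.0 * np.exp(-(x - 30)**2 / (2 * 3.0**2))
    u = np.full(n, h)                       # field starts at resting level h
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))        # sigmoid firing rate
        conv = np.convolve(f, w, mode="same")
        u = u + (-u + h + S + conv) / tau   # Euler step
    return u

u = simulate_field()
print(int(np.argmax(u)))  # a self-stabilized peak forms at the stimulus location
```

Lateral excitation sharpens the stimulated region into a localized peak while the broader inhibition suppresses the rest of the field, which is the basic "decision" mechanism of dynamic fields.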
Publications Gregor Schöner, John P. Spencer and the DFT Research Group (2015). A primer on dynamic field theory. Oxford University Press, ISBN 978-0-19-930056-3 Esther Thelen, Gregor Schöner, Christian Scheier, and Linda B. Smith (2001). "The dynamics of embodiment: A field theory of infant perseverative reaching". Behavioral and Brain Sciences 24(1), 1–34. doi:10.1017/S0140525X01003910 References External links Publication list on the website of the Institute for Neuroinformatics at the Ruhr University Bochum Living people 1958 births German cognitive neuroscientists Neuroinformatics Computational neuroscientists German lecturers Ruhr University Bochum
Gregor Schöner
[ "Biology" ]
587
[ "Bioinformatics", "Neuroinformatics" ]
70,236,771
https://en.wikipedia.org/wiki/Naganishia%20adeliensis
Naganishia adeliensis (synonym Cryptococcus adeliensis) is a species of fungus in the family Filobasidiaceae. It is currently only known from its yeast state, isolated from decaying algae in Antarctica. When plated on agar Naganishia adeliensis produces colonies that are cream, with a smooth, glossy appearance. The colonies frequently appear to have a soft texture. The optimal growth range for this species is at 25 degrees Celsius. Naganishia adeliensis is incapable of fermentation, as is typical of Naganishia species. This species is able to use sucrose, maltose, cellbiose, trehalose, raffinose, citrate, inositol ethanol, soluble starch, melezitose, xylitol, saccharate, salicin as well as many other compounds as sole carbon sources. Naganishia adeliensis is able to use nitrate, nitrite and cadaverine (a protein created when animals decay and which produces the putrid smell associated with this decay) as sources of nitrogen. This species forms starch as it grows. Naganishia adeliensis also grows on 0.01% cycloheximide. References External links Tremellomycetes Fungi of Antarctica Fungi described in 2000 Fungus species
Naganishia adeliensis
[ "Biology" ]
277
[ "Fungi", "Fungus species" ]
70,236,954
https://en.wikipedia.org/wiki/Katharina%20T.%20Huber
Katharina Theresia Huber (born 1965) is a German applied mathematician and mathematical biologist whose research concerns phylogenetic trees, evolutionary analysis, their mathematical foundations, and their mathematical visualization. She is an associate professor in the School of Computing Sciences at the University of East Anglia in England, and the school's director of postgraduate research. Education and career Huber completed a doctorate in mathematics at Bielefeld University in 1997. Her dissertation, A T-theoretical Approach to Phylogenetic Analysis and Cluster Analysis, was jointly supervised by Andreas Dress and Walter Deuber. After postdoctoral research at Massey University in New Zealand, Huber became a lecturer in mathematics at Mid Sweden University in Sundsvall, Sweden in 2000. She moved to the Department of Biometry and Engineering of the Uppsala University in Sweden in 2003, and to the School of Computing Sciences at the University of East Anglia in 2004, where she became a senior lecturer in 2012. Contributions Huber is a coauthor of the book Basic Phylogenetic Combinatorics (Cambridge University Press, 2012), and a codeveloper of the ape package for evolutionary analysis in the R statistical programming system. Her other research publications include: References 1965 births Living people 21st-century German biologists 20th-century German mathematicians German women biologists German women mathematicians Theoretical biologists German applied mathematicians Bielefeld University alumni Academic staff of Mid Sweden University Academic staff of Uppsala University Academics of the University of East Anglia 21st-century German mathematicians
Katharina T. Huber
[ "Biology" ]
295
[ "Bioinformatics", "Theoretical biologists" ]
70,236,977
https://en.wikipedia.org/wiki/Naganishia%20albidosimilis
Naganishia albidosimilis (synonym Cryptococcus albidosimilis) is a species of fungus in the family Filobasidiaceae. It is currently only known from its yeast state, isolated from soil in Antarctica. When plated on agar Naganishia albidosimilis produces colonies that are shining white. The colonies appear to be mucosoid when plated on agar. When grown in liquid media, the yeast fails to grow well unless the media is constantly agitated. This species is considered mesophilic, with optimal growth temperature at 25 °C. The yeast cells are ovoid and produce a capsule. Naganishia albidosimilis reproduces through budding and it does not appear as though this species reproduces through any sexual means. When mature, the cell size is approximately 4.9μm to 6.6μm. Naganishia albidosimilis can use L-arabinose, cellobiose, citrate at pH 6.0, ethanol, D-glucitol, gluconate at pH 5.8, glucuronate at pH 5.5, myo-inositol, lactose, maltose, mannitol, melezitose, α-methylglucoside, L-rhamnose, salicin, soluble starch, succinate at pH 5.5, sucrose and xylose as sole carbon sources. Naganishia albidosimilis can also use L-lysine, nitrate and cadaverine as sole nitrogen sources. This species cannot ferment. Naganishia albidosimilis is DBB positive, and produces amylose. References External links Tremellomycetes Fungi of Antarctica Fungus species
Naganishia albidosimilis
[ "Biology" ]
376
[ "Fungi", "Fungus species" ]
70,237,826
https://en.wikipedia.org/wiki/OSAM-1
OSAM-1 (On-orbit Servicing, Assembly, and Manufacturing 1) was a 2016-2024 conceptual NASA mission and spacecraft designed to test on-orbit refilling of satellites. The program was cancelled in 2024, two years ahead of its planned launch date. It was initially known as Restore-L. Originally scheduled to launch in 2020, its launch at the time of cancellation was planned for no earlier than 2026. The primary objective of the concept mission and spacecraft was the complex refueling of Landsat 7, a satellite launched in 1999, that was not designed for on-orbit servicing. This would have involved grasping the satellite with a mechanical arm, gaining access to the satellite's fuel tank by cutting through insulation and wires and unscrewing a bolt, and then attaching a hose to pump in hydrazine fuel. At the time the mission was conceived, it was expected to be the first refueling of a satellite in space, and a demonstration of the potential to repair some of the thousands of active satellites in orbit and keep them in operation for a longer time. Because the satellites now in space were not designed to be serviced, there are significant challenges to doing so successfully. OSAM-1's second objective, added in 2020, was to deploy a separate robot called SPIDER (Space Infrastructure Dexterous Robot) to build a new structure in space. Using robots to build and assemble new structural components from scratch would be an important step towards a type of space-based construction that had been impossible to date. Description The OSAM-1 spacecraft was to include: two arms to grapple the target satellite; the attached payload for SPIDER. History In 2016, NASA's Restore-L satellite was intended to refuel Landsat 7. In 2020, SPIDER was added and the name was changed from Restore-L to OSAM-1. In Feb 2022, OSAM-1 passed its Critical Design Review. On 04 Sept 2023, NASA notified Congress of their intent to cancel OSAM-1. 
On 20 Sept 2023 the satellite bus arrived at NASA Goddard from Maxar. On 1 March 2024, NASA announced that OSAM-1 had been cancelled due to "continued technical, cost, and schedule challenges, and a broader community evolution away from refueling unprepared spacecraft." Cost & legacy OSAM-1 was funded by NASA’s Space Technology Mission Directorate through its Technology Demonstration Missions program. At cancellation in 2024, about $2 billion had been invested in the project. Progression A subsequent mission, OSAM-2, would have also had two robotic arms. OSAM-2 would have used ModuLink software which is based on xLink. In 2023, NASA decided to conclude the OSAM-2 project without proceeding to a flight demonstration. See also In-orbit refueling References External links On-orbit Servicing Assembly and Manufacturing 1 Mission (OSAM-1) OSAM-2 Robotic Refueling Mission 3 (RRM3) Proposed NASA satellites Robotic satellite repair vehicles Cancelled spacecraft
OSAM-1
[ "Astronomy" ]
614
[ "Astronomy stubs", "Spacecraft stubs" ]
70,238,111
https://en.wikipedia.org/wiki/Visible%20Embryo%20Project
The Visible Embryo Project (VEP) is a multi-institutional, multidisciplinary research project originally created in the early 1990s as a collaboration between the Developmental Anatomy Center at the National Museum of Health and Medicine and the Biomedical Visualization Laboratory (BVL) at the University of Illinois at Chicago, "to develop software strategies for the development of distributed biostructural databases using cutting-edge technologies for high-performance computing and communications (HPCC), and to implement these tools in the creation of a large-scale digital archive of multidimensional data on normal and abnormal human development." This project related to BVL's other research in the areas of health informatics, educational multimedia, and biomedical imaging science. Over the following decades, the list of VEP collaborators grew to include over a dozen universities, national laboratories, and companies around the world. An early (1993) goal of the project was to enable what it called "Spatial Genomics," to create tools and systems for three-dimensional morphological mapping of gene expression, to correlate data from the Human Genome Project with the multidimensional location of genomic expression activity within the morphological context of organisms. This led to the invention in the late 1990s by VEP collaborators of the first system for Spatial transcriptomics. Other areas that VEP researchers pioneered include early web technologies, cloud computing, blockchain, and virtual assistant technology. Early history The VEP was created in 1992 as a collaboration between the UIC Biomedical Visualization Laboratory, directed by Michael Doyle, and the Human Developmental Anatomy Center at the National Museum of Health and Medicine (NMHM), directed by Adrianne Noe. Doyle had been appointed to the oversight committee of the Visible Human Project at the National Library of Medicine, but it would be several years before that data would become available. 
Looking for other sources of high-resolution volume data on the human anatomical structure, he came across the Carnegie Collection of Human Embryology, housed at the NMHM. During a sabbatical working on methods for magnetic resonance microscopy (MRM) in the laboratory of Paul Lauterbur, 2003 Nobel Laureate, Dr. Doyle created a plan for the VEP and worked with Dr. Noe to recruit a large group of prominent researchers to join as initial collaborators. A primary goal of the project was to provide a testbed for the development of new technologies, and the refinement of existing ones, for the application of high-speed, high-performance computing and communications to current problems in biomedical science. Data Much of the early work involved creating serial section reconstructions from microscope slides and extracting volumetric data from the NMHM specimens, rather than just surface data. Sets of serial microscopic cross-sections through human embryos (prepared by Carnegie Collection contributors between the 1890s and 1970s) were used as sample image data around which to design and implement various components of the system. These images were digitized and processed to create 3D voxel datasets representing embryonic anatomy. Standard techniques for 3D volume visualization could then be applied to these data. Image processing of these data was required to correct for certain artifacts that were found in the original microscope sections from routine histological techniques of the tissue preparation. Later activities of the project would make use of MRM datasets acquired from the NMHM collection, ultra-high resolution histology images, and three-dimensional adult image data acquired via the Visible Human Project, in addition to embryo data. 
Collaborations The VEP became a far-reaching collaborative research program involving a large number of eminent scientists across the nation and around the world, including, among many others: project founder Michael Doyle, of UIC and then UCSF; Adrianne Noe, Director of the National Museum of Health and Medicine; George Washington University's Robert Ledley, inventor of the full-body CT scanner; UIUC's Paul Lauterbur, MRI pioneer and Nobel laureate; LSU's Ray Gasser, eminent embryologist; Oregon Health & Science University's Kent Thornburg, internationally renowned developmental biologist; Reagan Moore, Director of the DICE group at the San Diego Supercomputer Center; William Lennon of Lawrence Livermore National Laboratory; Ingrid Carlbom of Digital Equipment Corporation's Cambridge Research Lab; and Demetri Terzopoulos of the University of Toronto. Some notable Visible Embryo Project collaborations include: Muritech In the mid-1990s, Michael Doyle collaborated with Harvard's Betsey Williams to create an internet atlas of mouse development, in a project named "Muritech." A prototype two- and three-dimensional color atlas of mouse development was developed, using two embryos: a 13.5 d normal mouse embryo and a PATCH mutant embryo of the same age. Serial sections of the embryos, with an external registration marker system introduced into the paraffin embedding process, were prepared by standard histological methods. For the 2D atlas, color images were digitized from 100 consecutive sections of the normal embryo. For the 3D atlas, 300 gray-scale images digitized from the mutant embryo were conformally warped and reconstructed into a 3D volume dataset. The external fiducial system facilitated the three-dimensional reconstruction by providing accurate registration of consecutive images, and also allowed for precise spatial calibration and the correction of warping artifacts. 
The atlases, with their associated anatomical knowledge base, were then integrated into a multimedia online information resource via the VEP's Web technology to provide research biologists with a set of advanced tools to analyze normal and abnormal murine embryonic development. Next-Generation Internet Contract The Human Embryology Digital Library and Collaboratory Support Tools project was begun in 1999 as a demonstration of the biomedical application potential of the Next Generation Internet (NGI). The collaborators included eight organizations at sites around the continental USA, a mix of medical and information technology organizations, including George Mason University, Eolas, the Armed Forces Institute of Pathology, Johns Hopkins University, Lawrence Livermore National Laboratory, the Oregon Health & Science University, the San Diego Supercomputer Center, and the University of Illinois at Chicago. The project undertook three major applications, based on the Carnegie Collection of Embryos at the AFIP's National Museum of Health and Medicine Human Development Anatomy Center (HDAC), a collection of cellular-level tissue slides that is one of the world's largest repositories of human embryos. These applications included: 1. Digitization, curation, and annotation of embryo data: The VEP team created a production digitization capability, using automated digital microscopy, with data automatically registered for tiling and transmitted to the repository at the San Diego Supercomputer Center, and annotated by teams of biomedical volunteers with expert-level quality control. 2. Distributed embryology education using materials derived from the Carnegie Collection to create animations of embryo development and recorded master classes that can be streamed over the Internet or downloaded to create a portable electronic classroom. 3. 
Clinical management planning where medical professionals and expectant parent patients can review normal and abnormal development patterns with collaborative consultation from distant experts. AnatLab To enable new ways to interactively explore the VEP's massive volume datasets, Michael Doyle created the zMap system, using the Visible Human Project image data for the first prototype. In 2011, Doyle collaborated with Steven Landers, Maurice Pescitelli, and others to use zMap to create an interactive tool that allows the user to select desired sets of anatomical structures for the automated generation of 3D Quicktime VR visualizations. The system used the resources available in the Eolas AnatLab knowledgebase, which has over 2200 structures identified involving a total of over 4600 sections and 700,000 annotations overall, to access the anatomical structure surface information for individual structures. This surface information was then used to automatically extract the contained volumetric image data and convert the data into a format compatible with the Osirix volume imaging system. Automated scripts then controlled Osirix in the creation of a 3D visualization of the group of selected anatomical structures. Photorealistic results were obtained by using the original color voxel information from the original Visible Human cryosection images to color the surface of the 3D reconstruction. The system then automatically progressed through a pre-defined set of rotations to generate the set of image frames required to create a Quicktime VR (QTVR) interactive movie. This system thereby allowed an anatomy instructor to quickly and easily generate customized interactive 3D reconstructions for use in the classroom. 
Technologies Over the decades since it was begun, the work done in the Visible Embryo Project has led to the development of several important technological breakthroughs that have had a worldwide impact: Spatial transcriptomics Even though spatial mapping of Omics data had been described as an initial goal of the VEP, it wasn't until 1999 that four VEP collaborators, Michael Doyle, George Michaels, Maurice Pescitelli, and Betsey Williams worked together to create a system for what they called "spatial genomics." Today, this technology is known as Spatial transcriptomics. As their 2001 U.S. patent application states, their system solved the need "to gather gene expression data in a manner that supports the type of exploratory research that can take advantage of the broad-spectrum types of biologic activity analysis enabled by today's microarray tools," as well as the need for "technology to allow the collection of large volumes of these types of data, to enable exploratory investigations into patterns of biologic activity ... to correlate gene expression data with morphological structure in a useful and easy to understand manner, such as in a volume visualization environment ... to allow the collection of larger volumes of gene expression data across a wider spectrum of gene types than ever before." They named their system SAGA, short for Spatial Analysis of Genomic Activity. As described in the related U.S. patents, the SAGA system enabled the multidimensional morphological reconstruction of tissue biologic activity and "makes it possible for biological tissue specimens to be imaged in multiple dimensions to allow morphological reconstruction. The same tissue specimen is physically sampled in a regular raster array, so that tissue samples are taken in a regular multidimensional matrix pattern across each of the dimensions of the tissue specimen. 
Each sample is isolated and coded so it can be later correlated to the specific multidimensional raster array coordinates, thereby providing a correlation with the sample's original pre-sampling morphological location in the tissue specimen. Each tissue sample is then analyzed with broad-spectrum biological activity methods, providing information on a multitude of biologic functional characteristics [mRNA, etc.] for that sample. The resultant raster-based biological characteristic data may then be spatially mapped into the original multidimensional morphological matrix of image data. ... various types of analysis may then be performed on the resultant correlated multidimensional spatial datasets." Spatial transcriptomics was named the "Method of the Year for 2020" by Nature, in January 2021. The cloud In 1993, Dr. Doyle became the Director of the UCSF Center for Knowledge Management (CKM). To create the underlying software and hardware that would provide the needed computational power for the VEP, Doyle's CKM group designed a new paradigm for performing remote client-server volume visualization over the Internet. This involved creating a system for remotely computing visualizations through a networked cluster, or cloud, of distributed heterogeneous computational engines, and coordinating the computations to pass user interface control messages to those engines, causing the cloud computers to generate new rendered visualizations and stream the resulting views to the users' desktops, while delta-encoding and compressing the streamed data to optimize performance over low-bandwidth connections. 
To hide the complexity of the system from the user, they modified one of the earliest versions of the NCSA Mosaic Web browser to allow their interactive cloud-computing applications to be automatically launched and run embedded within Web pages, so any user would need only to load a Web document from the VEP and would be able to immediately interactively explore the project's multidimensional datasets, rather than static representations of those datasets. In November 1993, the CKM's VEP research group demonstrated this system, the first Web-based Cloud application platform, on-stage to a meeting of approximately 300 Bay Area SIGWEB members at Xerox PARC. Today, this capability is called "the Cloud." The VEP team's work opened the door to the potential of the Web to provide rich information resources to users, regardless of where they were located and spawned a multi-trillion-dollar industry as a result. zMap Dr. Doyle then began to focus more directly on the problem of how to navigate within these complex biomedical volume datasets and developed a system for mapping the semantic identity of morphological structures within the datasets and integrating those mappings with the hypermedia linking mechanism of the Web. This led to the creation of the first three-dimensional Web image map system and was used to create a variety of online interactive reference systems for biomedical education and research throughout the 90s and beyond. Blockchain One of the challenges for large collaborative knowledge bases is how to assure the integrity of data over a long period of time. Standard cryptographic methods that depend upon trusting a central validation authority are vulnerable to a variety of factors that can lead to data corruption, including hacking, data breaches, insider fraud, and the possible disappearance of the validating authority itself. To solve this problem for the VEP data collections, Dr. 
Doyle created a novel type of cryptographic system, called Transient-key cryptography. This system allows the validation of data integrity without requiring users of the system to trust any central authority, and also represented the first decentralized blockchain system, enabling the later creation of the Bitcoin system. In the mid-2000s, this technology was adopted as a national standard in the ANSI ASC X9.95 Standard for trusted timestamps. Virtual assistants Since the mid-2000s, the VEP team has made great use of digital voice and text communications systems, to facilitate communications among geographically-distributed team members. To increase the efficiency of these communications, Michael Doyle and Steve Landers collaborated to create the Skybot system, the first AI-based mobile virtual assistant system. Skybot used the power and flexibility of AI to dramatically expand the use of messaging systems. Using Skybot, one could create a variety of programmable responses to incoming calls and chat messages. The system incorporated a state machine that could be configured to automatically trigger automated responses to various communication and user-context events. This provided the user with a surprisingly broad and powerful set of capabilities for automating mobile communication operations and pioneered the mobile intelligent-assistant product category that is now ubiquitous worldwide. Current status Plans are underway to secure the funding necessary to expand the Visible Embryo Project to create a national resource that combines large-scale knowledgebase with advanced analytical tools in an innovative online collaborative environment to support and continue to advance the art and science of Spatial Omics. 
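The tamper-evidence property that such a decentralized integrity system provides can be illustrated with a toy hash chain. This is a schematic sketch only: the function names and record layout below are illustrative, and this is not the transient-key or ANSI X9.95 protocol, which chains short-lived signing keys and trusted timestamps rather than bare hashes.

```python
import hashlib
import json
import time

def _digest(record):
    # hash over the record's contents (excluding its own hash field)
    payload = {k: record[k] for k in ("data", "prev", "ts")}
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_record(chain, data):
    # each record commits to the hash of its predecessor, so altering
    # any earlier record invalidates every later link
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"data": data, "prev": prev_hash, "ts": time.time()}
    record["hash"] = _digest(record)
    chain.append(record)
    return record

def verify(chain):
    # recompute every hash and check the links; no central authority needed
    prev_hash = "0" * 64
    for record in chain:
        if record["prev"] != prev_hash or record["hash"] != _digest(record):
            return False
        prev_hash = record["hash"]
    return True
```

Because each record commits to its predecessor's hash, changing any historical entry breaks verification of everything after it — the long-term data-integrity property described above, obtained without trusting a central validation authority.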
See also Blockchain Spatial transcriptomics Transient-key cryptography Virtual assistant Visible Human Project References External links Home page of the Visible Embryo Project 2000-2004 NIH/NLM Next Generation Internet contract, hosted at George Mason University Pre-proposal white paper for the follow-on project to the Visible Embryo Project 2000-2004 NIH/NLM Next Generation Internet contract Human anatomy Biotechnology Virtual assistants Cloud computing Bitcoin Bioinformatics
Visible Embryo Project
[ "Engineering", "Biology" ]
3,187
[ "Bioinformatics", "Biological engineering", "nan", "Biotechnology" ]
70,238,171
https://en.wikipedia.org/wiki/Plicaturopsis%20crispa
Plicaturopsis crispa, the crimped gill or crispling, is a saprotrophic species of fungus in the genus Plicaturopsis that can be found in temperate regions year-round, often on hazel, alder, and beech trees. The fungus has a wide distribution, having been recorded in Europe, Asia, Australia, and North America. In Britain, its range has been rapidly increasing, with 78% of all records of P. crispa in the FRDBI (Fungal Records Database of Britain & Ireland) being from after the year 2000, many of which are in areas with no previous recordings of the species. Taxonomy The species was originally described in 1794 by Persoon as Merulius fagineus; he then reclassified it in 1800 as Merulius crispus. Then, in 1821, Fries proceeded to move it into Cantharellus but later, in 1862, had second thoughts and moved it to Trogia, a genus composed of several tropical species with similar hymenial ridges. In 1872, the American mycologist Peck described a new genus Plicatura (from plicate = folded) for the American fungus P. alni. This fungus had already been described in Europe by Fries as Merulius niveus. This caused Karsten to produce the combination Plicatura nivea. Then, in 1922, Carleton Rea abandoned the genus Trogia and moved T. crispa into Plicatura in his book British Basidiomycetae. In 1964, Derek Reid emphasized the morphological differences between both of these Plicatura species and erected a new monotypic genus Plicaturopsis for P. crispa. Molecular findings On the basis of a six-gene study, Binder and colleagues (2010) erected a new order called Amylocorticiales that confirmed the previous relationships suggested in Eriksson et al. (1981). P. crispa undoubtedly belongs within this group, and this new order is sister to the Agaricales. It is worth noting that Merulius, Cantharellus, Trogia, and Plicatura are not closely related as previously thought but are instead from various different orders (Polyporales, Cantharellales, Agaricales, and Amylocorticiales respectively). 
Description It forms clusters on typically deciduous trees on decomposing branches. Fruit bodies are generally 1-3 cm in length with bracket-like semi-circular shell shapes. Upper surface is normally concentrically zoned getting paler as it approaches the edge. Underside is made up of pale forked folds, giving a gill-like appearance. It produces white spores which are small, narrow allantoid, weakly amyloid, and only 3–4.5 x 1–1.2 μm. Ecology Plicaturopsis crispa is an effective participant in the initial phase of decay, colonizing predominantly dead branches of deciduous trees (Fagus and Betula) and is associated with a white rot. A few years into the succession of wood decomposition, strong competitors such as Trametes versicolor and the split-gill fungus Schizophyllum commune often displace P. crispa. Gallery References Amylocorticiales Taxa named by Christiaan Hendrik Persoon Taxa described in 1794 Fungi of Asia Fungi of Australia Fungi of Europe Fungi of North America Fungus species
Plicaturopsis crispa
[ "Biology" ]
690
[ "Fungi", "Fungus species" ]
70,238,543
https://en.wikipedia.org/wiki/Water%20sachet
Water sachets, or sachet water, are a common form of selling pre-filtered or sanitized water in plastic, heat-sealed bags in parts of the global south, and are especially popular in Africa. Water sachets are cheaper to produce than plastic bottles, and easier to transport. In some countries, water vendors refer to sachet water as "pure water". High demand, and poor collection of waste from consumers, has resulted in significant plastic pollution and waste from sachets throughout West Africa. Accumulation of sachets frequently causes blocked stormwater drainage, among other issues. Some countries, such as Senegal, have banned disposable sachets. Because sachets are frequently filled in small and often unregulated facilities, inadequate sanitary conditions can occasionally result in disease or contamination. However, in countries like Ghana, consumers still prefer sachets over other forms of vendors, with a perception of lower risk. This form of water distribution provides vital access to water in communities that otherwise wouldn't have it. However, some scholars have identified this method of distribution as having potential human rights and social justice issues, limiting the right to water and sanitation. Health concerns Studies of sachets frequently find improper sanitary conditions among sachet producers. One study of sachets in Port Harcourt, Nigeria found that sachet water has significant contamination from various disease-causing microbes. After prolonged storage of 4 months, several of the samples contained levels of the microbes threatening to human health. Similarly, following the onset of the COVID-19 pandemic, a study in Damongo found that 96% of producers did not have adequate sanitary measures. By country Ghana Sachet water is common throughout Ghana. A 2012 review of sachet use in Ghana found sachet water ubiquitous, especially in poorer communities. Sachets were typically 500 ml polyethylene bags, heat sealed at each end. 
Sachet water delivery is part of a larger trend of delivery by private water vendors from municipal taps. Packaging water in small plastic bags started in the 1990s, and the practice grew after the introduction of Chinese machines for filling and heat-sealing bags. A price increase in 2022 saw significant changes in sales in the Ashanti region. Nigeria Sachet water has become an increasingly important part of water access in Nigeria, especially in fast-growing cities like Lagos. The cost of sachet water is dependent on economic changes. In 2021, the Association for Table Water Producers of Nigeria increased the price of a bag of sachet water to 200 naira due to an increase in production costs. A significant devaluation of the local currency led to significant price increases in 2022. In 2024, sachet water sells for N50 per sachet and a bag sells for between N400 and N500, the increases being due to changes in the economy. Some vendors have improvised by selling ice water, as some customers cannot afford sachet water. Around June 2024, two water companies in Owerri were closed by the National Agency for Food and Drug Administration and Control (NAFDAC) due to poor manufacturing processes and unhygienic production. The two factories were Elmabo Table Water and Sylchap Table Water, while Giver Table Water was cautioned for minor issues. See also Drinking water Purified water Self-supply of water and sanitation WASH – Water supply, sanitation and hygiene Water kiosk References Water Plastics Drinks
Water sachet
[ "Physics", "Environmental_science" ]
694
[ "Hydrology", "Unsolved problems in physics", "Water", "Amorphous solids", "Plastics" ]
70,238,590
https://en.wikipedia.org/wiki/Problem%20Solving%20Through%20Recreational%20Mathematics
Problem Solving Through Recreational Mathematics is a textbook in mathematics on problem solving techniques and their application to problems in recreational mathematics, intended as a textbook for general education courses in mathematics for liberal arts education students. It was written by Bonnie Averbach and Orin Chein, published in 1980 by W. H. Freeman and Company, and reprinted in 2000 by Dover Publications. Audience and reception Problem Solving Through Recreational Mathematics is based on mathematics courses taught by the authors, who were both mathematics professors at Temple University. It follows a principle in mathematics education popularized by George Pólya, of focusing on techniques for mathematical problem solving, motivated by the idea that by doing mathematics rather than being told about its "history, culture, or applications", liberal arts education students (for whom this might be their only college-level mathematics course) can gain a better idea of the nature of mathematics. By concentrating on problems in recreational mathematics, Averbach and Chein hope to motivate students by the fun aspect of these problems. However, this approach may also lead the students to lose sight of the important applications of the mathematics they learn, and contains little to no material on mathematical proof. The book's exercises include some with detailed solutions, some with less-detailed answers, and some that provide only hints to the solution, providing flexibility to instructors in using this book as a textbook. Cartoons and other illustrations of the concepts help make the material more inviting to students. As well as for general education at the college level, this book could also be used to help prepare students going into mathematics education, and for mathematics appreciation for secondary school students. 
It could also be used as a reference by secondary school mathematics teachers in providing additional examples for their students, or as personal reading for anyone teenaged or older who is interested in mathematics. Alternatively, reviewer Murray Klamkin suggests using the books of Pólya for these purposes, but adding Problem Solving Through Recreational Mathematics as a supplement to these books. Topics The book begins with an introductory chapter on problem-solving techniques in general, including six problems to motivate these techniques. The rest of the book is organized into eight thematic chapters, each of which can stand alone or be read in an arbitrary order. The topics of these chapters are: Logic puzzles, especially focusing on "Knights and Knaves" types of puzzles in which some characters are truthful while others answer only falsely. Word problems involving time and motion, with continuous variables and with solutions using algebra. Number theory, particularly focusing on Diophantine equations, continuing the theme of word problems but with discrete variables for numbers of people, goods, or costs, and also including material on divisibility, prime numbers, and the Chinese remainder theorem. Numeral systems and cryptarithms. Graph theory, including Euler tours and Hamiltonian cycles. Game theory and combinatorial game theory, including material on games of perfect information and on the games of tic tac toe, nim, and hex. Solitaire games and puzzles, including polyominoes, peg solitaire, and the 15 puzzle. A collection of leftover problems which did not fit into any of the other chapters. References Mathematics textbooks 1980 non-fiction books Recreational mathematics Problem books in mathematics
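The Chinese remainder theorem mentioned among the number-theory topics lends itself to a short computational illustration. The sketch below is supplied by this edit, not excerpted from the book; it solves the classic puzzle of finding a number with given remainders modulo pairwise-coprime divisors.

```python
from math import gcd

def crt(residues, moduli):
    """Solve x ≡ r_i (mod m_i) for pairwise-coprime moduli by
    successive substitution; returns the unique solution modulo
    the product of the moduli."""
    x, m = 0, 1
    for r, mod in zip(residues, moduli):
        assert gcd(m, mod) == 1, "moduli must be pairwise coprime"
        # choose t so that x + m*t ≡ r (mod mod)
        t = ((r - x) * pow(m, -1, mod)) % mod
        x += m * t
        m *= mod
    return x % m

# the classic puzzle: a number leaving remainders 2, 3, 2 when
# divided by 3, 5, 7
print(crt([2, 3, 2], [3, 5, 7]))  # 23
```

Each step enlarges the modulus, so the loop runs in time proportional to the number of congruences, with the modular inverse supplied by Python's three-argument `pow`.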
Problem Solving Through Recreational Mathematics
[ "Mathematics" ]
656
[ "Recreational mathematics" ]
70,238,981
https://en.wikipedia.org/wiki/Quasilinearization
In mathematics, quasilinearization is a technique which replaces a nonlinear differential equation or operator equation (or system of such equations) with a sequence of linear problems, which are presumed to be easier, and whose solutions approximate the solution of the original nonlinear problem with increasing accuracy. It is a generalization of Newton's method; the word "quasilinearization" is commonly used when the differential equation is a boundary value problem. Abstract formulation Quasilinearization replaces a given nonlinear operator N with a certain linear operator which, being simpler, can be used in an iterative fashion to approximately solve equations containing the original nonlinear operator. This is typically performed when trying to solve an equation such as N(y) = 0 together with certain boundary conditions for which the equation has a solution y*. This solution is sometimes called the "reference solution". For quasilinearization to work, the reference solution needs to exist uniquely (at least locally). The process starts with an initial approximation y0 that satisfies the boundary conditions and is "sufficiently close" to the reference solution y* in a sense to be defined more precisely later. The first step is to take the Fréchet derivative of the nonlinear operator N at that initial approximation, in order to find the linear operator L = N'(y0) which best approximates N locally. The nonlinear equation may then be approximated as N(y) ≈ N(y0) + L(y − y0). Setting this equation to zero, imposing zero boundary conditions on the correction, and ignoring higher-order terms gives the linear equation L Δy0 = −N(y0). The solution Δy0 of this linear equation (with zero boundary conditions) gives the next approximation y1 = y0 + Δy0. Computation of successive approximations y2, y3, ... by solving these linear equations in sequence is analogous to Newton's iteration for a single equation, and requires recomputation of the Fréchet derivative at each new approximation yn. The process can converge quadratically to the reference solution, under the right conditions. 
Just as with Newton's method for nonlinear algebraic equations, however, difficulties may arise: for instance, the original nonlinear equation may have no solution, or more than one solution, or a multiple solution, in which cases the iteration may converge only very slowly, may not converge at all, or may converge instead to the wrong solution. The practical test of the meaning of the phrase "sufficiently close" earlier is precisely that the iteration converges to the correct solution. Just as in the case of Newton iteration, there are theorems stating conditions under which one can know ahead of time when the initial approximation is "sufficiently close". Contrast with discretizing first One could instead discretize the original nonlinear operator and generate a (typically large) set of nonlinear algebraic equations for the unknowns, and then use Newton's method proper on this system of equations. Generally speaking, the convergence behavior is similar: a similarly good initial approximation will produce similarly good approximate discrete solutions. However, the quasilinearization approach (linearizing the operator equation instead of the discretized equations) seems to be simpler to think about, and has allowed such techniques as adaptive spatial meshes to be used as the iteration proceeds. Example As an example to illustrate the process of quasilinearization, we can approximately solve a two-point boundary value problem for a nonlinear ODE, where the boundary conditions prescribe the values of the solution at the two endpoints of the interval. The exact solution of the differential equation can be expressed using the Weierstrass elliptic function ℘, written ℘(z | g2, g3), where the vertical bar notation means that g2 and g3 are the invariants. Finding the two free parameters of the solution so that the boundary conditions are satisfied requires solving two simultaneous nonlinear equations for those two unknowns. This can be done, in an environment where ℘ and its derivatives are available, for instance by Newton's method. 
Applying the technique of quasilinearization instead, one finds by taking the Fréchet derivative at an unknown approximation that the linear operator is If the initial approximation is identically on the interval , then the first iteration (at least) can be solved exactly, but is already somewhat complicated. A numerical solution instead, for instance by a Chebyshev spectral method using Chebyshev–Lobatto points for gives a solution with residual less than after three iterations; that is, is the exact solution to , where the maximum value of is less than 1 on the interval . This approximate solution (call it ) agrees with the exact solution with Other values of and give other continuous solutions to this nonlinear two-point boundary-value problem for ODE, such as The solution corresponding to these values plotted in the figure is called . Yet other values of the parameters can give discontinuous solutions because ℘ has a double pole at zero and so has a double pole at . Finding other continuous solutions by quasilinearization requires different initial approximations to the ones used here. The initial approximation approximates the exact solution and can be used to generate a sequence of approximations converging to . Both approximations are plotted in the accompanying figure. Notes See also Describing function References Further reading https://encyclopediaofmath.org/wiki/Quasi-linearization Differential equations
Quasilinearization
[ "Mathematics" ]
1,002
[ "Mathematical objects", "Differential equations", "Equations" ]
70,241,079
https://en.wikipedia.org/wiki/Principles%20of%20Mathematical%20Analysis
Principles of Mathematical Analysis, colloquially known as "PMA" or "Baby Rudin," is an undergraduate real analysis textbook written by Walter Rudin. Initially published by McGraw Hill in 1953, it is one of the most famous mathematics textbooks ever written. History As a C. L. E. Moore instructor, Rudin taught the real analysis course at MIT in the 1951–1952 academic year. After he commented to W. T. Martin, who served as a consulting editor for McGraw Hill, that there were no textbooks covering the course material in a satisfactory manner, Martin suggested Rudin write one himself. After completing an outline and a sample chapter, he received a contract from McGraw Hill. He completed the manuscript in the spring of 1952, and it was published the year after. Rudin noted that in writing his textbook, his purpose was "to present a beautiful area of mathematics in a well-organized readable way, concisely, efficiently, with complete and correct proofs. It was an aesthetic pleasure to work on it." The text was revised twice: first in 1964 (second edition) and then in 1976 (third edition). It has been translated into several languages, including Russian, Chinese, Spanish, French, German, Italian, Greek, Persian, Portuguese, and Polish. Contents Rudin's text was the first modern English text on classical real analysis, and its organization of topics has been frequently imitated. In Chapter 1, he constructs the real and complex numbers and outlines their properties. (In the third edition, the Dedekind cut construction is sent to an appendix for pedagogical reasons.) Chapter 2 discusses the topological properties of the real numbers as a metric space. The rest of the text covers topics such as continuous functions, differentiation, the Riemann–Stieltjes integral, sequences and series of functions (in particular uniform convergence), and outlines examples such as power series, the exponential and logarithmic functions, the fundamental theorem of algebra, and Fourier series. 
After this single-variable treatment, Rudin goes in detail about real analysis in more than one dimension, with discussion of the implicit and inverse function theorems, differential forms, the generalized Stokes theorem, and the Lebesgue integral. References External links Principles of Mathematical Analysis at McGraw-Hill Education Supplemental comments and exercises to Chapters 1-7 of Rudin, written by George Bergman Mathematical analysis Mathematics textbooks
Principles of Mathematical Analysis
[ "Mathematics" ]
493
[ "Mathematical analysis" ]
70,241,534
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20M23%205G
The Samsung Galaxy M23 5G is an Android-based smartphone designed, developed and marketed by Samsung Electronics. The phone was announced on March 4, 2022. Design The screen is made of Corning Gorilla Glass 5. The back panel and the side frame are made of matte plastic. From the back, the smartphone resembles the Samsung Galaxy M13. At the bottom are the USB-C connector, speaker, microphone and 3.5 mm audio jack. A second microphone is located on top. On the left side there is a slot for two SIM cards and a microSD memory card of up to 1 TB. On the right side are the volume control buttons and the lock button, which has a built-in fingerprint scanner. The Samsung Galaxy M23 5G is sold in three colors: Rose Gold, Green and Blue. Specifications Hardware Platform The device uses a Qualcomm Snapdragon 750G processor and an Adreno 619 GPU. Battery The battery has a capacity of 5000 mAh and supports 25 W fast charging. Camera The smartphone has a triple main camera: 50 MP (wide-angle) with phase-detection autofocus + 8 MP (ultra-wide-angle) with a 123° field of view + 2 MP (macro). The main camera can record video at up to 4K@30fps. The front camera has a resolution of 8 MP (wide-angle) and can record video at 1080p@30fps. Screen The display is a 6.6" TFT LCD with FullHD+ (2408 × 1080) resolution, a pixel density of 400 ppi, a 20:9 aspect ratio, a 120 Hz refresh rate and an Infinity-V (drop-shaped) cutout for the front camera. Memory The Samsung Galaxy M23 5G is sold in 4/64, 4/128 and 6/128 GB configurations. In Ukraine, the smartphone is officially sold only in the 4/64 and 4/128 GB configurations. Software The smartphone was released with One UI 4.1 based on Android 12 and was later updated to One UI 5.1 based on Android 13. References Samsung Galaxy Mobile phones introduced in 2022 Android (operating system) devices Samsung mobile phones Samsung smartphones Mobile phones with multiple rear cameras
Samsung Galaxy M23 5G
[ "Technology" ]
479
[ "Crossover devices", "Phablets" ]
70,242,327
https://en.wikipedia.org/wiki/Forest%20404
Forest 404 is a science fiction podcast written by Timothy X Atack and starring Pearl Mackie. The project was a collaboration among BBC Radio 4, the BBC Natural History Unit, the University of Bristol, the University of Exeter, and the Open University. The show is composed of nine narrative episodes each accompanied by a soundscape and discussion on the show's themes. The narrative of the show follows a data analyst from the 24th century who discovers recordings of the natural world and finds that the audio has a profound effect on its listener. The show received mostly positive reviews and in 2020 won both a WGGB award and an ARIAS award. The show also included an academic study led by Alex Smalley. Background The project was a collaboration among BBC Radio 4, the BBC Natural History Unit, the Open University, the University of Bristol, and the University of Exeter. The show was created in Bristol. The show was written by Timothy X Atack, produced and directed by Becky Ripley, with theme music by Bonobo, and sound design by Graham Wild. Timothy X Atack credited works by Ursula K. Le Guin, and his time spent in the BBC Archives of natural history sounds, as influences in the creation of Forest 404. The 27-part series is composed of nine narrative episodes each accompanied by a soundscape and a discussion on the themes. The soundscapes are roughly five minutes in length and use binaural recording to immerse the listener in the sounds of the natural world similar to forest bathing. The show was first released on BBC Sounds and later broadcast on BBC Radio 4, and was also made available as a box set. The show's title is a reference to the 404 not found error—the protagonist is literally searching for the forest and is unable to find it. Cast and characters Pearl Mackie as Pan Tanya Moodie as Daria Pippa Haywood as Theia Synopsis The story is set in the 24th century after a catastrophe where most of the world's digital information was lost. 
The protagonist of the story, a data analyst named Pan, is tasked with reviewing the remaining recordings that survived the catastrophe and deleting any unnecessary data. While going through the audio files, Pan encounters a recording of a rainforest from the 21st century. Having never seen a rainforest or even a tree, the recording intrigues her and she begins investigating. She discovers more incomprehensible recordings and learns that these sounds can be dangerous or even deadly to the listener. When Daria—Pan's boss and potential love interest—informs the authorities, they begin to track her down to stop the spread of what they consider a virus. Fleeing from the authorities, Pan finds a woman named Theia who is caring for the last living tree. Pan discovers that the recordings are of the natural world, which has since been destroyed by humans. Listening to the sounds causes some to go mad with the realization that humans were responsible for the destruction of nature. The story ends with Pan broadcasting the audio file titled "Forest 404" from a radio tower. Episodes Reception The plot and writing for the show received mixed reviews from critics. Writing in The Observer, Sean O'Hagan asserted that the show was "conceptually bumpy" and contained some "jarring moments" and plot contrivances that broke his suspension of disbelief. Whereas Torri Yearwood recommended the show in The Tech, calling the story "beautifully believable" and praising the series for its worldbuilding and character development. Commenting in Refinery29, Jazmin Kopotsha wrote that the show has a captivating story that pulls listeners into the series, however, the compelling protagonist is the driving force that keeps the listener engaged. The show's experimental format and companion episodes received an overwhelmingly positive response from critics. In the South China Morning Post, Suji Owen argued that the show's use of companion episodes deepened the themes and ideas throughout the series. 
Sam Fritz at the Mississippi Valley Conservancy remarked that the companion episodes allow the show to "transcend other mediums" and provide context for the plot while grounding the narrative in reality. Recommending the show on the Australian Broadcasting Corporation, Carl Smith praised the show for its experimentation with form and pushing the boundaries of podcasting. The show's sound design received a positive response from reviewers. Praising the show's use of binaural technology, Sarah Hemming expressed in the Financial Times that the "richly textured soundscape" was best appreciated with headphones. Similarly, Barry Didcock of The Herald recommended listening with high quality speakers and emphasized that he enjoyed the show's sound design. Writing on the website Stuff, Katy Atkin recommended the show, calling it "a masterpiece in sound design" and asserted that it intensified the story. Awards Academic outcomes Forest 404 also featured an embedded academic study, led by Alex Smalley at the University of Exeter. Designed to deepen understanding of people's responses to the sounds of nature, the study marked one of the largest natural soundscape experiments ever conducted, with 7,596 people taking part. Findings from this research were published in the peer-reviewed journal Global Environmental Change in May 2022. Outcomes demonstrated that soundscapes featuring the sounds of wildlife, such as bird song, were considered more psychologically restorative than those without. Participants who had memories triggered by these sounds were also more likely to find them psychologically restorative, and exhibited a greater motivation to preserve them—an outcome with implications for conservation efforts.
See also List of science fiction podcasts References External links 2019 podcast debuts 2019 podcast endings Audio podcasts BBC Radio 4 programmes BBC podcasts Science fiction podcasts British podcasts Scripted podcasts Binaural podcasts Thriller podcasts Monologue podcasts Mass media in Bristol ARIA Award winners LGBTQ-related podcasts
Forest 404
[ "Environmental_science" ]
1,179
[ "Environmental social science", "Environmental psychology" ]
70,242,527
https://en.wikipedia.org/wiki/Lunar%20Atmospheric%20Composition%20Experiment
The Lunar Atmospheric Composition Experiment (LACE) was a miniature magnetic deflection mass spectrometer (neutral mass spectrometer). The experiment's aim was to study the composition and variations of the lunar atmosphere. The only deployment of LACE was as part of the Apollo Lunar Surface Experiments Package (ALSEP) on Apollo 17 within the Taurus–Littrow valley. LACE was a follow-on to the Cold Cathode Gauges that were flown on Apollo 14 and Apollo 15. Those experiments proved the existence of a tenuous lunar atmosphere and determined the upper bounds on the lunar atmospheric density during the lunar day and night, but left its composition unknown. Instrument As gas molecules enter the experiment's aperture, they are ionised by electron bombardment. These gas ions are then collimated into a beam and passed through a magnetic analyser to the detector. The electron-ion sources consist of two filaments, composed of 99% tungsten and 1% rhenium. Multiple ion mass-ranges could be scanned simultaneously by varying the voltage across the electron-ion source. Each mass range had an independent system for counting ions. Each system consisted of an electron multiplier, pulse amplifier, discriminator and counter. The experiment could detect ions of 28 and 64 atomic mass units at the same time, enabling the simultaneous measurement of carbon monoxide and sulphur dioxide. LACE's instrument recording accuracy remained at 1% for all 21-bit counts. During calibration of the instrument, it was discovered that ion flux, hitting the detector at over , resulted in saturation of the counter. Deployment and operation LACE was deployed by the Apollo 17 astronauts on 12 December 1972, at roughly 05:00 UTC. The entrance aperture was deployed upwards to measure the downward flux of gases at the lunar surface. A nylon dust screen covered the upward-facing aperture to protect it during mission surface activities. 
This dust screen was pulled back by radio command after the crew had taken off and the seismic charges had been detonated. The instrument was turned on by ground command at 18:07 UTC, 27 December 1972, approximately 50 hours after the first sunset following deployment. At sunrise, it was found that heating of the experiment site and LACE's instruments resulted in high rates of outgassing. This resulted in a need to limit the operation of LACE during the day except for a brief check near noon. The persistent high daytime outgassing rates severely curtailed instrument operation throughout its history because of the fear that high background rates would degrade instrument sensitivity over time. Due to the operation of the ion filament, temperature increases resulted in unexpected evaporation of tungsten in the filament. As a result, as part of LACE's operation, the ion source would be disabled to enable the cooldown of the instrument. This would reduce internal outgassing and produce clean mass spectra. The benefit of this tungsten evaporation was that it enabled a constant check on instrument sensitivity, which remained stable. Results The experiment positively identified that the tenuous lunar atmosphere consisted of helium, neon and argon. Helium concentrations matched predictions that assumed most of the lunar helium was derived from the solar wind and that helium does not freeze on the lunar surface. Argon (36Ar and 40Ar) was detected. Since the increase in argon concentrations occurred just prior to dawn, it was shown that argon was likely a condensible gas. It was proposed that the argon freezes out and is adsorbed on the lunar surface at night. As night transitions into day, this frozen argon becomes mobile and migrates ahead of, and in tandem with, the sunrise terminator. This was colloquially referred to in the Apollo 17 preliminary science report as a "pre-dawn breeze".
Since the source of 40Ar was likely radioactive decay of potassium (40K), its presence detected by LACE provided evidence of a true native lunar gas. The total density of all the known gases detected by LACE matches that found by the Cold Cathode Gauges. Other species were identified including molecular hydrogen, chlorine, oxygen, hydrogen chloride, and carbon dioxide. Concentrations of these declined throughout the operation of the experiment and it is suspected these constituted instrument contaminants. This conclusion was reached due to the fact that, unlike argon, the detection of these contaminants rose sharply contemporaneously with the local sunrise, rather than leading it. Neon concentrations were 20 times lower than anticipated and the reason for this was not understood at the time. Instrument failure During LACE's tenth lunar month of operation, the experiment developed a problem with the instrument's high-voltage section. The sweep high voltage dropped to zero on 17 October 1973 at 17:32 UTC. The normal 2900 volt output had reduced to several hundred volts, and the instrument could no longer operate. Numerous corrective measures were attempted, but none were successful. References Mass spectrometry Apollo 17 Apollo program hardware
Lunar Atmospheric Composition Experiment
[ "Physics", "Chemistry" ]
1,015
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
70,245,162
https://en.wikipedia.org/wiki/Sierra%20Forest
Sierra Forest is the codename for sixth generation Xeon Scalable server processors designed by Intel, launched in June 2024. It is the first generation of Xeon processors to exclusively feature density-optimized E-cores. Sierra Forest processors are targeted towards cloud server customers with up to 288 Crestmont E-cores. Background On February 17, 2022, Intel announced that upcoming Xeon generations would be split into two tracks for those with P-cores exclusively and E-cores exclusively. These two tracks are intended to serve different market segments with P-core Xeon processors targeting high-performance computing while E-core Xeon processors target cloud customers who prioritize greater core density, energy efficiency and performance in heavily multi-threaded workloads over strong single-threaded usage. On March 29, 2023, Intel announced that Sierra Forest processors had powered on and displayed a processor running 144 E-cores, and announced a release timeline for H1 2024. On September 19, 2023, Intel announced at their Innovation event that a 288-core variant of Sierra Forest would be coming. On June 3, 2024, Intel released the Sierra Forest-SP line of SKUs, also known as the Xeon 6700E series. This product line included seven SKUs at launch, all using the LGA 4710 socket. The low-end SKU has 64 cores, and the high-end SKU has 144 cores. Branding During Intel's Vision event in April 2024, new branding for Xeon processors was unveiled. The Xeon Scalable branding that was introduced in 2017 would be retired in favor of a simplified "Xeon 6" brand for sixth generation Xeon processors. This change brings greater emphasis on processor generation numbers. The badge for the Xeon brand was changed to be more visually in line with the badge design used for Intel's Core Ultra processors since 2023. 
Architecture Sierra Forest uses only E-cores to achieve higher core counts in order to compete with AMD's Epyc server processors codenamed Bergamo, which feature up to 128 smaller Zen 4c cores. AMD's Zen 4c cores feature simultaneous multithreading (SMT), while the Crestmont E-cores featured in Sierra Forest processors support only one thread per core. The purpose of the Sierra Forest architecture design is to achieve ultra-high core counts for greater compute density that would benefit cloud and HPC server applications. Cloud service providers may not be as interested in HPC accelerators and instead prioritize greater ECU/vCPU integer and floating-point performance. Don Soltis is the principal engineer and chief architect for Xeon E-Core. Products Sierra Forest-SP Sierra Forest-SP (Scalable Performance) uses the Beechnut City platform with the smaller LGA 4710 socket, targeted towards mainstream servers. Sierra Forest-SP features up to 144 E-cores and eight-channel DDR5 memory support. TDPs up to 350 W are supported on the Beechnut City platform. Sierra Forest-AP Sierra Forest-AP uses the Avenue City platform with the larger LGA 7529 socket for higher core count SKUs of up to 288 cores. It supports a higher number of PCIe lanes and 12-channel DDR5 memory. See also Process–architecture–optimization model, by Intel Tick–tock model, by Intel List of Intel CPU microarchitectures References Intel products Intel microprocessors
Sierra Forest
[ "Technology" ]
702
[]
70,245,307
https://en.wikipedia.org/wiki/HD%2031975
HD 31975 (HR 1606) is a star situated in the southern circumpolar constellation Mensa. It has an apparent magnitude of 6.28, which is near the threshold of naked eye visibility. It is relatively close at a distance of about 106 light years but is receding with a heliocentric radial velocity of . HD 31975 has a stellar classification of F9 V Fe−0.5, indicating that it is an F-type main-sequence star with a mild underabundance of iron in its atmosphere. At present it has 120% the mass of the Sun and 146% the radius of the Sun. It shines at double the luminosity of the Sun from its photosphere at an effective temperature of 6,165 K, giving it a yellow-white glow. HD 31975 has a similar metallicity to the Sun and at an age of 3.5 billion years it spins slowly with a projected rotational velocity of . The Washington Double Star Catalog lists a faint M5 companion 16.5" away, which is related to the star. References F-type main-sequence stars Mensa (constellation) Mensae, 15 Durchmusterung objects 031975 1606 Double stars M-type main-sequence stars 022717
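As a rough consistency check on the figures above, the apparent magnitude and distance imply the star's absolute magnitude, and hence its luminosity relative to the Sun. The sketch below assumes a solar absolute visual magnitude of 4.83 and neglects extinction and bolometric corrections, so it is only an order-of-magnitude illustration rather than a published value.

```python
import math

def absolute_magnitude(apparent_mag, distance_pc):
    # Distance modulus: M = m - 5 * log10(d / 10 pc)
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

def luminosity_vs_sun(abs_mag, abs_mag_sun=4.83):
    # L / L_sun = 10^((M_sun - M) / 2.5), neglecting bolometric corrections
    return 10.0 ** ((abs_mag_sun - abs_mag) / 2.5)

distance_pc = 106.0 / 3.2616        # 106 light-years converted to parsecs
M = absolute_magnitude(6.28, distance_pc)
L = luminosity_vs_sun(M)            # roughly a few times the Sun's output
```

With the quoted values this gives an absolute magnitude near 3.7 and a visual luminosity of roughly 2–3 times the Sun's, broadly in line with the article's statement that the star is about twice as luminous as the Sun.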
HD 31975
[ "Astronomy" ]
261
[ "Mensa (constellation)", "Constellations" ]
60,883,993
https://en.wikipedia.org/wiki/Applications%20of%20dual%20quaternions%20to%202D%20geometry
In this article, certain applications of the dual quaternion algebra to 2D geometry are discussed. The article focuses on a four-dimensional subalgebra of the dual quaternions, which will be called the planar quaternions. The planar quaternions make up a four-dimensional algebra over the real numbers. Their primary application is in representing rigid body motions in 2D space. Unlike multiplication of dual numbers or of complex numbers, that of planar quaternions is non-commutative. Definition In this article, the set of planar quaternions is denoted . A general element of has the form where , , and are real numbers; is a dual number that squares to zero; and , , and are the standard basis elements of the quaternions. Multiplication is done in the same way as with the quaternions, but with the additional rule that is nilpotent of index , i.e., , which in some circumstances makes comparable to an infinitesimal number. It follows that the multiplicative inverses of planar quaternions are given by The set forms a basis of the vector space of planar quaternions, where the scalars are real numbers. The magnitude of a planar quaternion is defined to be For applications in computer graphics, the number is commonly represented as the 4-tuple . Matrix representation A planar quaternion has the following representation as a 2×2 complex matrix: It can also be represented as a 2×2 dual number matrix: The above two matrix representations are related to the Möbius transformations and Laguerre transformations respectively. Terminology The algebra discussed in this article is sometimes called the dual complex numbers. This may be a misleading name because it suggests that the algebra should take the form of either: The dual numbers, but with complex-number entries The complex numbers, but with dual-number entries An algebra meeting either description exists, and both descriptions are equivalent.
(This is due to the fact that the tensor product of algebras is commutative up to isomorphism). This algebra can be denoted as using ring quotienting. The resulting algebra has a commutative product and is not discussed any further. Representing rigid body motions Let be a unit-length planar quaternion, i.e. we must have that The Euclidean plane can be represented by the set . An element on represents the point on the Euclidean plane with Cartesian coordinate . can be made to act on by which maps onto some other point on . We have the following (multiple) polar forms for : When , the element can be written as which denotes a rotation of angle around the point . When , the element can be written as which denotes a translation by vector Geometric construction A principled construction of the planar quaternions can be found by first noticing that they are a subset of the dual-quaternions. There are two geometric interpretations of the dual-quaternions, both of which can be used to derive the action of the planar quaternions on the plane: As a way to represent rigid body motions in 3D space. The planar quaternions can then be seen to represent a subset of those rigid-body motions. This requires some familiarity with the way the dual quaternions act on Euclidean space. We will not describe this approach here as it is adequately done elsewhere. The dual quaternions can be understood as an "infinitesimal thickening" of the quaternions. Recall that the quaternions can be used to represent 3D spatial rotations, while the dual numbers can be used to represent "infinitesimals". Combining those features together allows for rotations to be varied infinitesimally. Let denote an infinitesimal plane lying on the unit sphere, equal to . Observe that is a subset of the sphere, in spite of being flat (this is thanks to the behaviour of dual number infinitesimals). Observe then that as a subset of the dual quaternions, the planar quaternions rotate the plane back onto itself. 
The effect this has on depends on the value of in : When , the axis of rotation points towards some point on , so that the points on experience a rotation around . When , the axis of rotation points away from the plane, with the angle of rotation being infinitesimal. In this case, the points on experience a translation. See also Eduard Study Quaternion Dual number Dual quaternion Clifford algebra Euclidean plane isometry Affine transformation Projective plane Homogeneous coordinates SLERP Conformal geometric algebra References Hypercomplex numbers Quaternions Euclidean plane geometry Euclidean symmetries Clifford algebras
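The rigid-motion action described in this article can be made concrete in code. The sketch below fixes one convenient convention, chosen here for illustration: a planar quaternion a + bi + c(εj) + d(εk) is stored as four reals, a point (x, y) of the plane is embedded as p = i + xεj + yεk, and a unit element q acts by the sandwich p ↦ q p q̄ with the ordinary quaternion conjugate. The embedding and sandwich convention vary between sources, so treat these particular choices as assumptions.

```python
import math

class PlanarQuaternion:
    """q = a + b*i + c*(eps*j) + d*(eps*k), where eps**2 = 0."""

    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __mul__(self, o):
        # Quaternion products (i*j = k, j*i = -k, ...) with every
        # eps*eps term dropped, since eps is nilpotent.
        a1, b1, c1, d1 = self.a, self.b, self.c, self.d
        a2, b2, c2, d2 = o.a, o.b, o.c, o.d
        return PlanarQuaternion(
            a1 * a2 - b1 * b2,                      # 1 component
            a1 * b2 + b1 * a2,                      # i component
            a1 * c2 + c1 * a2 - b1 * d2 + d1 * b2,  # eps*j component
            a1 * d2 + d1 * a2 + b1 * c2 - c1 * b2,  # eps*k component
        )

    def conj(self):
        # Quaternion conjugate: negate the i, eps*j and eps*k parts.
        return PlanarQuaternion(self.a, -self.b, -self.c, -self.d)

def rotation(theta):
    # Rotation by theta about the origin (half-angle, as with quaternions).
    return PlanarQuaternion(math.cos(theta / 2), math.sin(theta / 2), 0.0, 0.0)

def translation(tx, ty):
    # Translation by (tx, ty) under the sandwich convention used here.
    return PlanarQuaternion(1.0, 0.0, -ty / 2.0, tx / 2.0)

def act(q, x, y):
    # Embed the point as p = i + x*eps*j + y*eps*k and sandwich it.
    p = PlanarQuaternion(0.0, 1.0, x, y)
    r = q * p * q.conj()
    return r.c, r.d
```

Under this convention, composition of rigid motions corresponds to multiplication of the planar quaternions, mirroring the polar forms for rotations and translations described above.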
Applications of dual quaternions to 2D geometry
[ "Physics", "Mathematics" ]
984
[ "Functions and mappings", "Mathematical structures", "Euclidean symmetries", "Euclidean plane geometry", "Mathematical objects", "Algebraic structures", "Mathematical relations", "Hypercomplex numbers", "Planes (geometry)", "Numbers", "Symmetry" ]
60,885,357
https://en.wikipedia.org/wiki/Mutation%20accumulation%20theory
The mutation accumulation theory of aging was first proposed by Peter Medawar in 1952 as an evolutionary explanation for biological aging and the associated decline in fitness that accompanies it. Medawar used the term 'senescence' to refer to this process. The theory explains that, in the case where harmful mutations are only expressed later in life, when reproduction has ceased and future survival is increasingly unlikely, these mutations are likely to be unknowingly passed on to future generations. In this situation the force of natural selection will be weak, and so insufficient to consistently eliminate these mutations. Medawar posited that over time these mutations would accumulate due to genetic drift and lead to the evolution of what is now referred to as aging. Background and history Despite Charles Darwin's completion of his theory of biological evolution in the 19th century, the modern logical framework for evolutionary theories of aging would not emerge until almost a century later. Though August Weismann did propose his theory of programmed death, it was met with criticism and never gained mainstream attention. It was not until 1930 that Ronald Fisher first noted the conceptual insight which prompted the development of modern aging theories. This concept, namely that the force of natural selection on an individual decreases with age, was analysed further by J. B. S. Haldane, who suggested it as an explanation for the relatively high prevalence of Huntington's disease despite the autosomal dominant nature of the mutation. Specifically, as Huntington's only presents after the age of 30, the force of natural selection against it would have been relatively low in pre-modern societies.
It was based on the ideas of Fisher and Haldane that Peter Medawar was able to work out the first complete model explaining why aging occurs, which he presented in a lecture in 1951 and then published in 1952. Mechanism of action Amongst almost all populations, the likelihood that an individual will reproduce is related directly to their age. Starting at 0 at birth, the probability increases to its maximum in young adulthood once sexual maturity has been reached, before gradually decreasing with age. This decrease is caused by the increasing likelihood of death due to external pressures such as predation or illness, as well as the internal pressures inherent to organisms that experience senescence. In such cases deleterious mutations which are expressed early on are strongly selected against due to their major impact on the number of offspring produced by that individual. Mutations that present later in life, by contrast, are relatively unaffected by selective pressure, as their carriers have already passed on their genes, assuming they survive long enough for the mutation to be expressed at all. The result, as predicted by Medawar, is that deleterious late-life mutations will accumulate and result in the evolution of aging as it is known colloquially. This concept is portrayed graphically by Medawar through the concept of a "selection shadow". The shaded region represents the 'shadow' of time during which selective pressure has no effect. Mutations that are expressed within this selection shadow will remain as long as reproductive probability within that age range remains low. Evidence supporting the mutation accumulation theory Predation and delayed senescence In populations where extrinsic mortality is low, the drop in reproductive probability after maturity is less severe than in other cases. The mutation accumulation theory therefore predicts that such populations would evolve delayed senescence.
One such example of this scenario can be seen when comparing birds to organisms of equivalent size. It has been suggested that their ability to fly, and therefore lower relative risk of predation, is the cause of their longer than expected life span. The implication that flight, and therefore lower predation, increases lifespan is further borne out by the fact that bats live on average three times longer than similarly sized mammals with comparable metabolic rates. Providing further evidence, insect populations are known to experience very high rates of extrinsic mortality, and as such would be expected to experience rapid senescence and short life spans. The exception to this rule, however, is found in the longevity of eusocial insect queens. As expected when applying the mutation accumulation theory, established queens are at almost no risk of predation or other forms of extrinsic mortality, and consequently age far more slowly than others of their species. Age-specific reproductive success of Drosophila melanogaster In the interest of finding specific evidence for the mutation accumulation theory, separate from that which also supports the similar antagonistic pleiotropy hypothesis, an experiment was conducted involving the breeding of successive generations of Drosophila melanogaster. Genetic models predict that, in the case of mutation accumulation, elements of fitness, such as reproductive success and survival, will show age-related increases in dominance, homozygous genetic variance and additive variance. Inbreeding depression will also increase with age. This is because these variables are proportional to the equilibrium frequencies of deleterious alleles, which are expected to increase with age under mutation accumulation but not under the antagonistic pleiotropy hypothesis.
This was tested experimentally by measuring age-specific reproductive success in 100 different genotypes of Drosophila melanogaster, with findings ultimately supporting the mutation accumulation theory of aging. Criticisms of the mutation accumulation theory Under most assumptions, the mutation accumulation theory predicts that mortality rates will reach close to 100% shortly after reaching post-reproductive age. Experimental populations of Drosophila melanogaster, and other organisms, however, exhibit age-specific mortality rates that plateau well before reaching 100%, making mutation accumulation alone an insufficient explanation. It is suggested instead that mutation accumulation is only one factor among many, which together form the cause of aging. In particular, the mutation accumulation theory, the antagonistic pleiotropy hypothesis and the disposable soma theory of aging are all believed to contribute in some way to senescence. References Senescence Evolutionary biology Genetics
Mutation accumulation theory
[ "Chemistry", "Biology" ]
1,192
[ "Evolutionary biology", "Genetics", "Senescence", "Cellular processes", "Metabolism" ]
60,885,580
https://en.wikipedia.org/wiki/Effects%20of%20pornography%20on%20young%20people
The effects of pornography on young people are a topic of significant concern and ongoing research, as the topic encompasses a wide range of psychological, social, and behavioral impacts. As access to the internet has grown, so too has the exposure of young individuals to pornographic content, often before they are emotionally or cognitively prepared to process it. Adolescents turn to pornography for various reasons, including insufficient sex education, sexual arousal, as a coping mechanism, entertainment, alleviating boredom, and exploring their sexual and gender identities. Adolescents may also encounter content that disturbs them. Without alternative narratives, adolescents may develop harmful attitudes about women, sex, LGBTQ people, and people of color, as well as unrealistic expectations about sexual relationships. The use of pornography by adolescents is associated with certain sexual attitudes and behaviors, but causality remains unclear. The discourse around this subject is multifaceted, involving ethical, educational, and parental considerations, and continues to evolve with advancements in technology and changing societal norms. Definition and classification The definition of pornography in research varies, with different terms used, such as "X-rated" or "erotica", and some studies refrain from providing a specific definition. Background Gender stereotypical beliefs and permissive behaviors Gender stereotypical beliefs are understood as a belief that traditional, stereotypical ideas about male and female gender roles and gender relations dominate. These beliefs cover progressive attitudes towards gender roles, conceptions of women as sexual objects, gender stereotypical beliefs about power imbalance in sexual relationships, and beliefs about gender equality. 
Pornography consumption predicted stronger stereotype beliefs over time, but not acceptance of rape myths or gendered sexual roles in emerging adulthood, and is overall linked to less progressive sexual beliefs, though the associations are weak. Permissive sexual behaviors are understood as a positive attitude towards casual sex, often in non-binding situations outside of romantic relationships. Use of Internet pornography predicted permissive attitudes, and the use is associated with permissive sexual behaviors; however, the impact is generally low. It is, therefore, possible to speak of a relationship between more frequent pornography use and less strict (rather than more permissive) sexual attitudes. Demography When adolescents view pornography, it may be intentional (e.g., independent searching) or unintentional (e.g., advertising on the Internet or spam emails). The incidence of use ranges from 7% to 98%, depending on the study and the group studied. Methodological differences, technological changes, and cultural context have been cited as reasons for this difference. Male adolescents with autism viewed pornography less often than non-autistic adolescents (ASD 41% vs. non-autistic 76%) and/or masturbated less regularly with pornography (ASD 39% vs. non-autistic 76%). In contrast, no difference was found among female adolescents. Age of first use ranges from 6 to 19 years for heterosexual adolescents, with an average age of 11 years for boys and 12 years for girls. First use of pornography ranges from 6 to 17 years for LGBTQ adolescents. The frequency of use among LGBTQ youth in the literature is often contradictory, with some studies reporting higher frequency than heterosexual youth and others not. How many adolescents come into contact with violence in pornography is unclear; in one survey, about three percent of adolescents had consumed pornography containing violence. In another survey, this figure was 29% for boys and 16% for girls. 
In the U.S., the most common forms of pornography among urban, low-income, black, and Hispanic youth were depictions of heterosexual sex, and in rarer cases, more extreme forms of pornography, such as humiliation, bestiality, bondage, and bukkake. Some youth tended to overestimate their own ability to critically evaluate pornography and to ignore ethical concerns about the pornography industry. Girls are more often repulsed by pornography, viewing it as silly and disgusting and holding a negative attitude towards it, and some felt that performers were forced to perform certain acts. Men, by contrast, tend to be less critical but are reluctant to discuss the gender-specific effects of pornography. Figures from the Netherlands in 2023: young men watched porn in the previous six months at rates between 65% (13-15 years) and 96% (22-24 years), and young women between 22% (13-15 years) and 75% (22-24 years). Motivations Adolescents turn to internet pornography for various reasons, including: Curiosity and seeking information about sex and sexual organs, sex positions/roles, bodies and behaviors, how to behave, and how to masturbate and ejaculate. Pornography serves as a way to learn without the risks associated with actual sexual activity. Initially driven by curiosity about sex and pornography, adolescents later use pornography to understand sexual roles and expectations. It also provides a platform to study different sexual mechanisms and techniques of certain and new sexual acts. However, this is less of a reason to consume pornography, especially for frequent users. The usefulness of new information can predict how engaged individuals are with it. While seeking information ranks lower in frequency compared to arousal and pleasure, it remains more prevalent among males, but it is still unclear which exact subgroups of youth use pornography to learn about sex and sexuality. 
Adolescents feel that traditional sex education falls short in addressing their questions, making pornography a valuable source of information, because sex education was limited, focusing only on STDs, pregnancy risk, and heterosexual sex, or was skipped entirely. For adolescents, pornography had increased value because it provided information that was not present in sex education. Adolescents saw pornography as an unavoidable or necessary source of information. Suggestions about sex education include expanding sexual education to critically evaluate pornography, discussing consent, reducing the shame associated with viewing pornography, relationship management and negotiation skills, and how to satisfy oneself and one's partner. This should also address body image and sexual expectations, and prioritize physical and mental well-being, in terms of pleasure and sexual functioning. Adolescents emphasize the need for open and factual discussions about sex, both with adults and in small groups of trusted peers. Arousal and Amplification: a significant driving force behind pornography use, especially among boys, is its reinforcement of masturbation and the fulfillment of sexual desires. Adolescents sometimes use it as a substitute for intimacy after a breakup or when a partner is unavailable. Intimacy and Mate-seeking Motives: Young people reported discussing or viewing pornography with a romantic partner, often to increase sexual desire and satisfaction. However, not everyone sees shared consumption as normal. Some young women view it as a potential threat to the relationship and may not be comfortable integrating pornography into their partnerships, especially if pressured to use pornography. Shared consumption still tends to adhere to traditional gender roles, with young men more inclined towards it and young women focusing on factors like context, privacy and regulation. 
For some women, consuming pornography is only acceptable within a relationship, which can indirectly pressure them to manage their consumption outside of socially accepted contexts in order to protect their privacy and reduce stigma. Coping Mechanism: Apart from sexual arousal, a significant reason for using pornography is to cope with and alleviate negative emotions, helping to manage psychological distress, loneliness, and discomfort. A potential causal relationship between lower mood states and the use of pornography as a coping strategy has been suggested. Boredom and Entertainment: Boredom is a common trigger for engaging with pornography, as individuals often seek stimulating activities to alleviate this state. Seeking entertainment is another motivation that is more common among boys, and in male groups. Watching pornography with peers allows young individuals to gauge others' reactions, helping establish social norms around its consumption, as well as around specific behaviors, experiences, or bodies seen in pornography. In some cases, family members like fathers or cousins served as initial sources of exposure to pornography, driven by a desire to promote heterosexual behaviors and discourage same-sex activities. Sexual and Gender Identity: LGBTQ youth often use pornography to explore and affirm their sexual or gender identities, gravitating towards content that resonates with them. It serves as a crucial tool for validating their sexual orientation, especially for those who feel marginalized in mainstream narratives. Pornography also acts as a means to gauge their readiness to engage in LGBTQ activities. Initially, they found and used internet pornography as their primary source of information about LGBTQ activities, considering pornography as a kind of "guide" for sexual experiences; often, pornography was the only source on LGBTQ sexual activity. 
If these videos include educational content (e.g., contraception during sexual activity, sexual consent, mutual sexual pleasure), they could be particularly valuable. However, they express a willingness to seek out other sources if such information becomes more readily available online or is covered more extensively in schools or by parents. As LGBTQ youth become more informed about LGBTQ activities, their use of pornography aligns more with their peers. They also view pornography as a "safe space" for sexual exploration and expression, providing a sense of validation for sexual identities and feelings that may face stigma in mainstream culture, especially those of young women and young people with LGBTQ+ identities. Effects Addiction and individual distress Problematic pornography use (PPU), or pornography addiction, is understood as a pattern of pornography viewing which causes significant distress to the individual personally, relationally, socially, educationally, or occupationally. The prevalence of PPU among adolescents is under 5%. Frequent users of pornography are more likely to show symptoms of PPU. Higher levels of depressive symptomatology in adolescent boys, and sexual interest, predicted an increase in compulsive use of pornographic material over time. Baseline levels and subsequent growth in pornography use predicted higher levels of PPU, independent of religiosity, negative emotions, and impulsivity. Higher frequency of pornography use is associated with a higher probability of suffering from compulsive sexual behavior (CSB). LGBTQ adolescents are not more likely to develop PPU. Svedin et al. found that a moderate consumption of pornography is associated with good mental health in boys, while both extremes (too much or too little) were worse off. Watching deviant (non-mainstream) pornography was associated with worse mental health in boys, but girls were unaffected. 
Blurring with reality Adolescents generally view pornography neither as (socially) realistic nor as a useful source of sexual information compared to real-life experiences. However, more frequent consumption of pornography can lead to a perception of it as being "less unrealistic." Some find it to be a reliable source of information if useful content is present. They exhibit "porn literacy," showing critical thinking skills which teens say improve with age and experience. The differences between pornography and real sexual situations, according to the adolescents, were: messages about sex, the body, pleasure, and "risky" sexual acts; the lack of emotion; exaggerated appearance and performance; the long duration of sex; the speed of sex; sexual aggression; the roles of women and men in pornography; the inappropriate portrayals of marginalized identities; the loveless content; and the abstinence from condoms, all of which were described as unrealistic and misleading. Teens described the content as more show than real sex. This was also echoed by youth who have not seen pornography. Some youth were concerned that other consumers (but not themselves) might draw false lessons or unrealistic expectations from pornography (the third-person effect) and might experience physical harm from replicating pornography. Compared to adults The impact of pornography on adolescents versus adults is still unclear. Risky sexual behavior and certain gender stereotypes linked to pornography were observed in adults, but not in adolescents. Both groups showed a connection between pornography and permissive sexual attitudes. It is suggested that adolescents' brains might be more sensitive to explicit material, but due to a lack of research this question cannot be answered definitively. Guilt, shame, punishment Arab adolescents grapple with complex emotions regarding pornography. 
Some experience guilt and shame, struggling to reconcile the emotional and physiological benefits of pornography with their criticisms of the ethics of pornography and the lack of social acceptance of pornography and sexuality in general. Support mechanisms for discussing negative experiences with pornography are lacking. Many parents avoid conversations about pornography and sex, and adolescents fear punishment if caught, so adults are perceived as ambivalent or uncertain resources when adolescents have questions or curiosity about pornography. Peer discussions on the topic are also limited. Some adolescents were only able to discuss their concerns within the studies they participated in, because they would not have had the opportunity to do so before; these studies acted as interventions. Open communication and good relationships are seen as crucial in helping adolescents control their consumption. Better conversations about sex and pornography are believed to improve attitudes about sex, reduce stigma, prevent abuse, reduce the motivation to consume pornography, and show trust and respect to young people. Some adolescents believe that they have the skills to avoid unwanted pornographic content and to mitigate conflicting feelings and potential consequences that may result from viewing pornography; along with this, they described being able to avoid unwanted content and deal with their negative feelings. Without such discussions and other perspectives, adolescents feel that pornography leads to pressure to engage in certain sexual acts, lower self-esteem, mismatched expectations, disappointment in sexual experiences or unnecessary physical pain, and the normalization of violence, harassment, coercion, and assault. Some women experienced coercion and harassment. 
Positive effects Adolescent sexual self-exploration covers a range of factors including sexual insecurity, depression through pornography, self-objectification and the internalization of beauty ideals, body monitoring, adolescent self-image and body image, preoccupation with sexual issues, sexual dissatisfaction, sexual self-development, sexual arousal, and sexual experiences. Research suggests a connection between pornography use and these aspects, but definitive correlations have not been established. The use of pornography by young people has shown associations with reduced anxiety related to early sexual experiences, higher sexual satisfaction in both committed and casual relationships, and increased comfort in discussing sex. Some individuals find that viewing bodies in pornography, especially in amateur content, can boost self-esteem. Sexual behavior Adolescent pornography consumption predicted greater sexual engagement, greater sexual insecurity, and greater sexual dissatisfaction, and is linked to sexual intercourse (anal sex, oral sex, sexual encounters, sexual desire, earlier sexual initiation, sex with prostitutes/partners/friends), more experience with casual sex, and a higher likelihood of exercising or experiencing sexual aggression, especially among female adolescents. However, there isn't any evidence connecting frequent pornography consumption to a wider range of sexual practices. Meaningful evidence linking pornography and sexual risk behaviors is lacking. It's important to note that these findings are rough, incomplete approximations. On average, adolescents did not have frequent sexual intercourse. This means that porn use among adolescents is more likely to be related to a low frequency of these behaviors rather than their massive occurrence. The extent of sexual aggression and victimization varies. 
Pornography use is also associated with a higher likelihood of talking online about sex with strangers, and fantasizing about trying to copy sexual acts seen in pornography, with some adolescents mimicking what they see. No definitive conclusions can be drawn regarding unprotected or paid sex, but teenage pregnancy and sexually transmitted diseases have been associated with pornography use. Women as sexual objects A 2021 review which compiled evidence from other empirical sources such as surveys found that representations of women in pornography may lead adolescent boys to view women as sexual objects, with disregard and disrespect for gender equality. The review, however, does not claim to prove a causal relationship between consuming pornography and changed views of sexual objectification or gender inequality. Legal issues Legal definitions of pornography have evolved over time in different countries. In Austria, pornography is defined as self-contained depictions of sexual acts, distorted in a graphic manner and devoid of any external context of relationships in life. Similarly, in the United States, sexually explicit material is judged to be obscene if the average person, applying contemporary community standards, finds that the work as a whole appeals to prurient interest; the work depicts or describes sexual conduct in a patently offensive way; and the work, taken as a whole, lacks serious literary, artistic, political or scientific value. The ages at which it is legal to watch pornography differ by region: for example, in Indonesia it is completely forbidden to use pornography; in the EU minors are not allowed to access pornography; Austria's Pornography Act permits depictions that could stimulate lust or mislead sexual drive for those over 16 on a national level, but Austria's states variously forbid material harmful to youth, pornography, or depictions disregarding human dignity for those under 18; and Switzerland generally permits pornography for those over 16. 
Austria Efforts to protect youth from the effects of pornography date to the 18th century. In the 20th century, laws were passed regulating materials which "endanger the moral welfare of youth" as far back as 1929. This culminated in the 1950 Pornography Act, where the focus shifted from mere depictions of nudity to profiting from it, and was broadened to include stimulation of any sexual feeling at all; the Act is still the basis of Austrian pornography legislation. Social changes in the 1970s resulted in refinements defining which depictions were considered pornographic based on the standard of an "average person"'s reaction. Case law permitted certain materials to be sold in certain shops as long as the customer's age could be verified, and in 2000 a court permitted broadcasting after midnight as long as a warning message preceded the program. Research issues Methods and ethics Surveys are the main method for studying the effects of pornography on adolescents, due to legal and/or ethical constraints preventing experimental research. In these surveys, young individuals openly discuss their pornography use; for one study author, this indicated a "shift in the position (of pornography) as perverse, deviant, or shameful." The research is based on establishing correlations, which allows for making assumptions about causality but doesn't conclusively prove it (a correlation does not imply causality). This means that it is not possible to draw conclusions about whether the contexts are a consequence or a cause of viewing pornography. It could, for example, be that consuming pornography causes certain beliefs, that it is the other way around, or that multiple factors contribute to a particular belief. There's also the possibility that the observed correlation is coincidental. Venues Most studies come from affluent countries like the Netherlands and Sweden, making it challenging to generalize the findings to more sexually conservative nations. 
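The point above that correlation alone cannot establish causality can be illustrated with a toy simulation: a hidden confounder that drives two variables produces a sizeable correlation between them even though neither causes the other. The variable names and numbers below are invented purely for illustration and do not model any real study.

```python
# Toy confounding demo: Z influences both X and Y; X and Y have no direct
# causal link, yet they correlate. All values are synthetic.
import random

random.seed(0)
z = [random.gauss(0, 1) for _ in range(10000)]   # hidden confounder
x = [zi + random.gauss(0, 1) for zi in z]        # driven by Z, not by Y
y = [zi + random.gauss(0, 1) for zi in z]        # driven by Z, not by X

def corr(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va * vb) ** 0.5

print(round(corr(x, y), 2))  # substantial correlation despite zero direct causation
```

A survey observing only `x` and `y` could not distinguish this scenario from one where `x` causes `y`, which is exactly the limitation the correlational literature faces.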
Research on pornography often concentrates on potential negative effects, largely neglecting positive ones; this can be justified by theoretical considerations and by cultural concerns of the public. Public debates about adolescents' pornography use often oversimplify how it influences them, assuming that adolescents are uncritical consumers while adults are seen as more discerning. It's unclear which adolescents are most affected by these associations, and there's limited information about the impact on LGBTQ youth. Some behaviors linked to pornography, like casual sex, permissive attitudes, anal sex, or a larger number of partners, may have associated risks under certain circumstances, but they're not inherently harmful. Studies vary in their findings, making it uncertain whether research can definitively answer all questions about the impact of pornography on adolescents. Obstacles There are considerable ethical problems with performing some kinds of research on the effects of pornography use on minors. For example, Rory Reid (UCLA) declared, "Universities don't want their name on the front page of a newspaper for an unethical study exposing minors to porn." The PhD thesis of Marleen J.E. Katayama-Klaassen (2020), at the University of Amsterdam, found a low correlation between pornography and significant effects on youth, and could not show causality. Miranda Horvath, a researcher in a 2013 study regarding minors and pornography, also stated: "But it is not possible to establish causation from correlational studies, and to say whether pornography is changing or reinforcing attitudes." Validity Peter and Valkenburg's (2016) systematic review found the literature suggestive but not conclusive that the adolescent brain may be more sensitive to explicit material. Brown and Wisco's (2019) systematic review reached similar conclusions. 
Future goals An umbrella review stated on this aspect of the types of pornography teens use: "More research is needed on the types of pornography teens use, rather than relying on speculation and opinion. It should be assumed that adolescents are not passive "fools" or "victims" but are critical of social norms (such as the social expectation to disapprove of pornography) and depictions in pornography that are misogynistic, showing fetishization of lesbians, transgender people, and non-binary people which is only made for cisgender heterosexual men's pleasure which perpetuates male dominance and the oppression of women, is racist, homophobic, transphobic, or violent, non-consensual, lack love or intimacy, follow beauty ideals, show little neglected groups, and show superficial depictions that only refer to sexual acts and genitals." See also Child pornography Exploitation of women in mass media Feminist views on pornography Harmful to Minors Influence of mass media Internet addiction disorder Media and American adolescent sexuality Not in Front of the Children Pornography addiction Religious views on pornography Sexting Works cited Jochen Peter & Patti M. Valkenburg (2016) Adolescents and Pornography: A Review of 20 Years of Research In: The Journal of Sex Research, 53:4-5, 509–531, March (free full text) (PDF) Amy J. Peterson, Gillian K. Silver, Heather A. Bell, Stephanie A. Guinosso & Karin K. Coyle (2023) Young People's Views on Pornography and Their Sexual Development, Attitudes, and Behaviors: A Systematic Review and Synthesis of Qualitative Research, American Journal of Sexuality Education, 18:2, 171–209, Bőthe, B., Vaillancourt-Morel, MP., Bergeron, S. et al. Problematic and Non-Problematic Pornography Use Among LGBTQ Adolescents: a Systematic Literature Review. In: Curr Addict Rep 6, 478–494 (2019). 
(PDF) References Further reading Adolescence Adolescent sexuality Behavioral addiction Digital media use and mental health Internet culture Pornography Research on the effects of pornography Sex and the law Sexuality and computing Sexuality and society Sexuality Sexualization Youth
Effects of pornography on young people
[ "Technology" ]
4,569
[ "Computing and society", "Sexuality and computing" ]
60,885,710
https://en.wikipedia.org/wiki/%CE%92-Carbon%20elimination
β-Carbon elimination (beta-carbon elimination) is a type of reaction in organometallic chemistry wherein an allyl ligand bonded to a metal center is broken into the corresponding metal-bonded alkyl (aryl) ligand and an alkene. It is a subgroup of elimination reactions. Though less common and less understood than β-hydride elimination, it is an important step involved in some olefin polymerization processes and transition-metal-catalyzed organic reactions. Overview Like β-hydride elimination, β-carbon elimination requires the metal to have an open coordination site cis to the alkyl group for this reaction to occur. β-carbon elimination is usually less favored than hydride elimination because the metal–hydride bond is stronger than the metal–carbon bond for most metals in catalytic reactions. The principles governing β-alkyl elimination are not well-established experimentally. One reason for this is that breaking C−C bonds in the presence of other reactive C−H bonds is a rare event, and systems designed to interrogate the reaction are more difficult to devise. β-alkyl elimination β-alkyl elimination is the most common and useful type among all β-carbon elimination reactions. Classification/Driving force β-alkyl elimination with early transition metal complexes In terms of thermodynamics, more electron-deficient metal centers increase the likelihood of β-alkyl elimination. For example, β-alkyl elimination is more favorable than β-hydride elimination when it is bonded to electron-deficient early transition metals (Hf, Ti, Zr, Nb, etc.) with d0 configuration. Computational studies show a thermodynamic preference for β-Me elimination over β-H elimination in these complexes due to additional stability for the metal–alkyl species. 
The origin of the additional bonding interaction comes from an orbital centered on the CH3 weakly π-donating to the LUMO of the d0 metal center, which is analogous to the hyperconjugation effect (see figure on the right), thus increasing the stability of M−CH3 over M−H species. Their calculations predict that a more electrophilic metal ion enhances the −CH3 π-donation, which consequently increases the stability of M−CH3 over M−H species. Conversely, a more electron-rich metal ion will favor M−H formation (for example, using the more electron-donating Cp* ligand in Cp*2MX2). In terms of kinetics, steric effects of ligands can play a role in increasing the energy barrier of β-H elimination relative to β-alkyl elimination, specifically when the ligand is Cp*. A model was proposed to illustrate this effect: in both β-methyl elimination (A) and β-hydride elimination (B), the transferring group aligns perpendicular to the Cp*(centroid)−Zr−Cp*(centroid) plane, allowing the σC−C or σC−H bond to overlap with the metal d-orbital. However, to achieve the prerequisite geometry for β-H elimination (B), the adjacent methyl group experiences a significant steric repulsion from the Cp* ligand, thereby elevating the barrier to hydride transfer. By contrast, transition state A for β-Me elimination experiences less steric interaction with the Cp* ligand. β-alkyl elimination with middle and late transition metal complexes In middle and late transition metal complexes, there is a larger thermodynamic preference for β-H elimination over β-alkyl elimination, where the difference is usually >15 kcal/mol. Examples involving middle and late transition metal complexes either lack β-hydrogens or use ring strain relief and aromaticity as driving forces to favor β-alkyl elimination over β-hydride elimination. 
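To give a sense of scale for the ">15 kcal/mol" preference mentioned above, a Boltzmann factor converts an energy difference between two competing pathways into a population (or, via transition-state theory, rate) ratio. The sketch below is a generic textbook calculation, not taken from any specific study of these complexes.

```python
# Boltzmann ratio between two pathways separated by an energy gap ddG (kcal/mol)
# at temperature T (K): exp(ddG / RT). Illustrates why a >15 kcal/mol preference
# makes the disfavored pathway essentially unobservable at room temperature.
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def pathway_ratio(ddg_kcal, temp_k=298.0):
    """How strongly the lower-energy pathway is favored over the higher one."""
    return math.exp(ddg_kcal / (R * temp_k))

print(f"{pathway_ratio(15):.1e}")  # roughly a 1e11-fold preference at 298 K
```

By the same arithmetic, the much smaller energy differences in d0 early-metal systems leave both eliminations competitive, consistent with the discussion of those complexes above.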
Applications Ring-opening polymerization (ROP) Ring-opening polymerization that involves β-alkyl elimination can be catalyzed by Ti-, Zr-, and Pd-based catalysts, and some lanthanide-based metallocene catalysts, with polymerization patterns varying by catalyst. Examples of copolymerization with alkene or carbon monoxide were also reported. The key step of this kind of ROP is strain-driven β-alkyl elimination, which provides linear polymer with unsaturation in the polymer chain. Organic synthesis There is an enormous number of catalytic processes involving β-alkyl elimination that are synthetically useful. β-alkyl elimination in this case, however, is often considered an alternative route to C–C bond cleavage, while oxidative addition is the direct route. One example is β-alkyl elimination of tert-alcoholates, which can be generated either from addition of an organometallic reagent or from ligand exchange. The resulting organometallic species can undergo various downstream reactions (reductive elimination, carbonyl insertion, etc.) to generate useful building blocks. In addition to ring strain, aromaticity-driven β-Me elimination can be effectively employed to dealkylate steroid derivatives and some other cyclohexyl compounds. β-aryl elimination β-aryl elimination is much less common and less well understood than β-alkyl elimination. Examples are reported to occur from metal alkoxide and amido complexes. A theoretical study showed that these reactions are driven by the resulting extensively conjugated system. A recent example of catalytic β-aryl elimination, which leads to enantioselective synthesis of biaryl atropisomers, is driven by release of distorted ring strain. References Organometallic chemistry Chemical reactions
Β-Carbon elimination
[ "Chemistry" ]
1,192
[ "Organometallic chemistry", "nan" ]
60,885,782
https://en.wikipedia.org/wiki/Jiaodaluo
Jiaodaluo or Foot Treadle Flour Sifter () was a foot-operated pedal implement used to sift flour in China. The foot treadle flour sifter had long been in use there; an illustration of the machine is depicted in Song Yingxing's encyclopedia Tiangong Kaiwu, written in 1637. References Agricultural machinery Chinese inventions Material-handling equipment Solid-solid separation
Jiaodaluo
[ "Chemistry", "Engineering" ]
84
[ "Solid-solid separation", "Mechanical engineering stubs", "Mechanical engineering", "Separation processes by phases" ]
60,887,222
https://en.wikipedia.org/wiki/Heptyne
Heptynes are alkynes with one triple bond and the molecular formula C7H12. The isomers are: 1-Heptyne 2-Heptyne 3-Heptyne Alkynes
Heptyne
[ "Chemistry" ]
46
[ "Organic compounds", "Alkynes" ]
60,887,261
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Note%2010
The Samsung Galaxy Note 10 (stylized as Samsung Galaxy Note10) is a line of Android-based phablets developed, produced, and marketed by Samsung Electronics as part of the Samsung Galaxy Note series. They were unveiled on 7 August 2019, as the successors to the Samsung Galaxy Note 9. Details about the phablets were widely leaked in the months leading up to their announcement. In 2020, a midrange variant, the Galaxy Note 10 Lite, was introduced with lesser specifications and features. Specifications Hardware Displays The Galaxy Note 10 line comprises two models with various hardware specifications: the Note 10 and Note 10 5G feature 6.3-inch 1080p "Dynamic AMOLED" displays, while the Note 10+ and Note 10+ 5G feature 6.8-inch 1440p panels, both with HDR10+ support and "dynamic tone mapping" technology. The displays have curved sides that slope over the horizontal edges of the device and a 19:9 aspect ratio. The front-facing cameras occupy a rounded cut-out at the top of the display, and all models utilize an ultrasonic in-screen fingerprint reader. Storage and chipsets International models of the Note 10 utilize the Exynos 9825 system-on-chip, while the U.S., South American (except Brazil) and Chinese models utilize the Qualcomm Snapdragon 855. All models are sold with 256 GB of internal Universal Flash Storage 3.0, with the Note 10+ and Note 10+ 5G also being sold in a 512 GB model and offering expandable storage via a microSD card. Batteries They respectively contain non-user-replaceable 3500 and 4300 mAh lithium-ion batteries. Both variants support 25 watt Super Fast Charging, while the Note 10+ also supports 45 watt Super Fast Charging 2.0, Qi inductive charging, and the ability to charge other Qi-compatible devices from its battery power. The device is compatible with USB PD 3.0.
Exterior The Note 10 and Note 10+ are the first mainstream Samsung smartphones to omit the 3.5 mm headphone jack, which earned Samsung criticism because it had mocked the iPhone 7's lack of a headphone jack at the Galaxy Note 7 UNPACKED keynote in August 2016; Samsung said it used the extra space for a larger battery. The sleep/wake power button that used to be on the right side of the phone has been removed and consolidated with the Bixby button on the left side of the phone. New settings allow the button to be remapped as either a power button or a Bixby button. This is also the second time Samsung removed the heart rate sensor, after the Galaxy S10e and S10 5G, because it was rarely used. For the first time in Samsung's devices since the original Galaxy S (2010), the camera has been placed in the corner, similar to the iPhone X/XS/XR/11 series. In January 2020, the Note 10 Lite was released. It is a midrange variant of the Note 10, containing the same cameras as the main variant. It features 128 GB of storage, a 6.7-inch 1080p "Super AMOLED" screen on a metallic frame, a 4,500 mAh battery, and is powered by the Exynos 9810. The variant eliminates the wireless charging feature and stereo speakers, though it retains the 25 watt Super Fast Charging of the main series, and also has a headphone jack. However, the Note 10 Lite lacks a barometer sensor, which had been present on Samsung Galaxy flagships since 2012. Cameras The Note 10 series features a multi-lens rear-facing camera setup with Samsung's Scene Optimizer technology. It houses a dual-aperture 12-megapixel wide-angle lens, a 12-megapixel telephoto lens and a 16-megapixel ultra-wide-angle lens, with the Note 10+ and Note 10+ 5G having an additional VGA Depth Vision Camera allowing for 3D AR mapping. The front-facing camera on all models consists of a 10-megapixel punch-hole lens in the top center of the display.
The camera software includes a new "Shot Suggestion" feature to assist users, "Artistic Live Filters", as well as the ability to post directly to Instagram posts and stories. It also contains the "Scene Optimizer" feature from previous Samsung phones that automatically adjusts the camera settings based on different scenes. Both sets of cameras support 4K/60 FPS video recording and HDR10+ with more advanced video stabilization. There is also Live Focus Video, enabling users to capture bokeh backgrounds in video, much like with Portrait Mode. S-Pen The S-Pen has also undergone notable changes compared to the Note 9. The pen is one piece of plastic, instead of two as on the Note 9, and supports more advanced Air Actions that allow users to control the phablet remotely with the pen, including changing the camera settings and exporting handwritten text to Microsoft Word. The S-Pen also comes with additional replacement tips in the box. Software The Note 10 range ships with Android 9 "Pie" with Samsung's One UI skin. A main design element of One UI is the intentional repositioning of key user interface elements in stock apps to improve usability on large screens. Many apps include large headers that push the beginning of content towards the center of the display, while navigation controls and other prompts are often displayed near the bottom of the display instead. In March 2020, the phones received an upgrade to Android 10, bringing with it Single Take mode from the Samsung Galaxy S20 line as well as the ability to record 4K/60fps video with the selfie camera.
Gallery See also Samsung Galaxy S10 Samsung Galaxy Fold Samsung Galaxy Note series References External links Official website Samsung Galaxy Note 10/10+ user manual (PDF) Mobile phones introduced in 2019 Samsung Galaxy 10 Android (operating system) devices Samsung smartphones Mobile phones with multiple rear cameras Mobile phones with stylus Mobile phones with 4K video recording Discontinued flagship smartphones Discontinued Samsung Galaxy smartphones
Samsung Galaxy Note 10
[ "Technology" ]
1,288
[ "Discontinued flagship smartphones", "Flagship smartphones" ]
60,888,520
https://en.wikipedia.org/wiki/Single-use%20medical%20devices
Single-use medical devices include any medical equipment, instrument or apparatus that can be used only once in a hospital or clinic before being disposed of. The Food and Drug Administration defines such a device as one designated by its manufacturer as intended for use on a single patient during a single procedure. It is not reusable, has a short lifespan, and is limited to one patient. There are countless types of single-use medical devices, ranging from external items such as plastic gumboots, gloves and bandages merely used to assist a patient, to more complex and internal devices, including sharp blades, needles and tubes. These devices are single-use for multiple reasons, but mainly because they have come into contact with radioactivity, blood, infection, disease or human tissue, and must therefore be discarded. Each country has its own strict legislation regarding medical waste and the reprocessing of medical devices in hospitals and clinics. Reasons for single-use only There are multiple reasons for a single-use device to be disposed of after use, which include: Design features The device may be manufactured in a way that makes it impossible to properly sterilise, decontaminate and disinfect, which could be harmful if reused and cause cross-contamination. Endotoxin reaction and chemical burns or sensitisation Small amounts of excess bacteria may remain even after sterilising, which could trigger reactions and be hazardous. The device could also absorb chemical residue from disinfectant agents. Patient safety The device might not reach its intended level of functionality after being reused or remanufactured, as its material could be weakened and become impractical. Different devices Single-use devices stretch over a large area of the medical industry. Different devices are used in every region of the world and also every area of the hospital.
First world countries have access to a larger range of devices than third world countries, which may struggle even to obtain medicinal products. Examples include: "Hypodermic needles, syringes, applicators, bandages and wraps, drug tests, exam gowns, face masks, gloves, suction catheters, and surgical sponges." Some examples of single-use devices that can be reprocessed are ventilator circuits, biopsy forceps, blades and drill bits, vaginal speculums, breast pump kits, clamps and ET tubes. Legislation Each country has its own strict legislation on single-use medical devices. These laws share similar overall principles that put a patient's health and safety first, with clear emphasis on sterilisation. In Australia the following legislation applies to medical devices, including single-use medical devices. "Medical devices are defined as follows by the Therapeutic Goods Act 1989: a. any instrument, apparatus, appliance, material or other article (whether used alone or in combination, and including the software necessary for its proper application) intended, by the person under whose name it is or is to be supplied, to be used for human beings for the purpose of one or more of the following: i. diagnosis, prevention, monitoring, treatment or alleviation of disease; ii. diagnosis, monitoring, treatment, alleviation of, or compensation for, an injury or handicap; iii. investigation, replacement or modification of the anatomy or of a physiological process; iv. control of conception; and that does not achieve its principal intended action in or on the human body by pharmacological, immunological or metabolic means, but that may be assisted in its function by such means; or b. an accessory to such an instrument, apparatus, appliance, material or other article.
The Therapeutic Goods (Medical Devices) 2007 Regulations require a healthcare facility that reprocesses single-use devices to be licensed as a manufacturer. A healthcare facility that reprocesses single-use devices would be considered a manufacturer under the Act and thus would be required to conform to the regulation and be subject to audit to ensure compliance." Environmental concern Production The production element of single-use devices is straightforward. Large manufacturing companies, such as Elcam Medical in Israel, produce these medical devices and ship them worldwide to hospitals, clinics and academic centres. The different processes, such as planning, building, producing, packaging and shipping, all happen in this step and are done by the manufacturers or "third-party" companies; the product is received by the consumers only after all aspects of it are in perfect condition. The consumers (hospitals, clinics etc.) do not take part in the production process, nor in the disposal or reproduction process; this is all done by third-party organisations. Medical waste Plastics have been the main material used in single-use devices since the 1960s, whereas raw materials such as glass, rubber, metal and woven textiles were in use before. The modern production of polyvinyls, polycarbonates and polystyrenes has substituted these previously used materials and has dominated the disposable healthcare market ever since. The main reason for this shift towards plastics was economic, as they were cheaper and more efficiently manufactured. This drastic change in materials used in the healthcare industry supported the increasing need for medical procedures for a growing global population, resulting in fundamental changes to the legislation and production processes that governed medical device manufacturing, use and disposal.
The single-use medical device phenomenon is relatively recent: earlier medical products would undergo sterilisation and disinfection onsite and be reused, but following the substitution of petroleum-based plastics, devices came to be received, used and then disposed of, which has increased the quantity of medical waste enormously worldwide over the past decades. The production of SUDs has set trends universally in the medical industry, making it difficult to rely on any other source of device. "In a study analysing the environmental impact of seven single-use medical devices undergoing reprocessing, all had some form of polyethylene in their contents. Total polyethylene weight ranged anywhere from 7% to 88% of total weight for individual devices and made up 52% of total weight for the combined average of the seven devices." Reprocessing SUDs (single-use devices) History The reuse and reprocessing of SUDs has been practised by hospitals for around four decades, since the late 1970s, for two specific benefits: environmental and economic. Glass and metal were mainly used before this period and were heavily sanitised before reuse in another procedure, but with the increasing use of new plastic materials and market demand for SUDs, reprocessing soon followed. Most SUDs, such as needles, syringes and bandages that are in direct contact with human flesh or blood, are indeed truly single-use only, but more complex SUDs, such as pacemakers commonly used in surgical procedures, are often reprocessed as an economic benefit for hospitals. Many devices that have been categorised as single-use by their manufacturers are now reprocessed by third parties for reuse. All original manufacturers of these devices try to spread the word about the potential dangers of infection, failure and harm. Hospitals also reprocess SUDs themselves. "In 2000 a thriving third-party reprocessing industry has emerged in North America and Europe.
Only about 2%–3% of all devices can be safely reprocessed." The global income of third-party SUD reprocessing companies is estimated at "$1.054 billion." Lars Thording, vice president of marketing and public affairs at the reprocessing company Innovative Health, states, "Some devices cannot be used more than once due to material degradation, technical limitations and patient safety. This is why we have the 'single-use' designation, and it is validly used to ensure patient safety and patient care efficacy." Some companies may add the single-use label to increase sales. "However, a small amount of single-use devices can be re-used, after going through stringent and controlled procedures. It is very possible that original manufacturers in some cases apply the single-use label to increase sales and ensure obsolescence." Risks In many developing countries the reuse and reprocessing of SUDs happens simply because of cost constraints and the immediate need for these medical devices, but it is potentially risky, as sterilisation standards may not be up to date and could pose a hazard for patients. A study done in African countries reports that 15% to 60% of clinics reuse immunisation needles and syringes without proper disinfection, resulting in increasingly large numbers of unsterilised injections. 55% of north-western China's health care workers reported having reused SUDs, and an estimated 135 to 3,120 per 100,000 children in China are believed to have contracted hepatitis B infections through unsafe vaccination practices. Ethics and legalities A national survey of acute care facilities in Canada was performed by the Canadian Agency for Drugs and Technologies in Health (CADTH) in 2008, establishing that 28% of responding hospitals reprocess SUDs, with a larger share, 42%, among bigger hospitals and academic centres.
They found that of the hospitals surveyed, 85% performed reprocessing in-house, and 40% did not have a written policy approving their practice. With the development of policies, legal issues, risk awareness and standards having to be met, many hospitals have relied heavily on third-party reprocessing companies, which specialise in reprocessing, making it more convenient and accessible. This process involves shipping contaminated SUDs to the reprocessors, who sterilise and disinfect them before shipping them back; in many cases the hospitals receive devices other than their own. The most common ethical issue in the reprocessing of SUDs is patient consent. A hospital carries the responsibility the moment it adopts a reuse policy. Whether to seek consent by informing a patient that a reused device is being used, which could trigger unnecessary uncertainty, or not to require consent, on the grounds that a hospital should only adopt reuse policies that ensure complete patient safety, is an ongoing discussion in the industry. Hospitals not seeking consent could be accused of 'hidden rationing', disregarding a patient's autonomy and putting them at risk if something were to go wrong, as the likelihood of a device malfunctioning increases with every reuse. Economically, not using a product more than once when it is certainly capable of reuse could also be viewed as unethical, as manufacturers often label these devices as single-use, arguably to increase sales and revenue through hospitals constantly bulk-buying, instead of focusing on patient safety as a priority.
The primary goal of the ethical reprocessing of SUDs is to protect public health, meaning the patient's health is put first and the reprocessing of the devices is done ethically, cost-efficiently and safely, with the outcome that the reused SUD can be considered as effective as a brand new product with the least amount of risk. Manufacturing companies There are many manufacturing companies that produce and reprocess single-use medical devices safely and efficiently. Elcam Medical A world-class producer of disposable medical devices and components for the OEM market, and a provider of innovative solutions for specialised flow control needs. Cadence Inc. A single-use medical device manufacturer catering to the OEM market. Their headquarters are in Staunton, Virginia. Sterling Industries Sterling Industries is a medical device contract manufacturer that assists medical device OEMs and scale-up companies with the production of their single-use medical devices. Reprocessing companies Innovative Health A reprocessing company which specialises in the safe remanufacturing of SUDs. Ascent Healthcare Solutions A multi-million-dollar company formed by the 2005 merger of two corporations, Vanguard Medical Concepts and Alliance Medical Corporation. Ascent has reprocessing facilities in two locations, Phoenix, Arizona and Lakeland, Florida. Its devices are transported and delivered across various states in North America, serving 1,800 hospitals and purchasing organisations. The company specialises in and offers devices for the cardiovascular, orthopaedics, gastroenterology, and general surgery fields, complying with the FDA's 510(k) and Quality System Regulation requirements. Its staff includes more than 900 employees. ReNu Medical The 100% green FDA-registered medical reprocessing company was founded in 2000 in Everett, Washington.
They focus on supply chain savings and waste elimination, providing instant solutions to rising healthcare prices. They specialise in DVT garments, pulse oximeter probes and many other SUDs, supplied to hospitals and clinics nationwide. References Medical devices
Single-use medical devices
[ "Biology" ]
2,678
[ "Medical devices", "Medical technology" ]
60,888,751
https://en.wikipedia.org/wiki/Myers%20allene%20synthesis
In organic chemistry, the Myers allene synthesis is a chemical reaction that converts a propargyl alcohol into an allene by way of an arenesulfonylhydrazine as a key intermediate. This name reaction is one of two discovered by Andrew Myers that are named after him; both this reaction and the Myers deoxygenation reaction involve the same type of intermediate. The reaction is a three-step process in which the alcohol first undergoes a Mitsunobu reaction with an arenesulfonylhydrazine in the presence of triphenylphosphine and diethyl azodicarboxylate. Unlike hydrazone-synthesis reactions, this reaction occurs on the same nitrogen of the hydrazine that has the arenesulfonyl substituent. Upon warming, this product undergoes an elimination of arylsulfinic acid to give an unstable diazene as a reactive intermediate. The diazene extrudes N2 to give isolated allene product. The authors describe this last step as a [3,3]-sigmatropic reaction in the original report but call it a retro-ene reaction in another publication. (Note: The IUPAC defines a sigmatropic rearrangement as a pericyclic reaction involving both breaking and formation of a new σ bond in which the total number of π and σ bonds do not change, whereas a retro-ene reaction involves the fragmentation of a molecule to a fragment with a double bond with allylic hydrogen (the 'ene') and a multiple-bonded species (the 'enophile') via a cyclic transition state. In this case, the reaction occurs with the net gain of a π bond and loss of a σ bond, so strictly speaking, only the description of the reaction as a retro-ene reaction is apt.) Both the first step (Mitsunobu reaction) and third step (sigmatropic reaction) are stereospecific, so the chirality of the propargyl alcohol controls the chirality of the resulting allene. 
The use of ortho-nitrobenzenesulfonylhydrazine gives reactants and intermediates with appropriate relative stability to enable the whole process to be performed as a one-pot reaction, though the order in which the reagents are mixed is important. Mechanistic studies suggest that the diazene is formed as mixture of cis and trans isomers that easily interconvert, and that the cis is what reacts most readily to form the allene. References Rearrangement reactions Name reactions
Myers allene synthesis
[ "Chemistry" ]
533
[ "Name reactions", "Rearrangement reactions", "Organic reactions" ]
60,891,797
https://en.wikipedia.org/wiki/NGC%203928
NGC 3928, also known as the Miniature Spiral, is a lenticular galaxy, sometimes classified as a dwarf spiral galaxy, in the constellation Ursa Major. It was discovered by William Herschel on March 9, 1788. Gallery References External links Ursa Major Lenticular galaxies 3928 037136 Markarian galaxies
NGC 3928
[ "Astronomy" ]
66
[ "Ursa Major", "Constellations" ]
60,891,845
https://en.wikipedia.org/wiki/Pisces%E2%80%93Eridanus%20stellar%20stream
Pisces–Eridanus stellar stream is a nearby stellar stream, between 80 and 226 parsecs away and stretching 120° across the sky: an open cluster that was stretched apart by past gravitational interactions. By analysis of its highest-mass member stars, it is estimated to be only 120 million years old, a similar age to the Pleiades. According to a 2020 study, this stellar stream contains about 1400 stars moving together and has a mass of 770 M☉. Stars The stream includes at least 6 naked-eye stars: Lambda Tauri at 3.5 magnitude, 148 parsec distance; Omicron Aquarii at 4.7 magnitude, 134 parsec distance; 106 Aquarii at 5.2 magnitude, 114 parsec distance; 108 Aquarii at 5.2 magnitude, 98 parsec distance; Tau1 Aquarii at 5.7 magnitude, 97 parsec distance; and Nu Fornacis at 4.7 magnitude and 114 parsec distance. See also List of stellar streams References Extended stellar systems in the solar neighborhood II. Discovery of a nearby 120° stellar stream in Gaia DR2. Stefan Meingast, João Alves, and Verena Fürnkranz. 17 January 2019 Image of the Week: A river of stars Stellar streams Stellar associations
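The star list above quotes apparent magnitudes together with distances in parsecs. As an illustration (not part of the article), the standard distance-modulus relation M = m − 5·log10(d / 10 pc) converts these to absolute magnitudes; the function name below is our own choice, and the input values are the figures quoted in the list.

```python
import math

def absolute_magnitude(apparent_mag: float, distance_pc: float) -> float:
    """Absolute magnitude via the distance modulus: M = m - 5*log10(d / 10 pc)."""
    return apparent_mag - 5 * math.log10(distance_pc / 10)

# Lambda Tauri: m = 3.5 at 148 pc (values from the list above)
print(round(absolute_magnitude(3.5, 148), 2))  # -2.35
```

Any star farther than 10 pc, as all of these are, has an absolute magnitude brighter (numerically smaller) than its apparent one.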
Pisces–Eridanus stellar stream
[ "Astronomy" ]
272
[ "Galaxy stubs", "Astronomy stubs" ]
60,892,603
https://en.wikipedia.org/wiki/Hard%20hadronic%20reaction
Hard hadronic reactions are hadron reactions in which the main role is played by quarks and gluons and which are well described by perturbation theory in QCD. All hadrons discovered so far fit into the standard picture, in which they are colorless composite particles built from quarks and antiquarks. The characteristic energies associated with this internal quark structure (that is, the characteristic binding energies in potential models) are of the order of 1 GeV. There is a natural classification of hadron collision processes: if the momentum transfer is significantly less than this scale, then the dynamics of the internal degrees of freedom of the hadrons is insignificant, and the theory can be reformulated as an effective hadron theory. if the momentum transfer during scattering is substantially greater than this magnitude, then this is a hard hadronic reaction. In this case, to a good accuracy, hadrons can be considered weakly coupled, and scattering occurs between the individual components of rapidly moving hadrons, the partons. This behavior is called asymptotic freedom and is primarily associated with the decrease of the strong coupling constant with increasing momentum transfer (a discovery for which the 2004 Nobel Prize in Physics was awarded). Literature Quantum chromodynamics
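The decrease of the strong coupling with momentum transfer described above can be illustrated numerically with the standard leading-order (one-loop) running coupling, αs(Q) = 12π / [(33 − 2nf) ln(Q²/Λ²)]. This sketch is not from the article; the choices of nf (number of active quark flavors) and Λ (the QCD scale) below are conventional illustrative values.

```python
import math

def alpha_s(q_gev: float, n_f: int = 5, lam_gev: float = 0.2) -> float:
    """One-loop running strong coupling constant.

    alpha_s(Q) = 12*pi / ((33 - 2*n_f) * ln(Q^2 / Lambda^2)),
    valid only for Q well above Lambda, i.e. in the perturbative regime.
    """
    return 12 * math.pi / ((33 - 2 * n_f) * math.log(q_gev**2 / lam_gev**2))

# Asymptotic freedom: the coupling shrinks as the momentum transfer grows.
for q in (2.0, 10.0, 91.2, 1000.0):
    print(f"Q = {q:7.1f} GeV  alpha_s = {alpha_s(q):.3f}")
```

The printed values decrease monotonically with Q, which is exactly the behavior that makes partons weakly coupled in hard reactions.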
Hard hadronic reaction
[ "Physics" ]
252
[ "Particle physics stubs", "Particle physics" ]
60,893,132
https://en.wikipedia.org/wiki/UV-328
UV-328 (2-(2H-benzotriazol-2-yl)-4,6-di-tert-pentylphenol) is a chemical compound that belongs to the phenolic benzotriazoles. It is a UV filter that is used as an antioxidant for plastics. Properties UV-328 has a melting point of 80–86 °C, a vapor pressure of 4.6·10−5 Pa (20 °C) and a water solubility of 0.17±0.07 μg·l−1 (25 °C). The octanol-water partition coefficient (log KOW) is 7.93. Applications UV-328 is a light stabilizer for a variety of plastics and other organic substrates. Its use is recommended for the stabilization of styrene homopolymers and copolymers, acrylic polymers, unsaturated polyesters, polyvinyl chloride, polyolefins, polyurethanes, polyacetals, polyvinyl butyral, elastomers and adhesives. It protects polymers and organic pigments from UV radiation and helps maintain the original appearance and physical integrity of moldings, films, sheets and fibers during outdoor weathering. The application concentration is 0.1–1%. UV-328 is recommended for applications such as automotive coatings, industrial coatings, and commercial inks such as wood stains or do-it-yourself inks. Hazard UV-328 is persistent, bioaccumulative and toxic (PBT) as well as very persistent and very bioaccumulative (vPvB). Thus, it is on the list of substances of very high concern. The 2023 Conference of the Parties of the United Nations Stockholm Convention on Persistent Organic Pollutants took the decision to eliminate the use of UV-328 by listing this chemical in Annex A to the Convention. A bioaccumulation factor (log BAF) of 2.6–3.4 was determined in fish from Canadian rivers. It may cause long-lasting harmful effects to aquatic life. UV-328 has been found to be associated with adverse health effects in mammals based on repeated-dose toxicity studies conducted in rats and dogs, with the primary health effect being liver toxicity. It is also associated with adverse effects on the kidney based on a study in rats.
The finding of UV-328 in plastics sampled on remote beaches, in the stomachs of seabirds and in preen gland oil shows that it is also transported over long distances and is taken up by biota. Detections in Arctic biota include eggs of common eider, kittiwake, European shag and glaucous gull as well as the livers of mink. See also Tinuvin 770 References Benzotriazoles Phenyl compounds Antioxidants Persistent organic pollutants under the Stockholm Convention
UV-328
[ "Chemistry" ]
602
[ "Persistent organic pollutants under the Stockholm Convention" ]
60,893,414
https://en.wikipedia.org/wiki/Offshore%20installation%20security
Offshore installation security is the protection of maritime installations from intentional harm. As part of general maritime security, offshore installation security is defined as the installation's ability to combat unauthorized acts designed to cause intentional harm to the installation. The security of offshore installations is vital, as a threat may result not only in personal, economic, and financial losses but also touches on the strategic aspects of the petroleum market and geopolitics. Offshore installations refer to offshore platforms, oil platforms, and various types of offshore drilling rigs. It is also a general term for mobile and fixed maritime structures, which includes facilities intended for exploration; drilling; the production, processing, or storage of hydrocarbons; and other related activities regarding the processing of fluids lying beneath the seabed. Offshore installations are most commonly engaged in drilling operations on the continental shelf of a country and form a major part of the petroleum industry's upstream sector. Whilst records of security incidents date to the 1960s, the matter did not appear in academic writings until the early 1980s. A milestone is the 1988 SUA Convention and Protocol, which criminalized acts of crime or violence against ships or fixed platforms. After the September 11 attacks in 2001, there was increased awareness of possible threats in the offshore energy sector. Threats stem from sources such as pirates, environmental extremists, and other criminals, and they may vary in gravity and frequency. There are a variety of protective mechanisms in place, ranging from international legal frameworks to specific industry planning and responses. History 1960s - 2000s Record keeping of security incidents at offshore installations dates back to the 1960s, but it was not until the early 1980s that possible threats were first addressed within academic literature.
This lack of protection left the assets vulnerable to attacks; however, with the Achille Lauro attack in 1985, awareness of the protection of maritime targets, including offshore installations, increased. The attack is seen as a major driver for the 1988 adoption of the Convention for the Suppression of Unlawful Acts Against the Safety of Maritime Navigation (SUA Convention), which criminalizes crime or violence against ships, including acts of terrorism and piracy. The signing of the accompanying SUA Protocol, the Protocol for the Suppression of Unlawful Acts against the Safety of Fixed Platforms Located on the Continental Shelf, which prohibits and punishes behavior that may threaten the security of offshore fixed platforms, is seen as a milestone in offshore installation security. In the same year Brian Michael Jenkins published a paper under the RAND Corporation and was the first to comprehensively compile a record of historical attacks on offshore installations and identify the main methods of attack. By the late 1980s the awareness of installation security had increased, and the first international legal regulation was in place. Nevertheless, industry standards with regards to the protection of offshore installations were still low. September 11 attacks as turning point The 9/11 attacks marked a turning point in international awareness of and policy towards the comprehensive protection of the offshore energy sector, as political engagement with the topic increased. Moreover, since 2004, the international community has experienced an increase in attacks on offshore installations due to reasons such as the increased capabilities of adversaries, political instability within certain nations, and armed conflicts in oil-producing countries. For example, since 2006 the conflict in the Niger Delta has resulted in increased attacks in the Gulf of Guinea and raised security levels.
According to the International Energy Agency, the security of the offshore oil and gas industry is currently of economic and strategic importance, as about one quarter of the global energy supply stems from offshore sources. The resulting development towards heightened awareness and recognition of the issue has affected how the offshore oil and gas sector organizes security on its installations. For example, some companies include a security division within their Health, Safety & Environment departments. This overall development has brought changes to the international regulatory framework; namely, the passing of the ISPS Code and the 2005 amendments to the 1988 SUA Convention and Protocol. Additionally, national laws have been enacted to include critical infrastructure protection policies (for further information see 'Protection mechanisms' below). Security threats While a security threat is seen as "any unlawful interference with offshore oil and gas operations or an act of violence directed towards offshore installations", there are several ways to classify the various threats facing offshore installations. The most comprehensive and encyclopedic compilation is Dr. Mikhail Kashubsky's 2016 book Offshore Oil and Gas Installations Security: An International Perspective. The book includes a comprehensive dataset of past attacks and security incidents involving offshore oil and gas installations, entitled the Offshore Installations Attack Dataset (OIAD). In his writing, Kashubsky established an offshore security threat nexus in which he classifies the different threats. This classification identifies the people and organizations behind the threats and analyses their motivation, intent, and tactics in order to develop an effective response.
Specifically, Kashubsky takes three factors into account when assessing offshore security threats: geography and other enabling factors, motivations and objectives, and capabilities and tactics. With regard to geography, the location of the offshore installation is assessed for possible vulnerability. Other enabling factors refer to how events such as civil wars or political unrest in the region might affect offshore security. Motivations and objectives highlight the differing intentions of the respective threat actors and how these relate to the differing methods by which they might act. Capabilities and tactics address how to adapt defensive operations depending on the type and aim of a threat; these can range from piratical kidnapping tactics to external sabotage. Since the threats are seen as being motivated by a range of objectives, they are also seen as interlinked and overlapping. Lastly, Kashubsky ranks the different threats according to the API Security Risk Assessment (SRA) methodology. This consists of a 5-level ranking system that defines threat rankings for the petroleum and petrochemical industry, where 1 is very low, 2 is low, 3 is medium, 4 is high, and 5 is very high. The ranking is based on the three factors above as well as the frequency of past incidents. The offshore security threat nexus identifies and ranks the following threats: Civil protest: These are interferences caused by non-violent environmental activists, indigenous activists, labour activists, striking workers, anti-government protesters or the like, usually employing non-violent and non-destructive measures. API-SRA Ranking: High Cyber threats: These present a broad spectrum of motivations and capabilities; however, there is a trend of cyber-attacks targeting critical infrastructure, and such attacks can be executed from any location worldwide.
API-SRA Ranking: High Inter-state hostilities: These are actions of nation-states that take the form of interstate armed conflict and wars, maritime boundary disputes, or state terrorism. API-SRA Ranking: High Piracy: These are acts of piracy, typically committed for financial gain. API-SRA Ranking: Medium Insurgency: These include regular or guerrilla combat against the armed forces of an established authority, government, or administration, carried out in opposition to civil authority. Insurgents may also resort to piracy as a financing tactic. API-SRA Ranking: Medium Organised crime: This addresses criminal activities involving illegal ventures for financial purposes, specifically those which are non-ideological. API-SRA Ranking: Medium Internal sabotage: This addresses the deliberate destruction, disruption, or damage of equipment by dissatisfied employees, current or former. It also includes the intentional disclosure of sensitive and confidential information to third parties. API-SRA Ranking: Medium Terrorism: This concerns activities organised for terrorist purposes, either with a political aim or as a tactic to realise certain sub-goals. In this classification, violence is deliberately used. API-SRA Ranking: Low Vandalism: This concerns acts that damage cargo, support equipment, infrastructure, systems, or facilities. They can include violent actions of radical environmental and animal rights groups that intend to cause damage to company property. API-SRA Ranking: Very Low Under this classification, the highest threats are seen to stem from civil protest, interstate hostilities, and cyber threats. The terrorism threat is ranked low, and vandalism even lower; the other categories present a medium threat level. Geographical considerations The security of an offshore installation stands in close relation to its geographical location.
Even though attacks have taken place in all regions of the world, most have occurred in politically and economically unstable countries. The majority of these, more than 60%, took place off the coast of Nigeria. This has raised the notion that there are national and regional dimensions that must be considered. Regions of heightened concern include the following: Gulf of Guinea, with more than 60% of the attacks taking place there Bay of Bengal and the Asia-Pacific region, due to civil unrest onshore Persian Gulf, which lies in an oil-rich region Indian Ocean, specifically around the Horn of Africa Possible consequences of security incidents There are a variety of considerations when analyzing the consequences of a possible threat materializing. Offshore installation security threats are considered hybrid threats, as the consequences may be felt by various organizations and sectors around the globe. Personal security concerns Possible injury or death of offshore workers must be considered. Attacks may result in grave injuries or other medical consequences, or, in the worst case, loss of life. Operational security concerns A materialized security threat may disrupt the functioning of the offshore installation due to damage or harm at the operational site. Environmental security concerns The consequences of oil spills, especially on the high seas, may be grave. A possible oil spill may cause long-lasting damage to the immediate environment, but may have wider implications too. For example, the food security of a region may be compromised due to water contamination. Not only may offshore and coastal waters be affected, but a spill may also cause toxic effects on shorelines and shallow inshore waters. This could have a negative effect on the population living in the region. Economic security concerns A successful attack may result in economic concerns for a variety of people who are involved.
First, the operating company may suffer damage as well as a loss of income when production is stalled. Additionally, a disruption of oil and gas supply to the market may result in volatile oil prices, which would affect the global economy and the stock exchanges. An oil spill may also have significant effects on other sectors, such as local fisheries and tourism, which could experience losses. Energy security concerns With the offshore sector accounting for about one quarter of global energy production, offshore oil and gas extraction has become increasingly important in the evolving world energy scene. Petroleum, as one of the most important energy resources of the earth, will remain an essential part of global energy demand in the future, as demand is not projected to decline. Thus, an uninterrupted petroleum supply is essential to global energy security, as a sustained disruption in oil supply may cause national emergencies. Strategic security concerns A sustained disruption in oil and gas supply may also cause geopolitical concerns. It could weaken a nation's position within global politics as it loses influence over the factors that govern international relations. Protection mechanisms Offshore installations enjoy a number of protection mechanisms that are international, regional, and industry specific. Legal mechanisms UNCLOS Art. 60 The 1982 United Nations Convention on the Law of the Sea (UNCLOS) provides a basic legal basis for protecting offshore installations. Typically, offshore installations are deployed either in the territorial sea, the contiguous zone, or the exclusive economic zone (EEZ) of a coastal state. Whilst the coastal state has full enforcement jurisdiction over all security matters in the territorial sea, in the contiguous zone it also has powers over law enforcement issues which affect its domestic stability.
This allows the coastal state to secure its offshore assets broadly through jurisdiction in these two zones. In the EEZ the rights are more limited, as the coastal state cannot restrict others' right of innocent passage through the waters. Art. 60 of UNCLOS gives coastal states the right to create a 500-meter safety zone around offshore installations, designating an area of restricted navigation in which any passing vessel or boat may be considered a potential security concern. Within this zone, personnel may take appropriate measures to stop those who pose a threat. SUA Convention and Protocol The Convention for the Suppression of Unlawful Acts Against the Safety of Maritime Navigation (SUA Convention) and its accompanying Protocol for the Suppression of Unlawful Acts Against the Safety of Fixed Platforms Located on the Continental Shelf (SUA Protocol) criminalized crime, violence, or behavior that may threaten the security of ships and fixed platforms. The main purpose of the Convention is to ensure that appropriate action is taken against those who commit unlawful acts against vessels and offshore oil and gas infrastructure, as it obliges contracting governments either to extradite or to prosecute alleged offenders. The 2005 amendments, moreover, addressed vulnerable elements of the maritime-based oil and gas industry and drew attention to potential acts of terrorism, establishing that consideration should also be given to the oil and gas industry. With this, the SUA Convention and Protocol provided the first international treaty framework for combating and prosecuting criminals and terrorists who attack or use a tanker or a fixed oil or gas installation as part of a terrorist operation.
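In software terms, the Art. 60 safety zone described above reduces to a distance check. A minimal sketch (the coordinates, function names, and the use of a spherical-earth haversine distance are illustrative assumptions, not part of UNCLOS):

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius, spherical approximation
SAFETY_ZONE_M = 500.0         # Art. 60 safety zone radius

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_safety_zone(installation, vessel):
    """True if the vessel position lies within 500 m of the installation."""
    return haversine_m(*installation, *vessel) <= SAFETY_ZONE_M
```

A vessel roughly 445 m from a hypothetical North Sea platform would fall inside the zone; one 670 m away would not.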
ISPS Code The International Ship and Port Facility Security Code (ISPS) prescribes responsibilities to governments, companies, and personnel to detect security threats and take preventive measures against security incidents affecting ships or port facilities used in international trade. It additionally introduced maritime security levels for quick crisis communication, which provide industry members with a framework for crisis response. The ISPS Code is enacted in national law in the EU and the US. Industry mechanisms International Association of Oil and Gas Producers (OGP documents) The International Association of Oil and Gas Producers, which asserts itself as the "voice of the global upstream oil and gas industry", has published several reports that recommend best practices for the oil and gas industry, including enhanced security of energy installations. The pertinent documents are: OGP Report No. 494 on integrating security in major projects - principles & guidelines OGP Report No. 512 on security management systems IOGP Report No. 555 on conducting security risk assessments (SRA) in dynamic threat environments ISO Standards ISO 31000:2009 The voluntary international ISO Standards introduce recommendations and best practices for industry actors. ISO 31000:2009 Risk Management: Principles and Guidelines is a standard presenting internationally accepted best-practice frameworks and guidelines for risk management. It presents a systematized protocol to identify, analyse, evaluate, and treat possible risks, supporting strategies for major safety and security incident prevention, response, and recovery. Implementation of these standards is designed to both prepare for and react to a security emergency.
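ISO 31000 deliberately prescribes principles rather than a specific scoring scheme, but operators commonly implement its identify/analyse/evaluate steps as a likelihood-by-consequence matrix. A hypothetical sketch (the 1-5 scales and the thresholds are illustrative, not taken from the standard):

```python
def risk_rating(likelihood: int, consequence: int) -> str:
    """Map 1-5 likelihood and consequence scores to a qualitative risk rating.

    The product score (1..25) is bucketed into five bands; the band
    boundaries here are illustrative, not normative.
    """
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("scores must be in 1..5")
    score = likelihood * consequence
    if score >= 17:
        return "very high"
    if score >= 10:
        return "high"
    if score >= 5:
        return "medium"
    if score >= 3:
        return "low"
    return "very low"
```

For example, a likely event (4) with moderate consequence (3) rates "high", while a rare event (1) with minor consequence (2) rates "very low"; the rating then drives whether a risk is treated, transferred, or accepted.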
Risk assessment mechanisms RAMCAP RAMCAP, or Risk Analysis and Management for Critical Asset Protection, is a framework for analyzing and managing the risks associated with attacks against the United States' national critical infrastructure assets. It provides an overarching 7-step methodology for the assessment and management of risks and their impact. It was developed by the American Society of Mechanical Engineers to be used by the staff and management of infrastructure facilities, and is also used by American industry to report to the US Department of Homeland Security. CRISRRAM CRISRRAM, or critical infrastructures and systems risk and resilience assessment methodology, is a security methodology developed by the European Commission. It addresses risks and vulnerabilities of critical infrastructure at the asset, system, and societal levels, taking into account environmental and man-made security hazards. It provides industry professionals with a framework to analyse and act on a security emergency. SVA Methodology for the Petroleum and Petrochemical Industries The Security Vulnerability Assessment (SVA) Methodology for the Petroleum and Petrochemical Industries, from the American Petroleum Institute and the National Petrochemical & Refiners Association, aims at maintaining and increasing the security of energy facilities in the petroleum sector. The document establishes a security vulnerability assessment methodology to identify and analyse the threats and vulnerabilities that energy installations face. Moreover, general security risk management practices, such as enterprise risk management, are employed throughout the sector. See also Deepwater Horizon explosion Nigerian Oil Crisis Price of oil Oil-dependent country Energy security Critical infrastructure Cyberterrorism Risk assessment Risk management References External links Kashubsky (2016). Offshore Oil and Gas Installations Security: An International Perspective Cordner, L. (2018).
Maritime Security Risks, Vulnerabilities and Cooperation: Uncertainty in the Indian Ocean. IEA (2018). Offshore Energy Outlook. BP (2018). BP Statistical Review of World Energy. 67th Edition. Offshore installations
Offshore installation security
[ "Engineering" ]
3,360
[ "Offshore installations", "Offshore engineering" ]
60,893,561
https://en.wikipedia.org/wiki/Rebecca%20Lange
Rebecca Ann Lange is a professor of experimental petrology, magmatism and volcanism at the University of Michigan. Her research investigates how magmatism has shaped the evolution of the Earth, as well as the formation of continental crust. She is a Fellow of the Mineralogical Society of America and was awarded the F.W. Clarke Medal in 1995. Early life and education Lange studied geology at the University of California, Berkeley. She earned her bachelor's degree in 1985, and remained there for her doctoral studies. She was a member of Sigma Xi. Lange completed her doctorate under the supervision of Ian S. E. Carmichael. Together they worked on the Aurora volcanic field, which is located near Mono Lake in the Great Basin. Research and career Lange was a postdoctoral researcher at Princeton University, where she worked with Alexandra Navrotsky on the heat capacities of silicate liquids. Lange was appointed assistant professor at the University of Michigan in 1991 and was promoted to professor in 2004. Her research investigates how magmatism and volcanism have shaped the Earth. Lange studies the formation of the continental crust. She works on the Trans-Mexican Volcanic Belt, a Neogene volcanic arc at the edge of the North American Plate. Here she is uncovering the eruption rates of magma, the proportions of different types of magma, and the role of water. She created a thermodynamic model of the plagioclase-liquid exchange reaction. Lange's model contained calorimetric and volumetric information for the liquid and crystalline components. Lange has since served on the F.W. Clarke Medal committee. Awards and honours Her awards and honours include: 1995 Awarded the F.W.
Clarke Medal by the Geochemical Society 1997 University of Michigan Class of 1923 Memorial Teaching Award and John Dewey Award 2014 Geochemical Fellow 2016 Served as president of the Mineralogical Society of America References Year of birth missing (living people) Living people University of Michigan faculty University of California, Berkeley alumni American geochemists American mineralogists Women mineralogists Fellows of the Mineralogical Society of America
Rebecca Lange
[ "Chemistry" ]
417
[ "Geochemists", "American geochemists" ]
60,894,070
https://en.wikipedia.org/wiki/Water%2C%20Air%2C%20%26%20Soil%20Pollution
Water, Air, & Soil Pollution is a monthly peer-reviewed scientific journal covering the study of environmental pollution. It was established in 1971 and is published by Springer Science+Business Media. The editor-in-chief is Jack T. Trevors. According to the Journal Citation Reports, the journal has a 2017 impact factor of 1.769. References External links Springer Science+Business Media academic journals Academic journals established in 1971 Monthly journals English-language journals Environmental science journals
Water, Air, & Soil Pollution
[ "Environmental_science" ]
96
[ "Environmental science journals", "Environmental science journal stubs" ]
60,894,072
https://en.wikipedia.org/wiki/Indigenous%20cuisine
Indigenous cuisine is a type of cuisine based on the preparation of recipes with products obtained from native species of a specific area. Indigenous cuisine is prepared using indigenous ingredients of vegetable or animal origin in traditional recipes of the typical cuisine of a place. Contemporary indigenous cuisine uses indigenous products to create new dishes. Chefs and restaurateurs using indigenous foods are aided by farmers who are reviving traditional varieties and breeds. Defining terms David Cook has asked how "indigenous cooking" can be defined, arguing that it can mean anything from techniques to ingredients, and that the ingredients can be further argued as using only pre-colonial ingredients vs. using post-colonial and invasive-species ingredients, concluding that "it all depends on your concept of [indigenous] identity." Australia In Australia, there are chefs both "sticking to the old recipes (and) innovating new ones" using traditional ingredients. Canada In Canada, multiple restaurants owned by First Nations restaurateurs offer menus based on traditional ingredients such as beans, corn, and squash. According to restaurateur Shawn Adler, one of the challenges is public awareness. "People understand what Thai food is, what Italian food is, what Chinese food is, what Ethiopian food is," he said. "But people don’t really understand what indigenous cuisine is." Caribbean The concept was also used in the Caribbean. Chile The concept as such began to take shape and gain popularity in Chile at the beginning of the 2010s, when a restaurant dedicated to this style of cuisine in Santiago de Chile appeared in the ranking of the World's 50 Best Restaurants in 2014. El Salvador In El Salvador, indigenous cuisine is an "emerging movement...composed of young chefs who are integrating traditional foods into contemporary cuisine", according to NPR. Fatima Mirandel said, "We take old ingredients from the [farming areas] and combine them in new ways.
The flavor is new and exciting for our generation, and brings back a flood of good memories for the older people." United States Some US Native American chefs using indigenous ingredients in traditional dishes object to referring to indigenous cuisine as a "trend". Sean Sherman, a member of the Oglala Lakota and indigenous food activist, said, "It's not a trend. It's a way of life." Vincent Medina, a member of the Muwekma Ohlone tribe, serves acorn-flour brownies, a non-traditional dish made with indigenous ingredients, at his Cafe Ohlone by Mak-’amham. See also Traditional food List of food origins References Gastronomy by type Endemism
Indigenous cuisine
[ "Biology" ]
538
[ "Endemism", "Biodiversity" ]
60,894,223
https://en.wikipedia.org/wiki/Kinematic%20synthesis
In mechanical engineering, kinematic synthesis (also known as mechanism synthesis) determines the size and configuration of mechanisms that shape the flow of power through a mechanical system, or machine, to achieve a desired performance. The word synthesis refers to combining parts to form a whole. Hartenberg and Denavit describe kinematic synthesis as ...it is design, the creation of something new. Kinematically, it is the conversion of a motion idea into hardware. The earliest machines were designed to amplify human and animal effort; later, gear trains and linkage systems captured wind and flowing water to rotate millstones and pumps. Now machines use chemical and electric power to manufacture, transport, and process items of all types. And kinematic synthesis is the collection of techniques for designing those elements of these machines that achieve required output forces and movement for a given input. Applications of kinematic synthesis include determining: the topology and dimensions of a linkage system to achieve a specified task; the size and shape of links of a robot to move parts and apply forces in a specified workspace; the mechanical configuration of end-effectors, or grippers, for robotic systems; the shape of a cam and follower to achieve a desired output movement coordinated with a specified input movement; the shape of gear teeth to ensure a desired coordination of input and output movement; the configuration of a system of gears, belts, and cable, or rope drives, to perform a desired power transmission; the size and shape of fixturing systems to provide precision in part manufacture and component assembly. Kinematic synthesis for a mechanical system is described as having three general phases, known as type synthesis, number synthesis and dimensional synthesis.
Type synthesis matches the general characteristics of a mechanical system to the task at hand, selecting from an array of devices such as a cam-follower mechanism, linkage, gear train, a fixture or a robotic system for use in a required task. Number synthesis considers the various ways a particular device can be constructed, generally focussed on the number and features of the parts. Finally, dimensional synthesis determines the geometry and assembly of the components that form the device. Linkage synthesis A linkage is an assembly of links and joints that is designed to provide required force and movement. Number synthesis of linkages which considers the number of links and the configuration of the joints is often called type synthesis, because it identifies the type of linkage. Generally, the number of bars, the joint types, and the configuration of the links and joints are determined before starting dimensional synthesis. However, design strategies have been developed that combine type and dimensional synthesis. Dimensional synthesis of linkages begins with a task defined as the movement of an output link relative to a base reference frame. This task may consist of the trajectory of a moving point or the trajectory of a moving body. The kinematics equations, or loop equations, of the mechanism must be satisfied in all of the required positions of the moving point or body. The result is a system of equations that are solved to compute the dimensions of the linkage. There are three general tasks for dimensional synthesis, i) path generation, in which the trajectory of a point in the output link is required, ii) motion generation, in which the trajectory of the output link is required, and iii) function generation in which the movement of the output link relative to an input link is required. 
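For a four-bar linkage, function generation (task iii) can be solved in closed form with Freudenstein's equation, a standard textbook method not named above. A minimal sketch, assuming three precision points and a ground link of unit length (the naming and sign convention follow common textbook usage, with input crank a, coupler b, output rocker c, and ground d):

```python
import math

def freudenstein_synthesis(pairs):
    """Size a four-bar linkage for function generation.

    pairs: three (phi, psi) tuples in radians, where phi is the input
    crank angle and psi the output rocker angle.  Uses Freudenstein's
    equation
        K1*cos(psi) - K2*cos(phi) + K3 = cos(phi - psi)
    with K1 = d/a, K2 = d/c, K3 = (a^2 - b^2 + c^2 + d^2)/(2*a*c),
    and returns the link lengths (a, b, c, d) scaled so the ground
    link d = 1.
    """
    # Three precision points give a 3x3 linear system M @ [K1, K2, K3] = r.
    M = [[math.cos(psi), -math.cos(phi), 1.0] for phi, psi in pairs]
    r = [math.cos(phi - psi) for phi, psi in pairs]
    k1, k2, k3 = _solve3(M, r)
    d = 1.0
    a, c = d / k1, d / k2
    # Recover the coupler from K3 = (a^2 - b^2 + c^2 + d^2) / (2*a*c).
    b = math.sqrt(a * a + c * c + d * d - 2.0 * a * c * k3)
    return a, b, c, d

def _solve3(M, r):
    """Solve a 3x3 linear system by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(M)
    out = []
    for j in range(3):
        Mj = [row[:] for row in M]
        for i in range(3):
            Mj[i][j] = r[i]
        out.append(det(Mj) / D)
    return out
```

The linkage only matches the desired function exactly at the three precision points; between them there is structural error, which is why more elaborate optimization-based synthesis is used when many points must be matched.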
The equations for function generation can be obtained from those for motion generation by considering the movement of the output link relative to an input link, rather than relative to the base frame. The trajectory and motion requirements for dimensional synthesis are defined as sets of either instantaneous positions or finite positions. Specifying instantaneous positions is a convenient way to describe requirements on the differential properties of the trajectory of a point or body, which are geometric versions of velocity, acceleration and rate of change of acceleration. The mathematical results that support instantaneous position synthesis are called curvature theory. Finite-position synthesis has a task defined as a set of positions of the moving body relative to a base frame, or relative to an input link. A crank that connects a moving pivot to a base pivot constrains the center of the moving pivot to follow a circle. This yields constraint equations that can be solved graphically using techniques developed by L. Burmester, and called Burmester theory. Cam and follower design A cam and follower mechanism uses the shape of the cam to guide the movement of the follower by direct contact. Kinematic synthesis of a cam and follower mechanism consists of finding the shape of the cam that guides a particular follower through the required movement. A plate cam is connected to a base frame by a hinged joint, and the contour of the cam forms a surface that pushes on a follower. The connection of the follower to the base frame can be either a hinged or a sliding joint, forming a rotating or a translating follower. The portion of the follower that contacts the cam can have any shape, such as a knife-edge, a roller, or flat-faced contact. As the cam rotates, its contact with the follower face drives its output rotation or sliding movement.
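Before the cam contour itself is constructed, the follower's motion is usually specified as a standard curve; a common choice is the cycloidal rise, which has zero velocity and acceleration at both ends of the rise and so avoids sudden follower jerk. A sketch of that displacement function (the parameter names are illustrative):

```python
import math

def cycloidal_rise(theta, h, beta):
    """Follower displacement for a cycloidal rise.

    theta: cam angle in radians, 0 <= theta <= beta
    h:     total follower lift
    beta:  cam angle over which the rise occurs
    Displacement: s = h * (theta/beta - sin(2*pi*theta/beta) / (2*pi)).
    Velocity and acceleration are both zero at theta = 0 and theta = beta,
    which makes this profile a common choice for high-speed cams.
    """
    if not 0 <= theta <= beta:
        raise ValueError("theta must lie within the rise interval [0, beta]")
    x = theta / beta
    return h * (x - math.sin(2.0 * math.pi * x) / (2.0 * math.pi))
```

Evaluating this function over the rise interval produces the rise portion of the displacement diagram discussed next.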
The task for a cam and follower mechanism is provided by a displacement diagram, which defines the rotation angle or sliding distance of the follower as a function of the rotation of the cam. Once the contact shape of the follower and its motion are defined, the cam can be constructed using graphical or numerical techniques. Gear teeth and gear train design A pair of mating gears can be viewed as a cam and follower mechanism designed to use the rotary movement of an input shaft to drive the rotary movement of an output shaft. This is achieved by providing a series of cams and followers, or gear teeth, distributed around the circumferences of two circles that form the mating gears. Early implementations of this rotary movement used cylindrical and rectangular teeth without concern for smooth transmission of movement while the teeth were engaged, as in the main drive gears of the windmill Doesburgermolen in Ede, Netherlands. The geometric requirement that ensures smooth movement of contacting gear teeth is known as the fundamental law of gearing. This law states that for two bodies rotating about separate centers and in contact along their profiles, the relative angular velocity of the two will be constant as long as the line perpendicular to the profiles at their point of contact, the profile normal, passes through the same point along the line between their centers throughout their movement. A pair of tooth profiles that satisfy the fundamental law of gearing are said to be conjugate to each other. The involute profile that is used for most gear teeth today is self-conjugate, which means that if the teeth of two gears are the same size then they will mesh smoothly independent of the diameters of the mating gears. The relative movement of gears with conjugate tooth profiles is defined by the distance from the center of each gear to the point at which the profile normal intersects the line of centers.
This is known as the radius of the pitch circle for each gear. The calculation of the speed ratios for a gear train with conjugate gear teeth becomes a calculation using the ratios of the radii of the pitch circles that make up the gear train. Gear train design uses the desired speed ratio for a system of gears to select the number of gears, their configuration, and the size of their pitch circles. This is independent of the selection of the gear teeth as long as the tooth profiles are conjugate, with the exception that the circumferences of the pitch circles must provide for a whole number of teeth. References Mechanics
Kinematic synthesis
[ "Physics", "Engineering" ]
1,541
[ "Mechanics", "Mechanical engineering" ]
60,894,433
https://en.wikipedia.org/wiki/K%C4%81nga%20pirau
Kānga pirau (which translates literally from Māori as rotten corn), is a fermented maize (corn) porridge dish which is considered a delicacy by many Māori people of New Zealand. Production The corn is traditionally prepared by soaking whole corn cobs in streams of running water in woven baskets for up to six weeks, until the corn kernels have settled to the bottom of the basket. In modern preparations, the corn is soaked in containers filled with water. The resulting fermentation process results in the corn having a rather pungent aroma, hence the name rotten corn. Historically, this fermentation process was also used for the preservation of fish and crustaceans such as crayfish. Serving The resulting fermented corn is mashed before serving, and is often served with cream and sugar. History The dish dates back to at least the 19th century. See also Boza List of porridges Ogi Poi Pozol References Fermented foods Maize dishes Māori cuisine Porridges
Kānga pirau
[ "Biology" ]
209
[ "Fermented foods", "Biotechnology products" ]
60,894,693
https://en.wikipedia.org/wiki/NGC%202004
NGC 2004 (also known as ESO 86-SC4) is an open cluster of stars in the southern constellation of Dorado. It was discovered by Scottish astronomer James Dunlop on September 24, 1826. This is a young, massive cluster with an age of about 20 million years and 23,000 times the mass of the Sun. It has a core radius of . NGC 2004 is a member of the Large Magellanic Cloud, which is a satellite galaxy of the Milky Way. References External links Open clusters 2004 ESO objects Dorado Large Magellanic Cloud Astronomical objects discovered in 1826 Discoveries by James Dunlop
NGC 2004
[ "Astronomy" ]
124
[ "Dorado", "Constellations" ]
60,894,825
https://en.wikipedia.org/wiki/BlueKeep
BlueKeep () is a security vulnerability that was discovered in Microsoft's Remote Desktop Protocol (RDP) implementation, which allows for the possibility of remote code execution. First reported in May 2019, it is present in all unpatched Windows NT-based versions of Microsoft Windows from Windows 2000 through Windows Server 2008 R2 and Windows 7. Microsoft issued a security patch (including an out-of-band update for several versions of Windows that have reached their end-of-life, such as Windows XP) on 14 May 2019. On 13 August 2019, related BlueKeep security vulnerabilities, collectively named DejaBlue, were reported to affect newer Windows versions, including Windows 7 and all recent versions of the operating system up to Windows 10, as well as older Windows versions. On 6 September 2019, a Metasploit exploit of the wormable BlueKeep security vulnerability was announced to have been released into the public realm. History The BlueKeep security vulnerability was first noted by the UK National Cyber Security Centre and, on 14 May 2019, reported by Microsoft. The vulnerability was named BlueKeep by computer security expert Kevin Beaumont on Twitter. BlueKeep is officially tracked as: and is a "wormable" remote code execution vulnerability. Both the U.S. National Security Agency (which issued its own advisory on the vulnerability on 4 June 2019) and Microsoft stated that this vulnerability could potentially be used by self-propagating worms, with Microsoft (based on a security researcher's estimation that nearly 1 million devices were vulnerable) saying that such a theoretical attack could be of a similar scale to EternalBlue-based attacks such as NotPetya and WannaCry.
On the same day as the NSA advisory, researchers of the CERT Coordination Center disclosed a separate RDP-related security issue in the Windows 10 May 2019 Update and Windows Server 2019, citing a new behaviour where RDP Network Level Authentication (NLA) login credentials are cached on the client system, and the user can re-gain access to their RDP connection automatically if their network connection is interrupted. Microsoft dismissed this vulnerability as being intended behaviour, and it can be disabled via Group Policy. As of 1 June 2019, no malware actively exploiting the vulnerability seemed to be publicly known; however, undisclosed proof of concept (PoC) codes exploiting the vulnerability may have been available. On 1 July 2019, Sophos, a British security company, reported on a working example of such a PoC, in order to emphasize the urgent need to patch the vulnerability. On 22 July 2019, more details of an exploit were purportedly revealed by a conference speaker from a Chinese security firm. On 25 July 2019, computer experts reported that a commercial version of the exploit may have been available. On 31 July 2019, computer experts reported a significant increase in malicious RDP activity and warned, based on histories of exploits from similar vulnerabilities, that an active exploit of the BlueKeep vulnerability in the wild might be imminent. On 13 August 2019, related BlueKeep security vulnerabilities, collectively named DejaBlue, were reported to affect newer Windows versions, including Windows 7 and all recent versions of the operating system up to Windows 10, as well as the older Windows versions. On 6 September 2019, an exploit of the wormable BlueKeep security vulnerability was announced to have been released into the public realm. The initial version of this exploit was, however, unreliable, being known to cause "blue screen of death" (BSOD) errors. A fix was later announced, removing the cause of the BSOD error.
On 2 November 2019, the first BlueKeep hacking campaign on a mass scale was reported, and included an unsuccessful cryptojacking mission. On 8 November 2019, Microsoft confirmed a BlueKeep attack, and urged users to immediately patch their Windows systems. Mechanism The RDP protocol uses "virtual channels", configured before authentication, as a data path between the client and server for providing extensions. RDP 5.1 defines 32 "static" virtual channels, and "dynamic" virtual channels are contained within one of these static channels. If a server binds the virtual channel "MS_T120" (a channel for which there is no legitimate reason for a client to connect to) with a static channel other than 31, heap corruption occurs that allows for arbitrary code execution at the system level. Windows XP, Windows Vista, Windows 7, Windows Server 2003, Windows Server 2008, and Windows Server 2008 R2 were named by Microsoft as being vulnerable to this attack. Versions newer than 7, such as Windows 8, Windows 10 and Windows 11, were not affected. The Cybersecurity and Infrastructure Security Agency stated that it had also successfully achieved code execution via the vulnerability on Windows 2000. Mitigation Microsoft released patches for the vulnerability on 14 May 2019, for Windows XP, Windows Vista, Windows 7, Windows Server 2003, Windows Server 2008, and Windows Server 2008 R2. This included versions of Windows that have reached their end-of-life (such as Vista, XP, and Server 2003) and thus are no longer eligible for security updates. The patch forces the aforementioned "MS_T120" channel to always be bound to 31 even if requested otherwise by an RDP server. The NSA recommended additional measures, such as disabling Remote Desktop Services and its associated port (TCP 3389) if it is not being used, and requiring Network Level Authentication (NLA) for RDP. According to computer security company Sophos, two-factor authentication may make the RDP issue less of a vulnerability. 
However, the best protection is to take RDP off the Internet: switch RDP off if not needed and, if needed, make RDP accessible only via a VPN. See also Bad Rabbit ransomware attack - 2017 WannaCry ransomware attack Blaster (computer worm) Dyn cyberattack – 2016 Sasser (computer worm) EternalBlue References External links BlueKeep: Windows Update patches HERE, HERE and HERE (Microsoft). Proof-of-Concept of the flaw by Sophos Computer security exploits 2019 in computing Windows administration
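The channel-binding rule that the May 2019 patch enforces can be sketched in a few lines. This is an illustrative model only, not Microsoft's actual RDP implementation: the binding-table layout and the client request format here are simplified assumptions, and only the invariant (MS_T120 is always pinned to static channel 31) reflects the documented patch behaviour.

```python
# Illustrative sketch of the patched RDP virtual-channel binding rule,
# NOT Microsoft's actual code. The May 2019 patch effectively forces the
# "MS_T120" channel to always bind to static channel 31, ignoring
# whatever index the connecting client requested.

def bind_virtual_channels(requested):
    """Map client-requested (name, index) pairs to bound channel indices.

    `requested` is a list of (channel_name, static_channel_index) tuples;
    the names and table layout are simplified assumptions for illustration.
    """
    bound = {}
    for name, index in requested:
        if name == "MS_T120":
            bound[name] = 31  # patched: always pin MS_T120 to channel 31
        else:
            bound[name] = index
    return bound

# A malicious client asking to bind MS_T120 to channel 4 (the pre-patch
# trigger for the heap corruption) is silently redirected to channel 31.
print(bind_virtual_channels([("rdpdr", 0), ("MS_T120", 4)]))
# → {'rdpdr': 0, 'MS_T120': 31}
```

The real check lives inside the RDP kernel driver; the sketch only captures the invariant the patch introduces, namely that MS_T120 can never end up bound to a static channel other than 31.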
BlueKeep
[ "Technology" ]
1,264
[ "Computer security exploits" ]
60,895,550
https://en.wikipedia.org/wiki/May%20Kaftan-Kassim
May Arif Kaftan-Kassim (1928 – July 23, 2020), also known as May A. Kaftan, was an Iraqi radio astronomer. She trained at Harvard University, and advised on the creation of the Erbil Observatory in Iraq in the 1970s. Early life May Arif Kaftan came from a "fairly conventional, very religious Muslim family," by her own account. Her father was a government official. She attended the University of Manchester as an undergraduate and graduate student, on a scholarship for Iraqi students in the sciences. She completed her doctoral studies in astronomy at Radcliffe College in 1958, with a dissertation titled A study of neutral hydrogen in a region in Cygnus. American astronomers Nan Dieter-Conklin and Frank Drake were her classmates in astronomy at Harvard; they all finished in the same year, and all studied under Cecilia Payne-Gaposchkin. Career Kaftan-Kassim worked at the National Radio Astronomy Observatory in West Virginia, from 1964 to 1966. In 1968, she attended the United Nations Conference on the Exploration and Peaceful Uses of Outer Space (COPUOS), in Vienna, informally representing Iraq. She was on staff at the Dudley Observatory at the State University of New York at Albany in the early 1970s. While attending the annual URSI meeting in Washington D.C. in 1981, Kaftan-Kassim gave an oral history interview for the National Radio Astronomy Observatory archives. She was a visiting professor of astronomy at Agnes Scott College, 1983–1984. Kaftan-Kassim helped establish the astronomy program at the University of Baghdad, advising on texts and hiring. She returned to Baghdad in the mid-1970s to advise on the construction of Iraq's National Astronomical Observatory, near Erbil, and was a project manager there, before she lost her position in 1981 in a shifting political context. 
"I came back with the understanding that it would be six months here, six months there," she explained to an American newspaper in 1979, "but there is so much to do I can't go back to the States." She spent some time doing research at the Byurakan Astrophysical Observatory in the Soviet Union. She did, eventually, return to the United States to live. The American Astronomical Society listed her as a member for over 60 years in 2017. Research publications by Kaftan-Kassim included "Measurements of the 1.9 cm Thermal Radio Emission from Mercury" (Nature 1967), "A Survey of High-Frequency Radio Radiation from Planetary Nebulae" (Astrophysical Journal 1969), "High Frequency Radio Observations of the Stephan's Quintet Region" (Nature 1975), "Extinction and Radio Structure of IC 2149" (Monthly Notices of the Royal Astronomical Society 1977), and "A Radio Continuum Survey of Isolated Pairs of Galaxies" (Astronomical Journal 1978). Personal life Kaftan married pediatrician Sami El-Sheikh Kassim; they separated in the 1970s. Their son Namir E. Kassim was born in Baghdad; he also became a radio astronomer in the United States. May Kaftan-Kassim died in 2020. References External links May A. Kaftan's IAU listing, updated 2020. 1928 births 2020 deaths People from Baghdad Iraqi women academics Iraqi astronomers Women astronomers Radio astronomers Radcliffe College alumni Iraqi expatriates in the United States Alumni of the University of Manchester
May Kaftan-Kassim
[ "Astronomy" ]
698
[ "Women astronomers", "Astronomers" ]
63,015,317
https://en.wikipedia.org/wiki/Waukesha%20Biota
The Waukesha Biota (also known as Waukesha Lagerstätte, Brandon Bridge Lagerstätte, or Brandon Bridge fauna) is an important fossil site located in Waukesha County and Franklin, Milwaukee County within the state of Wisconsin. This biota is preserved in certain strata within the Brandon Bridge Formation, which dates to the early Silurian period. It is known for the exceptional preservation of soft-bodied organisms, including many species found nowhere else in rocks of similar age. The site's discovery was announced in 1985, leading to a plethora of discoveries. This biota is one of the few well studied Lagerstätten (exceptional fossil sites) from the Silurian, making it important in our understanding of the period's biodiversity. Some of the species are not easily classified into known animal groups, showing that much research remains to be done on this site. Other taxa that are normally common in Silurian deposits are rare here, but trilobites are quite common. History and significance The discovery of the Waukesha Biota was first published in 1985 by paleontologists D. G. Mikulic, D. E. G. Briggs, and Joanne Kluessendorf. At the time this site was one of only several known that preserved soft-body parts in fossils. Examples of other sites of this type known at the time were the famous Cambrian aged Burgess shale in British Columbia, and the Carboniferous aged Mazon Creek fossil beds in northern Illinois. This was the only one of its kind known from the Silurian, meaning it was instrumental in the study of early Paleozoic soft-bodied organisms. Since then other Lagerstätten from the Silurian (like the Eramosa lagerstatte) have been found, but none have the same faunal diversity that the Waukesha Biota has. The exceptional preservation of the fossils of the Waukesha Biota thus provides a window to a significant portion of Silurian life that otherwise may have been undetected and therefore unknown to science. 
Stratigraphy and depositional environment Most of the Waukesha Biota is preserved within a layer of thinly-laminated, fine-grained, shallow marine sediments of the Brandon Bridge Formation consisting of mudstone and dolomite deposited in a sedimentary trap at the end of an erosional scarp over the eroded dolomites of the Schoolcraft and Burnt Bluff Formations. A separate thin bed containing the biota is also present about above the interval. Fossils of unambiguous, fully terrestrial organisms are lacking from the Waukesha Biota. Most of the Waukesha Biota fossils were found at a quarry in Waukesha County, Wisconsin, owned and operated by the Waukesha Lime and Stone Company. Other fossils were collected from a quarry in Franklin, Milwaukee County, owned and operated by Franklin Aggregate Inc. That quarry lies south of the quarry in Waukesha. The Franklin fossils were from blasted material apparently originating from a horizon and setting equivalent to that of the Waukesha site. Its biota is similar to that from the Waukesha site, except that it lacks trilobites. Taphonomy Taphonomy is the study of how organisms decay and become fossilized or preserved in the paleontological record. The taphonomy of the Waukesha Biota is unusual in preserving few of the kinds of animals that typically dominate the Silurian fossil record, including in other strata of the same two quarries. Fossils of corals, echinoderms, brachiopods, bryozoans, gastropods, bivalves, and cephalopods are rare or absent from the Waukesha Biota, although trilobites are diverse and common. This is because of the site's preservation bias, in which soft-bodied and lightly skeletonized organisms preserve more often than hard-shelled organisms, the complete opposite of the taphonomy of most other fossil sites. 
The exceptional preservation of non-biomineralized and lightly sclerotized remains of the Waukesha Biota is generally attributed to a combination of favorable conditions, including the transportation of the organisms to a sediment trap that was hostile to scavengers but favorable to the production of organic films that coated the surfaces of the dead organisms, which inhibited decay, sometimes enhanced by promoting precipitation of a thin phosphatic coating, which is observed on many of the fossils. Biota Alga One genus of dasycladalean alga is known from the lagerstätte. Hemichordata Many of the hemichordates are members of the group Graptolithina. Porifera Poriferans, also known as sea sponges, are rare in this locality, with only one specimen known. Cnidaria The cnidarians of the site are mainly represented by conulariids, but corals are also known. Echinodermata Echinoderm fossils are rare at the site, but crinoids have been found here. Brachiopoda Like many of the hard-shelled organisms known from this site, the brachiopods found here are poorly preserved and rare. Cephalopoda Normally common in Silurian deposits, nautiloid cephalopods are known from only a handful of specimens from the Waukesha biota. "Worms" Multiple soft-bodied fossils of "worms" and other vermiform animals are known from the site. Arthropoda Arthropods dominate the fauna of the Waukesha biota in both number of specimens and diversity. A wide variety including crustaceans, trilobites, chelicerates, and less familiar groups like thylacocephalans, cheloniellids and marrellomorphs are known. Also found are enigmatic arthropods whose taxonomy has puzzled paleontologists since the site's discovery. Chordata Multiple chordate fossils (possibly belonging to conodonts) are known from this site. 
See also Eramosa lagerstätte, a middle Silurian aged lagerstätte in Ontario, Canada Soom Shale lagerstätte, an Upper Ordovician aged lagerstätte in South Africa References Lagerstätten Paleontology in Wisconsin Paleozoic paleobiotas Silurian United States Sheinwoodian Telychian Waukesha County, Wisconsin
Waukesha Biota
[ "Biology" ]
1,315
[ "Paleozoic paleobiotas", "Prehistoric biotas" ]
63,015,947
https://en.wikipedia.org/wiki/Farshad%20Fatemi
Seyed Farshad Fatemi Ardestani (born 15 May 1973) is an Iranian economist and a member of Iranian National Competition Council. He is Associate Professor of Economics and Vice President for Administration and Finance at Sharif University of Technology. Fatemi is known for his works on game theory and industrial organization. References External links Farshad Fatemi Living people Iranian economists Academic staff of Sharif University of Technology Alumni of University College London 1973 births Alumni of the University of Essex Game theorists
Farshad Fatemi
[ "Mathematics" ]
98
[ "Game theorists", "Game theory" ]
63,016,742
https://en.wikipedia.org/wiki/PSAT-2
PSAT-2 is an experimental amateur radio satellite from the U.S. Naval Academy, which was developed in collaboration with the Technical University of Brno in Brno, Czech Republic. AMSAT North America's OSCAR number administrator assigned number 104 to this satellite; in the amateur radio community it is therefore also called Navy-OSCAR 104, or NO-104 for short. Mission PSAT-2 was launched on June 25, 2019, aboard a Falcon Heavy from Kennedy Space Center, Florida, United States, as one of 24 satellites on mission STP-2 (Space Test Program 2). In August 2019, the VHF payload failed and control of the satellite was lost. However, after nearly two years of downtime, the payload unexpectedly reactivated and control was regained. Frequencies The following frequencies for the satellite were coordinated by the International Amateur Radio Union: 145.825 MHz - Uplink and downlink APRS digipeater, 1200 bd (once again functional as of 2021) 435.350 MHz - Downlink PSK31 and SSTV 29.4815 MHz - Uplink PSK31 See also OSCAR References External links PSAT2 - Amateur Radio Communications Transponders. APRS PSAT2 SSTV camera and transponder homepage, pictures and tlm archive. Satellites orbiting Earth Amateur radio satellites Spacecraft launched in 2019
PSAT-2
[ "Astronomy" ]
271
[ "Astronomy stubs", "Spacecraft stubs" ]