Dataset columns: id (int64, 39–79M) · url (string, lengths 31–227) · text (string, lengths 6–334k) · source (string, lengths 1–150) · categories (list, lengths 1–6) · token_count (int64, 3–71.8k) · subcategories (list, lengths 0–30)
73,788,473
https://en.wikipedia.org/wiki/SCHEMBL19952957
SCHEMBL19952957 is an oxadiazole-based antibiotic, originally developed in 2014 as a potential treatment for infections with methicillin-resistant Staphylococcus aureus (MRSA) and other antibiotic-resistant bacteria. It has since been found to be useful against Clostridioides difficile, as it not only kills active bacteria but also inhibits the germination of the dormant spores that can otherwise lead to persistent infections that repeatedly recur once antibiotic treatment stops. While it has so far been studied only in animals, this dual action is a significant advance over existing antibiotics, and drugs from this class may be developed as new medications for treating antibiotic-resistant infections in humans. See also Cadazolid Fidaxomicin Ridinilazole Surotomycin References Oxadiazoles 4-Hydroxyphenyl compounds Cyclopentanols Aromatic ethers
SCHEMBL19952957
[ "Chemistry" ]
207
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
73,788,726
https://en.wikipedia.org/wiki/Martingale%20%28clothing%29
A martingale (also martingale belt) is a strap on a dress or a half-belt on a coat or a jacket, used to adjust the fullness of the cloth. The martingale is typically attached to the piece of clothing by buttons. In a military overcoat, a martingale is a common and practical feature, as a pleated coat can be spread out as a blanket once the strap is unfastened. Etymology The name comes from the martingale strap used in horse tack to restrict the movements of the horse's head; another theory suggests that the martingale coat originated in the 15th–16th centuries, when the design of men's martingale breeches included a flap between the legs buttoned to the belt in the back. The word martingale entered English from French, by way of Occitan. The Occitan word is a feminine form of "from Martigues", where martingale breeches with (in the words of Rabelais) "a drawbridge on the ass that makes excretion easier" supposedly originated. It is also possible that the association between the pants and the inhabitants of Martigues is due to the latter having a reputation for naiveté and extravagance. History In France, martingale breeches were apparently popular, being worn by Francis I of France, the "mignons" of the royal court, and Rabelais' Panurge. The first use of the martingale in a woman's dress dates to 1951 (Christian Dior at the autumn Paris Fashion Week). The strap was placed between the shoulder blades, and since then couturiers have placed martingales almost everywhere, though avoiding the waistline. Martingale coats became fashionable for women post-war in the 1950s and are still being made for men. References Sources Parts of clothing
Martingale (clothing)
[ "Technology" ]
381
[ "Components", "Parts of clothing" ]
73,788,761
https://en.wikipedia.org/wiki/Faber%E2%80%93Evans%20model
The Faber–Evans model for crack deflection is a fracture-mechanics-based approach to predicting the increase in toughness of two-phase ceramic materials due to crack deflection. The effect is named after Katherine Faber and her mentor, Anthony G. Evans, who introduced the model in 1983. The Faber–Evans model is a principal strategy for tempering brittleness and creating effective ductility. Fracture toughness is a critical property of ceramic materials, determining their ability to resist crack propagation and failure. The model considers the effects of different particle morphologies, including spherical, rod-shaped, and disc-shaped particles, and their influence on the driving force at the tip of a tilted and/or twisted crack. The model first suggested that rod-shaped particles with high aspect ratios are the most effective morphology for deflecting propagating cracks and increasing fracture toughness, primarily due to the twist of the crack front between particles. The findings provide a basis for designing high-toughness two-phase ceramic materials, with a focus on optimizing particle shape and volume fraction. Fracture mechanics and crack deflection Fracture mechanics is a fundamental discipline for understanding the mechanical behavior of materials, particularly in the presence of cracks. The critical parameter in fracture mechanics is the stress intensity factor (K), which is related to the strain energy release rate (G) and its critical value, the fracture toughness (Gc). When the stress intensity factor reaches the material's fracture toughness, crack propagation becomes unstable, leading to failure. In two-phase ceramic materials, the presence of a secondary phase can lead to crack deflection, a phenomenon in which the crack path deviates from its original direction due to interactions with the second-phase particles. Crack deflection reduces the driving force at the crack tip, increasing the material's fracture toughness. The effectiveness of crack deflection in enhancing fracture toughness depends on several factors, including particle shape, size, volume fraction, and spatial distribution. The model introduces weighting functions, F(θ), for the three particle morphologies, which describe the distribution of tilt angles (θ) along the crack front. The weighting functions are used to determine the net driving force on the tilted crack for each morphology; for spherical particles, the relative driving force includes a term that prescribes the strain energy release rate only for the portion of the crack front that tilts. To characterize the entire crack front at initial tilt, this term must be weighted by the fraction of the crack length intercepted and superposed on the driving force that derives from the remaining undeflected portion of the crack. The resultant toughening increment, derived directly from the driving forces, is expressed in terms of the fracture toughness of the matrix material without any reinforcing particles, the volume fraction of spheres, the ratio of rod length to rod radius, and the ratio of disc radius to disc thickness. Spatial location and orientation of particles The spatial location and orientation of adjacent particles play a crucial role in determining whether the inter-particle crack front will tilt or twist. If adjacent particles produce tilt angles of opposite sign, the crack front will twist. Conversely, tilt angles of like sign at adjacent particles cause the entire crack front to tilt.
Therefore, to evaluate the toughening increment, all possible particle configurations must be considered. For spherical particles, the average twist angle is determined by the mean center-to-center nearest-neighbor distance between spheres of radius r. The maximum twist angle occurs when the particles are nearly co-planar with the crack, and it depends exclusively on the volume fraction. For rod-shaped particles, the analysis of crack front twist is more complex, owing to the difficulty of describing the rod orientation with respect to the crack front and to adjacent rods. The twist angle is determined by the effective tilt angle and the inter-particle spacing between randomly arranged rod-shaped particles. The twist of the crack front is influenced not only by the volume fraction of rods but also by the ratio of rod length to radius, through the dimensionless effective inter-particle spacing between two adjacent rod-shaped particles. Morphology and volume effects on fracture toughness The analysis reveals that rod-shaped particles with high aspect ratios are the most effective morphology for deflecting propagating cracks, with the potential to increase fracture toughness by up to four times. This toughening arises primarily from the twist of the crack front between particles. Disc-shaped particles and spheres are less effective in increasing fracture toughness. For disc-shaped particles with high aspect ratios, initial crack front tilt can provide significant toughening, although the twist component still dominates. In contrast, neither spheres nor rods derive substantial toughening from the initial tilting process. As the volume fraction of particles increases, an asymptotic toughening effect is observed for all three morphologies at volume fractions above 0.2. For spherical particles, the inter-particle spacing distribution has a significant impact on toughening, with greater enhancement when spheres are nearly in contact and twist angles approach π/2. The Faber–Evans model suggests that rod-shaped particles with high aspect ratios are the most effective morphology for deflecting propagating cracks and increasing fracture toughness, primarily due to the twist of the crack front between particles. Disc-shaped particles and spheres are less effective in enhancing toughness. However, the inter-particle spacing distribution plays a significant role in the toughening by spherical particles, with greater toughening achieved when spheres are nearly in contact. In designing high-toughness two-phase ceramic materials, the focus should be on optimizing particle shape and volume fraction. The model showed that the ideal second phase should be chemically compatible and present in amounts of 10 to 20 volume percent, with high-aspect-ratio particles, particularly rod-shaped ones, providing the maximum toughening effect. The model is often used in the development of advanced ceramic materials whenever the factors that contribute to increased fracture toughness are a consideration. See also Fracture toughness Toughening Ceramic Engineering Fracture References Fracture mechanics Ceramic engineering Materials science
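The model's equations were lost in extraction; what follows is a minimal sketch, using only standard fracture-mechanics identities, of how a deflection-induced reduction in crack-tip driving force translates into the toughening ratios quoted above. The weighting functions F(θ) and the exact morphology-dependent coefficients of Faber and Evans (1983) are not reproduced here and should be taken from the original paper.

```latex
% Irwin relation between stress intensity K and energy release rate G
% (E' = E in plane stress, E/(1-\nu^2) in plane strain):
G = \frac{K^2}{E'}
% Sketch of the deflection-toughening logic (not the paper's exact
% expressions): if deflection reduces the orientation-averaged driving
% force on the deflected front to <G^t> < G, failure requires raising
% the applied load until <G^t> reaches the matrix toughness G_m, so
G_c = G_m\,\frac{G}{\langle G^t \rangle},
\qquad
K_c = K_m\sqrt{\frac{G}{\langle G^t \rangle}}
% The article's "up to four times" figure for high-aspect-ratio rods
% corresponds to G/<G^t> of roughly 4.
```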
Faber–Evans model
[ "Physics", "Materials_science", "Engineering" ]
1,277
[ "Structural engineering", "Applied and interdisciplinary physics", "Fracture mechanics", "Materials science", "nan", "Ceramic engineering", "Materials degradation" ]
73,789,292
https://en.wikipedia.org/wiki/AT%202021lwx
AT 2021lwx (also known as ZTF20abrbeie or "Scary Barbie") is the most energetic non-quasar optical transient astronomical event ever observed, with a peak luminosity of 7 × 10⁴⁵ erg per second (erg s⁻¹) and a total radiated energy of 9.7 × 10⁵² erg to 1.5 × 10⁵³ erg over three years. Although it was lauded as the largest explosion ever seen, GRB 221009A was both more energetic and brighter. It was first identified in imagery obtained on 13 April 2021 by the Zwicky Transient Facility (ZTF) astronomical survey and is believed to be due to the accretion of matter onto a supermassive black hole (SMBH) heavier than one hundred million solar masses. It has a redshift of z = 0.9945, which would place it at a distance of about eight billion light-years from Earth, and is located in the constellation Vulpecula. No host galaxy has been detected. Forced photometry of earlier ZTF imagery showed that AT 2021lwx had already begun brightening by 16 June 2020, as ZTF20abrbeie. It was also detected independently in data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) as ATLAS20bkdj, and in the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) as PS22iin. At the Neil Gehrels Swift Observatory, X-ray observations were made with the X-ray Telescope (XRT) and ultraviolet observations with the Ultraviolet/Optical Telescope (UVOT). Subrayan et al. originally interpreted it as a tidal disruption event between an SMBH (~10⁸ solar masses) and a massive star (~14 solar masses). Wiseman et al. disfavor this interpretation, and instead believe the most likely scenario is "the sudden accretion of a large amount of gas, potentially a giant molecular cloud" (~1,000 solar masses), onto an SMBH (>10⁸ solar masses). The inferred mass of the SMBH, based on the observed brightness and the light-to-mass ratio, is about 10⁸ to 10⁹ solar masses. However, the theoretical limit for an accreting supermassive black hole is about 10⁸ solar masses. Given the best-understood model of accreting SMBHs, this event may involve the most massive SMBH able to accrete matter. See also Ophiuchus Supercluster eruption, a 5 × 10⁶¹ erg event that may have occurred up to 240 million years ago, revealed by a giant radio fossil MS 0735.6+7421, a 10⁶¹ erg eruption that has been occurring for the last 100 million years GRB 080916C, an 8.8 × 10⁵⁴ erg gamma-ray burst seen in 2008 GRB 221009A, a 1.2 × 10⁵⁵ erg gamma-ray burst seen in 2022 References Astronomical objects discovered in 2023 Astronomical events Supermassive black holes Vulpecula
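A quick back-of-envelope check (not from the article) ties the quoted peak luminosity to the Eddington-limit argument in the last paragraph: for a black hole of mass M, the Eddington luminosity is roughly 1.26 × 10³⁸ (M/M☉) erg s⁻¹, so a 10⁸ solar-mass SMBH can radiate about 1.3 × 10⁴⁶ erg s⁻¹, just above the observed peak. The sketch below assumes only these standard constants.

```python
# Back-of-envelope check (not from the article): compare the quoted
# peak luminosity of AT 2021lwx with the Eddington luminosity of a
# 1e8 solar-mass black hole (L_Edd ~ 1.26e38 * M/M_sun erg/s for
# hydrogen plasma).
L_PEAK = 7e45             # erg/s, peak luminosity quoted above
L_SUN = 3.828e33          # erg/s, solar luminosity
L_EDD_PER_MSUN = 1.26e38  # erg/s per solar mass

m_bh = 1e8                # assumed black-hole mass in solar masses
l_edd = L_EDD_PER_MSUN * m_bh

print(f"peak luminosity : {L_PEAK / L_SUN:.2e} L_sun")  # ~1.8e12 L_sun
print(f"Eddington ratio : {L_PEAK / l_edd:.2f}")        # ~0.56 for 1e8 M_sun
```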
AT 2021lwx
[ "Physics", "Astronomy" ]
638
[ "Black holes", "Vulpecula", "Astronomical events", "Unsolved problems in physics", "Supermassive black holes", "Constellations" ]
73,789,359
https://en.wikipedia.org/wiki/Sony%20Xperia%201%20V
The Sony Xperia 1 V is an Android smartphone manufactured by Sony. Launched on May 11, 2023, it succeeded the Xperia 1 IV as the flagship of Sony's Xperia series. The device was announced alongside the mid-range Xperia 10 V, with expected release dates of June 2023 for the Japanese and European markets and July 2023 for the US. The Xperia 1 V is the last Xperia with a 4K display, as its successor, the Xperia 1 VI, opted for an LTPO FHD+ display instead. See also List of longest smartphone telephoto lenses References Notes Android (operating system) devices Flagship smartphones Sony smartphones Mobile phones introduced in 2023 Mobile phones with multiple rear cameras Mobile phones with 4K video recording
Sony Xperia 1 V
[ "Technology" ]
160
[ "Mobile technology stubs", "Flagship smartphones", "Mobile phone stubs" ]
73,789,427
https://en.wikipedia.org/wiki/CII%20Iris%2080
The CII Iris 80 was the most powerful computer made by the French company CII as part of the Plan Calcul. It was released in 1970 and had roughly the same capabilities and performance as its main rivals in Europe: the IBM 360/75 and 360/85. The Iris 80 is the backward-compatible successor to the CII 10070, a licensed SDS Sigma 7, and to the Iris 50, an in-house development from the Sigma 9 architecture. It essentially upgraded the Iris 50 with modern integrated circuits, as well as multiprocessor capabilities. Its operating system, Siris 8, was likewise upgraded from Siris 7 to leverage the new capabilities of the Iris 80. Because of a policy of national preference that the Plan Calcul imposed on the public sector, this computer was installed at four of the approximately twenty French university computing centers in the mid-1970s, as well as at INRIA and other research organizations. About a hundred Iris 80s were delivered, including 27 dual-processor systems. The CS 40, used for telephone switching, was derived from it. The original successors to the Iris 80 were supposed to be the CII/Unidata X4 and X5, set to be released in 1976. However, after the eventual merger of CII with Honeywell-Bull, the Iris 80 was instead succeeded by the DPS-7, which included an Iris 80 and Siris 8 emulation mode to ensure compatibility. Hardware CPU The CPU is a modified version of the CII 10070's (32-bit words, largely identical instruction set), with addressing revised for multiprocessor operation. Paging uses associative memory. Main memory can be expanded to 4 megabytes. Calculation precision is 64 bits, ensuring the convergence of calculations that may diverge on other machines. Peripherals Magnetic disk capacity increased from the MD 25 (25 megabytes) to the MD 200 (200 megabytes) by 1974. Mitra 15 minicomputers are used as peripheral controllers. Software Operating systems The Iris 80's operating system, Siris 8, is a multitasking operating system, a rewrite of Siris 7 intended to take advantage of the new addressing modes. This rewrite was carried out by Jean Ichbiah, and notably made it possible to operate a triple-processor Iris 80 system in Évry. Siris 8 handles a varied workload, including batch processing (local and remote) and time sharing. It was the first system to include routing software for the transport of data to other computers, and a networking and data-sharing system adapted to the Iris 80's customers at universities, research centers, and administrations. The CYCLADES network was notably demonstrated at SICOB 1975, with applications running simultaneously at the INRIA headquarters at Rocquencourt and at various regional sites. Languages Symbol assembler, Metasymbol, a meta-assembler LP70, a language similar to PL360 COBOL Fortran IV extended BASIC Algol 60 PL/I Pascal Simula 67 SNOBOL Lisp: several implementations of Lisp, from the universities of Toulouse, Grenoble, etc., were used by the university community LIS, a systems implementation language, derived from MESA, Modula-2 and Simula, intended for writing portable operating systems Software packages Mistral document retrieval system Socrate database management system Modulef modular library for calculation using the finite element method References External links Description and pictures of the CII Iris 80 from the Fédération des Equipes Bull (FEB) Mainframe computers History of computing in France Computers designed in France 32-bit computers
CII Iris 80
[ "Technology" ]
735
[ "History of computing", "History of computing in France" ]
73,789,728
https://en.wikipedia.org/wiki/Blow%20Up%20%28Australian%20TV%20series%29
Blow Up is an Australian reality television show based on a Dutch format, in which ten artists compete to create the best balloon artworks for a $100,000 prize. It is hosted by Stephen Curry and Becky Lucas and judged by professional balloon artist Chris Adamo. The series premiered on 15 May 2023 on the Seven Network. The programme is produced by Endemol Shine Australia and was first announced in August 2022. It commenced filming in the same month in Melbourne, and was officially confirmed at Seven's 2023 upfronts in October 2022. After the first two episodes drew disappointing ratings, the series was moved to 7flix from its third episode. Contestants Reception Viewership Although heavily advertised for weeks, the series debuted to 288,000 viewers, coming third in its timeslot behind MasterChef Australia and The Summit, respectively, and ranking 19th for the night. The second episode fared no better, with only 224,000 viewers, losing more than 40,000 from its debut, coming fifth in its timeslot, and ranking below the top 20 programs of the night. After the series was moved to 7flix, the third episode drew 30,000 viewers. The final episode drew just 16,000 viewers. Critical The show has been unfavourably compared to the similar television show Lego Masters (also produced by Endemol Shine and airing on the rival Nine Network), which had concluded its fifth season just a week before the premiere of Blow Up. Hamish Blake, the host of Lego Masters, also poked fun at the premise of the show, stating in an episode of Lego Masters that "Balloons are good for a part of one episode of a show. No, I don’t think there’s a series in them." References External links Blow Up at 7plus Seven Network original programming 2023 Australian television series debuts 2023 Australian television series endings 2020s Australian reality television series Australian television series based on Dutch television series Television series by Banijay Balloons
Blow Up (Australian TV series)
[ "Chemistry" ]
405
[ "Balloons", "Fluid dynamics" ]
73,792,270
https://en.wikipedia.org/wiki/Global%20Digital%20Compact
The Global Digital Compact is an initiative proposed in the United Nations Secretary-General António Guterres's Common Agenda. The objective of this compact is to ensure that digital technologies are used responsibly and for the benefit of all, while addressing the digital divide and fostering a safe and inclusive digital environment. The Global Digital Compact is part of the Pact for the Future, which was discussed and adopted at the UN Summit of the Future in September 2024. Background and Process Following consultations with over 1 million voices from around the world, the UN Member States adopted a declaration that emphasized the importance of improving digital cooperation. In response, the Secretary-General's report, "Our Common Agenda," proposed a Summit of the Future, with a technology track leading to the Global Digital Compact. On 17 January 2023, the President of the UN General Assembly appointed Rwanda and Sweden as Co-facilitators to lead the intergovernmental process on the Global Digital Compact. A road map for the process was published on 16 January 2023. As part of the consultative process, the United Nations invited input from individuals, groups, associations, organizations, and entities to help shape the Global Digital Compact. The input provided informed the deliberations on the Global Digital Compact, which took place in 2024 as part of the Summit of the Future. Key Aspects The Global Digital Compact aims to bring together governments, private sector entities, civil society organizations, and other stakeholders to work collaboratively on a set of shared principles and commitments. Some key aspects of the Global Digital Compact include: Connectivity: Ensuring that all people, including schools, have access to the internet and digital tools for connectivity and socio-economic prosperity. Internet Fragmentation: Preventing the division and fragmentation of the internet to maintain a unified global digital space. Data Protection: Providing individuals with options for how their data is used and ensuring their privacy is respected. Human Rights Online: Applying human rights principles in the digital sphere, including freedom of expression, privacy, and protection from discrimination and misleading content. Artificial Intelligence Regulation: Promoting the ethical development and use of artificial intelligence in alignment with shared global values. Digital Commons: Recognizing digital technologies as a global public good and encouraging their development and use for the benefit of all. Relation to Other Initiatives The Global Digital Compact is related to various other international efforts, such as the Sustainable Development Goals (SDGs), the UN Secretary-General's Roadmap on Digital Cooperation, and the Partner2Connect Digital Coalition. External links UNGA President letter: designation of co-facilitators Background Note References United Nations Digital technology
Global Digital Compact
[ "Technology" ]
530
[ "Information and communications technology", "Digital technology" ]
73,792,277
https://en.wikipedia.org/wiki/Diurnal%20mood%20variation
Diurnal mood variation or morning depression is a prominent depression symptom characterized by gradual mood improvement through the day, reaching its peak sometime after twilight. While the main form of diurnal mood variation presents itself as described, a reversed form, with a worsening of mood towards the evening, also exists. While some mood changes are generally experienced by the majority of patients diagnosed with depression, such recurrent mood instability is a consistent predictor of suicidal ideation, and may be associated with increased mortality. Diurnal mood variation is most strongly associated with melancholic depression, which is also referred to as endogenous or somatic depression. According to the diagnostic criteria outlined in the Diagnostic and Statistical Manual of Mental Disorders (DSM) and the International Classification of Diseases (ICD), diurnal mood variation characterized by worsening symptoms in the early morning is recognized as a hallmark symptom of melancholic features in somatic major depressive disorder. Symptoms Patients experiencing diurnal mood variation generally complain about the following symptoms, which gradually improve throughout the day: feelings of sadness irritability trouble getting out of bed extreme lack of energy in the morning fatigue psychomotor slowing difficulty performing daily tasks, such as making the bed or getting dressed delayed cognitive function, often described as fogginess Distinction from regular mood change Diurnal mood variation generally does not correspond with important behavioural or environmental stimuli, unlike regular mood changes and depression in general, which can be experienced in irregular waves. According to one study, among individuals with melancholic features, mood variations tended to occur spontaneously in over half of the cases. In contrast, healthy controls predominantly attributed their mood fluctuations to their own activities or external circumstances. Patients also report a preference for performing the majority of their activities at dusk or in the evening, which is consistent with the evening chronotype. People experiencing diurnal mood variation consistently prefer later bed and wake-up times. Pathomechanism Although diurnal mood variation is a prevalent pattern observed in various mood disorders, there is a gap in the literature regarding a comprehensive analysis of its underlying causes; the mechanisms underlying DMV symptoms are not well understood. Diurnal changes in activity patterns align with the characteristics of individuals with an evening chronotype, who experience their peak energy and efficiency towards the later part of the day. The body's biological clock system plays a role in regulating wake-behavior rhythms, which in turn affects a person's chronotype and influences their mood variations. Numerous studies have demonstrated that circadian rhythms have an impact on an individual's psychological well-being, including their susceptibility to psychopathological conditions. In clinical settings, individuals with a late chronotype, commonly known as evening types, have been observed to have a higher likelihood of experiencing depression. This association suggests that being an evening type may contribute to an increased risk of developing depressive symptoms. Functional neuroimaging plays a crucial role in deepening the understanding of the neural mechanisms involved in major depressive disorder with diurnal mood variation symptoms.
The circadian or clock system consists of multiple cellular clocks present in organs and tissues, and it plays a vital role in regulating brain function. Research has indicated that functional changes in the ventral and dorsal emotion neural systems are associated with diurnal mood variation symptoms. The ventral emotion neural system, which encompasses the amygdala, ventral anterior cingulate, orbitofrontal cortex, ventral striatum, and insula, is particularly involved in facilitating the experience of emotions. Previous studies have provided support for the notion that diurnal mood symptoms are linked to functional alterations in these brain regions. In one study, when compared to individuals without depression, patients diagnosed with depression exhibited decreased metabolic activity in the frontal and parietal cortex throughout the day. Interestingly, in depressed patients, improvements in mood during the evening were accompanied by increased metabolic activity in the frontal regions of the brain. These findings suggest a link between mood fluctuations and altered metabolic activity in specific brain areas in individuals with depression. One study revealed that individuals experiencing sleep deprivation displayed a higher percentage of diurnal mood variation compared to an unaffected control group. References Depression (mood) Psychiatry Chronobiology Mood disorders
Diurnal mood variation
[ "Biology" ]
865
[ "Chronobiology" ]
73,792,354
https://en.wikipedia.org/wiki/Stephen%20M.%20Gardiner
Stephen M. Gardiner (born 1967) is an American philosopher and Professor of Philosophy and Ben Rabinowitz Endowed Professor of Human Dimensions of the Environment at the University of Washington. He is known for his work on environmental philosophy and ancient Greek philosophy. Books Dialogues on Climate Justice, Routledge, 2023 The Ethics of “Geoengineering” the Global Climate: Justice, Legitimacy and Governance, Routledge, 2021 Oxford Handbook of Intergenerational Ethics, Oxford University Press, 2021 Debating Climate Ethics, Oxford University Press, 2016 Oxford Handbook of Environmental Ethics, Oxford University Press, 2016 A Perfect Moral Storm, Oxford University Press, 2011 Climate Ethics: Essential Readings, Oxford University Press, 2010 Virtue Ethics, Old and New, Cornell University Press, 2005 References External links 21st-century American philosophers 1967 births American philosophy academics Cornell University alumni University of Washington faculty Environmental philosophers American ethicists Living people Scholars of ancient Greek philosophy Climate change mitigation researchers
Stephen M. Gardiner
[ "Engineering" ]
189
[ "Geoengineering", "Climate change mitigation researchers" ]
73,793,820
https://en.wikipedia.org/wiki/RNA%20velocity
RNA velocity is a method used to predict the future gene expression state of a cell from measurements of both spliced and unspliced mRNA transcripts. It bridges these measurements to an underlying mechanism, mRNA splicing, whose two modes indicate the current and future state. RNA velocity can be used to infer the direction of gene expression changes in single-cell RNA sequencing (scRNA-seq) data. It provides insights into the future state of individual cells by using the ratio of unspliced to spliced RNA transcripts. This ratio can indicate the transcriptional dynamics and potential fate of a cell, such as whether it is transitioning from one cell type to another or undergoing differentiation. Software usage Several software tools are available for RNA velocity analysis. Each tool has its own strengths and applications, so the choice of tool depends on the specific requirements of the analysis: velocyto Velocyto is a package for the analysis of expression dynamics in single-cell RNA-seq data. In particular, it enables estimation of the RNA velocities of single cells by distinguishing unspliced and spliced mRNAs in standard single-cell RNA sequencing protocols. The velocyto paper was the first to propose the concept of RNA velocity. velocyto predicts RNA velocity by solving the proposed differential equations for each gene. The authors envision future manifold learning algorithms that simultaneously fit a manifold and the kinetics on that manifold, on the basis of RNA velocity. scVelo scVelo is a method that solves the full transcriptional dynamics of splicing kinetics using a likelihood-based dynamical model. This generalizes RNA velocity to systems with transient cell states, which are common in development and in response to perturbations. scVelo has been applied to disentangling subpopulation kinetics on various cell lineages in hippocampal dentate gyrus neurogenesis and pancreatic endocrinogenesis. cellDancer cellDancer is a scalable deep neural network that locally infers velocity for each cell from its neighbors and then relays a series of local velocities to provide single-cell-resolution inference of velocity kinetics. cellDancer relaxes the kinetic-rate assumptions of velocyto and scVelo: in those models the transcription rate is either a constant (velocyto) or binary (scVelo), and the splicing and degradation rates are shared by all genes and cells, which can yield unpredictable performance. cellDancer instead predicts the specific transcription, splicing, and degradation rates of each gene in each cell through deep learning. MultiVelo MultiVelo is a differential equation model of gene expression that extends the RNA velocity framework to incorporate epigenomic data. MultiVelo uses a probabilistic latent variable model to estimate the switch time and rate parameters of chromatin accessibility and gene expression. DeepVelo DeepVelo is a neural network-based ordinary differential equation that can model complex transcriptome dynamics by describing continuous-time gene expression changes within individual cells. DeepVelo has been applied to public datasets from different sequencing platforms to (i) formulate transcriptome dynamics on different time scales, (ii) measure the instability of cell states, and (iii) identify developmental driver genes via perturbation analysis.
UnitVelo UnitVelo is a statistical framework for RNA velocity that models the dynamics of spliced and unspliced RNAs via flexible transcription activities. UnitVelo supports the inference of a unified latent time across the transcriptome. References Molecular biology RNA RNA sequencing
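The core idea above lends itself to a compact illustration. Below is a minimal sketch in Python of the steady-state estimator underlying the simplest velocyto-style model (real tools add offsets, gene filtering, and neighbor pooling); the function name and toy data are illustrative, not taken from any of the packages listed.

```python
import numpy as np

def steady_state_velocity(unspliced, spliced, q=0.95):
    """Minimal sketch of the steady-state RNA velocity estimator.

    For one gene: cells near the extremes of expression are assumed
    to be at steady state, where du/dt = ds/dt = 0 implies
    u = gamma * s. Fit gamma on those cells, then score each cell's
    deviation from the steady-state line:
        velocity ~ ds/dt = u - gamma * s
    Positive values predict up-regulation, negative down-regulation.
    """
    u = np.asarray(unspliced, dtype=float)
    s = np.asarray(spliced, dtype=float)
    # cells in the top/bottom expression quantiles approximate steady state
    extreme = (s >= np.quantile(s, q)) | (s <= np.quantile(s, 1 - q))
    # least-squares slope through the origin: gamma = <u.s> / <s.s>
    gamma = (u[extreme] @ s[extreme]) / (s[extreme] @ s[extreme])
    return u - gamma * s

# toy example: five cells for one gene
v = steady_state_velocity([2.0, 4.0, 6.0, 3.0, 1.0],
                          [1.0, 2.0, 3.0, 4.0, 5.0])
print(v)  # sign indicates predicted future change in spliced abundance
```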
RNA velocity
[ "Chemistry", "Biology" ]
774
[ "Genetics techniques", "RNA sequencing", "Molecular biology techniques", "Molecular biology", "Biochemistry" ]
73,794,766
https://en.wikipedia.org/wiki/Vanderwaltozyma%20polyspora
Vanderwaltozyma polyspora is a species of multi-spored yeast fungus in the family Saccharomycetaceae found in soil, first described by Johannes P. van der Walt, and moved to a new genus by Cletus P. Kurtzman in 2003 (together with Vanderwaltozyma yarrowii). Background Like other Vanderwaltozyma species, it is characterized by the fermentation of glucose and galactose and the assimilation of nitrogen sources such as ethylamine, nitrate, lysine, and cadaverine. V. polyspora has rarely been isolated from natural sources, with only eight strains of the species isolated and reported up to 2020. It has oblong to reniform ascospores that are released quickly from the ascus, and it can produce up to 100 ascospores per ascus owing to supernumerary mitosis in the ascus parent cell. Growth on agar is cream to brownish in color and butyrous to glossy. References External links Fungi described in 1956 Saccharomycetaceae Yeasts Fungus species
Vanderwaltozyma polyspora
[ "Biology" ]
222
[ "Yeasts", "Fungi", "Fungus species" ]
73,794,841
https://en.wikipedia.org/wiki/Vanderwaltozyma
Vanderwaltozyma is a genus of ascomycetous yeasts in the family Saccharomycetaceae. The genus name honours Johannes P. van der Walt (1925–2011), a South African mycologist who first described Vanderwaltozyma polyspora and Vanderwaltozyma yarrowii (originally in the genus Kluyveromyces). The genus was circumscribed by Cletus P. Kurtzman in 2003. Vanderwaltozyma species are characterized by the fermentation of glucose and galactose, the assimilation of nitrogen sources such as ethylamine, nitrate, lysine, and cadaverine, and spheroidal, oblong, or reniform spores. Species According to the Catalogue of Life (as of May 2023), the genus has 7 accepted species: Vanderwaltozyma huisunica C.F. Lee & Chin F. Chang Vanderwaltozyma meishanica C.F. Lee & Chin F. Chang Vanderwaltozyma molinica C.F. Lee & Chin F. Chang Vanderwaltozyma polyspora (Van der Walt) Kurtzman Vanderwaltozyma tropicalis Nakase, Jindam., Kenji Tanaka, Ninomiya, Limtong, H. Kawas. & C.F. Lee Vanderwaltozyma verrucispora C.F. Lee, Chun H. Liu, Ninomiya, H. Kawas. & Nakase Vanderwaltozyma yarrowii (Van der Walt) Kurtzman References Saccharomycetaceae Yeasts Ascomycota genera
Vanderwaltozyma
[ "Biology" ]
360
[ "Yeasts", "Fungi" ]
73,795,388
https://en.wikipedia.org/wiki/Nanophysiology
Nanophysiology is a field concerned with the function of nanodomains, such as the regulation of molecular or ionic flows in cell subcompartments, including glial protrusions, dendritic spines, dendrites, mitochondria, and many more. Background Molecular organization in nanocompartments provides the architecture required to achieve the elementary functions that sustain the higher physiological functions of a cell, including calcium homeostasis, protein turnover, and the plastic changes underlying cell communication. The goal of the field is to determine the function of these nanocompartments based on molecular organization, ionic flow, or voltage distribution. Voltage dynamics How voltage is regulated in nanodomains remains an open question. While the classical Goldman–Hodgkin–Katz and Hodgkin–Huxley models in biophysics provide a foundation for electrophysiology and have been responsible for many advances in neuroscience, this theory is insufficient to describe the voltage dynamics in small nanocompartments, such as synaptic terminals or the cytoplasm around voltage-gated channels, because it is based on spatial and ionic homogeneity. Instead, electrodiffusion theory should be used to describe electrical current flow in these nanostructures and reveal their structure-function relationship. References Biophysics Cell biology
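As a pointer to what "electrodiffusion theory" means concretely, the following is a minimal sketch of the Poisson–Nernst–Planck (PNP) system commonly used for nanodomain electrodiffusion; this is the standard textbook form, not a formulation taken from a specific study cited by the article.

```latex
% Nernst-Planck flux for ion species i with concentration c_i,
% valence z_i, and diffusivity D_i in electric potential phi:
\mathbf{J}_i = -D_i\left(\nabla c_i + \frac{z_i F}{RT}\,c_i \nabla \phi\right)
% Mass conservation:
\frac{\partial c_i}{\partial t} = -\nabla \cdot \mathbf{J}_i
% Poisson equation coupling the potential to the local charge density
% (epsilon is the dielectric permittivity):
\nabla^2 \phi = -\frac{F}{\varepsilon}\sum_i z_i c_i
% Unlike the Goldman-Hodgkin-Katz approach, no spatial homogeneity or
% constant-field assumption is imposed, which is why PNP remains valid
% in nanometer-scale compartments.
```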
Nanophysiology
[ "Physics", "Biology" ]
265
[ "Cell biology", "Applied and interdisciplinary physics", "Biophysics" ]
73,796,057
https://en.wikipedia.org/wiki/OV1-14
Orbiting Vehicle 1–14 (also known as OV1-14) was a satellite launched on 6 April 1968 to measure electromagnetic interference and proton and electron flux at altitudes up to . OV1-14 was also supposed to study the Sun in the Lyman-alpha line. Part of the OV1 series of USAF satellites, which used standardized designs and were sent to orbit on decommissioned Atlas ICBMs to reduce development and launch costs, OV1-14 was launched side-by-side with OV1-13. The launch marked the first use of the Atlas F in the OV program. The satellite failed after four to seven days, returning about 24 hours of usable data. History The Orbiting Vehicle satellite program arose from a US Air Force initiative, begun in the early 1960s, to reduce the expense of space research. Through this initiative, satellites would be standardized to improve reliability and cost-efficiency, and, where possible, they would fly on test vehicles or be piggybacked with other satellites. In 1961, the Air Force Office of Aerospace Research (OAR) created the Aerospace Research Support Program (ARSP) to request satellite research proposals and choose mission experiments. The USAF Space and Missiles Organization created its own analog of the ARSP, called the Space Experiments Support Program (SESP), which sponsored a greater proportion of technological experiments than the ARSP. Five distinct OV series of standardized satellites were developed under the auspices of these agencies. The OV1 program, managed by Lt. Col. Clyde Northcott, Jr., was an evolution of the 2.7 m "Scientific Passenger Pods" (SPP), which, starting on 2 October 1961, rode piggyback on suborbital Atlas missile tests and conducted scientific experiments during their short time in space. General Dynamics received a $2 million contract on 13 September 1963 to build a new version of the SPP (called the Atlas Retained Structure (ARS)) that would carry a self-orbiting satellite. Once the Atlas missile and ARS reached apogee, the satellite inside would be deployed and thrust itself into orbit. In addition to the orbital SPP, General Dynamics would create six of these satellites, each to be long with a diameter of , able to carry a payload into a circular orbit. Dubbed "Satellite for Aerospace Research" (SATAR), the series of satellites was originally to be launched from the Eastern Test Range on Atlas missions testing experimental Advanced Ballistic Re-Entry System (ABRES) nosecones. However, in 1964, the Air Force transferred ABRES launches to the Western Test Range, causing a year's delay for the program. Moreover, because WTR launches would be into polar orbit, as opposed to the low-inclination orbits typical of ETR launches, less mass could be lofted into orbit using the same thrust, and the mass of the SATAR satellites had to be reduced. Prior to the double launch of which OV1-14 was a part, there had been 12 satellites in the OV1 series, the first orbited on 21 January 1965. All were launched on decommissioned Atlas D ICBMs, with the exception of OV1-1, flown on the last ABRES test launch, and OV1-6, launched via the Titan IIIC tasked for the Manned Orbiting Laboratory test flight. Spacecraft design OV1-14, like the rest of the OV1 satellite series, consisted of a cylindrical experiment housing capped with flattened cones on both ends and carrying 5,000 solar cells producing 22 watts of power. Continuing the design trend started with OV1-7, the solar cells were flat rather than curved, as they had been on the first six OV1 satellites.
Two antennae for transmitting telemetry and receiving commands extended from the sides of the spacecraft. Twelve helium-pressurized hydrogen peroxide thrusters provided attitude control, and a folding boom carried one of the radiation experiments. OV1-13 and OV1-14 were the first in the OV1 series to use pulse-code modulation (PCM) digital telemetry, which afforded the return of more, and more precise, data from the satellite. For stabilization purposes, the satellite was magnetically charged so that it would remain oriented parallel to the Earth's magnetic field, flipping over each time it crossed one of the Earth's poles. A nutation damper prevented wobble around the satellite's axis of rotation. In contrast to prior flights using the Atlas D, which mounted multiple OV satellites on a paddle-like extension on the rocket nose, the Atlas F used to launch OV1-13 and OV1-14 enclosed the satellites in a symmetrical aluminum shroud measuring in height and in diameter. Experiments OV1-14 carried nine experiments for the measurement of electromagnetic interference and of proton and electron flux at altitudes up to , as well as a Lyman-alpha line photometer for studying the Sun. Mission OV1-14 was launched from Vandenberg's 576-A-2 launch pad along with OV1-13 on an Atlas F rocket on 6 April 1968 at 09:59:42 UTC, mounted with OV1-13 on a simple truss framework. Once in orbit, the two satellites separated from the rocket carrier and from each other using their attached thrusters. The spacecraft was spin-stabilized, rotating about once every eight seconds. After four (or possibly seven) days, the power supply on OV1-14 failed due to overcharging and rupture of its battery, ending a mission that was supposed to last at least a year. Only 84,000 seconds (about 24 hours) of usable data were collected, and by that point the satellite was rotating once every two seconds. Enough data was collected to determine that the onboard Faraday cup, used to collect charged particles, was not sensitive enough to measure the flux of particles in orbit. As a result, after OV1-15, OV1 satellites carried electrostatic analyzers instead. Legacy and status As of 13 May 2023, OV1-14 is still in orbit, and its position can be tracked online. The OV1 program ultimately comprised 22 missions, the last flying on 19 September 1971. References Spacecraft launched in 1968 Military satellites Satellites
OV1-14
[ "Astronomy" ]
1,267
[ "Satellites", "Outer space" ]
73,796,285
https://en.wikipedia.org/wiki/CII%20Iris%2050
The Iris 50 is one of the computers marketed by the French company CII as part of the Plan Calcul at the end of the 1960s. Designed for the civilian market, it was produced from 1968 to 1975 and was the successor to the CII 10070 (SDS Sigma 7). Its main competitor in Europe was the IBM 360/50, which, like the Iris 50, was a universal 32-bit mainframe suitable for both business and scientific applications. While CII was building the Iris 50, it also had to study military variants for the army, called P0M, P2M, and P2MS. The Iris 35 M version, used in particular to process the information needed to fire the Pluton missile, had a magnetic-core memory made up of elements of 16 kilobytes each and was tolerant of severe environmental conditions. Its main peripherals were a printer, a monitor, and modems. CII concluded that it was impossible to create another CPU compatible with the Iris 50. It then decided to adopt for the Iris 80 the Sigma 9 architecture, inspired by the Sigma 7 and marketed by SETI, one of the three companies that had merged in 1966 to create CII. The operating system for the Iris 50 was Siris 7, designed and developed by CII. Its successor, the Iris 80, was considerably transformed and improved, both in terms of components, which moved from DTL to TTL, and of the operating system (Siris 7/8), on which IRIA researchers worked to increase its speed. A slower version, the Iris 45, was introduced in 1972. References External links Technical specifications and illustrations of the Iris 50 (Fédération des Equipes Bull) Mainframe computers History of computing in France Computers designed in France 32-bit computers
CII Iris 50
[ "Technology" ]
371
[ "History of computing", "History of computing in France" ]
73,796,912
https://en.wikipedia.org/wiki/Ophiocordyceps%20dipterigena
Ophiocordyceps dipterigena is an entomopathogenic fungus species in the genus Ophiocordyceps. The species was moved to this genus in 2007. Description Other entomopathogenic fungi manipulate their hosts by making the fly seek the part of the plant where the stem is, after which the fly hangs itself by its legs; this hanging behavior seems to help the fungus grow and develop. O. dipterigena instead manipulates its host to land on a leaf, without needing to hang itself. In this particular species, once the fly dies, part of the fungus grows out from inside the insect through its head, resembling its antennae. The stroma of O. dipterigena is yellow. Biomedical role Ophiocordyceps dipterigena provides a source of β-glucans. Ecology Ophiocordyceps dipterigena might be considered a potential candidate for the biological control of agromyzid flies, opening new possibilities for the use of entomopathogenic fungi in biological control programs. References External links Ophiocordycipitaceae Fungi of Suriname Fungus species
Ophiocordyceps dipterigena
[ "Biology" ]
237
[ "Fungus stubs", "Fungi", "Fungus species" ]
67,952,883
https://en.wikipedia.org/wiki/History%20of%20nuclear%20fusion
The history of nuclear fusion began early in the 20th century as an inquiry into how stars powered themselves and expanded to incorporate a broad inquiry into the nature of matter and energy, as potential applications expanded to include warfare, energy production and rocket propulsion. Early research In 1920, the British physicist Francis William Aston discovered that the mass of four hydrogen atoms is greater than the mass of one helium atom (He-4), which implied that energy can be released by combining hydrogen atoms to form helium. This provided the first hints of a mechanism by which stars could produce energy. Throughout the 1920s, Arthur Stanley Eddington became a major proponent of the proton–proton chain reaction (PP reaction) as the primary system running the Sun. Quantum tunneling was discovered by Friedrich Hund in 1929, and shortly afterwards Robert Atkinson and Fritz Houtermans used the measured masses of light elements to show that large amounts of energy could be released by fusing small nuclei. Henry Norris Russell observed that the relationships in the Hertzsprung–Russell diagram suggested that a star's heat came from a hot core rather than from the entire star. Eddington used this to calculate that the temperature of the core would have to be about 40 million K. This became a matter of debate, because the value is much higher than astronomical observations suggested, which put it at about one-third to one-half that value. George Gamow introduced the mathematical basis for quantum tunneling in 1928. In 1929, Atkinson and Houtermans provided the first estimates of the stellar fusion rate. They showed that fusion can occur at lower energies than previously believed, backing Eddington's calculations. Nuclear experiments began using a particle accelerator built by John Cockcroft and Ernest Walton at Ernest Rutherford's Cavendish Laboratory at the University of Cambridge. In 1932, Walton produced the first man-made fission by using protons from the accelerator to split lithium into alpha particles. The accelerator was then used to fire deuterons at various targets. Working with Rutherford and others, Mark Oliphant discovered the nuclei of helium-3 (helions) and tritium (tritons), the first case of human-caused fusion. Neutrons from fusion were first detected in 1933. The experiment involved the acceleration of protons towards a target at energies of up to 600,000 electron volts. A theory verified by Hans Bethe in 1939 showed that beta decay and quantum tunneling in the Sun's core might convert one of the protons into a neutron and thereby produce deuterium rather than a diproton. The deuterium would then fuse through other reactions to further increase the energy output. For this work, Bethe won the 1967 Nobel Prize in Physics. In 1938, Peter Thonemann developed a detailed plan for a pinch device, but was told to do other work for his thesis. The first patent related to a fusion reactor was registered in 1946 by the United Kingdom Atomic Energy Authority. The inventors were Sir George Paget Thomson and Moses Blackman. This was the first detailed examination of the Z-pinch concept. Starting in 1947, two UK teams carried out experiments based on the concept. 1950s The first successful man-made fusion device was the boosted fission weapon tested in 1951 in the Greenhouse Item test. The first true fusion weapon was 1952's Ivy Mike, and the first practical example was 1954's Castle Bravo.
In these devices, the energy released by a fission explosion compresses and heats the fuel, starting a fusion reaction. Fusion releases neutrons. These neutrons hit the surrounding fission fuel, causing the atoms to split apart much faster than normal fission processes. This increased the effectiveness of bombs: normal fission weapons blow themselves apart before all their fuel is used; fusion/fission weapons do not waste their fuel. Stellarator In 1949, the expatriate German Ronald Richter proposed the Huemul Project in Argentina, announcing positive results in 1951. These turned out to be fake, but they prompted others' interest. Lyman Spitzer began considering ways to solve the problems involved in confining a hot plasma and, unaware of the Z-pinch efforts, he created the stellarator. Spitzer applied to the US Atomic Energy Commission for funding to build a test device. During this period, James L. Tuck, who had worked with the UK teams on Z-pinch, had been introducing the stellarator concept to his coworkers at LANL. When he heard of Spitzer's pitch, he applied to build a pinch machine of his own, the Perhapsatron. Spitzer's idea won funding and he began work under Project Matterhorn. His work led to the creation of the Princeton Plasma Physics Laboratory (PPPL). Tuck returned to LANL and arranged local funding to build his machine. By this time it was clear that the pinch machines were afflicted by instability, stalling progress. In 1953, Tuck and others suggested solutions that led to a second series of pinch machines, such as the ZETA and Sceptre devices. Spitzer's first machine, 'A', worked, but his next one, 'B', suffered from instabilities and plasma leakage. In 1954, AEC chair Lewis Strauss foresaw electricity as "too cheap to meter". Strauss was likely referring to fusion power, part of the secret Project Sherwood, but his statement was interpreted as referring to fission. The AEC had issued more realistic testimony regarding fission to Congress months before, projecting that "costs can be brought down... [to]... about the same as the cost of electricity from conventional sources..." Edward Teller In 1951, Edward Teller and Stanislaw Ulam at Los Alamos National Laboratory (LANL) developed the Teller–Ulam design for a thermonuclear weapon, allowing for the development of multi-megaton-yield fusion bombs. Fusion work in the UK was classified after the Klaus Fuchs affair. In the mid-1950s, the theoretical tools used to calculate the performance of fusion machines were not predicting their actual behavior. Machines invariably leaked plasma at rates far higher than predicted. In 1954, Edward Teller gathered fusion researchers at the Princeton Gun Club. He pointed out the problems and suggested that any system that confined plasma within concave fields was doomed due to what became known as interchange instability. Attendees remember him saying, in effect, that the fields were like rubber bands, and they would attempt to snap back to a straight configuration whenever the power was increased, ejecting the plasma. He suggested that the only way to predictably confine plasma would be to use convex fields: a "cusp" configuration. When the meeting concluded, most researchers turned out papers explaining why Teller's concerns did not apply to their devices. The pinch machines did not use magnetic fields in this way, while the mirror and stellarator camps proposed various solutions.
However, this was soon followed by a paper by Martin David Kruskal and Martin Schwarzschild discussing pinch machines, which demonstrated that those devices' instabilities were inherent. ZETA The largest "classic" pinch device was the ZETA, which started operation in the UK in 1957. Its name is a take-off on the small experimental fission reactors that often had "zero energy" in their names, such as ZEEP. In early 1958, John Cockcroft announced that fusion had been achieved in the ZETA, an announcement that made headlines around the world. He dismissed US physicists' concerns. US experiments soon produced similar neutrons, although temperature measurements suggested these could not be from fusion. The ZETA neutrons were later demonstrated to be from different versions of the instability processes that had plagued earlier machines. Cockcroft was forced to retract his fusion claims, tainting the entire field for years. ZETA ended in 1968. Scylla The first experiment to achieve controlled thermonuclear fusion was accomplished using Scylla I at LANL in 1958. Scylla I was a θ-pinch machine, with a cylinder full of deuterium. Electric current shot down the sides of the cylinder. The current made magnetic fields that pinched the plasma, raising temperatures to 15 million degrees Celsius for long enough that atoms fused and produced neutrons. The Sherwood program sponsored a series of Scylla machines at Los Alamos. The program began with five researchers and $100,000 in US funding in January 1952. By 1965, a total of $21 million had been spent. The θ-pinch approach was abandoned after calculations showed it could not scale up to produce a reactor. Tokamak In 1950–1951 in the Soviet Union, Igor Tamm and Andrei Sakharov first discussed a tokamak-like approach. Experimental research on those designs began in 1956 at the Moscow Kurchatov Institute by a group of Soviet scientists led by Lev Artsimovich. The tokamak essentially combined a low-power pinch device with a low-power stellarator. The notion was to combine the fields in such a way that the particles orbited within the reactor a particular number of times, today known as the "safety factor". The combination of these fields dramatically improved confinement times and densities, resulting in huge improvements over existing devices. Other In 1951, the United States completed the Greenhouse Item test of the first boosted fission weapon. A deuterium–tritium gas was used to enhance the fission yield. This became the first instance of artificial thermonuclear fusion, and the first weaponization of fusion. In 1952, Ivy Mike, part of Operation Ivy, became the first detonation of a hydrogen bomb, yielding 10.4 megatons of TNT using liquid deuterium. Cousins and Ware built a toroidal pinch device in England and demonstrated that the plasma in pinch devices is inherently unstable. In 1953, the Soviet Union carried out its RDS-6S test (codenamed "Joe 4" in the US), which demonstrated a fission/fusion/fission ("Layercake") design that yielded 600 kilotons. Igor Kurchatov spoke at Harwell on pinch devices, revealing that the USSR was working on fusion. Seeking to generate electricity, Japan, France and Sweden all started fusion research programs. In 1955, John D. Lawson derived what is now known as the Lawson criterion, a condition for a fusion reactor to produce more energy than it loses to the environment through processes such as bremsstrahlung radiation.
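For reference, the modern textbook statement of the criterion just mentioned is sketched below; the specific numbers are the commonly quoted values for deuterium–tritium fuel, not figures from Lawson's original 1955 analysis.

```latex
% Lawson criterion: the product of plasma density n and energy
% confinement time tau_E must exceed a temperature-dependent
% threshold, where <sigma v> is the fusion reactivity and E_ch the
% energy carried by charged fusion products (the 3.5 MeV alpha
% particle for D-T):
n\,\tau_E \;\ge\; \frac{12\,k_B T}{E_{ch}\,\langle\sigma v\rangle}
% For D-T fuel this is minimized near T ~ 26 keV, giving the commonly
% quoted threshold n tau_E >~ 1.5e20 s/m^3, often restated as a
% triple product n T tau_E >~ 3e21 keV s/m^3.
```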
In 1956, the Soviet Union began publishing articles on plasma physics, leading the US and UK to follow over the next several years. The Sceptre III z-pinch plasma column remained stable for 300 to 400 microseconds, a dramatic improvement on previous efforts. The team calculated that the plasma had an electrical resistivity around 100 times that of copper, and was able to carry 200 kA of current for 500 microseconds. 1960s In 1960, John Nuckolls published the concept of inertial confinement fusion (ICF). The laser, introduced the same year, turned out to be a suitable "driver". In 1961, the Soviet Union tested its 50-megaton Tsar Bomba, the most powerful thermonuclear weapon ever. Spitzer published a key plasma physics text at Princeton in 1963. He took the ideal gas laws and adapted them to an ionized plasma, developing many of the fundamental equations used to model a plasma. Laser fusion was suggested in 1962 by scientists at LLNL. Initially, lasers had little power. Laser fusion (inertial confinement fusion) research began as early as 1965. At the 1964 World's Fair, the public was given its first fusion demonstration. The device was a theta-pinch from General Electric, similar to the Scylla machine developed earlier at Los Alamos. By the mid-1960s, progress had stalled across the world. All of the major designs were losing plasma at unsustainable rates. The 12-beam "4 pi laser" attempt at inertial confinement fusion developed at LLNL targeted a gas-filled target chamber about 20 centimeters in diameter. The magnetic mirror was first published in 1967 by Richard F. Post and many others at LLNL. The mirror consisted of two large magnets arranged so they had strong fields within them, and a weaker, but connected, field between them. Plasma introduced in the area between the two magnets would "bounce back" from the stronger fields in the middle. A. D. Sakharov's group constructed the first tokamaks. The most successful were the T-3 and its larger version, the T-4. The T-4 was tested in 1968 in Novosibirsk, producing the first quasistationary fusion reaction. When this was announced, the international community was skeptical. A British team was invited to see the T-3, and confirmed the Soviet claims. A burst of activity followed as many planned devices were abandoned and tokamaks were introduced in their place; the Model C stellarator, then under construction after many redesigns, was quickly converted to the Symmetrical Tokamak. In his work with vacuum tubes, Philo Farnsworth observed that electric charge accumulated in the tube, an effect that became known as the multipactor effect. In 1962, Farnsworth patented a design using a positive inner cage to concentrate plasma and fuse protons. During this time, Robert L. Hirsch joined Farnsworth Television labs and began work on what became the Farnsworth–Hirsch fusor. Hirsch patented the design in 1966 and published it in 1967. Plasma temperatures of approximately 40 million degrees Celsius and 10⁹ deuteron-deuteron fusion reactions per discharge were achieved at LANL with Scylla IV. In 1968, the Soviets announced results from the T-3 tokamak, claiming temperatures an order of magnitude higher than those of any other device. A UK team, nicknamed "The Culham Five", confirmed the results. The results led many other teams to follow, including the Princeton group, which converted its stellarator to a tokamak. 1970s Princeton's conversion of the Model C stellarator to a tokamak produced results matching the Soviets'.
With an apparent solution to the magnetic bottle problem in hand, plans began for a larger machine to test scaling and methods to heat the plasma. In 1972, John Nuckolls outlined the idea of fusion ignition, a fusion chain reaction. Hot helium made during fusion reheats the fuel and starts more reactions. Nuckolls's paper started a major development effort. LLNL built laser systems including Argus, Cyclops, Janus, the neodymium-doped glass (Nd:glass) laser Long Path, the Shiva laser, and the 10-beam Nova in 1984. Nova would ultimately produce 120 kilojoules of infrared light during a nanosecond pulse. The UK built the Central Laser Facility in 1976. The "advanced tokamak" concept emerged, which included non-circular plasma, internal diverters and limiters, superconducting magnets, and operation in the so-called "H-mode" island of increased stability. Two other designs became prominent: the compact tokamak sited the magnets on the inside of the vacuum chamber, and the spherical tokamak used as small a cross section as possible. In 1974 J.B. Taylor revisited ZETA and noticed that after an experimental run ended, the plasma entered a short period of stability. This led to the reversed field pinch concept. On May 1, 1974, the KMS Fusion company (founded by Kip Siegel) achieved the world's first laser-induced fusion in a deuterium-tritium pellet. The Princeton Large Torus (PLT), the follow-on to the Symmetrical Tokamak, surpassed the best Soviet machines and set temperature records that were above what was needed for a commercial reactor. Soon after, it received funding with the target of breakeven. In the mid-1970s, Project PACER, carried out at LANL, explored the possibility of exploding small hydrogen bombs (fusion bombs) inside an underground cavity. As an energy source, the system was the only one that could work using the technology of the time. However, it required a large, continuous supply of nuclear bombs, with questionable economics. In 1976, the two-beam Argus laser became operational at LLNL. In 1977, the 20-beam Shiva laser there was completed, capable of delivering 10.2 kilojoules of infrared energy on target. At a price of $25 million and a size approaching that of a football field, Shiva was the first megalaser. At a 1977 workshop at the Claremont Hotel in Berkeley, Dr. C. Martin Stickley, then Director of the Energy Research and Development Administration's Office of Inertial Fusion, claimed that "no showstoppers" lay on the road to fusion energy. The DOE selected a Princeton design, the Tokamak Fusion Test Reactor (TFTR), and the challenge of running on deuterium-tritium fuel. 1980s In the German/US HIBALL study, Garching used the high repetition rate of the RF driver to serve four reactor chambers using liquid lithium inside the chamber cavity. In 1982 high-confinement mode (H-mode) was discovered in tokamaks. Magnetic mirror The US funded a magnetic mirror program in the late 1970s and early 1980s. This program resulted in a series of magnetic mirror devices including: 2X, Baseball I, Baseball II, the Tandem Mirror Experiment and upgrade, the Mirror Fusion Test Facility, and MFTF-B. These machines were built and tested at LLNL from the late 1960s to the mid-1980s. The final machine, MFTF, cost 372 million dollars and was, at that time, the most expensive project in LLNL history.
It opened on February 21, 1986, and immediately closed, allegedly to balance the federal budget. Laser Laser fusion progress: in 1983, the NOVETTE laser was completed. The following December, the ten-beam NOVA laser was finished. Five years later, NOVA produced 120 kilojoules of infrared light during a nanosecond pulse. Research focused on either fast delivery or beam smoothness, both aimed at increasing energy uniformity. One early problem was that light in the infrared wavelength lost energy before hitting the fuel. Breakthroughs were made at the LLE at the University of Rochester, where scientists used frequency-tripling crystals to transform infrared laser beams into ultraviolet beams. Chirping In 1985, Donna Strickland and Gérard Mourou invented a method to amplify laser pulses by "chirping". This spread a single wavelength into a full spectrum; the system amplified the beam at each wavelength and then recompressed the beam back into one color. Chirped pulse amplification became instrumental for NIF and the Omega EP system. LANL constructed a series of laser facilities, including Gemini (a two-beam system), Helios (eight beams), Antares (24 beams) and Aurora (96 beams). The program ended in the early nineties with a cost on the order of one billion dollars. In 1987, Akira Hasegawa noticed that in a dipolar magnetic field, fluctuations tended to compress the plasma without energy loss. This effect was noticed in data taken by Voyager 2 when it encountered Uranus. This observation became the basis for a fusion approach known as the levitated dipole. In tokamaks, the Tore Supra was under construction from 1983 to 1988 in Cadarache, France. Its superconducting magnets permitted it to generate a strong permanent toroidal magnetic field. First plasma came in 1988. In 1983, JET achieved first plasma. In 1985, the Japanese tokamak JT-60 produced its first plasmas. In 1988, the T-15, a Soviet tokamak, was completed, the first to use (helium-cooled) superconducting magnets. Spherical tokamak In 1984, Martin Peng proposed an alternate arrangement of magnet coils that would greatly reduce the aspect ratio while avoiding the erosion issues of the compact tokamak: a spherical tokamak. Instead of wiring each magnet coil separately, he proposed using a single large conductor in the center, and wiring the magnets as half-rings off of this conductor. What was once a series of individual rings passing through the hole in the center of the reactor was reduced to a single post, allowing for aspect ratios as low as 1.2. The ST concept appeared to represent an enormous advance in tokamak design. The proposal came during a period when US fusion research budgets were dramatically smaller. ORNL was provided with funds to develop a suitable central column built out of a high-strength copper alloy called "Glidcop". However, it was unable to secure funding to build a demonstration machine. Having failed at ORNL, Peng began a worldwide effort to interest other teams in the concept and get a test machine built. One approach would be to convert a spheromak. Peng's advocacy caught the interest of Derek Robinson of the United Kingdom Atomic Energy Authority. Robinson gathered a team and secured on the order of 100,000 pounds to build an experimental machine, the Small Tight Aspect Ratio Tokamak, or START.
Parts of the machine were recycled from earlier projects, while others were loaned from other labs, including a 40 keV neutral beam injector from ORNL. Construction began in 1990 and operation started in January 1991. It achieved a record beta (plasma pressure compared to magnetic field pressure) of 40% using a neutral beam injector. ITER The International Thermonuclear Experimental Reactor (ITER) coalition formed, involving EURATOM, Japan, the Soviet Union and the United States, and kicked off the conceptual design process. 1990s In 1991 JET's Preliminary Tritium Experiment achieved the world's first controlled release of fusion power. In 1992, Physics Today published Robert McCrory's outline of the current state of ICF, advocating for a national ignition facility. This was followed by a review article from John Lindl in 1995, making the same point. During this time various ICF subsystems were developed, including target manufacturing, cryogenic handling systems, new laser designs (notably the NIKE laser at NRL) and improved diagnostics including time-of-flight analyzers and Thomson scattering. This work was done at the NOVA laser system, General Atomics, Laser Mégajoule and the GEKKO XII system in Japan. Through this work and lobbying by groups like Fusion Power Associates and John Sethian at NRL, Congress authorized funding for the NIF project in the late nineties. In 1992 the United States and the former republics of the Soviet Union stopped testing nuclear weapons. In 1993 TFTR at PPPL experimented with 50% deuterium, 50% tritium, eventually reaching 10 megawatts. In the early nineties, theory and experimental work regarding fusors and polywells was published. In response, Todd Rider at MIT developed general models of these devices, arguing that all plasma systems at thermodynamic equilibrium were fundamentally limited. In 1995, William Nevins published a criticism arguing that the particles inside fusors and polywells would acquire angular momentum, causing the dense core to degrade. In 1995, the University of Wisconsin–Madison built a large fusor, known as HOMER. Dr. George H. Miley at Illinois built a small fusor that produced neutrons using deuterium and discovered the "star mode" of fusor operation. At this time in Europe, an IEC device was developed as a commercial neutron source by Daimler-Chrysler and NSD Fusion. The next year, Tore Supra reached a record plasma duration of two minutes with a current of almost 1 MA driven non-inductively by 2.3 MW of lower hybrid frequency waves (i.e. 280 MJ of injected and extracted energy), enabled by actively cooled plasma-facing components. The upgraded Z-machine opened to the public in August 1998. The key attributes were its 18 million ampere current and a discharge time of less than 100 nanoseconds. This generated a magnetic pulse inside a large oil tank, which struck a liner (an array of tungsten wires). Firing the Z-machine became a way to test high-energy, high-temperature (2 billion degrees) conditions. In 1997, JET reached 16.1 MW (65% of heat to plasma), sustaining over 10 MW for over 0.5 sec. As of 2020 this remained the record output level. Four megawatts of alpha particle self-heating was achieved. ITER was officially announced as part of a seven-party consortium (six countries and the EU). ITER was designed to produce ten times more fusion power than the input power. ITER was sited in Cadarache. The US withdrew from the project in 1999.
JT-60 produced a reversed shear plasma with the equivalent fusion amplification factor of 1.25; as of 2021 this remained the world record. In the late nineties, a team at Columbia University and MIT developed the levitated dipole, a fusion device that consisted of a superconducting electromagnet floating in a saucer-shaped vacuum chamber. Plasma swirled around this donut and fused along the center axis. In 1999 MAST replaced START. 2000s "Fast ignition" appeared in the late nineties, as part of a push by LLE to build the Omega EP system, which finished in 2008. Fast ignition showed dramatic power savings and moved ICF into the race for energy production. The HiPER experimental facility became dedicated to fast ignition. In 2001 the United States, China and the Republic of Korea joined ITER while Canada withdrew. In April 2005, a UCLA team announced a way of producing fusion using a machine that "fits on a lab bench", using lithium tantalate to generate enough voltage to fuse deuterium. The process did not generate net power. The next year, China's EAST test reactor was completed. This was the first tokamak to use superconducting magnets to generate both toroidal and poloidal fields. In the early 2000s, LANL researchers claimed that an oscillating plasma could reach local thermodynamic equilibrium. This prompted the POPS and Penning trap designs. In 2005 NIF fired its first bundle of eight beams, achieving the most powerful laser pulse to date: 152.8 kJ (infrared). MIT researchers became interested in fusors for space propulsion, using fusors with multiple inner cages. Greg Piefer founded Phoenix Nuclear Labs and developed the fusor into a neutron source for medical isotope production. Robert Bussard began speaking openly about the polywell in 2006. In March 2009, NIF became operational. In the early 2000s, privately backed fusion companies launched to develop commercial fusion power. Tri Alpha Energy, founded in 1998, began by exploring a field-reversed configuration approach. In 2002, Canadian company General Fusion began proof-of-concept experiments based on a hybrid magneto-inertial approach called Magnetized Target Fusion. Investors included Jeff Bezos (General Fusion) and Paul Allen (Tri Alpha Energy). Toward the end of the decade, Tokamak Energy started exploring spherical tokamak devices using magnetic reconnection. 2010s Private and public research accelerated in the 2010s. Private projects In 2017, General Fusion developed its plasma injector technology and Tri Alpha Energy constructed and operated its C-2U device. In August 2014, Phoenix Nuclear Labs announced the sale of a high-yield neutron generator that could sustain 5×10¹¹ deuterium fusion reactions per second over a 24-hour period. In October 2014, Lockheed Martin's Skunk Works announced the development of a high-beta fusion reactor, the Compact Fusion Reactor. Although the original concept was to build a 20-ton, container-sized unit, the team conceded in 2018 that the minimum scale would be 2,000 tons. In January 2015, the polywell was presented at Microsoft Research. TAE Technologies announced that its Norman reactor had achieved plasma. In 2017, Helion Energy's fifth-generation plasma machine went into operation, seeking to achieve a 20 T magnetic field and fusion temperatures. ST40 generated "first plasma". In 2018, Eni announced a $50 million investment in Commonwealth Fusion Systems, to attempt to commercialize ARC technology using a test reactor (SPARC) in collaboration with MIT.
The reactor planned to employ yttrium barium copper oxide (YBCO) high-temperature superconducting magnet technology. In 2021, Commonwealth Fusion Systems successfully tested a 20 T magnet, making it the strongest high-temperature superconducting magnet in the world. Following the 20 T magnet test, CFS raised $1.8 billion from private investors. General Fusion began developing a 70% scale demo system. In 2018, TAE Technologies' reactor reached nearly 20 M°C. Government and academic projects In 2010, NIF researchers conducted a series of "tuning" shots to determine the optimal target design and laser parameters for high-energy ignition experiments with fusion fuel. Net fuel energy gain was achieved in September 2013. In April 2014, LLNL ended the Laser Inertial Fusion Energy (LIFE) program and directed its efforts towards NIF. A 2012 paper demonstrated that a dense plasma focus had achieved temperatures of 1.8 billion degrees Celsius, sufficient for boron fusion, and that fusion reactions were occurring primarily within the contained plasmoid, necessary for net power. In August 2014, MIT announced a tokamak it named the ARC fusion reactor, using rare-earth barium-copper oxide (REBCO) superconducting tapes to construct high-magnetic-field coils that it claimed produced comparable magnetic field strength in a smaller configuration than other designs. In October 2015, researchers at the Max Planck Institute of Plasma Physics completed building the largest stellarator to date, the Wendelstein 7-X. In December they produced the first helium plasma, and in February 2016 produced hydrogen plasma. In 2015, with plasma discharges lasting up to 30 minutes, Wendelstein 7-X attempted to demonstrate the essential stellarator attribute: continuous operation of a high-temperature plasma. In 2014 EAST achieved a record confinement time of 30 seconds for plasma in the high-confinement mode (H-mode), thanks to improved heat dispersal. This was an order of magnitude improvement compared with other reactors. In 2017 the reactor achieved a stable 101.2-second steady-state high-confinement plasma, setting a world record in long-pulse H-mode operation. In 2018 MIT scientists formulated a theoretical means to remove the excess heat from compact nuclear fusion reactors via larger and longer divertors. In 2019 the United Kingdom announced a planned £200-million (US$248-million) investment to produce a design for a fusion facility named the Spherical Tokamak for Energy Production (STEP) by the early 2040s. 2020s In December 2020, the Chinese experimental nuclear fusion reactor HL-2M achieved its first plasma discharge. In May 2021, the Experimental Advanced Superconducting Tokamak (EAST) announced a new world record for superheated plasma, sustaining a temperature of 120 M°C for 101 seconds and a peak of 160 M°C for 20 seconds. In December 2021 EAST set a new world record for high-temperature (70 M°C) plasma of 1,056 seconds. In 2020, Chevron Corporation announced an investment in start-up Zap Energy, co-founded by British entrepreneur and investor Benj Conway together with physicists Brian Nelson and Uri Shumlak from the University of Washington. In 2021 the company raised $27.5 million in Series B funding led by Addition. In 2021, the US DOE launched the INFUSE program, a public-private knowledge-sharing initiative involving a PPPL, MIT Plasma Science and Fusion Center and Commonwealth Fusion Systems partnership, together with partnerships with TAE Technologies, Princeton Fusion Systems, and Tokamak Energy.
In 2021, DOE's Fusion Energy Sciences Advisory Committee approved a strategic plan to guide fusion energy and plasma physics research that included a working power plant by 2040, similar to Canadian, Chinese, and U.K. efforts. In January 2021, SuperOx announced the commercialization of a new superconducting wire, with more than 700 A/mm² current capability. TAE Technologies announced that its Norman device had sustained a temperature of about 60 million degrees C for 30 milliseconds, 8 and 10 times higher, respectively, than the company's previous devices. The duration was claimed to be limited by the power supply rather than the device. In August 2021, the National Ignition Facility recorded a record-breaking 1.3 megajoules of energy created from fusion, the first time the Lawson criterion had been surpassed in a laboratory. In February 2022, JET sustained 11 MW and a Q value of 0.33 for over 5 seconds, outputting 59.7 megajoules, using a mix of deuterium and tritium for fuel. In March 2022 it was announced that Tokamak Energy had achieved a record plasma temperature of 100 million kelvins inside a commercial compact tokamak. In October 2022, the Korea Superconducting Tokamak Advanced Research (KSTAR) device reached a record plasma duration of 45 seconds, sustaining a high-temperature fusion plasma above 100 million degrees Celsius using integrated real-time resonant magnetic perturbation (RMP) control for an ELM-less H-mode (the fast-ion-regulated enhancement, or FIRE, mode), a machine-learning algorithm, and 3D field optimization via edge-localized RMPs. In December 2022, the NIF achieved the first scientific-breakeven controlled fusion experiment, with an energy gain of 1.5. In February 2024, the KSTAR tokamak set a new record (shot #34705) for the longest duration (102 seconds) of a magnetically confined plasma. The plasma was operated in the H-mode, with much better control of the error field than was previously possible. KSTAR also set a record (shot #34445) for the longest steady-state duration at a temperature of 100 million degrees Celsius (48 seconds, ELM-less FIRE mode). See also Timeline of nuclear fusion Fusion power § History Timeline of nuclear weapons development Timeline of nuclear power References Citations Bibliography External links Nuclear fusion History of physics
History of nuclear fusion
[ "Physics", "Chemistry" ]
7,098
[ "Nuclear fusion", "Nuclear physics" ]
67,953,981
https://en.wikipedia.org/wiki/Windows%2011
Windows 11 is the latest major release of Microsoft's Windows NT operating system, released on October 5, 2021. It succeeded Windows 10 (2015), and is available as a free upgrade for any Windows 10 device that meets the new Windows 11 system requirements. Windows 11 features major changes to the Windows shell influenced by the canceled Windows 10X, including a redesigned Start menu, the replacement of its "live tiles" with a separate "Widgets" panel on the taskbar, the ability to create tiled sets of windows that can be minimized and restored from the taskbar as a group, and new gaming technologies inherited from the Xbox Series X and Series S, such as Auto HDR and DirectStorage on compatible hardware. Internet Explorer (IE) has been replaced by the Chromium-based Microsoft Edge as the default web browser, as in its predecessor, Windows 10, and Microsoft Teams is integrated into the Windows shell. Microsoft also announced plans to allow more flexibility in software that can be distributed via the Microsoft Store and to support Android apps on Windows 11 (including a partnership with Amazon to make its app store available for the purpose). Citing security considerations, Microsoft increased the system requirements for Windows 11 over those of Windows 10; Microsoft only officially supports the operating system on devices using an eighth-generation Intel Core CPU or newer (with some minor exceptions), a second-generation AMD Ryzen CPU or newer, or a Qualcomm Snapdragon 850 ARM system-on-chip or newer, with UEFI and Trusted Platform Module (TPM) 2.0 supported and enabled. There are some exceptions to these requirements, however. While the OS can be installed on devices with unsupported configurations, Microsoft does not guarantee the availability of updates. Furthermore, Windows 11 completely removes support for 32-bit CPUs, including both 32-bit x86 and 32-bit ARM processors, ensuring compatibility only with 64-bit x86-64 and ARM64 processors. Windows 11 received a mixed reception at launch. Pre-release coverage of the operating system focused on its stricter hardware requirements, with discussions over whether they were legitimately intended to improve the security of Windows or were a ploy to upsell customers to newer devices, and over the e-waste associated with the changes. Upon release, it was praised for its improved visual design, window management, and stronger focus on security, but was criticized for various modifications to aspects of its user interface that were seen as worse than its predecessor's; some were seen as an attempt to dissuade users from switching to competing applications. Additionally, some users have pointed out disadvantages such as the removal of features like the ability to move the taskbar, and increased system requirements that may exclude older devices. Windows 11 accounts for about 35% of Windows installations worldwide, making it the second most popular Windows version in use; its predecessor Windows 10 remains the most used version in virtually all countries (Guyana being an exception, where Windows 11 leads), with over twice Windows 11's global market share. Windows 11 has an estimated 23% share of all PCs (the rest being other Windows editions and other operating systems such as macOS and Linux), and an estimated 8.6% of all devices (including mobile, tablet and console) are running Windows 11.
To comply with the Digital Markets Act, Microsoft is allowing users in the European Economic Area to remove the Microsoft Edge browser, the Microsoft Bing search engine, and interest-based advertising. Following the discontinuation of Windows Phone with Windows 10 Mobile in 2020, Windows 11 is the first major version of Windows NT without a companion mobile version. Development At the 2015 Ignite conference, Microsoft employee Jerry Nixon stated that Windows 10 would be the "last version of Windows". The operating system was considered to be a service, with new builds and updates to be released over time. PC World argued, however, that the widely reported comment was taken out of context, noting that the official event transcript marks it only as a segue rather than a core part of the talk. It argued that Nixon was referring to the fact that he could talk freely at the event because Windows 10 was the last version then in development. In October 2019, Microsoft announced "Windows 10X", a future edition of Windows 10 designed exclusively for dual-touchscreen devices such as the then-upcoming Surface Neo. It featured a modified user interface designed around context-sensitive "postures" for different screen configurations and usage scenarios, and changes such as a centered taskbar and an updated Start menu without Windows 10's "live tiles". Legacy Windows applications would also be required to run in "containers" to ensure performance and power optimization. Microsoft stated that it planned to release Windows 10X devices by the end of 2020. In May 2020, during the COVID-19 pandemic, Panos Panay, Microsoft's chief product officer for Microsoft Windows and Microsoft Office, stated that "as we continue to put customers' needs at the forefront, we need to focus on meeting customers where they are now", and announced that Windows 10X would only launch on single-screen devices at first, and that Microsoft would "continue to look for the right moment, in conjunction with our OEM partners, to bring dual-screen devices to market". In October 2020, reports emerged that Microsoft was working on a user interface refresh for Windows 10 codenamed "Sun Valley", scheduled to be included in a late-2021 feature update codenamed "Cobalt". Internal documentation stated that the aim for "Sun Valley" was to "reinvigorat[e]" the Windows user interface and make it more "fluid", with a more consistent application of WinUI, while reports suggested Microsoft planned to adapt UI elements seen in Windows 10X. In January 2021, it was reported that a job listing referring to a "sweeping visual rejuvenation of Windows" had been posted by Microsoft. By December 2020, Microsoft had begun to implement and announce some of these visual changes and other new features on Windows 10 Insider Preview builds, such as new system icons (which also included the replacement of shell resources dating back as far as Windows 95), improvements to Task View to allow changing the wallpaper on each virtual desktop, x86-64 emulation on ARM, and the addition of the Auto HDR feature from the Xbox Series X. On May 18, 2021, Head of Windows Servicing and Delivery John Cable stated that Windows 10X had been canceled and that Microsoft would be "accelerating the integration of key foundational 10X technology into other parts of Windows and products at the company". Announcement At the Microsoft Build 2021 developer conference, CEO and chairman Satya Nadella teased the existence of the next generation of Windows during his keynote speech.
According to Nadella, he had been self-hosting it for several months. He also teased that an official announcement would come very soon. Just a week after Nadella's keynote, Microsoft started sending invitations to a dedicated Windows media event at 11:00 a.m. ET on June 24, 2021. Microsoft also posted an 11-minute video of Windows start-up sounds to YouTube on June 10, 2021, with many people speculating that both the time of the Microsoft event and the duration of the start-up sound video were references to the name of the operating system, Windows 11. On June 24, 2021, Windows 11 was officially announced at a virtual event hosted by Chief Product Officer Panos Panay. According to Nadella, Windows 11 is "a re-imagining of the operating system". Further details for developers, such as updates to the Microsoft Store, the new Windows App SDK (code-named "Project Reunion"), new Fluent Design guidelines, and more, were discussed during another developer-focused event on the same day. Release and marketing The Windows 11 name was accidentally released in an official Microsoft support document in June 2021. Leaked images of a purported beta build of Windows 11's desktop surfaced online later on June 15, 2021, followed by a leak of the aforementioned build on the same day. The screenshots and leaked build showed an interface resembling that of the canceled Windows 10X, alongside a redesigned out-of-box experience (OOBE) and Windows 11 branding. Microsoft later confirmed the authenticity of the leaked beta, with Panay stating that it was an "early weird build". At the June 24 media event, Microsoft also announced that Windows 11 would be released in "Holiday 2021", its release accompanied by a free upgrade for compatible Windows 10 devices through Windows Update. On June 28, Microsoft announced the release of the first preview build and SDK of Windows 11 to Windows Insiders. On August 31, 2021, Microsoft announced that Windows 11 would be released on October 5, 2021. The release would be phased, with newer eligible devices offered the upgrade first. Since its predecessor Windows 10 was released on July 29, 2015, more than six years earlier, this is the longest time span between successive releases of Microsoft Windows operating systems, exceeding the span between Windows XP (released on October 25, 2001) and Windows Vista (released on January 30, 2007). The first television commercial for Windows 11 premiered during the 2021 NFL Kickoff Game on September 9, 2021; it was intended to showcase a "feeling of immersion and fluidity", with imagery of operating system features and Xbox Game Studios' Halo Infinite. Other promotional campaigns on release day included the Burj Khalifa in Dubai being illuminated with imagery of the Windows 11 logo and the default "Bloom" wallpaper, and Mikey Likes It ice cream parlors in New York City distributing free cups of "Bloomberry" ice cream. Though a support document listed October 4, 2021, as the initial release date, Microsoft officially released Windows 11 on October 5, 2021, as an opt-in, in-place upgrade through either the Windows 11 Installation Assistant application (which can perform the upgrade, or generate an ISO image or USB install media), or via Windows Update in a phased rollout; Microsoft anticipated that Windows 11 would be available via Windows Update to all eligible devices by mid-2022. New installations of Windows 10 on eligible hardware may present an option to upgrade during the OOBE.
Retail copies of Windows 11 (consisting of a license key and USB flash drive) were released on May 9, 2022, and digital licenses became available via the Microsoft Store on July 28, 2022. On September 20, 2023, around two years after the release of Windows 11, Microsoft announced that users would no longer be able to use Windows 7 or Windows 8/8.1 keys to activate Windows 10/11. However, as of 2024, some reports indicate that they still work under certain conditions. Features Windows 11, the first major Windows release since 2015, builds upon its predecessor by revamping the user interface to follow Microsoft's new Fluent Design guidelines. The redesign, which focuses on ease of use and flexibility, comes alongside new productivity and social features and updates to security and accessibility, addressing some of the deficiencies of Windows 10. The Microsoft Store, which serves as a unified storefront for apps and other content, is also redesigned in Windows 11. Microsoft now allows developers to distribute Win32 applications, progressive web applications, and apps using other packaging technologies in the Microsoft Store, alongside Universal Windows Platform apps. Microsoft also announced plans to allow third-party application stores (such as the Epic Games Store) to distribute their clients on the Microsoft Store. Windows 11 supports x86-64 software emulation on ARM-based platforms. The collaboration platform Microsoft Teams is integrated into the Windows 11 user interface, and is accessible via the taskbar. Skype is no longer bundled with the OS by default. In early 2023, the Phone Link app gained limited support for iMessage. Microsoft claims performance improvements such as smaller update sizes, faster web browsing in "any browser", faster wake time from sleep mode, and faster Windows Hello authentication. Windows 11 ships with the Chromium-based Microsoft Edge web browser (which uses the same browser engine as Google Chrome), and does not include or support Internet Explorer. IE's rendering engine MSHTML (Trident) is still included with the operating system for backwards-compatibility reasons, and Edge can be configured with Group Policy to render whitelisted websites in "IE Mode" (which still uses the MSHTML rendering engine instead of the Blink layout engine). Windows 11 is the first version of Windows since the original retail release of Windows 95 to not ship with Internet Explorer. The updated Xbox app, along with the Auto HDR and DirectStorage technologies introduced by the Xbox Series X and Series S, is integrated into Windows 11; the latter requires a graphics card supporting DirectX 12 and an NVMe solid-state drive. User interface A redesigned user interface is present throughout the operating system, building upon the Fluent Design System; translucency, shadows, a new color palette, and rounded geometry are prevalent throughout the UI. A prevalent aspect of the design is an appearance known as "Mica", described as an "opaque, dynamic material that incorporates theme and desktop wallpaper to paint the background of long-lived windows such as apps and settings". Much of the interface and Start menu takes heavy inspiration from the now-canceled Windows 10X. The Segoe UI font used since Windows Vista has been updated to a variable version, improving its ability to scale between different display resolutions.
The taskbar's buttons are center-aligned by default, and it is permanently pinned to the bottom edge of the screen; it cannot be moved to the top, left, or right edges of the screen as in previous versions of Windows without manual changes to the registry. The notifications sidebar is now accessed by clicking the date and time, with other Quick Actions toggles, as well as volume, brightness, and media playback controls, moved to a new settings pop-up displayed by clicking on the system tray. The "Widgets" button on the taskbar displays a panel with Microsoft Start, a news aggregator with personalized stories and content (expanding upon the "news and interests" panel introduced in later builds of Windows 10). Microsoft Teams is similarly integrated with the taskbar, with a pop-up showing a list of recent conversations. The Start menu has been significantly redesigned, replacing the "live tiles" used by Windows 8.x and 10 with a grid of "pinned" applications and a list of recent applications and documents. File Explorer was updated to replace its ribbon toolbar with a more traditional toolbar, while its context menus have been redesigned to move some tasks (such as copy and paste) to a toolbar along the top of the menu, and to hide other operations under an overflow menu. Task View, a feature introduced in Windows 10, features a refreshed design and supports giving separate wallpapers to each virtual desktop. The window snapping functionality has been enhanced with two additional features: hovering over a window's maximize button displays pre-determined "Snap Layouts" for tiling multiple windows onto a display, and a tiled arrangement of windows can be minimized and restored from the taskbar as a "snap group". When a display is disconnected in a multi-monitor configuration, the windows that were previously on that display are minimized rather than automatically moved to the main display. If the same display is reconnected, the windows are restored to their prior location. Windows Subsystem for Android On October 21, 2021, the Windows Subsystem for Android (WSA) became available to Beta channel builds of Windows 11 for users in the United States, allowing users to install and run Android apps on their devices. Users can install Android apps from any source using the APK file format. An Amazon Appstore client for the Microsoft Store is also available. The Windows Subsystem for Android and Amazon Appstore became available to Release channel users in the United States on February 15, 2022, in Windows 11 Release build 22000.527. On March 5, 2024, Microsoft announced the deprecation of WSA, with support ending on March 5, 2025. WSA is based on the Intel Bridge runtime compiler; Intel stated that the technology is not dependent on its CPUs and will also be supported on x86-64 and ARM CPUs from other vendors. Setup On the Home and Pro (the latter since version 22H2) editions, installation requires an internet connection, and Microsoft account login (on Pro, only when set up for personal use) is mandatory unless it is manually bypassed to create a local user. However, Microsoft has since blocked one of the last remaining easy bypass methods that allowed local account creation during initial setup, complicating the bypass process further. All other editions are excluded from this requirement. System security As part of the minimum system requirements, Windows 11 only runs on devices with a Trusted Platform Module 2.0 security coprocessor, albeit with some exceptions; see the system requirements section below for details.
According to Microsoft, the TPM 2.0 coprocessor is a "critical building block" for protection against firmware and hardware attacks. In addition, Microsoft now requires devices with Windows 11 to include virtualization-based security (VBS), hypervisor-protected code integrity (HVCI), and Secure Boot built-in and enabled by default. The operating system also features hardware-enforced stack protection for supported Intel and AMD processors for protection against zero-day exploits. Like its predecessor, Windows 11 also supports multi-factor authentication and biometric authentication through Windows Hello. Artificial intelligence In subsequent updates, Microsoft added several features based on artificial intelligence (AI), such as live captions, background noise removal in videoconferencing, webcam auto-framing that follows the user's movements, and AI-powered Bing Chat in the taskbar's search field. Following the integration of GPT-4 in Microsoft's other products, the company announced that by summer 2023, the newly released Microsoft Copilot would add GPT-4 integration to the Windows taskbar. On May 20, 2024, Microsoft officially announced Recall, a feature that uses a hardware AI accelerator to locally store snapshots of the user's activity (including content transcribed using live captions), and which allows users to search through them. This feature is exclusive to devices certified under the "Copilot+ PC" branding. Following pushback from the cybersecurity community, Microsoft delayed the feature in June 2024. A preview version will be added to the Windows Insider program at a later date in order to test the added security measures. Editions Windows 11 is available in two main editions: the Home edition, which is intended for consumer users, and the Pro edition, which contains additional networking and security features (such as BitLocker), as well as the ability to join a domain. Windows 11 Home may be restricted by default to verified software obtained from the Microsoft Store ("S Mode"). Windows 11 Home requires an Internet connection and a Microsoft account in order to complete first-time setup. This restriction also applies to Windows 11 Pro since version 22H2, as announced in February 2022, although a Microsoft account is not required if the device is not set up for personal use. Windows 11 SE was announced on November 9, 2021, as an edition exclusively for low-end devices sold in the education market; it is intended as a successor to Windows 10 S, and competes primarily with ChromeOS. It is designed to be managed via Microsoft Intune. Based on feedback from educators, Windows 11 SE has multiple UI differences and limitations, including Snap Layouts not containing layouts for more than two applications at once, all applications opening maximized by default, and Widgets being removed. It is bundled with applications such as Microsoft Office for Microsoft 365, Minecraft Education Edition, and Flipgrid, while OneDrive is used to save files by default. Windows 11 SE does not include the Microsoft Store; third-party software is provisioned or installed by administrators. To target organizations migrating from Google Chrome, Microsoft Edge is configured by default to enable the installation of extensions from the Chrome Web Store. Other editions Other editions include Pro Education, Pro for Workstations, Education, Enterprise, Enterprise multi-session, IoT Enterprise, Enterprise LTSC, IoT Enterprise LTSC, Home Single Language, and Team; along with regional variations.
These editions remain fundamentally the same as their Windows 10 edition counterparts. Two new editions called IoT Enterprise Subscription and IoT Enterprise Subscription LTSC were introduced in version 24H2. Supported languages Before the launch of Windows 11, OEMs (as well as mobile operators) and businesses were offered two options for device imaging: Component-Based Servicing lp.cab files (for the languages to be preloaded on the first boot) and Local Experience Pack .appx files (for the languages available for download on supported PCs). The 38 fully localized Language Pack (LP) languages were available as both lp.cab and .appx packages, while the remaining 72 partially localized Language Interface Pack (LIP) languages were only available as .appx packages. With Windows 11, that process has changed. Five new LP languages were added (Catalan, Basque, Galician, Indonesian, and Vietnamese), bringing the total number of LP languages to 43. Furthermore, these 43 languages can only be imaged using lp.cab packages. This is to ensure a fully supported language-imaging and cumulative-update experience. The remaining 67 LIP languages, which are LXP-based, will move to a self-service model, and can only be added by Windows users themselves via the Microsoft Store and Windows Settings apps, not during the Windows imaging process. Any user, not just admins, can now add both the display language and its features, which can help users in business environments, but the exact options for languages (both LP and LIP) still depend on the OEM and mobile operator. Updates and support Like Windows 10, Windows 11 follows Microsoft's Modern Lifecycle Policy. Each annual feature update has its own support lifecycle: two years for the Home and Pro editions, and three years for the Education and Enterprise editions. Microsoft has stated that Windows 11 provides no lifecycle guarantee if it has been installed on a machine that does not meet its minimum hardware requirements. Windows 11 receives annual major updates, though Microsoft sometimes adds major features in mid-cycle releases. Starting in 2022, in the Enterprise and Education editions, major features added in yearly releases are turned off by default until the next yearly release, though these features can be manually enabled as a group policy. Preview releases The Windows Insider program carries over from Windows 10, with pre-release builds divided into "Dev" (unstable builds used to test features for future feature updates), "Beta" (test builds for the next feature update; relatively stable in comparison to the Dev channel), and "Release Preview" (pre-release builds for final testing of upcoming feature updates) channels. Versions System requirements Official The basic system requirements of Windows 11 differ significantly from those of Windows 10. Windows 11 only supports 64-bit systems, such as those using an x86-64 or ARM64 processor; IA-32 and ARM32 processors are no longer supported. Thus, Windows 11 is the first consumer version of Windows not to support 32-bit processors (although Windows Server 2008 R2 is the first version of Windows Server to not support them). The minimum RAM and storage requirements were also increased; Windows 11 now requires at least 4 GB of RAM and 64 GB of storage. S mode is only supported for the Home edition of Windows 11.
As of August 2021, the officially supported list of processors includes eighth-generation Intel Core CPUs (Coffee Lake) and later, AMD Zen+ CPUs/APUs and later (which include the "AF" revisions of Ryzen 1000 CPUs, underclocked Zen+ parts that supplanted Ryzen 1000 parts that could no longer be manufactured due to a change in process), and Qualcomm Snapdragon 850 and later. The compatibility list also includes the Intel Core i7-7820HQ, a seventh-generation processor used by the Surface Studio 2, although only on devices that shipped with DCH-based drivers. Original equipment manufacturers (OEMs) can still ship computers without TPM 2.0 enabled upon Microsoft's approval. On May 20, 2024, Microsoft announced "Copilot+ PC", a brand of Windows 11 devices that are designed to support enhanced artificial intelligence features. Copilot+ PCs require an on-board AI accelerator, at least 256 GB of storage, and at least 16 GB of RAM. The first wave of Copilot+ PCs run the Qualcomm Snapdragon X Elite system-on-chip. x86-64-based Copilot+ PCs began to be announced later in the year, based on AMD Ryzen AI and Intel Core Ultra CPUs. Unofficial Devices with unsupported 64-bit processors are not blocked from installing or running Windows 11; however, a clean install or upgrade using ISO installation media must be performed, as Windows Update will not offer an upgrade from Windows 10. Additionally, users must accept an on-screen disclaimer stating that they will not be entitled to receive updates, and that damage caused by using Windows 11 on an unsupported configuration is not covered by the manufacturer's warranty. In addition, various unofficial methods exist to bypass other Windows 11 requirements, such as (but not limited to) TPM 2.0; there is also an official bypass method provided directly by Microsoft, although the resulting installation remains officially unsupported. In April 2024, Windows Insider version 24H2 builds began to depend on the SSE4.2 and POPCNT CPU instructions (corresponding to the x86-64 v2 microarchitecture level), raising the unofficial minimum compatibility to Bulldozer microarchitecture-based processors like the AMD FX (2011) processors and first-generation Intel Core i (2008) processors. Intel Core 2 (like the Core 2 Duo and Core 2 Quad), AMD K10 CPUs (such as the Phenom II and Athlon II) and older are no longer supported. Finally, version 24H2 now requires ARMv8.1, dropping unofficial support for ARMv8.0; for example, the Snapdragon 835 and older are no longer supported. Firmware compatibility Legacy BIOS is no longer officially supported; a UEFI system and a Trusted Platform Module (TPM) 2.0 security coprocessor are now officially required. The TPM requirement in particular has led to confusion, as many motherboards do not have TPM support or require a compatible TPM to be physically installed onto the motherboard. Many newer CPUs also include a TPM implemented at the CPU level (with AMD referring to this as "fTPM", and Intel referring to it as "Platform Trust Technology" [PTT]), which might be disabled by default and require changing settings in the computer's UEFI firmware, or a UEFI firmware update that changes the default settings to reflect these requirements. The ARM64 version of Windows 11 requires UEFI firmware with the ACPI protocol. Starting with version 24H2, IoT Enterprise editions have officially reintroduced legacy BIOS support and eliminated the requirement for a TPM.
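The official bypass mentioned above works by setting a registry value that Microsoft documented for in-place upgrades on otherwise-ineligible PCs. As a rough illustration (the key path and value name below follow Microsoft's published guidance for this bypass, but should be verified against current documentation before use; a TPM 1.2 module is reportedly still required), the value can be set with Python's standard winreg module:

# Illustrative sketch: sets the registry override Microsoft documented for
# allowing an in-place Windows 11 upgrade on PCs that fail the CPU/TPM 2.0
# checks. Requires administrator rights on the Windows 10 machine.
import winreg

KEY_PATH = r"SYSTEM\Setup\MoSetup"  # key location per Microsoft's guidance
VALUE_NAME = "AllowUpgradesWithUnsupportedTPMOrCPU"

def enable_upgrade_override() -> None:
    # CreateKeyEx opens the key under HKLM, creating it if it does not exist.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        # A REG_DWORD of 1 tells Windows Setup to permit the upgrade anyway;
        # the resulting installation remains officially unsupported.
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 1)

if __name__ == "__main__":
    enable_upgrade_override()
    print("Override set; re-run Windows 11 Setup from installation media.")

Setting the value does not change servicing policy: as noted above, Microsoft does not guarantee updates for installations performed this way.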
Third-party software Some third-party software may refuse to run on configurations of Windows 11 that do not comply with the hardware security requirements. After the release of Windows 11, Riot Games' kernel-level anti-cheat system Vanguard (used in Valorant and, since May 2024, by League of Legends) began to enforce the operating system's security requirements, and will not allow the games to be run on the OS if Secure Boot and a TPM 2.0-compliant coprocessor are not enabled. IoT Enterprise editions While IoT Enterprise editions have always had slightly reduced official requirements compared to other Windows 11 editions, starting with version 24H2 the minimum requirements were further reduced and now differ significantly. These updated 24H2 requirements were announced on May 22, 2024, for both LTSC and non-LTSC editions. For the first time since Windows 11's release, Microsoft has officially eliminated the TPM and UEFI minimum requirements for all systems running these editions and dropped the minimum DirectX version down to 10 (version 12 was previously required on 23H2). Finally, the IoT Enterprise LTSC edition further drops the minimum required RAM to 2 GB and storage space to 16 GB. Reception Pre-release Reception of Windows 11 upon its reveal was positive, with critics praising the new design and productivity features. However, Microsoft was criticized for creating confusion over the minimum system requirements for Windows 11. The increased system requirements (compared to those of Windows 10) initially published by Microsoft meant that up to 60 percent of existing Windows 10 PCs were unable to upgrade to Windows 11, raising concerns that the change would contribute to electronic waste. While Microsoft has not specifically acknowledged this when discussing the cutoff, it was acknowledged that the sixth and seventh generations of Intel Core processors were prominently afflicted by CPU-level security vulnerabilities such as Meltdown and Spectre, and that newer CPUs manufactured since then had increased mitigations against the flaws. Speaking to IT news outlet CRN, a dozen solution providers all felt that they "believe Windows 11 will be a meaningful step up in security, and they agree with Microsoft's strategy of putting security first." Research Vice President of Gartner Stephen Kleynhans felt that Microsoft was "looking at the entire stack from the hardware up through the applications and the user experience and trying to make the entire stack work better and more securely." Launch Andrew Cunningham of Ars Technica gave a mixed but overall cautiously positive review of Windows 11 upon its release. He praised the improvements to its visual design (describing the new "Mica" appearance as reminiscent of the visual appearance of iOS and macOS), arguing that Microsoft had "[made] a serious effort" at making the user-facing aspects of Windows 11 more visually consistent. He also praised the window management, performance (assessed as being equivalent to, if not better than, Windows 10), and other "beneficial tweaks". Criticism was raised towards Widgets' lack of support for third-party content, which limits it to Microsoft services only, and towards regressions in taskbar functionality and customization. He also noted the inability to easily select default applications for common tasks such as web browsing, as it requires the user to select the browser application for each file type individually.
Apart from the user interface, the system requirements and Microsoft's unclear justification for its processor compatibility criteria remained a major sticking point for him. While some of the system requirements have brought greater public attention to hardware security features present on modern PCs, he argued that these could already be employed on Windows 10, albeit optionally. Cunningham concluded that "as I've dug into [Windows 11] and learned its ins and outs for this review, I've warmed to it more", but argued that the OS was facing "public perception" issues similar to those of Windows Vista and Windows 8. However, he noted that 11 did not have as many performance issues or bugs as Vista had upon its release, nor was it as "disjointed" as 8, and recommended that users who were unsure about the upgrade stay on Windows 10 in anticipation of future updates to 11. Tom Warren of The Verge described Windows 11 as being akin to a house in the middle of renovations, but said that "actually using Windows 11 for the past few months hasn't felt as controversial as I had expected", praising its updated user interface as being more modern and reminiscent of iOS and ChromeOS, the new Start menu for feeling less cluttered than the Windows 10 iteration, updates to some of its stock applications, and Snap Assist. Warren noted that he rarely used the Widgets panel or Microsoft Teams, saying that he preferred the weather display that later versions of Windows 10 offered, and did not use Teams to communicate with his friends and family. He also acknowledged the expansion of the Microsoft Store to include more "traditional" desktop applications. However, he felt that Windows 11 still felt like a work in progress, noting UI inconsistencies (such as dark mode and the new context menu designs not being uniform across all dialogs and applications, and the UWP Settings app still falling back upon legacy Control Panel applets for certain settings), regressions to the taskbar (including the inability to move it, the inability to drag files onto taskbar buttons to focus the corresponding application, and the clock being shown only on the primary display in multi-monitor configurations), and promised features (such as dynamic refresh rate support and a universal microphone mute button) not being present in the initial release. Overall, he concluded that "I wouldn't rush out to upgrade to Windows 11, but I also wouldn't avoid it. After all, Windows 11 still feels familiar and underneath all the UI changes, it's the same Windows we've had for decades." Mark Hachman of PC World was more critical of Windows 11, arguing that it "sacrifices productivity for personality, but without cohesion", commenting upon changes such as the inability to use local "offline" accounts on Windows 11 Home, regressions to the taskbar, a "functionally worse" Start menu, Microsoft Teams integration having privacy implications and being a ploy to coerce users into switching to the service, File Explorer obscuring common functions under unclear icons, forcing users to scroll through many options to discourage changing the default web browser from Microsoft Edge, and that the OS "anecdotally feels less responsive, slower, and heavier than Windows 10". He concluded that Windows 11 "feels practical and productive, but less so than its predecessor in many aspects", while its best features were either "hidden deeper within", required specific hardware (DirectStorage, Auto HDR), or were not available at launch (Android app support).
See also List of operating systems References External links Windows 11 release information from Microsoft 11 2021 software Android (operating system) ARM operating systems Proprietary operating systems Tablet operating systems X86-64 operating systems Microsoft Windows
Windows 11
[ "Technology" ]
7,031
[ "Computing platforms", "Microsoft Windows" ]
67,954,057
https://en.wikipedia.org/wiki/Fosterito
A fosterito is a kind of awning which covers many entrances to stations of the Bilbao metro. These awnings are made of glass and steel. They are named after Norman Foster, who designed the architecture of the system's stations, as well as their entrances. These entrances can be seen on all three lines of the network. References Bilbao metro Buildings and structures in Biscay Foster and Partners buildings Glass architecture Buildings and structures in Bilbao
Fosterito
[ "Materials_science", "Engineering" ]
92
[ "Glass architecture", "Glass engineering and science", "Architecture stubs", "Architecture" ]
67,954,458
https://en.wikipedia.org/wiki/Beryllium%20oxalate
Beryllium oxalate is an inorganic compound, a salt of beryllium and oxalic acid, with the chemical formula BeC2O4. It forms colorless crystals, dissolves in water, and also forms crystalline hydrates. The compound is used to prepare ultra-pure beryllium oxide by thermal decomposition. Synthesis Beryllium oxalate is obtained by the action of oxalic acid on beryllium hydroxide: Be(OH)2 + H2C2O4 → BeC2O4 + 2H2O Chemical properties Crystalline hydrates lose water when heated; for the trihydrate: BeC2O4·3H2O → BeC2O4 + 3H2O References Inorganic compounds Beryllium compounds Oxalates
Beryllium oxalate
[ "Chemistry" ]
98
[ "Inorganic compounds" ]
67,955,862
https://en.wikipedia.org/wiki/Urine%20deflector
A urine deflector is a device for deflecting the stream of urine during urination. These may be part of a chamber pot, latrine or toilet intended for the purpose, or they may be deterrents, installed in the sides or corners of buildings to discourage their casual use as urinals by passers-by. They may be constructed in various ways from a variety of materials but are typically designed to have an angled surface which catches and redirects the stream. Intentional design Equipment used for toilet training such as a potty chair will typically include a urine deflector to ensure that the urine does not splash forward and outside the receptacle. Latrines constructed by the US Marines would contain urine deflectors made from sheet metal or tar paper. These would catch and direct the urine into a trough which would carry it to a separate drainage pit. This would minimise the unpleasant smell which typically results from decomposition and production of ammonia. Other designs of latrine typically include similar urine deflectors to prevent degradation of the wooden components and the walls of the pit. Deterrent Urine deflectors are thought to be the earliest example of hostile architecture. Such devices were common in the streets of London in the 19th century; a correspondent to The Farmer's Magazine wrote about them as early as 1809. Some may still be found in places such as the Bank of England, Fleet Street and the Savoy. Other cities where antique examples may still be seen include Lviv, Norwich and Venice. In other cities such as Vienna, barriers such as iron railings and spikes have been used to keep people away from attractive corners and crannies. German cities such as Hamburg and Cologne have pioneered the use of hydrophobic paint on walls to deter Wildpinkler (people who urinate in public). This water-repellent coating causes the stream to rebound at a similar angle and so wet the offender. Other places such as Hackney, Manchester and San Francisco have since evaluated the method for particular trouble spots. London's Soho district was painted in this way in 2022 and Westminster council's full programme of deterrence also included posters, punishment and provision of more public toilets. See also Manneken Pis Taking the piss Urine-diverting dry toilet References Sanitation Urban design Urine
Urine deflector
[ "Biology" ]
457
[ "Urine", "Excretion", "Animal waste products" ]
67,956,151
https://en.wikipedia.org/wiki/Lithium%20oxalate
Lithium oxalate is an organic compound with the chemical formula Li2C2O4. It is a salt of lithium and oxalic acid. It consists of lithium cations Li+ and oxalate anions C2O4^2−. Lithium oxalate is soluble in water and converts to lithium oxide when heated. Synthesis One of the methods of synthesis is the direct neutralization of oxalic acid with lithium hydroxide: 2LiOH + H2C2O4 → Li2C2O4 + 2H2O Properties The compound crystallizes in the monoclinic system, cell parameters a = 3.400 Å, b = 5.156 Å, c = 9.055 Å, β = 95.60°, Z = 4. Lithium oxalate decomposes when heated, initially forming lithium carbonate: Li2C2O4 → Li2CO3 + CO Applications In pyrotechnics, the compound is used to color the flame red. References Inorganic compounds Lithium salts Oxalates
Lithium oxalate
[ "Chemistry" ]
166
[ "Inorganic compounds", "Lithium salts", "Salts" ]
67,958,010
https://en.wikipedia.org/wiki/Antimicrobial%20nanotechnology
Antimicrobial nanotechnology is the study of using biofilms to disrupt a microbe's cell membrane, deliver an electric charge to the microbe, and cause immediate cellular death via a "mechanical kill" process, preventing the original microbe from mutating into a superbug. The biofilms are made up of long atomic chains that can breach the cell wall. These spikes are far smaller than the width of a human hair and are far too small to injure the larger cells of mammals. The atom chains carry a significant positive charge that attracts bacteria, which are negatively charged. A new class of antimicrobial has been created by applying nanotechnology to the challenge of superbugs and multidrug-resistant organisms. Problem statement According to a report published in the Archives of Internal Medicine on 22 February 2010, health care–associated infections affect 1.7 million hospitalizations per year. The most prevalent nosocomial infections can live or stay on surfaces for months, posing a continuing transmission risk. On dry surfaces, most gram-positive bacteria, including Enterococcus spp. (including VRE), Staphylococcus aureus (including MRSA), and Streptococcus pyogenes, can persist for months. VRE has been cultured from frequently touched objects and has been found to survive on surfaces for more than three days. Dried cotton fabrics have been shown to support vancomycin-resistant Enterococci for up to 18 hours and fungi for more than five days. Nanotechnology antimicrobials are promising because they limit the spread of bacteria by lowering the number of infectious agents at frequent contact points (doorknobs, rails, tables, etc.). These new treatments have been certified by the Environmental Protection Agency and are being considered for use in hospitals and other settings where community-acquired illnesses spread quickly, such as cruise ships and jails. Environmental measures and adequate antibiotic use are the first steps in preventing the emergence of superbugs. According to studies, even if a patient does not need an antibiotic, a doctor is considerably more likely to prescribe one if they believe the patient does. Safety and effects on the environment Antimicrobial nanotechnology is an environmentally friendly solution because it is water-based and contains no heavy metals, arsenic, tin, or polychlorinated phenols. According to tests, a garment treated with antimicrobial nanotechnology will degrade in a landfill within 5 years to carbon dioxide, nitrous oxide, and silicon dioxide. Using a nanotechnology antimicrobial Biofilms are being developed as consumer products that can be sprayed or wiped over porous and nonporous surfaces. The microbe resistance of a surface treated with the appropriate antimicrobial nanotechnology can last up to 90 days, or the product's usable life, if protected during the production process. On the preventative front, European researchers are developing MRSA-resistant nanotechnology-enhanced textiles that might be utilised in hospital gowns, curtains, bedding, and pillow coverings. References External links Nanotechnology Cell biology Medical technology
Antimicrobial nanotechnology
[ "Materials_science", "Engineering", "Biology" ]
647
[ "Antimicrobials", "Cell biology", "Materials science", "Nanomedicine", "Nanotechnology", "Biocides", "Medical technology" ]
67,960,347
https://en.wikipedia.org/wiki/3-Chloro-PCP
3-Chloro-PCP (3'-Cl-PCP) is a recreational designer drug from the arylcyclohexylamine family, with dissociative effects. It has comparable potency to phencyclidine but with a slightly different effects profile, being somewhat more potent as an NMDA antagonist but around the same potency as a dopamine reuptake inhibitor. It was first identified in Slovenia in December 2020, and was made illegal in Hungary in April 2021. See also 3-F-PCP 3-Me-PCP 3-MeO-PCP 4-Keto-PCP References Arylcyclohexylamines Designer drugs Dissociative drugs 3-Chlorophenyl compounds 1-Piperidinyl compounds
3-Chloro-PCP
[ "Chemistry" ]
167
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
67,960,783
https://en.wikipedia.org/wiki/Sparse%20polynomial
In mathematics, a sparse polynomial (also lacunary polynomial or fewnomial) is a polynomial that has far fewer terms than its degree and number of variables would suggest. For example, x^10 + 3x^3 − 1 is a sparse polynomial, as it is a trinomial with a degree of 10. The motivation for studying sparse polynomials is to concentrate on the structure of a polynomial's monomials instead of its degree, as one can see, for instance, by comparing the Bernstein–Kushnirenko theorem with Bézout's theorem. Research on sparse polynomials has also included work on algorithms whose running time grows as a function of the number of terms rather than of the degree, for problems including polynomial multiplication, division, root-finding algorithms, and polynomial greatest common divisors. Sparse polynomials have also been used in pure mathematics, especially in the study of Galois groups, because it has been easier to determine the Galois groups of certain families of sparse polynomials than it is for other polynomials. The algebraic varieties determined by sparse polynomials have a simple structure, which is also reflected in the structure of the solutions of certain related differential equations. Additionally, a sparse positivstellensatz exists for univariate sparse polynomials. It states that the non-negativity of a polynomial can be certified by sum-of-squares (SOS) polynomials whose degree only depends on the number of monomials of the polynomial. Sparse polynomials often come up in identities for sums or differences of powers. The sum of two cubes states that a^3 + b^3 = (a + b)(a^2 − ab + b^2); here a^3 + b^3 is a sparse polynomial, since out of the possible terms only two appear. Other examples include identities such as (x − 1)(x^(n−1) + x^(n−2) + ⋯ + x + 1) = x^n − 1, where the product of two polynomials gives a sparse polynomial. The Bring–Jerrard normal form of a quintic, x^5 + px + q, is also a sparse polynomial. See also Askold Khovanskii, one of the main contributors to the theory of fewnomials. References Polynomials
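The term-oriented point of view can be made concrete with a minimal Python sketch (an illustration added here, not drawn from the article's references): storing a polynomial as a dictionary from exponents to coefficients lets multiplication run in time proportional to the product of the term counts, independent of the degrees involved.

```python
from collections import defaultdict

def sparse_mul(p, q):
    """Multiply two sparse univariate polynomials.

    Each polynomial is a dict {exponent: coefficient}; the loops run
    over terms only, so the cost is O(#terms(p) * #terms(q)),
    regardless of how large the degrees are.
    """
    result = defaultdict(int)
    for ep, cp in p.items():
        for eq, cq in q.items():
            result[ep + eq] += cp * cq
    # Drop terms whose coefficients cancelled to zero.
    return {e: c for e, c in result.items() if c != 0}

# x^10 + 3x^3 - 1, stored with three entries despite having degree 10.
p = {10: 1, 3: 3, 0: -1}
print(sparse_mul(p, {1: 1, 0: -1}))  # multiply by (x - 1)
```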
Sparse polynomial
[ "Mathematics" ]
384
[ "Polynomials", "Algebra" ]
67,961,810
https://en.wikipedia.org/wiki/Sodium%20phenylbutyrate/ursodoxicoltaurine
Sodium phenylbutyrate/ursodoxicoltaurine, also known as sodium phenylbutyrate/taurursodiol and sold under the brand names Albrioza and Relyvrio, is a fixed-dose combination medication used for the treatment of amyotrophic lateral sclerosis (ALS). It contains sodium phenylbutyrate and ursodoxicoltaurine (taurursodiol). The most common adverse reactions experienced with sodium phenylbutyrate/ursodoxicoltaurine include diarrhea, abdominal pain, nausea and upper respiratory tract infection. Sodium phenylbutyrate/ursodoxicoltaurine acts by blocking apoptotic pathways in the mitochondria and in the endoplasmic reticulum. Sodium phenylbutyrate is a chemical chaperone that helps proteins maintain their normal conformation, preventing aggregation that may lead to cell death. Ursodoxicoltaurine improves mitochondrial energy production. The combination was approved for medical use in Canada as Albrioza, in June 2022, and in the United States, as Relyvrio, in September 2022. The European Union's drug regulators refused to approve it, citing concerns about effectiveness. In April 2024, the manufacturer announced that it is withdrawing the medication from the US and Canadian markets, due to it failing a key phase III clinical trial. Medical uses Sodium phenylbutyrate/ursodoxicoltaurine is indicated for the treatment of amyotrophic lateral sclerosis (ALS). History In the Phase II/III CENTAUR clinical trial, sodium phenylbutyrate/ursodoxicoltaurine (AMX0035) increased survival times for ALS patients. The Phase II PEGASUS clinical trial found that the drug was safe and tolerated by patients with Alzheimer's disease. The efficacy of sodium phenylbutyrate/ursodoxicoltaurine for the treatment of ALS was demonstrated in a 24-week, multi-center, randomized, double-blind, placebo-controlled, parallel-group study. Compared to members of the 137-adult cohort that received a placebo medication, those randomly assigned to treatment with sodium phenylbutyrate/ursodoxicoltaurine showed a slower rate of declining daily functioning and longer overall survival. In September 2022, the United States Food and Drug Administration (FDA) approved Amylyx Pharmaceuticals' application for Relyvrio's approval under the priority review and orphan drug programs. In March 2024, Amylyx Pharmaceuticals announced that its Phase III PHOENIX clinical trial of 664 American and European adults followed over 48 weeks showed no statistically significant difference in the functioning of ALS patients that were randomly assigned to treatment with Relyvrio, as compared to those receiving a placebo drug. Society and culture Legal status The FDA Peripheral and Central Nervous System Drugs Advisory Committee voted not to recommend approval, and then in an unusual second vote recommended approval. In April 2024, after a disappointing phase III trial of the medication failed to produce significant differences versus a placebo, Amylyx announced they would begin the process of withdrawing Albrioza and Relyviro from the North American market. Economics In the United States, healthcare insurer Cigna decided, in 2023, to reverse its prior decision to cover the cost of the medication for all ALS patients, opting instead to cover "patients who meet certain clinical criteria", arguing that the drug is "experimental, investigational or unproven". 
Following the earlier announcement of plans to potentially withdraw the medication, the Institute for Clinical and Economic Review criticized the company for pricing the drug at $158,000 per year of treatment, given the uncertainty surrounding its effectiveness. Research It is being studied as a treatment for Wolfram syndrome and progressive supranuclear palsy, despite the withdrawal of North American marketing authorization for amyotrophic lateral sclerosis (ALS). References Drugs acting on the nervous system Combination drugs Orphan drugs Medical controversies Withdrawn drugs
Sodium phenylbutyrate/ursodoxicoltaurine
[ "Chemistry" ]
850
[ "Drug safety", "Withdrawn drugs" ]
67,961,913
https://en.wikipedia.org/wiki/Ennetcom
Ennetcom was a Netherlands based communications network and service provider. Company The company was based in the Netherlands as were most of its customers, but most of the company servers were based in Canada. Danny Manupassa, the company owner, was arrested in 2016 amid allegations that the phones were largely used by criminals. The company had about 19,000 users. The phones sold for €1,500 each and used company servers for traffic. The devices had been altered so they could not make calls or use the Internet normally. Canadian authorities seized the servers and passed messages to Dutch authorities. The latter had managed to decrypt 3.6 million messages by 2017, apparently because the key to the messages had been stored on the same servers the messages were on. These messages have led to arrests, including that of Naoufal Fassih. Fassih has been convicted of one charge of murder and one of attempted murder in relation to the murder of Ali Motamed. See also ANOM EncroChat – a network infiltrated by law enforcement to investigate organized crime in Europe Exclu Sky Global References Anonymity networks Cyberspace Dark web Law enforcement operations Organized crime in Europe 2016 disestablishments in the Netherlands Defunct darknet markets
Ennetcom
[ "Technology" ]
257
[ "Information technology", "Cyberspace" ]
67,962,409
https://en.wikipedia.org/wiki/LY-393558
LY-393558 is a potent serotonin reuptake inhibitor and antagonist of the 5-HT1B, 5-HT1D, and 5-HT2A receptors. LY-393558 was also found to reduce serotonin-induced vasoconstriction, indicating that it may have therapeutic potential for the treatment of pulmonary hypertension. References 5-HT1 antagonists 5-HT2 antagonists Benzothiadiazines Fluoroarenes Indoles Tetrahydropyridines Sulfones
LY-393558
[ "Chemistry" ]
120
[ "Sulfones", "Functional groups" ]
67,964,681
https://en.wikipedia.org/wiki/3F-PiHP
3F-PiHP (3F-α-PiHP) is a recreational designer drug from the substituted cathinone family, with stimulant effects. It was first identified in both Sweden and Finland in mid-2019, and was made illegal in Finland in August 2019. See also 3-Fluoromethcathinone 3F-NEH 3F-PVP 3F-PHP 4F-PHP α-PHiP α-PCyP Isohexylone References Pyrrolidinophenones Designer drugs Serotonin-norepinephrine-dopamine releasing agents 3-Fluorophenyl compounds
3F-PiHP
[ "Chemistry" ]
135
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
67,967,062
https://en.wikipedia.org/wiki/Archiv%20der%20Pharmazie
The Archiv der Pharmazie (German pronunciation: [aʁˈçiːf deːɐ̯ faʁmaˈtsiː], English: Archive of Pharmacy) is a monthly peer-reviewed scientific journal covering all aspects of chemistry in the life sciences. The journal was established in 1822 and is published by Wiley-VCH on behalf of the Deutsche Pharmazeutische Gesellschaft; it is the oldest German pharmaceutical journal still in publication. Until 2019, the editor-in-chief was Holger Stark (Heinrich Heine University Düsseldorf). He was succeeded in 2020 by Andreas Link (University of Greifswald). History The first edition appeared in 1822 under the name Archiv des Apothekervereins im nördlichen Teutschland für die Pharmacie und ihre Huelfswissenschaften (English: Archive of the Pharmacists' Association in Northern Germany for Pharmacy and its Auxiliary Sciences). In 1832, the journal was merged with Liebigs Annalen (then known as Annalen der Pharmacie), but would split from it following editorial disputes between the editors Justus von Liebig and Rudolph Brandes. From 1924 (volume 242) the journal was called Archiv der Pharmazie und Berichte der Deutschen Pharmazeutischen Gesellschaft (English: Archive of Pharmacy and Reports from the German Pharmaceutical Society), before obtaining its current name in 1971. In 1995 the publication language changed from German to English. Abstracting and indexing The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2022 impact factor of 5.1. References External links Monthly journals Wiley-VCH academic journals Medicinal chemistry journals Publications established in 1822 Hybrid open access journals English-language journals
Archiv der Pharmazie
[ "Chemistry" ]
373
[ "Medicinal chemistry journals", "Medicinal chemistry" ]
67,967,371
https://en.wikipedia.org/wiki/Kovacs%20effect
In statistical mechanics and condensed matter physics, the Kovacs effect is a kind of memory effect in glassy systems below the glass-transition temperature. A.J. Kovacs observed that a system's state out of equilibrium is defined not only by its macroscopic thermodynamic variables, but also by the inner parameters of the system. In the original effect, in response to a temperature change, under constant pressure, the isobaric volume and free energy of the system experienced a recovery characterized by a non-monotonic departure from equilibrium, whereas all other thermodynamic variables were at their equilibrium values. It is considered a memory effect since the relaxation dynamics of the system depend on its thermal and mechanical history. The effect was discovered by Kovacs in the 1960s in polyvinyl acetate. Since then, the Kovacs effect has been established as a very general phenomenon that comes about in a large variety of systems: model glasses, tapped dense granular matter, spin-glasses, molecular liquids, granular gases, active matter, disordered mechanical systems, protein molecules, and more. The effect in Kovacs' experiments Kovacs' experimental procedure on polyvinyl acetate consisted of two main stages. In the first step, the sample is instantaneously quenched from a high initial temperature T_i to a low reference temperature T_r, under constant pressure. The time-dependent volume of the system at T_r, V(t), is recorded until the time t_eq when the system is considered to be at equilibrium. The volume at t_eq is defined as the equilibrium volume of the system at temperature T_r: V_eq(T_r) = V(t_eq). In the second step, the sample is quenched again from T_i, now to a temperature T_q that is lower than T_r, so that T_q < T_r. But now, the system is held at temperature T_q only until the time t_q when its volume reaches the equilibrium value at T_r, meaning V(t_q) = V_eq(T_r). Then, the temperature is raised instantaneously to T_r, so both the temperature and the volume agree with the same equilibrium state. Naively, one expects that nothing should happen when the system is at T_r and V_eq(T_r). But instead, the volume of the system first increases and then relaxes back to V_eq(T_r), while the temperature is held constant at T_r. This non-monotonic behavior in time of the volume after the jump in the temperature can be simply captured by the normalized departure Δ(t) = [V(t) − V_eq(T_r)] / V_eq(T_r), where Δ(0) = 0 and Δ(t → ∞) = 0. Δ(t) is also referred to as the "Kovacs hump". Kovacs also found that the hump displayed some general features: Δ(t) ≥ 0, with only one maximum of height Δ_max at a certain time t_max; as the quench temperature T_q is lowered, the hump becomes larger, Δ_max increases, and moves to shorter times, t_max decreases. In the subsequent studies of the Kovacs hump in different systems, a similar protocol with two jumps in the temperature has been employed. The associated time evolution of a relevant physical quantity, often the energy, is monitored and displays the Kovacs hump. The physical relevance of this behavior is the same as in the Kovacs experiment: it shows that such a quantity does not completely characterize the dynamical state of the system, and the necessity of incorporating additional variables to have the whole picture. The Kovacs hump described above has been rationalized by employing linear response theory for molecular systems, in which the initial and final states are equilibrium ones. Therein, the "direct" relaxation function (with only one temperature jump, instead of two) is a superposition of positive exponentially decaying modes, as a consequence of the fluctuation-dissipation theorem. Linear response makes it possible to write the Kovacs hump in terms of the direct relaxation function.
Specifically, the positivity of all the modes in the direct relaxation function ensures the "normal" character of the hump, i.e. the fact that Δ(t) ≥ 0. Recently, analogous experiments have been proposed for "athermal" systems, like granular systems or active matter, with the proper reinterpretation of the variables. For instance, in granular gases the relevant physical property is still the energy—although one usually employs the terminology "granular temperature" for the kinetic energy in this context—but it is the intensity of the external driving that plays the role of the temperature. The emergence of Kovacs-like humps highlights the relevance of non-Gaussianities to describe the physical state of granular gases. "Anomalous" Kovacs humps, with Δ(t) ≤ 0, have been reported in athermal systems, i.e. a minimum is observed instead of a maximum. Although the linear response connection between the Kovacs hump and the direct relaxation function can be extended to athermal systems, not all the modes are positive definite—the standard version of the fluctuation-dissipation theorem does not apply. This is the key that facilitates the emergence of anomalous behavior. References Statistical mechanics Amorphous solids
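The role of the mode signs can be illustrated with a toy superposition of two decaying exponentials (a purely illustrative Python sketch with assumed amplitudes and relaxation times, not a fit to any experiment from the literature above): with a positive amplitude it reproduces a normal hump with Δ(0) = 0, a single positive maximum, and Δ(t → ∞) = 0, while a negative amplitude mimics the anomalous case.

```python
import numpy as np

def toy_hump(t, a, tau_slow, tau_fast):
    """Toy two-mode Kovacs response: a difference of two decaying
    exponentials, so that delta(0) = 0 and delta(t -> inf) = 0.
    A positive amplitude a gives a normal hump (a maximum); a
    negative a mimics the anomalous case (a minimum)."""
    return a * (np.exp(-t / tau_slow) - np.exp(-t / tau_fast))

t = np.linspace(0.0, 10.0, 501)
delta = toy_hump(t, a=1.0, tau_slow=2.0, tau_fast=0.5)
i = delta.argmax()
print(f"single maximum delta_max = {delta[i]:.3f} at t_max = {t[i]:.2f}")
```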
Kovacs effect
[ "Physics" ]
985
[ "Amorphous solids", "Statistical mechanics", "Unsolved problems in physics" ]
65,194,617
https://en.wikipedia.org/wiki/Split%20and%20pool%20synthesis
The split and pool (split-mix) synthesis is a method in combinatorial chemistry that can be used to prepare combinatorial compound libraries. It is a stepwise, highly efficient process realized in repeated cycles. The procedure makes it possible to prepare millions or even trillions of compounds as mixtures that can be used in drug research. History According to traditional methods, most organic compounds are synthesized one by one from building blocks, coupling them together one after the other in a stepwise manner. Before 1982 nobody was even dreaming about making hundreds or thousands of compounds in a single process, let alone millions or even trillions. So the productivity of the split and pool method invented by Prof. Á. Furka (Eötvös Loránd University, Budapest, Hungary) in 1982 seemed incredible at first sight. The method had been described in a document notarized in the same year. The document is written in Hungarian and translated to English. Motivations that led to the invention are found in a 2002 paper, and the method was first published in international congresses in 1988, then in print in 1991. The split and pool synthesis and its features The split and pool synthesis (S&P synthesis) differs from traditional synthetic methods. The important novelty is the use of compound mixtures in the process. This is the reason for its unprecedentedly high productivity. Using the method, one single chemist can make more compounds in a week than all chemists have produced in the whole history of chemistry. The S&P synthesis is applied in a stepwise manner by repeating three operations in each step of the process: dividing a compound mixture into equal portions; coupling one different building block (BB) to each portion; pooling and thoroughly mixing the portions. The original method is based on the solid-phase synthesis of Merrifield. The procedure is illustrated in the figure by the flow diagram of a two-cycle synthesis using the same three BBs in both cycles. Choosing the solid phase method in the S&P synthesis is reasonable since otherwise removal of the by-products from the mixture of compounds would be very difficult. Efficiency The high efficiency is the most important feature of the method. In a multistep synthesis of n steps using an equal number of BBs (k) in every step, the number of components in the forming combinatorial library (N) is: N = k^n. This means that the number of components increases exponentially with the number of steps (cycles) while the number of the required couplings increases only linearly. If different numbers of BBs are used in the cycles (k_1, k_2, k_3, …, k_n), the number of the formed components is: N = k_1·k_2·k_3·…·k_n. This feature of the procedure offers the possibility to synthesize a practically unlimited number of compounds. For example, if 1000 BBs are used in four cycles, 1 trillion compounds are expected to form. The number of needed couplings is only 4000! The reason for the high efficiency The explanation of the extraordinary efficiency is the use of mixtures in the synthetic steps. In a traditional reaction, one compound is coupled with one reactant and one new compound is formed. If a mixture of compounds containing n components is coupled with a single reactant, the number of new compounds formed in the single coupling is n. The difference between the traditional and the split and pool synthesis is convincingly shown by the number of coupling steps in the traditional and the split and pool synthesis of 3.2 million pentapeptides.
Conventional synthesis: 3,200,000 × 5 = 16,000,000 coupling steps (ca. 40,000 years). S&P synthesis: 20 × 5 = 100 coupling steps (ca. 5 days). It is possible to conduct the conventional synthesis in a rational way, as is shown in the figure. In this case, the number of coupling cycles is: 20 + 400 + 8,000 + 160,000 + 3,200,000 = 3,368,420 (ca. 9,200 years). The theoretical upper limit of the number of components As often mentioned, the split and pool method makes it possible to synthesize an unlimited number of compounds. In fact, the theoretical maximum number of components depends on the quantity of the library expressed in moles. If, for example, a 1 mol library is synthesized, the maximum number of components is equal to the Avogadro number: 6.02214076·10^23. In such a library each component would be represented by a single molecule. Components of the library form in equal molar quantities As far as the chemistry of the couplings makes it possible, the components of the libraries form in nearly equal molar quantities. This is made possible by dividing the mixtures into equal samples and by homogenization of the pooled samples by thoroughly mixing them. The equal molar quantity of the components of the library is very important considering their applicability. The presence of compounds in unequal quantities may lead to difficulties in the evaluation of the results in screening. The solid phase method makes it possible to use the reagents in excess to drive the reactions close to completion, since the surplus can easily be removed by filtration. The possibility of using two mixtures in the synthesis In principle, the use of two mixtures in the S&P synthesis can lead to the same combinatorial library that forms in the usual S&P method. The differences in the reactivity of BBs, however, bring about large differences in the concentrations of components, and the differences are expected to increase after each step. Although a considerable amount of labor could be saved by using the two-mixtures approach when a high number of BBs are coupled in each position, it is advisable to stick to the normally used S&P procedure. The presence of all structural varieties in the library Formation of all structural variants that can be deduced from the BBs is an important feature of the S&P synthesis. Only the S&P method can achieve this in a single process. On the other hand, the presence of all possible structural varieties in a library assures that the library is a combinatorial one and is prepared by combinatorial synthesis. Forming of one compound in the beads The consequence of using a single BB in couplings is the formation of a single compound in each bead. The formation of OBOC (one bead one compound) libraries is an inherent property of the S&P synthesis. The reason is explained in the figure. The structure of the compound formed in a bead depends on the reaction vessels in which the bead happens to occur in the synthetic route. It depends on the decision of the chemist to use the library in the tethered (OBOC) form or cleave down the compounds from the beads and use them as a solution. Realization of the split and pool synthesis The split and pool synthesis was first applied to prepare peptide libraries on solid support. The synthesis was realized in a home-made manual device shown in the figure. The device has a tube with 20 holes to which reaction vessels could be attached. One end of the tube is linked to a waste container and a water pump. Left shows the loading and filtering position, right the coupling-shaking position.
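The arithmetic behind these comparisons is easy to reproduce; the following Python sketch (illustrative only, recomputing the numbers quoted above rather than reproducing any published code) evaluates the library size N = k^n, the coupling counts of the two routes, and enumerates a tiny two-cycle library to show that all k^n sequences form.

```python
from itertools import product

def library_size(ks):
    """Number of library components: N = k_1 * k_2 * ... * k_n
    (equal to k**n when the same number of BBs is used per cycle)."""
    n = 1
    for k in ks:
        n *= k
    return n

# The 3.2 million pentapeptide example: 20 BBs in each of 5 cycles.
print(library_size([20] * 5))   # 3200000 components
print(20 * 5)                   # 100 couplings via split and pool
print(3_200_000 * 5)            # 16000000 couplings made one by one

# A tiny 2-cycle, 3-BB library contains all 3**2 = 9 sequences.
print(["".join(s) for s in product("ABC", repeat=2)])
```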
In the early years of combinatorial chemistry, an automatic machine was constructed and commercialized at Advanced ChemTech (Louisville, KY, USA). All operations of the S&P synthesis are carried out automatically under computer control. At present, the Titan 357 automatic synthesizer is available at aapptec (Louisville, KY, USA). Encoded split and pool synthesis Although in the S&P synthesis a single compound forms on each bead, its structure is not known. For this reason, encoding methods have been introduced to help determine the identity of the compound contained in a selected bead. Encoding molecules are coupled to the beads in parallel with the coupling of the BBs. The structure of the encoding molecule has to be easier to determine than that of the library member on the bead. Ohlmeyer et al. published a binary encoding method. They used mixtures of 18 tagging molecules that, after cleaving them from the beads, could be identified by electron capture gas chromatography. Nikolajev et al. applied peptide sequences for encoding. Sarkar et al. described chiral oligomers of pentenoic amides (COPAs) that can be used to construct mass-encoded OBOC libraries. Kerr et al. introduced an innovative kind of encoding. An orthogonally protected removable bifunctional linker was attached to the beads. One end of the linker was used to attach the non-natural BBs of the library, while to the other end the encoding amino acid triplets were linked. One of the earliest and very successful encoding methods was introduced by Brenner and Lerner in 1992. They proposed to attach DNA oligomers to the beads for encoding their content. The method was implemented by Nielsen, Brenner, and Janda using the bifunctional linker of Kerr et al. to attach the encoding DNA oligomers. This made it possible to cleave down the compound with the DNA encoding oligomer attached to it. Split and pool synthesis in solution Han et al. described a method that made it possible to keep the advantages of both the high efficiency of the S&P synthesis and those of a homogeneous medium in the chemical reactions. In their method, polyethylene glycol (PEG), MeO-CH2-CH2-O-(CH2-CH2-O)n-CH2-CH2-OH, was used as a soluble support in the S&P synthesis of peptide libraries. PEG proved suitable for this purpose since it is soluble in a wide variety of aqueous and organic solvents and its solubility provides homogeneous reaction conditions even when the attached molecule itself is insoluble in the reaction medium. Separation from the solution of the polymer and the synthesized compounds bound to it can be achieved by precipitation and filtration. The precipitation requires concentrating the reaction solutions, then diluting with diethyl ether or tert-butyl methyl ether. Under carefully controlled precipitation conditions, the polymer with the bound products precipitates in crystalline form and the unwanted reagents remain in solution. In the solid-phase S&P synthesis, a single compound forms on each bead, and as a consequence, the number of compounds can't exceed the number of beads. So, the theoretical maximum number of compounds depends on the quantity of the solid support and the size of the beads. On 1 g of polystyrene resin, for example, a maximum of 2 million compounds can be synthesized if the diameter of the resin beads is 90 μm, and 2 billion can be made if the bead size is 10 μm. In practice, the solid support is used in excess (often tenfold) to be sure that all expected components are formed.
The above limitation is completely removed if the solid support is omitted and the synthesis is carried out in solution. In this case, there is no upper limit concerning the number of components of the library. Both the number of components and the quantity of the library can be freely decided based only on practical considerations. An important modification was introduced in the synthesis of DNA encoded combinatorial libraries by Harbury and Halpin. The solid support in their case is replaced by the encoding DNA oligomers. This makes it possible to synthesize libraries containing even trillions of components and screen them using affinity binding methods. A different way of carrying out solution-phase S&P synthesis is applying scavenger resins to remove the byproducts. Scavenger resins are polymers having functional groups that make it possible to react with and bind components of the excess reagents, which are then filtered out from the reaction mixture. Two examples: a resin containing primary amino groups can remove the excess of acyl chlorides from reaction mixtures, while an acyl chloride resin removes amines. A fluorous technology was described by Curran. The fluorous synthesis employs functionalized perfluoroalkyl (Rf) groups, like the 4,4,5,5,6,6,7,7,8,8,9,9,9-tridecafluorononyl {CF3(CF2)4CF2CH2CH2-} group, attached to substrates or reagents. The Rf groups make it possible to remove either the product or the reagents from the reaction mixture. At the end of the procedure, the Rf groups attached to the substrate can be removed from the product. By attaching Rf groups to the substrate, the synthesis can be carried out in solution and the product can be separated from the reaction mixture by liquid extraction using a fluorous solvent like perfluoromethylcyclohexane or perfluorohexane. It can be seen that the function of the Rf groups in the synthesis is similar to that of the solid or soluble support. If the Rf tag is attached to a reagent, its excess can be removed from the reaction mixture by extraction. Polymer-supported reagents are also used in S&P synthesis. Special features in the synthesis of DNA encoded combinatorial libraries Self-assembling DNA encoded libraries One of the best examples of the special features brought about by DNA encoding is the synthesis of the self-assembling library introduced by Melkko et al. First, two sublibraries are synthesized. In one of the sublibraries, BBs are attached to the 5' end of an oligonucleotide containing a dimerization domain followed by the codes of the BBs. In the other sublibrary, the BBs are attached to the 3' end of the oligonucleotides, also containing a dimerization domain and the codes of another set of BBs. The two sublibraries are mixed in equimolar quantities, heated to 70 °C, then allowed to cool to room temperature, heterodimerize, and form the self-assembling combinatorial library. One member of such a two-pharmacophore library is shown in the figure. In affinity screening, the two BBs of the pharmacophore may interact with the two adjacent binding sites of the target protein. DNA templated libraries In the synthesis of DNA templated combinatorial libraries, the ability of the DNA double helix to direct region-specific chemical reactions is harnessed by Gartner et al. The DNA-linked reagents are kept in close proximity. This is equivalent to a virtual increase of the local concentration, which is nearly constant within a distance of 30 nucleotides. The proximity effect helps reactions to proceed. Two libraries are synthesized.
A template library contains at one end one of the BBs and its code, followed by two annealing regions for the codes of the BBs of the two reagent libraries. Each of the two reagent libraries contains a coding oligonucleotide linked with cleavable bonds to the reagent (BB), which is capable of forming a bond with the already linked BB by taking advantage of the proximity effect. The synthesis is realized in two steps as shown in the figure. Each step has three operations: mixing, annealing, coupling-cleaving. Synthesis in Yoctoreactor The yoctoreactor method introduced by Hansen et al. is based on the geometry and stability of a three-dimensional DNA structure that creates a yoctoliter (10^−24 L) size chemical reactor in which the proximity of BBs brings about reactions among them. The DNA oligomers comprise the DNA barcode for the attached BBs and form the structural elements of the reactor. One kind of yoctoreactor format is shown in the figure. Sequence encoded routing Harbury and Halpin developed DNA template libraries that direct, like genes, the synthesis of DNA encoded organic libraries. The members of the template combinatorial library contain the codes of all BBs and their order of couplings. The figure shows one member of a simple ssDNA template library (A) containing the codes of three BBs (2, 4, 6) that are planned to be successively attached. The coding regions are separated by the same non-coding regions (1, 3, 5, 7) in all members. The sequence-directed procedure uses a series of columns of resin beads, each coated with the anticodon of one of the BBs (B). When the template library is transferred to an anticodon column, the proper template member is captured by hybridization and then coupled with the appropriate BB. After finishing with all anticodon columns of a coupling position (CP), the libraries are eluted from the beads of the anticodon columns, mixed, and the mentioned operations are repeated with the series of anticodon columns of the next CP. In the figure, C shows one member of the template library captured by the "yellow" second-CP anticodon library. The template contains the "red" BB already coupled in CP1 and the "yellow" BB attached after its capture. The final library contains all of the synthesized organic compounds attached to their encoding DNA oligomers. Stepwise coupling and coding One of the most forward-looking methods commonly used for DNA encoding is applied in the synthesis of single-pharmacophore libraries. As the figure shows, the library is built by repeating the usual cycles of S&P synthesis. The second operation of the cycle is modified: in addition to coupling with the BBs, the encoding DNA oligomer is elongated by attaching the code of the BB by ligation. Synthesis using macroscopic units of solid support Modifications have been developed enabling the split and pool synthesis to produce known compounds in larger quantities than the content of a bead of solid support while retaining the high efficiency of the original method. As published by Moran et al. and Nicolau et al., the resin normally used in the solid phase synthesis was enclosed in permeable capsules including a radiofrequency label recording the BBs in order of their coupling. Both manual and automatic machines were constructed to sort the capsules into the appropriate reaction vessels. A different kind of labeled macroscopic solid support unit was introduced by Xiao et al. The supports are 1x1 cm polystyrene-grafted square plates.
The medium carrying the code is a 3x3 mm ceramic plate in the center of the synthesis support. The code is etched into the ceramic support by a CO2 laser in the form of a two-dimensional bar code that can be read by a special scanner. String synthesis The string synthesis introduced by Furka et al. uses stringed macroscopic solid support units (crowns), and the units are identified by the position they occupy on the string. One string is assigned for every building block in the synthesis. In the coupling stage, the string is in the proper reaction vessel. The content of the strings coming out from a synthetic step must be redistributed into the strings of the next step. The units are not pooled. The redistribution demonstrated in the figure follows the combinatorial distribution rule: all products formed in a synthetic step are equally divided among all reaction vessels of the next synthetic step. Different distribution formats can be followed that allow the identification of the content of each crown, depending on its position on the new string and on the destination reaction vessel of the string. The stringed crowns and the trays used in manual sorting are shown in the figure. The destination tray is moved step by step in the direction of the arrow. The crowns are transferred in groups from the slots of the source tray into all the opposite slots of the destination tray. The transfers are directed by computer, and the products are identified by the positions the crowns occupy on the final strings. A fast automatic sorter machine has also been described. The sorter is outlined in the figure. It has two sets of aligned tubes. The lower ones move step by step in the direction shown by the arrow, and the coin-like units are dropped from the upper source tubes into the lower destination ones. The tubes may serve as reaction vessels too. Software has also been developed that can direct sorting when not a full combinatorial library is synthesized but only a set of its components, picked out from the full library, is prepared. References Combinatorial chemistry Chemical synthesis
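To make the combinatorial distribution rule concrete, a small Python sketch (an illustration of the bookkeeping only, not of any published implementation; the crown and string names are hypothetical) simulates a two-cycle string synthesis with three BBs and shows that the identity of each crown follows from its string and slot positions alone, without pooling.

```python
def string_synthesis(bbs, crowns_per_string):
    """Simulate one full string synthesis: couple, redistribute, couple.

    Each crown is represented by its growing sequence of BBs; identity
    is tracked by position instead of pooling, per the rule that every
    string's products are divided equally among all strings of the
    next step.
    """
    k = len(bbs)
    # Cycle 1: one string per BB; every crown on string i receives bbs[i].
    strings = [[bbs[i]] * crowns_per_string for i in range(k)]
    # Redistribution: crown j of each string goes to string j of cycle 2.
    next_strings = [[] for _ in range(k)]
    for pool in strings:
        for j, crown in enumerate(pool[:k]):
            next_strings[j].append(crown)
    # Cycle 2: couple the second BB; position now encodes both BBs.
    return [[crown + bbs[i] for crown in pool]
            for i, pool in enumerate(next_strings)]

print(string_synthesis("ABC", 3))
# [['AA', 'BA', 'CA'], ['AB', 'BB', 'CB'], ['AC', 'BC', 'CC']] -- all
# 9 sequences, each crown identified by its string and slot.
```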
Split and pool synthesis
[ "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
4,121
[ "Combinatorial chemistry", "Materials science", "Combinatorics", "nan", "Chemical synthesis" ]
65,194,728
https://en.wikipedia.org/wiki/2-Methyldodecane
2-Methyldodecane, an organic compound with the chemical formula C13H28, is an isomer of tridecane. It is produced by the reaction of 1-bromodecane with diisopropylzinc. The reaction of decylmagnesium bromide with 2-bromopropane also produces 2-methyldodecane. Another method of producing 2-methyldodecane is to react 1-dodecene with trimethylaluminium. References Alkanes
2-Methyldodecane
[ "Chemistry" ]
111
[ "Organic compounds", "Alkanes" ]
65,195,339
https://en.wikipedia.org/wiki/Kalanchoe%20%27Tarantula%27
Kalanchoe 'Tarantula', or Kalanchoe katapifa 'Tarantula', is a succulent cultivar in the kalanchoe genus that produces small bouquets of pink flowers. Description Reaching about 30 cm in height and width, the plant features irregular, spidery leaves (hence its name), and produces long-lasting, vibrant pink flowers in spring and autumn. Cultivation It is cultivated as a houseplant and as a rock-garden or garden plant. In winter, it thrives in bright light indoors, as it is frost-intolerant. In summer it needs bright indirect light with some shade. See also Kalanchoe blossfeldiana References Kalanchoe House plants Ornamental plant cultivars Hybrid plants Drought-tolerant plants
Kalanchoe 'Tarantula'
[ "Biology" ]
160
[ "Hybrid plants", "Plants", "Hybrid organisms" ]
65,195,585
https://en.wikipedia.org/wiki/Uthapuram%20caste%20wall
The Uthapuram caste wall, called by various names such as the wall of shame or the wall of untouchability, is a 12 ft high, 600-meter-long wall built by dominant caste villagers reportedly to segregate the Dalit population in the village of Uthapuram in Tamil Nadu. The village witnessed violence between Dalits and the dominant castes during 1948, 1964 and 1989 and was also known for its caste-based discrimination. Protests campaigning to demolish the wall started in 2008, led mostly by the Communist Party of India (Marxist) and left-wing organizations. Later, a small portion of the wall was demolished by the government to allow the Dalits to access the main road. Many dominant caste villagers left the village and moved 3 km away with their belongings, reportedly as a protest against the demolition of the wall. 70 houses belonging to the Dalits were attacked in October 2008, reportedly in retaliation for the demolition of the wall, and a Dalit man was shot dead by the police. Tensions continued until 2015, when during a clash between the communities several vehicles were set on fire and many people were hospitalized. Background Caste divisions and clashes The village of Uthapuram in the Madurai district has two major castes, the dominant caste Pillai and the Dalit Pallar caste. The village was known for its caste tensions, and there were violent conflicts between the castes during the years 1948, 1964 and 1989. Caste discrimination The dominant caste villagers reportedly blocked attempts of the Dalits to build a bus stop and increased the elevation of a parapet close to the bus stop to discourage the Dalits from sitting in front of them. The tea-shops managed by caste Hindus are not visited by the Dalits. The Dalits are not permitted to enter dominant caste-dominated streets, are refused space in the community halls and in the village squares, and are also denied entry to burial sites. The wall The wall, which was 600 meters long and 12 ft high and was variously described as a caste wall, a wall of shame, a wall of bias and a wall of untouchability, was built by caste Hindus in 1989 after caste violence in the village. The wall passes through areas intended for common use by members of all the castes. It also barred Dalits from directly entering the main road. Dalits have to use a circuitous path and walk some extra miles to get to the main road. Clashes and protests in 2008 The fourth conflict began in 2008 after a period of 20 years, and kept going in numerous ways for another 5 years. It began in April 2008 when the caste Hindus used iron rods to electrify the 600-meter wall to prevent the Dalits from entering the dominant caste areas during night time. Initially, the Dalits were hesitant to contend, but the Tamil Nadu Untouchability Eradication Front (TNUEF), Communist Party of India (Marxist) (CPM), Communist Party of India (CPI) and All India Democratic Women's Association (AIDWA) vigorously opposed this action by the dominant caste villagers. A member of the TNUEF alleged that two cows were electrocuted by the electrified wall. Following the state-wide protests of the progressive organisations, the electricity minister of Tamil Nadu called for the removal of the power line. The CPI(M), along with local Dalits, started a campaign for the destruction of the caste wall. The Dalits orchestrated a demonstration in front of the Taluk office calling for the wall to be pulled down. The CPI(M)'s general secretary, N.
Varadarajan said that his party cadre would demolish the wall on their own if the government did not take any action. Demolition On 6 May, the district administration got involved and destroyed a 15-foot portion of the wall to allow the Dalits to pass, in the presence of a few hundred policemen and under the supervision of the district officials. In an act of protest, some caste Hindus returned their ration cards to the Tehsildar. About 600 dominant caste members left the village during the demolition and moved with their livestock to Thalaiyoothu, a place 3 km from the village, and declared that they would not return. The problem became tense again when the dominant caste villagers who left the village did not heed a request from the District Collector to come back soon so that everyone in the village could live in peace. When district officials met with them, they made several demands, including a patta for a temple where they had been worshiping for more than 400 years, a permanent police outpost in the village, and new housing for people whose residences, they claimed, were destroyed by Dalit anti-socials during the riots of 1989. At Thalaiyoothu on May 12, the leader of the village's dominant caste group told Frontline that his people left the village more out of panic than as a mark of rebellion. After the wall was taken down, he said, they felt insecure. He claimed the Dalits live better now, with most of them having government jobs or being landowners. He also claimed that the Dalits were actually on a buying spree and that the dominant caste members feared they might be forced to sell their property to Dalits. He also claimed that the wall was built to protect the dominant caste villagers. However, this version is not accepted by the village's Dalits. They assert they were on the receiving end of hostility, not the other way around. Attacks On 1 October 2008, more than 70 Dalit houses were attacked in response to the destruction of the wall, and a Dalit youth was shot dead by the police as a result of the tensions on November 4, 2008. Continued tensions On 10 November 2011, several Dalits entered a temple controlled by the dominant caste, with police protection. Although several dominant caste members welcomed them with folded hands, there were women crying in the streets opposing their entry. In 2012, the Dalits were not allowed to participate in the temple's consecration ceremony, and in 2013 the Dalits did not attend the temple festivals. In April 2014, the dominant caste villagers locked the temple and left the village in opposition to the High Court order allowing the Dalits temple entry. In October 2015, the Dalits and the dominant caste villagers clashed during a temple festival; the clash started over a dispute about placing a garland on a tree. Six motor-bikes were set ablaze and the tehsildar's vehicle was also damaged. The police filed cases against 70 people belonging to both castes and arrested 21. Several people injured during the clashes were hospitalized. References External links Accord in Uthapuram - The Hindu Frontline Madurai district Separation barriers Caste-related violence in India Crime in Tamil Nadu Social history of Tamil Nadu History of Tamil Nadu (1947–present) Dalit history Violence against Dalits in Tamil Nadu
Uthapuram caste wall
[ "Engineering" ]
1,366
[ "Separation barriers" ]
65,198,326
https://en.wikipedia.org/wiki/Priority%20matching
In graph theory, a priority matching (also called: maximum priority matching) is a matching that maximizes the number of high-priority vertices that participate in the matching. Formally, we are given a graph G = (V, E), and a partition of the vertex set V into subsets V_1, V_2, …, V_k, called priority classes. A priority matching is a matching that, among all possible matchings, saturates the largest number of vertices from V_1; subject to this, it saturates the largest number of vertices from V_2; subject to this, it saturates the largest number of vertices from V_3; and so on. Priority matchings were introduced by Alvin Roth, Tayfun Sonmez and Utku Unver in the context of kidney exchange. In this problem, the vertices are patient-donor pairs, and each edge represents a mutual medical compatibility. For example, an edge between pair 1 and pair 2 indicates that donor 1 is compatible with patient 2 and donor 2 is compatible with patient 1. The priority classes correspond to medical priority among patients. For example, some patients are in a more severe condition, so they must be matched first. Roth, Sonmez and Unver assumed that each priority class contains a single vertex, i.e., the priority classes induce a total order among the pairs. Later, Yasunori Okumura extended the work to priority classes that may contain any number of vertices. He also showed how to find a priority matching efficiently using an algorithm for maximum-cardinality matching. Jonathan S. Turner presented a variation of the augmenting path method (Edmonds' algorithm) that finds a priority matching. Later, he found a faster algorithm for bipartite graphs. See also Maximum cardinality matching Rank-maximal matching References Matching (graph theory)
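One simple way to compute a priority matching is a standard weight-scaling reduction to maximum-weight matching (a Python sketch assuming the networkx library; this is an illustration of the reduction, not the algorithm of Okumura or Turner): give each vertex of priority class i the weight (n+1)^(k−i), so that saturating one extra vertex of a higher class always outweighs any number of lower-class vertices.

```python
import networkx as nx

def priority_matching(G, classes):
    """Priority matching via weight scaling.

    classes: dict mapping each vertex to its priority class 1..k
    (1 = highest).  A vertex in class i gets weight (n+1)**(k - i);
    since each class contains at most n vertices, maximizing the
    total weight of saturated vertices lexicographically maximizes
    the per-class saturation counts.
    """
    n = G.number_of_nodes()
    k = max(classes.values())
    w = {v: (n + 1) ** (k - classes[v]) for v in G.nodes}
    for u, v in G.edges:
        # Edge weight = sum of endpoint weights, so total matching
        # weight equals the total weight of saturated vertices.
        G[u][v]["weight"] = w[u] + w[v]
    return nx.max_weight_matching(G, weight="weight")

# Path a-b-c-d where only a and d are high priority: the result is
# {a-b, c-d}, saturating both, even though {b-c} is also maximal.
G = nx.path_graph(["a", "b", "c", "d"])
print(priority_matching(G, {"a": 1, "b": 2, "c": 2, "d": 1}))
```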
Priority matching
[ "Mathematics" ]
373
[ "Matching (graph theory)", "Mathematical relations", "Graph theory" ]
65,198,905
https://en.wikipedia.org/wiki/Hans%20Sigrist%20Prize
The Hans Sigrist Prize is awarded by the Hans Sigrist Foundation at the University of Bern in Switzerland. The Foundation's benefactor, Hans Sigrist, died on December 30, 1982. The Foundation was founded in 1993, and its first award was presented in 1994. The Hans Sigrist Prize is aimed at mid-career researchers in order to boost those researchers' potential impact. Every year, the Foundation asks faculty members at the University of Bern to propose a prize field. The Foundation board chooses a field from those proposals and selects a chair for the prize search committee. Two former Hans Sigrist Prize winners have gone on to win Nobel Prizes later in their careers. The Hans Sigrist Doctoral Fellowship is a fellowship of up to three years for doctoral candidates at the University of Bern. Its field changes with the field chosen for that year's Hans Sigrist Prize. References Swiss awards Fellowships
Hans Sigrist Prize
[ "Technology" ]
192
[ "Science and technology awards", "Science award stubs" ]
65,200,566
https://en.wikipedia.org/wiki/Pecazine
Pecazine (INN), also known as mepazine (trade name Pacatal), is a phenothiazine formerly used as a neuroleptic drug or major tranquilizer. Pecazine was first synthesized in 1953 by Wilhelm Schuler and Otto Nieschulz and was quickly incorporated into psychiatric practice as an ataractic, i.e., a true tranquilizer rather than a hypnotic or depressant. It was considered interchangeable with chlorpromazine, albeit with a different side effect profile, which included less sedation and a lower risk of extrapyramidal symptoms due to its potent parasympatholytic and anticholinergic effect. As early as 1958, however, studies reported inferiority to other phenothiazines in the treatment of schizophrenia and questioned its place in the clinic; in 1960, a double-blind, randomized controlled trial found pecazine to be no more effective than placebo. Subsequent research found that, like the structurally related promethazine, pecazine is essentially devoid of antipsychotic activity. Pecazine was implicated in a number of cases of agranulocytosis and was subsequently withdrawn from the market. More recently, it has become a subject of research interest as a MALT1 and RANKL inhibitor. References Phenothiazines Withdrawn drugs
Pecazine
[ "Chemistry" ]
283
[ "Drug safety", "Withdrawn drugs" ]
65,200,662
https://en.wikipedia.org/wiki/Pyroxasulfone
Pyroxasulfone is a pre-emergence herbicide that inhibits the production of very long chain fatty acids in plants. The structure of the existing herbicide thiobencarb served as the basis for development, but pyroxasulfone requires a lower dose (100–25 g/ha) and is more stable, resulting in longer efficacy. It has been registered for use in Japan, Australia, the USA, Canada, Saudi Arabia and South Africa and has been used on crops including maize, soybean, wheat and cotton. In 2015 it was applied to over 6 million hectares of land. Pyroxasulfone is from a novel chemical class but has a similar mode of action to acetamide herbicides such as metolachlor, acetochlor and dimethenamid. It is mainly used to control annual grasses but is also effective against broadleaf weeds including lambsquarters (Chenopodium berlandieri), pigweed and waterhemp (both Amaranthus species) and black nightshade (Solanum nigrum). References Further reading Herbicides Pyrazoles Oxazoles Sulfones Trifluoromethyl compounds
Pyroxasulfone
[ "Chemistry", "Biology" ]
245
[ "Sulfones", "Biocides", "Functional groups", "Herbicides" ]
65,200,723
https://en.wikipedia.org/wiki/Entangled%20Life
Entangled Life: How fungi make our worlds, change our minds and shape our futures is a 2020 non-fiction book on mycology by British biologist Merlin Sheldrake. His first book, it was published by Random House on 12 May 2020. Author Sheldrake is an expert in mycorrhizal fungi, holds a PhD in tropical ecology from the University of Cambridge for his work on underground fungal networks in tropical forests in Panama, where he was a predoctoral research fellow of the Smithsonian Tropical Research Institute, and he is on the advisory board of the Society for the Protection of Underground Networks (SPUN). His research is primarily in the fields of fungal biology and the history of Amazonian ethnobotany. He is the son of Rupert Sheldrake, a New Age author, and Jill Purce, an author and therapist, and the brother of musician Cosmo Sheldrake. Summary The book looks at fungi from a number of angles, including decomposition, fermentation, nutrient distribution, psilocybin production, the evolutionary role fungi play in plants, and the ways in which humans relate to the fungal kingdom. It uses music and philosophy to illustrate its thesis, and introduces readers to a number of central strands of research on mycology. It is also a personal account of Sheldrake's experiences with fungi. Reception The book was published to largely positive reviews. According to Book Marks, the book received "rave" reviews based on 22 critic reviews, 16 being "rave", 5 being "positive", and 1 being "mixed". In Books in the Media, a site that aggregates critic reviews of books, the book received a rating of 4.53 out of 5, based on 7 critic reviews. Jennifer Szalai of The New York Times called the book an "ebullient and ambitious exploration" of fungi, adding, "reading it left me not just moved but altered, eager to disseminate its message of what fungi can do." Eugenia Bone of The Wall Street Journal called it "a gorgeous book of literary nature writing in the tradition of [Robert] Macfarlane and John Fowles, ripe with insight and erudition." Rachel Cooke of The Observer called it "an astonishing book that could alter our perceptions of fungi forever." Richard Kerridge, reviewing the book in The Guardian, wrote that "when we look closely [at fungi], we meet large, unsettling questions... [Sheldrake] carries us easily into these questions with ebullience and precision." The book was named on Time magazine's list of the 100 Must-Read Books of 2020 and The Daily Telegraph's list of the 50 Best Books of 2020, and was chosen as one of the best nature books of 2020 by The Times. It was serialized on BBC Radio 4 as the book of the week, and is a Sunday Times best-seller. It won the Wainwright Prize in the Global Conservation Writing category, the Guild of Food Writers First Book Award, and the 2021 Royal Society Science Books Prize. It was shortlisted for the 2021 British Book Award for Non-Fiction: Narrative Book of the Year. Entangled Life was an inspiration for the Spring 2021 couture collection by Iris van Herpen. Sheldrake presented the documentary "Fungi: Web of Life", narrated by Björk. References External links Merlin Sheldrake in Hyde Park Civilization on ČT24 17.6.2023 (moderator Daniel Stach) Mycology 2020 non-fiction books Random House books Mycological literature
Entangled Life
[ "Biology" ]
739
[ "Mycology" ]
65,201,794
https://en.wikipedia.org/wiki/A-230
A-230 is an organophosphate nerve agent. It was developed in the Soviet Union under the FOLIANT program and is one of the group of compounds referred to as Novichok agents that were revealed by Vil Mirzayanov. A-230 is possibly the most potent nerve agent for which specific toxicity figures have been published, with a human lethal dose estimated to be less than 0.1 mg. However, it was felt to be less suitable for weaponisation than other agents such as A-232 and A-234, due to issues with the liquid agent exhibiting low volatility and solidifying at low temperatures, as well as poor stability in the presence of water. Legal status A-230 has been added to Schedule 1 of the Annex on Chemicals of the Chemical Weapons Convention as of June 2020, and it has been explicitly named as an example compound for schedule 1.A.13. For chemicals listed in Schedule 1, the most stringent declaration and verification measures are in place, combined with far-reaching limits and bans on production and use. Notably, Annex 1 does not explicitly relate this structure to the name A-230; it merely adds this particular structure to the prohibited compounds section. See also C01-A035 C01-A039 A-242 EA-3148 EA-3990 Methylfluorophosphonylcholine VR VP References Acetylcholinesterase inhibitors Amidines Phosphonamidofluoridates Novichok agents
A-230
[ "Chemistry" ]
307
[ "Bases (chemistry)", "Amidines", "Functional groups" ]
65,206,035
https://en.wikipedia.org/wiki/Eberhard%27s%20theorem
In mathematics, and more particularly in polyhedral combinatorics, Eberhard's theorem partially characterizes the multisets of polygons that can form the faces of simple convex polyhedra. It states that, for given numbers of triangles, quadrilaterals, pentagons, heptagons, and other polygons other than hexagons, there exists a convex polyhedron with those given numbers of faces of each type (and an unspecified number of hexagonal faces) if and only if those numbers of polygons obey a linear equation derived from Euler's polyhedral formula. The theorem is named after Victor Eberhard, a blind German mathematician, who published it in 1888 in his habilitation thesis and in expanded form in an 1891 book on polyhedra. Definitions and statement For an arbitrary convex polyhedron, one can define numbers $p_3, p_4, p_5, \dots$, where $p_k$ counts the faces of the polyhedron that have exactly $k$ sides. A three-dimensional convex polyhedron is defined to be simple when every vertex of the polyhedron is incident to exactly three edges. In a simple polyhedron, every vertex is incident to three angles of faces, and every edge is incident to two sides of faces. Since the numbers of angles and sides of the faces are given, one can calculate the three numbers $v$ (the total number of vertices), $e$ (the total number of edges), and $f$ (the total number of faces), by summing over all faces and multiplying by an appropriate factor: $v = \tfrac{1}{3}\sum_k k p_k$, $e = \tfrac{1}{2}\sum_k k p_k$, and $f = \sum_k p_k$. Plugging these values into Euler's polyhedral formula $v - e + f = 2$ and clearing denominators leads to the equation $\sum_k (6 - k) p_k = 12$, which must be satisfied by the face counts of every simple polyhedron. However, this equation is not affected by the value of $p_6$ (as its multiplier $6 - 6$ is zero), and, for some choices of the other face counts, changing $p_6$ can change whether or not a polyhedron with those face counts exists. That is, obeying this equation on the face counts is a necessary condition for the existence of a polyhedron, but not a sufficient condition, and a complete characterization of which face counts are realizable would need to take into account the value of $p_6$. Eberhard's theorem implies that the equation above is the only necessary condition that does not depend on $p_6$. It states that, if an assignment of numbers to $p_k$ (omitting $p_6$) obeys the equation $\sum_{k \neq 6} (6 - k) p_k = 12$, then there exists a value of $p_6$ and a simple convex polyhedron with exactly $p_k$ $k$-sided faces for all $k$. Examples There are three simple Platonic solids, the tetrahedron, cube, and dodecahedron. The tetrahedron has $p_3 = 4$, the cube has $p_4 = 6$, and the dodecahedron has $p_5 = 12$, with all other values of $p_k$ being zero. These three assignments of numbers to $p_k$ all obey the equation that Eberhard's theorem requires them to obey. The existence of these polyhedra shows that, for these three assignments, there exists a polyhedron with $p_6 = 0$. The case of the dodecahedron, with $p_5 = 12$ and all others except $p_6$ zero, describes more generally the fullerenes. There is no fullerene with $p_6 = 1$, but these graphs are realizable for any other value of $p_6$; see, for instance, the 26-fullerene graph, with $p_6 = 3$. There is no simple convex polyhedron with three triangle faces, three pentagon faces, and no other faces. That is, it is impossible to have a simple convex polyhedron with $p_3 = p_5 = 3$ and $p_k = 0$ for all other $k$ (including $k = 6$). However, Eberhard's theorem states that it should be possible to form a simple polyhedron by adding some number of hexagons, and in this case one hexagon suffices: bisecting a cube on a regular hexagon passing through six of its faces produces two copies of a simple roofless polyhedron with three triangle faces, three pentagon faces, and one hexagon face.
That is, setting $p_6 = 1$ suffices in this case to produce a realizable combination of face counts. Related results An analogous result to Eberhard's theorem holds for the existence of polyhedra in which all vertices are incident to exactly four edges. In this case the equation derived from Euler's formula, $\sum_k (4 - k) p_k = 8$, is not affected by the number of quadrilaterals, and for every assignment to the numbers of faces of other types that obeys this equation it is possible to choose a number of quadrilaterals that allows a 4-regular polyhedron to be realized. A strengthened version of Eberhard's theorem states that, under the same conditions as the original theorem, there exists a number $m$ such that all choices of $p_6$ that are greater than or equal to $m$ and have the same parity as $m$ are realizable by simple convex polyhedra. A theorem of David W. Barnette provides a lower bound on the number of hexagons that are needed, whenever the number of faces of order seven or higher is at least three. It states that, in these cases, the number of hexagons must satisfy a linear lower bound in terms of the numbers of pentagons and higher-order faces. For polygons with few pentagons and many high-order faces, this inequality can force the number of hexagons to be arbitrarily large. More strongly, it can be used to find assignments to the numbers of faces for which the required number of hexagons cannot be bounded by any function of the maximum number of sides of a face. Analogues of Eberhard's theorem have also been studied for other systems of faces and face counts than simple convex polyhedra, for instance for toroidal graphs and for tessellations. See also Erdős–Gallai theorem Grinberg's theorem References Polyhedral combinatorics
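The necessary condition in Eberhard's theorem is a finite computation, so it can be checked mechanically. The following Python sketch — an illustration added here, not drawn from the theorem's literature — evaluates the left-hand side of $\sum_{k \neq 6} (6 - k) p_k$ for the face-count examples discussed above:

```python
def eberhard_lhs(face_counts):
    """Left-hand side of Eberhard's equation: sum of (6 - k) * p_k over k != 6."""
    return sum((6 - k) * p_k for k, p_k in face_counts.items() if k != 6)

examples = {
    "tetrahedron": {3: 4},                        # p_3 = 4
    "cube": {4: 6},                               # p_4 = 6
    "dodecahedron": {5: 12},                      # p_5 = 12
    "three triangles + three pentagons": {3: 3, 5: 3},
}

for name, counts in examples.items():
    lhs = eberhard_lhs(counts)
    status = "satisfies" if lhs == 12 else "violates"
    print(f"{name}: sum = {lhs}, {status} the necessary condition")
```

Note that the last example passes the check even though no simple convex polyhedron realizes it with $p_6 = 0$; the theorem only guarantees that some value of $p_6$ works (here $p_6 = 1$, as the bisected-cube construction shows).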
Eberhard's theorem
[ "Mathematics" ]
1,133
[ "Polyhedral combinatorics", "Theorems in combinatorics", "Combinatorics", "Theorems in discrete mathematics" ]
65,208,303
https://en.wikipedia.org/wiki/A-242
A-242 is an organophosphate nerve agent. It was developed in the Soviet Union under the FOLIANT program and is one of the group of compounds referred to as Novichok agents that were revealed by Vil Mirzayanov. Mirzayanov gives little specific information about A-242, stating that it is highly toxic, but he gives no figures comparing it to other related agents. It is reportedly a solid, rather than a volatile liquid as with most nerve agents, and in order to weaponise it successfully, it had to be milled into a fine powder form that could be dispersed as a dust. Legal status A-242 has been added to Schedule 1 of the Annex on Chemicals of the Chemical Weapons Convention as of June 2020, and it has been explicitly named as an example compound for schedule 1.A.15. For chemicals listed in Schedule 1, the most stringent declaration and verification measures are in place, combined with far-reaching limits and bans on production and use. See also C01-A035 C01-A039 A-230 A-232 A-234 A-262 References Acetylcholinesterase inhibitors Guanidines Phosphonamidofluoridates Novichok agents Diethylamino compounds
A-242
[ "Chemistry" ]
259
[ "Guanidines", "Functional groups" ]
65,209,976
https://en.wikipedia.org/wiki/Global%20Advisory%20Committee%20on%20Vaccine%20Safety
The Global Advisory Committee on Vaccine Safety (GACVS) is a group of experts that provides independent and authoritative guidance to the World Health Organization (WHO) on the topic of safe vaccine use. To maintain its independence, GACVS members may not represent WHO in any way. The Committee was established by the WHO in 1999, and as part of its responsibilities, oversees the Vaccine Safety Net. The group meets twice yearly and publishes its findings in the WHO Weekly Epidemiological Record. Engagements and topics undertaken by the GACVS have included the safety of vaccines for measles, influenza, human papilloma virus, Japanese encephalitis, rotavirus and hepatitis B. In May 2020, as part of the WHO's aim to coordinate global research on tests, treatments and vaccines against SARS-CoV-2, the GACVS addressed the issue of rapidly developing COVID-19 vaccines during a global emergency amid growing misinformation and vaccine hesitancy. Purpose The purpose of the GACVS is to provide a ready group of independent experts that can advise the WHO on issues relating to vaccine safety, enabling the WHO to respond quickly and authoritatively to issues of potential global importance. As part of its responsibilities, GACVS oversees the Vaccine Safety Net. History and function WHO established the GACVS in 1999 against a background of advances in and increasing knowledge of vaccines, accompanied by concerns relating to their safety and the subsequent influence on public confidence in vaccine programmes. Its membership consists of a number of experts in several fields that touch on the topic of vaccine safety, including epidemiology, vaccinology, ethics, neurology, internal medicine, and autoimmunity. It is an advisory body that provides the WHO with scientifically backed "advice on vaccine safety issues of potential global importance", makes recommendations for policy-making and for bringing together ad hoc task forces, and prioritizes aspects of checking vaccine safety. An example of an issue on which the Committee might be called to provide guidance is the matter of short- and long-term national vaccination programmes. According to its 2017 terms of reference, the Committee: cascades its findings by various means; creates task forces as and when needed; identifies causal relationships; makes recommendations to the WHO; and reviews up-to-date knowledge around vaccine safety. Members are nominated by the Director of WHO's Department of Essential Medicines and Health Products, and are appointed for an initial term of three years. Current members can only be renewed for one additional term. To maintain independence in advising, it reports that its members may not represent WHO "in any capacity or in any forum." Current and former members of the GACVS can be found on the official website. The group meets twice yearly and publishes its findings in the WHO Weekly Epidemiological Record. Engagements and topics undertaken by the GACVS include the safety of immunization during pregnancy. The GACVS is also aware of its increasing responsibility towards low- and middle-income countries that make vaccines for export. Vaccine hesitancy The GACVS aims to respond quickly and authoritatively in addressing vaccine-related adverse effects, thereby maintaining confidence in vaccines and immunization coverage, with the result that the incidence of disease falls.
The GACVS evaluates and interprets reports of adverse effects of vaccines that impact on international vaccination programmes, helping to develop better surveillance systems, particularly in low- and middle-income countries. It also monitors the clinical testing of new vaccines and their use in immunization programmes. The GACVS has been involved in issues relating to vaccine hesitancy regarding several vaccines, including those for measles, influenza, human papilloma virus, Japanese encephalitis, rotavirus and hepatitis B. COVID-19 In May 2020, during the global emergency of COVID-19 and as part of the WHO's aim to coordinate global research on tests, treatments and vaccines against the coronavirus responsible for COVID-19 disease, the GACVS addressed the issue of monitoring fast-emerging COVID-19 vaccines amid a global emergency and growing misinformation and vaccine hesitancy. A COVID-19 vaccine safety surveillance manual was published by the WHO in 2020, upon the recommendation and with the guidance of GACVS members. Evaluation On the 15-year anniversary of the GACVS, a number of members reviewed the Committee's contributions and ongoing challenges. References External links Global Advisory Committee on Vaccine Safety. World Health Organization. Health advocacy groups Vaccination-related organizations
Global Advisory Committee on Vaccine Safety
[ "Biology" ]
929
[ "Vaccination-related organizations", "Vaccination" ]
75,146,830
https://en.wikipedia.org/wiki/List%20of%20Record%20of%20Ragnarok%20characters
This is a list of characters of the manga series Record of Ragnarok. Valkyries The eldest of the valkyries and their leader, she convinces the gods to hold the Ragnarok. She despises the gods and takes advantage of the situation to enact her revenge upon them. Brunhilde's youngest sister and a valkyrie in training. The fourth of the 13 Valkyrie sisters. She performed a Völundr with Lü Bu in round 1, turning into the "Sky Piercer", a halberd. The seventh of the 13 Valkyrie sisters. She performed a Völundr with Adam in round 2, turning into a knuckleduster. The second of the 13 Valkyrie sisters. She performed a Völundr with Kojiro Sasaki in round 3, turning into the "Monohoshizao", an ōdachi. Due to Hrist's bipolar personality, the sword was able to reform itself into a daishō set of katana after being broken in half. The eleventh of the 13 Valkyrie sisters. She was forced into performing a Völundr with Jack the Ripper in round 4, turning into a pair of gloves that could turn anything into a divine weapon. The third of the 13 Valkyrie sisters. She performed a Völundr with Raiden Tameemon in round 5, turning into the "Mawashi of Flesh and Bone", a special mawashi that allowed Raiden to completely manipulate his own muscles. The tenth of the 13 Valkyrie sisters. She performed a Völundr with Qin Shi Huang in round 7, turning into a pair of spaulders, known as the "Shenluo Kaixiu" or "Almighty Spaulders", and later reformed into a sword known as the "Shi Huang Goujian Sword". The ninth of the 13 Valkyrie sisters. She performed a Völundr with Nikola Tesla in round 8, turning into materials that allowed Tesla to complete his special set of armour, known as the "Super Automaton β". The fifth of the 13 Valkyrie sisters. She performed a Völundr with Leonidas in round 9, turning into an aspis shield. The sixth of the 13 Valkyrie sisters. She performed a Völundr with Soji Okita in round 10, turning into a katana. Einherjar The einherjar are 13 human warriors chosen personally by Brunhilde to fight in Ragnarök, later being joined by Buddha, leaving the einherjar with one extra fighter. A military general and warlord who lived during the late Eastern Han dynasty of Imperial China and humanity's representative for the first match, fighting and losing against Thor. His weapon is the Sky Piercer, a halberd granted by the valkyrie Randgriz, whose special ability allows Lü Bu to break any armor. The progenitor of all humanity who fights and loses against Zeus in the second match. Designed in the image of a god, Adam can perfectly replicate any move and technique he lays his eyes upon. His weapon is a knuckleduster, granted by the valkyrie Reginleif. Despite losing, his effort inspires the rest of humanity to believe in their chances. A famous Japanese swordsman who fights and wins in the third match against Poseidon. His weapon is the Monohoshizao, a two-handed nodachi granted by the valkyrie Hrist, whose special ability allowed her to transform into two weapons after the Monohoshizao was shattered. An infamous British serial killer from the late 19th century who fights and wins in the fourth match against Hercules. He wears a pair of gloves granted by the valkyrie Hlökk, whose special ability allows Jack the Ripper to turn anything his gloves touch into a divine weapon. The highest-rated Japanese sumo wrestler from the 19th century who fights and loses in the fifth match against Shiva. He wears a mawashi granted by the valkyrie Thrud, which gives him complete control over his body's muscles.
A former human who founded Buddhism, known as "The Enlightened One". Despite having attained godhood, Buddha decides to represent and win for humanity in the sixth match, much to the ire of the other gods. He initially wields the Six Realms Staff, an oversized prayer wheel that can assume six different forms according to his current emotional state. During his fight with Hajun, Buddha is granted the Great Nirvana Sword Zero by Zerofuku's soul. The founder of the Qin dynasty and the first emperor to unify China, in the 3rd century BC, who fights and wins against Hades in the seventh match. His weapons are the Almighty Spaulders, granted by the valkyrie Alvitr. His spaulders later change into the Shi Huang Goujian Sword. In addition to his weapons, Qin is also a skilled martial artist. Qin's eyes can also see specific "star" points in a person's body, allowing him to strike at specific areas and disrupt attack flows. However, they also cause him to feel any pain he sees. A Serbian-American inventor from the 20th century, who fights and loses in the eighth match against Beelzebub. His weapon is the "Super Automaton β", a powered exoskeleton granted by the valkyrie Göndul. A Spartan king from the 5th century BC, famous for his instrumental role at the Battle of Thermopylae, who fights and loses in the ninth match against Apollo. His weapon is an aspis shield granted by the valkyrie Geirölul. The captain of the Shinsengumi, a special police force from 19th century Japan, who fights and wins in the tenth match against Susano'o no Mikoto. His weapon is a katana granted by the valkyrie Skalmöld. A Finnish sniper and war veteran from the 20th century, recognized as the deadliest marksman in history, who fights in the eleventh match against Loki. A French astrologer, physician and reputed seer from the 16th century. A Russian mystic and self-proclaimed holy man from the early 20th century. A Japanese folk hero from the Heian period. Gods' Fighters The Gods' Fighters are 13 divine warriors chosen to fight in Ragnarök. Their ranks had initially included Buddha, but he would defect to humanity's side in Round 6, later being replaced by Hades. The Norse god of thunder who fights and wins the first match against Lü Bu, armed with the hammer Mjölnir. The supreme Greek god and chairman of the Gods' Council who fights and wins the second match, fighting barehanded against Adam. The Greek god of the sea and Zeus' older brother who fights and loses the third match to Kojiro Sasaki, armed with a trident. A former human who ascended to become the Greek god of strength and heroism. He fights for the gods and loses the fourth match to Jack the Ripper, armed with a divine club that can transform based on beseeched power from his twelve labors. The four-armed Hindu god of destruction and one of the three gods that make up the Trimurti who participates in and wins the fifth match, fighting barehanded against Raiden Tameemon. A deity armed with the Misery Axe and the original form of the Japanese Seven Lucky Gods, a group of deities who bestow good fortune and serve as executioners of those who dare to defile the gods. Originally a kind-hearted deity who absorbed others' misfortune, Zerofuku's views on humanity changed upon witnessing humanity's deprived nature and being replaced by Buddha. This made Zerofuku resent humans enough to consider killing them all, though he acted on the last of his kindness to stop himself by splintering into the Seven Lucky Gods.
When Bishamonten was chosen to represent the gods in the sixth round against Buddha, he absorbed the other Lucky Gods to resume their true form as a grudge-driven Zerofuku. The two battled to a standstill before Buddha rekindled Zerofuku's old self, only for Zerofuku to be consumed when Hajun reconstituted himself to continue the fight. Zerofuku aids Buddha in spirit by performing a Völundr to become his weapon to kill Hajun, dying and departing in peace. As the Demon Lord of the Sixth Heaven, Papiyas was a legendary berserker whose body was eventually destroyed by its inability to contain his vast power. Beelzebub found and cultivated his remains into a seed that he planted in Zerofuku, which germinated when the deity lost his resentment towards humanity. The revived Papiyas consumed Zerofuku to recreate his body, taking over in the deity's match with Buddha before being killed by the ascended human. The Greek god of the underworld who replaced Buddha on the gods' roster, participating in and losing the seventh match against Qin Shi Huang while seeking vengeance for his brother Poseidon. He wields a bident fused with the remnants of Poseidon's trident. A Philistine deity portrayed as a demon in Jewish and Christian lore. He fights and defeats Nikola Tesla in the eighth match, armed with the "Staff of Apomyius", a cane that enhances his body's vibrations for offense and defense, depending on which hand Beelzebub uses to wield it. His staff was a gift from Hades. The Greek god of the sun, who fights and wins in the ninth match against Spartan King Leonidas. He fights with the "Threads of Artemis", strings of light that he is able to fully manipulate. The Shinto god of the sea and storms. One of the three central deities of Japanese mythology who fights and loses in the tenth match against Soji Okita. He wields the "Onigiri Ame-no-Murakumo", a katana forged from his previous sword, Kusanagi no Tsurugi. The Onigiri Ame-no-Murakumo was forged through the combined efforts of humanity's and the gods' greatest blacksmiths. The Norse god of deceit and blood brother to Odin, who fights in the eleventh match against Simo Häyhä. The supreme Norse god, father of Thor and blood brother of Loki. Before the start of the tenth round, Beelzebub and Buddha confront Odin, and Beelzebub discovers Odin's secret plan: to revive the origin of the universe, Arche. The Egyptian god of death, mummification, and embalmment. He was originally the fighter of the tenth round of the tournament, until Susano'o no Mikoto appeared and took his place. Humans An advisor to the warlord Lü Bu during the late Eastern Han dynasty of China. He appears during Round 1 to support Lü Bu. After Lü Bu was killed by Thor, Chen Gong, alongside the rest of Lü Bu's army, chose to follow his lord into the afterlife. The legendary horse of Lü Bu during the late Eastern Han dynasty of China. He appears during Round 1, serving as Lü Bu's mount during his entrance, and again later, after Lü Bu's legs were shattered by one of Thor's attacks. After Lü Bu was killed by Thor, Red Hare, alongside the rest of Lü Bu's army, chose to follow his lord into the afterlife. A Chinese general serving under Liu Bei during the late Eastern Han dynasty and early Three Kingdoms period of China. He appears during Round 1 to support Lü Bu. A Chinese warlord during the late Eastern Han dynasty of China and founding emperor of Shu Han.
He appears during Round 1 to support Lü Bu, and later during Round 7 to support Qin Shi Huang. A Chinese general serving under Liu Bei during the late Eastern Han dynasty of China. He appears during Round 1 to support Lü Bu. A Chinese warlord and statesman who rose to power during the late Eastern Han dynasty of China. He appears during a flashback of Lü Bu's life, overseeing the executions of Lü Bu and his army. An Italian sculptor, painter, architect, and poet of the High Renaissance. He appears during Round 2, sketching a picture of Adam. An influential German composer of the Classical period. He appears during Round 2, after Hermes plays Johann Sebastian Bach's Air on the G String to assist Zeus' entry, turning to Bach to confirm the song played was his. A German composer and musician of the late Baroque period. He appears during Round 2, after Hermes plays his Air on the G String to assist Zeus' entry, crying at how well the god had played it. The first woman and wife of Adam. She appears during Round 2 to support her husband. The first son of Adam and Eve. He appears during Round 2 to support his father. The second son of Adam and Eve. He appears during Round 2 to support his father. A Japanese samurai and adoptive son of Musashi Miyamoto. He appears during Round 3, initially skeptical of the selection of Kojiro Sasaki over someone like his father, but later supports him after Sasaki proves his worth. A Japanese swordsman and head of the Yoshioka-ryū house of swordsmanship who fought Musashi Miyamoto. He appears during Round 3, initially skeptical of the selection of Kojiro Sasaki over someone like Miyamoto, but later supports him after Sasaki proves his worth. A Japanese warrior-monk and founder of the Hōzōin-ryū school of spearmanship. He appears during Round 3, initially skeptical of the selection of Kojiro Sasaki over someone like Musashi Miyamoto, but later supports him after Sasaki proves his worth. A Japanese swordsman, philosopher, strategist, writer, and rōnin. In life, he was one of the many skilled swordsmen who were challenged by Kojiro Sasaki. He appears during Round 3, bitter at how Sasaki was chosen over someone like himself, but later supports Sasaki after he proves his worth. He later appears during Round 10, watching Soji Okita's battle against Susano'o no Mikoto. The nephew of Seigen Toda and kenjutsu instructor at the latter's dojo. In life, he believed Kojiro Sasaki to have no talent with swordsmanship, being easily angered by the latter's laziness when it came to training. However, after Sasaki returned from a six-month break from the dojo, he swiftly defeated Kagekatsu, earning his respect. He later appears during Round 3 to support Sasaki. The brother of Seigen Toda and father of Kagekatsu Toda. He served as an assistant instructor at Seigen's dojo. He initially believed Kojiro Sasaki to have no talent with swordsmanship, joking that the young boy was a "budding merchant". However, after Sasaki returned to the dojo after a six-month absence and easily defeated Kagekatsu, Kagemasa gained immense respect for the young swordsman. He later appears during Round 3 to support Sasaki. A Japanese swordsman and master of the Chujō-ryū sword style. He was also Kojiro Sasaki's kenjutsu instructor. He later appears during Round 3 to support Sasaki. A Japanese swordsman and founder of the Ittō-ryū school of sword fighting. In life, Kojiro Sasaki challenged him to a duel, which Itto would win. He later appears during Round 3 to support Sasaki.
A Japanese swordsman and master of the Yagyū Shinkage-ryū school of sword fighting. In life, Kojiro Sasaki challenged him to a duel, which Yagyu would win. He later appears during Round 3 to support Sasaki. A Japanese swordsman and founder of the Shinkage-ryū school of sword fighting. In life, Kojiro Sasaki challenged him to a duel, which Kamiizumi would win. He later appears during Round 3 to support Sasaki. A British novelist known for creating the Sherlock Holmes series. He appears during Round 4, explaining the history of Jack the Ripper to a couple of spectators who were unaware of who he was. An English playwright, poet, and actor, regarded as the greatest writer in the English language. He appears during Round 4, spectating the match between Jack the Ripper and Hercules. A young Greek boy from Thebes. He was the childhood friend of Hercules, then known as Alcides. He later appears during Round 4 to support Alcides. A prostitute who lived in Victorian London and the mother of the future Jack the Ripper. One day, Mary was visited by a struggling playwright, Jack Smith, who promised to come back and marry her out of poverty once one of his plays became famous. Holding onto this promise, Mary decided to give birth to the playwright's son, naming him "Jack" after his father. However, Jack Smith did not keep his promise, instead marrying a noblewoman after becoming successful. After news of this got out, Mary had a psychotic breakdown, yelling at her son that she wished she had never given birth to him. As a result, Jack stabbed her in the neck, calling the color of her fear "beautiful", and held her as she bled out. A playwright who lived in Victorian London. One day, while struggling to make ends meet, he visited a brothel, where he met Mary. Having seemingly fallen in love with her at first sight, he promised her that, once he became successful, he would come back and marry her out of poverty. Unbeknownst to him, this encounter resulted in him fathering a son, the future Jack the Ripper. Despite his promise, Smith proceeded to marry a noblewoman after one of his plays, Gin and Rose in the Slums, became successful. This, without his knowledge, caused Mary to have a psychotic break, which eventually led to her son, Jack, killing her in cold blood. After murdering his mother, Jack tracked Smith down to his estate, and revealed his identity as the latter's son before slicing Smith's throat. A British prostitute who lived in the same brothel as Mary and her son, the future Jack the Ripper. She frequently served as a mother-figure to the young Jack while his mother was working, taking pity on the young boy for having been born into such circumstances. She later appears during Round 4 to support Jack. A Japanese swordsman and commander of the Shinsengumi. He appears throughout the series, watching Ragnarök alongside Soji Okita, Kojiro Sasaki, and Hrist. He also appears as the chief supporter of Okita during Round 10. A Japanese sumo wrestler, recognized as the fourth yokozuna. He also served as the coach for a young Raiden Tameemon. He appears during Round 5 to support Raiden. A Japanese sumo wrestler, recognized as the fifth yokozuna. He appears during Round 5 to support Raiden Tameemon. A Japanese physician and scholar known for his translation of Kaitai Shinsho (New Book of Anatomy) and a founder of Rangaku (Western learning) and Ranpō (Dutch style medicine) in Japan. He appears during Round 5 to support Raiden Tameemon.
A Japanese artist, ukiyo-e painter and printmaker during the Edo period, famous for the woodblock print series Thirty-Six Views of Mount Fuji, which includes the iconic print The Great Wave off Kanagawa. He appears during Round 5 to support Raiden Tameemon. Father of Tarokichi Seki, who would later become Raiden Tameemon. While he was concerned for his son's well-being when he was unable to walk by the age of 2, he expressed extreme joy when his son finally did. He also respected Tarokichi's determination to learn how to walk, even when his muscles crushed all of his bones. He later appears during Round 5 to support his son. Mother of Tarokichi Seki, who would later become Raiden Tameemon. She was extremely worried for her son's well-being when he was unable to walk by the age of 2, praying to the Gods every day to give her son the strength to walk. She used this story to console Tarokichi after the other village children started fearing him, telling him to use his strength for good. She later appears during Round 5 to support her son. A young Japanese boy from the same village as Tarokichi Seki, the future Raiden Tameemon. During his youth, Toraji had an immense passion for sumo, and frequently played with the children of the village. However, he grew fearful of the young Tarokichi after the latter was able to throw him out of the ring with just one hit, calling him a "monster". He later appears during Round 5 to support Raiden. A Japanese sumo wrestler who was in charge of the Urakaze stable while Raiden Tameemon and Kajinosuke Tanikaze were training there. He later appears during Round 5 to support Raiden. A Chinese philosopher and founder of Confucianism, as well as a member of the Four Sages. He appears during Round 6 to support Buddha. His character was omitted from the anime adaptation. A Greek philosopher credited as the founder of Western philosophy, as well as a member of the Four Sages. He appears during Round 6 to support Buddha. His character was omitted from the anime adaptation. A Jewish philosopher and founder of Christianity, as well as a member of the Four Sages. He appears during Round 6 to support Buddha. His character was omitted from the anime adaptation. A king of the Shakya and father of Siddhartha Gautama. He appears during Round 6 to support his son. An ancient Indian prince who served as an older brother-figure to Siddhartha Gautama. Jataka was a just ruler who served his people well, but eventually became ill and bedridden. One day, while Siddhartha was visiting him, Jataka confessed his thoughts concerning his own happiness, expressing a desire to travel and see the world. At his funeral, Siddhartha achieved Enlightenment and took his coffin from the procession and laid it in a river, realizing Jataka's dream. He later appears during Round 6 to support Buddha. Founder of the Ming dynasty. He appears during Round 7 to support Qin Shi Huang. Founder of the Han dynasty. He appears during Round 7 to support Qin Shi Huang. The seventh emperor of the Han dynasty and one of the longest-reigning Chinese emperors. He appears during Round 7 to support Qin Shi Huang. The first emperor of the state of Cao Wei during the Three Kingdoms period. He appears during Round 7 to support Qin Shi Huang. Founder of the Eastern Wu dynasty during the Three Kingdoms period. He appears during Round 7 to support Qin Shi Huang. A Chinese woman who served as bodyguard and caretaker for Ying Zheng, the future Qin Shi Huang.
In 260 BCE, her son, Chun Ou, was unfortunately buried alive at Changping alongside hundreds of Zhao soldiers. As a result, she grew to hate the Qin. Despite her hatred, she grew affectionate toward the young Ying Zheng, who was being held prisoner in Zhao territory, after learning about his ailment. After 2 years of caring for the young boy, Ying Zheng was called back to the Qin state, as his father had recently ascended to the throne. While en route, the carriage that was being used to carry Ying Zheng was ambushed by people wishing to kill the crown prince. As the fight ensued, Chun Yan was killed, using her final breaths to encourage Ying Zheng to walk the path he believed in and become the greatest king of all. She later appears during Round 7 to support Qin Shi Huang. A young Chinese boy who was unfortunately buried alive in 260 BCE alongside hundreds of Zhao soldiers. He later appears during Round 7 to support Qin Shi Huang alongside his mother, Chun Yan. An Italian astronomer, physicist, and engineer who has been called the "Father of Modern Science". He appears during Round 8 to support Nikola Tesla. A German physicist who is regarded as one of the greatest and most influential physicists of all time. He appears during Round 8 to support Nikola Tesla. A Polish physicist and chemist who conducted pioneering research on radioactivity. She appears during Round 8 to support Nikola Tesla. An English mathematician, physicist, astronomer, alchemist, theologian, and author who was a key figure in the Scientific Revolution and the Enlightenment that followed. He appears during Round 8 to support Nikola Tesla. A Swedish chemist, engineer, inventor, businessman, and philanthropist known for the invention of dynamite and the Nobel Prize. He appears during Round 8 to support Nikola Tesla. An American inventor and businessman who developed many devices in fields such as electric power generation, mass communication, sound recording, and motion pictures. He appears during Round 8 to support Nikola Tesla. A second lieutenant in the US Navy who participated in the Philadelphia Experiment. While the military claimed the experiment was to make a ship invisible to radar, the true objective was to use Tesla coils to cause the ship to teleport. While the mission was successful at making the ship teleport, it resulted in the deaths and disappearances of 13 soldiers and scientists, and caused 6 others to go insane. The scene was so gruesome that it caused Ensign T to develop post-traumatic stress disorder. While he initially kept quiet about the event, Ensign T would eventually leak the true purpose of the event to the media. The older brother of Nikola Tesla. Dane was a young inventor who was well-known around their village and served as Nikola's main inspiration to pursue science. One day, Dane was hired to design a new windmill for the village, but he quickly became stressed over whether his new design would work or not. However, Nikola reassured Dane, stating that, even if the design failed the first time, they could just keep trying, causing him to regain his resolve and start building the windmill. However, one particularly stormy night, Dane went to check on the windmill only for his horse to be struck by lightning, immediately killing him. After that, his windmill design ended up being flawed, causing it not to move. As a result, Dane's genius was eventually forgotten by the townsfolk and the young inventor faded into obscurity. He later appears during Round 8 to support Nikola.
A young Spartan soldier and staunch follower of Leonidas. When Leonidas went against the elders' wishes and chose to fight against the invading Persians, Hagis was one of the 300 Spartan soldiers who chose to follow their king, fighting alongside him at the Battle of Thermopylae. He later appears during Round 9 to support Leonidas. A Japanese swordsman and captain of the second unit of the Shinsengumi. He appears during Round 10 to support Soji Okita. A Japanese swordsman and captain of the third unit of the Shinsengumi. He appears during Round 10 to support Soji Okita. A Japanese swordsman and captain of the eighth unit of the Shinsengumi. He appears during Round 10 to support Soji Okita. A Japanese swordsman and spy for the Shinsengumi. He appears during Round 10 to support Soji Okita. A Japanese swordsman and vice-commander of the Shinsengumi. He appears during Round 10 to support Soji Okita. A Japanese swordsman and captain of the sixth unit of the Shinsengumi. He appears during Round 10 to support Soji Okita. A Japanese swordsman and member of the Shinsengumi. He appears during Round 10 to support Soji Okita. A Japanese swordsman and member of the Shinsengumi. He appears during Round 10 to support Soji Okita. A Japanese swordsmith and creator of the dōjigiri. He, alongside Munechika Sanjo, Kunitsuna Awataguchi, Kanayagokami, and Hephaestus, reforged Susano'o's Ame-no-Murakumo-no-Tsurugi into the Onigiri Ame-no-Murakumo. A Japanese swordsmith and creator of the mikazuki. He, alongside Yasutsuna Hoki, Kunitsuna Awataguchi, Kanayagokami, and Hephaestus, reforged Susano'o's Ame-no-Murakumo-no-Tsurugi into the Onigiri Ame-no-Murakumo. A Japanese swordsmith and creator of the onimaru. He, alongside Yasutsuna Hoki, Munechika Sanjo, Kanayagokami, and Hephaestus, reforged Susano'o's Ame-no-Murakumo-no-Tsurugi into the Onigiri Ame-no-Murakumo. Gods The herald of the Greek gods and Zeus's butler, often seen giving play-by-play commentary for Ares. The Greek goddess of love. She is accompanied by a group of stone golems that she uses as a throne that holds up her large breasts. A Norse god who keeps watch for invaders and the onset of Ragnarök. He oversees and comments on the fights of Ragnarok. A pair of ravens that fly all over the world, Midgard, and bring information to the god Odin. They are usually seen resting on Odin's shoulders. The Greek god of courage and war, often feigning knowledge of how fighters pull off their special moves. The Norse god of justice and reconciliation. He appears during Round 1 to support Thor. The Norse god of war. He was killed years before Ragnarök while defending Asgard from the Jötunn. The Hindu goddess of fertility, love, and beauty, as well as Shiva's first wife. She appears during Round 5 to support Shiva. The Hindu goddess of time, death, and the end of the world, as well as Shiva's second wife. She appears during Round 5 to support Shiva. The Hindu goddess of war, as well as Shiva's third wife. She appears during Round 5 to support Shiva. The Hindu god of success, wisdom, and new beginnings. He appears during Round 5 to support Shiva, his father. The Hindu god of storms. He was the best friend of Shiva during their childhood, and dreamed of becoming the strongest god in Svarga. However, he would eventually abandon his dream after Shiva threw their match to see who was strongest. He later appears during Round 5 to support Shiva. The older of the Asura brothers. He and his brother, Nishumbha, fought against Rudra and Shiva in the past after attacking a village.
He later appears during Round 5 to support Shiva. The younger of the Asura brothers. He and his brother, Shumbha, fought against Rudra and Shiva in the past after attacking a village. He later appears during Round 5 to support Shiva. The Hindu god of fire. He was defeated, alongside Varuna, by Rudra and Shiva during the former's quest to become the strongest god in Svarga. He later appears during Round 5 to support Shiva. The Hindu god of water. He was defeated, alongside Agni, by Rudra and Shiva during the former's quest to become the strongest god in Svarga. He later appears during Round 5 to support Shiva. The Hindu god of lightning. He was defeated by Rudra and Shiva during the former's quest to become the strongest god in Svarga. He later appears during Round 5 to support Shiva. The Hindu god of preservation. He was defeated, alongside Brahma, by Rudra and Shiva during the former's quest to become the strongest god in Svarga. He later appears during Round 5 to support Shiva. The Hindu god of creation. He was defeated, alongside Vishnu, by Rudra and Shiva during the former's quest to become the strongest god in Svarga. He later appears during Round 5 to support Shiva. The Shinto god of fortune in war and battles and leader of the Seven Lucky Gods. He was initially chosen as one of the Gods' Fighters, slated to fight in Round 6 against Buddha, before he fused with the other Lucky Gods to create Zerofuku. The Shinto god of fortune in fishing and trading and member of the Seven Lucky Gods. During Round 6, he fused with the other Lucky Gods to create Zerofuku. The Shinto goddess of fortune in music, art, and knowledge and member of the Seven Lucky Gods. During Round 6, she fused with the other Lucky Gods to create Zerofuku. The Shinto god of fortune in business and plenitude and member of the Seven Lucky Gods. During Round 6, he fused with the other Lucky Gods to create Zerofuku. The Shinto god of fortune in wealth and happiness and member of the Seven Lucky Gods. During Round 6, he fused with the other Lucky Gods to create Zerofuku. The Shinto god of fortune in cooking, farming, and banking and member of the Seven Lucky Gods. During Round 6, he fused with the other Lucky Gods to create Zerofuku. The Shinto god of fortune in longevity and member of the Seven Lucky Gods. During Round 6, he fused with the other Lucky Gods to create Zerofuku. The Greek god of rivers and oceanic bodies of water. He served as Poseidon's servant until his death during Ragnarök, after which he presented Hades with the shattered remnants of Poseidon's trident in hopes that Hades would use them during his round to avenge his master. He later appears during Round 7 to support Hades. The Greek god of conquest and older brother of Zeus, whom he attempted to overthrow after the Titanomachy. While seemingly killed by Poseidon with his existence expunged from historical records, Adamas survived after Hades arranged for him to be made into a cyborg by Beelzebub, remaining in Helheim under the name of "Adamantine". The Greek primordial goddess and personification of the Earth. Shortly after the Titanomachy, Gaia, unsupportive of Zeus' ascension to King of the Cosmos, rallied together the Giants, her children, in a war against the gods, known as the Gigantomachy. A goddess known for being the first wife of Adam. At some point in the past, she joined Beelzebub on his quest to kill Satan, after learning that it was he who had killed Lucifer.
However, after falling in love with her, Beelzebub would become possessed by Satan yet again and kill Lilith. In her final moments, Lilith wished for Beelzebub to keep living, and placed a blessing on him that prevented him from ever being killed, either by himself or others. A deity who was originally to fight in the ninth round of Ragnarok; Apollo stood in for the ninth round in its place. The Shinto goddess of metalworking and technology. She, alongside Yasutsuna Hoki, Munechika Sanjo, Kunitsuna Awataguchi, and Hephaestus, reforged Susano'o's Ame-no-Murakumo-no-Tsurugi into the Onigiri Ame-no-Murakumo. She later appears during Round 10, spectating the match between Soji Okita and Susano'o. The Greek god of fire, volcanoes, and blacksmiths. He, alongside Yasutsuna Hoki, Munechika Sanjo, Kunitsuna Awataguchi, and Kanayagokami, reforged Susano'o's Ame-no-Murakumo-no-Tsurugi into the Onigiri Ame-no-Murakumo. A primordial god, Shinto god of creation, and father of the Three Noble Gods who instructed his children to rule over the world. The Shinto goddess of the sun and chief kami of the pantheon, as well as one of the Three Noble Gods, ruling over the heavens. The Shinto god of the moon and one of the Three Noble Gods, ruling over the oceans. Other Characters A lust demon that attempted to manipulate Brunhilde into sleeping with him in exchange for protecting her from the Gods' wrath. He was killed by Thor halfway through this confrontation. His character was omitted from the anime adaptation. Leader of the Titans and the personification of time, as well as the father of Zeus and his siblings. He was killed by Zeus during the Titanomachy, but not before being the only participant to successfully land a hit on him. A demon who, in the ancient past, attempted to assault Eve in the Garden of Eden, only to be stopped by a couple of birds that came to her aid. In retribution, the Serpent took a bite from the forbidden fruit, and used it as false evidence in a trial to have Eve cast out of Heaven. However, the trial wouldn't go as planned, as Adam broke into the courtroom and took several bites out of forbidden fruits, so that he could be cast out alongside Eve. Angered by this, the Serpent assumed a monstrous form and attempted to kill Adam and Eve, only to be killed himself after Adam copied his claws. A multi-headed dog that watched over Helheim. While loyal to Hades, he was also befriended by Hercules during his twelve labors. He appears during Round 4 when he fuses with Hercules in order to provide him with more power to fight Jack the Ripper. A demon-god considered by people of ancient China to be a God of War or God of Militaries who also created the five tools of war. He decreed that, should one wish to become a king, they should supply him with sacrifices and kneel before him, lest he dethrone and kill them. The brutal cycle would continue for centuries, until Chiyou was ultimately killed by Qin Shi Huang, allowing the latter to unify all of China. His fighting techniques would also serve as the basis for Qin Shi Huang's own martial arts. An angel who befriended Beelzebub in the past. He was killed by Beelzebub after the latter was possessed by Satan. An angel who befriended Beelzebub in the past. He was killed by Beelzebub after the latter was possessed by Satan. An angel who befriended Beelzebub in the past. He was killed by Beelzebub after the latter was possessed by Satan. A demon who serves as a guard of Tartarus. A demon who serves as a guard of Tartarus. A half-human, half-god who is imprisoned in Tartarus for a yet unrevealed reason.
He is also Brunhilde's lover. A being who was driven out of Heaven due to his hideous appearance. After experiencing the same treatment on Earth as he had in Heaven, he started terrorizing the citizens of ancient Greece. One day, after attacking the town of Delphi, he was challenged to a fight by a young Apollo. Despite losing day after day, Python refused to give up, which Apollo told him is what made him "beautiful". After hearing Apollo's kind words, Python stopped attacking humans and, out of respect for Apollo, built a temple dedicated to him, inscribing it with Apollo's signature phrase: "Know Thyself". References Record of Ragnarok Cultural depictions of Adam and Eve Cultural depictions of Jack the Ripper Cultural depictions of Gautama Buddha Cultural depictions of Qin Shi Huang Cultural depictions of Nikola Tesla Cultural depictions of Leonidas I Cultural depictions of Nostradamus Cultural depictions of Grigori Rasputin Cultural depictions of Michelangelo Wolfgang Amadeus Mozart in fiction Cultural depictions of Johann Sebastian Bach Cultural depictions of Cain and Abel Cultural depictions of Miyamoto Musashi Cultural depictions of Arthur Conan Doyle Cultural depictions of William Shakespeare Cultural depictions of Confucius Cultural depictions of Socrates Cultural depictions of Jesus Cultural depictions of Galileo Galilei Cultural depictions of Albert Einstein Cultural depictions of Marie Curie Cultural depictions of Isaac Newton Cultural depictions of Thomas Edison
List of Record of Ragnarok characters
[ "Astronomy" ]
8,237
[ "Cultural depictions of Isaac Newton", "Cultural depictions of astronomers", "Cultural depictions of Galileo Galilei" ]
75,147,095
https://en.wikipedia.org/wiki/287%20%28number%29
287 is the natural number following 286 and preceding 288. In mathematics 287 = 7 × 41 is an odd semiprime: a composite number with exactly two prime factors. 287 is the sum of consecutive primes in three different ways: 89 + 97 + 101, 47 + 53 + 59 + 61 + 67, and 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47. 287 is a pentagonal number, a class of figurate numbers analogous to the triangular numbers; specifically, 287 = 14(3 × 14 − 1)/2. 287 is the sum of exactly 4 nonzero squares (for example, 287 = 13² + 10² + 3² + 3²); since 287 ≡ 7 (mod 8), Legendre's three-square theorem rules out any representation with fewer nonzero squares. 287 is also a number n for which 6n − 1 = 1721 and 6n + 1 = 1723 are both prime. References Integers
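Each of these properties is a finite computation. The following Python sketch — added here for illustration and not part of the source article — verifies them; note that the twin-prime property is checked in the 6n ± 1 form given above, since 2 × 287 ± 1 (573 and 575) are both composite:

```python
def is_prime(n):
    """Trial-division primality test; adequate for the small numbers checked here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Odd semiprime: 287 = 7 * 41
assert 287 == 7 * 41 and is_prime(7) and is_prime(41)

# Runs of consecutive primes summing to 287
primes = [n for n in range(2, 288) if is_prime(n)]
runs = [primes[i:j]
        for i in range(len(primes))
        for j in range(i + 2, len(primes) + 1)
        if sum(primes[i:j]) == 287]
print(runs)  # three runs, starting at 17, at 47, and at 89

# Pentagonal number: P(14) = 14 * (3 * 14 - 1) / 2 = 287
assert 14 * (3 * 14 - 1) // 2 == 287

# 287 = 7 (mod 8), so it requires exactly four nonzero squares
assert 287 % 8 == 7 and 287 == 13**2 + 10**2 + 3**2 + 3**2

# 6n - 1 and 6n + 1 are twin primes for n = 287
assert is_prime(6 * 287 - 1) and is_prime(6 * 287 + 1)
```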
287 (number)
[ "Mathematics" ]
132
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
75,148,513
https://en.wikipedia.org/wiki/GRB%20230307A
GRB 230307A was an extremely bright, long-duration gamma-ray burst (GRB), likely produced as a consequence of a neutron star merger or black hole–neutron star merger event. It lasted around three minutes, and was observed to have a gamma-ray fluence of 3×10⁻⁴ erg cm⁻² in the 10 to 1000 keV (kiloelectronvolt) range, making it second only to GRB 221009A, an extremely bright and long-duration gamma-ray burst deemed to be the Brightest Of All Time; GRB 230307A thus has the second-highest gamma-ray fluence ever recorded. The burst was around 1000 times more powerful than a typical gamma-ray burst. The James Webb Space Telescope (JWST) detected the chemical signature of tellurium (Te). The neutron stars were once part of a spiral galaxy (the host galaxy) but were kicked out via gravitational interactions. Then, while outside of the main galaxy at a distance of 120,000 light years, they merged, creating GRB 230307A. GRB 230307A is the second-brightest gamma-ray burst detected in more than 50 years of observations and is located behind the Magellanic Bridge. Despite its long duration, it is most likely the result of the compact merger of a binary ejected from a galaxy in the local universe (redshift z=0.065). The observation of spectra of the heavy element tellurium and of lanthanides was reported from the settling dust of the event. Discovery At 15:44:06 UT on 7 March 2023, the Fermi Gamma-ray Burst Monitor (GBM) triggered on and located GRB 230307A. At the same time, the Gravitational Wave High-energy Electromagnetic Counterpart All-sky Monitor light curve showed a roughly fast rise and exponential decay (FRED) shape with a possible precursor, with a total duration of ~100 sec. At 2023-03-07T15:44:09Z UT (Solar Orbiter onboard time), the Spectrometer Telescope for Imaging X-rays (STIX) detected GRB 230307A. The gamma-ray burst signal can be clearly seen in the STIX quick-look light curves in the 10–84 keV range. The GRB has a single peak and a duration of about 40 seconds. The AGILE team also reported the detection, with T0 = 15:44:06 (UTC). The event lasted about 30 s and released a total of 527,069 counts in the MCAL detector (above a background rate of 1154 Hz), and 920,952 counts in the AC Top detector (above a background rate of 2959 Hz). The 2001 Mars Odyssey orbiter's Gamma Ray Spectrometer at Mars also reported it within 12 hours, allowing its incoming direction to be precisely estimated through Interplanetary Network triangulation. Tellurium (Te) in GRB 230307A was discovered in 2023 using the James Webb Space Telescope's (JWST) mid-infrared data. JWST obtained mid-infrared (mid-IR) imaging and spectroscopy 29 and 61 days after the burst. See also Christmas burst (GRB 101225A) – a 28-minute-long gamma-ray burst that occurred on December 25, 2010 References Long-duration gamma-ray bursts Astronomical objects discovered in 2023 Mensa (constellation) March 2023
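For a sense of scale, the quoted fluence and redshift can be combined into an approximate isotropic-equivalent energy, E_iso = 4π d_L² S / (1 + z). This back-of-the-envelope Python sketch is not from the source article: the Hubble constant and the low-redshift distance approximation are assumptions, and only S and z come from the text above.

```python
import math

S = 3e-4          # gamma-ray fluence in erg/cm^2 (10-1000 keV), from the article
z = 0.065         # redshift of the candidate host galaxy, from the article
H0 = 70.0         # Hubble constant in km/s/Mpc (assumed)
c = 299792.458    # speed of light in km/s
MPC_IN_CM = 3.0857e24

# Low-redshift approximation for the luminosity distance: d_L ~ (c z / H0) (1 + z)
d_L_cm = (c * z / H0) * (1 + z) * MPC_IN_CM

# Isotropic-equivalent energy release
E_iso = 4 * math.pi * d_L_cm**2 * S / (1 + z)
print(f"d_L ~ {d_L_cm / MPC_IN_CM:.0f} Mpc, E_iso ~ {E_iso:.2e} erg")
```

Under these assumptions the result is of order 10⁵¹–10⁵² erg; the precise published value depends on the adopted cosmology and energy band.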
GRB 230307A
[ "Astronomy" ]
699
[ "Mensa (constellation)", "Constellations" ]
75,149,985
https://en.wikipedia.org/wiki/Villa%20del%20Principe
The Villa del Principe, Palazzo del Principe, or Palace of Andrea Doria in Fassolo is one of the main historical suburban villas of Genoa, Italy. It was built in the 16th century in an area that is now located in the city center but at the time of construction was just outside the city walls, towards Capo di Faro and the Lanterna. The villa was intended as the private residence of the Genoese admiral Andrea Doria, Prince of Melfi, who often hosted emperors, kings and other foreign authorities. The villa was nonetheless never officially listed as a Palazzo dei Rolli of the Republic of Genoa, as it was a suburban villa and not an urban palace. From his residence, Andrea Doria was able to exert a strong political influence on the city, while staying away from the Doge's Palace and the often-treacherous political life of the Republic. The villa is considered one of the masterpieces of the Italian Renaissance. The interior, recently restored, is decorated with frescoes, stuccoes, tapestries and historical wooden furniture. Particularly noteworthy are Perino del Vaga's frescoes in the Salone dei Giganti and in the Loggia degli Eroi (1533), and the Flemish tapestries portraying the Battle of Lepanto (1571). It still belongs to the Doria Pamphili family and it is open to the public as a museum. History Andrea Doria and the liberation of Genoa in 1528 Andrea Doria (Oneglia, 1466 – Genoa, 1560) was a Genoese aristocrat, statesman and naval commander who served for many years in the Genoese navy, as well as offering his services as a mercenary commander. In 1528, when Genoa was under French military occupation, he attacked the harbor with thirteen galleys and took control of the city. Once the battle was over, he was offered the position of Head of State, but declined, saying that he was “not interested in the perks of power, but only in the independence and prosperity of the Republic”. In reality, many historians believe that his decision to decline the offer was dictated by his cautious approach and his deep knowledge of the Genoese political landscape. The Doria family, one of the most illustrious among the Genoese nobility, had its quarters in the heart of the city, around the Church of San Matteo, a Romanesque church under their patronage and burial place of the family from its foundation in 1125. On 12 September 1528, after the liberation of the city from the French, Andrea delivered his non-acceptance speech from the stairs of that iconic church. On the same day he received, as a thank-you gift from the Senate, the Lazzaro Doria Palace in Piazza San Matteo, which he never inhabited. Instead, he preferred to live outside the city, where he could continue to wield considerable power over the life of the Republic without being entangled in lesser political skirmishes. Thus, twelve reformers were appointed to write the new Constitution, which sanctioned the status of Genoa as an oligarchic republic, and consecrated Andrea Doria as "Censor for life" and "Father of the Motherland". The Villa del Principe in Fassolo Andrea Doria chose to live in his villa in Fassolo, in an area just outside the now demolished Porta di San Tomaso, where he lived until his death. He purchased the estate in 1521 from the Lomellini family who had acquired it from the Recanelli family in 1498.
As the villa was significantly damaged during the war events of 1528, Andrea decided to also purchase the neighboring villa Giustinani Furneto, and entrusted the restoration of the two combined villas to Perino Buonaccorsi, known as Perino del Vaga (1501–1547). The restoration took place from 1529 to 1533. After Andrea's death in 1560, the villa was further enlarged by his successor Giovanni Andrea I Doria, who commissioned Antonio Roderio to build the west wing and Giovanni Ponzello to build the lateral loggias towards the sea. The villa remained at the center of the political, artistic and social life of Genoa for the whole 16th century. In 1529 and 1533 it hosted the Holy Roman Emperor Charles V and, in 1548, the future King of Spain Philip II. On those occasions, according to the ancient chronicles, the prince organized sumptuous receptions by the seashore, jousts to celebrate his illustrious guests and fireworks to impress the crowds. In the 16th century, the villa was the one and only court that the Republic of Genoa ever knew. In the 19th century, the villa was still renowned for its beauty and hosted the French Emperor Napoleon Bonaparte, the King of Italy Victor Emmanuel II and the famous opera composer Giuseppe Verdi. In 1854 the construction of the Turin–Genoa railway cut off the northern part of the gardens from the villa, while in the 1880s the Stazione Marittima (English: "Maritime Terminal") replaced Andrea Doria's private harbor, cutting off the villa from the sea. Description Architecture The northern façade on Via San Benedetto is characterized by simple architectural lines, with a portal realized by Perino del Vaga and Silvio Cosini, surmounted by the coat of arms of the Doria family and the Latin motto "Fundavit eam Altissimus" (English: "God the Most High founded it"). The southern façade is an intricate combination of monumental loggias and colonnades overlooking the Italian gardens. The architectural units preceding the renovation by Perino del Vaga are not fully integrated, as the interest of the architect was mainly decorative rather than structural. His intervention, however, is clearly visible in the creation of the Loggia degli Eroi, whose style differs from that of the Corinthian columns of the underlying colonnade. The lateral loggias, with a Serlian design, are later additions that contribute to the overall grandiosity of the building. Decoration The pictorial decoration of the exterior, vastly celebrated already at the time of the reconstruction of the villa, was attributed to Girolamo da Treviso, Domenico Beccafumi and Il Pordenone. It is now lost but still worth noting due to the great influence that it exerted on the Genoese painting school of the 16th century, from Antonio, Ottavio and Andrea Semino to Luca Cambiaso. The interior still preserves a decoration of great impact and a large art collection, including paintings by Sebastiano del Piombo, Domenico Piola and Bronzino. The Loggia degli Eroi The Loggia degli Eroi (English: "Loggia of the Heroes") was decorated by Perino del Vaga with a wide set of frescoes and stuccoes on mythological themes. In the vaults are depicted the Forefathers of the Dorias and the Roman Virtues. These frescoes were already celebrated at the time of their execution and were mentioned with great praise by Giorgio Vasari in his biography of Perino del Vaga in 1568.
The Prince's Apartment Sala della Carità Romana Salone della Caduta dei Giganti e gli Arazzi di Alessandro Magno Camera di Perseo Camera dei sacrifici Camera di Cadmo Camera dello Zodiaco Sala di Paride Sala di Ercole The Princess's Apartment Salone del Nettuno o del naufragio Sala di Psiche Sala di Aracne Sala di Filemone Sala di Fetonte Sala del Tributo Sala del Trionfo Sala dei fatti di Prometeo Sala della Punizione Galleria Aurea Built for Giovanni Andrea I Doria, the Galleria Aurea (English: "Golden Gallery") has an elongated shape and is open on two sides. At the end of the 17th century, in line with changing tastes, it became the main reception hall of the villa in place of the Salone dei Giganti. Built by Battista Cantone and Luca Carlone in 1595, it was decorated with stuccoes and gold by the Italian sculptor Marcello Sparzo. The gardens The villa was surrounded by a lush park, once extending from Andrea Doria's private harbour half-way up the hill at the back of the villa. An artificial lake, now filled in but still remembered in the name of the uphill neighborhood called Lagaccio, was built to supply the water needed for the fountains. Southern gardens The southern part of the park towards the sea is landscaped as an Italian garden. It is open to the public and has been recently restored. It features the Fountain of the Triton by Giovanni Angelo Montorsoli, and the Fountain of Neptune by Taddeo Carlone, Giuseppe Carlone and Battista Carlone, built between 1599 and 1601. Northern gardens The northern part of the park used to be landscaped with monumental stairs, nymphaea and a colossal statue of Jupiter resembling Andrea Doria, made by Marcello Sparzo and known as "the Giant". This part of the park is now lost: cut off from the villa by the construction of the railway and the Genova Piazza Principe train station, it became a residential area. In 1913, the architect Gino Coppedè built the art nouveau Albergo Miramare next to the colossal statue of the Giant, which was eventually demolished in 1929. Gallery See also Genoa Andrea Doria Doria (family) Republic of Genoa Genoese Navy Rolli di Genova Perino del Vaga Giovanni Ponzello References Bibliography Catalogo delle Ville Genovesi. Genova: Italia Nostra, 1967, pp. 79–97. Guida d'Italia, Liguria. Touring Club Italiano, 2009, pp. 171–173. Airaldi, Gabriella. Andrea Doria. Salerno Editore, 2015. Stagno, Laura. Palazzo del Principe, Villa di Andrea Doria. Genova: SAGEP, 2005. Genoa Culture in Genoa Italian Renaissance 16th-century establishments in Italy Cultural history of Italy Italian Renaissance gardens Renaissance art Renaissance architecture in Liguria Art history Architectural history Villas in Liguria
Villa del Principe
[ "Engineering" ]
2,135
[ "Architectural history", "Architecture" ]
75,152,043
https://en.wikipedia.org/wiki/Donor%20coordination
Donor coordination is a problem in social choice. There are several donors, each of whom wants to donate some money. Each donor supports a different set of targets. The goal is to distribute the total donated amount among the various targets in a way that respects the donors' preferences. As an example, consider a town with three recreational facilities that require funding: a theater, a chess club, and a basketball field. There are two donors: Alice and George, each of whom wants to donate 3000. Alice wants to donate to indoor activities (theater or chess), whereas George prefers to donate to competitive activities (chess or basketball). Suppose further that the donors consider the facilities substitute goods, so that the utility of a donor is the total amount of money distributed to the facilities he likes. Consider the following possible distributions: In the uncoordinated distribution, Alice gives 1500 to each indoor activity, while George gives 1500 to each competitive activity. The resulting distribution is 1500, 3000, 1500. Each donor has a utility of 4500. In contrast, if they coordinate, they can contribute everything to the chess club, so the distribution becomes 0, 6000, 0. Now each donor has a utility of 6000, so this distribution Pareto-dominates the previous one. Alternatively, one can assume that the donors consider the facilities complementary goods, so that the utility of a donor is the minimum amount of money distributed to a facility he likes. In this case, the uncoordinated distribution 1500, 3000, 1500 gives both donors utility 1500; the distribution 0, 6000, 0 gives both donors utility 0; but there is an even better distribution: if the donations are divided equally, the distribution becomes 2000, 2000, 2000, and each donor has a utility of 2000. In both cases, coordination can improve the efficiency of the allocation. Donor coordination is a variant of participatory budgeting, in which the budget is donated by the voters themselves rather than given by the government. Since the donations are voluntary, it is important that the coordination algorithm ensures that each voter weakly gains from participating in the algorithm, i.e., the amount contributed to projects he approves of is weakly higher when he participates than when he does not. Donor coordination has been studied in several settings, which can be broadly categorized into divisible and indivisible: In divisible donor coordination, each target can receive and use any amount of funding (as in the opening example). In this setting, targets are often called charities. In indivisible donor coordination, each target has a pre-determined cost, and it can be either fully funded or not funded at all. In this setting, targets are often called projects. Divisible targets Donor coordination with divisible targets is similar to the problem of fractional social choice, except that in the latter, the "budget" is fixed in advance (e.g. time, probability, or government funds) and not donated voluntarily by the agents. Additive binary utilities Brandl, Brandt, Peters and Stricker study donor coordination with additive binary (dichotomous) preferences, represented by approval ballots. Formally, for each donor i there is a set of approved charities denoted by Ai, and i's utility from a distribution d is the total amount of money distributed to charities in Ai: $u_i(d) = \sum_{x \in A_i} d_x$. They analyze several rules, exemplified below for a setting with 4 targets (a, b, c, d) and 5 donors who contribute 1 each, and whose approval sets are ac, ad, bc, bd, a.
The uncoordinated rule just divides the contribution of each agent i equally among the charities approved by i. So the funding distribution is 2, 1, 1, 1 and the utilities of the five agents are 3, 3, 2, 2, 2. This mechanism is implementable and individually rational, but not efficient: the outcome is dominated, for example, by the distribution 3, 2, 0, 0, where the utilities are 3, 3, 2, 2, 3. The Nash product rule finds a budget allocation maximizing the product of utilities. It is Pareto-efficient, implementable and individually rational. However, it is neither strategyproof nor resource-monotonic. The constrained-utilitarian rule finds a budget allocation maximizing the sum of utilities among all implementable allocations. It is implementable, individually rational, strategyproof and resource-monotonic. However, it is not Pareto-efficient. They also present a new rule, which is fair, resource-monotonic and efficient. They further prove a strong impossibility result: there is no PB rule that satisfies all three of strategyproofness, efficiency, and positivity (at least one approved project of each agent receives a positive amount). The proof reasons about 386 preference profiles and was obtained with the help of a SAT solver. Additive general utilities Brandl, Brandt, Greger, Peters, Stricker and Suksompong study donor coordination assuming donors have additive but non-binary utilities. Formally, for each donor i and charity x, there is a value vi,x, and i's utility from a distribution d is $u_i(d) = \sum_{x} v_{i,x} \cdot d_x$. They prove that the Nash product rule incentivizes donors to contribute their entire budget, even when attractive outside options are available, while spending each donor's contribution only on projects the donor finds acceptable. The Nash rule is also efficient. On the down side, it is not strategyproof, and it violates simple monotonicity conditions (even in the binary case). Leontief utilities Brandt, Greger, Segal-Halevi and Suksompong study donor coordination assuming donors have Leontief utilities. This is motivated by funding charities, where it is reasonable that donors want to maximize the minimum amount given to a charity they approve. More generally, for each donor i and charity j, there is a value vi,j, and i's utility from a distribution d is $u_i(d) = \min_{j : v_{i,j} > 0} d_j / v_{i,j}$. They define a rule called the Equilibrium Distribution Rule (EDR), which finds a pure-strategy Nash equilibrium in a game in which the donors' strategies are the possible decompositions of their donations. They prove that there always exists a unique pure Nash equilibrium, and that it can be found efficiently using convex programming, by maximizing the Nash social welfare (a sum of logarithms of agents' utilities, weighted by their donations). EDR is Pareto-efficient, group-strategyproof, and satisfies several other monotonicity properties. With binary Leontief utilities, EDR is also egalitarian for projects and for agents (subject to decomposability), can be found efficiently using linear programming, and is attained at the limit of a best-response sequence. Quasilinear utilities Buterin, Hitzig and Weyl present a mechanism in which donors invest money to create public goods. They assume that agents have quasilinear utilities, so without coordination there will be under-provision of public goods due to the free-rider problem. They suggest a mechanism called Quadratic Finance, inspired by quadratic voting. The amount received by each project x is $\left(\sum_i \sqrt{c_{i,x}}\right)^2$, where ci,x is the contribution of agent i to project x.
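A minimal sketch of this funding formula (the contribution data and the `qf_allocation` helper are hypothetical illustrations, not code from the cited paper):

```python
import math

def qf_allocation(contributions):
    """Quadratic Finance: each project receives the square of the sum of
    the square roots of the individual contributions made to it."""
    return {project: sum(math.sqrt(c) for c in cs) ** 2
            for project, cs in contributions.items()}

# Hypothetical example: both projects collect 100 in raw contributions,
# but broad support is matched far more generously than a single donor.
contributions = {
    "park":   [1.0] * 100,  # 100 donors giving 1 each
    "statue": [100.0],      # one donor giving 100
}
print(qf_allocation(contributions))
# {'park': 10000.0, 'statue': 100.0}
```

Under this rule the gap between the matched amount and the raw contributions is covered by a central matching pool, which is why many small donors attract far more funding than one large donor of the same total.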
They show that, in the standard model (selfish, independent, private values, quasilinear utilities), this mechanism yields the utilitarian-optimal provision of public goods. Other ways to encourage public-goods provision are: Voting to select which projects will be implemented. This system does not let people indicate how much they support a specific project. Matching funds and tax deductions. These practices aim to amplify the effects of small contributions and increase the diversity of potential contributors, but the matching ratio is often set arbitrarily. In contrast, QF attains the same effects in an optimal way. They present variations and extensions of QF, and explain how it can be applied to campaign finance reform, funding open-source software, news media finance, charitable giving, and urban public projects. Indivisible targets Donor coordination with indivisible targets is similar to combinatorial participatory budgeting, except that in the latter, the budget is fixed in advance and not contributed voluntarily by the agents. Funding by donations only Aziz and Ganguly study a variant of indivisible participatory budgeting in which there is no exogenous budget. There is a list of potential projects, each with its own cost. Each agent approves a subset of the projects and provides an upper bound on the amount of money he can donate. The utility of each agent equals the amount of money spent on projects he approves (i.e., cost-satisfaction). The rule should specify (1) which projects are funded, and (2) how much money each donor pays. Note that, because the projects are indivisible, most donors will probably pay less than their upper bound. They study three axioms related to encouraging participation: Minimal return: each donor's utility is at least as high as the money he pays (no one loses from participating). Implementability: it is possible to decompose the budget allocation such that each donor's donation is given only to projects he approves. Implementability implies minimal return. Individual rationality: each donor's utility is at least as high as the maximum possible utility the donor could get by donating alone. Three axioms related to efficiency: Exhaustiveness: no set of agents can pool their unused donations and fund a project approved by all of them. Pareto-optimality among all allocations, or among implementable or minimal-return allocations. Payment-constrained Pareto-optimality: the allocation is not Pareto-dominated by any other allocation of at most the same price. Two axioms related to fairness: Weak core stability: no group of agents can pool their budgets and get an outcome strictly better for all group members. Proportionality: if a group of agents N all approve only the same set of projects P, and the cost of P is at most the total donation by the agents of N, then all projects in P are funded. Finally, they study strategyproofness. They study which axioms are satisfied by three welfare-maximization rules: utilitarian, egalitarian (leximin) and Nash-product; they also study their computational complexity. They also conduct experiments to study the price of fairness (how much fairness properties affect the social welfare) in instances that model two real-life donor coordination scenarios: a share-house setting and a crowdfunding setting. Aziz, Gujar, Padala, Suzuki and Vollen extend the above study to agents with cardinal ballots and quasilinear utilities.
They show that welfare maximization admits an FPTAS, but welfare maximization subject to a natural and weak participation requirement is strongly inapproximable. Combining donations and government funds: Donation-No-Harm Chen, Lackner and Maly study an extension of indivisible participatory budgeting in which there is both an exogenous budget and potential donations. Each voter can, besides voting for projects, also donate to specific projects of his choice. The donations to each project are deducted from its cost before the PB rule is activated. Their aim is to guarantee that rich donors do not use their donations to have an unfairly large influence on the total budget. Formally, they define a condition called "Donation-No-Harm", which requires that the utility of each agent when there are donations is at least as high as his utility without donations. They also study monotonicity properties specific to the setting with donations. They assume cardinal utilities. They also assume that projects belong to possibly-overlapping categories, with upper and lower quotas on each category. They study 8 rules: 4 based on global optimization and 4 based on greedy optimization. They consider three ways to adapt these rules to the setting with donations: First, reduce the cost of each project by the total amount donated to it; then run the PB rule on the reduced costs. With this adaptation, all 8 rules violate Donation-No-Harm. First, run the rule without the donations; then run the rule again, with the savings due to the donations as the new budget. With this adaptation, all 8 rules satisfy Donation-No-Harm. First, run the rule without the donations; then find a bundle with maximum social welfare among all bundles that dominate the outcome of the first stage. With this adaptation, too, all 8 rules satisfy Donation-No-Harm. Besides Donation-No-Harm, they also study three monotonicity axioms: donation-project-monotonicity, donation-welfare-monotonicity, and donation-voter-monotonicity. They also study two computational problems related to this setting: Deciding whether a given bundle is selected by some rule: this problem is coNP-hard for the 4 global-optimization rules, but in P for the 4 greedy rules. Deciding whether a given voter can spend a given amount of money in a way that will increase his utility: this problem is $\Sigma_2^p$-complete for the 4 global-optimization rules, and NP-complete for the 4 greedy rules. Donor coordination in inter-country aid In the Paris Declaration of 2005, donor countries agreed to coordinate their donations in order to eliminate duplication of efforts and better align foreign aid flows with the priorities of the recipient countries. They acknowledged that aid fragmentation impairs the effectiveness of aid. However, Nunnenkamp, Ohler and Thiele show that these ideas were not implemented in practice, and that donor coordination even declined. Leiderer presents specific evidence for this from aid to the health and education sectors in Zambia. See also Crowdfunding Effective altruism References Participatory budgeting Effective altruism
Donor coordination
[ "Biology" ]
2,841
[ "Effective altruism", "Behavior", "Altruism" ]
75,152,655
https://en.wikipedia.org/wiki/Parochial%20altruism
Parochial altruism is a concept in social psychology, evolutionary biology, and anthropology that describes altruism towards an in-group, often accompanied by hostility towards an out-group. It is a combination of altruism, defined as behavior done for the benefit of others without direct benefit to the self, and parochialism, which refers to having a limited viewpoint. Together, these concepts create parochial altruism, or altruism which is limited in scope to one's in-group. Parochial altruism is closely related to the concepts of in-group favoritism and out-group discrimination. Research has suggested that parochial altruism may have evolved in humans to promote high levels of in-group cooperation, which is advantageous for group survival. Parochial altruism is often evoked to explain social behaviors within and between groups, such as why people are cooperative within their social groups and why they may be aggressive towards other social groups. History The concept of parochial altruism was first suggested by Charles Darwin. In his book The Descent of Man, Darwin observed that competition between groups of the same species and cooperation within groups were important evolutionary forces that influenced human behavior. While Darwin first described the general concept of parochial altruism, the term was coined in 2007 by economists Jung-Kyoo Choi and Samuel Bowles. Following Darwin's initial theories, modern researchers in fields such as evolutionary biology and social psychology began investigating the evolution of group dynamics and altruism. Bowles and fellow economist Herbert Gintis were particularly influential in this work, proposing a co-evolution between warfare and in-group altruism. In addition to this work on evolution, a set of influential studies conducted with indigenous groups in Papua New Guinea was a major contribution to the study of parochial altruism. These studies demonstrated how social norms and behaviors surrounding cooperation are often shaped by parochialism. Specifically, altruistic behaviors were found to be limited to one's own ethnic, racial, or language group. This work revealed that individuals were more likely to protect members of their in-group, even if it required aggression towards out-group members. Definition and characteristics Parochial altruism refers to a form of altruistic behavior that is exhibited preferentially towards members of one's own group, often accompanied by hostility towards those outside the group. This phenomenon is characterized by a combination of "in-group love" and "out-group hate". More broadly, altruism can manifest in different forms, ranging from small acts of kindness, like helping a stranger or a friend in need, to more significant sacrifices, such as donating an organ to save another's life. Evolutionary biologists, ethologists, and psychologists have investigated the roots of altruism, suggesting that it may have evolved as a means of enhancing the survival of one's kin (kin selection) or as a strategy to receive a reciprocal benefit from another individual (the norm of reciprocity). Altruism is often contrasted with ethical egoism, the view that individuals should act in their own self-interest. The complexity of human motivation makes the distinction between altruism and self-interest difficult to draw, and this is an ongoing debate within psychology and philosophy alike.
Evolutionary theories Kin selection theory Kin selection is a theory in evolutionary biology that may offer a foundational framework to help explain the mechanisms underlying parochial altruism. In 1964, evolutionary biologist William Donald Hamilton proposed a theory and mathematical formula, commonly referred to as Hamilton's rule. The rule posits that evolutionary processes may favor altruistic behaviors when they benefit close genetic relatives, thereby indirectly promoting the transmission of shared genes. Hamilton's rule is described by the formula C < r × B, where C represents the cost to the altruist, r is the genetic relatedness between the altruist and the receiver, and B is the benefit to the receiver. In essence, kin selection suggests that individuals are more likely to perform altruistic acts if the cost to themselves is outweighed by the benefit to their relatives. It suggests that individuals may be evolutionarily predisposed to exhibit altruistic behaviors towards members of their own group, especially if those group members are close genetic relatives. Reciprocity The norm of reciprocity states that people tend to respond to others in the same way that they have been treated. For example, kind and altruistic behavior will be responded to with more kind and altruistic behavior, while unkind and aggressive behavior will be responded to with more unkind and aggressive behavior. This principle, central to the theory of reciprocal altruism introduced by Robert Trivers in 1971, suggests that altruistic behaviors within a group are reciprocated, thereby reinforcing group cohesion and mutual support. Applied to group cooperation, this idea suggests that reciprocity is evolutionarily advantageous, particularly in the context of an in-group. Reciprocal altruism extends beyond kin selection, as it benefits individuals based on their previous actions, not just genetic relatedness. Reciprocity has been observed in a wide range of species, indicating its evolutionary advantage in fostering cooperation among non-kin group members. In the context of parochial altruism, the expectation of reciprocity fosters social connection and a sense of mutual obligation that is preferential to the in-group. Co-evolution with war Evolutionary theorists have suggested that the human capacity for altruism may have co-evolved with warfare. This theory argues that in-group altruism, a core component of parochial altruism, would have increased chances of success in warfare. Groups whose members were willing to sacrifice for each other would be more cohesive and cooperative, thus conferring advantages in warfare. Ultimately, greater success in warfare would lead to greater genetic success. Conversely, the pressures and demands of warfare may have intensified the need for in-group altruism and exacerbated parochialism. This process may have led to a bidirectional relationship between warfare and parochial altruism, with each element reinforcing the other. The idea of war and altruism being intricately interconnected may also help explain the high frequency of intergroup conflicts observed in ancient human societies. Group selection theory The idea of parochial altruism may seem counterintuitive from an individual-selection standpoint, given that parochialism is often dangerous to the individual. To explain this, theorists often reference group selection theory, which suggests that natural selection operates at the group level, not just among individuals.
Specifically, behavior that is beneficial to a group, even if it is costly to an individual, may be selected for because it increases the overall survival chances and genetic success of the group. Group selection theory suggests that individual behaviors and decisions may be shaped by the needs of the group. For example, an individual may choose to sacrifice themselves by attacking an out-group if they perceive a benefit to their in-group. This theory has faced considerable criticism and is not universally accepted in the field. Third party punishment Third-party punishment is a phenomenon that occurs when an individual who was not directly affected by a transgression punishes the transgressor. This form of punishment is influential in maintaining social order and reinforcing group norms, even if it incurs a personal cost to the punisher. Third-party punishment is an integral component of enforcing social norms in societies. Research on parochial altruism often employs third-party punishment experiments, which show that individuals are more likely to protect norm violators from their in-groups and to punish those from an out-group. This bias in third-party punishment is a basis for parochial altruism. These experiments often use economic games, such as the dictator game or the prisoner's dilemma, to measure punishment. Furthermore, researchers have identified neural mechanisms for social cognition that seem to specifically modulate third-party norm enforcement. One study illustrated that participants determining punishment for out-group members who had transgressed showed greater activity and connectivity in a network of brain regions that modulate sanction-related decisions, while participants determining punishment for in-group members who had transgressed showed greater activity and connectivity in brain regions that modulate mentalizing. Cross-cultural perspectives Like many psychological phenomena, parochial altruism may manifest uniquely across different cultural contexts. Research has revealed that cultures vary in both the intensity and the expression of in-group favoritism and out-group hostility. These differences are likely the result of norms, societal structures, and historical factors that vary among cultures. Joseph Henrich and colleagues conducted a large-scale research study examining cross-cultural variations in economic and dictator games in 15 small-scale societies. Their studies revealed that economic and social environments influence altruistic behavior towards in-group members. For example, they found that societies with a higher level of market integration and adherence to religion showed more fairness in economic games. This suggests that there is a moral component of altruism that is influenced by culture and is distinct from the in-group and out-group model of parochial altruism. Additionally, theories about the coevolution of parochial altruism and war suggest that social structures and organization may play a role in shaping parochial altruism. Societies with strong clan or tribal affiliations, and particularly those with more frequent conflict, tend to exhibit more pronounced parochial altruism, reinforcing cooperation and unity within the social group. Historical and ecological factors may also influence the extent of parochial altruism within societies. In regions with a history of intergroup conflict or scarce resources that must be fought over, groups may exhibit stronger in-group loyalty and out-group aggression as an adaptive response to the environment.
Psychological and sociological implications Individual psychology Parochial altruism influences individual psychology through its impact on social identity and perception. Social identity theory suggests that individuals derive a sense of self from their group memberships. Parochial altruism can reinforce a social identity when individuals behave more altruistically towards their own in-group. Similarly, in-group favoritism and out-group hostility are central to parochial altruism, and shape how individuals perceive and interact with others. Individuals are more likely to view in-group members as trustworthy and likable, and to view out-group members as suspicious and hostile. Thus, parochial altruism is an example of how group membership shapes individual attitudes and interpersonal dynamics. Within-group relations Parochial altruism influences within-group relations by fostering a sense of unity and cooperation among group members. This is achieved through the in-group favoritism that is characteristic of parochial altruism, whereby individuals selectively behave altruistically towards members of their own group. Research on social identity illustrates how these in-group biases reinforce a sense of shared identity and collective goals. Social identity theory further posits that enhanced group cooperation can increase group morale and self-esteem, strengthening the social bonds among group members. Intergroup relations Contrary to within-group relations, parochial altruism influences intergroup relations through increased tension and conflict between in-groups and out-groups. This is driven by the out-group hostility component of parochial altruism, where individuals are more likely to punish out-group members and treat them with aggression when compared with in-group members. Research illustrates that the out-group biases characteristic of parochial altruism can lead to prejudice, discrimination, and intergroup conflict. Animal models The study of parochial altruism extends beyond human societies, with various animal models providing insight into the evolutionary origins and mechanisms of this behavior. In the animal kingdom, parochial altruism has been observed within the context of territorial defense and resource allocation within social groups. For example, chimpanzees have been observed to exhibit behaviors that mirror human parochial altruism, such as defending their group's territory against outsiders and favoring group members in food-sharing and grooming practices. These behaviors are directed towards enhancing the survival of in-group members, similar to the in-group favoritism and out-group hostility characteristic of human parochial altruism. Similar behavior has been observed in vampire bats, which demonstrate reciprocal altruism within their social groups by sharing meals with kin and non-kin group members, but not with other bats. Criticism and controversy While the concept of parochial altruism has been influential in explaining social behaviors like in-group altruism and out-group hostility, it has also received criticism. Specifically, the evolutionary basis of parochial altruism has been questioned for the theory's reliance on group selection. Group selection posits that natural selection operates at the group level, favoring traits that are beneficial for the group rather than the individual.
This concept contrasts with the traditional and more scientifically backed view of Darwinian selection, which occurs at the individual level and promotes traits beneficial to individual organisms. This debate over group selection is a longstanding issue in evolutionary biology, and group selection theory has faced critiques from scientists such as Richard Dawkins and Steven Pinker, who argue that there is not sufficient evidence to support it. An alternative theory, multi-level selection, was proposed by David Sloan Wilson and Elliott Sober as a modern interpretation of group selection. Field studies on parochial altruism during conflict have also illustrated the need for a more nuanced understanding of parochial altruism. Researchers conducted studies before, during, and after riots in Northern Ireland, investigating how the conflict influenced real-world measures of cooperation, such as charity and school donations. The findings revealed that conflict was associated with reductions in all types of altruism, both in-group and out-group, challenging the notion that inter-group conflict unconditionally promotes parochial altruism. Instead, they suggest that conflict may lead to a reduction in all types of cooperation. Critics have argued that the co-evolution account of war and altruism is an oversimplification that fails to explain peaceful interactions between groups, defensive strategies, and sex differences in parochial altruism. Future directions Emerging research seeks to investigate the neural basis of parochial altruism, using modern technologies such as neuroimaging and neurobiological approaches. Studies utilizing functional magnetic resonance imaging (fMRI) have identified specific brain regions that are activated during in-group versus out-group interactions, indicating a potential neural basis for parochial decision-making. Other research studies have examined how neuroendocrine factors, such as oxytocin and testosterone, may influence in-group favoritism and out-group hostility. A study by De Dreu et al. demonstrated that intranasal administration of oxytocin increased in-group trust and cooperation, as well as aggression toward perceived out-group threats. Other studies have illustrated that testosterone is associated with parochial altruism in humans and may modulate the neural systems associated with it. See also Altruism In-group and out-group In-group favoritism Social identity theory Cooperation Kin selection Reciprocal altruism Group selection Evolutionary game theory Moral psychology Intergroup relations References Wikipedia Student Program Altruism
Parochial altruism
[ "Biology" ]
3,097
[ "Behavior", "Altruism" ]
75,153,245
https://en.wikipedia.org/wiki/Tanmay%20A.%20M.%20Bharat
Tanmay A. M. Bharat is a programme leader in the Structural Studies Division of the MRC Laboratory of Molecular Biology. He and his group use electron tomography, together with several structural and cell biology methods, to study the cell surfaces of bacteria and archaea. His work has increased the understanding of how surface molecules help in the formation of multicellular communities of prokaryotes, examples of which include biofilms and microbiomes. He has been awarded several prizes and fellowships for his work. Education Bharat graduated with a BA in Biological Sciences from the University of Oxford, UK. His studies were supported by a Rhodes Scholarship. He then undertook research at the European Molecular Biology Laboratory in Heidelberg, Germany for his PhD, working with John A. G. Briggs. He studied the structure and assembly of pathogenic viruses using cryogenic electron microscopy and tomography. His work on several viral capsid proteins improved understanding of how viruses are assembled within infected cells. Research He subsequently joined the MRC Laboratory of Molecular Biology (LMB) in Cambridge to pursue post-doctoral research with Jan Löwe, using cryo-EM to study proteins within bacterial cells. After his post-doctoral appointment concluded, he was recruited to the Sir William Dunn School of Pathology, University of Oxford as a Wellcome Trust and Royal Society Sir Henry Dale Fellow. After obtaining tenure at Oxford, he moved back to the LMB as a programme leader in 2022. His research investigates how bacteria and archaea use their surface molecules to form multicellular communities. For instance, during human infections bacteria form biofilms that help them evade antibiotics. The group also uses electron tomography. Scientific publications Bharat is the author or co-author of over 46 scientific publications. These include: Jan Böhning, Mnar Ghrayeb, Conrado Pedebos, Daniel K. Abbas, Syma Khalid, Liraz Chai & Tanmay A. M. Bharat (2022) Donor-strand exchange drives assembly of the TasA scaffold in Bacillus subtilis biofilms. Nature Communications, volume 13, article number 7082. Tanmay A. M. Bharat, Andriko von Kügelgen & Vikram Alva (2021) Molecular Logic of Prokaryotic Surface Layer Structures. Trends in Microbiology, May;29(5):405-415. Charlotte Melia, Jani Bolla, Stefan Lanwermeyer, Daniel Mihaylov, Patrick Hoffmann, Jiandong Huo, Michael Wozny, Louis Elfari, Jan Böhning, Ray Owens, Carol Robinson, George O'Toole & Tanmay A. M. Bharat (2021) Architecture of cell-cell junctions in situ reveals a mechanism for bacterial biofilm inhibition. Proceedings of the National Academy of Sciences of the United States of America, 118(31). Andriko von Kügelgen, Vikram Alva & Tanmay A. M. Bharat (2021) Complete atomic structure of a native archaeal cell surface. Cell Reports, volume 37, issue 8, 110052. Abul K. Tarafder, Andriko von Kügelgen, Adam J. Mellul & Tanmay A. M. Bharat (2020) Phage liquid crystalline droplets form occlusive sheaths that encapsulate and protect infectious rod-shaped bacteria. Proceedings of the National Academy of Sciences of the United States of America, volume 117, issue 9, pages 4724-4731. Andriko von Kügelgen, Haiping Tang, Gail Hardy, Danguole Kureisaite-Ciziene, Yves Brun, Phillip Stansfeld, Carol Robinson & Tanmay A. M. Bharat (2020) In Situ Structure of an Intact Lipopolysaccharide-Bound Bacterial Surface Layer. Cell, 180(2): 348-358. Tanmay A. M. Bharat, Christopher J. Russo, Jan Löwe, Lori A. Passmore & Sjors H.W.
Scheres (2015) Advances in Single-Particle Electron Cryomicroscopy Structure Determination applied to Sub-tomogram Averaging. Structure, volume 23, issue 9, pages 1743-1753. Tanmay A. M. Bharat, James D. Riches, Larissa Kolesnikova, Sonja Welsch, Verena Krähling, Norman Davey, Marie-Laure Parsy, Stephan Becker & John A. G. Briggs (2011) Cryo-Electron Tomography of Marburg Virus Particles and Their Morphogenesis within Infected Cells. PLOS Biology. Awards Bharat has been awarded many prizes and fellowships. These include a 2018 Vallee Research Scholarship, the 2019 EMBL John Kendrew Award, the 2020 Philip Leverhulme Prize for Biological Sciences, the 2021 Eppendorf Award for Young European Investigators, the 2021 Lister Prize, the 2022 Colworth Medal from the Biochemical Society, and the 2023 Fleming Prize from the Microbiology Society. References Living people Rhodes Scholars Alumni of the University of Oxford Biochemists Structural biologists Virologists Microbiologists Year of birth missing (living people)
Tanmay A. M. Bharat
[ "Chemistry", "Biology" ]
1,055
[ "Biochemistry", "Structural biologists", "Biochemists", "Structural biology" ]
75,154,341
https://en.wikipedia.org/wiki/Barrel-shaped%20jug
The barrel-shaped jug is a type of pottery known in the Mediterranean from the ancient Cypriot art of the island of Cyprus, from the 10th century BCE to the 3rd century CE. This type of jug, with or without strainers, was quite common in Archaic Cypriot pottery. Because of their rounded shape, they do not stand on their own, suggesting a quite specific function. They are found in the tombs of Eastern Cyprus, and may only have had a funerary role. These jars are very similar to Chinese cocoon jars, and West-East transmission has been suggested. References Ceramic materials Ancient Cyprus Pottery
Barrel-shaped jug
[ "Engineering" ]
125
[ "Ceramic engineering", "Ceramic materials" ]
75,155,342
https://en.wikipedia.org/wiki/310%20%28number%29
310 is the natural number following 309 and preceding 311. In mathematics 310 is an even composite number. It is a sphenic number, meaning that it is the product of three distinct prime factors (310 = 2 × 5 × 31). 310 is a noncototient number, which means that the equation m − φ(m) = 310 has no solution m. 310 is the number of Dyck 11-paths with strictly increasing peaks. 310 in base 6 is 1234. The sum of the divisors of 310 is a perfect square: σ(310) = 576 = 24². References Integers
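These arithmetic claims are straightforward to verify computationally; the short sketch below (plain Python written for illustration, not taken from a source) checks the factorization, the noncototient property, the base-6 representation, and the divisor sum.

```python
def prime_factors(n):
    """Prime factorization of n as a dict {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def phi(n):
    """Euler's totient, computed from the factorization."""
    result = n
    for p in prime_factors(n):
        result = result // p * (p - 1)
    return result

f = prime_factors(310)
print(f, len(f) == 3 and all(e == 1 for e in f.values()))  # {2: 1, 5: 1, 31: 1} True

# Noncototient: m - phi(m) = 310 has no solution. For composite m the
# cototient m - phi(m) is at least sqrt(m), and for prime m it equals 1,
# so checking m up to 310**2 suffices.
print(any(m - phi(m) == 310 for m in range(2, 310**2 + 1)))  # False

print(int("1234", 6))  # 310: the base-6 digits 1234 encode 310

sigma = sum(d for d in range(1, 311) if 310 % d == 0)
print(sigma, int(sigma ** 0.5) ** 2 == sigma)  # 576 True (576 = 24**2)
```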
310 (number)
[ "Mathematics" ]
108
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
75,156,973
https://en.wikipedia.org/wiki/Pouring
Pouring is the act of tilting an open container which has liquid or bulk flowable solid inside of it, to cause the contents to flow out of the container under the influence of gravity. This may be done to move the contents to another container (such as pouring a shared drink from a bottle into individual cups), to coat a surface with the contents (as in pouring a syrup onto solid food, or the artistic technique of paint pouring), or simply to empty the original container. Physical aspects Pouring can change the physical properties of the contents, for example through agitation and contact with air; depending on context, these changes may be desirable or undesirable. For example, in the preparation of Malaysian-style 'pulled tea', the brewed tea mixed with milk is poured back and forth between two containers at an increasing distance in order to create a foam and to cool the tea. In contrast, in industrial processes such as casting, where a molten metal is poured into a mould, the pouring process can induce the creation of bifilms (doubly-oxidized surfaces folded over on each other) which lead to further defects in the final cast part. The difficulty of controlling all the variables when pouring purely by hand, along with the potential danger to people in the vicinity from splashing of hot liquid, has led to the development of automated pouring equipment for use in industrial settings. There has also been research in robotics to develop robots which can pour drinks and granular ingredients such as sugar. Systems for pouring face unique difficulties compared to other cases of robots manipulating physical objects: contact sensors cannot be attached directly to liquids, so measuring parameters of pouring, such as vibration and the amount of liquid which has been poured, may require the use of computer vision techniques. Cultural aspects The metaphor of "pouring prayers" appears in Sanskrit, Latin, and Ancient Greek poetry, in reference to the literal pouring out of a drink onto the ground or over an icon as an offering in a libation. See also Teapot effect, when a liquid being poured from a container runs down the spout or the body of the vessel instead of flowing out in an arc Decantation, a process for separating a liquid from another liquid or a solid by pouring it into a separate container Bulk material handling References Works cited Fluid mechanics
Pouring
[ "Engineering" ]
447
[ "Civil engineering", "Fluid mechanics" ]
75,157,010
https://en.wikipedia.org/wiki/Hattusa%20Green%20Stone
The Hattusa Green Stone is a roughly cubic block of nephrite standing in the remains of the Great Temple at Hattusa, capital of the Hittites in the late Bronze Age. Now on the hill above Boğazkale, in the Turkish Province of Çorum, Hattusa is a World Heritage Site. The original purpose of the stone is unknown, but it is a tourist attraction, as it has a magical reputation for granting wishes. Location The remains of at least thirty-one temples survive at Hattusa, which itself covers some four hundred acres (162 hectares). One of these, called by archaeologists "the Great Temple" and "Temple 1", stands on a raised platform and measures 215 by 140 feet (65 by 42 metres). It is believed to have been dedicated to the worship of the Sun-goddess and the Storm-god Tarḫunna, given their importance to the Hittites and the stone bases of statues which survive there. The Green Stone was found in a small room of the temple at the southern end of the street leading from the gateway. It remains there, open to the weather. By comparison with the door sills, the stone now sits below ground level, suggesting that this was not its original position. Description and purpose A block of nephrite, a dark green mineral form of jade which is common in the region, dressed into the form of a cube about 27 inches (69 cm) per side and weighing about 2,200 pounds (1,000 kg), the Green Stone is supposed to have had some religious use or purpose, but what it may have been is unknown. The suggestion has been made that it may have been merely the base of a statue. However, the stone is the only one of its kind found at Hattusa. Professor Andreas Schachner, director of archaeology at the site, commented in 2019 that he believed the stone had been used by the Hittites and all the civilizations which came after them, but why it was brought to the temple and what it was used for there remained to be discovered. The local inhabitants call the stone a "wish stone". Its magical reputation and the mystery of its origins draw many tourists from Turkey and other countries to visit it every day. Gallery Notes External links "The Mysterious Green Stone of Hattusa", Ancient Architects, YouTube Stones Hattusa Jade
Hattusa Green Stone
[ "Physics" ]
486
[ "Stones", "Physical objects", "Matter" ]
75,157,035
https://en.wikipedia.org/wiki/Keiji%20Morokuma
Keiji Morokuma (諸熊 奎治, Morokuma Keiji; July 12, 1934 – November 27, 2017) was a Japanese theoretical chemist and chemical engineer known for developing energy decomposition analysis for molecular interactions and the ONIOM method in quantum chemistry. Education and career Morokuma was born in Kagoshima, Kagoshima Prefecture and studied engineering at Kyoto University. A student at Kyoto University of the Nobel laureate Kenichi Fukui, one of the pioneers of quantum chemistry in Japan, Morokuma received his doctorate in fuel chemistry in 1963. He remained a research associate at Kyoto University until 1966, when he became a postdoc with Martin Karplus at Harvard University, working on reaction dynamics as a Fulbright visiting scholar. Afterwards, he joined the Department of Chemistry at the University of Rochester as an assistant professor in 1967 and eventually became a full professor in 1971. He stayed at Rochester until 1976 before moving to the Institute for Molecular Science in Okazaki, Japan, where he worked until 1993. From 1978 to 1993, Morokuma was also the director of the Computer Center at the institute. In 1993, Morokuma moved back to the US and became the William Henry Emerson Professor of Chemistry at Emory University. He retired from Emory University in 2006 and returned to Japan, where he became a senior research fellow at the Fukui Institute for Fundamental Chemistry at Kyoto University. He remained in Kyoto until his death in 2017. Morokuma developed the ONIOM method, which integrates molecular orbital methods with molecular mechanics at several layers of theory and uses them to calculate large molecules. He investigated potential energy surfaces of chemical reactions, the reactions and structure of nanoparticles, proteins and transition-metal complexes, as well as the photochemistry of excited-state molecules and biomolecules. Honors and awards Morokuma was a Sloan Research Fellow in 1970. In 1978, he received the Prize of the International Academy of Quantum Molecular Science, of which he was a member. In 1991, he was the first to receive the Schrödinger Medal. He received the Prize of the Japanese Chemical Society in 1992, the Fukui Medal of the Asian Pacific Association of Theoretical & Computational Chemists in 2005, and the Imperial Prize of the Japan Academy in 2008. In 2009, he was recognized as a Person of Cultural Merit in Japan. References 2017 deaths 1934 births Japanese chemists Computational chemists Academic staff of Kyoto University Persons of Cultural Merit Emory University faculty Theoretical chemists 20th-century Japanese chemists Kyoto University alumni University of Rochester faculty Sloan Research Fellows Members of the International Academy of Quantum Molecular Science
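The core of a two-layer ONIOM calculation is a simple extrapolation, E(ONIOM) = E(high, model) + E(low, real) − E(low, model), where "model" is the chemically important region and "real" is the whole system; the sketch below illustrates the bookkeeping with placeholder energies (the numbers are invented for illustration, not results of actual calculations).

```python
def oniom2_energy(e_high_model, e_low_real, e_low_model):
    """Two-layer ONIOM extrapolation: the small 'model' region is treated
    at the expensive high level, while the cheap low level estimates how
    the rest of the 'real' system perturbs it."""
    return e_high_model + e_low_real - e_low_model

# Placeholder energies in hartree (illustrative values only):
e = oniom2_energy(e_high_model=-154.72,  # high-level method on the model region
                  e_low_real=-230.41,    # low-level method on the full system
                  e_low_model=-153.98)   # low-level method on the model region
print(round(e, 2))  # -231.15
```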
Keiji Morokuma
[ "Chemistry" ]
524
[ "Quantum chemistry", "Physical chemists", "Computational chemists", "Theoretical chemistry", "Computational chemistry", "Theoretical chemists" ]
75,157,146
https://en.wikipedia.org/wiki/Marc%20Rogers%20%28security%20researcher%29
Marc Rogers is a British information security expert and ethical hacker. He received media attention for uncovering vulnerabilities in modern technologies such as Google Glass and Tesla's Model S. He was also involved in the protection of medical facilities against hackers seeking to exploit health organizations during the COVID-19 pandemic. Rogers is one of the organizers and the director of security at DEF CON, the world's largest hacker conference. Biography Rogers started as a lone hacker during the 1980s and was known by his handle "Cjunky". Years later, he transitioned to ethical hacking and initially worked for a series of European companies. In 2003, he started working for Vodafone and rose to become its head of security before leaving the company after six years. By 2013, he was employed as the Principal Security Researcher for Lookout. Rogers then moved to Cloudflare, eventually becoming its Head of Information Security, a post he held until 2018. Rogers was vice president of Cybersecurity Strategy for Okta from 2018 until 2022. In the same year, he became a senior technical advisor for the Institute for Security and Technology (IST). Rogers is a member of the U.S. Ransomware Task Force. He was a recipient of the President's Volunteer Service Award in 2023 for his work with the U.S. government against cyber criminals and cyber security threats. He is now based in San Francisco. Noted initiatives As the principal security researcher of Lookout, Rogers identified a flaw in Google Glass that gave the hacker complete control of the device. Another noted hack was his exploit of Apple's Touch ID technology, which gave him control of the iPhone 5S's fingerprint sensor, a feat that he also executed on the iPhone 6. Together with Kevin Mahaffey, Rogers also breached the technology-heavy Model S car in 2015 during his employment as principal security researcher at Cloudflare. Using a laptop, they were able to remotely control various Tesla functions. They hacked the car's network and accessed data that allowed them to get administrative access to the Model S. In 2020, Rogers co-founded the COVID-19 Cyber Threat Intelligence (CTI) League, a group formed to combat hacks against medical facilities and frontline responders during the COVID-19 pandemic. The group is composed of nearly 400 cybersecurity expert volunteers, and Rogers was one of its four initial managers. He is also among the organizers of the DEF CON security conference, the largest gathering of hackers in the world. Rogers also does consultation work for television shows that deal with cyber security, such as Mr. Robot and The Real Hustle. He is currently the co-founder and chief technology officer of the startup nbhd.ai. Book In 1999, Rogers published A New Hacker Taxonomy. In it, he suggested the classification of computer criminals based on factors that provide opportunities to commit cybercrime, such as affordability, acceptable risk, attractiveness, availability, and anonymity. He also identified internal and external factors that drive people into hacking. For Rogers, hackers driven by internal reasons are those who do it for pleasure and for the benefit of gaining new knowledge. Externally driven hacking, he explained, is undertaken for money or to secure work by proving a successful computer break-in. References Living people Cybercrime Hackers White hat (computer security) Year of birth missing (living people) COVID-19 pandemic Cloudflare people
Marc Rogers (security researcher)
[ "Technology" ]
704
[ "Lists of people in STEM fields", "Hackers" ]
75,159,304
https://en.wikipedia.org/wiki/Social%20media%20use%20in%20health%20awareness
Social media is increasingly used for health awareness. It is used not only to promote health and wellness but also to motivate and guide the public regarding various diseases and ailments. The use of social media proved to be a cornerstone of public awareness during the management of COVID-19. In recent times, it has become one of the most cost-effective tools for cardiovascular health awareness, since it can be used to motivate people to adopt healthy lifestyle practices. Over the span of a decade, cardiologist Dr. Om Murti Anil has used social media extensively to raise public awareness of cardiovascular health. Background Social media has proven useful for various chronic and incurable diseases, for which patients form groups and connect to share knowledge. Similarly, health professionals, health institutions, and various other individuals and organizations have their own social media accounts for health information, awareness, guidance, or motivation for their patients. The utilization of social media for health awareness campaigns has become increasingly prevalent in recent years. The history of utilizing social media in health campaigns can be traced back to the early 2000s with the rise of platforms such as Facebook, Twitter, and YouTube. Health campaigns Health campaigns, especially for chronic diseases like cancer and heart disease, are increasingly common on different social media platforms because social media serves as a cost-effective medium for launching and promoting health campaigns. Many organizations and governmental bodies use platforms like Twitter and Instagram to reach a wide audience. This wide outreach gives health campaigns more attention and support while raising awareness of their specific cause. Research When incorporating social media into health research recruitment, there is potential for a greater number of individuals to participate. Social media allows researchers to reach a wide range of participants while also allowing for recruitment 24 hours a day. There are many health organizations with large social media followings that allow them to reach a large number of individuals. If these organizations pair with researchers and post flyers or make posts about a study, they may be able to find the population that they are looking for. Although there are positives to using social media for health research recruitment, it is important to consider the drawbacks. Using this method in recruitment may cause competition between companies for the attention of the users. Another important point is that this is dependent on the type of health condition that is being researched. For chronic conditions, there are many organizations and platforms for support, while for acute illnesses there are not as many organizations that would be able to promote these studies and post for outreach. Patient education Patients increasingly turn to social media for health communication and health-related information. Online health communities, forums and blogs enable individuals to share their experiences, offer support, and seek advice from peers. Healthcare professionals also use social media to provide valuable insights and address common health concerns. The use of social media for patient education allows individuals to gain more information about their illness or disease along with gaining support from individuals who may be experiencing the same.
Many health organizations, such as cancer organizations or organizations for chronic health conditions, have social media platforms that allow individuals to connect and even share their own stories. Peer support benefits patients emotionally and also helps them understand their condition and how to cope with it. Another way social media allows individuals to gain more information is by improving health literacy. Medical jargon can be confusing, especially for individuals who are newly diagnosed with an illness or disease. Social media has enabled platforms that explain the information individuals may need when they are newly diagnosed or when they simply want to learn more about their illness. Medical conditions can be confusing, but social media can present information in accessible language, allowing individuals to develop a better understanding of their condition. When patients have a better understanding of their health, better health outcomes tend to follow. Misinformation While social media is a powerful tool for health awareness, it comes with challenges. Misinformation can spread rapidly, potentially leading to incorrect or harmful health practices. Ensuring the accuracy of health-related information on social media is an ongoing concern. Health misinformation can easily spread through social media to large numbers of individuals, which can make it dangerous. One example occurred in 2020, when President Donald Trump said in speeches and on Twitter that hydroxychloroquine and chloroquine could be used to treat COVID-19. Although these are antimalarial drugs, the claim spread that they could be used against COVID-19, and the resulting misinformation led to deaths and illness among individuals who took the drugs. Spreading misinformation regarding health is one of the biggest concerns when using social media for health awareness. Health misinformation increases confusion about what is true and what is false, regardless of who is spreading the information. Along with public confusion, misinformation breeds mistrust: exposed to conflicting claims, people no longer know whom to trust. While health misinformation is one of the largest issues, there are ways to help prevent it. Individuals can help by checking where their information comes from, learning to identify misinformation, and avoiding spreading it. Privacy and ethical issues The sharing of personal health information on social media raises privacy and ethical concerns. Striking a balance between raising awareness and respecting individuals' privacy remains a delicate issue. References Social media Health education
Social media use in health awareness
[ "Technology" ]
1,101
[ "Computing and society", "Social media" ]
63,657,044
https://en.wikipedia.org/wiki/Allerton%20waste%20recovery%20park
Allerton waste recovery park is a waste recovery and incineration site located on a former quarry at Allerton Mauleverer, near Knaresborough, England. It is operated by AmeyCespa on behalf of North Yorkshire County Council and the City of York Council; the site is capable of handling of household waste per year. The site is expected to cost £1.4 billion over 25 years, but it is estimated that the cost of not incinerating over the same time period would be £1.7 billion in landfill and other costs. Despite being labelled as just an incinerator, it also recycles and uses biodegradable waste to generate biogas, which is why it is known as a waste recovery park. The site is just off the A168, east of Knaresborough and north of Wetherby. History The search for a suitable location to burn the waste from both the City of York and the county of North Yorkshire had been underway since the mid-2000s. A site on Marston Business Park near Tockwith was considered before the site at Allerton Mauleverer was decided upon. The project proved controversial with those in the area and with MPs, and a 10,000-signature petition against the plant prompted a legal challenge and the submission of the petition to No. 10 Downing Street. North Yorkshire County Council approved the plan in October 2012, with final approval granted in September 2014, even after the UK government announced that it was withdrawing £65 million worth of funding. The funding had been part of an EU directive on landfill diversion targets; however, 29 other projects under consideration or approval were found to have been sufficient to fulfil the directive. A disused part of the Allerton Park Quarry was used to locate the plant on, with of earth excavated out to form the waste banks beneath the plant. The quarry used to produce sand and gravel, and had also been used as a landfill site, with permission to carry on with landfilling until 2030. The main plant is built at the bottom of part of the former quarry, which is why the chimney does not appear as tall as it actually is. However, the plant is very noticeable in the landscape, especially from the A1(M) motorway and the adjacent A168 road. The construction of the plant was undertaken by Taylor Woodrow Construction. The site is expected to cost £1.4 billion to run over its estimated 25-year lifespan. Critics have pointed out the high cost of the scheme, whereas both York City and North Yorkshire County Councils have stated that the project will deliver £300 million worth of savings over the 25-year time period, as incineration is cheaper than a landfill option. Process The plant can handle up to of waste per day. The process first filters out recyclable material, such as plastic and metals, before all the biodegradable material is removed and sent to an anaerobic digester to be turned into biogas. The remaining waste is burnt in the incinerator, which is estimated to generate over 218 GWh in a year; this is enough electricity to power between 40,000 and 60,000 homes. However, up to 10% of the waste or residue cannot be processed and is still sent to landfill. The site will export over of ash per year, which will be sold on to construction projects. A visitor centre is also located on site to allow people to see the plant in action. The facility was designed with the option of adapting it into a combined heat and power generator. A project to build over 2,500 homes at Flaxby Park, on the opposite side of the A1(M), has registered an interest in taking the steam from the site to heat the new homes. 
References External links Allerton Park Amey webpage Timelapse of the plant being built Plant process diagram Incinerators Waste power stations in England Buildings and structures in North Yorkshire 2018 establishments in England Industrial buildings completed in 2018
Allerton waste recovery park
[ "Chemistry" ]
802
[ "Incinerators", "Incineration" ]
63,657,352
https://en.wikipedia.org/wiki/Cultural%20hitchhiking
Cultural hitchhiking is a hypothesized gene-culture coevolutionary process through which cultural selection (sexual selection based on cultural preference) limits the diversity at genetically neutral loci that are transmitted in parallel with selected cultural traits. The process is thought to account for exceptionally low diversity in neutral loci, such as control regions of the mitochondrial genome, unaccounted for by any other selective forces. Simply put, selection for certain learned social and cultural behaviors can shape a population's genetic makeup in specific ways. While the notion that culture plays a significant role in shaping community genetics is widely accepted in the context of human populations, it was not considered or documented in non-human organisms until the late 1990s. The term was coined by the cetologist Hal Whitehead, who studies the cultures and population genetics of matrilineal whale communities. Cultural hitchhiking has been proposed as a cause for reduced genetic diversity at certain loci in prehistoric Homo sapiens, dolphins, killer whales, and sperm whales. Cultural hitchhiking is a significant hypothesis because it investigates the relationship between population genetics and culture. By understanding how social behavior can shape the genetic makeup of communities, scientists are better able to explain why certain communities have genetic traits distinct from the larger population. In whales The process was initially proposed by Whitehead in a 1998 paper as an explanation for the low genetic diversity in matrilineal whale species. In these communities, female individuals remain grouped together with their mothers and other female relatives. They appear to select mates from outside their immediate community based on culturally valued social traits and aptitude. Sequencing of the mitochondrial genome of individuals within these communities revealed them to have significantly reduced diversity in certain control loci compared to comparable panmictic populations. Whitehead discovered that the relative frequency of mitochondrial haplotypes characteristic of groups exhibiting more highly adaptive socially learned traits would increase, while the frequency of these haplotypes in groups with less adaptive socially learned traits would decrease. This coincidental selective pressure is thought to have led to an overall reduction in haplotype diversity across the entire species population. In dolphins In 2014 a team of biologists from the University of New South Wales attributed a remarkable geographic distribution of mitochondrial haplotypes among adjacent populations of bottlenose dolphins in a bay in Western Australia to cultural hitchhiking. The researchers found that Whitehead's hypothesis, that selection for learned social traits affects diversity in neutral gene loci, fit well with their observations. They discovered that dolphins with two of the three mitochondrial haplotypes were found predominantly in water deeper than 10 m, while those with the third haplotype were found predominantly in depths of less than 10 m. This geographic distinction between these populations is also associated with different learned behaviors. Some of the dolphins predominantly found in deeper waters exhibit foraging strategies that implement tools, such as a sponge placed on their beak. This "sponging" behavior is found to spread through vertical social transmission along a matrilineal pattern (i.e. the mothers teach the behavior to their offspring). 
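The core mechanism can be illustrated with a toy simulation, a minimal sketch under simplifying assumptions rather than a model taken from any of the papers cited here: each matrilineal group carries a neutral mitochondrial haplotype together with a culturally transmitted fitness value, and because the two are co-inherited, selection on the cultural trait also strips diversity from the neutral haplotypes. All names and parameter values below are illustrative.

import random

def surviving_haplotypes(cultural_selection, n_groups=50, generations=100, seed=1):
    """Count distinct mtDNA haplotypes remaining after repeated rounds of
    group reproduction. Each matrilineal group carries a neutral haplotype
    and a culturally transmitted fitness value; when cultural_selection is
    True, groups reproduce in proportion to that fitness, so the neutral
    haplotypes hitchhike on the cultural trait."""
    rng = random.Random(seed)
    # start with every group carrying a distinct haplotype (ids 0..n-1)
    groups = [(h, rng.uniform(0.5, 1.5)) for h in range(n_groups)]
    for _ in range(generations):
        weights = [f for _, f in groups] if cultural_selection else None
        parents = rng.choices(groups, weights=weights, k=n_groups)
        # offspring keep the mother's haplotype; culture is copied with small error
        groups = [(h, max(0.1, f + rng.gauss(0.0, 0.02))) for h, f in parents]
    return len({h for h, _ in groups})

print("with cultural selection:", surviving_haplotypes(True))
print("drift only (control):  ", surviving_haplotypes(False))

Run with cultural selection switched on, far fewer haplotypes typically survive than under the drift-only control, which is the signature the hypothesis attributes to matrilineal whale and dolphin populations.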
All dolphins exhibiting one of the deepwater haplotypes belong to a single matriline. The researchers ultimately concluded that these fine-scale genetic structures, the distinct mitochondrial haplotypes, had probably arisen from socially transmitted behaviors, in other words, through cultural hitchhiking. A study of the same area and species showed that shelling, another foraging behavior, is also, to a lesser extent, learned from associates. In early humans Cultural hitchhiking has been proposed as an explanation for a widely inferred and abrupt Y-chromosome population bottleneck across several Old World (Africa, Europe, Asia) populations around 4000-6000 BC. This bottleneck is thought to reflect a significant decline in the effective male population during Neolithic times, to an estimated 1/20th of its original size. Mitochondrial sequence records, however, seem to indicate uninhibited population increase at this time, meaning there was likely an extreme divergence between male and female effective population sizes during the bottleneck period. In a 2018 article, a team of international genetics, archaeology, and anthropology researchers hypothesized that the bottleneck was a consequence of intergroup competition between patrilineal kin groups, which caused cultural hitchhiking between Y-chromosomes and cultural groups and a reduction in Y-chromosomal diversity. The authors, Zeng et al., argue that competition between patrilineal kin communities produces two mechanisms that limit Y-chromosome diversity. The first is that patrilineal groups, by virtue of common descent, exhibit elevated levels of within-group Y-chromosome homogeneity and high between-group diversity. The second is violent inter-group competition, which disproportionately results in male casualties, concentrating similar Y-chromosome haplotypes and in some cases driving entire lineages to complete extinction. References Genetics Evolution
Cultural hitchhiking
[ "Biology" ]
977
[ "Genetics" ]
63,657,653
https://en.wikipedia.org/wiki/OnePlus%208
OnePlus 8 and OnePlus 8 Pro are Android-based smartphones manufactured by OnePlus, unveiled on April 14, 2020. They became available for purchase in the United States on April 29, 2020. Specifications Hardware Both the OnePlus 8 and 8 Pro use the Snapdragon 865 processor with the Adreno 650 GPU, with either 128 or 256 GB of non-expandable UFS 3.0 storage. Both have 8 GB or 12 GB of RAM; the 8 has LPDDR4X RAM and the 8 Pro has faster, more efficient LPDDR5 RAM. Both have stereo speakers with active noise cancellation; there is no audio jack. Design The 8 and 8 Pro are constructed similarly to previous OnePlus phones, using an anodized aluminum frame and curved Gorilla Glass 5 on both the front and back. Both have a circular cutout in the upper-left-hand corner for the front-facing camera. On the 8 Pro, this replaces the pop-up camera used on the 7 Pro and 7T Pro. The rear camera module is similar to that of the 7 Pro and 7T Pro, protruding slightly from the back panel. On the 8, the dual-LED flash is located below, while on the 8 Pro, the telephoto camera, laser autofocus and dual-LED flash are all located to the left of the module. The 8 and 8 Pro are the first OnePlus phones to receive an official IP Code water resistance rating, rated at IP68. All 8 Pro models have water resistance, although for the 8 it is present only on carrier models. Both are available in Onyx Black (glossy) and Glacial Green (matte), while the 8 Pro has its own Ultramarine Blue (matte) finish. The 8 has two additional colors, a Polar Silver (matte) finish exclusive to the Verizon model, and an Interstellar Glow (glossy) finish exclusive to the T-Mobile model. Display AMOLED panels with HDR10+ support are used on both phones. The 8's display is carried over from the 7T, a 6.55-inch (166.4 mm) 1080p screen with a 20:9 aspect ratio and a 90 Hz refresh rate. The 8 Pro's display has a 6.78-inch (172.2 mm) 1440p screen with a 19.8:9 aspect ratio and a 120 Hz refresh rate. This makes it the first smartphone to support both 1440p resolution and a 120 Hz refresh rate. The 8 Pro has an Adaptive Display feature, similar to Apple's True Tone, and an MEMC (Motion Estimation, Motion Compensation) option akin to "motion smoothing" on high-end TVs. MEMC works with supported apps and games; it analyzes footage of at least 24 fps and interpolates frames so that the footage plays back at what appears to be a higher frame rate. The 8 Pro is also one of the first smartphones able to display 1 billion colors. Biometric options include an optical (in-display) fingerprint scanner and facial recognition. Camera The camera system has been changed to further differentiate the 8 and 8 Pro. The 8's camera array consists of a 48 MP wide sensor, a 16 MP ultrawide sensor, and a 2 MP macro sensor, while the 8 Pro's camera array consists of a 48 MP wide sensor, a 48 MP ultrawide sensor, an 8 MP telephoto sensor, and a 5 MP "Color Filter Camera" for infrared photography. The color filter camera was later disabled in China. The 8's wide sensor is the same as on the 7T series, the Sony IMX586, while the 8 Pro's wide sensor is the newer Sony IMX689. Unlike the 7T, the 8 does not have a telephoto camera, which is now exclusive to the 8 Pro. OnePlus also claims that the 8 Pro uses Nokia OZO audio recording technology for its triple microphone array, which is used for the Audio 3D, Audio Zoom and Audio Windscreen camera features. The front camera on both phones uses a 16 MP sensor. 
Battery The battery capacity has been increased to 4300 mAh on the 8 and 4510 mAh on the 8 Pro. Both smartphones support wired fast charging at 30W via Warp Charge, and the 8 Pro also supports wireless charging via the new OnePlus Warp Charge 30 Wireless, which is able to charge 50% of the phone's battery in under 30 minutes. The OnePlus 8 Pro also supports reverse wireless charging. Software The 8 and 8 Pro run on OxygenOS 11, which is based on Android 11. OnePlus states both phones will continue to receive software updates until April 2023. Network compatibility Connectivity options have been improved with the implementation of 5G technology for all models; however, only the Verizon OnePlus 8 5G UW model is compatible with ultra-fast millimeter-wave networks. Verizon and T-Mobile sell the 8 but not the 8 Pro; however, the 8 Pro still works on their networks. Variants There are six OnePlus 8 model variants available depending on the country of intended use or USA carrier: IN2010 (China), IN2011 (India), and IN2013 (Europe/Asia); North American variants include the IN2015 (NA/USA Dual SIM), IN2017 (T-Mobile), and IN2019 (Verizon). The IN2017 supports T-Mobile 5G bands, the IN2019 supports Verizon 5G bands, and no variant supports AT&T 5G bands. There are four OnePlus 8 Pro model variants available depending on the country of intended use: IN2020 (China), IN2023 (Europe/Asia), IN2021 (India), and IN2025 (NA/USA). Reception Both the OnePlus 8 and OnePlus 8 Pro were met with generally positive reviews from critics, with praise for the design, display, performance, and battery life. However, the price increase was said to have signified that OnePlus phones were no longer "flagship killers". The OnePlus 8 received an 8/10 from The Verge, an 8.7/10 from CNET and a 3/5 from Digital Trends. Jon Porter of The Verge remarked that the 8 was "a phone that absolutely delivers flagship Android performance" and called the display "bright, vibrant and buttery smooth", but found the camera quality inferior to the 8 Pro's; Tom's Guide and CNET also noted the lack of optical zoom. The lack of wireless charging and water resistance were criticized, and the macro camera was panned for being of limited use. The OnePlus 8 Pro received an 8.5/10 from The Verge, an 8.6/10 from CNET and a 4/5 from Digital Trends. Mark Spoonauer of Tom's Guide stated that "overall the OnePlus 8 Pro is easily one of the best Android phones you can buy if you want a premium phone without the $1,000-plus sticker shock from Samsung or Apple". Several reviewers questioned the inclusion of the color filter sensor, which was widely seen as a gimmick. The OnePlus 8 Pro received an overall score of 119 from DXOMARK, with a photo score of 126 and video score of 103, the tenth-highest ranking as of May 2020. The OnePlus 8 became the first phone by OnePlus to be a part of the Android Enterprise Recommended program. Issues After pre-orders were delivered, OnePlus 8 Pro owners reported issues with black crush, green and red tints, and image retention resembling display burn-in. OnePlus has attempted to fix the display issues with software, but the issue may be with the screen hardware itself. As of 20 July 2020, the dual-SIM function on the NA version of either model of the phone was not functional. In August 2020, users reported a dark bar appearing across the selfie camera. The issue is easily visible over gray backgrounds such as the incognito window of Google Chrome, or the Google Search app in dark mode. 
In October 2020, users reported battery drain issues on the OnePlus 8 Pro after updating to OxygenOS 11. A software update was released in mid-November to fix the issue. Controversy It was discovered that the 8 Pro's Color Filter camera can see through certain plastics, including clothing, producing an X-ray-like effect. This occurs because the sensor lacks an IR filter. OnePlus later apologized for "creating privacy concerns and causing troubles for OnePlus users and other netizens", and temporarily disabled the filter on Chinese models with HydrogenOS. An over-the-air update, OxygenOS 10.5.9, disabled the filter globally; however, OnePlus later stated this was an error, and subsequently pulled the update. A future update is set to reinstate the filter. The feature remained usable by enabling it through an ADB command. See also Comparison of ARMv8-A cores, ARMv8 family List of Qualcomm Snapdragon processors References External links OnePlus mobile phones Mobile phones introduced in 2020 Mobile phones with multiple rear cameras Mobile phones with 4K video recording Discontinued flagship smartphones
OnePlus 8
[ "Technology" ]
1,937
[ "Phablets", "Crossover devices", "Discontinued flagship smartphones", "Flagship smartphones" ]
63,658,677
https://en.wikipedia.org/wiki/Please
Please is a word used in the English language to indicate politeness and respect while making a request. Derived from shortening the phrase "if you please" or "if it please(s) you", the term has taken on substantial nuance based on its intonation and the relationship between the persons between whom it is used. In much of the Western world, use of the word is considered proper etiquette, and parents and authority figures often imprint upon children the importance of saying "please" when asking for something from an early age, leading to the description of the term as "the magic word". Origin and understanding "Please" is a shortening of the phrase, if you please, an intransitive, ergative form taken from if it please you, which is in turn a calque of the French s'il vous plaît, which replaced pray. The exact time frame of the shortening is unknown, though it has been noted that this form appears not to have been known to William Shakespeare, for whom "please you" is the shortest form used in any of his works. A variation of the phrase, "may it please the court", remains in use as a formality for attorneys addressing judges in legal proceedings. Despite its straightforward definition as a term of courtesy, "please" has become highly variable in its meaning based on its intonation. The use of "please" often reflects an illocutionary act, making its presence in a sentence more a matter of functionality than politeness, but it remains the case that omitting "please" in certain circumstances can be perceived as impoliteness. On a philosophical level, it has been argued that use of "please" embodies the Kantian ethic of treating the person to whom it is spoken as an end, rather than a means, acknowledging them to be inherently worthy of respect. One study found that using "please" in unusual situations, such as with a seller asking someone to buy something for a charitable cause, yielded a negative result, with customers being less likely to make a purchase when it was used. The researchers theorized that this was because the use of "please" focused the attention of the customer on the seller rather than the cause, and the unusual circumstance of use made the customers suspicious of the interaction. Another study found that when asking strangers of the opposite sex to help with a task like looking for a lost earring or watching a bicycle while the experimenter stepped away, asking without saying "please" was actually more effective in gaining the requested help, possibly because saying "please" indicates the weaker position of lacking an expectation that the other person will comply. Similarly, one group of researchers found that saying "please" as part of a request was associated with situations in which the request occurred in an "inhospitable interactional environment," such as when the other party had shown prior unwillingness to perform the requested action. Another study differentiated between uses by pitch contour, finding "that please-requests ending in a rising contour occurred in situations where the participants were equal in power and status", while those with a falling contour "occurred in unequal encounters, and were much closer to commands than requests". The perception that use of "please" diminishes the forcefulness of the request does not necessarily change the legal status of a phrase incorporating it. 
In one case, for example, a federal court in Florida ruled that where a legal document stated, "If you dispute this balance or the validity of this debt, please let us know in writing", the use of "please" did not make the clause merely an optional request—particularly where the document went on to say that in the absence of a written dispute, it would be presumed that there was no dispute. In a North Dakota case where a police officer asked a suspect to "please unlock the door", the court found that the use of "please" in an utterance "can be viewed as a request rather than an order or command", so that it did not constitute a stop or a seizure of the person being asked. Learning to use the term In certain Western cultures, "parents put a lot of effort into teaching their children to be polite, to say 'thank you' or 'please' for every single favor done by anyone". One method of imparting the habit of saying "please" is to respond to requests with an instruction like "say please", or a question like "what is the magic word?" The latter method has been criticized, as it has been suggested that asking "What's the magic word?" frames the question in a negative context of the child being forgetful, and that the parent should merely remind the child to "Say please and thank you". It has also been noted that "teachers easily fall into the pattern of withholding food from children while they elicit the appropriate 'please'", which "may teach children that the words 'please' and 'thank you' are tokens they must use to get their food rather than genuine expressions of gratitude". Other sources consider the use of phrases like "What's the magic word?" to constitute "a less intrusive prompt" than directly reminding the child to say please. Parents and other role models or authority figures can also effectively reinforce in children the habit of saying please by regularly using the term themselves in making requests to the child, or to others in the presence of the child. Children as young as two have been observed to spontaneously add "please" to the ends of requests, possibly as a self-correcting behavior when gauging the apparent reaction to the request. Cultural variations Western cultures tend to promote the use of "please" in requests made to anyone, including family members, although other cultures may not promote the use of such formalities in exchanges within the family. A 1902 newspaper article suggested that use of "please" in England was, at that time, limited to servants, and that children who used it would find that it "stamped them as underbred", leading to the conclusion that "please" would fall out of use elsewhere. The politeness function of "please" can be accomplished by other phrases, such as "would you mind" or "would you be so kind". Although other terms might accomplish the same end, "the word 'please' is an agreed-upon device for showing respect". See also RSVP References English words Etiquette Magic words
Please
[ "Biology" ]
1,329
[ "Etiquette", "Behavior", "Human behavior" ]
63,659,864
https://en.wikipedia.org/wiki/Krist%C3%ADn%20Vala%20Ragnarsd%C3%B3ttir
Kristín Vala Ragnarsdóttir (born 1954) is an Icelandic Earth and sustainability scientist and activist who is Professor of Sustainability Science in the Faculty of Earth Sciences and the Institute of Earth Sciences at the University of Iceland. She was the first woman to be a full professor in Earth Sciences at the University of Bristol in the UK and at the same time the first woman to become a full professor in the Science Faculty there. She was also the first woman to serve as Dean of a School at the University of Iceland. Kristín Vala is a member of Academia Europaea (since 2012), the Norwegian Academy of Science and Letters, and the Icelandic Academy of Science. She is a fellow of the Royal Society of Arts, distinguished fellow of the Schumacher Institute, and a member of the Wellbeing Economy Alliance. She is a member of the sustainability think tanks the Balaton Group and the Club of Rome. Career Appointments Kristín Vala was on the faculty of the University of Bristol for 20 years from 1989, starting as a research fellow in the Department of Geology, becoming professor of Environmental Geochemistry in the Department of Earth Sciences in 2001 and professor of Environmental Sustainability from 2006 to 2008. She moved to the University of Iceland as Professor of Sustainability Science in the Faculty of Earth Sciences in 2008, and was Dean of the School of Engineering and Natural Sciences from 2008 to 2012. Board memberships Kristín Vala was a board member of the Geological Society of London, the European Association of Geochemistry, and the Schumacher Society (now Schumacher Institute). She was a member of the steering committee of the Balaton Group, and the Alliance for Sustainability and Prosperity (ASAP). She was also on the board/steering group of TreeSisters, Pyramid2030, 17Goals, Health Empowerment Through Nutrition, Framtíðarlandið (FutureIceland), Initiative for Equality, Landvernd (Nature Protection) and Landsvirkjun (National Energy). Kristín Vala is a scientific advisor to the Ecological Sequestration Trust, serves on the global council of Wellbeing Economy Alliance (WEAll) and is a board member of Breiddalssetur Science and Culture Centre, and the Red Cross. Editorial memberships Kristín Vala was a member of the editorial boards of eEarth, Geochemical Transactions, Geochimica et Cosmochimica Acta and Chemical Geology. Currently, she is a member of the editorial boards of Anthropocene Review, System Change, BioPhysical Economics and Sustainability (previously BioPhysical Economics and Resource Quality), and Solutions (for a Sustainable and Desirable Future). Background Training Kristín Vala trained in geochemistry and petrology at the University of Iceland and geological sciences at Northwestern University, Evanston, Illinois. Awards Kristín Vala received the Award of Excellence Furthering Sustainability and Equality Learning from the Schumacher Institute. She was co-recipient of the Times Higher Education Supplement (THES) Award to the University of Bristol for Outstanding Contribution to Sustainable Development. Expert member panels Kristín Vala was a member of the UN Environment Program Depleted Uranium Scientific Assessment Teams, Kosovo (2000) and Bosnia Herzegovina (2002). She was a member of the International Expert Working Group of the Government of Bhutan on the New Development Paradigm (2013) and represented Academia Europaea in the European Academies Scientific Advisory Council (EASAC) working group on the Circular Economy (2016). 
In Iceland, Kristín Vala has advised the government on issues relating to higher education and research, education for sustainability, climate strategy, prosperity, quality of life and wellbeing, and energy policy. Research During her career, Kristín Vala has published over 100 research articles, book chapters, and books and has been awarded prizes and memberships/fellowships by academies and sustainability think tanks. Among many other topics, Kristín Vala has published work on geothermal systems, mineral solubility, mineral dissolution kinetics, structure and coordination of aqueous species, sorption of aqueous species to mineral surfaces, backfill materials for radioactive waste disposal, the link between environment and health, bacterial and fungal weathering, and critical zone processes. At the turn of the century, Kristín Vala's research turned to issues related to transdisciplinary sustainability science, including city carbon emission management, natural resource availability and management, soil sustainability, sustainable tourism, and achieving the UN Sustainability Goals through the wellbeing economy. Politics Kristín Vala is a member of the Pirate Party and has been influential in developing its policies relating to environment, climate, and sustainability. She was instrumental in facilitating the participation of the Icelandic government in joining the Wellbeing Economy Governments (WEGo). Selected bibliography Books Ragnarsdottir K.V. and Banwart S.A. (editors) (2016) Soil: The Life Supporting Skin on Earth. eBook University of Iceland and University of Sheffield. Plant J.A., Voulvoulis N. and Ragnarsdottir K.V. (editors) (2011) Pollutants, Human Health and the Environment. A Risk Approach. Wiley Blackwell, 356 pages. Hancock P.L. and Skinner B.J. (editors), D.L. Dineley (associate editor) and Dawson A.G., Ragnarsdottir K.V. and Steward I.S. (subject editors) (2000) The Oxford Companion to the Earth, 1174 pp. Oxford University Press. Book chapters Lohrenz U., Sverdrup H.U. and Ragnarsdottir K.V. (2018) Global megatrends and resource use - A systemic reflection. In H. Lehmann (ed) Factor X. Eco-Efficiency in Industry and Science vol 32. Springer, Berlin. Thorarinsdottir, R., Coaten, D., Pantanella, E., Shultz, C., Stander, H. and Ragnarsdottir, K.V. (2017) Renewable energy use for aquaponics development on a global scale towards sustainable food production. In J. Bundschuh, G. Chen, D. Chandrasekharam, J. Piechocki (Eds.) Geothermal, Wind and Solar Energy Applications in Agriculture and Aquaculture, Sustainable Energy Development Series, CRC Press, 362 pages. References Women earth scientists Sustainability scientists Academics of the University of Bristol Northwestern University alumni Members of Academia Europaea Members of the Norwegian Academy of Science and Letters Living people 1954 births
Kristín Vala Ragnarsdóttir
[ "Chemistry", "Environmental_science" ]
1,367
[ "Geochemists", "Icelandic geochemists", "Environmental scientists", "Sustainability scientists" ]
63,661,588
https://en.wikipedia.org/wiki/Europium%28III%29%20fluoride
Europium(III) fluoride is an inorganic compound with the chemical formula EuF3. Production Europium(III) fluoride can be produced by reacting europium(III) nitrate with ammonium fluoride: Eu(NO3)3 + 3 NH4F → EuF3 + 3 NH4NO3 Europium(III) fluoride nanoparticles can be synthesized by microwave irradiation of europium(III) acetate in an ionic liquid that has tetrafluoroborate as the anion. References Europium(III) compounds Fluorides Lanthanide halides
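As a quick sanity check on the stoichiometry of the precipitation route above, the following sketch computes how much ammonium fluoride the balanced equation calls for per gram of europium(III) nitrate. The molar masses are standard tabulated values; the function name and the one-gram starting amount are illustrative only, not taken from a published procedure.

# Stoichiometry of: Eu(NO3)3 + 3 NH4F -> EuF3 + 3 NH4NO3
ATOMIC_MASS = {"Eu": 151.96, "N": 14.007, "O": 15.999, "H": 1.008, "F": 18.998}

def molar_mass(formula_counts):
    """Molar mass in g/mol from a dict of element counts."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula_counts.items())

m_nitrate = molar_mass({"Eu": 1, "N": 3, "O": 9})  # Eu(NO3)3
m_nh4f = molar_mass({"N": 1, "H": 4, "F": 1})      # NH4F
m_euf3 = molar_mass({"Eu": 1, "F": 3})             # EuF3

moles = 1.0 / m_nitrate                            # moles in 1 g of Eu(NO3)3
print("NH4F required (g):", 3 * moles * m_nh4f)    # 3:1 molar ratio
print("theoretical EuF3 yield (g):", moles * m_euf3)

This gives roughly 0.33 g of NH4F per gram of the nitrate as the stoichiometric requirement; in practice a fluoride excess would typically be used to drive the precipitation.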
Europium(III) fluoride
[ "Chemistry" ]
137
[ "Fluorides", "Salts" ]
63,661,740
https://en.wikipedia.org/wiki/Star%20unfolding
In computational geometry, the star unfolding of a convex polyhedron is a net obtained by cutting the polyhedron along geodesics (shortest paths) through its faces. It has also been called the inward layout of the polyhedron, or the Alexandrov unfolding after Aleksandr Danilovich Aleksandrov, who first considered it. Description In more detail, the star unfolding is obtained from a polyhedron P by choosing a starting point p on the surface of P, in general position, meaning that there is a unique shortest geodesic from p to each vertex of P. The unfolding is obtained by cutting the surface of P along these geodesics, and unfolding the resulting cut surface onto a plane. The resulting shape forms a simple polygon in the plane. The star unfolding may be used as the basis for polynomial-time algorithms for various other problems involving geodesics on convex polyhedra. Related unfoldings The star unfolding should be distinguished from another way of cutting a convex polyhedron into a simple polygon net, the source unfolding. The source unfolding cuts the polyhedron at points that have multiple equally short geodesics to the given base point p, and forms a polygon with p at its center, preserving geodesics from p. Instead, the star unfolding cuts the polyhedron along the geodesics, and forms a polygon with multiple copies of p at its vertices. Despite their names, the source unfolding always produces a star-shaped polygon, but the star unfolding does not. Generalizations of the star unfolding using a geodesic or quasigeodesic in place of a single base point have also been studied. Another generalization uses a single base point, and a system of geodesics that are not necessarily shortest geodesics. Neither the star unfolding nor the source unfolding restrict their cuts to the edges of the polyhedron. It is an open problem whether every polyhedron can be cut and unfolded to a simple polygon using only cuts along its edges. References Polygons Polyhedra Computational geometry
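The flattening step at the heart of any such unfolding can be sketched with a small amount of planar geometry. The following is a minimal illustration, not an implementation of the star unfolding itself or of any algorithm from the references: it places a 3D triangle into the plane so that its edge lengths are preserved, given where one of its edges has already been laid out; repeating the operation across a sequence of faces flattens a strip of the surface. NumPy is assumed, and the function name is illustrative.

import numpy as np

def unfold_triangle(a3, b3, c3, a2, b2, sign=+1.0):
    """Given a 3D triangle (a3, b3, c3) and the planar images a2, b2 of
    a3, b3 (assumed to satisfy |b2 - a2| = |b3 - a3|), return the planar
    image of c3 preserving the distances |ac| and |bc|. `sign` selects
    which side of the edge ab the point lands on."""
    a3, b3, c3 = map(np.asarray, (a3, b3, c3))
    a2, b2 = map(np.asarray, (a2, b2))
    ab = np.linalg.norm(b3 - a3)
    ac = np.linalg.norm(c3 - a3)
    bc = np.linalg.norm(c3 - b3)
    # Coordinates of c in the frame of edge ab, via the law of cosines.
    x = (ac**2 + ab**2 - bc**2) / (2.0 * ab)
    y = np.sqrt(max(ac**2 - x**2, 0.0))
    u = (b2 - a2) / ab                  # unit vector along the planar edge
    n = np.array([-u[1], u[0]])         # its left-hand normal
    return a2 + x * u + sign * y * n

# Example: flatten two faces of a regular tetrahedron around a shared edge.
v = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
e = np.linalg.norm(v[0] - v[1])
a2, b2 = np.array([0.0, 0.0]), np.array([e, 0.0])
c2 = unfold_triangle(v[0], v[1], v[2], a2, b2, +1.0)  # face (0,1,2) above the edge
d2 = unfold_triangle(v[0], v[1], v[3], a2, b2, -1.0)  # face (0,1,3) below the edge
print(c2, d2)

The star unfolding proper additionally requires computing the shortest geodesics from p to each vertex, which is the harder part of the construction; the snippet only shows the isometric flattening primitive applied once those cut paths are known.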
Star unfolding
[ "Mathematics" ]
418
[ "Computational geometry", "Computational mathematics" ]
63,662,169
https://en.wikipedia.org/wiki/Cortinarius%20walkeri
Cortinarius walkeri is a basidiomycete fungus of the genus Cortinarius native to Australia. It was first described by Mordecai Cubitt Cooke & George Edward Massee in 1893 from a specimen, MEL 0220681A, collected by Anna Frances Walker in the Blue Mountains, and named in her honour. See also List of Cortinarius species References External links Cortinarius walkeri occurrence data (Atlas of Living Australia) walkeri Fungi of Australia Fungi described in 1893 Taxa named by Mordecai Cubitt Cooke Fungus species
Cortinarius walkeri
[ "Biology" ]
121
[ "Fungi", "Fungus species" ]
63,662,558
https://en.wikipedia.org/wiki/Eminence%20%28anatomy%29
In anatomy, an eminence is a protuberance; the term may refer to a variety of structures: Collateral eminence, alongside the hippocampus in the brain Cruciform eminence, in the occipital bone of the skull Frontal eminence, on the frontal bone of the skull Hypothenar eminence, a group of three palmar muscles that control the pinky finger Iliopubic eminence, in the pelvis Intercondylar eminence, in the tibia of the leg Medial eminence, in the rhomboid fossa of the fourth ventricle of the brain Median eminence, below the hypothalamus of the brain Müllerian eminence, in the cloaca of an embryo Parietal eminence, in the parietal bone of the skull Pyramidal eminence, in the middle ear Thenar eminence, muscle on the thumb side of the hand Musculoskeletal system
Eminence (anatomy)
[ "Biology" ]
215
[ "Organ systems", "Musculoskeletal system" ]
63,662,981
https://en.wikipedia.org/wiki/Climate%20psychology
Climate psychology is a field that aims to further our understanding of the relationship between psychological processes and the climate and environment. It aims to study both how the climate can impact our thoughts and behaviors and how our thoughts and behaviors impact the climate. The field often focuses on climate change, both in our reaction to it and in how our behaviors can be changed in order to minimize the impact humanity has on the climate. These behavior changes include engaging with the public about climate change; contributing at a personal, communal, cultural and political level by supporting effective change through activists, scientists, and policy makers; and nurturing psychological resilience to the destructive impacts climate change creates now and in the future. Climate psychology includes many subfields and areas of focus, including the effects of climate change on mental health, the psychological impact of climate change, the psychological explanation of climate inaction, and climate change denial. Climate psychology is a sub-discipline of environmental psychology. History The origins of climate psychology can be traced back to the work of psychoanalyst Harold Searles on the unconscious factors that influence the estrangement of people from the rest of nature. It has also been strongly related to the field of ecopsychology, as Sigmund Freud tied the interests of the ego to the natural world. Due to the increase in society-wide acceptance of the dangers of climate change, there has been greater interest in understanding the psychological processes underlying the resistance to taking appropriate action, and in particular the phenomenon of climate change denial. More recently, a literature base by climate psychologists has started to focus on the powerful emotions associated with climate change and planetary-wide biodiversity loss. Academic discipline Climate psychology is a trans-disciplinary approach to research and practice. It focuses on the society-wide reluctance to take appropriate action in relation to the escalating threat of climate change. It sees the problem as requiring a deeper approach that examines our resistance to knowing and acting, rather than treating it as an "information deficit" to be remedied by cognitive or behavioral approaches. It stresses the significance of human emotions, identities and cultural assumptions. Furthermore, it acknowledges the human subject as nested within their social and ecological context. In order to meet its aims and develop its approach, climate psychology draws on a broad range of perspectives, including literature, philosophy, world religions, the arts, humanities and systems thinking. The core of the approach is based on various psychotherapeutic traditions and psycho-social studies, allowing climate psychologists to understand the unconscious or unacknowledged emotions and processes influencing people's thoughts, motivations and behaviors. This applies especially to those processes that manifest in the broader context of the wider society and culture. As of 2020, the discipline of climate psychology had grown to include many subfields. This is in response to the spread of what has recently been called climate anxiety, a manifestation of the decades-old understanding of eco-anxiety. Climate psychologists are working with the United Nations, national and local governments, corporations, NGOs, and individuals. 
Climate psychology in practice In recent years, climate psychologists have been facilitating support groups for activists, particularly those active in the support of pro-environmental behaviors across society. Climate psychologists have also engaged directly with climate activists, and even engaged in climate activism themselves. For example, in August 2022, scientists and their colleagues came together in protest outside the Department for Business, Energy and Industrial Strategy in London. During this time, as shown on the news, many climate scientists were experiencing breakdowns and showing signs of extreme emotional turmoil and anguish. Over the years, climate psychologists have watched not only scientists but millions of others be negatively impacted by environmental change in this way. They support such groups through behavioral practices and studies that help gather precise data and build person-to-person understanding within these activist groups. The United States Agency for International Development (USAID) reports that roughly 971 million individuals reside in regions with moderate to high exposure to climate hazards due to industrial development, environmental exploitation, and excessive consumerism, particularly in the Asia-Pacific and South Asia regions. In response to the issues and difficulties resulting from climate change, the Psychological Association of the Philippines (PAP) is actively providing psychological aid during natural disasters and catastrophic events. In addition, psychologists around the globe encourage networking and connections to maintain knowledge exchange and create a community of climate action proponents, to ensure that all individuals have access to the aid and amenities needed in areas currently under pressure from the ongoing climate crisis. Climate and Mental Health The climate can have various impacts on mental health. For example, increased temperature has been linked to a worsening of a variety of mental health issues, including aggression, anxiety, dementia, mood disorders, and suicide. The worsening of these symptoms can lead to increases in crime rates and hospital admission rates during heat waves. The increased prevalence of natural disasters can also cause mental distress, including PTSD in many patients, which is a pressing concern for some climate psychologists. Natural disasters have also been linked to acute stress disorder, drug abuse disorder, and depression in some people. Climate change may also result in socioeconomic impacts; the associated economic hardship can negatively impact mental health, leading to stress and depression. Workers often face worse conditions due to climate change, leading to increased risk of injury. The negative impacts on physical health can then lead to declines in mental health as well. Workers may then become demoralized and lose interest in their work as a result of worsened mental health. These socioeconomic impacts can also fall disproportionately on minorities and marginalized groups within a society. For example, women are often disproportionately impacted compared to men in the aftermath of a natural disaster. Due to the impacts of climate change on mental health, psychologists and social workers have begun to take climate into account when assessing patients. 
This includes reaching out to the community and applying psychological principles to mitigate climate change and to address climate anxiety in clinical sessions. Psychologists may counsel patients with climate-related anxiety and attempt to channel those anxieties into positive changes. Psycho-analytical approaches Psycho-analytical approaches are based on the ideas of Sigmund Freud. They focus on how people respond to anxiety, a response which in turn may trigger psychological defense mechanisms. The psychological defenses triggered will often define how that person then reacts to climate change. Climate psychologists use this to explain reactions such as climate change denial, apathy, and inaction toward climate change. Psychologists consider how coping responses can be adaptive or maladaptive, and climate psychologists use the environmental impact of a behavior to determine the adaptivity of the response. For example, climate psychologists might ask whether responses promote positive psychological adjustment and stimulate appropriate and proportional pro-environmental action, or whether they serve to justify the individual's inaction and allow them to refrain from the necessary, radical changes. Psycho-social approaches A psycho-social approach to climate psychology examines the interplay between internal, psychological factors and external, sociocultural factors (such as values, beliefs, and norms) in people's responses to climate change. Furthermore, it offers a distinctive qualitative methodology for understanding the lived experience of research subjects, which has been adopted by researchers seeking to investigate how climate change and environmental destruction are experienced by different groups across society. In this case, 'lived experience' refers to the feelings, thoughts and imaginations and the meaning frames which both affect and are affected by those experiences. Coping responses to impending climate destabilization are psycho-social phenomena, culturally sanctioned and maintained by social norms and structures, not simply isolated psychological processes. For example, modern mass consumerism is dictated by the needs of a globalized, deregulated economy, yet it is one of the driving forces of climate change. It has been suggested that this "culture of un-care" performs an ideological function, insulating consumers from experiencing too much anxiety and moral disquiet. Cultural mechanisms also support ways of down-regulating the powerful feelings that would otherwise be elicited by the awareness of potential threats. These include strong, embedded cultural assumptions such as entitlement, exceptionalism, and faith in progress. Entitlement is the belief that certain groups or species deserve more than others, and is embedded in the unequal relations governing developed and developing human societies. Exceptionalism is the idea that one's species, nation, ethnic group or individual self is special and therefore absolved from the rules that apply to others, giving license to breach natural limits of resource consumption. Faith in progress, a key element of post-industrial ideology, results in a conviction that science and technology can solve every problem, thereby encouraging wishful thinking and false optimism. Many people perceive unfairness in how they are affected by climate change. This is often caused by the wealth inequality correlated with climate change. In other words, those who experience more inequality also experience worse impacts from climate change. 
Lack of fairness is then perpetuated as many wealthier people have a disproportionate negative impact on the environment, due to the direct correlation between wealth and a person's carbon footprint. This perceived lack of fairness can then lead to increased radicalization of a person's beliefs about climate change, according to psycho-social models of political radicalization. References External links Climate Psychology Alliance Psychologists for a Safe Climate Climate & Mind Environmental psychology Climate change and society Climate Climate change
Climate psychology
[ "Environmental_science" ]
1,905
[ "Environmental social science", "Environmental psychology" ]
63,663,102
https://en.wikipedia.org/wiki/Insertion%20symbol
The term insertion symbol has more than one meaning: when using a cursor (user interface), it is (usually) a vertical bar indicating where text being typed will be inserted; a caret (proofreading) is a V-shaped grapheme, usually inverted and sometimes extended, used to indicate that additional material needs to be inserted at this point in the text. See also Caret (computing) Typographical symbols
Insertion symbol
[ "Mathematics" ]
91
[ "Symbols", "Typographical symbols" ]
63,665,258
https://en.wikipedia.org/wiki/Bateman-Mukai%20method
In genetics, the Bateman–Mukai method, sometimes referred to as the Bateman–Mukai technique, is a traditional method used for describing the mutation rates of genes through the observation of the physical traits (phenotype) of a living organism. The method involves the maintenance of many mutation accumulation lineages of the organism studied, and it is therefore labor-intensive. Origin The foundational papers from which this method gets its name were published by the geneticists A. J. Bateman in 1959 and T. Mukai in 1964. Bateman used an early form of this method to understand how radiation affects the survival of chromosomes through radiation-induced mutations. Mukai's experimental design largely followed the design of Bateman's study, but rather than inducing mutations via any external factor, the study aimed to describe the spontaneous, naturally occurring deleterious mutation rate of the common fruit fly. Procedure The method requires the establishment of many mutation accumulation lineages using within-line breeding of diploid organisms. These lines are maintained in an environment favorable for deleterious mutations to accumulate, so that mutations are not purged by natural selection: excess food and other resources are kept available to eliminate competition, and the parents of the next generation are chosen at random without any regard to fitness. Importantly, in this way, mutation accumulation experiments attempt to describe the true mutation rates that would be observed in the absence of natural selection. Asexually reproducing organisms can simply have a single parent selected as the parent for the next generation of each line. In sexually reproducing organisms, measures must be taken so that researchers can be sure that mutations are inherited by future generations of the mutation accumulation lines. A balancer gene can be used towards this end. In the Mukai experiment, male flies homozygous for the wild type chromosome 2 were always mated with female heterozygotes for the Pm/Cy balancer gene, which produces an observable phenotype in the wings, with homozygous Pm/Cy being lethal. This ensures that researchers can select for organisms that do not exhibit the phenotypic trait of the balancer gene, which in turn means that only wild type chromosomes will be passed down to the next generation. In this way, in sexually reproducing organisms, any spontaneously occurring mutations in the mutation accumulation line should have a random chance, due to independent assortment, of being fixed in the next generation of the line. The main measurements derived from the results of a Bateman–Mukai experiment are the deleterious mutation rate, U, and the average selection coefficient, s̄, although these must be derived from phenotype observation. The relationship between U and the per-copy rate μ, which is specifically the mutation rate for a single copy of a gene, follows from the assumption that deleterious mutations are randomly fixed and from the nature of diploid organisms, which have two copies of each gene. So the mutation rate from both copies, 2μ, multiplied by the population of the line, N, with the assumption of random fixation, gives the deleterious mutation rate per line per generation, U = 2Nμ, such that each mutation per line per generation directly counts towards the mutation rate. By tracking the quantitative deleterious change in a trait per generation, ΔM (e.g., number of offspring), the mutation rate can be defined within one line as U = ΔM / s̄. References Evolutionary biology Mutation
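In practice, U and s̄ are commonly estimated from the per-generation decline in the trait mean, ΔM, and the per-generation increase in the among-line variance, ΔV, which give a lower bound on U and an upper bound on s̄ (U ≥ ΔM²/ΔV and s̄ ≤ ΔV/ΔM). Below is a minimal Python sketch of these standard moment-based estimators; the function name and the example numbers are illustrative only, not data from Bateman's or Mukai's experiments.

import numpy as np

def bateman_mukai(trait_by_line, generations):
    """Moment-based Bateman-Mukai estimates (U_min, s_max) from the final
    trait values of a set of MA lines, scaled so the ancestral mean is 1.0.
    Assumes the decline in the mean and the among-line variance both
    accumulate linearly over the generations of the experiment."""
    t = np.asarray(trait_by_line, dtype=float)
    delta_M = (1.0 - t.mean()) / generations   # decline in mean per generation
    delta_V = t.var(ddof=1) / generations      # rise in among-line variance per generation
    return delta_M**2 / delta_V, delta_V / delta_M  # (lower bound on U, upper bound on s-bar)

# Hypothetical relative-fitness values for ten MA lines after 50 generations:
lines = [0.93, 0.88, 0.95, 0.90, 0.86, 0.92, 0.89, 0.94, 0.91, 0.87]
U_min, s_max = bateman_mukai(lines, generations=50)
print(U_min, s_max)

Scaling the trait so that the ancestral mean is 1.0 makes ΔM directly interpretable as a proportional decline per generation; when mutational effects vary widely, these estimates bound rather than pinpoint the true parameters.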
Bateman-Mukai method
[ "Biology" ]
679
[ "Evolutionary biology" ]
63,668,052
https://en.wikipedia.org/wiki/Cannabis%20etiquette
Cannabis etiquette is the set of conventional rules of behavior when consuming cannabis. Smoking "Bogarting" is holding a joint for too long without actually smoking it, thereby wasting the cannabis and possibly preventing others from also consuming. Books Higher Etiquette: A Guide to the World of Cannabis, From Dispensaries to Dinner Parties, Lizzie Post References External links Pot Etiquette: Mind Your Marijuana Manners, Boston A Modern Smoker's Guide to Cannabis Etiquette, Thrillist.com A Simple Guide to Proper Weed Etiquette, Wikileaf Cannabis Etiquette
Cannabis etiquette
[ "Biology" ]
120
[ "Etiquette", "Behavior", "Human behavior" ]
63,668,207
https://en.wikipedia.org/wiki/Mutation%20accumulation%20experiments
A mutation accumulation (MA) experiment is a genetic experiment in which isolated and inbred lines of organisms (so-called MA lines) are maintained such that the effect of natural selection is minimized, with the aim of quantitatively estimating the rates at which spontaneous mutations (mutations not caused by exogenous mutagens) occur in the studied organism. Spontaneous mutation rates may be directly estimated using molecular techniques such as DNA sequencing, or indirectly estimated using phenotypic assays (observing how an organism’s phenotype changes as mutations accumulate). The earliest mutation accumulation experiments were performed by American geneticist Hermann Joseph Muller in the 1920s, using Drosophila melanogaster. Principles and procedures All MA lines used in a MA experiment are bred from a single common ancestor, and are often propagated by single-progeny descent, where a single offspring is randomly selected to sire the next generation of organisms. This serves to prevent the loss of mutant alleles through sexual reproduction. Notably, single-progeny descent is only possible if the organism being studied is capable of asexual reproduction or self-fertilization. A control line is maintained parallel to the MA lines and under the same conditions, except organisms are allowed to reproduce sexually (they are not constrained to single-progeny descent). The assumption underlying this procedure is that the larger, sexually reproducing population of the control line will cause all spontaneous mutations to be ‘weeded out’ by sexual reproduction. Mutations that arise in MA lines are heterozygous at first, and can become fixed or lost at random in subsequent generations. Thus, the control line will be relatively free of mutations, and can be compared with the MA lines to assess the impact of the mutations that have accumulated therein. Both the MA lines and the control line are maintained under relaxed natural selection, to minimize the strain that natural selection places on mutant organisms (which may have reduced fitness). For example, the organisms in a MA experiment may be kept in the ideal environmental conditions and provided with an excess of nutrients. Mutation accumulation experiments are time-consuming and labor intensive, as a result of the requirement that multiple MA lines be raised parallel to one another for multiple generations, under carefully maintained environmental conditions. Estimating mutation rate Phenotypic assays may be performed on the organisms within each MA line, to determine the extent to which the accumulated mutations have affected the organism’s various phenotypic traits. The measured changes in phenotype across generations can be used to indirectly estimate the mutation rate for that organism. With the advent of whole-genome sequencing, the mutation rate of an MA line can be directly estimated by sequencing the MA line and comparing it with sequence data for the control line (i.e., the wild-type organism). Historically, most MA experiments have estimated mutation rates by using a phenotypic assay to measure the change in the trait value of a phenotype across generations. However, the validity of this approach relies on several assumptions: Since the average mutation is thought to be slightly deleterious, it is assumed that accumulated mutations will cause an organism to become gradually less viable (i.e., that the effect of mutations on the organism’s viability will be unidirectional). 
It is also assumed that the control line will be free of mutations: since the organisms within the control line are able to freely sexually reproduce, it is thought that most mutations that arise within individuals will be relatively quickly lost during sexual reproduction. Furthermore, since the organisms in an MA experiment are raised under relaxed natural selection, it is assumed that all mutations that arise have arisen randomly, and become fixed or lost at random, in the absence of any selective pressure exerted by the environment. Typical MA experiment A mutation accumulation experiment conducted by Ruth Shaw and colleagues serves as a good example of a typical MA experiment: the group sought to measure the effects of spontaneous mutations on the reproductive traits of Arabidopsis thaliana. A. thaliana is an ideal candidate for an MA experiment because it is capable of self-fertilization, has a relatively short life cycle (of about 10 weeks), and is a well-studied model organism in plant biology and genetics. Shaw and colleagues established 120 lines of A. thaliana, and advanced each line 17 generations by single-progeny descent: each generation was propagated by a single individual randomly selected from a number of self-fertilized seeds sown. The reproductive traits measured as part of the phenotypic assay included seed number per fruit, fruit number, and reproductive mass (the total mass of fruits and seeds from a single plant). The group found that each trait diverged significantly between MA lines, suggesting that mutations had accumulated in some lines. MA experiments in obligate sexually reproducing organisms Single-progeny descent is only possible if the organism being studied is capable of asexual reproduction or self-fertilization. In cases where an organism is only capable of sexual reproduction (such as Drosophila melanogaster, which was the species used in many early MA experiments), organisms with balancer chromosomes are used. In MA experiments involving an obligate sexually reproducing species such as Drosophila, mutations are accumulated on only one of a pair of homologous chromosomes. The other homologous chromosome is a modified chromosome known as a balancer chromosome. Balancer chromosomes contain a sizable inversion relative to their homologue, which serves to prevent recombination of the balancer chromosome with its homologue during meiosis. Additionally, the balancer chromosome may contain a number of mutations. While the exact mutation(s) may vary, it is important that they achieve two things. First, the mutation(s) must create a physically visible phenotype in heterozygous organisms. This allows organisms that carry a balancer and an unmodified chromosome (organisms that are ideal for the MA experiment) to be easily identified. Second, the mutation(s) must also be homozygous lethal, so that any organism that inherits two balancer chromosomes (i.e. an organism useless to the MA experiment) will not survive. Using this system, a proportion of the offspring created by sexual reproduction will inherit a balancer chromosome and its homologue. This homologue is passed down across generations without having its mutations disrupted by recombination during sexual reproduction, allowing it to properly accumulate mutations. Significance MA experiments allow researchers to study the rates and properties of new mutations. 
Since mutation is the ultimate source of genetic diversity in all living organisms, researchers are interested in knowing how often mutations arise, and in understanding the phenotypic impacts of newly arisen mutations, in order to better understand the patterns underlying adaptation and evolution. Derived estimates of mutational parameters Mutation accumulation experiments are an important means by which to estimate mutational parameters. As the name suggests, these parameters define the rate at which different types of mutations occur in a given organism. Their estimation is important because mutations are the ultimate source of genetic variation as well as being involved in the pathogenesis of many common diseases. As such, understanding how often they occur, as well as what consequences they are likely to carry when they do occur, can yield significant insight into these issues. The mutation rate in the nuclear genome varies quite significantly across species, while that of the mitochondrial genome is much more consistent, with most estimates of this latter parameter in unicellular and multicellular eukaryotes ranging from 0.76 to 1.6 × 10−7 mutations per site per generation. Since these parameters have been ascertained for only a small number of species because of how labor-intensive this approach can be, a significant amount of this type of work has been done in various model organisms. For example, in the nuclear genome of Drosophila melanogaster, one study placed the rate of single-nucleotide mutation at 3.5 × 10−9 mutations per site per generation, while another study obtained a value of 5.8 × 10−9 mutations per site per generation for the same parameter. Estimates of the D. melanogaster mitochondrial genome mutation rate are generally ~10 times higher than that of the nuclear genome, with one estimate being 6.2 × 10−8 mutations per site per generation. The rate of occurrence for deleterious mutations per generation (Ud) can be obtained by multiplying the average number of mutations per generation by the proportion of mutations which are expected to be deleterious within a given species. However, it can also be estimated by way of fitness assays using MA lines of known genotype. In Drosophila melanogaster, one study placed the value of this parameter at 1.2 deleterious mutations per diploid genome per generation, which matches closely the average value of this parameter within this species across all studies on the subject conducted as of 1999. However, the putative Ud values obtained by way of similar mutation accumulation-based studies have varied significantly, with some estimates being as low as 0.02 deleterious mutations per diploid genome per generation, a value not much higher than the putative lethal mutation rate of 0.01 mutations per generation. Uncharacteristically low values such as this are usually obtained in studies that classify mutations as either deleterious or not by screening for quantifiable decreases in viability. This is because these studies are thought to overlook the vast majority of deleterious mutations for which the overall effect on viability is too small in magnitude to be observed by way of such assays. Higher Ud values (>1) that are more in line with current scientific consensus regarding the estimation of this parameter in D. 
melanogaster suggest that selection against these deleterious mutations may play a significant role in shaping patterns of genetic variation in the genome, as well as in maintaining selection for recombination and sexual reproduction. For Caenorhabditis elegans, a nematode that is one of the most commonly used model organisms in molecular and developmental biology research, one estimate of the nuclear genome mutation rate is 2.1 × 10−8 mutations per site per generation. In 1997, the value of Ud within the species was directly estimated to be 0.0026 deleterious mutations per generation, which is two orders of magnitude smaller than previous indirect estimates. A newer estimate based on laboratory fitness assays of MA lines placed this parameter at 0.015 deleterious mutations per generation. However, as previously mentioned, such assays may overlook mutations which produce negative consequences of modest magnitude. These are thought to represent the vast majority of deleterious mutations, and possibly even the majority of all mutations within this species. Authors have also noted that insertions are the predominant type of mutation observed in MA studies using C. elegans. The mitochondrial mutation rate in this species has been estimated at 1.05 × 10−7 mutations per site per generation. In the yeast Saccharomyces cerevisiae, the nuclear genomic rate of single nucleotide mutations was estimated to be 1.67 ± 0.04 × 10−10 per site per generation, while the rate of small insertions/deletions was estimated to be 5.03 ± 0.99 × 10−12 per site per generation. As for aneuploidy and other large copy-number variant events in this organism, the rate of whole-chromosome duplication was found to be 9.7 ± 1.8 × 10−5 events per diploid genome per generation, while the rate of chromosome loss was estimated at 0.7 ± 0.04 × 10−5 events per diploid genome per generation. References Evolutionary biology
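As a simple illustration of the indirect estimate described above, Ud is the product of the genome-wide mutation rate per generation and the fraction of new mutations that are deleterious. The sketch below uses the Drosophila per-site rate quoted above; the genome size and deleterious fraction are illustrative assumptions, not values from the source.

# Indirect estimate of Ud, the deleterious mutation rate per diploid genome per generation
mu_per_site = 5.8e-9       # single-nucleotide rate per site per generation (quoted above)
diploid_sites = 2 * 1.4e8  # assumed diploid site count for D. melanogaster (~140 Mb haploid)
frac_deleterious = 0.74    # assumed fraction of new mutations that are deleterious

U_total = mu_per_site * diploid_sites   # total new mutations per generation
U_d = U_total * frac_deleterious        # deleterious mutations per generation
print(f"U = {U_total:.2f}, Ud = {U_d:.2f} per diploid genome per generation")  # Ud ~ 1.2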
Mutation accumulation experiments
[ "Biology" ]
2,403
[ "Evolutionary biology" ]
63,668,273
https://en.wikipedia.org/wiki/CCR4-Not
Carbon Catabolite Repression—Negative On TATA-less, or CCR4-Not, is a multiprotein complex that functions in gene expression. The complex has multiple enzymatic activities as both a poly(A) 3′-5′ exonuclease and a ubiquitin ligase. The exonuclease activity of CCR4-Not shortens the poly(A) tail found at the 3′ end of almost every eukaryotic mRNA. The complex is present both in the nucleus, where it regulates transcription, and in the cytoplasm, where it associates with translating ribosomes and RNA processing bodies. In mammalian cells, it functions in the regulation of the cell cycle, chromatin modification, activation and inhibition of transcription initiation, control of transcription elongation, RNA export, nuclear RNA surveillance, and DNA damage repair in the nucleus. The Ccr4–Not complex plays an important role in mRNA decay and protein quality control in the cytoplasm. Subunits The human CCR4-Not complex is composed of structural (non-catalytic) subunits and those that have exonuclease and E3 ligase activity. Some but not all of the human subunits are conserved in budding yeast. In yeast the complex has nine core subunits, comprising Ccr4 (carbon catabolite repression), Caf proteins (Ccr4-associated factor: Caf1, Caf40, Caf130) and Not proteins (Not1, Not2, Not3, Not4, and Not5). The molecular weights of the human subunits are taken from UniProt. See also Deadenylation Gene expression References Gene expression Protein complexes RNA-binding proteins
CCR4-Not
[ "Chemistry", "Biology" ]
353
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
63,668,520
https://en.wikipedia.org/wiki/Chao-Jun%20Li
Chao-Jun "C.-J." Li, a Canadian chemist, is E. B. Eddy Professor of Chemistry and Canada Research Chair in Green Chemistry at McGill University, Montréal. He is known for his pioneering works in Green Solvent (organic reactions in water) and Green Syntheses (water/functional group-tolerating organometallics, C-H activation, and photochemistry). Education C.-J. Li was born in 1963, and obtained his BSc from Zhengzhou University (1979–1983), and completed his MSc. in organic synthesis at the Chinese Academy of Sciences (1985–1988) with Prof. T.H. Chan. He then moved to McGill University (Montréal, Québec) to do his PhD (1989–1992) with Prof. T.H. Chan again (and discovered the indium-mediated allylation reaction in water) along with Prof. David Harpp (working on organosulfur/selenium/tellurium chemistry), and went on a NSERC-funded postdoc with Prof. Barry Trost at Stanford University in the United States (1992–1994) (and discovered the so-called phosphine-catalyzed γ-Addition Reaction). Career and research C.-J. Li started as an assistant professor at Tulane University in 1994, and attained the title of Professor of Chemistry in 2000. He then moved in 2003 to McGill University, where he obtained a Canada Research Chair (Tier I) in Green Chemistry. He has also been the director of NSERC CREATE for Green Chemistry (2012–2018), the director of CFI Infrastructure for Green Chemistry and Green Chemicals (2008–2018) and has been the co-director of the FQRNT Center for Green Chemistry and Catalysis since 2009. He was the founding Co-Chair of the Canadian Green Chemistry and Engineering Network. C.-J. Li's research encompasses various aspects of green chemistry applied to organic chemistry: organometallics, catalysis (thermal and light-based). Most notably, he is known for introducing water as a Green Solvent for various chemical reactions (C-H activation/Functionalization, Grignard type-reactions in water. Li originated the concepts of Aldehyde-Alkyne-Amine Coupling (A3 coupling reaction) and the cross dehydrogenative coupling (CDC Reaction, or C-H/C-H coupling, or oxidative C-H cross coupling). His work on GaN nanowires and nanoparticals as photocatalysts for the conversion of methane into benzene was covered by Phys.org in 2015, leaving prospects for hydrogen storage. Subsequently, his team showed that they were also able to convert methanol into ethanol, ethylene and cyclohexane. He also made breakthroughs in using hydrazones as organometallic surrogates in nucleophilic addition and cross-coupling, the direct amination of phenol derivatives. and the earliest report on fluorescence enhancement due to self-assembling (SAFE) in solution. 
Selected publications Reactions in water: Indium-mediated allylations in water The Barbier–Grignard-type alkylation of aldehydes using unactivated alkyl iodides in water The Barbier–Grignard-type arylation of aldehydes using unactivated aryl iodides in water Silver-Catalyzed Hydrogenation of Aldehydes in Water A3 coupling reaction Cross-dehydrogenative coupling (CDC) GaN Photocatalysts Photoinduced conversion of methane into benzene over GaN nanowires Direct catalytic methanol-to-ethanol photoconversion via methyl carbene Direct catalytic conversion of methane into cyclohexane (Methane Liquefaction) Direct catalytic conversion of methane into methanol or formic acid Hydrazones as organo-metallic equivalent (HOME Chemistry): Carbonyls as alkyl carbanion equivalents for 1,2-nucleophilic additions, conjugate additions, and cross-couplings References Stanford University alumni Canadian people of Chinese descent Academic staff of McGill University American organic chemists Zhengzhou University alumni 1963 births Living people McGill University alumni
Chao-Jun Li
[ "Chemistry" ]
893
[ "Organic chemists", "American organic chemists" ]
69,359,873
https://en.wikipedia.org/wiki/Collin%20Y.%20Ewald
Collin Yvès Ewald (born 1980 in Basel) is a Swiss scientist investigating the molecular mechanisms of healthy aging. He is a molecular biologist and a professor at ETH Zurich, where he leads the Laboratory of Extracellular Matrix Regeneration. His research focuses on the remodeling of the extracellular matrix during aging and upon longevity interventions. Career Collin Ewald was educated at Mathematisch-Naturwissenschaftliches Gymnasium (MNG) Basel, Switzerland. After completing his bachelor in molecular biology at the University of Basel, he joined the labs of Joy Alcedo and Nancy Hynes studying the function of a breast cancer metastasis gene Memo1 in the model organism C. elegans at the Friedrich Miescher Institute (FMI) for Biomedical Research, Basel, Switzerland. In the Alcedo lab, he became interested in how neurons regulate aging and went on to do a Ph.D. in neuroscience at the Graduate Center of the City University of New York, USA. Mentored by his Ph.D. supervisor Chris Li, he discovered a conserved genetic link between Insulin/IGF-1 signaling and Alzheimer's disease amyloid precursor protein (APP) orthologues. To deepen his knowledge in aging research, he joined T. Keith Blackwell's lab as a postdoctoral fellow at Harvard Medical School, unraveling the importance of Insulin/IGF-1 signaling in promoting collagen homeostasis during longevity. In 2015, he became a junior faculty member at the Joslin Diabetes Center, instructor in medicine at Harvard Medical School, and visiting scholar at the Whitehead Institute (Massachusetts Institute of Technology). After ten years in the US, in 2016, he secured an SNSF professorship to return to Switzerland and join the Institute for Translational Medicine as an assistant professor at ETH Zürich. To be at the forefront of, and to interconnect, the two research fields of aging and matrix biology, he founded the Swiss Society for Aging Research, is the vice president of the German Society of Aging Research, and re-established the Swiss Society for Matrix Biology. He is an independent scientific advisor of the longevity start-up company builder Maximon AG, and a co-founder of Avea Life AG. Research His research centers around the remodeling of the extracellular matrix, ensuring tissue and cellular homeostasis during healthy aging. In collaboration with Alexandra Naba, he defined the proteins outside of cells forming the extracellular matrix, the so-called matrisome of C. elegans. Over ten thousand phenotypes stem from mutations in matrisome genes in humans and across species. He coined the term matreotype, the extracellular matrix composition caused by or associated with a cellular or physiological status, genotype, or phenotype. Using gene expression data from humans of different ages and tissues, his team defined the youthful matreotype and used it to predict drugs that slow aging. His research group also showed that even close to the end of an individual's life, it is possible to double the lifespan of C. elegans. Distinctions He has been named among the top 15 longevity influencers in Switzerland and as a world influencer of the ETH domain, is listed in Who is Who in Medical Research, is in the top 0.5% of longevity experts worldwide, and has received multiple awards, including the Ellison Medical Foundation and American Federation for Aging Research fellowship in 2013, the DeLill Nasser Award in 2015, and the SNSF Professorship in 2016. 
Selected works References External links Website of the Laboratory of Extracellular Matrix Regeneration 1980 births Living people University of Basel alumni CUNY Graduate Center alumni Academic staff of ETH Zurich Swiss biochemists Molecular biologists Swiss expatriates in the United States
Collin Y. Ewald
[ "Chemistry" ]
768
[ "Biochemists", "Molecular biology", "Molecular biologists" ]
69,360,178
https://en.wikipedia.org/wiki/June%202075%20lunar%20eclipse
A partial lunar eclipse will occur at the Moon’s descending node of orbit on Friday, June 28, 2075, with an umbral magnitude of 0.6235. A lunar eclipse occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. A partial lunar eclipse occurs when one part of the Moon is in the Earth's umbra, while the other part is in the Earth's penumbra. Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. Because the eclipse occurs only about 5.5 hours after perigee (reached on June 28, 2075, at 4:10 UTC), the Moon's apparent diameter will be larger than average. Visibility The eclipse will be completely visible over eastern Australia, western North America, Antarctica, and the central and eastern Pacific Ocean, seen rising over east Asia and western Australia and setting over much of North and South America. Eclipse details Shown below is a table displaying details about this particular lunar eclipse. It describes various parameters pertaining to this eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. Related eclipses Eclipses in 2075 A penumbral lunar eclipse on January 2. A total solar eclipse on January 16. A partial lunar eclipse on June 28. An annular solar eclipse on July 13. A partial lunar eclipse on December 22. Metonic Preceded by: Lunar eclipse of September 9, 2071 Followed by: Lunar eclipse of April 16, 2079 Tzolkinex Preceded by: Lunar eclipse of May 17, 2068 Followed by: Lunar eclipse of August 8, 2082 Half-Saros Preceded by: Solar eclipse of June 22, 2066 Followed by: Solar eclipse of July 3, 2084 Tritos Preceded by: Lunar eclipse of July 28, 2064 Followed by: Lunar eclipse of May 28, 2086 Lunar Saros 121 Preceded by: Lunar eclipse of June 17, 2057 Followed by: Lunar eclipse of July 8, 2093 Inex Preceded by: Lunar eclipse of July 18, 2046 Followed by: Lunar eclipse of June 8, 2104 Triad Preceded by: Lunar eclipse of August 27, 1988 Followed by: Lunar eclipse of April 29, 2162 Lunar eclipses of 2074–2078 This eclipse is a member of a semester series. An eclipse in a semester series of lunar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit. The penumbral lunar eclipses on February 11, 2074 and August 7, 2074 occur in the previous lunar year eclipse set, and the penumbral lunar eclipses on April 27, 2078 and October 21, 2078 occur in the next lunar year eclipse set. Saros 121 Half-Saros cycle A lunar eclipse will be preceded and followed by solar eclipses by 9 years and 5.5 days (a half saros). This lunar eclipse is related to two total solar eclipses of Solar Saros 128. See also List of lunar eclipses and List of 21st-century lunar eclipses References 2075-06
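The semester relation described above can be checked with simple date arithmetic; in this sketch, stepping forward from the June 28 eclipse by one semester of about 177 days and 4 hours (the figure quoted above) lands on the December 22, 2075 partial lunar eclipse listed among the eclipses of 2075.

from datetime import datetime, timedelta

# One semester: approximately 177 days and 4 hours (figure from the text)
SEMESTER = timedelta(days=177, hours=4)

this_eclipse = datetime(2075, 6, 28)        # the partial lunar eclipse of this article
next_in_series = this_eclipse + SEMESTER    # next eclipse season, at the alternate node

print(next_in_series.date())  # 2075-12-22, matching the December 22 partial lunar eclipse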
June 2075 lunar eclipse
[ "Astronomy" ]
732
[ "Future astronomical events", "Future lunar eclipses" ]
69,361,192
https://en.wikipedia.org/wiki/Diamagnetic%20inequality
In mathematics and physics, the diamagnetic inequality relates the Sobolev norm of the absolute value of a section of a line bundle to its covariant derivative. The diamagnetic inequality has an important physical interpretation, that a charged particle in a magnetic field has more energy in its ground state than it would in a vacuum. To precisely state the inequality, let $L^2(\mathbb{R}^n)$ denote the usual Hilbert space of square-integrable functions, and $H^1(\mathbb{R}^n)$ the Sobolev space of square-integrable functions with square-integrable derivatives. Let $f, A_1, \dots, A_n$ be measurable functions on $\mathbb{R}^n$ and suppose that $A_j \in L^2_{\mathrm{loc}}(\mathbb{R}^n)$ is real-valued, $f$ is complex-valued, and $f, (\partial_1 + iA_1)f, \dots, (\partial_n + iA_n)f \in L^2(\mathbb{R}^n)$. Then for almost every $x \in \mathbb{R}^n$, $|\nabla|f|(x)| \le |(\nabla + iA)f(x)|$. In particular, $|f| \in H^1(\mathbb{R}^n)$. Proof For this proof we follow Elliott H. Lieb and Michael Loss. From the assumptions, $\partial_j|f| = \mathrm{Re}\left(\frac{\bar{f}}{|f|}\,\partial_j f\right)$ when viewed in the sense of distributions and for almost every $x$ such that $f(x) \neq 0$ (and $\partial_j|f|(x) = 0$ if $f(x) = 0$). Moreover, $\mathrm{Re}\left(\frac{\bar{f}}{|f|}\, iA_j f\right) = \mathrm{Re}(iA_j|f|) = 0$. So $|\nabla|f|| = \left|\mathrm{Re}\left(\frac{\bar{f}}{|f|}(\nabla + iA)f\right)\right| \le |(\nabla + iA)f|$ for almost every $x$ such that $f(x) \neq 0$. The case that $f(x) = 0$ is similar. Application to line bundles Let $L \to \mathbb{R}^n$ be a U(1) line bundle, and let $A$ be a connection 1-form for $L$. In this situation, $A$ is real-valued, and the covariant derivative $D$ satisfies $D_j f = (\partial_j + iA_j)f$ for every section $f$. Here $\partial_j$ are the components of the trivial connection for $L$. If $A_j \in L^2_{\mathrm{loc}}(\mathbb{R}^n)$ and $f, D_1 f, \dots, D_n f \in L^2(\mathbb{R}^n)$, then for almost every $x \in \mathbb{R}^n$, it follows from the diamagnetic inequality that $|\nabla|f|(x)| \le |Df(x)|$. The above case $n = 4$ is of the most physical interest. We view $\mathbb{R}^4$ as Minkowski spacetime. Since the gauge group of electromagnetism is $\mathrm{U}(1)$, connection 1-forms for $L$ are nothing more than the valid electromagnetic four-potentials on $\mathbb{R}^4$. If $F = dA$ is the electromagnetic tensor, then the massless Maxwell–Klein–Gordon system for a section $\phi$ of $L$ are $\partial^\mu F_{\mu\nu} = \mathrm{Im}(\bar{\phi}\, D_\nu \phi)$ and $D^\mu D_\mu \phi = 0$, and the energy of this physical system is $\tfrac{1}{2}\big(\|F(t)\|_{L^2_x}^2 + \|D\phi(t)\|_{L^2_x}^2\big)$. The diamagnetic inequality guarantees that the energy is minimized in the absence of electromagnetism, that is, when $A = 0$. See also Citations Inequalities Electromagnetism
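The pointwise inequality lends itself to a quick numerical sanity check; the sketch below (the choices of $f$, $A$, grid, and resolution are all illustrative assumptions) compares $|\nabla|f||$ with $|(\nabla + iA)f|$ on a 2-D grid using finite differences.

import numpy as np

# Grid over [-1, 1]^2 (illustrative choice)
n = 256
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
h = x[1] - x[0]

# Arbitrary smooth complex-valued f and real vector potential A
f = np.exp(-(X**2 + Y**2)) * np.exp(1j * 3.0 * X * Y)
A1, A2 = -Y, X

# Finite-difference gradients of f and of |f|
df_dx, df_dy = np.gradient(f, h, h)
dabs_dx, dabs_dy = np.gradient(np.abs(f), h, h)

lhs = np.hypot(dabs_dx, dabs_dy)                 # |grad |f||
rhs = np.sqrt(np.abs(df_dx + 1j * A1 * f)**2 +
              np.abs(df_dy + 1j * A2 * f)**2)    # |(grad + iA) f|

# Up to discretization error, lhs <= rhs should hold at every grid point
print(float(np.max(lhs - rhs)))  # expected nonpositive, up to O(h) error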
Diamagnetic inequality
[ "Physics", "Mathematics" ]
381
[ "Electromagnetism", "Physical phenomena", "Mathematical theorems", "Binary relations", "Mathematical relations", "Fundamental interactions", "Inequalities (mathematics)", "Mathematical problems" ]
69,362,483
https://en.wikipedia.org/wiki/HD%2063584
HD 63584 (HR 3038) is a solitary star in the southern circumpolar constellation Volans. With an apparent magnitude of 6.15, it is barely visible to the naked eye under ideal conditions. The star is located 420 light years away based on parallax, but is drifting away with a radial velocity of 10.4 km/s. HD 63584 has a classification of "A0 IV/V", which indicates that it is an A0 star with the characteristics of both a main-sequence and a subgiant star. It has 2.58 times the Sun's mass, but a radius around 3 times that of the Sun. HD 63584 radiates at 60 times the Sun's luminosity from its photosphere at an effective temperature of 9,954 K, which gives it the blueish-white hue of an A0 star. Despite the "IV" part of the classification, HD 63584 is only 256 million years old, having completed only 59.6% of its main-sequence lifetime. References Volantis, 18 063584 Durchmusterung objects 37720 3038 A-type main-sequence stars A-type subgiants Volans
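The quoted luminosity, radius, and effective temperature are linked by the Stefan–Boltzmann law, $L/L_\odot = (R/R_\odot)^2 (T/T_\odot)^4$; a minimal sketch of the consistency check, assuming the nominal solar effective temperature of 5772 K:

# Stefan-Boltzmann consistency check: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4
T_SUN = 5772.0   # solar effective temperature in K (assumed nominal value)

L = 60.0         # luminosity in solar units (from the article)
T = 9954.0       # effective temperature in K (from the article)

R = L**0.5 / (T / T_SUN)**2   # radius in solar radii implied by L and T
print(f"Implied radius: {R:.2f} R_sun")  # ~2.6, broadly consistent with "around 3"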
HD 63584
[ "Astronomy" ]
255
[ "Volans", "Constellations" ]
69,363,002
https://en.wikipedia.org/wiki/Hypomyces%20camphorati
Hypomyces camphorati is a parasitic ascomycete in the family Hypocreaceae. Its host species is Lactarius camphoratus, and it causes a whitish to yellowish subiculum to form on the hymenium of the host, covering and preventing formation of the gills. It also deforms the shape of the cap and densifies the flesh. Hypomyces camphorati is often treated as Hypomyces lateritius, but it is set apart by its yellowish coloration and slightly larger ascospores. Further research is required to determine whether H. camphorati is indeed a separate species. References Hypocreaceae Taxa named by Charles Horton Peck Fungi described in 1906 Fungi of North America Parasitic fungi Fungus species
Hypomyces camphorati
[ "Biology" ]
157
[ "Fungi", "Fungus species" ]
69,363,429
https://en.wikipedia.org/wiki/Hydrocarbonoclastic%20bacteria
Hydrocarbonoclastic bacteria (also known as hydrocarbon degrading bacteria, oil degrading bacteria or HCB) are a heterogeneous group of prokaryotes which can degrade and utilize hydrocarbon compounds as a source of carbon and energy. Despite being present in most environments around the world, several of these specialized bacteria live in the sea and have been isolated from polluted seawater. Taxonomy and distribution The taxonomic diversity of hydrocarbon-degrading bacteria has not changed dramatically if we consider the higher taxa; many studies have provided information on 25 kinds of hydrocarbon-degrading bacteria and 25 kinds of fungi isolated from marine environments. Bacterial genera such as Gordonia, Brevibacterium, Aeromicrobium, Dietzia, Burkholderia and Mycobacterium isolated from oil have been shown to be potential organisms for hydrocarbon degradation. Cerniglia et al. observed that nine cyanobacteria, five green algae, one red alga, one brown alga and two diatoms could oxidise naphthalene. Temperature is crucial because it influences microbial physiology and diversity; the rate of biodegradation generally decreases as the temperature decreases. Hydrocarbonoclastic bacteria are diazotrophic, i.e. they can live in environments extremely poor in nitrogen compounds, which allows them to spread throughout the environment. They are extremely useful for environmentally friendly biosanitation; the fastest and most complete degradation occurs under aerobic conditions. Hydrocarbons occur in marine environments where there are oil spills, which shows that these bacteria are nutritionally independent of nitrogen sources, a characteristic due to their ability to fix atmospheric nitrogen. In Lagos, a city in Nigeria, nine bacterial strains capable of degrading crude oil (Pseudomonas fluorescens, P. aeruginosa, Bacillus subtilis, Bacillus sp., Alcaligenes sp., Acinetobacter lwoffi, Flavobacterium sp., Micrococcus roseus and Corynebacterium sp.) were isolated from a polluted stream; they have also been detected in north-east India. In the Louisiana incident in the Gulf of Mexico, about 100 strains were detected and studied, revealing that the isolates all belong to the phylum Proteobacteria and three orders (Alteromonadales, Rhodospirillales and Enterobacteriales). These organisms are normally present in very small numbers, but hydrocarbons give them an advantage as a source of carbon and energy, and they then grow and multiply rapidly. Alcanivorax-like bacteria have been detected in oil-affected environments around the world, including the US, Germany, the UK, Spain, Italy, Singapore, China, the western Philippines, Japan, the mid-Atlantic ridge near Antarctica, and deepwater sediments in the eastern Pacific Ocean. Physiology and metabolism Hydrocarbonoclastic bacteria have two fundamental characteristics: (1) specific membrane-bound dioxygenases and (2) mechanisms for optimizing contact with water-insoluble hydrocarbons. Microbial biodegradation occurs wherever oil contamination occurs. However, biodegradation rates are slow, and as a result there are severe toxic effects on marine life in the water and on the coast. The hydrocarbons contained in petroleum behave differently in water depending on their chemical nature, a process called weathering; those with low molecular weight volatilize when they reach the surface. The rest is attacked by bacteria that are able to degrade it. 
These bacteria do not adhere to the oil and do not have a high hydrophobicity of the cell surface. The next stage of degradation involves microorganisms with high cell surface hydrophobicity, which can adhere to residual high molecular weight hydrocarbons. Adhesion is due to hydrophobic fimbriae, fibrils, lipids and proteins of the outer membrane and some small molecules of the cell surface, such as gramicidin S and prodigiosin. All petroleum products are derived from crude oil, whose major constituents are hydrocarbons; these can be separated into four fractions: saturated, aromatic, resin and asphaltene. The susceptibility of hydrocarbons to microbial degradation can be generally ranked as follows: linear alkanes > branched alkanes > small aromatics > cyclic alkanes. Some compounds, such as the high molecular weight polycyclic aromatic hydrocarbons (PAHs), may not be degraded at all; asphaltenes and resins are considered to be recalcitrant to biodegradation. Alkane degradation pathways Alkanes are readily biodegraded aerobically in the sea by several different pathways. Terminal oxidation The degradation of medium-chain alkanes by Pseudomonas putida starts with the alkane hydroxylase; this enzyme is made up of three components: the membrane-bound oxygenase component and two soluble components called rubredoxin and rubredoxin reductase. Oxidation of the methyl group of n-alkanes by the alkane hydroxylase releases n-alkanols, which are further oxidized by a membrane-bound alcohol dehydrogenase into n-alkanals. The n-alkanals are subsequently transformed into fatty acids and then into acyl-CoA, by the aldehyde dehydrogenase and the acyl-CoA synthetase respectively. CH3-R-CH3 -> CH3-R-CH2OH -> CH3-R-CHO -> CH3-R-COOH -> (CH2OH)-R-COOH CH3-R-CH2OH -> (CH2OH)-R-CH2OH -> (CH2OH)-R-CHO -> (CH2OH)-R-COOH (CH2OH)-R-COOH -> CHO-R-COOH -> HOOC-R-COOH Subterminal oxidation This path leads to the release of secondary alcohols. The n-alkanes are oxidized by a monooxygenase to secondary alcohols, then to ketones and finally to fatty acids. R1-(CH2)(CH2)-R2 -> R1-(CH2)(CHOH)-R2 -> R1-(CH2)(CO)-R2 -> R1-(CH2)O(CO)-R2 -> R1-COOH + R2-COOH Cycloalkane degradation pathways Cycloalkanes are degraded by a co-oxidation mechanism, the process leading to the formation of a cyclic alcohol and a ketone. A monooxygenase introduces an oxygen into the cyclic ketone and the cyclic ring is cleaved. Aromatic hydrocarbon degradation pathways For aromatic compounds there are different pathways; considering toluene, at least five are known, each of them present in specific bacterial species. Burkholderia sp. strain JS150 is unique in using multiple pathways for toluene metabolism: TOL pathway: This path takes its name from the homonymous plasmid that codes for it. Toluene is degraded to benzyl alcohol, to benzaldehyde and then to benzoate, which is further transformed into the intermediates of the TCA cycle. F1 pathway: P. putida is able to undertake this pathway, which consists in the introduction of two hydroxyl groups into toluene, forming cis-toluene dihydrodiol. This intermediate is then converted to 3-methylcatechol. KR1 pathway: Pseudomonas mendocina KR1 is able to convert toluene into p-cresol, by the enzyme toluene 4-monooxygenase. Subsequently, p-hydroxybenzoate is formed through oxidation of the methyl side chain. PK01: Pseudomonas pickettii PKO1 oxidizes toluene with the enzyme toluene 3-monooxygenase to m-cresol, which is further oxidized to 3-methylcatechol by another monooxygenase. 
G4: The G4 pathway was observed in Burkholderia cepacia G4, where toluene is converted into o-cresol by toluene 2-monooxygenase and subsequently another monooxygenase converts it to 3-methylcatechol. Anaerobic degradation pathways Oil components that are trapped in marine sediments tend to persist in anaerobic conditions. Some hydrocarbons can be oxidized under anaerobic conditions when nitrate reduction, sulfate reduction, methane production, Fe3+ reduction or photosynthesis are coupled to hydrocarbon oxidation. Anaerobic bacterium strain HD-1 grows on in the presence of or tetradecane. In the absence of , tetradecane is degraded, and the major metabolic intermediate is 1-dodecene. Factors influencing biodegradation The biodegradation of hydrocarbons is limited by a number of chemical, physical and biological factors. Biosurfactants: The contact between bacteria and hydrocarbons is fundamental because the first degradative step involves the use of oxygenase. Contact is favored by adhesion and emulsifying mechanisms. Bacteria that break down hydrocarbons often produce bioemulsifiers as secondary metabolites. These can be divided into low molecular weight molecules, which effectively lower surface and interfacial tensions, and high molecular weight polymers, which bond firmly to surfaces. Some bioemulsifiers promote the growth of bacteria on hydrophobic substrates insoluble in water by increasing their bioavailability, increasing their surface, desorbing them from surfaces and increasing their solubility. Biosurfactants have an amphiphilic nature, which allows the microorganisms that produce them to exploit hydrophobic substrates, allowing motility and avoiding competitors. The hydrophobic part usually comprises saturated or unsaturated fatty acids, fatty hydroxy acids or fatty alcohols with a chain length between 8 and 18 carbon atoms. The hydrophilic components consist either of small hydroxyl, phosphate or carboxylic groups, or of portions of carbohydrates or peptides. Biosurfactants are predominantly anionic and non-ionic compounds. Nutritional requirements for growth: The HCB also need large amounts of nitrogen and phosphorus. It has been estimated that 150 g of nitrogen and 30 g of phosphorus are required for 1 kg of oxidized hydrocarbon. Temperature: The biodegradation of petroleum hydrocarbons occurs at an optimum temperature between 20 °C and 50 °C, while at lower temperatures the degradation rate is slower. pH and oxygen: Bacteria require a neutral pH, and the oil itself can help neutralize environments that are too acidic for microbial growth. Oxygen is critical for aerobic degradation. Ecology At the moment, most studies of the ecology of hydrocarbonoclastic bacteria refer to a wide group of genera found principally in marine environments. Since each of them is characterized by a different metabolism, these organisms work together in order to degrade all types of hydrocarbon compounds in a very efficient way. They also play a fundamental role in the carbon biogeochemical cycle, and several studies show that some species can create intricate relationships with different marine organisms. Oil degrading microbial communities and ecological successions When a release of oil (or any other kind of hydrocarbon compound) happens in a specific marine area, many bacterial species begin to colonize it, changing the microbial community already present there. 
Analyzing the dynamics of those communities has led to the discovery of common patterns that are associated with biodegradation, and this information can be useful for improving bioremediation methods. The microbial community in situ reshuffles as the quantities of nutrients change with the increasing presence of hydrocarbons: this ecological situation selects only those organisms which can use hydrocarbons as an energy source and possess all the enzymes to do so. In addition, most oil biodegrading species require specific quantities of phosphorus and nitrogen to carry out their catabolic processes. It is therefore possible to state that the growth of hydrocarbonoclastic bacteria is limited mainly by the availability of nitrogen and phosphorus. Several experiments conducted both in vitro and in situ showed the fundamental role that OHCBs (obligate hydrocarbonoclastic bacteria) play during events like an oil spill. The very first microorganism populations that bloom when hydrocarbons are released are the so-called generalists, which can break (through specific enzymes) the simplest bonds in hydrocarbons (generally they are n-alkane degraders); among them, the most common genus is Alcanivorax (the most important species is Alcanivorax borkumensis), which can degrade aliphatic hydrocarbon compounds. Subsequently, specialists replace generalists to degrade stronger and more complex bonds; among them one of the best-known genera is Cycloclasticus, which can, for example, degrade aromatic hydrocarbons such as PAH (polycyclic aromatic hydrocarbons). Up to now, no hydrocarbonoclastic Archaea species have been found, since it appears that they are too sensitive to the effects of an oil spill, as shown by many studies carried out on beaches and coastal waters. Nevertheless, Archaea species could be used as markers of the ecological status of an environment. Relationships with other marine organisms during bioremediation processes Hydrocarbonoclastic bacteria form just a part of the ecological network during bioremediation and biodegradation processes, which involves many direct and indirect relationships and interactions with other communities and with the surrounding environment too. Such interactions include competition for limiting nutrients, predation by protozoa, lysis by phages and cooperative interactions that can decrease or increase degradation of hydrocarbons. Nutrient availability, as well as nutrient recycling, is an important aspect of biodegradation communities. As said before, the amount of phosphorus and nitrogen can modify the structure of microbial populations and consequently the composition of the community that is shaped by the presence of certain molecules in the ecosystem. Predation and interactions with phages also affect the development of a hydrocarbonoclastic bacterial community. It is possible that an increase in the turnover of biomass (which can be obtained by stimulating bacteriophage lysis or protozoan predation) could benefit hydrocarbonoclastic populations by stimulating biological remediation. In fact, the presence of oil in the environment can induce prophages and the subsequent lysis of a huge number of bacteria. At the same time, nutrient recycling caused by phage lysis can trigger a bloom of those species that are favored by the presence of both nutrients and hydrocarbons (used as an energy resource). 
On the other hand, the presence of protozoa can create the opposite situation (it has a negative effect on biodegradation), by limiting the growth of bacterial populations in the ecosystem. That is why interactions with predators are fundamental in marine environments. Nevertheless, on specific occasions, the presence of predators can boost bacterial degradation, as happens for benzene or toluene. Moreover, in a similar way to what happens with phages, the activity of predation creates a nutritional loop, because predators can remineralize nutrients, which increases bacterial growth. Role in biogeochemical carbon cycle Since hydrocarbonoclastic bacteria can oxidize long-chain carbon compounds, their metabolism forms part of the large family of biotic reactions in the biogeochemical carbon cycle. Hydrocarbons, especially alkanes, are produced by myriad organisms as waste, for defense, as structural elements, and as chemoattractants. Therefore, this type of biodegradation represents one of the major sinks of hydrocarbon compounds and one of the sources of carbon dioxide in marine environments. In conclusion, hydrocarbonoclastic bacteria can mobilize hydrocarbons from natural sources and introduce the oxidized carbon atoms into their central metabolic pathways. Those oxidized molecules enter the biotic phase of the carbon cycle and can be assimilated by primary and secondary consumers through predation or taken up after cell death. Biotechnological applications Hydrocarbon-degrading bacteria have many different applications, but their role in the field of environmental microbiology is of special importance. Marine hydrocarbonoclastic bacteria are powerful tools for bioremediation, as they can degrade and convert contaminant oils because of their catabolic versatility. In that way, using biotechnology it is possible to accelerate the cleanup of contaminated sites such as coastal regions and offshore areas after an oil spill or pollution from human activities, and also to contain and mitigate the damage. They normally bloom after an oil spill or other pollution, and because they are very versatile metabolically, they can grow on minimal media. One example of this is the nitrogen-fixing and heavy oil-degrading bacterium Azospirillum oleiclasticum, which was isolated from an oil production mixture. But A. oleiclasticum is not the only strain that can grow on oil: a 2013 study discovered at least 125 strains adapted to grow on minimal medium supplemented with crude oil. The predominant bacteria detected were Proteobacteria, and the most abundant species were in the genera Acinetobacter and Stenotrophomonas. They are also used in biosynthesis because they are an extraordinary archive of enzymes like mono- and dioxygenases, oxidases, dehydrogenases and others. Furthermore, as they are adapted to grow in hydrocarbon-rich environments, they often synthesize characteristic compounds like polymeric storage substances of industrial interest and bio-detergents with high emulsifying activity. One example of this is the use of the oleaginous yeast Yarrowia lipolytica. 
As this yeast has a versatile lipid metabolism, when combined with specific bacterial genes it can use specific enzymatic pathways to bioconvert different lipids (petroleum, alkanes, vegetable oils, fatty acids), fats and oils into industrially valuable lipid-derived compounds such as isoprenoid-derived compounds (carotenoids, polyenic carotenoid esters), wax esters (WE), polyhydroxyalkanoates (PHAs) and free hydroxylated fatty acids (HFAs). References External links Microbiology Bioremediation Microbial population biology Microbial growth and nutrition Biodegradation Hydrocarbon-degrading bacteria Oil spill remediation technologies
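The nutrient stoichiometry quoted in the factors section above (about 150 g of nitrogen and 30 g of phosphorus per kilogram of oxidized hydrocarbon) scales linearly with the amount of oil to be degraded; a minimal sketch, where the spill mass is an arbitrary example value:

# Nutrient demand for hydrocarbon biodegradation, using the ratios quoted above:
# ~150 g N and ~30 g P per kg of oxidized hydrocarbon
N_PER_KG = 0.150   # kg nitrogen per kg hydrocarbon
P_PER_KG = 0.030   # kg phosphorus per kg hydrocarbon

oil_mass_kg = 10_000.0   # illustrative spill mass, not from the source

print(f"Nitrogen required:   {oil_mass_kg * N_PER_KG:,.0f} kg")   # 1,500 kg
print(f"Phosphorus required: {oil_mass_kg * P_PER_KG:,.0f} kg")   # 300 kg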
Hydrocarbonoclastic bacteria
[ "Chemistry", "Biology", "Environmental_science" ]
3,907
[ "Hydrocarbon-degrading bacteria", "Microbiology", "Biodegradation", "Ecological techniques", "Bacteria", "Microscopy", "Bioremediation", "Environmental soil science" ]
69,365,391
https://en.wikipedia.org/wiki/Aaron%20Robertson%20%28mathematician%29
Aaron Robertson (born November 8, 1971) is an American mathematician who specializes in Ramsey theory. He is a professor at Colgate University. Life and education Aaron Robertson was born in Torrance, California, and moved with his parents to Midland, Michigan at the age of 4. He studied actuarial science as an undergraduate at the University of Michigan, and went on to graduate school in mathematics at Temple University in Philadelphia, where he was supervised by Doron Zeilberger. Robertson received his Ph.D. in 1999 with his thesis titled Some New Results in Ramsey Theory. Following his Ph.D., Robertson became an assistant professor of mathematics at Colgate University, where he is currently a full professor. Mathematical work Robertson's work in mathematics since 1998 has consisted predominantly of topics related to Ramsey theory. One of Robertson's earliest publications is a paper, co-authored with his supervisor Doron Zeilberger, which came out of his Ph.D. work. The authors prove that the minimum number (asymptotically) of monochromatic Schur triples that a 2-colouring of $\{1, \dots, n\}$ can have is $n^2/22 + O(n)$. After completing his dissertation, Robertson worked on 3-term arithmetic progressions, finding the best-known lower bounds for several of the associated Ramsey numbers in a piece titled New Lower Bounds for Some Multicolored Ramsey Numbers. Another notable piece of Robertson's research is a paper co-authored with Doron Zeilberger and Herbert Wilf titled Permutation Patterns and Continued Fractions. In the paper, they "find a generating function for the number of (132)-avoiding permutations that have a given number of (123) patterns" with the result being "in the form of a continued fraction". Robertson's contribution to this specific paper includes discussion on permutations that avoid a certain pattern but contain others. A notable paper Robertson wrote, titled A Probabilistic Threshold For Monochromatic Arithmetic Progressions, explores a threshold function for the $r$-colourings of $\{1, \dots, n\}$ (where $r$ is fixed). Robertson analyzes the threshold function for $k$-term arithmetic progressions and improves the bounds found previously. In 2004, Robertson and Bruce M. Landman published the book Ramsey Theory on the Integers, of which a second expanded edition appeared in 2014. The book introduced new topics such as rainbow Ramsey theory, an “inequality” version of Schur's theorem, monochromatic solutions of recurrence relations, Ramsey results involving both sums and products, monochromatic sets avoiding certain differences, Ramsey properties for polynomial progressions, generalizations of the Erdős–Ginzburg–Ziv theorem, and the number of arithmetic progressions under arbitrary colourings. More recently, in 2021, Robertson published a book titled Fundamentals of Ramsey Theory. Robertson's goal in writing this book was to "help give an overview of Ramsey theory from several points of view, adding intuition and detailed proofs as we go, while being, hopefully, a bit gentler than most of the other books on Ramsey theory". Throughout the book, Robertson discusses several theorems including Ramsey's Theorem, Van der Waerden's Theorem, Rado's Theorem, and the Hales–Jewett Theorem. 
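A Schur triple is a triple (x, y, x + y), and it is monochromatic when all three members receive the same colour; counting them for a given 2-colouring of {1, …, n} is direct, as in the brute-force sketch below (the parity colouring is an arbitrary example, useful for comparing particular colourings against the asymptotic minimum).

# Count monochromatic Schur triples (x, y, x+y) with x <= y under a 2-colouring of {1..n}
def mono_schur_triples(colour):
    """colour: dict mapping each of 1..n to 0 or 1."""
    n = len(colour)
    count = 0
    for x in range(1, n + 1):
        for y in range(x, n + 1 - x):   # ensures x + y <= n
            if colour[x] == colour[y] == colour[x + y]:
                count += 1
    return count

n = 200
parity_colouring = {i: i % 2 for i in range(1, n + 1)}   # arbitrary example colouring
print(mono_schur_triples(parity_colouring))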
Aaron Robertson (mathematician)
[ "Mathematics" ]
710
[ "Combinatorialists", "Ramsey theory", "Combinatorics" ]
69,365,840
https://en.wikipedia.org/wiki/Sea%20of%20Suf
In Mandaean cosmology, the Sea of Suf (or Sea of Sup, ) is a primordial sea in the World of Darkness. It is analogous to Tehom in the Book of Genesis. It is a great sea that the soul has to pass in the first steps of its ascent, and it is also considered to be the limit of worldly desire. In the Ginza Rabba The Sea of Suf is mentioned in Right Ginza 1, 2.3, 3, 5.2, 9.1, 15.1, 15.10, 15.12, 15.18, 16.1, 16.6, and Left Ginza 2.14, often as iama rba ḏ-sup or the "Great Suf-Sea." See also Yam Suph (Hebrew cognate) References Water and religion Chaos (cosmogony) Mandaean cosmology Ginza Rabba
Sea of Suf
[ "Astronomy" ]
192
[ "Cosmogony", "Chaos (cosmogony)" ]
69,366,677
https://en.wikipedia.org/wiki/Dayok
Dayok is a Philippine condiment originating from the islands of Visayas and Mindanao in the Philippines. It is made from fish entrails (usually from yellowfin tuna), excluding the heart and the bile sac. It is fermented with salt, and sometimes rice wine (pangasi) and various herbs. It has a sharp umami and salty flavor very similar to patis (fish sauce) and bagoong. It is sold in sealed glass bottles. See also Bekasang, a similar Indonesian preparation Shiokara, a similar Japanese preparation Bagoong Shrimp paste Fish sauce References External links Fermented fish Fish sauces Philippine condiments Fermented foods
Dayok
[ "Biology" ]
140
[ "Fermented foods", "Biotechnology products" ]
69,368,082
https://en.wikipedia.org/wiki/Chromium%28III%29%20perchlorate
Chromium(III) perchlorate is an inorganic compound, a salt with the chemical formula Cr(ClO4)3. Its hexahydrate Cr(ClO4)3·6H2O is a cyan solid that dissolves in water. Preparation Chromium perchlorate can be prepared by reacting chromium(III) oxide or chromium(III) hydroxide with perchloric acid: Cr2O3 + 6HClO4 → 2Cr(ClO4)3 + 3H2O Hydrates Chromium perchlorate has many hydrates, such as the hexahydrate Cr(ClO4)3·6H2O and a nonahydrate Cr(ClO4)3·9H2O. All of them are cyan substances that are soluble in water. Related compounds Cr(ClO4)3 will react with NH3 under suitable conditions to form an orange hexammine complex Cr(ClO4)3·6NH3. Other compounds with the general formula Cr(ClO4)3(NH3)x are also known. When x = 3, this compound is red; when x = 4 or 5, it is orange. The hexammine complex is explosive. Cr(ClO4)3 can also form complexes with N2H4, such as the purple Cr(ClO4)3·2N2H4. Cr(ClO4)3 can also form complexes with urea (CO(NH2)2), such as Cr(ClO4)3·6CO(NH2)2 with a hexagonal structure. References Chromium(III) compounds Perchlorates
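The preparation equation above fixes a 6:1 molar ratio of perchloric acid to chromium(III) oxide; a minimal sketch of the resulting mass arithmetic, with the batch size as an arbitrary example and standard atomic weights assumed:

# Stoichiometry for Cr2O3 + 6 HClO4 -> 2 Cr(ClO4)3 + 3 H2O
M = {"Cr": 51.996, "O": 15.999, "H": 1.008, "Cl": 35.453}  # atomic weights, g/mol

m_Cr2O3 = 2 * M["Cr"] + 3 * M["O"]        # ~151.99 g/mol
m_HClO4 = M["H"] + M["Cl"] + 4 * M["O"]   # ~100.46 g/mol

grams_Cr2O3 = 10.0                        # illustrative batch size
mol_Cr2O3 = grams_Cr2O3 / m_Cr2O3
grams_HClO4 = 6 * mol_Cr2O3 * m_HClO4     # 6:1 molar ratio from the equation
print(f"{grams_HClO4:.1f} g HClO4 per {grams_Cr2O3:.0f} g Cr2O3")  # ~39.7 g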
Chromium(III) perchlorate
[ "Chemistry" ]
364
[ "Perchlorates", "Salts" ]
69,368,391
https://en.wikipedia.org/wiki/List%20of%20liquid%E2%80%93liquid%20phase%20separation%20databases
Liquid-liquid phase separation (LLPS) is well defined in the Biomolecular condensate page. LLPS databases cover different aspects of LLPS phenomena, ranging from the cellular location of the Membraneless Organelles (MLOs) to the role of a particular protein/region forming the condensate state. These databases contain manually curated data supported by experimental evidence in the literature and can include related features such as the presence of protein disorder, low complexity, post-translational modifications, experimental details, and phase diagrams, among others. See also Biomolecular condensate MobiDB database Intrinsically disordered proteins DisProt database References Protein structure Structural bioinformatics software Proteomics Neurodegenerative disorders
List of liquid–liquid phase separation databases
[ "Chemistry" ]
148
[ "Protein structure", "Structural biology" ]
69,368,785
https://en.wikipedia.org/wiki/Ericson-Ericson%20Lorentz-Lorenz%20correction
Ericson-Ericson Lorentz-Lorenz correction, also called the Ericson-Ericson Lorentz-Lorenz effect (EELL), refers to an analogy in the interface between nuclear, atomic and particle physics, which in its simplest form corresponds to the well-known Lorentz-Lorenz equation (also referred to as the Clausius-Mossotti relation) for electromagnetic waves and light in a refractive medium. These relations link the macroscopic quantities such as the refractive index to the dipole polarization of the individual atoms or molecules. When the latter are kept apart, the polarizing field is no longer the average electric field in the medium. Similarly for the pion, the lightest meson and the carrier of the long range part of the nuclear force, its typical non-relativistic scattering from individual nucleons has a dominant dipole structure with a known average dipole polarizability of strength ("the average scattering volume"). The physics becomes closely similar although the nuclear density is about 15 orders of magnitude larger than that of ordinary matter and the nature of the dipole interaction is totally different. The correction was predicted in 1963 by Magda Ericson and was derived in 1966 together with Torleif Ericson. The effect has since been re-derived in various ways, but is now understood as a general effect as long as the nucleons keep their individuality, independent of the detailed cause. This is the reason why in the molecular case of the classical Lorentz-Lorenz effect so many incompatible derivations give the same result. The EELL correction was first applied to the line shifts of hydrogen-like atoms, where the electron in the Coulomb field is replaced by a negatively charged pion. Its interaction with the central nucleus causes deviations in the line positions in such Bohr-like atoms. The effect has greatly influenced the understanding of the pion-nucleus many-body problem by the realization that the scattering of a pion from a nucleon in nuclear matter is determined by the local pion field at the nucleon. The effect has also found applications in other contexts of pionic phenomena in nuclei such as the modification of the axial coupling constant in beta-decay, pion scattering, axial locality, pion condensation in nuclear matter, structure of the nuclear pion field, etc. Further reading Spin excitations in nuclei, Fred Petrovich (ed.) et al., Springer (1984), see in particular G. E. Brown's contribution. Mesons in nuclei, Denys H. Wilkinson and Mannque Rho, North-Holland (1979). Ericson, M. (1978). "Pion field and weak interactions in nuclei". Progress in Particle and Nuclear Physics. 1: 67–104. References Nuclear physics Scattering theory
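For background, the classical relation that the EELL correction mirrors can be written (in Gaussian units; this is the standard textbook form, not a formula taken from the source) as

\[ \frac{n^2 - 1}{n^2 + 2} = \frac{4\pi}{3}\, N \alpha, \]

where $n$ is the refractive index, $N$ the number density of molecules, and $\alpha$ the molecular polarizability; in the pionic analogue, the role of the molecular polarizability is played by the average pion-nucleon scattering volume.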
Ericson-Ericson Lorentz-Lorenz correction
[ "Physics", "Chemistry" ]
580
[ "Scattering", "Scattering theory", "Nuclear physics" ]
69,369,517
https://en.wikipedia.org/wiki/Quiet%20room
A quiet room or silent room is a room, typically in an office, built with regard to silence by shielding noise from or towards the surroundings. In an office setting, they are often made for one person, and differ from a meeting room, which typically is larger and can accommodate several people. Quiet rooms have been described as a necessity in office landscapes in addition to telephone rooms and meeting rooms. Some have suggested that an office landscape should have at least one quiet room per six employees. Use A quiet room can both shield against noise from the surroundings, and shield the surroundings from noise from inside the quiet room. For concentration In one sense, the quiet room is a place of silence where one can stay without noise disturbance from others, for example in order to perform work that requires concentration or is confidential in nature. An example of use can be if a person is seated in a noisy office landscape and is handed a task which requires special concentration. There are also examples of quiet rooms being installed in small private homes which otherwise lack the space or number of rooms to give the occupants privacy (see also man cave). For noisy work In another sense, a quiet room can be used to shield the surroundings from noisy work. This may be relevant, for example, if a person in an office landscape is to have a video meeting, and especially if the meeting is longer or if the person will be having an active role in the meeting. In this context, a distinction is often made between a quiet room and a meeting room; a quiet room is usually used when one person in the office is having a longer conversation, while meeting rooms are used for accommodating several meeting participants in the same room. Other meanings A quiet room can, depending on context, be used as a euphemism for special types of rooms. Multifaith spaces are sometimes called quiet rooms. Breastfeeding rooms are also sometimes called quiet rooms. Quiet room or rest room can also refer to sensory rooms, for example for recovering stroke patients. Quiet rooms can also refer to a "sanctuary" or a place where one can rest and let the mind wander, or even perform yoga. In some cases, quiet rooms have been set up temporarily as a place where people can mourn. See also Anechoic chamber, a room designed to be completely echo free Organizational culture, hereunder office culture Remote work Safe room References Office work Rooms
Quiet room
[ "Engineering" ]
483
[ "Rooms", "Architecture" ]
69,371,760
https://en.wikipedia.org/wiki/Mary%20Fama
Mary Elizabeth Fama (née Duncan; 23 October 1938 – 6 July 2021) was a New Zealand applied mathematician who became "a leading international figure" in the analysis of stress and deformation of rock and the application of this analysis to mining. A method she developed for solving analytically the convergence-confinement curve of a circular mining tunnel has been variously called the Duncan-Fama convergence curve, Duncan-Fama analytical method, or Duncan-Fama solution. Education and career After early education at a boarding school in Scotland, Fama became a student at Erskine College, Wellington, where she excelled in mathematics but was stripped of her academic honours after being caught rebelling against the school rules. She went on to earn a bachelor's degree at the University of Canterbury and a second bachelor's degree at the University of Oxford. Returning to New Zealand, she became a researcher at the Department of Scientific and Industrial Research (DSIR) in 1962. In 1964, she took a Fulbright scholarship to Harvard University, and she completed her PhD there in 1967. In 1968, she married and moved with her husband to the Sydney North Shore, and Fama became a junior lecturer at the University of Sydney. In 1970, they returned to New Zealand, and Fama returned to her position at the DSIR. She was appointed to the University of Waikato as a temporary senior lecturer in 1980. In 1983, they moved back to Australia again, and Fama became a senior scientist for the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Brisbane, becoming the second woman scientist at the Brisbane centre. She retired in 2010 and returned with her husband to Havelock North in New Zealand. Personal life Fama was born on 23 October 1938 in Windsor, England, to a Catholic family of five children; her father was a New Zealander, army officer, and government official. They returned to New Zealand when Fama was ten, living in the Wellington region. In 1968, she met and married Australian psychiatrist Peter Fama, then working in Auckland but slated to return to Australia. Their first child died soon after childbirth, but they had three more in the early 1970s, all three of whom suffered from Friedreich's ataxia, a genetic degenerative disorder, and died in their 20s and 30s. Fama suffered for many years from pulmonary tuberculosis, likely contracted as a teenager but not diagnosed until much later. By the 1980s she was diagnosed with bronchiectasis. As a complication of her tuberculosis, she went blind in one eye in 2013. She died in Hastings on 6 July 2021. References 1938 births 2021 deaths New Zealand women mathematicians Applied mathematicians University of Canterbury alumni Alumni of the University of Oxford Harvard University alumni Academic staff of the University of Sydney People from Windsor, Berkshire People educated at Erskine College, Wellington People associated with Department of Scientific and Industrial Research (New Zealand) CSIRO people
Mary Fama
[ "Mathematics" ]
586
[ "Applied mathematics", "Applied mathematicians" ]
69,375,146
https://en.wikipedia.org/wiki/Tecno%20Spark%208
Tecno Spark 8 and Tecno Spark 8P are Android-based smartphones manufactured, released and marketed by Tecno Mobile as part of the Tecno Spark 8 series. The devices serve as successors to the Tecno Spark 7 series. The Spark 8 and Spark 8P are upgraded versions of the Spark 7 series, coming with different features, including the processor, camera and design. The phones have received generally favorable reviews, with critics mostly noting the design and the camera. Critics, however, criticized the lack of fast charging capability. Specifications Hardware The Spark 8 features a 720p resolution with a 20:9 aspect ratio, while the Spark 8P features a 1080p resolution with a 20:9 aspect ratio. The Spark 8 has a display size of 6.52 inches, while the Spark 8P has a display size of 6.6 inches. The Spark 8 comes with a MediaTek Helio P22 SoC, while the Spark 8P comes with a MediaTek Helio G35 SoC. The Spark 8 comes with 2 GB of RAM, while the Spark 8P comes with 4 GB of RAM. The Spark 8 comes with 64 GB of storage, while the Spark 8P comes with 64/128 GB of storage. Both devices support storage expansion via microSD. Both devices come with a battery capacity of 5000 mAh and support 10 W charging. The Spark 8 features a dual rear camera with a 16-megapixel main camera and a 2-megapixel depth sensor, along with an 8-megapixel front camera. The Spark 8P features a dual rear camera with a 50-megapixel main camera, along with an 8-megapixel front camera. Software The Spark 8 runs on Android 11 (Go edition), while the Spark 8P runs on the standard Android 11, with both running HiOS 7.6. HiOS 7.6 features Peek Proof, Voice Changer, Phone cloner and Document correction. Reception Cashify praised the Spark 8 for its battery, display and DTS sound effects, noting that the device has "an excellent design". However, the lack of fast charging and 5G connectivity was criticized. Stephen Ekpa from DroidAfrica praised the Spark 8 for its design and performance. He criticized the device's sound quality, however, noting that "the speaker is economically integrated with the earpiece grill". Alfred Gitonga from Mobi Trends praised the Spark 8P for its display, design and camera, noting that "camera-wise, the Spark 8P is a major improvement and is definitely the unique selling proposition of the smartphone". Yinkmedia gave a positive review of the Spark 8P. Praise was directed towards the battery, design, display and camera, while noting that "the selfie camera needs improvement". References Android (operating system) devices Phablets Mobile phones introduced in 2021 Tecno smartphones
Tecno Spark 8
[ "Technology" ]
604
[ "Crossover devices", "Phablets" ]
72,360,809
https://en.wikipedia.org/wiki/AI%20safety
AI safety is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to ensure AI systems are moral and beneficial, as well as monitoring AI systems for risks and enhancing their reliability. The field is particularly concerned with existential risks posed by advanced AI models. Beyond technical research, AI safety involves developing norms and policies that promote safety. The field gained significant attention in 2023, with rapid progress in generative AI and public concerns voiced by researchers and CEOs about potential dangers. During the 2023 AI Safety Summit, the United States and the United Kingdom both established their own AI Safety Institute. However, researchers have expressed concern that AI safety measures are not keeping pace with the rapid development of AI capabilities. Motivations Scholars discuss current risks from critical systems failures, bias, and AI-enabled surveillance, as well as emerging risks like technological unemployment, digital manipulation, weaponization, AI-enabled cyberattacks and bioterrorism. They also discuss speculative risks from losing control of future artificial general intelligence (AGI) agents, or from AI enabling perpetually stable dictatorships. Existential safety Some have criticized concerns about AGI, such as Andrew Ng, who compared them in 2015 to "worrying about overpopulation on Mars when we have not even set foot on the planet yet". Stuart J. Russell, on the other hand, urges caution, arguing that "it is better to anticipate human ingenuity than to underestimate it". AI researchers have widely differing opinions about the severity and primary sources of risk posed by AI technology – though surveys suggest that experts take high-consequence risks seriously. In two surveys of AI researchers, the median respondent was optimistic about AI overall, but placed a 5% probability on an "extremely bad (e.g. human extinction)" outcome of advanced AI. In a 2022 survey of the natural language processing community, 37% agreed or weakly agreed that it is plausible that AI decisions could lead to a catastrophe that is "at least as bad as an all-out nuclear war". History Risks from AI began to be seriously discussed at the start of the computer age. In 1988, Blay Whitby published a book outlining the need for AI to be developed along ethical and socially responsible lines. From 2008 to 2009, the Association for the Advancement of Artificial Intelligence (AAAI) commissioned a study to explore and address potential long-term societal influences of AI research and development. The panel was generally skeptical of the radical views expressed by science-fiction authors but agreed that "additional research would be valuable on methods for understanding and verifying the range of behaviors of complex computational systems to minimize unexpected outcomes". In 2011, Roman Yampolskiy introduced the term "AI safety engineering" at the Philosophy and Theory of Artificial Intelligence conference, listing prior failures of AI systems and arguing that "the frequency and seriousness of such events will steadily increase as AIs become more capable". In 2014, philosopher Nick Bostrom published the book Superintelligence: Paths, Dangers, Strategies.
In it, he argues that the rise of AGI has the potential to create various societal issues, ranging from the displacement of the workforce by AI and the manipulation of political and military structures to the possibility of human extinction. His argument that future advanced systems may pose a threat to human existence prompted Elon Musk, Bill Gates, and Stephen Hawking to voice similar concerns. In 2015, dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI and outlining concrete directions. To date, the letter has been signed by over 8000 people, including Yann LeCun, Shane Legg, Yoshua Bengio, and Stuart Russell. In the same year, a group of academics led by professor Stuart Russell founded the Center for Human-Compatible AI at the University of California, Berkeley, and the Future of Life Institute awarded $6.5 million in grants for research aimed at "ensuring artificial intelligence (AI) remains safe, ethical and beneficial". In 2016, the White House Office of Science and Technology Policy and Carnegie Mellon University announced The Public Workshop on Safety and Control for Artificial Intelligence, one of a sequence of four White House workshops aimed at investigating "the advantages and drawbacks" of AI. In the same year, Concrete Problems in AI Safety – one of the first and most influential technical AI safety agendas – was published. In 2017, the Future of Life Institute sponsored the Asilomar Conference on Beneficial AI, where more than 100 thought leaders formulated principles for beneficial AI, including "Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards". In 2018, the DeepMind Safety team outlined AI safety problems in specification, robustness, and assurance. The following year, researchers organized a workshop at ICLR that focused on these problem areas. In 2021, Unsolved Problems in ML Safety was published, outlining research directions in robustness, monitoring, alignment, and systemic safety. In 2023, Rishi Sunak said he wanted the United Kingdom to be the "geographical home of global AI safety regulation" and to host the first global summit on AI safety. The AI safety summit took place in November 2023 and focused on the risks of misuse and loss of control associated with frontier AI models. During the summit, the intention to create the International Scientific Report on the Safety of Advanced AI was announced. In 2024, the US and UK forged a new partnership on the science of AI safety. The MoU was signed on 1 April 2024 by US commerce secretary Gina Raimondo and UK technology secretary Michelle Donelan to jointly develop advanced AI model testing, following commitments announced at the AI Safety Summit in Bletchley Park in November. Research focus AI safety research areas include robustness, monitoring, and alignment. Robustness Adversarial robustness AI systems are often vulnerable to adversarial examples, or "inputs to machine learning (ML) models that an attacker has intentionally designed to cause the model to make a mistake". For example, in 2013, Szegedy et al. discovered that adding specific imperceptible perturbations to an image could cause it to be misclassified with high confidence. This continues to be an issue with neural networks, though in recent work the perturbations are generally large enough to be perceptible.
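To make the idea concrete, here is a minimal sketch of the fast gradient sign method, one standard way such perturbations are generated in the literature (the article does not prescribe this particular attack; model, image, and label are hypothetical placeholders):

```python
# Minimal sketch of the fast gradient sign method (FGSM) for crafting an
# adversarial example. `model`, `image`, and `label` are hypothetical
# placeholders, not objects defined by this article.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Nudge each pixel by +/-epsilon in the direction that raises the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```

For a typical classifier, even a small epsilon can flip the predicted class while leaving the image visually almost unchanged.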
[Figure: a correctly classified sample (left), the perturbation magnified 10x (center), and the resulting adversarial example (right); each perturbed image is confidently misclassified as an ostrich.] Adversarial robustness is often associated with security. Researchers demonstrated that an audio signal could be imperceptibly modified so that speech-to-text systems transcribe it to any message the attacker chooses. Network intrusion and malware detection systems must also be adversarially robust, since attackers may design their attacks to fool detectors. Models that represent objectives (reward models) must also be adversarially robust. For example, a reward model might estimate how helpful a text response is, and a language model might be trained to maximize this score. Researchers have shown that if a language model is trained for long enough, it will leverage the vulnerabilities of the reward model to achieve a better score and perform worse on the intended task. This issue can be addressed by improving the adversarial robustness of the reward model. More generally, any AI system used to evaluate another AI system must be adversarially robust. This could include monitoring tools, since they could also potentially be tampered with to produce a higher reward. Monitoring Estimating uncertainty It is often important for human operators to gauge how much they should trust an AI system, especially in high-stakes settings such as medical diagnosis. ML models generally express confidence by outputting probabilities; however, they are often overconfident, especially in situations that differ from those that they were trained to handle. Calibration research aims to make model probabilities correspond as closely as possible to the true proportion of cases in which the model is correct. Similarly, anomaly detection or out-of-distribution (OOD) detection aims to identify when an AI system is in an unusual situation. For example, if a sensor on an autonomous vehicle is malfunctioning, or the vehicle encounters challenging terrain, it should alert the driver to take control or pull over. Anomaly detection has been implemented by simply training a classifier to distinguish anomalous and non-anomalous inputs, though a range of additional techniques are in use. Detecting malicious use Scholars and government agencies have expressed concerns that AI systems could be used to help malicious actors build weapons, manipulate public opinion, or automate cyberattacks. These worries are a practical concern for companies like OpenAI, which host powerful AI tools online. In order to prevent misuse, OpenAI has built detection systems that flag or restrict users based on their activity. Transparency Neural networks have often been described as black boxes, meaning that it is difficult to understand why they make the decisions they do, as a result of the massive number of computations they perform. This makes it challenging to anticipate failures. In 2018, a self-driving car killed a pedestrian after failing to identify them; due to the black-box nature of the AI software, the reason for the failure remains unclear. It also raises debates in healthcare over whether statistically efficient but opaque models should be used. One critical benefit of transparency is explainability. It is sometimes a legal requirement to provide an explanation for why a decision was made in order to ensure fairness, for example when automatically filtering job applications or assigning credit scores.
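As a concrete illustration of one simple transparency tool, the sketch below computes an input-gradient saliency map, a generic technique from the interpretability literature rather than one the article attributes to a specific system (`model` and `image` are again hypothetical placeholders):

```python
# Minimal sketch of an input-gradient saliency map: pixels with large
# gradient magnitude are the ones the model's score depends on most.
# `model` and `image` are hypothetical placeholders.
import torch

def saliency_map(model, image, target_class):
    image = image.clone().detach().requires_grad_(True)
    score = model(image)[0, target_class]  # scalar score for the chosen class
    score.backward()
    return image.grad.abs().max(dim=1).values  # collapse color channels
```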
Another benefit is to reveal the cause of failures. At the beginning of the COVID-19 pandemic in 2020, researchers used transparency tools to show that medical image classifiers were 'paying attention' to irrelevant hospital labels. Transparency techniques can also be used to correct errors. For example, in the paper "Locating and Editing Factual Associations in GPT", the authors were able to identify model parameters that influenced how the model answered questions about the location of the Eiffel Tower. They were then able to 'edit' this knowledge to make the model respond to questions as if it believed the tower was in Rome instead of France. Though in this case the authors induced an error, such methods could potentially be used to fix errors efficiently. Model editing techniques also exist in computer vision. Finally, some have argued that the opaqueness of AI systems is a significant source of risk and that a better understanding of how they function could prevent high-consequence failures in the future. "Inner" interpretability research aims to make ML models less opaque. One goal of this research is to identify what the internal neuron activations represent. For example, researchers identified a neuron in the CLIP artificial intelligence system that responds to images of people in Spider-Man costumes, sketches of Spider-Man, and the word 'spider'. It also involves explaining connections between these neurons or 'circuits'. For example, researchers have identified pattern-matching mechanisms in transformer attention that may play a role in how language models learn from their context. "Inner interpretability" has been compared to neuroscience. In both cases, the goal is to understand what is going on in an intricate system, though ML researchers have the benefit of being able to take perfect measurements and perform arbitrary ablations. Detecting trojans Machine learning models can potentially contain "trojans" or "backdoors": vulnerabilities that malicious actors deliberately build into an AI system. For example, a trojaned facial recognition system could grant access when a specific piece of jewelry is in view, or a trojaned autonomous vehicle may function normally until a specific trigger is visible. Note that an adversary must have access to the system's training data in order to plant a trojan. This might not be difficult to do with some large models like CLIP or GPT-3, as they are trained on publicly available internet data. Researchers were able to plant a trojan in an image classifier by changing just 300 out of 3 million of the training images. In addition to posing a security risk, researchers have argued that trojans provide a concrete setting for testing and developing better monitoring tools. A 2024 research paper by Anthropic showed that large language models could be trained with persistent backdoors. These "sleeper agent" models could be programmed to generate malicious outputs (such as vulnerable code) after a specific date, while behaving normally beforehand. Standard AI safety measures, such as supervised fine-tuning, reinforcement learning and adversarial training, failed to remove these backdoors. Alignment Systemic safety and sociotechnical factors It is common for AI risks (and technological risks more generally) to be categorized as misuse or accidents. Some scholars have suggested that this framework falls short. For example, the Cuban Missile Crisis was not clearly an accident or a misuse of technology.
Policy analysts Zwetsloot and Dafoe wrote, "The misuse and accident perspectives tend to focus only on the last step in a causal chain leading up to a harm: that is, the person who misused the technology, or the system that behaved in unintended ways… Often, though, the relevant causal chain is much longer." Risks often arise from 'structural' or 'systemic' factors such as competitive pressures, diffusion of harms, fast-paced development, high levels of uncertainty, and inadequate safety culture. In the broader context of safety engineering, structural factors like 'organizational safety culture' play a central role in the popular STAMP risk analysis framework. Inspired by the structural perspective, some researchers have emphasized the importance of using machine learning to improve sociotechnical safety factors, for example, using ML for cyber defense, improving institutional decision-making, and facilitating cooperation. Others have emphasized the importance of involving both AI practitioners and domain experts in the design process to address structural vulnerabilities. Cyber defense Some scholars are concerned that AI will exacerbate the already imbalanced game between cyber attackers and cyber defenders. This would increase 'first strike' incentives and could lead to more aggressive and destabilizing attacks. In order to mitigate this risk, some have advocated for an increased emphasis on cyber defense. In addition, software security is essential for preventing powerful AI models from being stolen and misused. Recent studies have shown that AI can significantly enhance both technical and managerial cybersecurity tasks by automating routine tasks and improving overall efficiency. Improving institutional decision-making The advancement of AI in economic and military domains could precipitate unprecedented political challenges. Some scholars have compared AI race dynamics to the cold war, where the careful judgment of a small number of decision-makers often spelled the difference between stability and catastrophe. AI researchers have argued that AI technologies could also be used to assist decision-making. For example, researchers are beginning to develop AI forecasting and advisory systems. Facilitating cooperation Many of the largest global threats (nuclear war, climate change, etc.) have been framed as cooperation challenges. As in the well-known prisoner's dilemma scenario, some dynamics may lead to poor results for all players, even when they are optimally acting in their self-interest. For example, no single actor has strong incentives to address climate change even though the consequences may be significant if no one intervenes. A salient AI cooperation challenge is avoiding a 'race to the bottom'. In this scenario, countries or companies race to build more capable AI systems and neglect safety, leading to a catastrophic accident that harms everyone involved. Concerns about scenarios like these have inspired both political and technical efforts to facilitate cooperation between humans, and potentially also between AI systems. Most AI research focuses on designing individual agents to serve isolated functions (often in 'single-player' games). Scholars have suggested that as AI systems become more autonomous, it may become essential to study and shape the way they interact. Challenges of large language models In recent years, the development of large language models (LLMs) has raised unique concerns within the field of AI safety. Researchers Bender and Gebru et al. 
have highlighted the environmental and financial costs associated with training these models, emphasizing that the energy consumption and carbon footprint of training procedures like those for Transformer models can be substantial. Moreover, these models often rely on massive, uncurated Internet-based datasets, which can encode hegemonic and biased viewpoints, further marginalizing underrepresented groups. The large-scale training data, while vast, does not guarantee diversity and often reflects the worldviews of privileged demographics, leading to models that perpetuate existing biases and stereotypes. This situation is exacerbated by the tendency of these models to produce seemingly coherent and fluent text, which can mislead users into attributing meaning and intent where none exists, a phenomenon described as 'stochastic parrots'. These models, therefore, pose risks of amplifying societal biases, spreading misinformation, and being used for malicious purposes, such as generating extremist propaganda or deepfakes. To address these challenges, researchers advocate for more careful planning in dataset creation and system development, emphasizing the need for research projects that contribute positively towards an equitable technological ecosystem. In governance AI governance is broadly concerned with creating norms, standards, and regulations to guide the use and development of AI systems. Research AI safety governance research ranges from foundational investigations into the potential impacts of AI to specific applications. On the foundational side, researchers have argued that AI could transform many aspects of society due to its broad applicability, comparing it to electricity and the steam engine. Some work has focused on anticipating specific risks that may arise from these impacts – for example, risks from mass unemployment, weaponization, disinformation, surveillance, and the concentration of power. Other work explores underlying risk factors such as the difficulty of monitoring the rapidly evolving AI industry, the availability of AI models, and 'race to the bottom' dynamics. Allan Dafoe, the head of long-term governance and strategy at DeepMind, has emphasized the dangers of racing and the potential need for cooperation: "it may be close to a necessary and sufficient condition for AI safety and alignment that there be a high degree of caution prior to deploying advanced powerful systems; however, if actors are competing in a domain with large returns to first-movers or relative advantage, then they will be pressured to choose a sub-optimal level of caution". A research stream focuses on developing approaches, frameworks, and methods to assess AI accountability, guiding and promoting audits of AI-based systems. Efforts to enhance AI safety include frameworks designed to align AI outputs with ethical guidelines and reduce risks like misuse and data leakage. Tools such as Nvidia's Guardrails, Llama Guard, and Preamble's customizable guardrails mitigate vulnerabilities like prompt injection and ensure outputs adhere to predefined principles. These frameworks are often integrated into AI systems to improve safety and reliability. Philosophical perspectives The field of AI safety is deeply intertwined with philosophical considerations, particularly in the realm of ethics. Deontological ethics, which emphasizes adherence to moral rules, has been proposed as a framework for aligning AI systems with human values.
By embedding deontological principles, AI systems can be guided to avoid actions that cause harm, ensuring their operations remain within ethical boundaries. Scaling local measures to global solutions In addressing the AI safety problem, it is important to stress the distinction between local and global solutions. Local solutions focus on individual AI systems, ensuring they are safe and beneficial, while global solutions seek to implement safety measures for all AI systems across various jurisdictions. Some researchers argue for the necessity of scaling local safety measures to a global level, proposing a classification for these global solutions. This approach underscores the importance of collaborative efforts in the international governance of AI safety, emphasizing that no single entity can effectively manage the risks associated with AI technologies. This perspective aligns with ongoing efforts in international policy-making and regulatory frameworks, which aim to address the complex challenges posed by advanced AI systems worldwide. Government action Some experts have argued that it is too early to regulate AI, expressing concerns that regulations will hamper innovation and that it would be foolish to "rush to regulate in ignorance". Others, such as business magnate Elon Musk, call for pre-emptive action to mitigate catastrophic risks. Outside of formal legislation, government agencies have put forward ethical and safety recommendations. In March 2021, the US National Security Commission on Artificial Intelligence reported that advances in AI may make it increasingly important to "assure that systems are aligned with goals and values, including safety, robustness and trustworthiness". Subsequently, the National Institute of Standards and Technology drafted a framework for managing AI risk, which advises that when "catastrophic risks are present – development and deployment should cease in a safe manner until risks can be sufficiently managed". In September 2021, the People's Republic of China published ethical guidelines for the use of AI in China, emphasizing that AI decisions should remain under human control and calling for accountability mechanisms. In the same month, the United Kingdom published its 10-year National AI Strategy, which states that the British government "takes the long-term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for ... the world, seriously". The strategy describes actions to assess long-term AI risks, including catastrophic risks. The British government held the first major global summit on AI safety, on 1 and 2 November 2023, described as "an opportunity for policymakers and world leaders to consider the immediate and future risks of AI and how these risks can be mitigated via a globally coordinated approach". Government organizations, particularly in the United States, have also encouraged the development of technical AI safety research. The Intelligence Advanced Research Projects Activity initiated the TrojAI project to identify and protect against trojan attacks on AI systems. DARPA engages in research on explainable artificial intelligence and improving robustness against adversarial attacks. The National Science Foundation also supports the Center for Trustworthy Machine Learning and is providing millions of dollars in funding for empirical AI safety research.
In 2024, the United Nations General Assembly adopted the first global resolution on the promotion of “safe, secure and trustworthy” AI systems, emphasizing the respect, protection and promotion of human rights in the design, development, deployment and use of AI. In May 2024, the Department for Science, Innovation and Technology (DSIT) announced £8.5 million in funding for AI safety research under the Systemic AI Safety Fast Grants Programme, led by Christopher Summerfield and Shahar Avin at the AI Safety Institute, in partnership with UK Research and Innovation. Technology Secretary Michelle Donelan announced the plan at the AI Seoul Summit, stating that the goal was to make AI safe across society and that promising proposals could receive further funding. The UK also signed an agreement with 10 other countries and the EU to form an international network of AI safety institutes to promote collaboration and share information and resources. Additionally, the UK AI Safety Institute planned to open an office in San Francisco. Corporate self-regulation AI labs and companies generally abide by safety practices and norms that fall outside of formal legislation. One aim of governance researchers is to shape these norms. Examples of safety recommendations found in the literature include performing third-party auditing, offering bounties for finding failures, sharing AI incidents (an AI incident database was created for this purpose), following guidelines to determine whether to publish research or models, and improving information and cyber security in AI labs. Companies have also made commitments. Cohere, OpenAI, and AI21 proposed and agreed on "best practices for deploying language models", focusing on mitigating misuse. To avoid contributing to racing dynamics, OpenAI has also stated in its charter that "if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project". In addition, industry leaders such as DeepMind CEO Demis Hassabis and Facebook AI director Yann LeCun have signed open letters such as the Asilomar Principles and the Autonomous Weapons Open Letter. See also AI alignment Artificial intelligence and elections Artificial intelligence detection software References External links Unsolved Problems in ML Safety On the Opportunities and Risks of Foundation Models An Overview of Catastrophic AI Risks AI Accidents: An Emerging Threat Engineering a Safer World Artificial intelligence Existential risk from artificial general intelligence Cybernetics
AI safety
[ "Technology", "Engineering" ]
5,050
[ "Safety engineering", "AI safety", "Existential risk from artificial general intelligence" ]
72,361,522
https://en.wikipedia.org/wiki/Amanda%20Montejano
Amanda Montejano Cantoral is a Mexican mathematician specializing in combinatorics, and particularly in the application of graph coloring to geometric graphs. She is a professor at the Juriquilla campus of the National Autonomous University of Mexico, in the Multidisciplinary Unit of Teaching and Research of the Faculty of Sciences. Education and career Montejano graduated from the National Autonomous University of Mexico in 2004, and earned a doctorate in applied mathematics at the Polytechnic University of Catalonia in Spain in 2009. Her doctoral dissertation, Colored combinatorial structures: homomorphisms and counting, was supervised by Oriol Serra Albó. She was a postdoctoral researcher at the National Autonomous University of Mexico, in the Center for Applied Physics and Advanced Technology, before taking her present position in the Multidisciplinary Unit of Teaching and Research. Recognition Montejano is a member of the Mexican Academy of Sciences. References External links Home page Year of birth missing (living people) Living people Mexican mathematicians Mexican women mathematicians Graph theorists National Autonomous University of Mexico alumni Academic staff of the National Autonomous University of Mexico Members of the Mexican Academy of Sciences
Amanda Montejano
[ "Mathematics" ]
226
[ "Mathematical relations", "Graph theory", "Graph theorists" ]
72,366,513
https://en.wikipedia.org/wiki/Unconventional%20%28oil%20and%20gas%29%20reservoir
Unconventional (oil and gas) reservoirs, or unconventional resources (resource plays), are accumulations where oil and gas phases are tightly bound to the rock fabric by strong capillary forces, requiring specialized measures for evaluation and extraction. Conventional reservoir Oil and gas are generated naturally at depths of around 4 or 5 km below Earth’s surface. Being lighter than the water saturating rocks below the water table, the oil and gas are driven by buoyancy up through aquifer pathways towards Earth's surface over time. Some of the oil and gas percolate all the way to the surface as natural seepages, either on land or on the sea floor. The rest remains trapped underground by geological barriers in a variety of trap geometries. In this way, underground pockets of oil and gas accumulate by displacing water in porous rock. If the pockets are permeable, they are referred to as conventional reservoirs. Wells are drilled into these reservoirs to create a path for oil and gas to reach the surface. When pressure differences are relatively high, oil and gas rise to the well bore naturally through buoyancy. Where the pressures are low, flow can be assisted with pumps (e.g. nodding donkeys). History In the early days of the oil industry, there was no need for stimulation to improve recovery efficiency, because supply vastly outstripped demand and leaving "difficult" oil in the ground was economically expedient. Two world wars, followed by huge economic growth, resulted in surging demand for cheap portable energy, while the availability of new conventional oil and gas resources declined. The industry initially sought to enhance recovery of trapped oil and gas, using techniques like restricted, or low-volume, hydraulic fracturing to stimulate the reservoir further, thereby reducing the volume of oil and gas left in the ground to an economic minimum. By the turn of the millennium, a new kind of energy resource was required, particularly by the USA, which was driven to achieve energy independence. The USA turned to unconventional reservoirs to achieve its goals; these had been known about for decades but had previously been too costly to be economically attractive. Today, unconventional reservoirs include basin-centered gas, shale gas, coalbed methane (CBM), gas hydrates, tar sands, light tight oil and oil shale, mostly from North America. Essential differences between conventional and unconventional reservoirs The distinction between conventional and unconventional resources reflects differences in the qualities of the reservoir and/or the physical properties of the oil and gas (i.e. permeability and/or viscosity). These characteristics significantly impact predictability (the risk to find, appraise and develop) and, in turn, the methods of extraction from those reservoirs, such as fracking. Conventional oil and gas accumulations are concentrated by buoyancy-driven aquifer pathways into discrete geological traps, which are detectable from the surface. These traps constitute relatively small but high resource density fields. Most conventional oil or gas fields initially flow naturally by buoyancy alone into the well bore, with their limits defined by fluid mechanics measurable from the well bore (e.g. fluid pressure, OWC/GWC etc.). In general, the technical and commercial risk associated with discrete conventional reservoirs can be reduced using relatively inexpensive remote techniques such as reflection seismology, and such reservoirs can be extracted with relatively few appraisal and development wells.
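The strength of the capillary trapping that sets unconventional reservoirs apart (the contrast drawn next) can be illustrated with the Young-Laplace relation for capillary entry pressure; the interfacial tension, contact angle, and pore-throat radii in this sketch are assumed round numbers for illustration, not values from this article:

```python
# Illustrative sketch: capillary entry pressure from the Young-Laplace
# relation, P_c = 2 * gamma * cos(theta) / r. The interfacial tension,
# contact angle, and pore-throat radii below are assumed round numbers,
# not values taken from this article.
import math

GAMMA = 0.03        # oil-water interfacial tension, N/m (assumed)
THETA = 0.0         # contact angle, radians (water-wet rock, assumed)

def entry_pressure(pore_throat_radius_m):
    """Pressure (Pa) needed to push oil through a pore throat of given radius."""
    return 2 * GAMMA * math.cos(THETA) / pore_throat_radius_m

# Conventional sandstone pore throat (~10 micrometres) vs shale (~10 nanometres):
for r in (10e-6, 10e-9):
    print(f"r = {r:.0e} m -> entry pressure ~ {entry_pressure(r) / 1e3:.0f} kPa")
```

Because entry pressure scales inversely with pore-throat radius, tight rocks with nanometre-scale pores hold hydrocarbons roughly a thousand times more strongly than conventional sandstones, which is why buoyancy alone cannot move them.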
Unconventional reservoirs, in contrast, are regionally dispersed over large areas with no indicative trap geometry that can be used for predictive purposes. The oil and gas in unconventional reservoirs are generally low-density resources, frequently trapped in the rock by strong capillary forces and incapable of flowing naturally through buoyancy. The limits of an unconventional field are therefore usually defined by relatively expensive well testing for delivery. Extraction from unconventional reservoirs requires changing the physical properties of the reservoir, or the flow characteristics of the fluid, using techniques such as fracking or steam injection. The technical and commercial risk associated with unconventional reservoirs is generally higher than for conventional reservoirs, owing to the lack of predictability of the trap extent and of the reservoir quality, which requires extensive well placement and testing to determine the economic reserves per well, a limit defined by well delivery. Environmental differences As with all forms of fossil fuel, there are established issues with greenhouse gas emissions through export (distribution) as well as consumption (combustion), which are identical whether the oil or gas is derived from conventional or unconventional reservoirs. Their carbon footprints, however, are radically different: conventional reservoirs use the natural energy in the environment to flow oil and gas to the surface unaided; unconventional reservoirs require putting energy into the ground for extraction, either as heat (e.g. tar sands and oil shales) or as pressure (e.g. shale gas and CBM). The artificial transfer of heat and pressure requires the use of large volumes of fresh water, creating supply and disposal issues. The distribution of the resource over large areas creates land use issues, with implications for local communities on infrastructure, freight traffic and local economies. Impact on the environment is an unavoidable consequence of all human activity, but the difference between the impact of conventional reservoirs and that of unconventional reservoirs is significant, measurable and predictable. See also Source rock Petroleum trap Fracking in the United States Environmental impact of fracking Coalbed methane Methane clathrate (gas hydrate) Shale gas Synthetic natural gas, such as oil shale gas Tight gas Oil sand Tight oil Extreme energy Renewable energy Future energy development Hubbert peak Energy development Alternative fuels World energy resources and consumption Oil megaprojects References and notes Notes Abbreviated definitions Petroleum industry Unconventional oil Unconventional gas Peak oil Petroleum production Petroleum geology Reservoir rock formations
Unconventional (oil and gas) reservoir
[ "Chemistry" ]
1,138
[ "Unconventional oil", "Petroleum industry", "Petroleum", "Chemical process engineering", "Petroleum geology" ]
72,368,932
https://en.wikipedia.org/wiki/Halovir
Halovir refers to a multi-analogue compound belonging to a group of oligopeptides designated as lipopeptaibols (chemical features include a lipophilic acyl chain at the N-terminus, abundant α-aminoisobutyric acid content, and a 1,2-amino alcohol located at the C-terminus) which have membrane-modifying capacity and are fungal in origin. These peptides display interesting microheterogeneity; slight variation in the encoded amino acids gives rise to a mixture of closely related analogues, which have been shown to have antibacterial/antiviral properties. Background Nonribosomal peptides compose a significant group of secondary metabolites in bacterial/fungal organisms (though Drosophila melanogaster and Caenorhabditis elegans both exhibit products of nonribosomal peptide synthetases); having been observed functioning as self-defense substances/iron-chelating siderophores, they serve as coping mechanisms for environmental stress, perform as virulence factors/toxins promoting pathogenesis, and act in signalling (enabling communications within and between species). In light of these functionalities, many nonribosomal peptides have been utilized in the development of medical drugs and biocontrol agents (examples include β-lactams, daptomycin, echinocandins, emodepside, bleomycin, cyclosporine, and bialaphos). Peptaibols are a family of linear, amphipathic polypeptides (typically consisting of 4-21 amino acid residues) that are generated as a result of the assembly of a variety of aminoacyl, ketoacyl or hydroxyacyl monomers by fungal multimodular megaenzymes denoted as nonribosomal-peptide synthetases (NRPSs). Typically, NRPSs are composed of three highly conserved core domains: an adenylation (A) domain, which recognizes, activates and loads monomers onto the NRPS; a thiolation (T) domain (also denoted the peptidyl carrier protein domain), which transports covalently linked monomers/peptidyl intermediates between nearby NRPS domains; and a condensation (C) domain, which catalyzes sequential condensation of monomers within the nascent peptide chain. In addition, a chain-terminating domain [a thioesterase (TE) domain, a terminal C (CT) domain or a reductase (R) domain] is commonly observed at the end of an NRPS in order to release full-length peptide chains in linear or cyclic forms. Furthermore, tailoring domains [epimerase, N-methyltransferase (M), oxidase (Ox), ketoacyl reductase (KR) and cyclase (Cy)] are often seen, allowing further modification of monomers/polypeptide intermediates. Notable characteristics of peptaibols include: C-terminal alcohol residues (phenylalaninol, leucinol, isoleucinol, valinol), an N-acyl terminus (usually acetyl), and high levels of α,α-dialkylated non-proteinogenic amino acids [α-aminoisobutyric acid (Aib), isovaline (Iva), hydroxyproline (Hyp)]. In most cases, peptaibols form α-helix and β-bend patterns in their 3D structures (α-aminoisobutyric acid is a turn/helix-forming agent). The α,α-dialkylated amino acid residues in peptaibols create substantial conformational constraints in the peptide backbone, resulting in the formation of right-handed α-helical structures. Membrane-modification abilities can be attributed to the formation of transmembrane voltage-dependent channels; this occurs as the peptide takes on an α-helical conformation upon contact with lipid bilayers, drilling through and forming ion channels with electrophysiological configurations similar to those of ion-channel proteins.
The principal function of the peptides is to rupture membranes, in turn triggering cytolysis via loss of osmotic balance. Structurally speaking, lipopeptaibols are peptaibols with a fatty acyl moiety linked to the N-terminal amino acid (hence the name), and have been isolated from a number of soil fungi. Their primary structures all have the L-(S-) configuration at the 2-(α-)carbon. They overwhelmingly display microheterogeneity, being very structurally similar, with a limited pool of conserved variation in natural samples. Structure Halovir A (C45H83N7O9) contains L-leucine, L-valine, and L-glutamine; Halovir B (C43H79N7O9) contains L-alanine, L-leucine, and L-glutamine; Halovir C (C45H83N7O8) contains L-leucine, L-valine, and L-glutamine; Halovir D (C43H79N7O9); Halovir E (C43H79N7O8); Halovir I (C42H77N7O8); Halovir J (C44H79N7O9); Halovir K (C43H78N7O9). Medical applications The antibiotic capabilities of these compounds can be attributed to their membrane-insertion and pore-forming functionalities, and they typically exhibit antimicrobial activity against Gram-positive bacteria and fungi. Halovirs A-E (isolated from Scytalidium sp.) have displayed potent antiviral activity against HSV-1, and have been observed inhibiting the replication of HSV-1 and HSV-2 in standard plaque reduction assays without cytotoxicity (at concentrations upwards of 0.85 μM). Mechanistic studies suggest that halovirs kill the virus on direct contact and in a time-dependent manner, before it can infect host cells. Halovirs I and J were analyzed for antibacterial and cytotoxic activities, and displayed significant growth inhibition against two Gram-positive bacteria (Staphylococcus aureus and Enterococcus faecium), but not against Gram-negative Escherichia coli. Notably, these two halovirs were found to be effective against methicillin-resistant S. aureus (MRSA) and vancomycin-resistant E. faecium, indicating that the activity against these resistant strains is persistent. Additionally, strong cytotoxic activity was observed against a panel of cancer cell lines, including human lung carcinoma A549, human breast carcinoma MCF-7, and human cervical carcinoma HeLa cells (cytotoxicity was not specific to cancer cells in the referenced study). References Peptides
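As a side note on the molecular formulas listed above, the approximate molar mass of any of the halovirs follows from standard atomic masses; the sketch below does this for Halovir A (only the formula C45H83N7O9 comes from the article, the rest is a generic calculation):

```python
# Approximate average molar mass of Halovir A (C45H83N7O9) from standard
# IUPAC atomic masses; only the formula comes from the article.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}  # g/mol

def molar_mass(formula_counts):
    return sum(ATOMIC_MASS[el] * n for el, n in formula_counts.items())

halovir_a = {"C": 45, "H": 83, "N": 7, "O": 9}
print(f"Halovir A ~ {molar_mass(halovir_a):.1f} g/mol")  # roughly 866 g/mol
```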
Halovir
[ "Chemistry" ]
1,475
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
72,369,345
https://en.wikipedia.org/wiki/Sensory%20trap%20hypothesis
The sensory trap hypothesis describes an evolutionary idea that revolves around mating behavior and female mate choice. It is a model of the evolution of female preference and male sexual traits through what is known as sensory exploitation. Sensory exploitation, or a sensory trap, is an event that occurs in nature where male members of a species perform behaviors or display visual traits that resemble a non-sexual stimulus to which females are responsive. This tricks females into engaging with the males, thus creating more mating opportunities for males. What makes it a sensory trap is that these female responses evolved in a non-sexual context, and the male-produced stimulus exploits the female response, which would not otherwise occur without the mimicked stimulus. Limitations The term "trap" indicates that these sensory trap events may be detrimental to female mating success, but they may not always be costly. In fact, there are circumstances where not responding to the stimulus itself can be costly, as females may ignore the actual stimulus in the correct context and lose the fitness benefits that come with it. There are also circumstances where these traps can actually be beneficial in the context of mate choice, where females responding to the trap end up gaining high-quality males to mate with. While these sensory traps can be quite successful when they appropriately mimic the non-sexual stimulus, they often become exaggerated as a result of excessive selection, to the point where they are no longer useful. This is due to the trait or behavior becoming imperceptible or no longer resembling the original stimulus. Sensory traps in nature Photinus, Photuris, and Pyractomena firefly males use patterned light flashes that mimic the females' prey species while flying above them; this evokes a female response, including the females' own pattern of flashing lights, which the males use to locate them for mating. Neumania papillator males engage in leg trembling (known as male courtship trembling) that hunting females mistake for prey, leading the females to engage with the males and increasing the likelihood of mating. Metaplastes ornatus bush cricket males have a genital plate that, when inserted into the female genital chamber, potentially mimics a stimulus created by the egg during female ovulation, which then leads to a fertilization response and increased mating success. Grapholita molesta oriental fruit moth males release an odor that contains the chemical ethyl trans-cinnamate, a compound found in the fruit juices these moths feed on, leading to a female response and interaction with the male. Philanthus triangulum European beewolf males release pheromones that include the chemical (Z)-11-eicosen-1-ol, a chemical also found on the cuticle of the females' sole prey, honeybees, which attracts the females to these males. Pisaura mirabilis nursery web spider males use nuptial gifts in the form of prey covered in silk that resembles the female egg sac, which is thought to exploit either the female's maternal instinct or her foraging instinct, and can lead to mating with the males who presented the gift. Calopterygidae damselfly males exploit a sensory bias in females of different damselfly species by causing them to eject previously stored sperm from their spermathecae through stimulation of the vaginal sensilla using the aedeagus, which is not capable of invoking the same response in females of their own species.
Hirundo rustica gutturalis barn swallow males attract females by mimicking the food-begging calls of nestlings. Leptuca musica/beebei fiddler crab males build sand hoods or mud pillars next to the entrances of their burrows. Females approach the structures built by males for safety, regardless of whether they are searching for mates or not. The structures therefore act as a successful mimic of objects that crabs normally approach for landmark orientation and to escape from predators. Furthermore, comparative research between species of fiddler crabs has shown that female preference for the structures is not species-specific, as is usually the case for female preferences for sexual signals. This indicates that the behavior evolved for predator avoidance and landmark orientation, and not in the context of mate choice. Regardless, females benefit from the presence of such structures built by males, as the structures help females locate males more efficiently and reduce predation risk. Structure building is suggested to be a condition-dependent trait in these crabs. Females that mate with structure-building males may therefore have access to higher-quality mates. Petromyzon marinus sea lamprey females use chemical cues released by larvae to locate spawning grounds. The males release a sex pheromone that contains the same compound as the larval cue to attract the females. The female preference for the compound appears to evolutionarily precede its exploitation by males for female attraction. It is imperative for females to distinguish between the larval cue and the male pheromone because females are time-constrained to mate. The females use a second compound within the pheromone to distinguish between the larval cue and the male pheromone, so as not to mistakenly approach spawning grounds while trying to locate males. This is an example of how females can distinguish between the original and the mimicked signal and benefit from a sensory trap. In some species of the Goodeidae fish, the males have a terminal yellow band at the end of their tail, which is visually similar to damselfly larvae. The females (and sometimes males) are attracted to this band because it resembles their prey. In the species where the band is more conspicuous, the feeding response of females towards the band is reduced; however, the sexual attractiveness of the band to females does not change. This suggests that selection might have enabled the females to distinguish between the feeding and sexual responses without becoming resistant to the mimetic male signal, a resistance that might prevent females from recognizing their prey. In Iberolacerta monticola Iberian rock lizards, the femoral secretions of the males may act as a sensory trap for females, as they contain provitamin D3, which is also found in their prey. However, only males that are of higher quality and better fed can allocate more of this compound to their secretions and thus better attract females. Therefore, this signal can act as an honest indicator of male quality. Sensory traps also sometimes play a role between species, in predation and parasitism. For example, in one firefly species, the females attract and prey upon male fireflies of other species by mimicking the courtship signal of those species' females. Similarly, some spiders mimic the sex pheromones or courtship displays of other animal species to attract prey. References Evolutionary biology
Sensory trap hypothesis
[ "Biology" ]
1,360
[ "Evolutionary biology" ]
72,369,346
https://en.wikipedia.org/wiki/Parallel%20speciation
In biology, parallel speciation is a type of speciation in which reproductively isolating traits evolve repeatedly, via the same mechanisms, in separate yet closely related lineages inhabiting different environments. This leads to a circumstance where independently evolved lineages have developed reproductive isolation from their ancestral lineage, but not from other independent lineages that inhabit similar environments. In order for parallel speciation to be confirmed, a set of three requirements has been established that must be met: there must be phylogenetic independence between the separate populations inhabiting similar environments, to ensure that the traits responsible for reproductive isolation evolved separately; there must be reproductive isolation not only between the ancestral population and the descendent population, but also between descendent populations that inhabit dissimilar environments; and descendent populations that inhabit similar environments must not be reproductively isolated from one another. To determine whether natural selection specifically is the cause of parallel speciation, a fourth requirement has been established: identifying and testing an adaptive mechanism, which eliminates the possibility of a genetic factor such as polyploidy being the responsible agent. Parallel speciation vs. parallel evolution Parallel evolution is a common phenomenon that occurs when separate yet closely related lineages evolve the same, non-ancestral trait as a result of inhabiting the same environment and thus facing the same selection pressures. An example of parallel evolution is the independent development of small body sizes in two or more descendent populations, in a new but similar environment, that diverged from the same ancestral population. Parallel speciation differs from this slightly: it is a form of parallel evolution, but the traits evolving independently in the different lineages are those responsible for reproductive isolation. Using the previous example of independently evolved small body sizes, it changes from parallel evolution to parallel speciation when the descendent populations that have both evolved small body sizes due to their similar environments become reproductively isolated from their ancestral population, but not from one another. Problems in detecting parallel speciation The required analysis of several variables, including genetic markers, morphology, and the ecology of the independent populations, makes it hard to attribute speciation events specifically to parallel speciation. Failing to address all three of these contributing factors could lead to an event being incorrectly attributed to some other form of speciation when, in fact, parallel speciation was the occurring process. Parallel speciation can also be difficult to assess when populations of the same species living in different areas maintain a small amount of gene flow despite the absence of physical barriers between them. Without such physical barriers, gene flow cannot be considered insignificant, which can further conceal the evidence of parallel speciation taking place.
Reported cases Identifying and demonstrating cases of parallel speciation is not an easy task, because in-depth analyses must be performed across multiple aspects, including phylogenetics, ecology, phenotypes and, in particular, the recurrent formation of reproductive isolation between species. According to previous studies, there are four distinct criteria for a convincing example of parallel speciation: The populations in similar environments must be phylogenetically distinct, and the derived populations must have arisen multiple times independently rather than through gene flow caused by secondary contact of allopatric populations. The descendent populations must be reproductively isolated from ancestral populations. There must be no reproductive isolation among descendent populations. The evolution of shared characteristics in descendent populations must have occurred through natural selection. Although there are multiple well-characterized cases of parallel speciation in animals (for example, sticklebacks, stick insects, finches, marine snails, and cichlid fishes), only a couple of cases have been reported in plants. Moreover, the mechanisms and adaptive processes involved in parallel speciation remain largely unknown. Parallel speciation in plants Parallel speciation has been documented in animals multiple times, but cases in plants are few, which may suggest that plants are not prone to parallel speciation; it may also simply reflect a shortage of empirical studies based on rigorous evaluation and testing of the kind carried out in animals. A well-characterized case of parallel speciation in wild rice has been demonstrated in which all four criteria of parallel speciation were satisfied. In this case, cutting-edge methods and tools, such as whole-genome sequencing and Sanger sequencing of population samples, were used. The criterion of multiple independent origins of the derived species was verified by phylogenetic analysis and approximate Bayesian computation (ABC) modelling. This case of the wild rice Oryza nivara arising from Oryza rufipogon, together with the other reported cases in plants, supports the view that parallel speciation is not common in plant species. Reproductive isolation is the most important criterion in parallel speciation; here it was achieved through differences in flowering time across the wild species in the habitat, and examples of such premating isolation mechanisms have been reported previously. Environmental conditions and abiotic stresses are among the many drivers of parallel speciation in plant species. It is hypothesized that Oryza nivara originated from Oryza rufipogon because of an ecological shift from a prolonged damp habitat to a seasonally dry one during the recent glaciations. The consistency of this hypothesis can be verified through the estimated time of origin of Oryza nivara and species distribution modelling, which suggest that precipitation and temperature were the main climatic drivers of the Oryza nivara distribution. Similarly, the hypothesis is supported by the fact that annual grasses have adapted to the dry climate of monsoonal Asia. Furthermore, climatic stresses also affect the ecology, morphology, and physiology of plants; for example, drought can alter flowering time and pattern in plant species. Flowering time is heavily investigated in plant species and is used as a tool to identify drought escape in plants.
Interestingly, early flowering helps plant species avoid seasonal drought and results in increased fitness in shortened growing seasons. Thus, flowering time is considered a “magic trait” in plant species: it aids adaptation while also enabling the reproductive isolation required for parallel speciation. The almost complete isolation in flowering time, combined with the difference in mating system, forms a strong premating barrier to gene flow among the species and played a pivotal role in the origin of Oryza nivara.
Parallel speciation and natural selection
Natural selection plays a pivotal role in almost all theories of speciation. Selection is one of the driving forces of genetic divergence among allopatric populations, which gives rise to reproductive isolation as an incidental by-product. Laboratory-based experiments support this argument, but because of inadequate evidence from nature it is unclear how natural selection and the environment contribute to the origin of reproductive isolation. Tests of the role of natural selection in parallel speciation have focused on the reinforcement of premating isolation. Reinforcement, however, requires preexisting reproductive isolation in the form of decreased hybrid fitness and is normally considered a final step in the process of speciation. Instances of repeated, parallel evolution in response to environmental conditions provide evidence of evolution by natural selection. The role of natural selection in the parallel speciation of stick insect populations has been reported, and other studies have likewise indicated a role for natural selection in the process of parallel speciation. References Speciation
Parallel speciation
[ "Biology" ]
1,486
[ "Evolutionary processes", "Speciation" ]
72,369,453
https://en.wikipedia.org/wiki/Bloodline%20theory
The bloodline theory or blood lineage theory was a political theory associated with the "Loyalist Faction" (Baohuang Pai) of the Red Guards during the early phase of the Cultural Revolution in the People's Republic of China. Its opponents included the "Rebel Faction" (Zaofan Pai) of the Red Guards. According to the bloodline theory, the defining factor in a person's class standing was their family's class position. It was expressed by the bloodline couplet, "from a revolutionary father a hero, from a reactionary father a bastard." Although this position was politically discredited, it continued to have a political impact during the Cultural Revolution.
Definition
According to the bloodline theory, the defining factor in a person's class standing was their family's class position. Regardless of a person's current position, they could not be considered among the revolutionary people unless their family background was that of the poor or middle peasants, the proletariat, or soldiers; they would instead be counted among the so-called Five Black Categories. In some instances, the criteria for Red Guard membership were so exacting that, for purposes of the bloodline theory, family background would be traced back to grandparents or distant relatives. The essence of the bloodline theory was summarized by the slogan, "from a revolutionary father a hero, from a reactionary father a bastard." Other permutations of this bloodline couplet included the phrase "it is basically like this" or the phrase "it is exactly like this," suggesting that advocates of the bloodline theory may have disagreed about the weight that it should have.
Development and rejection
In the period after the success of the Chinese Communist Revolution, the newly founded People's Republic of China had to address the building of socialist governance, norms, and order. These state-building tasks involved questions of who the next leaders would be and where they would be found (i.e., among families that had already included revolutionary leaders, or among the youth). Under normal circumstances, the Chinese Communist Party (CCP)'s policy was that individuals should not be judged on family class background alone, but rather by their political performance. At the Cultural Revolution's outset, questions of legacy or succession reached their peak, with some children of senior cadres criticizing the emphasis on political performance as part of Peng Zhen's "revisionist class line." The bloodline theory began in secondary schools and spread from there to universities. In late July 1966, variations of the bloodline couplet began appearing among Red Guard groups, with several different student groups claiming to have originated it. In the view of these early Red Guards (also known as "Old Red Guards"), class origin was the most important criterion for group membership. In descending order of status, these were: (1) children of army officials, (2) children of civilian state cadres, (3) children with working-class family backgrounds, and (4) children from peasant backgrounds. Anyone not from "red origins" would be excluded, and even those with the "purest bloodline" were still viewed in a hierarchy. The bloodline couplet caused major controversy. Shortly after its appearance, arguments broke out, primarily within the ranks of Red Guard students, over how to interpret the principle. The bloodline theory was initially widespread among student activists during the Cultural Revolution, but was then strongly criticized by the Maoists. 
Significant public opposition to the bloodline theory began in late 1966. Chen Boda was the first Maoist leader to criticize the bloodline couplet, stating, "A theory of 'born-redness' has become popular lately. Those advancing this fallacy actually have attacked and marginalized the children of workers and peasants . . . . They confuse some students and encourage them to present the couplet, 'If the father is a hero, the son is also a hero.'" Jiang Qing famously inverted the bloodline theory's slogan and argued that if parents were revolutionaries then their children should follow their example, but if parents were reactionaries, then their children should rebel. In 1966, middle school student Yu Luoke wrote a popular pamphlet, On Class Origins, that played a significant role in discrediting the bloodline theory.
Subsequent interpretations
In the view of academic Alessandro Russo, the bloodline theory was a form of "biological classism" and an "ideological trick" which ultimately failed because of how widespread political participation was during the early phase of the Cultural Revolution. Russo writes that even after the theory was politically discredited, it continued to have an impact during the Cultural Revolution. Historian Rebecca Karl observes that the bloodline theory had the "curious effect of casting suspicion on the vast majority of the old revolutionaries. After all, the nucleus of the CCP back in the 1920s and 1930s had been urban, educated youths along with some offspring of landlord or rich peasant families (for example, Mao himself)."
See also Ancestral sin Hereditarianism Sippenhaft References Collective punishment Cultural Revolution Determinism Discrimination in China Genetic fallacies Hereditarianism Kinship and descent Political and cultural purges Political theories Victims of familial execution
Bloodline theory
[ "Biology" ]
1,059
[ "Behavior", "Human behavior", "Kinship and descent" ]
72,370,259
https://en.wikipedia.org/wiki/Copper%28II%29%20borate
Copper(II) borate is an inorganic compound with the formula Cu3(BO3)2. It consists of copper atoms in the cupric (+2) oxidation state and orthoborate groups. In the 19th century it was proposed as a green pigment to replace the very toxic Paris green. It has been studied for its photocatalytic properties.
Preparation
Copper(II) borate can be prepared by heating a stoichiometric mixture of copper(II) oxide and diboron trioxide to 900 °C:
3 CuO + B2O3 → Cu3(BO3)2
References Borates Catalysts Inorganic compounds Copper(II) compounds
Copper(II) borate
[ "Chemistry" ]
122
[ "Catalysis", "Catalysts", "Inorganic compounds", "Inorganic compound stubs", "Chemical kinetics" ]
72,371,129
https://en.wikipedia.org/wiki/D%C3%A9borah%20Oliveros
Déborah Oliveros Braniff is a Mexican mathematician whose research interests include discrete geometry, combinatorics, and convex geometry, including the geometry of bodies of constant width and related topics.
Education and career
After earning an undergraduate degree in mathematics from the National Autonomous University of Mexico (UNAM) in 1992 and a master's degree in 1994 under the mentorship of Mónica Clapp, Oliveros continued at UNAM for graduate study in mathematics, with doctoral research on an unsolved question of Stanislaw Ulam concerning the buoyancy of floating convex bodies. Her 1997 dissertation on the topic, Los volantines : sistemas dinamicos asociados al problema de la flotacion de los cuerpos, was jointly supervised by Luis Montejano and Javier Bracho. She became a professor at UNAM in 1996, but left in 1999 for postdoctoral research at the University of Calgary in Canada. She was a professor there from 2001 to 2005, when she returned to a professorship at UNAM. She was one of the founders of the branch of the UNAM Institute of Mathematics at the UNAM Juriquilla campus, and directed the institute in 2015–2016. She also holds an affiliation with the Faculty of Engineering of the Autonomous University of Querétaro.
Book
Oliveros is a coauthor, with Horst Martini and Luis Montejano, of the book Bodies of Constant Width: An Introduction to Convex Geometry with Applications (Birkhäuser, 2019).
Recognition
UNAM gave Oliveros the "Reconocimiento Sor Juana Inés de la Cruz" award in 2014. She is a member of the Mexican Academy of Sciences. References External links Home page Year of birth missing (living people) Living people Mexican mathematicians Mexican women mathematicians Combinatorialists Geometers National Autonomous University of Mexico alumni Academic staff of the University of Calgary Academic staff of the National Autonomous University of Mexico Members of the Mexican Academy of Sciences
Déborah Oliveros
[ "Mathematics" ]
398
[ "Combinatorialists", "Geometers", "Geometry", "Combinatorics" ]
72,371,179
https://en.wikipedia.org/wiki/Germanium%20dichloride%20dioxane
Germanium dichloride dioxane is a chemical compound with the formula GeCl2·C4H8O2, where C4H8O2 is 1,4-dioxane. It is a white solid. The compound is notable as a source of Ge(II), which contrasts with the pervasiveness of Ge(IV) compounds. This dioxane complex represents a well-behaved form of germanium dichloride.
Synthesis and structure
It is prepared by reduction of a dioxane solution of germanium tetrachloride with tributyltin hydride:
GeCl4 + Bu3SnH + C4H8O2 → GeCl2·C4H8O2 + Bu3SnCl + HCl
Hydrosilanes have also been used as reductants. The complex has a polymeric structure. Germanium adopts an SF4-like (seesaw) shape with cis chloride ligands (Cl-Ge-Cl angle = 94.4°) and with the axial positions occupied by oxygen atoms provided by a bridging dioxane. The Ge-O and Ge-Cl distances are 2.40 and 2.277 Å, respectively.
Reactions
The complex is used in the preparation of organogermanium compounds. In organic synthesis, the complex is used as a Lewis acid with reducing properties. References Germanium(II) compounds Chlorides Nonmetal halides
Germanium dichloride dioxane
[ "Chemistry" ]
244
[ "Chlorides", "Inorganic compounds", "Salts" ]