Dataset columns: id (int64, 39 to 79M), url (string, lengths 31 to 227), text (string, lengths 6 to 334k), source (string, lengths 1 to 150), categories (list, lengths 1 to 6), token_count (int64, 3 to 71.8k), subcategories (list, lengths 0 to 30).
58,183,109
https://en.wikipedia.org/wiki/Auto%20restaurant
An auto restaurant is a rest area facility found in Japan, equipped with food counters and vending machines. In many cases it is unmanned and open 24 hours a day, but facilities that include an amusement arcade may have established business hours. Some locations have bathing facilities such as public baths or onsen. History Suburban-type shops along the national highways of Japan developed for customers who eat at midnight, such as the truck drivers of the 1970s. Initially, the range of foods available at the vending machines included hamburgers, cup noodles, and Japanese curry dishes. They were prepared in a microwave oven or with hot water from an integrated water heater. Later, vending machines appeared that could automatically cook more elaborate meals with variations such as tempura, soba, and udon. They also function as sit-down drive-ins. In 1978, the video game Space Invaders appeared, and the fusion of the auto restaurant and the suburban game center reached its peak. However, during the 1980s, as 24-hour convenience stores and fast food outlets spread all over the country, it became possible to procure meals without relying on vending machines even at midnight. As a result, automatic vending machines that were complex and difficult to maintain became obsolete. Due to public security problems, the number of stores quickly fell. Thereafter, business types like the Automatic Super Delis at ampm convenience stores, in which convenience chains set up vending machines for lunch and drinks at offices and the like, also appeared. Moreover, in parking areas along expressways, shops and light-snack corners were increasingly closed or left unmanned, with some locations shifting to this form and others reduced to parking areas with only toilets. References External links Visit "Iron Sword Tallow" after 3 years! I ate a vending machine burger! Yamadaya Appearing on retro vending machine magazine articles and TV (retro vending machine researcher) Mr. Makoto Nomura's website Vending machines Culture of Japan
Auto restaurant
[ "Engineering" ]
412
[ "Vending machines", "Automation" ]
66,238,003
https://en.wikipedia.org/wiki/Elmer%20Peter%20Kohler
Elmer Peter Kohler (November 6, 1865 - May 24, 1938) was an American organic chemist who spent his career on the faculty at Bryn Mawr College and later at Harvard University. At both institutions, he was notable for his effectiveness in teaching. Early life and education Kohler was born in Egypt, Pennsylvania, to a family of Pennsylvania Dutch heritage. He attended Muhlenberg College in Allentown, Pennsylvania, where he graduated in 1886. He was not yet fully focused on chemistry, however, and took only one chemistry course there. After graduating, he took a job as a passenger agent with the Santa Fe Railroad. He returned to education and received a master's degree from Muhlenberg College in 1889. He then attended Johns Hopkins University in Baltimore, where he received his Ph.D. in 1892. Career After completing his Ph.D., Kohler was appointed as an instructor at Bryn Mawr College. He became a professor there in 1900 and later became head of the chemistry department. In 1912, he moved to Harvard University, becoming the Abbott and James Lawrence Professor two years later and the Sheldon Emery Professor in 1934. At both institutions, he was recognized as an excellent teacher and lecturer. However, he avoided other public speaking events, such as scientific meetings and talks, which those who knew him attributed to shyness. Throughout his career, Kohler was noted as a skilled experimentalist, continuing to work in the laboratory himself until very shortly before his death. He was particularly noted for skill in fractional crystallization and for investigations of the synthesis and properties of various unsaturated compounds of interest. Among his earliest graduate students at Harvard was James B. Conant, who later became president of the university. In 1920, Kohler was elected to the National Academy of Sciences. In 1926, he was elected to the German National Academy of Sciences Leopoldina. References 1865 births 1938 deaths American organic chemists Bryn Mawr College faculty Harvard University faculty Members of the German National Academy of Sciences Leopoldina Members of the United States National Academy of Sciences
Elmer Peter Kohler
[ "Chemistry" ]
415
[ "Organic chemists", "American organic chemists" ]
66,239,416
https://en.wikipedia.org/wiki/Pismis%2020
Pismis 20 is a compact open cluster in Circinus. It is located at the heart of the Circinus OB1 association in the Norma arm of the Milky Way Galaxy. Pismis 20 is about away and only about 5 million years old. HD 134959, a blue supergiant variable star also called CX Circinus, is the brightest star in Pismis 20. References Open clusters Circinus Star-forming regions
Pismis 20
[ "Astronomy" ]
95
[ "Circinus", "Constellations" ]
66,240,342
https://en.wikipedia.org/wiki/Conbercept
Conbercept, sold under the commercial name Lumitin, is a novel vascular endothelial growth factor (VEGF) inhibitor used to treat neovascular age-related macular degeneration (AMD) and diabetic macular edema (DME). The anti-VEGF was approved for the treatment of neovascular AMD by the China State FDA (CFDA) in December 2013. As of December 2020, conbercept is undergoing phase III clinical trials through the U.S. Food and Drug Administration’s PANDA-1 and PANDA-2 development programs. Conbercept was developed by Chengdu Kanghong Biotech Company in the People’s Republic of China and is marketed under the name Lumitin. Medical uses It is used for the treatment of neovascular age-related macular degeneration (nAMD), choroidal neovascularization secondary to pathologic myopia, and diabetic macular edema (DME). The medication is given through intravitreal injection (IVT). Contraindications Conbercept is contraindicated in patients with known hypersensitivity to the active ingredient, in patients with ocular or periocular infections, and in patients with active intraocular inflammation. Adverse effects Common adverse effects of the eye formulation include eye pain, transient intraocular pressure (IOP) increase and conjunctival hemorrhage. Mechanism of action Conbercept is a soluble receptor decoy that binds specifically to VEGF-B, placental growth factor (PlGF), and various isoforms of VEGF-A. Conbercept has a VEGF-R2 kinase insert domain receptor (KDR) Ig-like region 4 (KDRd4) which improves the three-dimensional structure and efficiency of dimer formation, thereby increasing the binding capacity of conbercept to VEGF. Composition Conbercept is a recombinant fusion protein composed of VEGFR-1 (second domain) and VEGFR-2 (third and fourth domains) regions fused to the Fc portion of human IgG1 immunoglobulin. History Chengdu Kanghong Pharmaceutical Group, a medical company based in Sichuan, started the development of conbercept in 2005. In 2012, the drug was included on the World Health Organization’s Drug Information 67th List of Recommended International Nonproprietary Names, making it the first Chinese innovator biotech drug to be recognized on the list. In November 2013, the Chinese Food and Drug Administration approved conbercept for the treatment of AMD. By 2014, conbercept was marketed for treatment of wAMD in China. In 2016, Phase III clinical trials of conbercept were authorized by the U.S. Food and Drug Administration. In 2017, Kanghong Pharmaceutical Group partnered with Syneos Health to conduct Phase III clinical trials simultaneously in more than 30 countries around the world with an investment of $228 million. In 2020, conbercept was approved for use in Mongolia. Clinical trials in China Conbercept is the only anti-VEGF drug confirmed by randomized controlled trials (RCT) to sustain visual improvements with 3+Q3M regimens (PHOENIX study). Conbercept significantly improves visual acuity and anatomical outcomes in patients with PCV (AURORA study). Conbercept provides significant visual acuity improvement in DME patients (SAILING study). Society and culture Legal Status In 2013, the CFDA approved conbercept for the treatment of neovascular age-related macular degeneration (nAMD). In 2017, the CFDA approved it for the treatment of pathologic myopia associated choroidal neovascularization. In 2019, the CFDA approved it for the treatment of diabetic macular edema (DME). Economic Conbercept has been shown to be a cost-effective wAMD treatment option in China.
Compared to two similar anti-VEGF intravitreal drugs, ranibizumab and aflibercept, conbercept has been shown to be the most cost-effective option for treatment of wAMD in China. In 2017, the national basic medical insurance in China began covering conbercept. References External links Conbercept, Drug Information Portal. U.S. National Library of Medicine. Ophthalmology drugs Angiogenesis inhibitors Engineered proteins
Conbercept
[ "Biology" ]
937
[ "Angiogenesis", "Angiogenesis inhibitors" ]
66,241,271
https://en.wikipedia.org/wiki/Ergodicity%20economics
Ergodicity economics is a research programme that applies the concept of ergodicity to problems in economics and decision-making under uncertainty. The programme's main goal is to understand how traditional economic theory, framed in terms of the expectation values, changes when replacing expectation value with time averages. In particular, the programme is interested in understanding how behaviour is shaped by non-ergodic economic processes, that is processes where the expectation value of an observable does not equal its time average. Background Mean values and expected values are used extensively in economic theory, most commonly as a summary statistic, e.g. used for modelling agents’ decisions under uncertainty. Early economic theory was developed at a time when the expected value had been invented but its relation to the time average had not been studied. No clear distinction was made between the two mathematical objects, which can be interpreted as an implicit assumption of ergodicity. Ergodicity economics explores what aspects of economics can be informed by avoiding this implicit assumption. While one common critique of modelling decisions based on expected values is the sensitivity of the mean to outliers, ergodicity economics focuses on a different critique. It emphasizes the physical meaning of expected values as averages across a statistical ensemble of parallel systems. It insists on a physical justification when expected values are used. In essence, at least one of two conditions must hold: the average value of an observable across many real systems is relevant to the problem, and the sample of systems is large enough to be well approximated by a statistical ensemble; the average value of an observable in one real system over a long time is relevant to the problem, and the observable is well modelled as ergodic. In ergodicity economics, expected values are replaced, where necessary, by averages that account for the ergodicity or non-ergodicity of the observables involved. Non-ergodicity is closely related to the problems of irreversibility and path dependence that are common themes in economics. Relation to other sciences In mathematics and physics, the concept of ergodicity is used to characterise dynamical systems and stochastic processes. A system is said to be ergodic, if a point of a moving system will eventually visit all parts of the space that the system moves in, in a uniform and random sense. Ergodicity implies that the average behaviour along a single trajectory through time (time average) is equivalent to the average behaviour of a large ensemble at one point in time (ensemble average). For an infinitely large ensemble, the ensemble average of an observable is equivalent to the expected value. Ergodicity economics inherits from these ideas the probing of the ergodic properties of stochastic processes used as economic models. Historical Background Ergodicity economics questions whether expected value is a useful indicator of an economic observable's behaviour over time. In doing so it builds on existing critiques of the use of expected value in the modeling of economic decisions. Such critiques started soon after the introduction of expected value in 1654. For instance, expected-utility theory was proposed in 1738 by Daniel Bernoulli as a way of modeling behavior which is inconsistent with expected-value maximization. 
In 1956, John Kelly devised the Kelly criterion by optimizing the use of available information, and Leo Breiman later noted that this is equivalent to optimizing time-average performance, as opposed to expected value. The ergodicity economics research programme originates in two 2011 papers by Ole Peters, a theoretical physicist and current external professor at the Santa Fe Institute. The first studied the problem of optimal leverage in finance and how this may be achieved by considering the non-ergodic properties of geometric Brownian motion. The second paper applied principles of non-ergodicity to propose a possible solution for the St. Petersburg paradox. More recent work has suggested possible solutions for the equity premium puzzle, the insurance puzzle, gamble-selection, probability weighting, and has provided insights into the dynamics of income inequality. Decision theory Ergodicity economics emphasizes what happens to an agent's wealth x(t) over time t. From this follows a possible decision theory where agents maximize the time-average growth rate of wealth. The functional form of the growth rate, g, depends on the wealth process x(t). In general, a growth rate takes the form g = Δv(x)/Δt, where the function v(x) linearizes x(t), such that growth rates evaluated at different times can be meaningfully compared. Growth processes generally violate ergodicity, but their growth rates may nonetheless be ergodic. In this case, the time-average growth rate, g, can be computed as the rate of change of the expected value of v(x), i.e. g = ΔE[v(x)]/Δt. (1) In this context, v(x) is called the ergodicity transformation. Relation to classic decision theory An influential class of models for economic decision-making is known as expected utility theory. The following specific model can be mapped to the growth-rate optimization highlighted by ergodicity economics. Here, agents evaluate monetary wealth x according to a utility function u(x), and it is postulated that decisions maximize the expected value of the change in utility, E[Δu(x)]. (2) This model was proposed as an improvement of expected-value maximization, where agents maximize E[Δx]. A non-linear utility function allows the encoding of behavioral patterns not represented in expected-value maximization. Specifically, expected-utility maximizing agents can have idiosyncratic risk preferences. An agent specified by a convex utility function is more risk-seeking than an expected wealth maximizer, and a concave utility function implies greater risk aversion. Comparing (2) to (1), we can identify the utility function u(x) with the linearization v(x), and make the two expressions identical by dividing (2) by Δt. Division by Δt simply implements a preference for faster utility growth in the expected-utility-theory decision protocol. This mapping shows that the two models will yield identical predictions if the utility function applied under expected-utility theory is the same as the ergodicity transformation needed to compute an ergodic growth rate. Ergodicity economics thus emphasizes the dynamic circumstances under which a decision is made, whereas expected-utility theory emphasizes idiosyncratic preferences to explain behavior. Different ergodicity transformations indicate different types of wealth dynamics, whereas different utility functions indicate different personal preferences. The mapping highlights the relationship between the two approaches, showing that differences in personal preferences can arise purely as a result of different dynamic contexts of decision makers.
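As a numerical preview of the worked examples in the following sections, the snippet below is an illustrative sketch and not part of the original article: it uses the same multiplicative factors, ×1.5 and ×0.6 per round, as the coin-toss example further down, with arbitrarily chosen ensemble and round counts, and contrasts the expected per-round growth factor (1.05) with the growth factor of a typical individual trajectory (about 0.95).

```python
import random

# Illustrative sketch, not from the article: contrast ensemble-average and
# time-average growth of a multiplicative wealth process. The factors 1.5
# (heads) and 0.6 (tails) match the coin-toss example below; the numbers of
# rounds and trajectories are arbitrary choices for the demonstration.
random.seed(0)

UP, DOWN = 1.5, 0.6      # multiplicative factors for heads / tails
ROUNDS = 1000            # rounds played by each gambler
TRAJECTORIES = 10_000    # size of the statistical ensemble

def final_wealth(rounds: int) -> float:
    """Wealth after `rounds` fair coin tosses, starting from 1 and reinvesting fully."""
    w = 1.0
    for _ in range(rounds):
        w *= UP if random.random() < 0.5 else DOWN
    return w

ensemble = [final_wealth(ROUNDS) for _ in range(TRAJECTORIES)]

# Ensemble (expected-value) perspective: wealth grows by 5% per round on average.
print("expected per-round factor:", 0.5 * UP + 0.5 * DOWN)            # 1.05

# Time-average perspective: the median trajectory shrinks by about 5% per round.
median_wealth = sorted(ensemble)[TRAJECTORIES // 2]
print("median per-round factor:", median_wealth ** (1.0 / ROUNDS))    # ~0.95
print("theoretical time-average factor:", (UP * DOWN) ** 0.5)         # ~0.949
```

Reporting the median trajectory rather than the sample mean is deliberate: in a finite ensemble the sample mean is dominated by a handful of very lucky paths, which is precisely the non-ergodic effect the programme emphasizes.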
Continuous example: Geometric Brownian motion A simple example for an agent's wealth process, x(t), is geometric Brownian motion (GBM), commonly used in mathematical finance and other fields. x(t) is said to follow GBM if it satisfies the stochastic differential equation dx = x (μ dt + σ dW), (3) where dW is the increment in a Wiener process, and μ ('drift') and σ ('volatility') are constants. Solving (3) gives x(t) = x(0) exp[(μ − σ²/2) t + σ W(t)]. (4) In this case the ergodicity transformation is v(x) = ln x, as is easily verified: ln x(t) = ln x(0) + (μ − σ²/2) t + σ W(t) grows linearly in time. Following the recipe laid out above, this leads to the time-average growth rate g = ΔE[ln x]/Δt = μ − σ²/2. (5) It follows that for geometric Brownian motion, maximizing the rate of change in the logarithmic utility function, u(x) = ln x, is equivalent to maximizing the time-average growth rate of wealth, i.e. what happens to the agent's wealth over time. Stochastic processes other than (3) possess different ergodicity transformations, where growth-optimal agents maximize the expected value of utility functions other than the logarithm. Trivially, replacing (3) with additive dynamics implies a linear ergodicity transformation, and many similar pairs of dynamics and transformations can be derived. Discrete example: multiplicative Coin Toss A popular illustration of non-ergodicity in economic processes is a repeated multiplicative coin toss, an instance of the binomial multiplicative process. It demonstrates how an expected-value analysis can indicate that a gamble is favorable although the gambler is guaranteed to lose over time. Definition In this thought experiment, a person participates in a simple game where they toss a fair coin. If the coin lands heads, the person gains 50% on their current wealth; if it lands tails, the person loses 40%. The game shows the difference between the expected value of an investment, or bet, and the time-average or real-world outcome of repeatedly engaging in that bet over time. Calculation of Expected Value Denoting current wealth by x(t), and the time when the payout is received by t + δt, we find that wealth after one round is given by the random variable x(t + δt), which takes the values 1.5 x(t) (for heads) and 0.6 x(t) (for tails), each with probability 1/2. The expected value of the gambler's wealth after one round is therefore E[x(t + δt)] = 0.5 × 1.5 x(t) + 0.5 × 0.6 x(t) = 1.05 x(t). By induction, after T rounds expected wealth is E[x(t + T δt)] = 1.05^T x(t), increasing exponentially at 5% per round in the game. This calculation shows that the game is favorable in expectation—its expected value increases with each round played. Calculation of Time-Average The time-average performance indicates what happens to the wealth of a single gambler who plays repeatedly, reinvesting their entire wealth every round. Due to compounding, after T rounds the wealth will be x(t + T δt) = x(t) r_1 r_2 ⋯ r_T, where we have written r_τ to denote the realized random factor by which wealth is multiplied in the τ-th round of the game (either r_τ = 1.5, for heads; or r_τ = 0.6, for tails). Averaged over time, wealth has grown per round by a factor r̄_T = (r_1 r_2 ⋯ r_T)^(1/T). Introducing the notation n_T for the number of heads in a sequence of T coin tosses, we re-write this as r̄_T = 1.5^(n_T/T) × 0.6^((T − n_T)/T). For any finite T, the time-average per-round growth factor, r̄_T, is a random variable. The long-time limit, found by letting the number of rounds diverge (T → ∞), provides a characteristic scalar which can be compared with the per-round growth factor of the expected value.
The proportion of heads tossed then converges to the probability of heads (namely 1/2), and the time-average growth factor is r̄ = 1.5^(1/2) × 0.6^(1/2) = 0.9^(1/2) ≈ 0.95. Discussion The comparison between expected value and time-average performance illustrates an effect of broken ergodicity: over time, with probability one, wealth decreases by about 5% per round, in contrast to the increase by 5% per round of the expected value. How the mind is tricked when betting on a non-stationary system To explain the danger of betting in a non-stationary system, a simple game is used. Two people sit opposite each other, separated by a black cloth so that they cannot see each other. They play the following game: the person we will call A tosses a coin, and the person we will call B tries to guess the state in which the coin lies on the table. The game lasts an arbitrary interval of time, and person A is free to choose how many tosses to make during that interval; person B does not see the tosses but can make a bet at any time within the interval. When he makes a bet, if he guesses the state in which the coin lies at that moment, he wins. The game begins: A tosses only once (result: heads), while B bets twice on heads, winning both times. Question: What is the overall probability of the outcome? • B calculates a probability of 25% (0.5 × 0.5), considering the bets as independent. • A calculates a probability of 50%, since there was only one toss and not two separate events. The difference arises from the estimate of the conditional probability: • B estimates the conditional probability as P(E2 | E1) = P(E2), treating the events (bets) as completely independent. • A estimates the conditional probability as P(E2 | E1) = 1, treating the events as completely dependent. Here E1 is the first bet and E2 the second bet. The correct answer depends on information: only the person tossing the coin (A) knows the number of tosses and can correctly estimate the probability. Application to financial markets The game highlights how traders (player B) often treat their trades as independent, ignoring the non-ergodic structure of financial markets (player A). Markets are not ergodic because sequences of events cannot be simply represented by long-term statistical averages. In other words, returns do not follow independent and identically distributed (i.i.d.) processes, and historical conditions profoundly influence future outcomes. This error leads to an overestimation of predictive capabilities and excessive risk-taking. Coverage in the wider media In December 2020, Bloomberg News published an article titled "Everything We’ve Learned About Modern Economic Theory Is Wrong" discussing the implications of ergodicity in economics following the publication of a review of the subject in Nature Physics. Morningstar covered the story to discuss the investment case for stock diversification. In the book Skin in the Game, Nassim Nicholas Taleb suggests that the ergodicity problem requires a rethinking of how economists use probabilities. A summary of the arguments was published by Taleb in a Medium article in August 2017. In the book The End of Theory, Richard Bookstaber lists non-ergodicity as one of four characteristics of our economy that are part of financial crises, that conventional economics fails to adequately account for, and that any model of such crises needs to take adequate account of. The other three are: computational irreducibility, emergent phenomena, and radical uncertainty.
In the book The Ergodic Investor and Entrepreneur, Boyd and Reardon tackle the practical implications of non-ergodic capital growth for investors and entrepreneurs, especially for those with a sustainability, circular economy, net positive, or regenerative focus. James White and Victor Haghani discuss the field of ergodicity economics in their book The Missing Billionaires. Criticisms It has been claimed that expected utility theory implicitly assumes ergodicity in the sense that it optimizes an expected value which is only relevant to the long-term benefit of the decision-maker if the relevant observable is ergodic. Doctor, Wakker, and Tang argue that this is wrong because such assumptions are “outside the scope of expected utility theory as a static theory”. They further argue that ergodicity economics overemphasizes the importance of long-term growth as “the primary factor that explains economic phenomena,” and downplays the importance of individual preferences. They also caution against optimizing long-term growth inappropriately. Doctor, Wakker, and Tang give the example of a short-term decision between A) a great loss incurred with certainty and B) a gain enjoyed with almost-certainty paired with an even greater loss at negligible probability. In the example the long-term growth rate favors the certain loss and seems an inappropriate criterion for the short-term decision horizon. Finally, an experiment by Meder and colleagues claims to find that individual risk preferences change with dynamical conditions in ways predicted by ergodicity economics. Doctor, Wakker, and Tang criticize the experiment for being confounded by differences in ambiguity and the complexity of probability calculations. Further, they criticize the analysis for applying static expected utility theory models to a context where dynamic versions are more appropriate. In support of this, Goldstein claims to show that multi-period EUT predicts a similar change in risk preferences as observed in the experiment. See also Santa Fe Institute St. Petersburg paradox References Paradoxes in economics Behavioral finance Mathematical economics Coin flipping Economic theories Ergodic theory
Ergodicity economics
[ "Mathematics", "Biology" ]
3,191
[ "Behavior", "Applied mathematics", "Ergodic theory", "Behavioral finance", "Mathematical economics", "Human behavior", "Dynamical systems" ]
66,242,082
https://en.wikipedia.org/wiki/Magnetic%20resonance%20myelography
Magnetic resonance myelography (MR myelography or MRI myelography) is a noninvasive medical imaging technique that can provide anatomic information about the subarachnoid space. It is a type of MRI examination that uses a contrast medium and a magnetic resonance imaging scanner to detect pathology of the spinal cord, including the location of a spinal cord injury, cysts, tumors and other abnormalities. The procedure involves the injection of a gadolinium-based contrast medium into the cervical or lumbar spine, followed by the MRI scan. Procedure The radiologist first numbs the skin with a local anesthetic and then injects the gadolinium-based contrast medium into the subarachnoid space at the interspace between the third and fourth lumbar vertebrae (L3-L4). The patient is then asked to roll on the table until the contrast is evenly distributed in the spinal canal and fills the nerve roots. The patient is then transferred to the MRI table and the scan is taken. Postprocedural care The patient should be adequately hydrated to remove the contrast from the body. The patient should be observed following the examination for adverse effects of the contrast medium. The myelogram is performed on an outpatient basis, so the patient should be properly instructed regarding limitations following the procedure, such as driving. Instructions regarding postprocedural care, including warning signs of adverse reactions and the possibility of persistent headaches, should be given to the patient by a trained professional. A physician should be available to answer questions and provide patient management following the procedure. Indications Demonstration of the site of a cerebrospinal fluid leak (postlumbar puncture headache, postspinal surgery headache, rhinorrhea, or otorrhea) Surgical planning, especially in regard to the nerve roots. Radiation therapy planning. Diagnostic evaluation of spinal or basal cisternal disease. Nondiagnostic MRI studies of the spine or skull base. Poor correlation of physical findings with MRI. Contraindications Metallic implants, unless made of titanium. Pacemakers Advantages Major advantages of MR myelography over conventional radiographic myelography include its lack of ionizing radiation, noninvasive nature, and lack of need for intrathecal contrast material. See also Myelography MRI References Magnetic resonance imaging Spinal cord
Magnetic resonance myelography
[ "Chemistry" ]
472
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
66,242,294
https://en.wikipedia.org/wiki/Gonzalo%20Blumel
Gonzalo Fernando Blumel Mac-Iver (born 17 May 1978) is a Chilean politician, civil environmental engineer and economist who served as Minister of the Interior and Public Security of Chile during Sebastián Piñera's second government (2018–2022). Biography He is the son of Juan Enrique Blumel Méndez – grandson of the German explorer Santiago Blümel and of Rosa Ancán, daughter of a Mapuche lonko from Nueva Imperial – and of Emma Francisca Mac-Iver Prieto, great-granddaughter of Enrique Mac Iver (a politician of the liberal faction of the Radical Party of Chile) and of Emma Ovalle Gutiérrez, granddaughter of Chilean President José Tomás Ovalle (1830–1831). Political career He began his career in 2001 as a researcher at the Center for the Environment of the Department of Industrial Engineering at the Pontificia Universidad Católica de Chile. In 2005, he became planning secretary of the Municipality of Futrono. Later, he worked as a researcher for the environment program of Libertad y Desarrollo, a right-wing think tank linked to the libertarian-conservative ideas then represented by the party Independent Democratic Union (known by its Spanish acronym: «UDI»), an organization promoted by the Augusto Pinochet dictatorship through its ideologist, the lawyer Jaime Guzmán (UDI's founder). He held three posts during President Piñera's first government: from March to July 2010 he was chief of staff to Cristián Larroulet (UDI), Minister Secretary-General of the Presidency; he was later head of the Studies Division in the same ministry; and, finally, in March 2013, he became the president's chief adviser. After Piñera's first government ended, he was CEO of the Avanza Chile Foundation and also taught at PUC, his alma mater, and at Universidad del Desarrollo (UDD). In 2016, he was one of the founding members of the party Political Evolution, known in Chile by its acronym «Evópoli». When Piñera returned to the presidency in 2018, Blumel was appointed Minister Secretary-General of the Presidency, the same office in which he had collaborated with Larroulet. He served there until 28 October 2019, when he was called by Piñera to head the Ministry of the Interior amid the 2019–20 riots in Chile, commonly known as the Estallido Social. References External links 1978 births Living people Alumni of the University of Birmingham Chilean civil engineers Chilean people of German descent Chilean people of British descent Chilean people of Mapuche descent Environmental engineers Evópoli politicians Politicians from Santiago, Chile People from Talca Pontifical Catholic University of Chile alumni Ministers of the interior of Chile
Gonzalo Blumel
[ "Chemistry", "Engineering" ]
543
[ "Environmental engineers", "Environmental engineering" ]
66,243,143
https://en.wikipedia.org/wiki/Chronic%20egg%20laying
Chronic egg laying is a maladaptive, behavioural disorder commonly seen in pet birds which repeatedly lay clutches of infertile eggs in the absence of a mate. It is particularly common in cockatiels, budgerigars, lovebirds, macaws and amazon parrots. Birds exhibiting chronic egg laying behavior will frequently lay eggs one after the other without stopping to brood them once the typical clutch size for their particular species has been reached. Excessive egg laying places a strain on the hen's body, depleting resources such as calcium, protein and vitamins from her body and may lead to conditions such as egg binding, osteoporosis, seizures, prolapse of the oviduct, or peritonitis – which may lead to her death. Causes While a single specific cause is unknown, chronic egg laying is believed to be triggered by hormonal imbalances influenced by a series of external factors. As in the domestic chicken, female parrots are capable of producing eggs without the involvement of a male – it is a biological process that may be triggered by environmental cues such as day length (days becoming longer, indicating the arrival of spring) and food availability, or the presence of a dark, enclosed space (such as a cupboard or space beneath furniture) in which the bird in question can hide and is not a conscious decision. An overabundance of easily available, nutritious food may trigger the bird's reproductive cycle, as can the presence of a male bird in the vicinity of her cage, or even a circumstance where the hen becomes mistakenly attracted to a mirror or other cage toy. Removing the eggs from a bird demonstrating this behavior may make the problem worse by inducing her to lay even more eggs. It is also possible for the bird's owner to inadvertently trigger chronic egg laying by regularly petting her on the back or under the wings or feeding her from the mouth, which the bird may misinterpret as courtship behaviour. Treatment Unlike cats, dogs and other pet mammals, parrots are not routinely neutered. While it is possible to spay a female parrot, it is a difficult and dangerous procedure that is seldom performed except in extreme circumstances. Instead, treatment and prevention of chronic egg laying behavior involves behavioral modification and environmental changes. Such changes may include: Shortening the parrot's perceived day length by putting her to bed early, giving her 12 hours of uninterrupted sleep Denying her access to dark, enclosed spaces Separating her from any birds to which she appears to be bonded in a sexual manner Discouraging sexual behaviors from the bird towards humans or favored objects Regularly changing around the layout of her cage Permitting the bird to continue sitting on her eggs until she loses interest if she has already laid a clutch and then stopped Hormonal treatment, such as injection of leuprolide acetate (Lupron) or a deslorelin implant placed beneath the skin may cause the egg-laying to cease, but does not resolve the underlying cause of the behavior. References Abnormal behaviour in animals Animal reproductive system Bird behavior Bird health Oology
Chronic egg laying
[ "Biology" ]
632
[ "Behavior by type of animal", "Behavior", "Abnormal behaviour in animals", "Ethology", "Bird behavior" ]
66,243,144
https://en.wikipedia.org/wiki/Methylene%20diurea
Methylene diurea (MDU) is the organic compound with the formula CH2(NHC(O)NH2)2. It is a white water-soluble solid. The compound is formed by the condensation of formaldehyde with urea. Methylene diurea is the substrate for the enzyme methylenediurea deaminase. Applications MDU is an intermediate in the production of urea-formaldehyde resins. Together with dimethylene triurea, MDU is a component of some controlled-release fertilizers. References Fertilizers Horticulture Ureas
Methylene diurea
[ "Chemistry" ]
130
[ "Organic compounds", "Fertilizers", "Soil chemistry", "Ureas" ]
66,243,318
https://en.wikipedia.org/wiki/Dimethylene%20triurea
Dimethylene triurea (DMTU) is the organic compound with the formula (H2NC(O)NHCH2NH)2CO. It is a white water-soluble solid. The compound is formed by the condensation of formaldehyde with urea. Both branched and linear isomers exist. Applications DMTU is an intermediate in the production of urea-formaldehyde resins. Together with methylene diurea, DMTU is a component of some controlled-release fertilizers. References Fertilizers Horticulture Ureas
Dimethylene triurea
[ "Chemistry" ]
124
[ "Fertilizers", "Organic compounds", "Soil chemistry", "Organic compound stubs", "Organic chemistry stubs", "Ureas" ]
66,243,383
https://en.wikipedia.org/wiki/Crotonylidene%20diurea
Crotonylidene diurea (CDU) is an organic compound formed by the condensation of crotonaldehyde with two equivalents of urea. It is a white, water-soluble solid. CDU is a component of some controlled-release fertilizers. References Fertilizers Horticulture Ureas Nitrogen heterocycles Heterocyclic compounds with 1 ring
Crotonylidene diurea
[ "Chemistry" ]
88
[ "Organic compounds", "Fertilizers", "Soil chemistry", "Ureas" ]
66,243,497
https://en.wikipedia.org/wiki/Paleobiota%20of%20the%20Kristianstad%20Basin
The Kristianstad Basin is a Cretaceous-age structural basin and geological formation in northeastern Skåne, the southernmost province of Sweden. The sediments in the basin preserve a wide assortment of taxa represented in its fossil record, including the only non-avian dinosaur fossils in Sweden and one of the world's most diverse mosasaur faunas. Though a majority of the taxa listed below lived during the latest early Campanian ( 80.5 million years ago; fossils from the site Ivö Klack alone from this time comprise about 40 vertebrate species and more than 200 invertebrate species), the Kristianstad Basin preserves fossils ranging in age from the early Santonian ( 86.3 million years ago) to the early Maastrichtian ( 72.1 million years ago); some of the animals in the list were not contemporaries, but separated from each other in time by several million years. The time spans from which fossils have been recovered are included for each species in the list. Bony fish Ray-finned fish Cartilaginous fish Sharks Holocephali Rays Crocodylomorphs In addition to the remains referred to Aigialosuchus, detached and unidentified crocodylomorph scutes have also been discovered in Campanian-age deposits at Ivö Klack. Dinosaurs Non-avian dinosaurs Birds In addition to the fossils described below, indeterminate hesperornithiform remains have also been recovered from Åsen. Mosasaurs Plesiosaurs The last comprehensive review of the plesiosaur fauna in the Kristianstad Basin was done by paleontologist Per-Ove Persson in the 1960s and his taxonomy is still used with caution, pending a much-needed new review. Pterosaurs Possible pterosaur bone fragments have been recovered from earliest Campanian-age deposits at Ullstorp, though they remain unpublished. Scincomorpha Fossils of terrestrial scincomorph lizards have been recovered in the Kristianstad Basin, but are as yet unpublished. Turtles In addition to the fossils described below, indeterminate turtle remains, including limb bones and carapace fragments, have also been recovered from Ivö Klack, Ugnsmunnarna, Ignaberga and Åsen. Invertebrates Note: Although the Kristianstad Basin is incredibly rich in invertebrate fossils and diversity, with hundreds of species, the creation of a full list is impossible due to a lack of published overviews and recent examinations. Which lists are incomplete is specified below, and estimates of how many species are actually present are included where possible. Bivalves The bivalves are the most species-rich group present in the Kristianstad Basin, with Surlyk & Sørenson (2010) stating that close to 70 distinct species were present at Ivö Klack alone. Brachiopods According to Surlyk & Sørenson (2010) there were 27 distinct species of brachiopods present at Ivö Klack alone. Bryozoans Fossils of bryozoans are common in several sites throughout the Kristianstad Basin. Cephalopods Belemnites Ammonites According to Surlyk & Sørenson, a large species of ammonite is also known from Ivö Klack. Chitons Corals Fossils of corals are common in several sites throughout the Kristianstad Basin. The list below only accounts for the recently revised diversity of corals found at Ivö Klack. Crinoids Fossils of crinoids have been found at several sites throughout the Kristianstad Basin. Crustaceans Echinoids According to Surlyk & Sørenson (2010) there were 18 distinct species of echinoids present at Ivö Klack alone.
Gastropods According to Surlyk & Sørenson (2010), 19 species of gastropods could be identified from fossils just from the latest Early Campanian of Ivö Klack. Their subsequent 2011 study on the gastropods of the site only listed the 15 species accounted for below. Polychaetes In addition to the diverse polychaete worm fauna of Ivö Klack listed below, encrusters of serpulid polychaetes have also been discovered at Åsen, though they are considerably fewer in number there and as of yet unpublished. Sponges Fossil sponges have been recovered at fossil sites in the Kristianstad Basin. Starfish Fossils of starfish have been found at several sites in the Kristianstad Basin. According to Surlyk & Sørenson (2010) there were 16 distinct species of starfish present at Ivö Klack alone. See also List of fossil sites (with link directory) List of dinosaur-bearing rock formations Paleobiota of the Niobrara Formation References Citations General bibliography Web sources Campanian life Cretaceous Sweden Maastrichtian life Kristianstad Basin Santonian life Mesozoic paleobiotas
Paleobiota of the Kristianstad Basin
[ "Biology" ]
1,020
[ "Mesozoic paleobiotas", "Prehistoric fauna by locality", "Prehistoric biotas" ]
66,245,053
https://en.wikipedia.org/wiki/MOA-2011-BLG-262L
MOA-2011-BLG-262L is a red dwarf with an orbiting exoplanet, both detected through the gravitational microlensing event MOA-2011-BLG-262. It was once believed to be either an exoplanet with 3.2 times the mass of Jupiter and an exomoon with 0.47 times Earth's mass or a red dwarf with a mass of 0.11 solar masses orbited by a planet, but the latter scenario was confirmed in 2024 based on observations of the host star by the Keck telescope, 10 years after the end of the microlensing event. The system is located from Earth, in the constellation Sagittarius. The host star is a red dwarf, with 19% of the Sun's mass and a faint apparent magnitude of 22.3 in the K-band. It has a transverse velocity of , the highest ever found for any star with a known exoplanet. References Sagittarius (constellation) Red dwarfs Planetary systems with one confirmed planet Astronomical objects discovered in 2013 Gravitational lensing
MOA-2011-BLG-262L
[ "Astronomy" ]
225
[ "Sagittarius (constellation)", "Constellations" ]
66,248,309
https://en.wikipedia.org/wiki/Cortinarius%20ainsworthii
Cortinarius ainsworthii is a species of webcap. It is known from central and Northern Europe, where it grows in a variety of habitats. The species was first described in 2020, and was named in honour of the mycologist A. Martyn Ainsworth. Along with five other British webcaps, C. ainsworthii was selected by Kew Gardens as a highlight of taxa described by the organisation's staff and affiliates in 2020. Taxonomy Cortinarius ainsworthii was described in a 2020 research note in the journal Fungal Diversity by Kare Liimatainen and Tuula Niskanen. The description was based on a collection made by A. Martyn Ainsworth in 2017 in Devil's Dyke, near Brighton, England. The specific name honours Ainsworth. Phylogenetic analysis placed the species in Cortinarius sect. Bovini. Collections previously identified as C. rheubarbarinus match with C. ainsworthii, though the type specimen of C. rheubarbarinus does not. A published ITS sequence of the type specimen of C. hydrobivelus matches with C. ainsworthii, but Liimatainen and Niskanen's unpublished sequencing of the specimen does not. Instead, it matches C. armeniacus. Based on this analysis, as well as morphological and ecological factors, Liimatainen and Niskanen concluded that C. hydrobivelus was a synonym of C. armeniacus, leaving their species (now described as C. ainsworthii) lacking a valid name. C. ainsworthii was one of over 150 botanical and mycological taxa described by staff or affiliates of Kew Gardens in 2020. In a year-end round-up, Kew scientists selected ten highlights, one of which was six newly described British Cortinarius species: C. ainsworthii described from Brighton; C. britannicus from Caithness; C. scoticus and C. aurae from the Black Wood of Rannoch; C. subsaniosus from Cumbria; and C. heatherae from Heathrow Airport. In a press release, Kew identified Cortinarius species as "ecologically important in supporting the growth of plants, particularly trees such as oak, beech, birch and pine" and playing "a key role in the carbon cycling of woodlands and providing nitrogen to trees". Description Cortinarius ainsworthii produces mushrooms with caps that are wide. They are at first convex, later plano-convex, and brown. The cap's margin features whitish fibrils. The cap is hygrophanous, drying up in a zone at the centre to a pale ochraceous brown. The gills are medium-spaced, and adnexed (narrowly attached to the stalk) to emarginate (narrowly attached to the stem though shallower at the attachment). The gills are at first pale brown, later darkening. The edge of the gill, at least when young, is paler. The stem is long and cylindrical to somewhat club-shaped. At the apex, it is thick and, at the base, it is wide. The stem is initially covered in whitish silky fibrils, though becomes pale brownish with age, especially at the base. The flesh is marbled hygrophanous; in the cap it is brown, in the stem paler. The universal veil is white, rather sparse to more abundant, forming some incomplete girdles on the stem. The gills have no distinct odour. Microscopic characteristics Cortinarius ainsworthii has basidiospores that measure 8 to 9.5 by 5 to 5.8 micrometres (μm), averaging 8.7 by 5.3 μm. The spores are almond-shaped, and are moderately to strongly warty. The spores are moderately dextrinoid, meaning that they stain reddish to reddish-brown when treated with Melzer's reagent or Lugol's solution. The club-shaped basidia measure 27 to 39 by 7 to 9 μm, and have four sterigmata.
The hyphae in the flesh of the gills are golden brown, smooth with a few spot-like encrustations. The surface of the pileipellis is pale, consisting of parallel hyphae. These measure 5.5 to 8.5 μm in width, and are smooth with a few spot-like incrustations. Lower cells are pale brown, measuring 17 to 31 by 11 to 15.5 μm. They are smooth with a few spot-like encrustations. Similar species Cortinarius ainsworthii is a medium-sized species of C. sect. Bovini. It can be distinguished from closely related species by the combination of brown cap; almond-shaped, medium-sized spores (averaging 8.7 by 5.3 μm), and its habitat in deciduous forests on calcareous ground. Ecology Cortinarius ainsworthii can be found in deciduous forests (perhaps associating with oaks, hazels, and beeches) on calcareous ground, also in open, grazed areas, presumably with rock roses. It is known from temperate to hemiboreal areas in central and northern Europe. References ainsworthii Fungi described in 2020 Fungi of Europe Fungus species
Cortinarius ainsworthii
[ "Biology" ]
1,106
[ "Fungi", "Fungus species" ]
66,248,935
https://en.wikipedia.org/wiki/Berkeley%2086
Berkeley 86 is a young open cluster in Cygnus. It is located inside the OB stellar association Cyg OB1 and is obscured by a foreground dust cloud. References Open clusters Cygnus (constellation) Star-forming regions
Berkeley 86
[ "Astronomy" ]
49
[ "Cygnus (constellation)", "Constellations" ]
56,453,616
https://en.wikipedia.org/wiki/Clementine%20Chambon
Clementine Chambon is a chemical engineer at Imperial College London, who works on energy solutions for energy-deprived countries. She is the Chief Technology Officer (CTO) of Oorja Development Solutions, a social enterprise focused on providing clean energy access to off-grid communities in rural India. Education Chambon completed her Masters in Chemical Engineering at the University of Cambridge in 2014. During her degree, she was an intern at Mars Petcare in Verden, Northern Germany. She was awarded a graduate prize from the Salters' Institute of Industrial Chemistry. Subsequently, Chambon completed her PhD in lignocellulosic biofuels in 2017, funded by an Imperial College President's PhD Scholarship and the Grantham Institute for Climate Change. Research Chambon received an Echoing Green Climate Fellowship with a grant of $90,000 in 2015. She has technical experience with biomass gasification systems and the deployment of viable emerging decentralised energy solutions. In 2017, she won the Institution of Chemical Engineers Young Researcher Award. She is an EPSRC doctoral prize fellow at Imperial College London working on biomass gasification and its application for rural electrification. Oorja Development Solutions Chambon is the co-founder and chief technology officer of Oorja. She says that she came up with the idea for Oorja during the Climate-KIC Journey, a summer school that teaches climate entrepreneurship, in August 2014. Oorja provides clean energy and biochar to rural off-grid communities in India. Chambon is responsible for the design and building of Oorja's easily operable mini-power plants, which transform agricultural waste into affordable electricity and can be run by local people. Oorja's mission is to impact one million people by 2025. They subsidise electricity for low-income households, women-led households, schools, health centres and off-grid street lights. In 2016, Chambon was included in Forbes' 30 Under 30 list for top Social Entrepreneurs. She was also listed in MIT Technology Review's list of French innovators under 35 years old. In 2017, Oorja electrified 100 homes in Uttar Pradesh's Sarvantara Village, providing energy for 1,000 people. References Living people Women chemical engineers French chemical engineers Academics of Imperial College London Women founders Year of birth missing (living people) 21st-century French women engineers 21st-century French engineers
Clementine Chambon
[ "Chemistry" ]
492
[ "Women chemical engineers", "Chemical engineers" ]
56,454,187
https://en.wikipedia.org/wiki/Bundle%20of%20principal%20parts
In algebraic geometry, given a line bundle L on a smooth variety X, the bundle of n-th order principal parts of L is a vector bundle of rank equal to the binomial coefficient C(dim X + n, n) that, roughly, parametrizes n-th order Taylor expansions of sections of L. Precisely, let I be the ideal sheaf defining the diagonal embedding X ↪ X × X, and let p, q : V(I^(n+1)) → X be the restrictions of the two projections X × X → X to the closed subscheme V(I^(n+1)) cut out by I^(n+1). Then the bundle of n-th order principal parts is P^n(L) = p_*(O_{X×X}/I^(n+1) ⊗ q^*L). Then P^0(L) = L, and there is a natural exact sequence of vector bundles 0 → Sym^n(Ω_X) ⊗ L → P^n(L) → P^(n−1)(L) → 0, where Ω_X is the sheaf of differential one-forms on X. See also Linear system of divisors (bundles of principal parts can be used to study the osculating behaviors of a linear system.) Jet (mathematics) (a closely related notion) References Appendix II of Exp II of Algebraic geometry
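For orientation, an illustrative restatement (not part of the original article) of the first-order case may help: for n = 1 the exact sequence above specializes to the classical extension defining the bundle of one-jets of sections of L.

```latex
% Illustrative n = 1 case, assuming the general exact sequence quoted above;
% P^1(L) is the bundle of first-order principal parts (one-jets) of L.
\[
  0 \longrightarrow \Omega_X \otimes L \longrightarrow P^{1}(L) \longrightarrow L \longrightarrow 0
\]
% A local section of P^1(L) records the value of a section of L together with
% its first-order derivatives, so its rank is 1 + \dim X, consistent with the
% binomial-coefficient rank C(dim X + n, n) for n = 1.
```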
Bundle of principal parts
[ "Mathematics" ]
161
[ "Fields of abstract algebra", "Algebraic geometry" ]
56,454,596
https://en.wikipedia.org/wiki/Carl%20Posy
Carl Jeffrey Posy () is an Israeli philosopher. He is a full professor emeritus in the Department of Philosophy at the Hebrew University of Jerusalem in Jerusalem, Israel. Carl Posy received his PhD degree from Yale University in the United States in 1971. Posy's areas of research interest include the philosophy of mathematics, philosophical logic, the history of philosophy (particularly Kant and his predecessors). His publications include: Kant’s Philosophy of Mathematics: Modern Essays, Springer, 1992. Computability: Turing, Gödel, Church, and Beyond (with Jack Copeland and Oron Shagrir), MIT Press, 2013. . Posy is a former Academic Director of the National Library of Israel. References External links Carl Posy home page Carl Posy on Academia.edu Year of birth missing (living people) Living people Yale University alumni 21st-century Israeli philosophers Academic staff of the Hebrew University of Jerusalem Historians of mathematics Historians of philosophy Jewish philosophers Israeli historians Kantian philosophers Librarians at the National Library of Israel Logicians Scholars of modern philosophy Philosophers of mathematics Israeli philosophers
Carl Posy
[ "Mathematics" ]
218
[ "Philosophers of mathematics" ]
56,456,227
https://en.wikipedia.org/wiki/Spindle%20%28vehicle%29
SPINDLE (Sub-glacial Polar Ice Navigation, Descent, and Lake Exploration) is a 2-stage autonomous vehicle system consisting of a robotic ice-penetrating carrier vehicle (cryobot) and an autonomous submersible HAUV (hovering autonomous underwater vehicle). The cryobot is designed to descend through an ice body into a sub-surface ocean and deploy the HAUV submersible to conduct long range reconnaissance, life search, and sample collection. The HAUV submersible will return to, and auto-dock with, the cryobot at the conclusion of the mission for subsequent data uplink and sample return to the surface. The SPINDLE design is targeted at sub-glacial lakes such as Lake Vostok and South Pole Lake in Antarctica. SPINDLE would develop the technologies for a Flagship-class mission to either the shallow lakes of Jupiter's moon Europa, the sub-surface ocean of Ganymede, or the geyser sources on both Europa and Enceladus. The project is funded by NASA and is being designed at Stone Aerospace under the supervision of Principal Investigator Bill Stone. Development In 2011, NASA awarded Stone Aerospace $4 million to fund Phase 2 of project VALKYRIE (Very-Deep Autonomous Laser-Powered Kilowatt-Class Yo-Yoing Robotic Ice Explorer). This project created an autonomous cryobot capable of melting through vast amounts of ice. The power source on the surface uses optic fiber to conduct a high-energy laser beam to produce hot water jets that melt the ice ahead. Some beam energy is converted to electricity via photovoltaic cells to power on-board electronics and jet pumps. Phase 2 of project VALKYRIE consisted of testing a scaled-down version of the cryobot in Matanuska Glacier, Alaska in 2015. Stone Aerospace is now looking at designs integrating a scaled-down version of the HAUV submersible called ARTEMIS ( long, ) with VALKYRIE-type technology to produce SPINDLE. The goal is a full-scale cryobot that can melt its way to an Antarctic subglacial lake —Lake Vostok— to collect samples and then resurface. Unlike its predecessor VALKYRIE, SPINDLE does not use hot water jets, but will beam its laser straight into the ice ahead. The vehicle features a radar integrated with an intelligent algorithm for autonomous scientific sampling and navigation through ice and water. This phase of the project would be viewed as a precursor to possible future missions to an icy world such as Europa, Enceladus, or Ganymede to explore the liquid water oceans thought to be present below their ice, assess their potential habitability and seek biosignatures. If the system is flown to an icy moon, it would likely deposit radio receiver buoys into the ice from its rear as it makes its descent. SPINDLE is designed for a penetration through a terrestrial ice sheet and the HAUV has been designed for exploration up to radius from the cryobot. The cryobot is bi-directional and vertically controllable both within an ice sheet and following breakthrough into a subglacial water cavity or ocean. The vehicle is designed for subsequent return to the surface at a much later date or subsequent season. The required power plant must be very powerful, so the engineers are working on preliminary designs of a compact fission power plant that would be used for actual ocean planet missions. See also References Autonomous underwater vehicles NASA vehicles Astrobiology
Spindle (vehicle)
[ "Astronomy", "Biology" ]
702
[ "Origin of life", "Speculative evolution", "Astrobiology", "Biological hypotheses", "Astronomical sub-disciplines" ]
56,456,637
https://en.wikipedia.org/wiki/Dirubidium
Dirubidium is a molecular substance containing two atoms of rubidium found in rubidium vapour. Dirubidium has two active valence electrons. It is studied both in theory and with experiment. The rubidium trimer has also been observed. Synthesis and properties Dirubidium is produced when rubidium vapour is chilled. The enthalpy of formation (ΔfH°) in the gas phase is 113.29 kJ/mol. In practice, an oven heated to 600 to 800 K with a nozzle can squirt out vapour that condenses into dimers. The proportion of Rb2 in rubidium vapour varies with its density, which depends on the temperature. At 200 °C the partial pressure of Rb2 is only 0.4%, at 400 °C it constitutes 1.6% of the pressure, and at 677 °C the dimer has 7.4% of the vapour pressure (13.8% by mass). The rubidium dimer has been formed on the surface of helium nanodroplets when two rubidium atoms combine to yield the dimer: Rb + Rb → Rb2 Rb2 has also been produced in a solid helium matrix under pressure. Ultracold rubidium atoms can be stored in a magneto-optic trap and then photoassociated to form molecules in an excited state, vibrating at a rate so high they barely hang together. In solid matrix traps, Rb2 can combine with the host atoms when excited to form exciplexes, for example Rb2(3Πu)He2 in a solid helium matrix. Ultracold rubidium dimers are being produced in order to observe quantum effects on well-defined molecules. It is possible to produce a set of molecules all rotating on the same axis with the lowest vibrational level. Spectrum Dirubidium has several excited states, and spectral bands occur for transitions between these levels, combined with vibration. It can be studied by its absorption lines, or by laser-induced fluorescence. Laser-induced fluorescence can reveal the lifetimes of excited states. In the absorption spectrum of rubidium vapour, Rb2 has a major effect. Single atoms of rubidium in the vapour cause lines in the spectrum, but the dimer causes wider bands to appear. The most severe absorption, between 640 and 730 nm, makes the vapour almost opaque from 670 to 700 nm, wiping out the far red end of the spectrum. This is the band due to the X→B transition. From 430 to 460 nm there is a shark-fin shaped absorption feature due to X→E transitions. Another shark-fin shaped feature around 475 nm is due to X→D transitions. There is also a small hump with peaks at 601, 603 and 605.5 nm, due to 1→3 triplet transitions connected to the diffuse series. There are a few more small absorption features in the near infrared. There is also a dirubidium cation, Rb2+, with different spectroscopic properties. Bands Molecular constants for excited states The following table has parameters for 85Rb85Rb, the most common isotopologue for the natural element. Related species The other alkali metals also form dimers: dilithium Li2, Na2, K2, and Cs2. The rubidium trimer has also been observed on the surface of helium nanodroplets. The trimer, Rb3, has the shape of an equilateral triangle, a bond length of 5.52 Å and a binding energy of 929 cm−1. References Rubidium Homonuclear diatomic molecules Allotropes
Dirubidium
[ "Physics", "Chemistry" ]
735
[ "Periodic table", "Properties of chemical elements", "Allotropes", "Materials", "Matter" ]
56,456,840
https://en.wikipedia.org/wiki/James%20Spilker
James Julius Spilker Jr. (August 4, 1933 – September 24, 2019) was an American engineer and a consulting professor in the aeronautics and astronautics department at Stanford University. He was one of the principal architects of the Global Positioning System (GPS). He was the co-founder of the space communications company Stanford Telecommunications, and later was the executive chairman of AOSense Inc., Sunnyvale, CA. Education Spilker had a difficult early childhood, battling illnesses and becoming legally blind in one eye when he was young. He was raised by his mother alone. After graduating from high school and working for a while, Spilker began attending the College of Marin, a community college in Kentfield, California, primarily for financial reasons. After receiving scholarship aid from Stanford University and Hewlett-Packard, he was able to transfer to Stanford, which he attended for five years, receiving a B.S. degree in 1955, an M.S. degree in 1956, and a Ph.D. in 1958, all in electrical engineering. He also completed the senior management program at UCLA in 1985. Career From 1958 to 1963, Spilker worked as a research supervisor at Lockheed Research Labs in Palo Alto, California, where he invented an optimal tracking device for spread-spectrum signals and devised technology to communicate with aircraft flying to and from Berlin, Germany during the Russian blockade. In 1963, he became manager of the communications sciences department at Ford Aerospace Corporation, where he led and managed efforts on both satellite communications ground terminals and military communications satellite payloads for the first quasi-stationary communications satellites, developed multiple-access technologies for various satellite communications systems, and became Director of Communications Systems. In 1973, he co-founded Stanford Telecommunications Inc. (referred to as Stanford Telecom) with Marshall Fitzgerald and John Brownie. Stanford Telecom was the first of his three Silicon Valley startup companies, beginning with three people and no venture capital funding. As the company's Executive Chairman, he grew the military satellite communications and GPS company to over 1,300 employees in five states by the time he sold it in 1999. During his leadership of Stanford Telecommunications Inc., Spilker also designed application-specific integrated circuits (ASICs) for error correction, numerically controlled oscillators, and quadrature amplitude modulation. In 1997, Aviation Week and Space Technology ranked Stanford Telecom as the second most competitive aerospace company in the world and among the top 100 fastest-growing companies. From 2001 until his death in 2019, Spilker was a consulting professor at Stanford University in the electrical engineering and aeronautics and astronautics departments. In 2005, Spilker co-founded the Stanford University Research Center for Position, Navigation and Time, which continues today and holds an annual international symposium at Stanford University with invited speakers from around the world. In 2005, Spilker also co-founded AOSense Inc., an atomic physics company specializing in inertial navigation using cold atom interferometry, where he served as executive chairman. He was also co-founder and chairman of Rosum, a high-tech company using digital and analog television signals for indoor positioning services and augmentation of GPS.
Spilker was a member of the Stanford University Engineering advisory board, a member of the University of Southern California (USC) Communication Sciences Institute, a member of the US Defense Science Board GPS Task Force, and the Air Force Space Command GPS Independent Review Team. He was a member of the National Academy of Engineering (NAE) and a Life Fellow of the IEEE for the development of digital satellite communications and navigation systems. Publications Books Digital Communications by Satellite, Prentice-Hall, 1977. 10 printings including 1 paperback. GPS Global Positioning System: Theory and Applications. AIAA, co-editor with Bradford Parkinson, 1996. Author and co-author of 9 chapters in the book. The book won the AIAA Sommerfield Best Book Medal. Position, Navigation, and Timing Technologies in the 21st Century: Integrated Satellite Navigation, Sensor Systems, and Civil Applications. Wiley - IEEE Press, 2019. Editors: Y. Jade Morton, Frank van Diggelen, James Spilker, and Bradford Parkinson. Associate Editors: Sherman Lo, Grace Gao. Major book chapter and technical papers Evolution of Modern Digital Communications Security Technologies, in Science, Technology, and National Security, coauthor with Jim Omura, Paul Baran, Pennsylvania Academy of Sciences, 2002. Spilker wrote over 100 technical papers for various IEEE and ION publications. Honors James Spilker was an elected member of the National Academy of Engineering (1998) and was inducted to the Air Force GPS Hall of Fame (2000) and the Silicon Valley Engineering Hall of Fame (2007). He was a Life Fellow of the IEEE and a Fellow of the Institute of Navigation (ION). As one of the originators of GPS, James Spilker shared in the Goddard Memorial Trophy (2012). He won the Arthur Young Entrepreneur of the Year Award in 1987, the ION Kepler Award (the highest award of the ION) in 1999, the Burka Award in 2002, and the US Air Force Space Command Recognition Award for 9 years of service on GPS Independent Review Team in 2000. In 2015, he received the IEEE Edison Medal for contributions to the technology and implementation of the GPS civilian navigation system. In 2019, James Spilker shared the 2019 Queen Elizabeth Prize for Engineering with three other GPS pioneers (Bradford Parkinson, Hugo Fruehauf, and Richard Schwartz). In 2012, Spilker and his wife, Anna Marie Spilker, were recognized for their contributions by Stanford University, which dedicated the James and Anna Marie Spilker Engineering and Applied Sciences Building in their honor. Personal life James Spilker was married to Anna Marie Spilker, a licensed real estate broker and the founder and president of New Pacific Investments Inc. in the Silicon Valley. He died on September 24, 2019, at the age of 86. References 2019 deaths 21st-century American engineers People associated with the Global Positioning System Stanford University faculty 1933 births Stanford University School of Engineering alumni IEEE Edison Medal recipients
James Spilker
[ "Technology" ]
1,240
[ "Global Positioning System", "People associated with the Global Positioning System" ]
56,457,036
https://en.wikipedia.org/wiki/Sensors%20and%20Materials
Sensors and Materials is a monthly peer-reviewed open access scientific journal covering all aspects of sensor technology, including materials science as applied to sensors. It is published by Myu Scientific Publishing, and the editor-in-chief is Makoto Ishida (Toyohashi University of Technology). The journal was established in 1988 by a group of Japanese academics to promote the publication of research by Asian authors in English. Abstracting and indexing The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2022 impact factor of 1.2. References External links English-language journals Materials science journals Monthly journals Open access journals Academic journals established in 1988
Sensors and Materials
[ "Materials_science", "Engineering" ]
138
[ "Materials science journals", "Materials science" ]
56,457,574
https://en.wikipedia.org/wiki/Frank%20Hekking
Frank Willem Jan Hekking (1964 – 15 May 2017) was a Dutch physicist working at Université Grenoble Alpes in Grenoble, France. He was notable for his contributions to the theory of mesoscopic superconductivity, in particular Andreev reflection and the Josephson effect. He was a member of the Institut Universitaire de France from 2012 until his death. In 1992, Hekking received a PhD degree from Delft University of Technology. After postdoctoral appointments at the University of Karlsruhe, the University of Minnesota, the University of Cambridge, and Ruhr University Bochum, he became a professor at Université Grenoble Alpes. He was attached to the Laboratoire de Physique et de Modélisation des Milieux Condensés of the CNRS, and he served as director of this laboratory on two occasions. References 1964 births 2017 deaths 21st-century Dutch physicists Academic staff of Grenoble Alpes University Delft University of Technology alumni Dutch expatriates in France Condensed matter physicists
Frank Hekking
[ "Physics", "Materials_science" ]
220
[ "Condensed matter physicists", "Condensed matter physics" ]
56,457,803
https://en.wikipedia.org/wiki/ISO%2014006
ISO 14006, Environmental management systems - Guidelines for incorporating ecodesign, is an international standard that specifies guidelines to help organizations establish, document, implement, maintain, and continually improve their ecodesign management as part of an environmental management system. The standard is intended to be used by organizations that have implemented an environmental management system in compliance with ISO 14001, but it can also help to integrate ecodesign into other management systems. The guideline is applicable to any organization regardless of its size or activity. Edition and revision ISO 14006 was developed by ISO/TC 207/SC 1, Environmental management systems, and was first published in July 2011. The second edition was published in January 2020. ISO/TC 207 was established in 1993. Main requirements of the standard ISO 14006 is organized into the following clauses: 1 Scope 2 Normative references 3 Terms and definitions 4 Role of top management in eco-design 5 Guidelines for incorporating ecodesign into an EMS 6 Ecodesign activities in product design and development Notes Related entries ISO 14000 Environmental management system (EMS) Ecodesign ISO standard List of International Organization for Standardization standards (ISO) European Committee for Standardization (CEN) External links ISO 14006-Environmental management systems - Guidelines for the integration of eco-design. ISO / TC 207-Environmental management systems. 14006 Technical specifications
ISO 14006
[ "Technology" ]
284
[ "nan" ]
56,458,866
https://en.wikipedia.org/wiki/Research%20software%20engineering
Research software engineering is the use of software engineering practices, methods and techniques for research software, i.e. software that was made for and is mainly used within research projects. The term was proposed in a research paper in 2010 in response to an empirical survey on tools used for software development in research projects. It started to be used in the United Kingdom in 2012, when a term was needed to define the type of software development required in research. The field focuses on reproducibility, reusability, and accuracy of data analysis and applications created for research. Support Various types of associations and organisations have been created around this role to support the creation of posts in universities and research institutes. In 2014, a Research Software Engineer Association was created in the UK; it attracted 160 members in the first three months and led to the creation of the Society of Research Software Engineering in 2019. Other countries, such as the Netherlands, Germany, and the USA, followed by creating similar communities, and similar efforts are being pursued in Asia, Australia, Canada, New Zealand, the Nordic countries, and Belgium. In January 2021 the International Council of RSE Associations was introduced. The UK counts over 40 universities and institutes with groups that provide access to software expertise for different areas of research. Additionally, the Engineering and Physical Sciences Research Council created a Research Software Engineer fellowship to promote this role and help the creation of RSE groups across the UK, with calls in 2015, 2017, and 2020. The world's first RSE conference took place in the UK in September 2016 and has been repeated annually since (except for a gap in 2020). In 2019, the first national RSE conferences were held in Germany and the Netherlands; the next editions were planned for 2020 and then cancelled. The SORSE (A Series of Online Research Software Events) community was established in 2020 in response to the COVID-19 pandemic and ran its first online event in September 2020. Recognition and Awards The annual Research Software Engineering Conference organised by the Society of Research Software Engineering recognises outstanding contributions to the field of research software engineering through awards presented at the conference. The RSE Society Award was first presented in 2019, at the Fourth Conference of Research Software Engineering held at the University of Birmingham, to recognise outstanding contributions to the research software engineering community over a sustained period of time. In 2022, three community awards were created to recognise contributions to the RSE community over the preceding 12 months: Rising Star, Training & Education, and Impact. From 2023, these were renamed the Claire Wyatt Community Awards, "to recognise the incredible contribution that Claire [Wyatt] made to the Society over the last decade". See also Open Energy Modelling Initiative, relevant here because the bulk of the development occurs in universities References Further reading The Turing Way Research Software Engineering with Python Software Carpentry Good Research Code Handbook External links SORSE — A Series of Online Research Software Events — listing of online events tailored for the COVID-19 era Research Software Engineers: State of the Nation Report 2017 Research Software Alliance (ReSA) (international) Society of Research Software Engineering (UK) US RSE Association Software engineering
Research software engineering
[ "Technology", "Engineering" ]
623
[ "Systems engineering", "Computer engineering", "Software engineering", "Information technology", "Computing stubs" ]
56,459,938
https://en.wikipedia.org/wiki/Sequence-defined%20polymer
Sequence-defined polymer (synonyms: sequence-specific polymer, sequence-ordered polymer) is a uniform macromolecule with an exact chain length and a perfectly defined sequence of monomers. In other words, each monomer unit is at a defined position in the chain; examples include peptides, proteins, and oligonucleotides. Sequence-defined polymers therefore constitute a subclass of the field of sequence-controlled polymers. References Polymers
Sequence-defined polymer
[ "Chemistry", "Materials_science" ]
93
[ "Polymer stubs", "Polymers", "Polymer chemistry", "Organic chemistry stubs" ]
56,459,958
https://en.wikipedia.org/wiki/Zettabyte%20Era
The Zettabyte Era or Zettabyte Zone is a period of human and computer science history that started in the mid-2010s. The precise starting date depends on whether it is defined as when the global IP traffic first exceeded one zettabyte, which happened in 2016, or when the amount of digital data in the world first exceeded a zettabyte, which happened in 2012. A zettabyte is a multiple of the unit byte that measures digital storage, and it is equivalent to 1,000,000,000,000,000,000,000 (1021) bytes. According to Cisco Systems, an American multinational technology conglomerate, the global IP traffic achieved an estimated 1.2 zettabytes in 2016, an average of 96 exabytes (EB) per month. Global IP traffic refers to all digital data that passes over an IP network which includes, but is not limited to, the public Internet. The largest contributing factor to the growth of IP traffic comes from video traffic (including online streaming services like Netflix and YouTube). The Zettabyte Era can also be understood as an age of growth of all forms of digital data that exist in the world which includes the public Internet, but also all other forms of digital data such as stored data from security cameras or voice data from cell-phone calls. Taking into account this second definition of the Zettabyte Era, it was estimated that in 2012 upwards of 1 zettabyte of data existed in the world and that by 2020 there would be more than 40 zettabytes of data in the world at large. The Zettabyte Era translates to difficulties for data centers to keep up with the explosion of data consumption, creation and replication. In 2015, 2% of total global power was taken up by the Internet and all its components, so energy efficiency with regards to data centers has become a central problem in the Zettabyte Era. IDC forecasts that the amount of data generated each year will grow to 175 zettabytes by 2025. It further estimates that a total of 22 zettabytes of digital storage will be shipped across all storage media types between 2018 and 2025, with nearly 59 percent of this capacity being provided by the hard drive industry. The zettabyte A zettabyte is a digital unit of measurement. One zettabyte is equal to one sextillion bytes or 1021 (1,000,000,000,000,000,000,000) bytes, or, one zettabyte is equal to a trillion gigabytes. To put this into perspective, consider that "if each terabyte in a zettabyte were a kilometre, it would be equivalent to 1,300 round trips to the moon and back (768,800 kilometers)". As former Google CEO Eric Schmidt puts it, from the very beginning of humanity to the year 2003, an estimated 5 exabytes of information was created, which corresponds to 0.5% of a zettabyte. In 2013, that amount of information (5 exabytes) took only two days to create, and that pace is continuously growing. Definitions The concept of the Zettabyte Era can be separated into two distinct categories: In terms of IP traffic: This first definition refers to the total amount of data to traverse global IP networks such as the public Internet. In Canada for example, there has been an average growth of 50.4% of data downloaded by residential Internet subscribers from 2011 to 2016. According to this definition, the Zettabyte Era began in 2016 when global IP traffic surpassed one zettabyte, estimated to have reached roughly 1.2 zettabytes. 
In terms of all forms of digital data: In this second definition, the Zettabyte Era refers to the total amount of all the digital data that exists in any form, from digital films to transponders that record highway usage to SMS text messages. According to this definition, the Zettabyte Era began in 2012, when the amount of digital data in the world surpassed one zettabyte. Cisco reportThe Zettabyte Era: Trends and Analysis In 2016, Cisco Systems stated that the Zettabyte Era was now reality when global IP traffic reached an estimated 1.2 zettabytes. Cisco also provided future predictions of global IP traffic in their report The Zettabyte Era: Trends and Analysis. This report uses current and past global IP traffic statistics to forecast future trends. The report predicts trends between 2016 and 2021. Here are some of the predictions for 2021 found in the report: Global IP traffic will triple and is estimated to reach 3.3 ZB per annum. In 2016 video traffic (e.g. Netflix and YouTube) accounted for 73% of total traffic. In 2021 this will increase to 82%. The number of devices connected to IP networks will be more than three times the global population. The amount of time it would take for one person to watch the entirety of video that will traverse global IP networks in one month is 5 million years. PC traffic will be exceeded by smartphone traffic. PC traffic will account for 25% of total IP traffic while smartphone traffic will be 33%. There will be a twofold increase in broadband speeds. Factors that led to the Zettabyte Era There are many factors that brought about the rise of the Zettabyte Era. Increases in video streaming, mobile phone usage, broadband speeds and data center storage are all contributing factors that led to the rise (and continuation) of data consumption, creation and replication. Increased video streaming There is a large, and ever-growing consumption of multimedia, including video streaming, on the Internet that has contributed to the rise of the Zettabyte Era. In 2011 it was estimated that roughly 25%–40% of IP traffic was taken up by video streaming services. Since then, video IP traffic has nearly doubled to an estimated 73% of total IP traffic. Furthermore, Cisco has predicted that this trend will continue into the future, estimating that by 2021, 82% of total IP traffic will come from video traffic. The amount of data used by video streaming services depends on the quality of the video. Thus, Android Central breaks down how much data is used (on a smartphone) with regards to different video resolutions. According to their findings, per hour video between 240p and 320p resolution uses roughly 0.3 GB. Standard video, which is clocked in at a resolution of 480p, uses approximately 0.7 GB per hour. Netflix and YouTube are at the top of the list in terms of the most globally streamed video services online. In 2016, Netflix represented 32.72% of all video streaming IP traffic, while YouTube represented 17.31%. The third spot is taken up by Amazon Prime Video where global data usage comes in at 4.14%. Netflix Currently, Netflix is the largest paid video streaming service in the world, accessible in over 200 countries and with more than 80 million subscribers. Streaming high definition video content through Netflix uses roughly 3 GB of data per hour, while standard definition takes up around 1 GB of data per hour. In North America, during peak bandwidth consumption hours (around 8 PM) Netflix uses about 40% of total network bandwidth. 
The vast amount of data marks an unparalleled period in time and is one of the major contributing factors that has led the world into the Zettabyte Era. YouTube YouTube is another major video streaming (and video uploading) service, whose data consumption rate across both fixed and mobile networks remains quite large. In 2016, the service was responsible for using up about 20% of total Internet traffic and 40% of mobile traffic. As of 2018, 300 hours of YouTube video content is uploaded every minute. Increased wireless and mobile traffic The usage of mobile technologies to access IP networks has resulted in an increase in overall IP traffic in the Zettabyte Era. In 2016, the majority of devices that moved IP traffic and other data streams were hard-wired devices. Since then, wireless and mobile traffic have increased and are predicted to continue to increase rapidly. Cisco predicts that by the year 2021, wired devices will account for 37% of total traffic while the remaining 63% will be accounted for through wireless and mobile devices. Furthermore, smartphone traffic is expected to surpass PC traffic by 2021; PCs are predicted to account for 25% of total traffic, down from 46% in 2016, whereas smartphone traffic is expected to increase from 13% to 33%. According to the Organisation for Economic Co-operation and Development (OECD), mobile broadband penetration rates are ever-growing. Between June 2016 and December 2016 there was an average mobile broadband penetration rate increase of 4.43% of all OECD countries. Poland had the largest increase coming in at 21.55% while Latvia had the lowest penetration rate having declined 5.71%. The OECD calculated that there were 1.27 billion total mobile broadband subscriptions in 2016, 1.14 billion of these subscriptions had both voice and data included in the plan. Increased broadband speeds Broadband is what connects Internet users to the Internet, thus the speed of the broadband connection is directly correlated to IP traffic – the greater the broadband speed, the greater the possibility of more traffic that can traverse IP networks. Cisco estimates that broadband speeds are expected to double by 2021. In 2016, global average fixed broadband reached speeds as high as 27.5 Mbit/s but are expected to reach 53 Mbit/s by 2021. Between the fourth quarter of 2016 and the first quarter of 2017, average fixed broadband speeds globally equated to 7.2 Mbit/s. South Korea was at the top of the list in terms of broadband speeds. In that period broadband speeds increased 9.3%. High-bandwidth applications need significantly higher broadband-speeds. Certain broadband technologies including Fiber-to-the-home (FTTH), high-speed digital subscriber line (DSL) and cable broadband are paving the way for increased broadband speeds. FTTH can offer broadband-speeds that are ten times (or even a hundred times) faster than DSL or cable. Internet service providers in the Zettabyte Era The Zettabyte Era has affected Internet service providers (ISPs) with the growth of data flowing from all directions. Congestion occurs when there is too much data flowing in and the quality of service (QoS) weakens. In both China some ISPs store and handle exabytes of data. The response by certain ISPs is to implement so-called network management practices in an attempt to accommodate the never-ending data-surge of Internet subscribers on their networks. Furthermore, the technologies being implemented by ISPs across their networks are evolving to address the increase in data flow. 
Network management practices have brought about debates relating to net neutrality in terms of fair access to all content on the Internet. According to The European Consumer Organisation, network neutrality can be understood as an aim that "all Internet should be treated equally, without discrimination or interference. When this is the case, users enjoy the freedom to access the content, services, and applications of their choice, using any device they choose". According to the Canadian Radio-television and Telecommunications Commission (CRTC) Telecom Regulatory Policy 2009-657 there are two forms of Internet network management practices in Canada. The first are economic practices such as data caps, the second are technical practices like bandwidth throttling and blocking. According to the CRTC, the technical practices are put in place by ISPs to address and solve congestion issues in their network, however the CRTC states that ISPs are not to employ ITMPs for preferential or unjustly discriminatory reasons. In the United States, however, during the Obama-era administration, under the Federal Communications Commission's (FCC) 15–24 policy, there were three bright-line rules in place to protect net neutrality: no blocking, no throttling, no paid prioritization. On 14 December 2017, the FCC voted 3–2 to remove these rules, allowing ISPs to block, throttle and give fast-lane access to content on their network. In an attempt to aid ISP's in dealing with large data-flow in the Zettabyte Era, in 2008 Cisco unveiled a new router, the Aggregation Services Router (ASR) 9000, which at the time was supposed to be able to offer six times the speed of comparable routers. In one second the ASR 9000 router would, in theory, be able to process and distribute 1.2 million hours of DVD traffic. In 2011, with the coming of the Zettabyte Era, Cisco had continued work on the ASR 9000 in that it would now be able to handle 96 terabytes a second, up significantly from 6.4 terabytes a second the ASR 9000 could handle in 2008. Data centers Energy consumption Data centers attempt to accommodate the ever-growing rate at which data is produced, distributed, and stored. Data centers are large facilities used by enterprises to store immense datasets on servers. In 2014 it was estimated that in the U.S. alone there were roughly 3 million data centers, ranging from small centers located in office buildings to large complexes of their own. Increasingly, data centers are storing more data than end-user devices. By 2020 it is predicted that 61% of total data will be stored via cloud applications (data centers) in contrast to 2010 when 62% of data storage was on end-user devices. An increase in data centers for data storage coincides with an increase in energy consumption by data centers. In 2014, data centers in the U.S. accounted for roughly 1.8% of total electricity consumption which equates to 70 billion kWh. Between 2010 and 2014 an increase of 4% was attributed to electricity consumption by data centers, this upward trend of 4% is predicted to continue through 2014–2020. In 2011, energy consumption from all data centers equated to roughly 1.1% to 1.5% of total global energy consumption. Information and communication technologies, including data centers, are responsible for creating large quantities of emissions. Google's green initiatives The energy used by data centers is not only to power their servers. 
In fact, most data centers use about half of their energy costs on non-computing energy such as cooling and power conversion. Google's data centers have been able to reduce non-computing costs to 12%. Furthermore, as of 2016, Google uses its artificial intelligence unit, DeepMind, to manage the amount of electricity used for cooling their data centers, which results in a cost reduction of roughly 40% after the implementation of DeepMind. Google claims that its data centers use 50% less energy than ordinary data centers. According to Google's Senior Vice President of Technical Infrastructure, Urs Hölzle, Google's data centers (as well as their offices) will have reached 100% renewable energy for their global operations by the end of 2017. Google plans to accomplish this milestone by buying enough wind and solar electricity to account for all the electricity their operations consume globally. The reason for these green-initiatives is to address climate change and Google's carbon footprint. Furthermore, these green-initiatives have become cheaper, with the cost of wind energy lowering by 60% and solar energy coming down 80%. In order to improve a data center's energy efficiency, reduce costs and lower the impact on the environment, Google provides 5 of the best practices for data centers to implement: Measure the Power Usage Effectiveness (PUE), a ratio used by the industry to measure the energy used for non-computing functions, to track a data center's energy use. Using well-designed containment methods, try to stop cold and hot air from mixing. Also, use backing plates for empty spots on the rack and eliminate hot spots. Keep the aisle temperatures cold for energy savings. Use free cooling methods to cool data centers, including a large thermal reservoir or evaporating water. Eliminate as many power conversion steps as possible to lower power distribution losses. The Open Compute Project In 2010, Facebook launched a new data center designed in such a way that allowed it to be 38% more efficient and 24% less expensive to build and run than the average data center. This development led to the formation of the Open Compute Project (OCP) in 2011. The OCP members collaborate in order to build new technological hardware that is more efficient, economical and sustainable in an age where data is ever-growing. The OCP is currently working on several projects, including one specifically focusing on data centers. This project aims to guide the way in which new data centers are built, but also to aid already existing data centers in improving thermal and electrical energy as well as to maximize mechanical performance. The OCP's data center project focuses on five areas: facility power, facility operations, layout and design, facility cooling and facility monitoring and control. References Further reading Big data Cloud storage Data centers Data management Digital technology Mass media technology New media Technology forecasting
Zettabyte Era
[ "Technology" ]
3,503
[ "Information and communications technology", "New media", "Mass media technology", "Data centers", "Data management", "Digital technology", "Data", "Big data", "Multimedia", "Computers" ]
56,461,377
https://en.wikipedia.org/wiki/Piloty%27s%20acid
Piloty's acid is an organic compound with the formula C6H5SO2N(H)OH. A white solid, it is the benzenesulfonyl derivative of hydroxylamine. It is one of the main reagents used to generate nitroxyl (HNO), a highly reactive species that is implicated in a number of chemical and biochemical reactions. See also Angeli's salt References Hydroxylamines Sulfonamides Benzenesulfonates
Piloty's acid
[ "Chemistry" ]
103
[ "Hydroxylamines", "Reducing agents" ]
56,461,877
https://en.wikipedia.org/wiki/Human%20milk%20microbiome
The human milk microbiota, also known as human milk probiotics (HMP), encompasses the microbiota–the community of microorganisms–present within the human mammary glands and breast milk. Contrary to the traditional belief that human breast milk is sterile, advancements in both microbial culture and culture-independent methods have confirmed that human milk harbors diverse communities of bacteria. These communities are distinct in composition from other microbial populations found within the human body which constitute the human microbiome. The microbiota in human milk serves as a potential source of commensal, mutualistic, and potentially probiotic bacteria for the infant gut microbiota. The World Health Organization (WHO) defines probiotics as "living organisms which, when administered in adequate amounts, confer a health benefit on the host." Occurrence Breast milk is a natural source of lactic acid bacteria for the newborn through breastfeeding, and may be considered a symbiotic food. The normal concentration of bacteria in milk from healthy women was about 103 colony-forming units (CFU) per milliliter. The milk's bacterial communities were generally complex. Among the hundreds of operational taxonomic units detected in the milk of every woman, only nine (Streptococcus, Staphylococcus, Serratia, Pseudomonas, Corynebacterium, Ralstonia, Propionibacterium, Sphingomonas, and Nitrobacteraceae) were present in every sample from every woman, but an individual's milk bacterial community was generally stable over time. Human milk is a source of live Staphylococci, Streptococci, lactic acid bacteria, Bifidobacteria, Propionibacteria, Corynebacteria, and closely related Gram-positive bacteria for the infant gut. Composition Breast milk was considered to be free of bacteria until about the early 2000s, when lactic acid bacteria were first described in human milk hygienically collected from healthy women. Several studies have shown that there is a mother-to-infant transfer of bacterial strains belonging, at least, to the genera Lactobacillus, Staphylococcus, Enterococcus, and Bifidobacterium through breastfeeding, thus accounting for the close relationship of bacterial composition of the gut microbiota of breastfed infants with that found in the breast milk of their respective mothers. Research has also found that there are similarities between human milk and infant gut microbial flora, suggesting that dietary exposure, such as human milk probiotics, may have a contribution in supporting infant gut microbiota and immune development. Bacteria commonly isolated in human milk samples include Bifidobacterium, Lactobacillus, Staphylococcus, Streptococcus, Bacteroides, Clostridium, Micrococcus, Enterococcus, and Escherichia. Metagenomic analyses of human milk find it is dominated by Staphylococcus, Pseudomonas, and Edwardsiella. The human milk microbiome likely varies by population and between individual women, however, a study based on a group of U.S. women observed the same nine bacterial taxa in all samples from all of their participants, suggesting a common "core" of the milk microbiome, at least in that population. Bacterial communities of human colostrum have been reported as being more diverse than those found in mature milk. The three strains of Lactobacilli with probiotic properties that were isolated from breast milk were L. fermentum CECT5716, L. gasseri CECT5714, and L. salivarius CECT5713, with L. fermentum being one of the most abundant strains. Early administration of L. 
fermentum CECT5716 in infant formula is claimed to be safe and well tolerated for infants one to six months of age, and safe for long term use. Origin While the origins of the human milk microbiome are not exactly known, several hypotheses for its establishment have been proposed. Bacteria present in human milk may be derived from the surrounding breast skin flora, or the infant's oral cavity microbiota. Retrograde backflow during nursing or suckling may also lead to bacterial establishment in the mammary ducts, supported by the observation that a certain degree of flowback has been shown to occur during nursing using infrared photography. Alternatively, bacteria may be translocated to the mammary duct from the maternal gastrointestinal tract via an entero-mammary pathway, facilitated by dendritic cells. Environmental factors Several factors may influence the composition of human milk probiotics, such as maternal body mass index (BMI), infant sex, birth modality, and mode of breastfeeding. A study done by Soto et al also revealed that Lactobacilli and Bifidobacteria are more commonly found in the human milk of women who did not receive any antibiotics during pregnancy and lactation. Human milk oligosaccharides (HMOs), a primary component of human milk, are prebiotics which have been shown to promote growth of beneficial Bifidobacterium and Bacteroides species. Maternal health Maternal health status is associated with changes in the bacterial composition of milk. Higher maternal BMI and obesity are associated with changes in the levels of Bifidobacterium and Staphylococcus species and overall lower bacterial diversity. Milk of women with celiac disease is observed to have reduced levels of Bacteroides and Bifidobacterium. Women who are HIV-positive show higher bacterial diversity and increased abundances of Lactobacillus in their milk than do non-HIV-positive women. Mastitis has been linked to changes in human milk microbiota at the phylum level, lower microbial diversity, and decreased abundance of obligate anaerobic taxa. Women delivering term and preterm show differences in their milk microbiome composition, with mothers of term-births showing lower abundances of Enterococcus species and higher amounts of Bifidobacterium species in their milk compared to mothers of preterm births. Few studies have been conducted examining the influence of maternal diet on the milk microbiome, but diet is known to influence other aspects of milk composition, such as the lipid profile, which in turn could affect its microbial composition. Variation in the fat and carbohydrate content of the maternal diet may influence the taxonomic composition of the milk microbiome. Both the taxonomic composition and diversity of bacteria present in human milk likely vary by maternal geographic location, however studies with more geographically diverse participants are needed to better understand variation between populations. Maternal perinatal antibiotic use is associated with changes in the prevalence of Lactobacillus, Bifidobacterium, Staphylococcus, and Eubacterium in milk. Social network density of mother-infant dyads was found to be associated with increased bacterial diversity in the milk microbiome of mothers in the Central African Republic. Delivery method Mode of delivery may influence composition of the human milk microbiome. 
Vaginal birth is associated with high taxonomic diversity and high prevalence of Bifidobacterium and Lactobacillus, with the opposite trend seen for birth by caesarean section; however, some studies have observed no relationship between delivery mode and the maternal milk microbiome. Lactation stage The human milk microbiome varies across lactation stage, with higher microbial diversity observed in colostrum than in mature milk. Taxonomic composition of human milk also varies across the lactation period, initially dominated by Weissella, Leuconostoc, Staphylococcus, Streptococcus, and Lactococcus species, and later composed primarily of Veillonella, Prevotella, Leptotrichia, Lactobacillus, Streptococcus, Bifidobacterium, and Enterococcus. Influences on health Breastfeeding is thought to be an important driver of infant gut microbiome establishment. The gut microbiome of breastfed infants is less diverse, contains higher amounts of Bifidobacterium and Lactobacillus species, and fewer potentially pathogenic taxa than the gut microbiome of formula-fed infants. Human milk bacteria may reduce the risk of infection in breastfed infants by competitively excluding harmful bacteria and producing antimicrobial compounds which eliminate pathogenic strains. Certain Lactobacilli and Bifidobacteria, the growth of which is stimulated by HMOs, contribute to healthy metabolic and immune-related functioning in the infant gut. Benefits for the breastfeeding mother Breastfeeding is an essential component of maternal health, providing numerous benefits. It has been associated with a decreased risk of metabolic disease, improved immune function, and delayed menstrual cycles. Lactobacillus fermentum, a type of probiotic bacterium, has been identified as a means of reducing the risk of breast cancer. Research studies showed that L. fermentum could improve mastitis, a common inflammatory disease associated with lactation, by reducing the Streptococcus load, which is believed to be the causal agent and a risk factor for mastitis. Additionally, breastfeeding has been proposed to reduce the risk of metabolic diseases such as diabetes and cardiovascular disease. The lactation process requires a substantial amount of energy expenditure, which can mitigate the risk of these diseases. Lactobacillus fermentum has been shown to facilitate weight loss and reduce fat mass, as well as improve insulin sensitivity, thereby helping to prevent diabetes and obesity. Moreover, hormonal changes during lactation can further improve metabolism and glucose homeostasis, suggesting a reduced risk of metabolic disease. However, it is difficult to determine the exact factors affecting weight change after birth because of various confounding factors such as pre-pregnancy BMI, weight gain during pregnancy, and social support. A meta-analysis of 13 cohort studies found that breastfeeding decreases inflammatory markers, such as C-reactive protein and interleukin-6, which are associated with insulin resistance and type 2 diabetes mellitus (T2DM). On the other hand, breastfeeding can also delay menstrual cycles, reducing the risk of iron-deficiency anemia and related health issues. Prolactin, a hormone produced during lactation, suppresses ovulation, preventing the mother from menstruating. This suppression can continue for up to 6 months postpartum, serving as a natural form of birth control. It is also suggested that in addition to physical benefits, breastfeeding can reduce the risk of postpartum depression.
Breastfeeding mothers report less anxiety, less negative mood, and less stress, as well as increased sleep duration and reduced sleep disturbances when compared to formula-feeding mothers. Studies on post-partum depression demonstrate that breastfeeding may protect mothers from this disorder, and researchers have strived to explain the biological processes that explain this protection. For example, lactation attenuates neuro-endocrine responses to stress, and this may be related to fewer post-partum depressive symptoms. Moreover, early breastfeeding cessation was linked to higher risk of post-partum depression. It is the psychological pressure to exclusively breastfeed that contributes to postpartum depression symptoms in mothers unable to achieve their breastfeeding intentions. In a prospective follow-up for eight weeks postpartum, mothers with breastfeeding problems (including mastitis, nipple pain, need for frequent expressing of milk, or over-supply or under-supply of milk) showed poor mental health. Benefit for infants Breastfed children have a lower incidence of infections than formula-fed children, which could be mediated in part through modulation of the intestinal microflora by breast milk components. Indeed, breast-fed infants seem to develop a gut microflora richer in Lactobacilli and Bifidobacteria with reduced pathogenic bacteria compared with formula-fed infants. Research, by Maldonado et al, found that infants receiving a follow-on formula enriched with L. fermentum demonstrated a reduction in gastrointestinal and respiratory infections, thus the administration of such formula may be useful for the prevention of community-acquired gastrointestinal and upper respiratory infections in infants. Human milk probiotics could also act as pioneering species to increase the colonization of ‘beneficial’ bacteria and support the infant’s immature immune system. It is known that Lactobacilli and Bifidobacteria can suppress the growth of pathogenic microorganisms such as Salmonella typhimurium and Clostridium perfringens by colonization of a child's intestine and competing for nutrients, thus preventing their adhesion. Intestinal colonization by commensal bacteria also plays a vital role in maintaining homeostasis of immune system. These bacteria stimulate the T helper 1 response and counteract the trend towards a T helper 2 response of neonatal immune system, which in turn reducing the incidence of the inflammatory processes such as necrotizing enterocolitis. Children with colic symptoms possibly have an imbalance in the intestinal microbiota – analyses of faecal samples found higher counts of coliform bacteria and lower counts of Lactobacilli in infants with colic symptoms compared with children not suffering from colic. On the other hand, probiotics have been shown to influence intestinal motility and sensory neurons as well as contractile activity of the intestine and to exert anti-inflammatory effects. Evolutionary implications There is some indication of relationships between milk microbiota and other human milk components, including HMOs, maternal cells, and nutrient profiles. Specific bacterial genera have been shown to be associated with variation in levels of milk macronutrients such as lactose, proteins, and fats. HMOs selectively facilitate growth of particular beneficial bacteria, notably Bifidobacterium species. 
Furthermore, as Bifidobacteria genomes are uniquely equipped to metabolize HMOs, which are otherwise indigestible by enzymes of the infant gut, some have suggested a coevolution between HMOs and certain bacteria common in both the milk and infant gastrointestinal microbiomes. Moreover, relative to other mammalian milks such as primate milk, human milk appears to be unique with respect to the complexity and diversity of its oligosaccharide repertoire. Human milk is typified by greater overall HMO diversity and a predominance of oligosaccharides known to promote growth of Bifidobacterium in the infant gut. Milk microbiota are thought to play an essential role in programming the infant immune system, and tend to reduce the risk of adverse infant health outcomes. Differences in milk oligosaccharides between humans and non-human primates could be indicative of variation in pathogen exposure associated with increased sociality and group sizes. Together, these observations may indicate that milk microbial communities have coevolved with their human host, supported by the expectation that microbes which promote host health facilitate their own transmission and proliferation. Comparisons with other mammals Both human and macaque milks contain high abundances of Streptococcus and Lactobacillus bacteria, but differ in their respective relative abundances of these taxa. Bacteria observed to be most common in healthy bovine milk include Ralstonia, Pseudomonas, Sphingomonas, Stenotrophomonas, Psychrobacter, Bradyrhizobium, Corynebacterium, Pelomonas, Staphylococcus, Faecalibacterium, Lachnospiraceae, Propionibacterium, Aeribacillus, Bacteroides, Streptococcus, Anaerococcus, Lactobacillus, Porphyromonas, Comamonas, Fusobacterium, and Enterococcus. See also Human microbiota Breast milk Human milk oligosaccharide References Maternal health Breast milk Microbiomes
Human milk microbiome
[ "Environmental_science" ]
3,350
[ "Microbiomes", "Environmental microbiology" ]
56,462,136
https://en.wikipedia.org/wiki/Zalo
Zalo is a Vietnamese instant messaging multi-platform service developed by VNG Corporation. Zalo is also used in other countries outside of Vietnam, including the United States, Japan, South Korea, Australia, Germany, Myanmar and Singapore. As of late September 2024, Zalo has approximately 77.6 million active monthly users with nearly 1.97 billion messages sent each day. Zalo is currently available in all 63 provinces of Vietnam, including islands and remote areas, and is accessible in more than 23 countries worldwide. History Launch The name Zalo is a blend of Zing (a website managed by VNG) and alo (a standard phrase used to answer a call in Vietnam). Zalo was co-created by Vương Quang Khải, the executive Vice President of VNG. After studying in the United States, he returned to Vietnam in 2007. More recently, he began to work primarily on artificial intelligence technology, developing Kiki, an AI-based virtual assistant. The first software test version of Zalo was released in August 2012, eight months after development began in late 2011. By September 2012, Zalo had released new versions on iOS, Android and the Nokia S40. However, Zalo received little recognition due to various user issues, including having to use Zing ID for login and utilizing web platforms for the mobile application. In December 2012, the official version of Zalo was released to the public. On 8 January 2013, Zalo reached the top spot on the App Store in Vietnam, passing its main competitor, WeChat. In March 2013, Zalo reached the million-user milestone and continued to grow quickly, reaching over 2 million users only two months later and surpassing all international competitors, including Viber and Line. Rise in popularity In March 2014, Zalo reached 10 million users with 120 million messages sent per day, covering 50% of the smartphone market in Vietnam. In February 2016, Zalo grew quickly to 45 million users, gaining around 1 million new users per month. In February 2017, Zalo reached 70 million users, approximately 75% of the population of Vietnam. In mid-2020, Zalo became the top messaging and communication software in Vietnam in terms of both usage and popularity. Since 2021, Zalo has continuously added new features, such as short videos, enhanced security features (end-to-end encryption and auto-deleting messages), and other improvements to the software. In 2021, Zalo launched the Zalo Connect feature to help citizens in COVID-19-affected areas connect with and support each other. Since 2022, Zalo has introduced a series of artificial intelligence features, including eKYC in 2022; text-to-speech, voice-to-text, and Zalo AI Avatar in 2023; and voice dictation and zSticker AI in 2024. Main features Free text messaging with emojis, images, videos and voicemail. Free phone calls (including video calls). Creating and replying to status messages. File sharing. Creating reminders. Adding friends (directly by phone number, by finding people from the phonebook or by scanning QR codes). Mobile payment by credit card via ZaloPay. Zalo is accessible by web browser and by opening an application on supported devices. Records In February 2013, Zalo was regarded as one of the "most creative mobile applications" by the international technology news website Tech in Asia. In April 2013, Zalo won the 2013 Sao Khue Award, an award presented by the Vietnam Software Association (VINASA) to recognize outstanding technological and scientific achievements in Vietnam.
In January 2019, at an event hosted by the Ministry of Information and Communications, former Vietnamese Prime Minister Nguyễn Xuân Phúc asserted the necessity of Zalo in the lives of Vietnamese people, stating "Where there are Vietnamese people, there is Zalo". In 2021, the Zalo Connect feature was recognized as a "Creative Vietnamese Technological Solution" at Tech Summit 2021, hosted by VnExpress. In March 2022, Global Brand Magazine awarded Zalo the Global Brand Award, rating it as the most used messaging app in Vietnam, above WeChat and WhatsApp. According to research by Decision Lab in the last quarter of 2021, up to 48% of respondents used Zalo to communicate with friends and family, while the corresponding figures for Facebook and Messenger were 27% and 20% respectively. Controversies Operating without a license In July 2019, the Ho Chi Minh City Department of Information and Communications issued a document demanding that the parties responsible for registering and managing domain names revoke Zalo.vn and Zalo.me, owned by VNG Corporation, because they were operating social networks without permission. Previously, in 2018, Zalo had been found operating social networks without a license, which resulted in a penalty. However, the Department allowed the domains to remain active, giving VNG time to complete licensing documents. On 11 November 2019, the Zalo.vn domain was suspended for 45 days to allow authorities to investigate potential violations, despite VNG submitting an application for license renewal. According to sources, the application did not meet the criteria for approval, resulting in a temporary domain suspension while Zalo completed the filing process. On 24 November 2019, Zalo was officially granted a social network license. Exposing private user information To use the Find nearby feature, users must enable location mode, allowing Zalo to scan for nearby users within a radius of several kilometers. However, the mobile app has this feature enabled by default, resulting in a situation where anyone can have their location exposed when location services are turned on. In May 2022, mobile versions of Zalo received end-to-end encryption to protect private conversations, while the Zalo PC and web versions were updated to enhance user information security. Paywalls On 1 August 2022, Zalo began charging users, restricting access to certain features for those with free accounts. Immediately afterwards, Zalo saw an increase in 1-star ratings on the App Store and Google Play, along with boycotts and multiple threats from users to delete the application. See also VNG Corporation References Instant messaging iOS software Android (operating system) software Windows software Mobile applications 2012 software Vietnamese social networking websites Mobile instant messaging clients Free VoIP software Social networking services
Zalo
[ "Technology" ]
1,292
[ "Instant messaging" ]
56,462,811
https://en.wikipedia.org/wiki/Rubidium%20azide
Rubidium azide is an inorganic compound with the formula RbN3. It is the rubidium salt of hydrazoic acid, HN3. Like most azides, it is explosive. Preparation Rubidium azide can be created by the reaction between rubidium sulfate and barium azide, which results in the formation of easily separated, insoluble barium sulfate: Rb2SO4 + Ba(N3)2 → 2 RbN3 + BaSO4 In at least one study, rubidium azide was produced by the reaction between butyl nitrite, hydrazine monohydrate, and rubidium hydroxide in the presence of ethanol. This reaction scheme is typically used to synthesize potassium azide from caustic potash. Uses Rubidium azide has been investigated for possible use in alkali vapor cells, which are components of atomic clocks, atomic magnetometers and atomic gyroscopes. The azide is a desirable starting material because it decomposes into rubidium metal and nitrogen gas when exposed to UV light. According to one publication: "Among the different techniques used to fill microfabricated alkali vapor cells, UV decomposition of rubidium azide (RbN3) into metallic Rb and nitrogen in coated cells is a very promising approach for low-cost wafer-level fabrication." Structure At room temperature, rubidium azide has the same structure as potassium hydrogen fluoride: a distorted caesium chloride structure. At 315 °C and 1 atm, rubidium azide will transition to the normal caesium chloride structure. The II/I transition temperature of rubidium azide is within 2 °C of its melting point. Rubidium azide has a high-pressure structural transition, which occurs at about 4.8 kilobars of pressure at 0 °C. The boundary of the II/III transition can be described by a relationship between the pressure in kilobars and the temperature in degrees Celsius. Reactions As with all azides, it will decompose and release nitrogen gas when heated or severely shocked: 2 RbN3 → 2 Rb + 3 N2 An electrical discharge through rubidium azide in nitrogen gas will produce rubidium nitride. Hazards At 4.1 kilobars of pressure and about 460 °C, rubidium azide will explosively decompose. Under normal circumstances, it explodes at 395 °C. It also decomposes upon exposure to ultraviolet light. Rubidium azide is very sensitive to mechanical shock, with an impact sensitivity comparable to that of TNT. Like all azides, rubidium azide is toxic. References Azides Rubidium compounds Explosive chemicals
Rubidium azide
[ "Chemistry" ]
503
[ "Explosive chemicals", "Azides" ]
56,463,048
https://en.wikipedia.org/wiki/Algorithms%20of%20Oppression
Algorithms of Oppression: How Search Engines Reinforce Racism is a 2018 book by Safiya Umoja Noble in the fields of information science, machine learning, and human-computer interaction. Background Noble earned an undergraduate degree in sociology from California State University, Fresno in the 1990s, then worked in advertising and marketing for fifteen years before going to the University of Illinois Urbana-Champaign for a Master of Library and Information Science degree in the early 2000s. The book's first inspiration came in 2011, when Noble Googled the phrase "black girls" and saw results for pornography on the first page. Noble's doctoral thesis, completed in 2012, was titled "Searching for Black girls: Old traditions in new media." At this time, Noble thought of the title "Algorithms of Oppression" for the eventual book. By this time, changes to Google's algorithm had changed the most common results for a search of "black girls," though the underlying biases remain influential. Noble became an assistant professor at University of California, Los Angeles in 2014. In 2017, she published an article on racist and sexist bias in search engines in The Chronicle of Higher Education. The book was published on February 20, 2018. Overview Algorithms of Oppression is a text based on over six years of academic research on Google search algorithms, examining search results from 2009 to 2015. The book addresses the relationship between search engines and discriminatory biases. Noble argues that search algorithms are racist and perpetuate societal problems because they reflect the negative biases that exist in society and the people who create them. Noble dismantles the idea that search engines are inherently neutral by explaining how algorithms in search engines privilege whiteness by depicting positive cues when key words like "white" are searched as opposed to "asian," "hispanic," or "Black." Her main example centers on the search results for "Black girls" versus "white girls" and the biases depicted in the results. These algorithms can then have negative biases against women of color and other marginalized populations, while also affecting Internet users in general by leading to "racial and gender profiling, misrepresentation, and even economic redlining." The book argues that algorithms perpetuate oppression and discriminate against People of Color, specifically women of color. Noble takes a Black intersectional feminist approach to her work in studying how Google's algorithms affect people differently by race and gender. Intersectional feminism takes into account the diverse experiences of women of different races and sexualities when discussing their oppression in society, and how their distinct backgrounds affect their struggles. Additionally, Noble's argument addresses how racism infiltrates the Google algorithm itself, something that is true throughout many coding systems, including facial recognition and medical care programs. While many new technological systems promote themselves as progressive and unbiased, Noble argues against this point, saying that many technologies, including Google's algorithm, "reflect and reproduce existing inequities." Chapter Summaries Chapter 1 In Chapter 1 of Algorithms of Oppression, Safiya Noble explores how Google search's auto-suggestion feature is demoralizing. On September 18, 2011, a mother googled "black girls" attempting to find fun activities to show her stepdaughter and nieces.
To her surprise, the results encompassed websites and images of porn. This result encapsulates the data failures specific to people of color and women, which Noble terms algorithmic oppression. Noble also adds that as a society we must have a feminist lens, with racial awareness, to understand the “problematic positions about the benign instrumentality of technologies.” Noble also discusses how Google can remove the human curation from the first page of results to eliminate any potential racial slurs or inappropriate imaging. Another example discussed in this text is a public dispute of the results that were returned when “Jew” was searched on Google. The results included a number of anti-Semitic pages and Google accepted little responsibility for the way it provided these identities. Google instead encouraged people to use “Jews” or “Jewish people” and claimed that the actions of White supremacist groups were out of Google's control. Unless pages are unlawful, Google will allow its algorithm to continue to act without removing pages. Noble reflects on AdWords, Google's advertising tool, and how it can add to the biases on Google. AdWords allows anyone to advertise on Google's search pages and is highly customizable. First, Google ranks ads on relevance and then displays the ads on pages which it believes are relevant to the search query taking place. An advertiser can also set a maximum amount of money per day to spend on advertising. The more an advertiser spends, the higher the probability that their ad will appear closer to the top. Therefore, if an advertiser is passionate about a topic, even a controversial one, their ad may be the first to appear in a Google search. Chapter 2 In Chapter 2 of Algorithms of Oppression, Noble explains that Google has exacerbated racism and how the company continues to deny responsibility for it. Google puts the blame on those who created the content as well as those who actively seek this information. Google's algorithm has maintained social inequalities and stereotypes for Black, Latina, and Asian women, largely because of Google's design and infrastructure, which normalize whiteness and men. She explains that the Google algorithm categorizes information in ways that exacerbate stereotypes while also encouraging white hegemonic norms. Noble found that after searching for black girls, the first search results were common stereotypes of black girls, or the categories that Google created based on its own idea of a black girl. Google hides behind its algorithm, which has been shown to perpetuate inequalities. Chapter 3 In Chapter 3 of Algorithms of Oppression, Safiya Noble discusses how Google's search engine combines multiple sources to create threatening narratives about minorities. She explains a case study where she searched “black on white crimes” on Google. Noble highlights that the results of the search pointed to conservative sources that skewed the information. These sources displayed racist and anti-black information from white supremacist sources. Ultimately, she believes this readily-available, false information fueled the actions of white supremacist Dylann Roof, who committed a massacre at a historically Black church in Charleston, South Carolina. Chapter 4 In Chapter 4 of Algorithms of Oppression, Noble furthers her argument by discussing the way in which Google has oppressive control over identity. This chapter highlights multiple examples of women being shamed due to their activity in the porn industry, regardless of whether it was consensual.
She critiques the internet's ability to influence one's future due to its permanent nature and compares U.S. privacy laws to those of the European Union, which provide citizens with “the right to forget or be forgotten.” When people use search engines such as Google, these breaches of privacy disproportionately affect women and people of color. Google claims that it safeguards our data in order to protect us from losing our information, but fails to address what happens when users want their data to be deleted. Chapter 5 In Chapter 5 of Algorithms of Oppression, Noble moves the discussion away from Google and onto other information sources deemed credible and neutral. Noble says that prominent libraries, including the Library of Congress, present whiteness, heteronormativity, patriarchy and other societal standards as correct, and alternatives as problematic. She explains this problem by discussing a case between Dartmouth College and the Library of Congress where "student-led organization the Coalition for Immigration Reform, Equality (CoFired) and DREAMers" engaged in a two-year battle to change the Library's terminology from 'illegal aliens' to 'noncitizen' or 'unauthorised immigrants.' Noble later discusses the problems that ensue from misrepresentation and classification, which allows her to emphasize the importance of contextualisation. Noble argues that it is not just Google, but all digital search engines, that reinforce societal structures and discriminatory biases, and in doing so she points out just how interconnected technology and society are. Chapter 6 In Chapter 6 of Algorithms of Oppression, Safiya Noble discusses possible solutions for the problem of algorithmic bias. She first argues that public policies enacted by local and federal governments will reduce Google's “information monopoly” and regulate the ways in which search engines filter their results. She insists that governments and corporations bear the most responsibility to reform the systemic issues leading to algorithmic bias. Simultaneously, Noble condemns the common neoliberal argument that algorithmic biases will disappear if more women and racial minorities enter the industry as software engineers. She calls this argument “complacent” because it places responsibility on individuals, who have less power than media companies, and indulges a mindset she calls “big-data optimism,” or a failure to recognize that the institutions themselves do not always solve inequalities but sometimes perpetuate them. To illustrate this point, she uses the example of Kandis, a Black hairdresser whose business faces setbacks because the review site Yelp has used biased advertising practices and searching strategies against her. She closes the chapter by calling upon the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) to “regulate decency,” or to limit the amount of racist, homophobic, or prejudiced rhetoric on the Internet. She urges the public to shy away from “colorblind” ideologies toward race because such thinking has historically erased the struggles faced by racial minorities. Lastly, she points out that big-data optimism leaves out discussion about the harms that big data can disproportionately enact upon minority communities. Conclusion In Algorithms of Oppression, Safiya Noble explores the social and political implications of the results from our Google searches and our search patterns online. Noble challenges the idea of the internet being a fully democratic or post-racial environment.
Each chapter examines different layers of the algorithmic biases formed by search engines. By outlining crucial points and theories throughout the book, Noble ensures that Algorithms of Oppression is not limited to academic readers. This allows her writing to reach a wider and more inclusive audience. Critical reception Critical reception for Algorithms of Oppression has been largely positive. In the Los Angeles Review of Books, Emily Drabinski writes, "What emerges from these pages is the sense that Google’s algorithms of oppression comprise just one of the hidden infrastructures that govern our daily lives, and that the others are likely just as hard-coded with white supremacy and misogyny as the one that Noble explores." In PopMatters, Hans Rollman writes that Algorithms of Oppression "demonstrate[s] that search engines, and in particular Google, are not simply imperfect machines, but systems designed by humans in ways that replicate the power structures of the western countries where they are built, complete with all the sexism and racism that are built into those structures." In Booklist, reviewer Lesley Williams states, "Noble’s study should prompt some soul-searching about our reliance on commercial search engines and about digital social equity." In early February 2018, Algorithms of Oppression received press attention when the official Twitter account for the Institute of Electrical and Electronics Engineers expressed criticism of the book, saying that the results of a Google search suggested in its blurb did not match Noble's predictions. IEEE's outreach historian, Alexander Magoun, later revealed that he had not read the book, and issued an apology. See also Algorithmic bias Techlash References External links Algorithms of Oppression: How Search Engines Reinforce Racism Books about race and ethnicity Algorithms 2018 non-fiction books English-language non-fiction books Information science Human–computer interaction Machine learning algorithms New York University Press books
Algorithms of Oppression
[ "Mathematics", "Engineering" ]
2,399
[ "Applied mathematics", "Algorithms", "Mathematical logic", "Human–machine interaction", "Human–computer interaction" ]
56,463,085
https://en.wikipedia.org/wiki/Effusive%20limit
An effusive limit, in ultra-low-pressure fluid flow, is the limit at which a gas of a given molecular weight is able to expand into a vacuum, such as a molecular beam line. References Fluid dynamics
Effusive limit
[ "Chemistry", "Engineering" ]
43
[ "Piping", "Chemical engineering", "Fluid dynamics stubs", "Fluid dynamics" ]
56,463,402
https://en.wikipedia.org/wiki/For%20All%20Moonkind
For All Moonkind, Inc. is a volunteer international nonprofit organization which is working with the United Nations and the international community to manage the preservation of history and human heritage in outer space. The organization believes that the lunar landing sites and items from space missions are of great value to the public and is pushing the United Nations to create rules that will protect lunar items and secure heritage sites on the Moon and other celestial bodies. Protection is necessary as many nations and companies are planning on returning to the Moon, and it is not difficult to imagine the damage an autonomous vehicle or an errant astronaut—an explorer, colonist or tourist—could do to one of the Moon landing sites, whether intentionally or unintentionally. Formed in 2017, the organization aims to work with space agencies around the world to draw up a protection plan which will be submitted to the UN Committee on the Peaceful Uses of Outer Space. The goal is to present the international community with a proposal prepared by a diverse group of space law experts, preservation law experts, scientists and engineers which takes into consideration all the necessary aspects of law, policy and science. The effort will be modeled on the United Nations Educational, Scientific and Cultural Organization's World Heritage Convention. Simonetta Di Pippo, currently the director of the United Nations Office for Outer Space Affairs, has acknowledged the work of For All Moonkind and confirmed that UNOOSA supports and facilitates international cooperation in the peaceful uses of outer space. In November 2017, the UNOOSA United Arab Emirates High Level Forum 2017 acknowledged the work of For All Moonkind and recommended that the international community should consider proclaiming universal heritage sites in outer space. In January 2018, a draft resolution considered by the UN Committee on the Peaceful Uses of Outer Space Scientific and Technical Subcommittee recommended the creation of "a universal space heritage sites programme ... with specific focus on sites of special relevance on the Moon and other celestial bodies." For All Moonkind is also working directly with private companies to preserve human heritage in outer space. German company PTScientists, which is planning to send a rover to revisit the Apollo 17 landing site, was the first private company to make a public pledge of support for For All Moonkind. In February 2018, For All Moonkind was named a Top Ten Innovator in Space in 2018 "for galvanizing agencies to preserve Moon artifacts." The honor was repeated in 2019 when the organization was recognized for its innovative "campaign to create an international agreement to preserve human artifacts in space." In May 2018, the organization announced that it was teaming up with TODAQ Financial to map heritage sites on the Moon using blockchain. In December 2018, the United Nations General Assembly granted For All Moonkind observer status, on a provisional basis, for a period of three years, pending the status of its application for consultative status with the United Nations Economic and Social Council. In spring 2019, For All Moonkind worked closely with the office of Gary Peters to develop the One Small Step Act, legislation designed to permanently protect the Apollo landing sites from intentional and unintentional disturbances by codifying existing NASA preservation recommendations.
The bipartisan bill, which was cosponsored by Senator Ted Cruz, was passed unanimously by the United States Senate on 18 July 2019. In the United States House of Representatives, it was cosponsored by Representatives Eddie Bernice Johnson, Brian Babin, Kendra Horn, Frank Lucas, Lizzie Fletcher and Brian Fitzpatrick. It was passed by the United States House of Representatives in December 2020 and became law on 31 December 2020. In October 2020, the United States and seven other countries signed the Artemis Accords. Section 9 of the Accords specifically includes the agreement to preserve outer space heritage, which the signatories consider to comprise historically significant human or robotic landing sites, artifacts, spacecraft, and other evidence of activity, and to contribute to multinational efforts to develop practices and rules to do so. This is the first time the protection of human heritage in space has ever been referenced in a multilateral agreement. As of 30 October, a total of 13 nations have signed the Accords. In March 2021, the organization revealed the first-of-its-kind digital registry of all the historic landing sites on the Moon. The For All Moonkind Moon Registry is free to all. Astronaut Harrison Schmitt, the second-to-last human on the Moon, called the registry a "worthy cause", while fellow astronaut and moonwalker Charles Duke said it is a "spectacular resource". Neil Armstrong biographer James Hansen calls it "an all-access pass to the history of human activity on the Moon." In March 2023, the organization formed the Institute on Space Law and Ethics, a new nonprofit organization that "will go beyond advocating for protecting off-world heritage sites and contemplate the ethics around some activities in space that are not fully covered in existing international law." While space ethics is a discipline that discusses the moral and ethical implications of space exploration, the Institute on Space Law and Ethics will look to address current issues in space exploration. Human heritage in outer space Space heritage has been defined as heritage related to the process of carrying out science in space; heritage related to crewed space flight/exploration; and human cultural heritage that remains off the surface of planet Earth. The field of space archaeology is the research-based study of all the various human-made items in outer space. Human heritage in outer space includes Tranquility Base (Apollo 11's lunar landing site) and the robotic and crewed sites that preceded and followed Apollo 11. This also comprises all the Luna programme vehicles, including the Luna 2 (first object) and Luna 9 (first soft-landing) missions, the Surveyor program and the Yutu rover. Human heritage in outer space also includes satellites like Vanguard 1 and Asterix-1 which, though nonoperational, remain in orbit. History The organization was founded by Tim and Michelle Hanlon in 2017. In February 2018, For All Moonkind was named a Top Ten Innovator in Space in 2018 "for galvanizing agencies to preserve Moon artifacts." The honor was repeated in 2019 when the organization was recognized for its innovative "campaign to create an international agreement to preserve human artifacts in space." In May 2018, the organization announced that it was teaming up with TODAQ Financial to map heritage sites on the Moon using blockchain.
In December 2018, the United Nations General Assembly granted For All Moonkind observer status, on a provisional basis, for a period of three years, pending the status of its application for consultative status with the United Nations Economic and Social Council. In spring 2019, For All Moonkind worked closely with the office of Gary Peters to develop the One Small Step Act, legislation designed to permanently protect the Apollo landing sites from intentional and unintentional disturbances by codifying existing NASA preservation recommendations. The bipartisan bill, which was cosponsored by Senator Ted Cruz, was passed unanimously by the United States Senate on 18 July 2019. It passed the House in December 2020 and became law on 31 December 2020. Leadership and advisory councils For All Moonkind is an entirely volunteer endeavor with a Leadership Board and three Advisory Councils. The team includes space lawyers and policymakers, scientists and technical experts – including space archaeologists – and communications professionals from around the world. Noteworthy members include: Astronaut Col. Mike Mullane, USAF, Retired Astronaut Col. Robert C. Springer, USMC, Retired American space historian Robert Pearlman James R. Hansen, author of First Man: The Life of Neil A. Armstrong, the official biography of Neil Armstrong Space entrepreneur Rick Tumlinson See also Space archaeology References External links "Moon Registry" Catalogs Human Heritage Left Behind on Lunar Surface collectSPACE Moon Registry: Cataloging the Past ... and Future of Lunar Exploration Inside Outer Space President Signs Law Protecting Lunar Heritage Sites SpacePolicyOnline.com Protecting Human Heritage in Outer Space with Michelle L.D. Hanlon Clayming Space How Star Trek's Prime Directive is Influencing Real-Time Space Law SyFyWire Pushing the Outer Limits of Preservation with Michelle Hanlon PreserveCast Fighting to Preserve Human History on the Moon Supercluster We Need that Boot Print. Inside the Fight to Save the Moon's Historic Sites Before it's Too Late Time The Nation Celebrates the 50th Anniversary of Apollo 11 The Wall Street Journal European Space Agency Chief Urges Humanity to Protect Apollo 11, Lunokhod 1 Landing Sites Gizmodo Life After Launch: Inside the Massive Effort to Preserve NASA's Space Artifacts Digital Trends Apollo 11 Site Should be Granted Heritage Status The Guardian What is the Apollo 11 Landing Site Like Now? The Atlantic Preserving Apollo's Historic Landing Sites on the Moon WJCT Preserving Neil Armstrong's Footprints on the Moon is an Easy Decision Slate One Giant Leap for Preservation: Kent Wins Landmark Status for Boeing's Moon Buggies GeekWire America's Greatest Space Landmarks Could be at Risk Due to a Lack of Space Law Cheddar Should Neil Armstrong's Bootprints Be on the Moon Forever? The New York Times The Moon now has Hundreds of Artefacts - Should They be Protected? The Straits Times Apollo 11 Brought a Message of Peace to the Moon Snopes One Small Step: What Will the Moon Look Like in 50 Years?
CNET Apollo Astronauts Left Trash, Mementos and Experiments on the Moon Science News Historic Preservation Taken Out of This World The Commercial Dispatch Apollo 11 50th Anniversary: Who Gets to Own Moon Landing Memorabilia Vox How a Park on the Moon Could Lead to More Consensus on Space Exploration Politico Space Act Calls for Protection of Apollo 11 Landing Site Space.com 50 Yrs of Moon Landing: Let's Not Forget, or Forsake, the Lessons of the Past Business Standard Preserving Human Cultural Heritage For All Moonkind Late Night Live How do You Preserve History on the Moon? National Public Radio The Case for Protecting the Apollo Landing Areas as Heritage Sites Discover A World Heritage Site on the Moon? That's Not as Spacey as it Sounds Los Angeles Times Mapping the Moon WMFE-FM We Make Mars More Like Earth? Making New Worlds PTScientists 'Mission to the Moon' to Take Care not to Harm Apollo 17 Landing Site collectSPACE Return to the Moon, BBC Click talks to Michelle Hanlon, For All Moonkind BBC World Service, Click SpaceWatchME Interviews: Michelle L.D. Hanlon of For All Moonkind Space Watch Middle East Broadcast 2994, Michelle Hanlon The Space Show Preserving Historic Sites on the Moon Air & Space Non-profit strives to preserve moon landing sites for posterity The Westside Story A Preservation Group Wants UNESCO-Style Protection for Apollo Moon-landing Sites Fast Company Organizations established in 2017 Space advocacy organizations Historical footprints
For All Moonkind
[ "Astronomy" ]
2,194
[ "Space advocacy organizations", "Astronomy organizations" ]
56,464,414
https://en.wikipedia.org/wiki/Bornyl%20acetate
Bornyl acetate is a chemical compound. Its molecular formula is C12H20O2 and its molecular weight is 196.29 g/mol. It is the acetate ester of borneol. It is used as a food additive, flavouring agent, and odour agent. It is a component of the essential oil from pine needles (from the family Pinaceae) and is primarily responsible for its odour. References Acetate esters Terpenes and terpenoids Food additives
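The stated molar mass can be checked directly from the molecular formula. The short Python sketch below sums standard atomic weights for C12H20O2; the atomic weight values are standard reference figures, not taken from the article.

```python
# Verify the molar mass of bornyl acetate, C12H20O2, from standard atomic weights.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol

def molar_mass(composition):
    """composition: mapping of element symbol to atom count."""
    return sum(ATOMIC_WEIGHTS[element] * count for element, count in composition.items())

bornyl_acetate = {"C": 12, "H": 20, "O": 2}
print(f"{molar_mass(bornyl_acetate):.2f} g/mol")  # prints 196.29 g/mol
```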
Bornyl acetate
[ "Chemistry" ]
103
[ "Organic compounds", "Biomolecules by chemical classification", "Terpenes and terpenoids", "Natural products" ]
56,465,007
https://en.wikipedia.org/wiki/Copper%28II%29%20chlorate
Copper(II) chlorate is a chemical compound of the transition metal copper and the chlorate anion with the formula Cu(ClO3)2. Copper chlorate is an oxidiser. It commonly forms the tetrahydrate, Cu(ClO3)2·4H2O. Production Copper chlorate can be made by combining a hot one molar solution of copper sulfate with barium chlorate, which results in the precipitation of barium sulfate. When the solution is filtered, cooled and evaporated under a vacuum, blue crystals form. CuSO4 + Ba(ClO3)2 → Cu(ClO3)2 + BaSO4(s) Properties In 1902, A. Meusser investigated the solubility of copper chlorate and found that it melted and started decomposing above 73 °C, giving off chlorine. Copper chlorate decomposes when heated, giving off a yellow gas, which contains chlorine, oxygen and chlorine dioxide. A green solid is left that is a basic copper salt. 2 Cu(ClO3)2 → 2 CuO + Cl2 + 3 O2 + 2 ClO2 Sulfur is highly reactive with copper chlorate, and it is important not to cross-contaminate these chemicals, for example when making pyrotechnics. Structure Copper(II) chlorate commonly crystallizes as a tetrahydrate, though a hexahydrate is also known. Tetraaquacopper(II) chlorate, Cu(ClO3)2·4H2O, has an orthorhombic crystal structure. Each copper atom is octahedrally coordinated, surrounded by four oxygen atoms of water, and two oxygen atoms from chlorate groups, which are opposite each other. Water is closer to the copper than chlorate, 1.944 Å compared to 2.396 Å, exhibiting the Jahn-Teller effect. The chlorate groups take the shape of a distorted tetrahedron. In the reported structure, the chlorine–oxygen distances in each chlorate ion are 1.498, 1.488 and 1.468 Å, with the longest being to the oxygen next to copper. The ∠O-Cu-O (angle subtended at copper by oxygen atoms) is 105.2°, 108.3°, and 106.8°. At lower temperatures, the water-molecule and copper–chlorate distances shrink. Use François-Marie Chertier used tetraamminecopper(II) chlorate to colour flames blue in 1843. This material was abbreviated TACC with formula Cu(NH3)4(ClO3)2. TACC explodes on impact. The substance became known as Chertier's copper for use in blue-coloured pyrotechnics. However, its deliquescence causes a problem. Mixtures with other metal salts can also yield violet or lilac colours. It has also been used to colour copper brown. References Chlorates Copper(II) compounds
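As a worked illustration of the thermal decomposition stoichiometry quoted above (2 Cu(ClO3)2 → 2 CuO + Cl2 + 3 O2 + 2 ClO2), the Python sketch below estimates the moles of each gas released per gram of copper(II) chlorate; the molar masses are standard values and complete decomposition is assumed purely for illustration.

```python
# Gas evolved per gram of Cu(ClO3)2, assuming the complete decomposition
# 2 Cu(ClO3)2 -> 2 CuO + Cl2 + 3 O2 + 2 ClO2
M_CU, M_CL, M_O = 63.55, 35.45, 16.00       # g/mol, standard atomic weights
M_CUCLO3_2 = M_CU + 2 * (M_CL + 3 * M_O)    # ~230.45 g/mol

def gases_per_gram(mass_g=1.0):
    """Return moles of Cl2, O2 and ClO2 released from mass_g of copper(II) chlorate."""
    n_chlorate = mass_g / M_CUCLO3_2
    return {"Cl2": 0.5 * n_chlorate, "O2": 1.5 * n_chlorate, "ClO2": 1.0 * n_chlorate}

print(gases_per_gram(1.0))  # moles of each gas from 1 g of Cu(ClO3)2
```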
Copper(II) chlorate
[ "Chemistry" ]
646
[ "Chlorates", "Salts" ]
56,466,447
https://en.wikipedia.org/wiki/Lighthill%27s%20eighth%20power%20law
In aeroacoustics, Lighthill's eighth power law states that the acoustic power of the sound created by a turbulent motion, far from the turbulence, is proportional to the eighth power of the characteristic turbulent velocity. It was derived by Sir James Lighthill in 1952 and is used to calculate the total acoustic power of jet noise. The law reads as $P = K \dfrac{\rho\, l^2\, U^8}{c^5}$, where $P$ is the acoustic power in the far field, $K$ is the proportionality constant (or Lighthill's constant), $\rho$ is the uniform fluid density, $c$ is the speed of sound, $l$ is the characteristic length scale of the turbulent source and $U$ is the characteristic velocity scale of the turbulent source. The eighth-power scaling is experimentally verified and found to be accurate for low-speed flows, i.e., when the Mach number is small, $M \ll 1$. The source must also be compact for the law to apply. References Fluid dynamics Acoustics
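To make the velocity scaling concrete, here is a minimal Python sketch of the law as written above; the value of the constant K and the sample inputs are illustrative placeholders rather than values from the article or from Lighthill's paper.

```python
# Lighthill's eighth power law: P = K * rho * l**2 * U**8 / c**5
def lighthill_acoustic_power(rho, l, u, c, k=1e-4):
    """Far-field acoustic power of a compact, low-Mach-number turbulent source.

    k (Lighthill's constant) and the sample inputs below are placeholders.
    """
    return k * rho * l**2 * u**8 / c**5

# Doubling the characteristic velocity raises the predicted power by 2**8 = 256 times.
p_slow = lighthill_acoustic_power(rho=1.2, l=0.5, u=100.0, c=340.0)
p_fast = lighthill_acoustic_power(rho=1.2, l=0.5, u=200.0, c=340.0)
print(p_fast / p_slow)  # -> 256.0
```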
Lighthill's eighth power law
[ "Physics", "Chemistry", "Engineering" ]
174
[ "Chemical engineering", "Classical mechanics", "Acoustics", "Piping", "Fluid dynamics" ]
61,263,594
https://en.wikipedia.org/wiki/Uji%20%28Being-Time%29
The Japanese Buddhist word uji (有時), usually translated into English as Being-Time, is a key metaphysical idea of the Sōtō Zen founder Dōgen (1200–1253). His 1240 essay titled Uji, which is included as a fascicle in the Shōbōgenzō ("Treasury of the True Dharma Eye") collection, gives several explanations of uji, beginning with, "The so-called "sometimes" (uji) means: time (ji) itself already is none other than being(s) (u); being(s) (u) are all none other than time (ji)." Scholars have interpreted uji "being-time" for over seven centuries. Early interpretations traditionally employed Buddhist terms and concepts, such as impermanence (Pali anicca, Japanese mujō 無常). Modern interpretations of uji are more diverse; for example, authors like Steven Heine and Joan Stambaugh compare Dōgen's concepts of temporality with the existentialist Martin Heidegger's 1927 Being and Time. Terminology Dōgen's writings can be notoriously difficult to understand and translate, frequently owing to his wordplay with Late Middle Japanese terms. Dōgen's Zen neologism uji (有時, "existence-/being-time") is the uncommon on'yomi Sino-Japanese reading of the Chinese word yǒushí (有時, "sometimes; at times", Wenlin 2016), and plays with the more common kun'yomi native Japanese pronunciation of these two kanji characters as arutoki (或る時, "once; on one occasion; at one point; [in the past] once; at one time; once upon a time"). In the multifaceted Japanese writing system, arutoki ("at one time; etc.") was archaically transcribed 有時 in kanbun ("Chinese character writing"), and is now either written 或る時 with -ru る in okurigana indicating a Group II verb stem, or simply あるとき in hiragana. Authors have described Dōgen's uji as an "intentional misreading" of ordinary language and a "deliberate misreading" of arutoki. Dōgen etymologizes the two components of uji (有時) with usage examples from everyday Japanese. The first element u refers to "existence" or "being", and the second ji means "time; a time; times; the time when; at the time when; sometime; for a time". Several of Dōgen's earlier writings used the word arutoki; in a kōan story, for example, it repeatedly means "and then, one day" to signal that an important event is about to happen. Interpretations of uji are plentiful. Dainin Katagiri says that Dōgen used the novel term being-time to illustrate that sentient "beings" and "time" are inseparable. Thus, being represents all beings existing together in the formless realm of timelessness, and time characterizes the existence of independent yet interconnected moments. Gudo Nishijima and Chodo Cross say that u means "existence" and ji means "time," so uji means "existent time," or "existence-time." Since time is always related to existence and existence is always related to momentary time, the past and the future are not existent time; the present moment, the point at which existence and time come together, is the only existent time. The Japanese keyword uji has more meanings than any single English rendering can encompass. Nevertheless, translation equivalents include: Existence/Time Being-Time Being Time Time-Being Just for the Time Being, Just for a While, For the Whole of Time is the Whole of Existence. Existence-Time Existential moment Shōbōgenzō fascicle Dōgen wrote his Uji essay at the beginning of winter in 1240, while he was teaching at the Kōshōhōrin-ji, south of Kyoto. It is one of the major fascicles of the Shōbōgenzō, and "one of the most difficult".
Dōgen's central theme in Uji Being-Time, and an underlying theme in other fascicles such as Busshō (佛性, Buddha Nature), is the inseparability of time and existence in the ever-changing present. The present Shōbōgenzō fascicle (number 20 in the 75-fascicle version) commences with a poem (four two-line stanzas) in which every line begins with uji (有時). The 1004 Jingde Record of the Transmission of the Lamp collection of hagiographies for Chinese Chan and Japanese Zen monks attributes the first stanza to the Sōtō Zen Tang dynasty patriarch Yaoshan Weiyan (745-827). An old Buddha said: For the time being, I stand astride the highest mountain peaks. For the time being, I move on the deepest depths of the ocean floor. For the time being, I'm three heads and eight arms [of an Asura fighting demon]. For the time being, I'm eight feet or sixteen feet [a Buddha-body while seated or standing]. For the time being, I'm a staff or a whisk. For the time being, I'm a pillar or a lantern. For the time being, I'm Mr. Chang or Mr. Li [any Tom, Dick, or Harry]. For the time being, I'm the great earth and heavens above. The translators note that their choice of "for the time being" attempts to encompass Dōgen's wordplay with uji "being time" meaning arutoki "at a certain time; sometimes". Compare these other English translations of the first stanza: Sometimes (uji) standing so high up on the mountain top; Sometimes walking deep down on the bottom of the sea; For the time being stand on top of the highest peak. For the time being proceed along the bottom of the deepest ocean. At a time of being, standing on the summit of the highest peak; At a time of being, walking on the bottom of the deepest ocean. Standing atop a soaring mountain peak is for the time being And plunging down to the floor of the Ocean's abyss is for the time being; Being-time stands on top of the highest peak; Being-time goes to the bottom of the deepest ocean Sometimes standing on top of the highest peak, Sometimes moving along the bottom of the deepest ocean. Dōgen's Uji commentary on the poem begins by explaining that "The 'time being' means time, just as it is, is being, and being is all time," which shows the "unusual significance" he gives to the word uji "being-time". Interpretations Many authors have researched and discussed Dōgen's theories of temporality. In English, there are two books and numerous articles on uji (有時, "being-time; time-being; etc."). According to the traditional interpretation, uji "means time itself is being, and all being is time". Hee-Jin Kim analyzed Dōgen's conception of uji "existence/time" as the way of spiritual freedom, and found that his discourse can be better understood in terms of ascesis rather than vision of Buddha-nature; "vision is not discredited, but penetrated, empowered by ascesis". Steven Heine's 1983 article examines the hermeneutics of temporality in the Shōbōgenzō, that is, how Dōgen critically reinterprets and restates, "even at the risk of grammatical distortion," previous views of Buddha-nature in order to reflect the multidimensional unity of uji "being-time". For example, paraphrasing the venerated Nirvana Sutra, "If you wish to know the Buddha-nature's meaning, you should watch temporal conditions. If the time arrives, the Buddha-nature will manifest itself," Dōgen reinterprets the phrase "if the time arrives" (jisetsu nyakushi 時節若至) to mean "the time already arrived" (jisetsu kishi 時節既至) and comments, "There is no time right now that is not a time that has arrived. ...
There is no Buddha-nature that is not Buddha-nature fully manifested right here-and-now." Heine's 1985 book contrasted the theories of time presented in Dōgen's 1231-1253 Shōbōgenzō and the German existentialist Martin Heidegger's 1927 classic Being and Time (Sein und Zeit). Despite the vast cultural and historical gaps between medieval Japan and modern Germany, there are philosophical parallels. The conventional conceptualization of time is removed from the genuine experience of what Heidegger calls ursprüngliche Zeit ("primordial time", that is, temporalizing temporality), which is similar to what Dōgen calls uji no dōri (有時の道理, "truth of [being]-time"). Masao Abe and Steven Heine's article analyzes the origins of Dōgen's interest in being-time when he was a young monk on Mount Hiei, the headquarters of the Tendai school of Buddhism. According to the 1753 Kenzeiki (建撕記) traditional biography of Dōgen, he became obsessed by doubts about the Tendai concepts of hongaku (本覚, "original awakening") that all human beings are enlightened by nature, and shikaku (始覺, "acquired awakening") that enlightenment can only be achieved through resolve and practice. "Both exoteric and esoteric Buddhism teach the original Dharma-nature and innate self-nature. If that were true, why have the Buddhas of past, present, and future awakened the resolve for and sought enlightenment through ascetic practices?" Dōgen's doubt eventually led him to travel to Song dynasty China to seek a resolution; the doubt was finally dissolved through the enlightenment experience of shinjin-datsuraku (身心脱落, "casting off of body-mind") when he was a disciple of Rujing (1162-1228). Joan Stambaugh, the philosopher and translator of Martin Heidegger's writings including Being and Time, wrote a book on Dōgen's understanding of temporality, Buddhist impermanence, and Buddha-nature. Rather than writing yet another comparative study, Stambaugh chose to produce a "dialogical" encounter between Eastern thinkers and Western philosophers, including Heraclitus, Boethius, Spinoza, Leibniz, Hegel, Schopenhauer, Kierkegaard, Nietzsche, and particularly Heidegger. J. M. E. McTaggart's classic argument that time is unreal differentiated two basic aspects of temporality, the "A-series and B-series": the A-series orders all events as continual transformations in time's passage, in which things are said to exist in the "future", then become "present", and finally enter the "past"; the B-series, by contrast, orders time as a set of relative temporal relationships between "earlier than" and "later than". Dirck Vorenkamp demonstrated that Dōgen's writings contained elements of the "B-theory of time". The Shōbōgenzō describes time's passage without reference to a sentient subject: "You should learn that passage [kyōraku (経歴)] occurs without anything external. For example, spring's passage is necessarily that which passes through spring." Trent Collier contrasts how Dōgen and Shinran (1173-1263), the founder of the Jōdo Shinshū sect of Pure Land Buddhism, understood the role of time in Buddhist enlightenment. These two leaders in Kamakura Buddhism believed in two different forms of spiritual practice with disparate temporal concepts; Dōgen advocated zazen or shikantaza ("just sitting") meditation and Shinran emphasized the recitation of the nembutsu ("repeating the name of Amida") alone. Dōgen's notion of uji unified time and being, and consequently things in the world do not exist in time, but are time.
According to the Uji fascicle, zazen falls outside the common understanding of time as past, present, and future. Dōgen declares that "When even just one person, at one time, sits in zazen, he becomes, imperceptively, one with each and all the myriad things, and permeates completely all time." Everything in reality is to be found in the absolute now of being-time. For Shinran, the central Pure Land awakening or experience is shinjin ("faith; piety; devotion"), the unfolding of Amida's wisdom-compassion in the believer. Shinran teaches that ichinen (一念, "one thought-moment") of shinjin is "time at its ultimate limit," and in the subjective experience of the practitioner, Amida's Primal Vow in the past and the Pure Land of the future are realized simultaneously. There are two ways of interpreting this "ultimate limit". In the first sense, it is the ultimate limit of samsaric existence, deluded and foolish existence stretched to its end; and in the second, "ultimate limit" refers to the absolute brevity of the one thought-moment, "the briefest instant of time, a moment so brief that it cannot be further divided". Rein Raud wrote two articles concerning Dōgen's notion of uji, translated as "being-time" and "existential moment", respectively. Raud's first study compared uji with Nishida Kitarō's interpretation of basho (場所, "place, location") as "the locus of tension, where the contradictory self-identities are acted out and complementary opposites negate each other", and is thus "the 'place' where impermanence happens". Both these Japanese philosophers believed that in order to attain self-realization one must transcend the "ordinary" reality not by rising above it, and thereby separating oneself from it, but by "becoming" it, realizing oneself in it and the totality of the world, including "being-time". His second study reinterprets Dōgen's concept of time as primarily referring to momentary rather than durational existence, and translates uji as "existential moment" in opposition to the usual understanding of time as measurable and divisible. According to Raud, this interpretation enables "more lucid readings" of many key passages in the Shōbōgenzō, such as translating the term kyōraku (経歴, "passage", etc.) as "shifting". In present-day usage, this term is commonly read as Japanese keireki (経歴, "personal history; résumé; career") and Chinese jīnglì (經歷, "go through; undergo; experience"). Scholars have translated Dōgen's kyōraku as "continuity" (Masunaga), "flowing", "stepflow", "passing in a series of moments" (Nishijima and Cross), "passage", "totalistic passage or process" (Heine), and "seriatim passage". One translator says, "These attempts basically hit the mark, but fail to convey the freshness and originality of Dōgen's terminology, which is the verbal equivalent of him waving his arms wildly and screaming at the top of his lungs across the centuries to us: 'Look at my radical new idea about time!'". Compare these two renderings: Being-time has the virtue of seriatim passage; it passes from today to tomorrow, passes from today to yesterday, passes from yesterday to today, passes from today to today, passes from tomorrow to tomorrow. This is because passing seriatim is a virtue of time. Past time and present time do not overlap one another, or pile up in a row. The existential moment has the quality of shifting. It shifts from what we call "today" into "tomorrow," it shifts from "today" into "yesterday," and from "yesterday" into "today" in turn.
It shifts from "today" into "today," it shifts from "tomorrow" into "tomorrow." This is because shifting is the quality of the momentary. The moments of the past and the present do not pile on each other nor do they line up side by side. Dainin Katagiri says Dōgen's uji Being-time means the complete oneness of time and space, "dynamically functioning from moment to moment as illumination that is alive in the individual self". When time, being, self, and illumination come together and work dynamically in one's life, time and being are unified. Furthermore, self is time. The "self arrays itself and forms the entire universe." One should perceive each particular thing in the universe as a moment of time. Neither things nor moments hinder one another. See also Eternalism (philosophy of time) Philosophy of space and time Metaphysics of presence References Footnotes Further reading Dumoulin, Heinrich (2005), Zen Buddhism: A History. Volume 2: Japan, World Wisdom Books. Nelson, Andrew N. and John H. Haig (1997), The New Nelson Japanese-English Character Dictionary, C. E. Tuttle Co. Lecut, Frederic (2009), Master Dōgen's Uji, 8 translations. Nishijima, Gudo and Chodo Cross 1994, 1996, 1997, 1999), Master Dōgen's Shōbōgenzō, 4 vols., Windbell Publications. Nishiyama Kōsen and John Stevens, trs., (1975, 1977, 1983, 1983), Shōbōgenzō (The Eye and Treasury of the True Law), 4 vols., Nakayama Shobō. External links On 'Just for the Time Being, Just for a While, For the Whole of Time is the Whole of Existence' (Uji), Nearman (2007) translation. Uji: The Time-Being by Eihei Dōgen, Welch and Tanahashi (1985) translation. Eihei Dōgen's The Time-Being (Uji), Reiho Masunaga translation. Uji (Existence-Time), Seijun Ishii, Sotozen-Net. For the Time-Being: Buddhism, Dōgen, and Temporality, Anthony Ridenour. Concepts in metaphysics Philosophy of time Soto Zen Zen Buddhist philosophical concepts
Uji (Being-Time)
[ "Physics" ]
3,911
[ "Spacetime", "Philosophy of time", "Physical quantities", "Time" ]
61,263,658
https://en.wikipedia.org/wiki/C21H26N2O2
The molecular formula C21H26N2O2 (molar mass: 338.44 g/mol) may refer to: Coronaridine, or 18-carbomethoxyibogamine Kopsinine Molecular formulas
C21H26N2O2
[ "Physics", "Chemistry" ]
67
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,265,537
https://en.wikipedia.org/wiki/Xenambulacraria
Xenambulacraria is a proposed clade of animals with bilateral symmetry as an embryo, consisting of the Xenacoelomorpha (i.e., Xenoturbella and acoelomorphs) and the Ambulacraria (i.e., echinoderms and hemichordates). If confirmed, the clade would either be the sister group to the chordates (if deuterostomes are monophyletic) or the sister group to all the other bilaterians, grouped together in Centroneuralia (with deuterostomes being paraphyletic). Although the validity of the clade relies mostly on phylogenomics, molecular genetics studies have proposed pigment cell clusters expressing polyketide synthase (PKS) and sulfotransferase as a synapomorphy of Xenambulacraria. Phylogeny Xenambulacraria has usually been recovered as a clade inside either of two distinct phylogenies. Basal Xenambulacraria This hypothesis assumes a paraphyletic Deuterostomia, with Xenambulacraria at the base of Bilateria. Xenambulacraria inside Deuterostomia This hypothesis assumes a monophyletic Deuterostomia, with Xenambulacraria nested inside it. References Controversial taxa Bilaterian taxa
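Because the cladograms themselves do not survive in this text, here is a rough sketch of the two competing topologies written as Newick strings inside a small Python snippet; the trees are derived only from the verbal description above, omit the internal arrangement of Protostomia and other details, and are illustrative rather than taken from any particular study.

```python
# Sketches of the two competing placements of Xenambulacraria, as Newick strings.
# Derived only from the article's verbal description; illustrative, not from a specific study.
XENAMBULACRARIA_INSIDE_DEUTEROSTOMIA = (
    "(Protostomia,(Chordata,(Ambulacraria,Xenacoelomorpha)Xenambulacraria)Deuterostomia)Bilateria;"
)
BASAL_XENAMBULACRARIA = (
    "((Xenacoelomorpha,Ambulacraria)Xenambulacraria,(Chordata,Protostomia)Centroneuralia)Bilateria;"
)
print(XENAMBULACRARIA_INSIDE_DEUTEROSTOMIA)  # Deuterostomia monophyletic
print(BASAL_XENAMBULACRARIA)                 # Deuterostomia paraphyletic, Centroneuralia sister group
```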
Xenambulacraria
[ "Biology" ]
305
[ "Biological hypotheses", "Animals", "Animal stubs", "Controversial taxa" ]
61,267,616
https://en.wikipedia.org/wiki/Gold%28III%29%20sulfide
Gold(III) sulfide or auric sulfide is an inorganic compound with the formula Au2S3. Auric sulfide has been described as a black and amorphous solid. Only the amorphous phase has been produced, and the only evidence of its existence is based on thermal analysis. Claims Early investigations claimed to prepare auric sulfide by the reaction of lithium tetrachloroaurate with hydrogen sulfide: 2 LiAuCl4 + 3 H2S → Au2S3 + 2 LiCl + 6 HCl Similar preparations via chloroauric acid, auric chloride, or gold(III) sulfate are claimed to proceed in anhydrous solvents, but water is said to cause a redox decomposition into metallic gold in sulfuric acid. Conversely, it is claimed that cyclo-octasulfur reduces gold(III) sulfate to a mixture of gold sulfides and sulfur oxides. Auric sulfide has also been claimed as the product when auric acetate is sonicated with cyclo-octasulfur in decalin. Auric sulfide is claimed to react with nitric acid as well as with sodium cyanide. It is claimed to dissolve in concentrated sodium sulfide solution. See also Gold(I) sulfide References Gold(III) compounds Sesquisulfides Hypothetical chemical compounds
Gold(III) sulfide
[ "Chemistry" ]
254
[ "Theoretical chemistry", "Hypotheses in chemistry", "Hypothetical chemical compounds" ]
61,267,718
https://en.wikipedia.org/wiki/Esing%20Bakery%20incident
The Esing Bakery incident, also known as the Ah Lum affair, was a food contamination scandal in the early history of British Hong Kong. On 15 January 1857, during the Second Opium War, several hundred European residents were poisoned non-lethally by arsenic, found in bread produced by a Chinese-owned store, the Esing Bakery. The proprietor of the bakery, Cheong Ah-lum, was accused of plotting the poisoning but was acquitted in a trial by jury. Nonetheless, Cheong was successfully sued for damages and was banished from the colony. The true responsibility for the incident and its intention—whether it was an individual act of terrorism, commercial sabotage, a war crime orchestrated by the Qing government, or purely accidental—both remain matters of debate. In Britain, the incident became a political issue during the 1857 general election, helping to mobilise support for the war and the incumbent prime minister, Lord Palmerston. In Hong Kong, it sowed panic and insecurity among the local colonists, highlighting the precariousness of imperial rule in the colony. The incident contributed to growing tensions between Hong Kong's European and Chinese residents, as well as within the European community itself. The scale and potential consequences of the poisoning make it an unprecedented event in the history of the British Empire, the colonists believing at the time that its success could have wiped out their community. Background In 1841, in the midst of the First Opium War, Captain Charles Elliot negotiated the cession of Hong Kong by the Qing dynasty of China to the British Empire in the Convention of Chuenpi. The colony's early administrators held high hopes for Hong Kong as a gateway for British influence in China as a whole, which would combine British good government with an influx from China of what were referred to at the time as "intelligent and readily improvable artisans", as well as facilitating the transfer of coolies to the West Indies. However, the colonial government soon found it difficult to govern Hong Kong's rapidly expanding Chinese population, and was also faced with endemic piracy and continued hostility from the Qing government. In 1856, the Governor of Hong Kong, John Bowring, supported by the British prime minister, Lord Palmerston, demanded reparations from the Qing government for the seizure of a Hong Kong Chinese-owned ship, which led to the Second Opium War between Britain and China (1856–1860). At the opening of the war in late 1856, Qing imperial commissioner Ye Mingchen unleashed a campaign of terrorism in Hong Kong by a series of proclamations offering rewards for the deaths of what he called the French and British "rebel barbarians", and ordering Chinese to renounce employment by the "foreign dogs". A committee to organise resistance to the Europeans was established at Xin'an County on the mainland. At the same time, Europeans in Hong Kong became concerned that the turmoil in China caused by the Taiping Rebellion (1850–1864) was producing a surge of Chinese criminals into the colony. Tensions between Chinese and European residents ran high, and in December 1856 and January 1857 the Hong Kong government enacted emergency legislation, imposing a curfew on Hong Kong Chinese and giving the police sweeping powers to arrest and deport Chinese criminals and to resort to lethal force at night-time. Well-off Chinese residents became increasingly disquieted by the escalating police brutality and the level of regulation of Chinese life. 
Course of events On 15 January 1857, between 300 and 500 predominantly European residents of the colony—a large proportion of the European population at the time—who had consumed loaves from the Esing Bakery fell ill with nausea, vomiting, stomach pain, and dizziness. Later testing concluded that the bread had been adulterated with large amounts of arsenic trioxide. The quantity of arsenic involved was high enough to cause the poison to be vomited out before it could kill its victims. There were no deaths immediately attributable to the poisoning, though three deaths that occurred the following year, including that of the wife of Governor Bowring, would be ascribed to its long-term effects. The colony's doctors, led by Surgeon General Aurelius Harland, dispatched messages across the town advising that the bread was poisoned and containing instructions to induce vomiting and consume raw eggs. The proprietor of the bakery, Cheong Ah-lum, left for Macau with his family early in the day. He was immediately suspected of being the perpetrator, and as news of the incident rapidly spread, he was detained there and brought back to Hong Kong the next day. By the end of the day, 52 Chinese men had been rounded up and detained in connection with the incident. Many of the local Europeans, including the Attorney General, Thomas Chisholm Anstey, wished Cheong to be court-martialled—some called for him to be lynched. Governor Bowring insisted that he be tried by jury. On 19 January ten of the men were committed to be tried at the Supreme Court after a preliminary examination. This took place on 21 January. The other detainees were taken to Cross Roads police station and confined in a small cell, which became known as the 'Black Hole of Hong Kong' after the Black Hole of Calcutta. Some were deported several days later, while the rest remained in the Black Hole for nearly three weeks. Supreme Court trial The trial opened on 2 February. The government had difficulty selecting appropriate charges because there was no precedent in English criminal law for dealing with the attempted murder of a whole community. One of the victims of the poisoning was selected, and Cheong and the nine other defendants were charged with "administering poison with intent to kill and murder James Carroll Dempster, Colonial Surgeon". Attorney General Anstey led the prosecution, William Thomas Bridges and John Day the defence. Chief Justice John Walter Hulme, who had himself been poisoned, presided. The arguments at the trial focused more on Cheong's personal character than on the poisoning itself: the defence argued that Cheong was a highly regarded and prosperous member of the local community with little reason to take part in an amateurish poisoning plot, and suggested that Cheong had been framed by his commercial competitors. The prosecution, on the other hand, painted him as an agent of the Qing government, ideally positioned to sabotage the colony. They claimed he was financially desperate and had sold himself out to Chinese officials in return for money. The defence noted that Cheong's own children had shown symptoms of poisoning; Attorney General Anstey argued that they had merely been seasick, and added that even if Cheong were innocent, it was "better to hang the wrong man than confess that British sagacity and activity have failed to discover the real criminals". Hulme retorted that "hanging the wrong man will not further the ends of justice".
Cheong himself called for his own beheading, along with the rest of his family, if he were found guilty, in accordance with Chinese practice. On 6 February, the jury rejected the arguments of the prosecution and returned a 5–1 verdict of 'not guilty'. Banishment of Cheong The verdict triggered a sensation, and despite his acquittal, public opinion among the European residents of Hong Kong remained extremely hostile to Cheong. Governor Bowring and his Executive Council had determined while the trial was still underway that Cheong should be detained indefinitely regardless of its outcome, and he was arrested soon afterwards under emergency legislation on the pretext of being what the authorities called a "suspicious character". William Tarrant, the editor of the Friend of China, sued Cheong for damages. He was awarded $1,010. Before the sentence could be executed, Bridges, now Acting Colonial Secretary, accepted a petition from the Chinese community for Cheong to be allowed to leave Hong Kong peaceably after putting his affairs in order. Cheong was accordingly released and left the colony on 1 August, abandoning his business. Tarrant blamed Bridges publicly for permitting Cheong to escape, but was himself consequently sued for libel by Bridges and forced to pay a fine of £100. Analysis Responsibility Modern scholars have been divided in attributing the responsibility for the incident. The historian George Beer Endacott argued that the poisoning was carried out on the instruction of Qing officials, while Jan Morris depicts Cheong as a lone wolf acting out of personal patriotism. Cheong's own clan record, written in China in 1904 at the command of the imperial court, states that the incident was entirely accidental, the result of negligence in preparing the bread rather than intentional poisoning. Yet another account says that the poisoning was carried out by two foremen at the bakery who fled Hong Kong immediately afterwards, and Cheong was uninvolved. Lowe and McLaughlin, in their 2015 investigation of the incident, classify the plausible hypotheses into three categories: that the poisoning was carried out by Cheong or an employee on orders from Chinese officials, that the poisoning was an attempt by a rival to frame Cheong, and that the poisoning was accidental. Lowe and McLaughlin state that the chemical analyses conducted at the time do not support the theory that the incident was accidental. Cheong's clan record reports that "one day, through carelessness, a worker dropped some 'odd things' into the flour", even though the arsenic was found only in the bread itself, and in massive quantities—not in the flour, yeast, pastry, or in scrapings collected from the table, all of which were tested. If these results are correct, the poison must have been introduced shortly before baking. Moreover, Lowe and McLaughlin argue that, despite its ultimate failure, the incident had certain characteristics of careful strategic planning: the decision to poison European-style bread, a food generally not eaten by Chinese at the time, would have served to single out the intended targets of the plot, while white arsenic (arsenic trioxide) was a fast-acting poison naturally available in China, and so well-suited to the task. In June 1857, the Hong Kong Government Gazette published a confiscated letter written to Chan Kwei-tsih, the head of the resistance committee in Xin'an County, from his brother Tsz-tin, informing him of the incident.
The second-hand report in the missive suggests that the committee was unlikely to have instigated the incident directly. Toxicology Aurelius Harland, the Surgeon General, conducted the initial tests on the bread and other materials recovered from the bakery. He recorded: Portions of the poisoned bread were subsequently sealed and dispatched to Europe, where they were examined by the chemists Frederick Abel and Justus von Liebig, and the Scottish surgeon John Ivor Murray. Murray found the incident to be scientifically interesting because of the low number of deaths that resulted from the ingestion of such a massive quantity of arsenic. Chemical tests enabled him to obtain 62.3 grains of arsenous acid per pound of bread (9 parts per thousand), while Liebig found 64 grains/lb (10 parts per thousand). Liebig theorised that the poison had failed to act because it was vomited out before digestion could take place. Effects and aftermath Reception in Britain News of the incident reached Britain during the 1857 general election, which had been called following a successful parliamentary vote of censure of Lord Palmerston's support for the Second Opium War. Mustering support for Palmerston and his war policy, the London Morning Post decried the poisoning in hyperbolic terms, describing it as a "hideous villainy, [an] unparalleled treachery, of these monsters of China", "defeated ... by its very excess of iniquity"; its perpetrators were "noxious animals ... wild beasts in human shape, without one single redeeming value" and "demons in human shape". Another newspaper supportive of Palmerston, the Globe, published a fabricated letter by Cheong admitting that he "had acted agreeably to the order of the Viceroy [Ye Mingchen]". By the time the news of Cheong's acquittal was published in London on 11 April, the election was all but over, and Palmerston was victorious. In London, the incident came to the attention of the German author Friedrich Engels, who wrote to the New York Herald Tribune on 22 May 1857, saying that the Chinese now "poison the bread of the European community at Hong Kong by wholesale, and with the coolest premeditation". "In short, instead of moralizing on the horrible atrocities of the Chinese", he argued, "as the chivalrous English press does, we had better recognize that this is a war pro aris et focis, a popular war for the maintenance of Chinese nationality, with all its overbearing prejudice, stupidity, learned ignorance and pedantic barbarism if you like, but yet a popular war." Others in England denied that the poisoning had even happened. In the House of Commons, Thomas Perronet Thompson alleged that the incident had been fabricated as part of a campaign of disinformation justifying the Second Opium War. Much of the disbelief centred on Cheong's name, which became an object of sarcasm and humour—bakers in 19th-century Britain often adulterated their dough with potassium alum, or simply 'alum', as a whitener —and Lowe and McLaughlin note that "a baker named Cheong Alum would have been considered funny by itself, but a baker named Cheong Alum accused of adding poison to his own dough seemed too good to be true". An official of the Colonial Office annotated a report on Cheong from Hong Kong with the remark, "Surely a mythical name". Hong Kong Both the scale of the poisoning and its potential consequences make the Esing bakery incident unprecedented in the history of the British Empire, with the colonists believing at the time that its success could have destroyed their community. 
Morris describes the incident as "a dramatic realization of that favourite Victorian chiller, the Yellow Peril", and the affair contributed to the tensions between the European and Chinese communities in Hong Kong. In a state of panic, the colonial government conducted mass arrests and deportations of Chinese residents in the wake of the poisonings. 100 new police officers were hired and a merchant ship was commissioned to patrol the waters surrounding Hong Kong. Governor Bowring wrote to London requesting the dispatch of 5,000 soldiers to Hong Kong. A proclamation was issued reaffirming the curfew on Chinese residents, and Chinese ships were ordered to be kept beyond of Hong Kong, by force if necessary. Chan Tsz-tin described the aftermath in his letter : The Esing Bakery was closed, and the supply of bread to the colonial community was taken over by the English entrepreneur George Duddell, described by the historian Nigel Cameron as "one of the colony's most devious crooks". Duddell's warehouse was attacked in an arson incident on 6 March 1857, indicative of the continuing problems in the colony. In the same month, one of Duddell's employees was reported to have discussed being offered $2,000 to adulterate biscuit dough with a soporific—the truth of this allegation is unknown. Soon after the poisoning, Hong Kong was rocked by the Caldwell affair, a series of scandals and controversies involving Bridges, Tarrant, Anstey, and other members of the administration, similarly focused on race relations in the colony. Cheong himself made a prosperous living in Macau and Vietnam after his departure from Hong Kong, and later became consul for the Qing Empire in Vietnam. He died in 1900. A portion of the poisoned bread, well-preserved by its high arsenic content, was kept in a cabinet of the office of the Chief Justice of the Hong Kong Supreme Court until the 1930s. Notes References Sources (open-access preprint) Political scandals in Hong Kong Murder in Hong Kong British Hong Kong Second Opium War Mass poisoning Arsenic poisoning incidents 1857 in Hong Kong Political scandals in the United Kingdom
Esing Bakery incident
[ "Chemistry", "Environmental_science" ]
3,273
[ "Biology and pharmacology of chemical elements", "Toxicology", "Arsenic poisoning incidents" ]
61,267,793
https://en.wikipedia.org/wiki/Afromodernism
Afromodernism is modernist architecture in Africa that incorporates local building materials such as adobe bricks and thatch as well as modern materials like reinforced concrete, working them along minimalist lines. It emerged in the early 1960s. References African architecture by style 20th-century architectural styles
Afromodernism
[ "Engineering" ]
56
[ "Architecture stubs", "Architecture" ]
61,269,319
https://en.wikipedia.org/wiki/Tidally%20detached%20exomoon
Tidally detached exomoons, also known as orphaned exomoons or ploonets, are hypothetical exoplanets that were formerly exomoons of another planet, before being ejected from their orbits around their parent planets by tidal forces during planetary migration, and becoming planets in their own right. No tidally detached moons have yet been definitively detected, but they are believed to be likely to exist around other stars, and potentially detectable by photometric methods. Researchers at Columbia University have suggested that a disrupting detached exomoon may be causing the unusual fluctuations in brightness exhibited by Tabby's Star. History The term ploonet, a blend of the words planet and moon, was first used in a 2019 paper in the Monthly Notices of the Royal Astronomical Society. It received attention from mainstream media sources, with CNET calling it "charmingly goofy". See also References Exomoons
Tidally detached exomoon
[ "Astronomy" ]
187
[ "Astronomy stubs", "Planetary science stubs" ]
61,269,605
https://en.wikipedia.org/wiki/C20H24FN3O3
{{DISPLAYTITLE:C20H24FN3O3}} The molecular formula C20H24FN3O3 (molar mass: 373.43 g/mol) may refer to: CP-615,003 Ondelopran
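A small illustrative check (not part of the article): the quoted molar mass follows directly from standard rounded atomic weights. The helper function and dictionary below are assumptions for the sketch, not an established tool.
```python
# Illustrative check (not part of the article): recomputing the quoted molar
# mass of C20H24FN3O3 from standard rounded atomic weights.

ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "F": 18.998, "N": 14.007, "O": 15.999}

def molar_mass(composition):
    """composition maps an element symbol to its atom count in the formula."""
    return sum(ATOMIC_WEIGHTS[element] * count for element, count in composition.items())

c20h24fn3o3 = {"C": 20, "H": 24, "F": 1, "N": 3, "O": 3}
print(round(molar_mass(c20h24fn3o3), 2))  # 373.43, matching the quoted g/mol value
```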
C20H24FN3O3
[ "Chemistry" ]
60
[ "Isomerism", "Set index articles on molecular formulas" ]
61,269,614
https://en.wikipedia.org/wiki/C13H13NO4
{{DISPLAYTITLE:C13H13NO4}} The molecular formula C13H13NO4 (molar mass: 247.25 g/mol, exact mass: 247.0845 u) may refer to: CPCCOEt CX614 Molecular formulas
C13H13NO4
[ "Physics", "Chemistry" ]
60
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,269,744
https://en.wikipedia.org/wiki/GrapheneOS
GrapheneOS is an open-source, privacy and security-focused Android operating system that runs on selected Google Pixel devices, including smartphones, tablets and foldables. History The main developer, Daniel Micay, originally worked on CopperheadOS, until a schism over software licensing between the co-founders of Copperhead Limited led to Micay's dismissal from the company in 2018. After the incident, Micay continued working on the Android Hardening project, which was renamed as GrapheneOS and announced in April 2019. In March 2022, two GrapheneOS apps, "Secure Camera" and "Secure PDF Viewer", were released on the Google Play Store. Also in March 2022, GrapheneOS reportedly released Android 12L for Google Pixel devices before Google did, second to ProtonAOSP. In May 2023, Micay announced he would step down as lead developer of GrapheneOS and as a GrapheneOS Foundation director. As of September 2024, the GrapheneOS Foundation's Federal Corporation Information lists Micay as one of its directors. Features Sandboxed Google Play By default Google apps are not installed with GrapheneOS, but users can install a sandboxed version of Google Play Services from the pre-installed "AppStore". The sandboxed Google Play Services allows access to the Google Play Store and apps dependent on it, along with features including push notifications and in-app payments. Around January 2024, Android Auto support was added to GrapheneOS, allowing users to install it via the AppStore. The sandboxed Google Play compatibility layer settings add a new permission menu with 4 toggles for granting the minimal access required for wired Android Auto, wireless Android Auto, audio routing and phone calls. Security & Privacy features GrapheneOS introduces revocable network access and sensors permission toggles for each installed app, and a PIN scrambling option for the lock screen is also included. By default, GrapheneOS randomizes Wi-Fi MAC addresses per connection (to a Wi-Fi network), instead of the Android per-network default. A hardened Chromium-based web browser and WebView implementation, known as Vanadium, is developed by GrapheneOS and included as the default web browser/WebView. Auditor, a hardware-based attestation app developed by GrapheneOS to "provide strong hardware-based verification of the authenticity and integrity of the firmware/software on the device", is also included. Reception In 2019, Georg Pichler of Der Standard, and other news sources, quoted Edward Snowden saying on Twitter, "If I were configuring a smartphone today, I'd use Daniel Micay's GrapheneOS as the base operating system." In discussing why services should not force users to install proprietary apps, Lennart Mühlenmeier of netzpolitik.org suggested GrapheneOS as an alternative to Apple or Google. Svět Mobilně and Webtekno repeated the suggestions that GrapheneOS is a good security- and privacy-oriented replacement for standard Android. In a detailed review of GrapheneOS for Golem.de, Moritz Tremmel and Sebastian Grüner said they were able to use GrapheneOS similarly to other Android systems, while enjoying more freedom from Google, without noticing differences from "additional memory protection, but that's the way it should be." They concluded GrapheneOS cannot change how "Android devices become garbage after three years at the latest", but "it can better secure the devices during their remaining life while protecting privacy." In June 2021, reviews of GrapheneOS, KaiOS, AliOS, and Tizen OS, were published in Cellular News. 
The review of GrapheneOS called it "arguably the best mobile operating system in terms of privacy and security." However, they criticized GrapheneOS for its inconvenience to users, saying "GrapheneOS is completely de-Googled and will stay that way forever—at least according to the developers." They also noticed a "slight performance decrease" and said "it might take two full seconds for an app—even if it’s just the Settings app—to fully load." In March 2022, writing for How-To Geek Joe Fedewa said that Google apps were not included due to concerns over privacy, and GrapheneOS also did not include a default app store. Instead, Fedewa suggested, F-Droid could be used. In 2022, Jonathan Lamont of MobileSyrup reviewed GrapheneOS installed on a Pixel 3, after one week of use. He called GrapheneOS install process "straightforward" and concluded that he liked GrapheneOS overall, but criticized the post-install as "often not a seamless experience like using an unmodified Pixel or an iPhone", attributing his experience to his "over-reliance on Google apps" and the absence of some "smart" features in GrapheneOS default keyboard and camera apps, in comparison to software from Google. In his initial impressions post a week prior, Lamont said that after an easy install there were issues with permissions for Google's Messages app, and difficulty importing contacts; Lamont then concluded, "Anyone looking for a straightforward experience may want to avoid GrapheneOS or other privacy-oriented Android experiences since the privacy gains often come at the expense of convenience and ease of use." In July 2022, Charlie Osborne of ZDNet suggested that individuals who suspect a Pegasus infection use a secondary device with GrapheneOS for secure communication. In January 2023, a Swiss startup company, Apostrophy AG, announced AphyOS, which is a subscription fee-based Android operating system and services "built atop" GrapheneOS. See also Comparison of mobile operating systems List of custom Android distributions Security-focused operating system References and notes External links Android (operating system) software ARM operating systems Computing platforms Custom Android firmware Embedded Linux distributions Linux distributions Linux distributions without systemd Mobile Linux Operating system families Mobile operating systems Software using the Apache license
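As a conceptual aside on the per-connection MAC randomization mentioned in the features section above, the sketch below shows how a randomized, locally administered, unicast MAC address can be generated. This is not GrapheneOS source code; the function name and approach are illustrative only.
```python
# Conceptual sketch only (not GrapheneOS code): generating a fresh, locally
# administered, unicast MAC address, which is the kind of address that
# per-connection MAC randomization hands to each Wi-Fi network join.

import secrets

def random_mac():
    octets = bytearray(secrets.token_bytes(6))
    # Set the locally-administered bit and clear the multicast bit of octet 0.
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{b:02x}" for b in octets)

print(random_mac())  # a different address on every call, e.g. '3a:5e:...'
```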
GrapheneOS
[ "Technology" ]
1,252
[ "Computing platforms" ]
61,269,914
https://en.wikipedia.org/wiki/C16H24N2O6
{{DISPLAYTITLE:C16H24N2O6}} The molecular formula C16H24N2O6 (molar mass: 340.37 g/mol, exact mass: 340.1634 u) may refer to: CPHPC Pirisudanol Molecular formulas
C16H24N2O6
[ "Physics", "Chemistry" ]
64
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,270,124
https://en.wikipedia.org/wiki/Rainfall%20simulator
A rainfall simulator is used in soil science and hydrology to study how soil reacts to rainfall. Natural rainfall is difficult to use in experimentation because its timing and intensity cannot be reliably reproduced. Using simulated rainfall significantly speeds the study of erosion, surface runoff and leaching. The simplest rainfall simulators qualitatively demonstrate what happens to soil during rainfall events. They are useful for explaining how fertilizer may run off (wash away) rather than supply nutrients to crops. History The evolution of the rainfall simulator started in the late 1800s when German scientist Ewald Wollny formally studied erosion. As the study continued into the early 1910s, experimental field plots were designed to capture runoff from natural rainfall. In the 1930s, pioneers of erosion studies tightened control of their experiments by building the first rainfall simulators, ordinary sprinkle cans or pipes with holes. These holes were replaced in the 1960s with full cone nozzles, carefully selected to accurately approximate: The size of a rainfall water drop The velocity of a drop as it strikes the ground The uniformity of drops across a plot, to assure an accurate simulation of rainfall Simulators of the 1960s could simulate only a single rainfall intensity. By the 1980s, solenoid valves could modulate water flow to dynamically vary the intensity of simulated rainfall, much as rainfall intensity naturally varies in storms. As the technology matured in the early 1990s, rainfall simulators were used in the United States as part of the Water Erosion Prediction Project to update the universal soil loss equation. Design considerations Purpose Modern research simulators are typically designed around the tasks they are intended to perform, ranging from simple demonstrations for farmers to the advanced scientific study of erosion, surface runoff, and sediment size. Other scientific studies may include evaluating tillage management, the effects of soil compaction, soil crusting, and infiltration in agricultural soils. In erosion studies, if no crop canopy is present over the soil, the size distribution and terminal velocity of the raindrop must be accurately simulated, as they affect splash erosion. Requirements The main components of a rainfall simulator are the drop generator, a water feed system, and possibly a windshield. Water feed system The water feed system can be either unpressurized or pressurized. Unpressurized systems usually consist of a water tank suspended above a field plot. Gravity moves the water to the plot. Pressurized systems use a pump to move water to the plot. Drop generators Drop generators convert a flow of water to simulated rainfall drops. Two types of drop generators exist. The first type is a gravity-fed unpressurized feed system such as a perforated pipe, hanging yarns, or an array of syringe needles which form drops. The second type is a pressurized feed system connected to a nozzle. Drop generator height is important in many scientific simulators to ensure water droplets approach terminal velocity in the downward direction. The typical height is three meters (ten feet). Pressurized drop generators used in scientific work often have a full cone spray nozzle which is different from most irrigation nozzles. Full cone nozzles are specifically designed to spray a very uniform distribution. Full cone nozzles may be either square or circular. 
Square nozzles are better suited to rectangular plots, while round nozzles are better suited for round plots. Windshields Windshields prevent wind from blowing the water drops away from the plot. A windshield may be a lightweight tarp common in a portable rainfall simulator used for experiments of shorter duration, or it could be a sizable structure in the case of a permanent simulator common in long-term studies. A trade-off exists between heavier windshields, which can typically withstand higher winds, and lighter windshields which are easier to transport. Frame type A rainfall simulator can be distinguished by the type of frame it uses: A box rainfall simulator has a rectangular frame. A Norton ladder simulator is distinguishable by its large triangular frame. A rotating boom simulator is a permanent simulator with similarities to a center pivot irrigation system. An oscillating nozzle simulator is a simulator with a sprinkler that swings to cover the distribution of a large plot. Other considerations Fixed intensity simulators are cheaper to produce; however, variable intensity simulators can accurately simulate the intensity variation of a natural rain typical of most storms. Variable rate flows are generated by either rapidly switching on and off a solenoid valve or by injecting air into the lines. Field-based simulators are useful for experiments which simulate what happens in field conditions, while lab-based simulators offer tighter control of experiments. References Hydrology models Soil erosion
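As a rough illustration of the water feed sizing implied by the design considerations above, the sketch below converts a target rainfall intensity and plot area into the nozzle flow required. The intensity and plot dimensions are assumed example values, not figures from the article.
```python
# Rough sizing sketch (illustrative values, not from the article): the nozzle
# flow needed for a target rainfall intensity, using 1 mm of rain over 1 m^2 = 1 L.

def required_flow_l_per_min(intensity_mm_per_h, plot_area_m2):
    litres_per_hour = intensity_mm_per_h * plot_area_m2
    return litres_per_hour / 60.0

# Example: simulating a 60 mm/h storm over a 1.5 m x 1.5 m plot
print(round(required_flow_l_per_min(60, 1.5 * 1.5), 2))  # 2.25 L/min
```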
Rainfall simulator
[ "Biology", "Environmental_science" ]
933
[ "Hydrology", "Environmental modelling", "Hydrology models", "Biological models" ]
61,271,227
https://en.wikipedia.org/wiki/Scandium%20nitride
Scandium nitride (ScN) is a binary III-V indirect bandgap semiconductor. It is composed of the scandium cation and the nitride anion. It forms crystals that can be grown on tungsten foil through sublimation and recondensation. It has a rock-salt crystal structure with a lattice constant of 0.451 nanometers, an indirect bandgap of 0.9 eV and a direct bandgap of 2 to 2.4 eV. These crystals can be synthesized by dissolving nitrogen gas in indium-scandium melts, magnetron sputtering, MBE, HVPE and other deposition methods. Scandium nitride is also an effective gate for semiconductors on a silicon dioxide (SiO2) or hafnium dioxide (HfO2) substrate. References Scandium compounds Nitrides Rock salt crystal structure
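An illustrative estimate (not a value quoted in the article): combining the rock-salt structure, which has four formula units per cubic unit cell, with the quoted 0.451 nm lattice constant gives a theoretical density of roughly 4.3 g/cm3. A minimal sketch of the arithmetic:
```python
# Illustrative estimate (not quoted in the article): density of ScN from the
# 0.451 nm rock-salt lattice constant, assuming 4 formula units per unit cell.

AVOGADRO = 6.022e23          # formula units per mole
M_SC, M_N = 44.956, 14.007   # g/mol
A_CM = 0.451e-7              # lattice constant in cm (0.451 nm)

mass_per_cell = 4 * (M_SC + M_N) / AVOGADRO   # grams per unit cell
density = mass_per_cell / A_CM**3             # g/cm^3
print(round(density, 2))  # about 4.27 g/cm^3
```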
Scandium nitride
[ "Chemistry" ]
183
[ "Inorganic compounds", "Inorganic compound stubs" ]
61,272,670
https://en.wikipedia.org/wiki/Valery%20Khodemchuk
Valery Ilyich Khodemchuk (; ; 24 March 195126 April 1986) was a Soviet engineer who was the night shift circulating pump operator at the Chernobyl power plant, and the first casualty of the Chernobyl disaster. Biography Valery Khodemchuk was born 24 March 1951 in Kropyvnia, Ivankiv Raion, Kyiv Oblast. He began his career at the Chernobyl Nuclear Power Plant in September 1973. During his first years at Chernobyl, he held the positions of the engineer of boilers, the senior engineer of boilers of the workshop of thermal and underground communications, the operator of the 6th group and the senior operator of the 7th group of the main circulation pump of the 4th unit of the reactor workshop. On the night of 26 April 1986, Khodemchuk was in the northern main circulation pump hall. Shortly before his death, Khodemchuk placed a phone call to another pump operator, Aleksandr Odintsov, and said "I need to recharge the lower feed for [pump number] 22… Okay, come on. So it’s shortened, one sec…" Then, just seconds later, at approximately 1:23:48 a.m. (Moscow Time), two powerful explosions tore through Unit 4, including the main circulation pump halls. Valery Khodemchuk was the first person to die in the Chernobyl disaster; it is thought he was killed instantly by the blast or crushed by large chunks of falling debris. His body was never found, and it is presumed that he is entombed under the remnants of the circulation pumps. A monument to Khodemchuk was built into the side of the Sarcophagus' interior dividing wall, east of the pump hall where he died. In 1998, a cenotaph honoring him was placed at Mitinskoe Cemetery in Moscow, the final resting place of firefighters and power plant workers who died putting out the fires from the Chernobyl disaster. Recognition In 2008, Khodemchuk was posthumously awarded with the 3rd degree Order For Courage by Viktor Yushchenko, the President of Ukraine. Actor Kieran O'Brien portrayed him in the 2019 HBO miniseries Chernobyl. See also Deaths due to the Chernobyl disaster References 1951 births 1980s missing person cases 1986 deaths Deaths from explosion Industrial accident deaths Missing people Missing person cases in Europe People associated with the Chernobyl disaster People declared dead in absentia People from Vyshhorod Raion Recipients of the Order For Courage, 3rd class Soviet engineers
Valery Khodemchuk
[ "Chemistry" ]
534
[ "Deaths from explosion", "Explosions" ]
61,273,185
https://en.wikipedia.org/wiki/International%20Journal%20of%20Hygiene%20and%20Environmental%20Health
The International Journal of Hygiene and Environmental Health is a peer-reviewed scientific journal covering environmental health. It was established by Max von Pettenkofer in 1883 as Archiv für Hygiene, and it was originally focused exclusively on hygiene. It was later published under the title Zentralblatt für Hygiene und Umweltmedizin after 1971, and it obtained its current title in 2000. It is published 8 times per year by Elsevier and the editors-in-chief are Antonia Calafat (National Center for Environmental Health) and Holger Koch (German Social Accident Insurance). The journal's official website lists its 2021 Journal Citation Reports impact factor as 7.401. References External links Environmental health journals Elsevier academic journals Publications established in 1883 English-language journals 8 times per year journals
International Journal of Hygiene and Environmental Health
[ "Environmental_science" ]
163
[ "Environmental science journals", "Environmental health journals" ]
63,414,839
https://en.wikipedia.org/wiki/Ora%20Entin-Wohlman
Ora Entin-Wohlman is an Israeli condensed matter physicist. She is a professor emeritus at Tel Aviv University and at Ben-Gurion University of the Negev. Education and career Entin-Wohlman studied mathematics and physics at the Technion – Israel Institute of Technology, earning a bachelor's degree in both in 1965 and a master's degree in physics in 1967. She completed her Ph.D. at Bar-Ilan University in 1973. After completing her Ph.D., she became a lecturer at Tel Aviv University, eventually becoming a professor in 1986 and retiring as professor emeritus in 2006. In the same year, she took up a professorship at Ben-Gurion University, where she became professor emeritus in 2013. Contributions With Amnon Aharony, Entin-Wohlman is the author of Introduction to Solid State Physics (World Scientific, 2018). Her work with Hamutal Bary-Soroker and Yoseph Imry on persistent currents in resistive conductors has been described as "so simple, yet compelling, that physicists may wonder why no one has thought of it before". Recognition Entin-Wohlman was elected as a Fellow of the American Physical Society in 1993, "for contributions to the theory of granular superconductivity, fractons, strong localization and nonlinear optics in novel materials". She was elected to the American Academy of Arts and Sciences in 2018, as an international honorary member, for her "seminal contributions to a wide range of topics in condensed matter physics theory, including superconductivity, electron and phonon localization, magnetism, nanoscience and spintronics". She is also a foreign member of the Norwegian Academy of Science and Letters. In 2022, she was elected to the Israel Academy of Sciences and Humanities and became an Honorary Fellow of the Institute of Physics. References External links Home page at Tel Aviv University Living people Israeli physicists Israeli women physicists Condensed matter physicists Technion – Israel Institute of Technology alumni Bar-Ilan University alumni Academic staff of Tel Aviv University Academic staff of Ben-Gurion University of the Negev Fellows of the American Academy of Arts and Sciences Fellows of the American Physical Society Members of the Norwegian Academy of Science and Letters Members of the Israel Academy of Sciences and Humanities 1943 births
Ora Entin-Wohlman
[ "Physics", "Materials_science" ]
482
[ "Condensed matter physicists", "Condensed matter physics" ]
63,415,056
https://en.wikipedia.org/wiki/H%C3%A9l%C3%A8ne%20Bouchiat
Hélène Bouchiat (born 1958) is a French condensed matter physicist specializing in mesoscopic physics and nanoscience. She is a director of research in the French National Centre for Scientific Research (CNRS), associated with the Laboratoire de Physique des Solides at Paris-Sud University. Topics in her research include supercurrents, persistent currents, graphene, carbon nanotubes, and bismuth-based topological insulators. Early life and education Bouchiat is the daughter of physicists Marie-Anne Bouchiat and Claude Bouchiat. She was a student at the École normale supérieure. Her 1986 doctoral dissertation, Transition verre de spin : comportement critique et bruit magnétique ("Spin glass transition: critical behaviour and magnetic noise"), was completed at Paris-Sud University. Career With the exception of an 18-month postdoctoral research visit at Bell Labs, Bouchiat has spent her entire career with the CNRS. Honors and awards Bouchiat is a member of the French Academy of Sciences, elected in 2010. The academy also gave her their Anatole and Suzanne Abragam Prize in 1994, and a further prize in 1998. She won the CNRS bronze and silver medals in 1987 and 2007 respectively. Personal life Bouchiat has a brother, Vincent Bouchiat, who is also a physicist. References 1958 births Living people 20th-century French physicists French women physicists Condensed matter physicists Members of the French Academy of Sciences Research directors of the French National Centre for Scientific Research
Hélène Bouchiat
[ "Physics", "Materials_science" ]
307
[ "Condensed matter physicists", "Condensed matter physics" ]
63,415,138
https://en.wikipedia.org/wiki/Berge%20equilibrium
The Berge equilibrium is a game theory solution concept named after the mathematician Claude Berge. It is similar to the standard Nash equilibrium, except that it aims to capture a type of altruism rather than purely non-cooperative play. Whereas a Nash equilibrium is a situation in which each player of a strategic game ensures that they personally will receive the highest payoff given other players' strategies, in a Berge equilibrium every player ensures that all other players will receive the highest payoff possible. Although Berge introduced the intuition for this equilibrium notion in 1957, it was only formally defined by Vladislav Iosifovich Zhukovskii in 1985, and it was not in widespread use until half a century after Berge originally developed it. History The Berge equilibrium was first introduced in Claude Berge's 1957 book Théorie générale des jeux à n personnes. Moussa Larbani and Vladislav Iosifovich Zhukovskii write that the ideas in this book were not widely used in Russia partly due to a harsh review that it received shortly after its translation into Russian in 1961, and they were not used in the English speaking world because the book had only received French and Russian printings. These explanations are echoed by other authors, with Pierre Courtois et al. adding that the impact of the book was likely dampened by its lack of economic examples, as well as by its reliance on tools from graph theory that would have been less familiar to economists of the time. Berge introduced his original equilibrium notion only in intuitive terms, and the first formal definition of the Berge equilibrium was published by Vladislav Iosifovich Zhukovskii in 1985. The topic of Berge equilibria was then studied in detail by Konstantin Semenovich Vaisman in his 1995 PhD dissertation, and Larbani and Zhukovskii document that the tool became more widely used in the mid-2000s as economists became interested in increasingly complex systems in which players might be more inclined to seek globally favourable equilibria and attach value to other players' payoffs. Colman et al. connect interest in the Berge equilibrium to interest in cooperative game theory, the evolution of cooperation, and topics like altruism in evolutionary game theory. Definition Formal definition Consider a normal-form game $\Gamma = \langle N, \{X_i\}_{i \in N}, \{u_i\}_{i \in N} \rangle$, where $N = \{1, \ldots, n\}$ is the set of players, $X_i$ is the (nonempty) strategy set of player $i$, where $i \in N$, and $u_i : X \to \mathbb{R}$ is that player's utility function, with $X = X_1 \times \cdots \times X_n$ the set of strategy profiles. Denote a strategy profile as $x = (x_1, \ldots, x_n) \in X$, and denote an incomplete strategy profile $x_{-i} = (x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n) \in X_{-i}$. A strategy profile $x^* \in X$ is called a Berge equilibrium if, for any player $i \in N$ and any $x_{-i} \in X_{-i}$, the strategy profile $(x_i^*, x_{-i})$ satisfies $u_i(x_i^*, x_{-i}) \le u_i(x^*)$. Informal definition The players in a game are playing a Berge equilibrium if they have chosen a strategy profile such that, if any given player $i$ sticks with their chosen strategy while some of the other players change their strategies, then player $i$'s payoff will not increase. So, every player in a Berge equilibrium guarantees the best possible payoff for every other player who is playing their Berge equilibrium strategy; this is a contrast with Nash equilibria, in which each player is only concerned about maximizing their own payoffs from their strategy, and no other player cares about the payoff obtained by player $i$. Example Consider the following prisoner's dilemma game, from Larbani and Zhukovskii (2017), in which each player chooses either "cooperate" or "defect" and the payoffs (row player, column player) are: (cooperate, cooperate) → (20, 20); (cooperate, defect) → (5, 25); (defect, cooperate) → (25, 5); (defect, defect) → (10, 10). Berge result A Berge equilibrium of this game is the situation in which both players pick "cooperate", denote it (cooperate, cooperate). 
This is a Berge equilibrium because each player can only lower the other player's payoff by switching their strategy; if either player switched from "cooperate" to "defect", then they would lower the other player's payoff from 20 down to 5, so they must be in a Berge equilibrium. Berge versus Nash result Notice first that the Berge equilibrium is not a Nash equilibrium, because either the row player or the column player could increase their own payoff from 20 to 25 by switching to "defect" instead of "cooperate". A Nash equilibrium of this prisoner's dilemma game is the situation in which both players pick "defect", denote it (defect, defect). That strategy pair yields a payoff of 10 to the row player and 10 to the column player, and no player has a unilateral incentive to switch their strategy to maximize their own payoff. However, (defect, defect) is not a Berge equilibrium, because the row player could ensure a higher payoff for the column player by switching strategies and giving the column player a payoff of 25 instead of 10, and the column player could do the same for the row player. The cooperative nature of the Berge equilibrium therefore avoids the mutual defection problem that has made the prisoner's dilemma a notorious example of the potential for Nash equilibrium reasoning to produce a mutually suboptimal result. Motivation The Berge equilibrium has been motivated as the exact opposite of a Nash equilibrium, in that while the Nash equilibrium models selfish behaviours, the Berge equilibrium models altruistic behaviours. Moussa Larbani and Vladislav Iosifovich Zhukovskii note that Berge equilibria could be interpreted as a method for formalising the Golden Rule in strategic interactions. One advantage of the Berge equilibrium over the Nash equilibrium is that the Berge results may agree more closely with results obtained from experimental psychology and experimental economics. Several authors have noted that players asked to play games like the Prisoner's Dilemma or the ultimatum game in laboratory scenarios rarely reach the Nash Equilibrium result, in part because people in real situations often do attach value to the well-being of others, and that therefore Berge equilibria could sometimes be a better fit to real behaviour in certain situations. A challenge for the use of Berge equilibria is that they do not have as strong existence properties as Nash equilibria, although their existence may be assured by adding extra conditions. The Berge equilibrium solution concept may also be used for games that do not satisfy the conditions for Nash's existence theorem and have no Nash equilibria, such as certain games with infinite strategy sets, or in situations where equilibria in pure strategies are desired and yet there are no Nash equilibria among the pure strategy profiles. References Game theory equilibrium concepts
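A minimal sketch that checks the example above computationally, using only the payoffs reconstructed in the Example section; the function names and the two-player restriction are illustrative assumptions, not code from the article or from Larbani and Zhukovskii:
```python
# Minimal sketch: checking Berge and Nash equilibria for the two-player
# prisoner's dilemma described above. Payoffs are (row, column);
# strategies are "C" (cooperate) and "D" (defect).

from itertools import product

STRATS = ["C", "D"]
PAYOFFS = {
    ("C", "C"): (20, 20),
    ("C", "D"): (5, 25),
    ("D", "C"): (25, 5),
    ("D", "D"): (10, 10),
}

def is_nash(profile):
    """No player can raise their own payoff by deviating unilaterally."""
    for i in (0, 1):
        for s in STRATS:
            deviated = list(profile)
            deviated[i] = s
            if PAYOFFS[tuple(deviated)][i] > PAYOFFS[profile][i]:
                return False
    return True

def is_berge(profile):
    """No change by the other player can raise player i's payoff
    while player i keeps their own strategy fixed (two-player case)."""
    for i in (0, 1):
        other = 1 - i
        for s in STRATS:
            altered = list(profile)
            altered[other] = s
            if PAYOFFS[tuple(altered)][i] > PAYOFFS[profile][i]:
                return False
    return True

for p in product(STRATS, repeat=2):
    print(p, "Nash" if is_nash(p) else "-", "Berge" if is_berge(p) else "-")
# Expected: ("D", "D") is the unique Nash equilibrium,
# ("C", "C") is the unique Berge equilibrium, matching the discussion above.
```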
Berge equilibrium
[ "Mathematics" ]
1,274
[ "Game theory", "Game theory equilibrium concepts" ]
63,415,739
https://en.wikipedia.org/wiki/Smart%20thermometer
A smart thermometer is a medical thermometer which is able to transmit its readings so that they can be collected, stored and analyzed. Since 2012 Kinsa has distributed smart thermometers to two million households across the US. The thermometers transmit their readings to an app on the users' phones. Users are then able to see a history of their temperature readings. The information is also consolidated to show an overall temperature map. This shows hot spots where the level of high temperatures is exceptional and so can be used to identify outbreaks of disease. Continuous readings can be provided by wearable or adhesive thermometers but it is difficult to measure core body temperature in this way. Invasive sensors can be attached in a hospital setting such as intensive care or neonatal care. For more general use, form-factors such as arm bands and earbuds have been tried. Novel technologies under development include skin tattoos using flexible electronics and deep body heat flux sensors. References External links US Health Weather Map – Kinsa's map of illness levels, based on readings from its smart thermometers Medical statistics Medical testing equipment Thermometers
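The article does not describe how Kinsa's aggregation actually works; as a purely hypothetical sketch of the idea of flagging "hot spots", one could compare each region's share of febrile readings against a baseline. The fever threshold, baseline and multiplier below are assumptions made for illustration:
```python
# Hypothetical sketch (not Kinsa's actual method): flag regions whose share of
# febrile readings is well above a baseline rate.

FEVER_C = 38.0  # assumed fever threshold in degrees Celsius

def fever_rate(readings):
    return sum(t >= FEVER_C for t in readings) / len(readings)

def flag_hotspots(readings_by_region, baseline=0.05, factor=2.0):
    """Return regions whose fever rate is at least `factor` times the baseline."""
    return [region for region, temps in readings_by_region.items()
            if fever_rate(temps) >= factor * baseline]

data = {"A": [36.7, 38.4, 39.1, 36.9, 38.6], "B": [36.5, 36.8, 37.0, 36.6, 36.9]}
print(flag_hotspots(data))  # ['A']
```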
Smart thermometer
[ "Technology", "Engineering" ]
235
[ "Thermometers", "Measuring instruments" ]
63,415,758
https://en.wikipedia.org/wiki/Rupintrivir
Rupintrivir (AG-7088, Rupinavir) is a peptidomimetic antiviral drug which acts as a 3C and 3CL protease inhibitor. It was developed for the treatment of rhinoviruses, and has subsequently been investigated for the treatment of other viral diseases including those caused by picornaviruses, norovirus, and coronaviruses, such as SARS and COVID-19. See also 3CLpro-1 Carmofur Ebselen GC376 Iscartrelvir Theaflavin digallate References Antiviral drugs SARS-CoV-2 main protease inhibitors
Rupintrivir
[ "Biology" ]
141
[ "Antiviral drugs", "Biocides" ]
63,417,066
https://en.wikipedia.org/wiki/Waterborne%20resins
Waterborne resins are sometimes called water-based resins. They are resins or polymeric resins that use water as the carrying medium as opposed to solvent or solvent-less. Resins are used in the production of coatings, adhesives, sealants, elastomers and composite materials. When the phrase waterborne resin is used, it usually describes all resins which have water as the main carrying solvent. The resin could be water-soluble, water reducible or water dispersed. History Most coatings have four basic components. These are the resin, solvent, pigment and additive systems but the resin or binder is the key ingredient. Continuing environmental legislation in many countries along with geopolitics such as oil production are ensuring that chemists are increasingly turning to waterborne technology for paint/coatings and since resins or binders are the most important part of a coating, more of them are being developed and designed waterborne and there is a constantly increasing use by coating formulators. The use of waterborne coatings and hence waterborne resins really started to grow in the 1960s led by the United States and was driven by: a) the need to reduce flammability; b) environmental legislation aimed at reducing the amount of solvent vapor (VOC - Volatile organic compound) discharged into the atmosphere; c) cost; d) political factors i.e. security of supply. All these factors helped the desire to reduce the reliance on oil derived solvents. The use of water as the carrying solvent for coatings and hence resins has been increasing ever since. The same holds true for adhesives. Water is generally a low cost (but not free) commodity in plentiful supply with no toxicity problems so there has always been a desire to produce paints, inks, adhesives and textile sizes etc. with water as the carrying solvent. This has required the production of waterborne resins designed for these systems. In recent years legislative pressure has ensured that waterborne systems and hence waterborne resins are coming increasingly to the fore. Types of waterborne resins Waterborne epoxy resins An epoxy resin system generally consists of a curing agent and an epoxy resin. Both the curing agent and the epoxy resin can be made waterborne. Solid epoxy resin (molecular weight >1000) dispersions are available and consist of an epoxy resin dispersed in water sometimes with the aid of co-solvents and surfactants. The resin backbone is often modified to ensure water dispersibility. These resins dry in their own right by water/co-solvent evaporation and the particles coalescence. To cure the resin and crosslink it, an amine-based curing agent is usually added. This produces a two-component system. An alternative is to use standard medium viscosity liquid epoxy resins and emulsify them in a water-soluble polyamine or polyaminoamide hardener resin which also gives a two-component system. Polyaminoamides (or polyamidoamines) are made by reacting ethylene amines with dimerized fatty acids to give a species with amide links but still having amine functionality. Water is liberated during the condensation reaction. These resins can then  be made water-soluble by reacting further with glacial organic acids  or formaldehyde. Resins like these are usually left with yet further amine functionality on the polymer backbone to enable them to cure and crosslink an epoxy resin. Paints may then be made from them by pigmenting  either the epoxy or the amine hardener portion or even both. 
Polyamine curing resins as opposed to polyaminoamide resins are generally made by partially adducting polyfunctional amines with an epoxy resin and/or epoxy diluent and leaving the species with residual amine functionality. This adduct can then be dissolved in water and used to emulsify more epoxy resin and again either portion or both may be pigmented. The advantage with these systems is that they do not need glacial organic acids to solubilize them. This is an advantage if the coating is to be used over a highly alkaline substrate such as fresh concrete, as the alkali from the cement will neutralise the acid and cause instability on repeated dipping of a brush into the can. Even though water is present and is a fuel for corrosion, water-based metal coatings based on waterborne epoxy can also be formulated. Other research is investigating the benefits of combining graphene technology with waterborne epoxy. Research continues and many patents and journal papers continue to be published with novel ways of converting epoxy systems to their waterborne counterparts. One such method is to take a molecule that is already intrinsically partially hydrophilic, such as a diol with a polypropylene oxide backbone, react it with epichlorohydrin, and then dehydrochlorinate it with sodium hydroxide. This produces a diepoxy terminated polypropylene glycol molecule. This can now be reacted with an ethyleneamine such as triethylenetetramine (TETA) to produce an amine terminated moiety that is intrinsically hydrophilic and able to cure an epoxy resin. These water-based epoxy coatings, when used with the right choice of pigments, can be used to coat the inside of oil tanks. Waterborne alkyd resins Water reducible alkyds are basically conventional alkyd resins (i.e., polyesters based on saturated or unsaturated oils or fatty acids, polybasic acids and alcohols) modified to confer water miscibility. Typical components are vegetable oils or fatty acids such as linseed, soybean, castor, dehydrated castor, safflower, tung, coconut and tall oil. Acids include isophthalic, terephthalic, adipic, benzoic, succinic acids and phthalic, maleic and trimellitic anhydrides. Polyols include glycerol, pentaerythritol, trimethylolpropane, ethylene glycol, propylene glycol, diethylene glycol, neopentyl glycol, 1,6-hexanediol and 1,4-butanediol. Typical methods for introducing varying degrees of water miscibility are similar to other resin systems. Methods basically involve introducing hydrophilic centres such as acid groups that can then be neutralised to form a salt. Introducing polar groups onto the backbone is another method. With alkyds, typical methods include maleinization of unsaturated fatty acids with maleic anhydride. This involves making a Diels-Alder adduct near the double bond sites. The acid groups introduced can then be further reacted with polyols. A Diels-Alder reaction only occurs where there is a conjugated double bond system. Simple addition occurs if not conjugated. Other techniques include synthesizing the resin with hydroxyl functional oligomers, e.g. those containing ethylene glycol, then adding specific acid or hydroxyl containing substances towards the end of the reaction. Another technique is making an acrylic functional alkyd with an acrylic monomer blend rich in carboxylic acid groups. Synthesis techniques have been studied and published for acrylic modified water-reducible alkyds. Alkyd emulsions Late twentieth century technology allowed the production of alkyd emulsions. 
The technology continues to evolve including production of DTM (Direct To Metal) finishes. The biggest issue has been getting VOC content below 250 g/L. Poor corrosion resistance has also been an issue. Alkyd emulsion technology uses a reactive surfactant that has double bonds and thus oxidative drying properties like a conventional alkyd. The material is then put under shear and water added slowly. Initially a water in oil emulsion is formed but continued water addition and shear results in inversion and a stable oil in water emulsion is formed. Sustainability and other market factors mean a number of companies are entering the market. As well as patents, doctoral theses are being done at universities on the subject. Waterborne polyester resins Saturated polyester resins contain many of the materials used in conventional alkyd resins but without the oil or fatty acid components. Typical components for these resins are polycarboxylic and polyhydroxyl components. The more commonly used polyacids are phthalic, isophthalic, terephthalic and adipic acids. Phthalic and trimellitic anhydrides may also be used. Polyols tend to be neopentyl glycol, 1,6-hexanediol and trimethylolpropane. To make them waterborne, organic acids or anhydrides are added in a two-stage process but there are other methods too. Waterborne polyurethane resins Polyurethane resins are available waterborne. The single component versions are usually referred to as polyurethane dispersions (PUD). They are available in anionic, cationic and nonionic versions though anionic moieties are the most readily available commercially. The use of an anionic or cationic center or indeed a hydrophilic non-ionic manufacturing technique tends to result in a permanent inbuilt water resistance weakness. Research is being conducted and techniques developed to combat this weakness. Cationic PUDs also introduce hydrophilic components when synthesized, but techniques have been and are being researched to improve the performance and water resistance properties. This includes introducing star-branched polydimethylsiloxane. Waterborne polyurethanes are also available in 2 component versions. As a 2 component polyurethane consists of polyol(s) and an isocyanate, and isocyanates react with water, this requires special formulating and production techniques. The polyisocyanate that is water-dispersible may be modified with sulfonate, for example. PUDs are not usually synthesised with plant based polyols because they don't have other performance enhancing functional groups. Recent work (2021) reports modification to achieve this and enable even greener versions. Work is also ongoing to get the performance of 1 component waterborne polyurethanes to match that of 2 component versions. Self-healing versions of two-component waterborne polyurethanes are being researched. Research has shown that modification of these resin systems with polyaniline improves a number of properties including corrosion resistance. Higher solids content is also desired for the economics of not transporting water. Ionic centers are usually introduced with waterborne PUDs, and so the water resistance in the resultant film has been studied. The nature of the polyol, the level of COOH groups and hydrophobic modification with other moieties can reduce the hydrophilicity. Polyester polyols give the biggest improvements. Polycarbonate polyols also enhance properties, especially if the polycarbonate is also fluorinated. 
Silicone modification of the resin makes the species much more hydrophobic and water resistant. As the world attempts to move towards a low-carbon economy, carbon capture by using carbon dioxide from the atmosphere is gaining attention and research is being done. Using carbon dioxide in PUD production is being researched. Waterborne Latexes A latex is a stable dispersion (emulsion) of polymer in water. Synthetic latices are usually made by polymerizing a monomer such as vinyl acetate that has been emulsified with surfactants dispersed in water. The overall technique is called emulsion polymerization. Other techniques, including inversion from water in oil to oil in water emulsions, are available. Particular emphasis in recent years has been on the production of self-crosslinking versions, especially acrylic emulsions. As an example, these may be produced by modifying with divinyl silane. Some examples include vinyl acetate based latices, acrylics and styrene-butadiene versions. They may be used to produce waterborne direct to metal coatings. Waterborne acrylic resins are also used frequently in water-based paints. Acrylic latices prepared by emulsion polymerization are often improved by copolymerizing other functional monomers. Glycidyl methacrylate is one such monomer used, which then incorporates oxirane functionality into the polymer. This would then improve the properties (such as scrub resistance) of the paint formulated from this resin. DMAEMA (dimethylaminoethyl methacrylate) is another such species. Other innovative techniques for improving acrylic latices include incorporating a biocide with acrylic functionality as the modifying monomer. This allows the binder for a waterborne paint to be inherently biocidal. Techniques exist to speed up the cure of waterborne acrylics. Waterborne acrylic latices and polyurethane acrylates that are UV curable have also been produced. Polymeric and oligomeric aziridines are among the moieties used to crosslink waterborne resins. They usually react with the carboxyl groups present on these species. Potlife is usually improved along with other properties. Waterborne electrophoretic deposition resins (see article Electrophoretic deposition) The resins used for electrodeposition are usually epoxy, acrylic or phenolic resin types. They are formulated with functional groups which when neutralised form ionic groups on the polymer backbone. These confer water solubility on the polymer. They are available as anodic versions, which deposit on the anode of an electrochemical cell, or cathodic versions, which deposit on the cathode. Cathodic electrodeposition resins dominate and they have revolutionised corrosion protection in the automotive industry. Ceramics as well as metals may be coated this way. They are applied as OEM (Original Equipment Manufacture) rather than as a refinishing system. Cathodic resins contain amines on the polymer backbone which are neutralized by acids such as acetic acid to give a stable aqueous dispersion. When an electric current is passed through a car body that is dipped in a bath containing a paint based on a cathodic electrodeposition resin, the hydroxyl ions formed near the cathode deposit the paint on the car body. The electric current needed for this is determined by the number of ionic centers. Dispersions of waterborne resins for electrocoating usually contain some co-solvents such as butyl glycol and isopropanol and are usually very low in solids content, i.e. 15%. They usually have molecular weights in the region of 3000–4000. 
Paints based on them tend to have pigment volume concentrations (PVCs) of less than 10, i.e. a very high binder to pigment ratio. Cathodic electrophoretic deposition coatings can be made that are self-healing even at room temperature. The base polymer used for this synthesis is a waterborne polyurethane dispersion (PUD) that is cationic rather than anionic. Waterborne hybrid resins Many resins are available waterborne but can be hybrids or blends. An example would be polyurethane dispersions blended or hybridized with acrylic resins, which are commonly used in automotive paint. Such systems can be made by using acrylic monomers and a polyurethane dispersion which will polymerise simultaneously to give an interpenetrating polymer network, without the need for NMP as a cosolvent. This combines the lower cost of acrylic with the high performance of a polyurethane. Waterborne epoxy resins may be modified with acrylate and then further modified with side chains having many fluorine atoms on them. Waterborne resins are also available that use both water and renewable raw materials. Another example is to combine alkyd resins with acrylics to make them waterborne. Using hyperbranched alkyds and modifying them with acrylic monomers and using miniemulsion polymerization, suitable hybrids may be formed. As well as hybridization of the resins, a combination of techniques may be employed. As an example, ultraviolet curing coatings that can be electrodeposited and are waterborne hybrids of epoxy and acrylic resins may be produced. Hybrid resins include, among others, PUDs that are both waterborne and UV curable. They are being researched and many papers published. PUD-acrylics using epoxidized soybean oil have been produced that are UV curable. The structure and type of acrylate will affect the properties. Hybrid resins used in coatings that are vegetable based, waterborne and UV curable are considered very green and have also been investigated. Similarly, UV-curable waterborne fluorinated polyurethane-acrylate resins can be designed and used in coatings. As well as acrylic PUD hybridization, further modification with silane monomers can be undertaken. Other examples of hybridization include modifying waterborne epoxy with latex dispersions. The latex-modified epoxy aqueous dispersions are treated by evaporation techniques. Nitrile latices were used in the study. Modification of soybean oil that has been epoxidized and then reacted with acrylic acid will produce waterborne epoxy acrylates that are also based on some renewable content. The corrosion resistance properties are improved using this technique. Alkyds can likewise be hybridised and made water reducible. This may be achieved by acrylic modification. Waterborne epoxy resins may also be acrylated and hybridized and much research has gone into these systems. Research is also taking place using waterborne alkyd resins hybridized with styrene-acrylic emulsions. These then find use in waterborne exterior decorative and architectural paints. Waterborne resins with high bio-based or renewable content High bio-based content or renewability of materials is highly prized as there is a trend in some parts of the world to a low-carbon economy. Waterborne resins are already perceived as environmentally friendly but work is ongoing to improve this further by using non-petroleum based raw materials where possible. Waterborne epoxies are one such area of research. 
Since waterborne resins are usually considered green and environmentally friendly, techniques are being researched that include capturing carbon dioxide from the atmosphere to make the raw materials for further synthesis. Water Water is in some ways an unusual chemical. It is a very powerful and universal solvent. Most liquids reduce in volume on freezing, but water expands. It occurs naturally on earth in all three states of solid (ice), liquid (water) and gas (water vapour and steam). At 273.16 K or 0.01 °C (known as the triple point) it can coexist in all three states simultaneously. It has a very low molecular weight of 18 and yet a relatively high boiling point of 100 °C. This is due to intermolecular forces and in particular hydrogen bonding. The surface tension is also high at 72 dynes/cm (equivalent to 72 mN/m) which affects its ability to wet certain surfaces. It evaporates (latent heat of evaporation 2260 kJ per kg) very slowly in comparison to some solvents and hardly at all when the relative humidity is very high. It has a very high specific heat capacity (4.184 kJ/kg/K) and that is why it is used in central heating systems in the United Kingdom and Europe. These factors have to be borne in mind when formulating waterborne resins and other water based systems such as adhesives and coatings. Uses Waterborne resins find use in coatings, adhesives, sealants and elastomers and other applications. Specifically they find use in textile coatings, industrial coatings, UV coatings, floor coatings, hygiene coatings, wood coatings, adhesives, concrete coatings, automotive coatings, clear coatings and anticorrosive applications including waterborne epoxy based anticorrosive primers. They are also used in the design and manufacture of medical devices such as the polyurethane dressing, a liquid bandage based on polyurethane dispersion. Over the years they have also been used in polymer modified cements and repair mortars. They have also found use in general textile applications including coating nonwovens. Recent (post 2020) innovations have included producing a waterborne polyurethane that has embedded silver particles to combat COVID. Waterborne polyurethane dispersions with antimicrobial properties have also been developed. See also Coatings Polymer science Prepolymer Synthetic resin Water References Further reading "Electrodeposition of Coatings"; American Chemical Society; Washington D.C.; 1973; External links Covestro DSM DIC PU General Info Incorez range Allnex website Hexion Waterbone Resins ARKEMA Waterborne Resins Alkyd Emulsions- Van Horn, Metz & Co. Inc. Polymers Coatings Water
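As a simplified illustration using only the figures quoted in the Water section (specific heat capacity 4.184 kJ/kg/K and latent heat of evaporation 2260 kJ/kg), the sketch below estimates the energy needed to warm and then evaporate the water carried by a coating. Heating all the way to the boiling point is an assumption made to keep the arithmetic simple; in practice water evaporates below 100 °C, so this is an indicative upper-bound figure, not a value from the article:
```python
# Simplified illustration using the figures quoted in the Water section:
# energy to warm water from 20 C to 100 C and then evaporate it.
# Real drying happens below the boiling point, so treat this as indicative.

SPECIFIC_HEAT = 4.184   # kJ/(kg*K), quoted in the text
LATENT_HEAT = 2260.0    # kJ/kg, quoted in the text

def drying_energy_kj(water_kg, start_c=20.0, boil_c=100.0):
    return water_kg * (SPECIFIC_HEAT * (boil_c - start_c) + LATENT_HEAT)

print(round(drying_energy_kj(0.5)))  # about 1297 kJ for 0.5 kg of water
```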
Waterborne resins
[ "Chemistry", "Materials_science", "Environmental_science" ]
4,458
[ "Hydrology", "Synthetic resins", "Synthetic materials", "Coatings", "Polymer chemistry", "Water", "Polymers" ]
63,417,934
https://en.wikipedia.org/wiki/Mary%20Leng
Mary Leng is a British philosopher specialising in the philosophy of mathematics and philosophy of science. She is a professor at the University of York. Career Leng studied as an undergraduate at Balliol College, University of Oxford and as a postgraduate student at the University of Toronto. She worked at the University of Cambridge from 2002 to 2006, and then the University of Liverpool from 2006 to 2011. In 2007, she co-edited a collection called Mathematical Knowledge with Alexander Paseau and Michael Potter, which was published by Oxford University Press, and, in 2010, she published a monograph called Mathematics and Reality, again with Oxford University Press. In Mathematics and Reality, Leng defends mathematical fictionalism. Leng joined the University of York in 2012, where she is now a professor. References Living people Year of birth missing (living people) British women philosophers 21st-century British philosophers Philosophers of mathematics Academics of the University of York Academics of the University of Liverpool Academics of the University of Cambridge University of Toronto alumni Alumni of Balliol College, Oxford
Mary Leng
[ "Mathematics" ]
207
[ "Philosophers of mathematics" ]
63,420,126
https://en.wikipedia.org/wiki/Fosnetupitant
Fosnetupitant is a medication used for the treatment of chemotherapy-induced nausea and vomiting. It is a prodrug of netupitant. It is used in combination with palonosetron hydrochloride and formulated as the salt fosnetupitant chloride hydrochloride for intravenous use. In 2018, the US Food and Drug Administration approved the intravenous formulation of a fixed dose combination of fosnetupitant and palonosetron. The combination is also approved for medical use in the European Union, and in Canada. References Prodrugs Antiemetics NK1 receptor antagonists Trifluoromethyl compounds Piperazines Pyridines Quaternary ammonium compounds 2-Tolyl compounds
Fosnetupitant
[ "Chemistry" ]
160
[ "Chemicals in medicine", "Prodrugs" ]
63,420,573
https://en.wikipedia.org/wiki/Oil%20and%20gas%20reserves%20and%20resource%20quantification
Oil and gas reserves denote discovered quantities of crude oil and natural gas (oil or gas fields) that can be profitably produced/recovered from an approved development. Oil and gas reserves tied to approved operational plans filed on the day of reserves reporting are also sensitive to fluctuating global market pricing. The remaining resource estimates (after the reserves have been accounted for) are likely sub-commercial and may still be under appraisal with the potential to be technically recoverable once commercially established. Natural gas is frequently associated with oil directly and gas reserves are commonly quoted in barrels of oil equivalent (BOE). Consequently, both oil and gas reserves, as well as resource estimates, follow the same reporting guidelines, and are referred to collectively hereinafter as oil & gas. Quantification As with other mineral resource estimation, detailed classification schemes have been devised by industry specialists to quantify volumes of oil and gas accumulated underground (known as the subsurface). These schemes provide management and investors with the means to make quantitative and relative comparisons between assets, before underwriting the significant cost of exploring for, developing and extracting those accumulations. Classification schemes are used to categorize the uncertainty in volume estimates of the recoverable oil and gas and the chance that they exist in reality (or risk that they do not) depending on the resource maturity. Potential subsurface oil and gas accumulations identified during exploration are classified and reported as prospective resources. Resources are re-classified as reserves following appraisal, at the point when a sufficient accumulation of commercial oil and/or gas is proven by drilling, with authorized and funded development plans to begin production within a recommended five years. Reserve estimates are required by authorities and companies, and are primarily made to support operational or investment decision-making by companies or organisations involved in the business of developing and producing oil and gas. Reserve volumes are necessary to determine the financial status of the company, which may be obliged to report those estimates to shareholders and "resource holders" at the various stages of resource maturation. Currently, the most widely accepted classification and reporting methodology is the 2018 petroleum resources management system (PRMS), which summarizes a consistent approach to estimating oil and gas quantities within a comprehensive classification framework, jointly developed by the Society of Petroleum Engineers (SPE), the World Petroleum Council (WPC), the American Association of Petroleum Geologists (AAPG), the Society of Petroleum Evaluation Engineers (SPEE) and the Society of Economic Geologists (SEG). Public companies that register securities in the U.S. market must report proved reserves under the Securities and Exchange Commission (SEC) reporting requirements, which share many elements with the PRMS. Attempts have also been made to standardize more generalized methodologies for the reporting of national or basin level oil and gas resource assessments. Reserves and resource reporting An oil or gas resource refers to known (discovered fields) or potential accumulations of oil and/or gas (i.e. undiscovered prospects and leads) in the subsurface of the Earth's crust.
All reserve and resource estimates involve uncertainty in volume estimates (expressed below as Low, Mid or High uncertainty), as well as a risk or chance to exist in reality, depending on the level of appraisal or resource maturity that governs the amount of reliable geologic and engineering data available and the interpretation of those data. Estimating and monitoring of reserves provides an insight into, for example, a company's future production and a country's oil & gas supply potential. As such, reserves are an important means of expressing value and longevity of resources. In the PRMS, the terms 'Resources' and 'Reserves' have distinct and specific meanings with respect to oil & gas accumulations and hydrocarbon exploration in general. However, the level of rigor required in applying these terms varies depending on the resource maturity, which informs reporting requirements. Oil & gas reserves are resources that are, or are reasonably certain to be, commercial (i.e. profitable). Reserves are the main asset of an oil & gas company; booking is the process by which they are added to the balance sheet. Contingent and prospective resource estimates are much more speculative and are not booked with the same degree of rigor, generally for internal company use only, reflecting a more limited data set and assessment maturity. If published externally, these volumes add to the perception of asset value, which in turn can influence oil & gas company share or stock value. The PRMS provides a framework for a consistent approach to the estimation process to comply with the reporting requirements of, in particular, listed companies. Energy companies may employ specialist, independent reserve valuation consultants to provide third-party reports as part of SEC filings for either reserves or resource booking. Reserves Reserves reporting of discovered accumulations is regulated by tight controls so that investment decisions are informed by quantified degrees of uncertainty in recoverable volumes. Reserves are defined in three sub-categories according to the system used in the PRMS: Proven (1P), Probable and Possible. Reserves defined as Probable and Possible are incremental (or additional) discovered volumes based on geological and/or engineering criteria similar to those used in estimating Proven reserves. Though not classified as contingent, some technical, contractual, or regulatory uncertainties preclude such reserves being classified as Proven. The most accepted definitions of these are based on those originally approved by the SPE and the WPC in 1997, requiring that reserves are discovered, recoverable, commercial and remaining, based on rules governing the classification into sub-categories and the declared development project plans applied. Probable and Possible reserves may be used internally by oil companies and government agencies for future planning purposes but are not routinely or uniformly compiled. Proven reserves Proven reserves are discovered volumes claimed to have a reasonable certainty of being recoverable under existing economic and political conditions, and with existing technology. Industry specialists refer to this category as "P90" (that is, having a 90% certainty of producing or exceeding the P90 volume on the probability distribution). Proven reserves are also known in the industry as 1P. Proven reserves may be referred to as proven developed (PD) or as proven undeveloped (PUD).
PD reserves are reserves that can be produced with existing wells and perforations, or from additional reservoirs where minimal additional investment (operating expense) is required (e.g. opening a set of perforations already installed). PUD reserves require additional capital investment (e.g., drilling new wells) to bring the oil and/or gas to the surface. Accounting for production is an important exercise for businesses. Produced oil or gas that has been brought to surface (production) and sold on international markets or refined in-country is no longer reserves and is removed from the booking and company balance sheets. Until January 2010, "1P" proven reserves were the only type the U.S. SEC allowed oil companies to report to investors. Companies listed on U.S. stock exchanges may be called upon to verify their claims confidentially, but many governments and national oil companies do not disclose verifying data publicly. Since January 2010, the SEC also allows companies to provide additional optional information declaring 2P (both proven and probable) and 3P (proven plus probable plus possible) with discretionary verification by qualified third-party consultants, though many companies choose to use 2P and 3P estimates only for internal purposes. Probable and possible reserves Probable additional reserves are attributed to known accumulations; the probabilistic, cumulative sum of proven and probable reserves (with a probability of P50) is referred to in the industry as "2P" (proven plus probable). The P50 designation means that there should be at least a 50% chance that the actual volumes recovered will be equal to or will exceed the 2P estimate. Possible additional reserves are attributed to known accumulations that have a lower chance of being recovered than probable reserves. Reasons for assigning a lower probability to recovering Possible reserves include varying interpretations of geology, uncertainty due to reserve infill (associated with variability in seepage towards a production well from adjacent areas) and projected reserves based on future recovery methods. The probabilistic, cumulative sum of proven, probable and possible reserves is referred to in the industry as "3P" (proven plus probable plus possible), where there is a 10% chance of delivering or exceeding the P10 volume (ibid). Resource estimates Resource estimates are undiscovered volumes, or volumes that have not yet been drilled and flowed to surface. A non-reserve resource, by definition, does not have to be technically or commercially recoverable and can be represented by a single, or an aggregate of multiple, potential accumulations, e.g. an estimated geological basin resource. There are two non-reserve resource categories: Contingent resources Once a discovery has been made, prospective resources can be reclassified as contingent resources. Contingent resources are those accumulations or fields that are not yet considered mature enough for commercial development, where development is contingent on one or more conditions changing. The uncertainty in the estimates for recoverable oil & gas volumes is expressed in a probability distribution and is sub-classified based on project maturity and/or economic status (1C, 2C, 3C, ibid), and in addition is assigned a risk, or chance, to exist in reality (POS or COS). Prospective resources Prospective resources, being undiscovered, have the widest range in volume uncertainties and carry the highest risk or chance to be present in reality (POS or COS).
At the exploration stage (before discovery) they are categorized by the wide range of volume uncertainties (typically P90-P50-P10). In the PRMS the range of volumes is classified by the abbreviations 1U, 2U and 3U, again reflecting the degrees of uncertainty. Companies are commonly not required to report publicly their views of prospective resources but may choose to do so voluntarily. Estimation techniques The total estimated quantity (volumes) of oil and/or gas contained in a subsurface reservoir is called oil or gas initially in place (STOIIP or GIIP respectively). However, only a fraction of the in-place oil & gas can be brought to the surface (recoverable), and it is only this producible fraction that is considered to be either reserves or a resource of any kind. The ratio between in-place and recoverable volumes is known as the recovery factor (RF), which is determined by a combination of subsurface geology and the technology applied to extraction. When reporting oil & gas volumes, in order to avoid confusion, it should be clarified whether they are in-place or recoverable volumes. The appropriate technique for resource estimation is determined by resource maturity. There are three main categories of technique, which are used through resource maturation to differing degrees: analog (substitution), volumetric (static) and performance-based (dynamic), which are combined to help fill gaps in knowledge or data. Both probabilistic and deterministic calculation methods are commonly used to calculate resource volumes, with deterministic methods predominantly applied to reserves estimation (low uncertainty) and probabilistic methods applied to general resource estimation (high uncertainty). The combination of geological, geophysical and technical engineering constraints means that the quantification of volumes is usually undertaken by integrated technical and commercial teams composed primarily of geoscientists and subsurface engineers, surface engineers and economists. Because the geology of the subsurface cannot be examined directly, indirect techniques must be used to estimate the size and recoverability of the resource. While new technologies have increased the accuracy of these estimation techniques, significant uncertainties still remain, which are expressed as a range of potential recoverable oil & gas quantities using probabilistic methods. In general, most early estimates of the reserves of an oil or gas field (rather than resource estimates) are conservative and tend to grow with time. This may be due to the availability of more data and/or the improved matching between predicted and actual production performance. Appropriate external reporting of resources and reserves is required from publicly traded companies, and is an accounting process governed by strict definitions and categorisation administered by authorities regulating the stock market and complying with governmental legal requirements. Other national or industry bodies may voluntarily report resources and reserves but are not required to follow the same strict definitions and controls. Analog (YTF) method Analogs are applied to prospective resources in areas where there are little, or sometimes no, existing data available to inform analysts about the likely potential of an opportunity or play segment.
Analog-only techniques are called yet-to-find (YTF), and involve identifying areas containing producing assets that are geologically similar to those being estimated and substituting data to match what is known about a segment. The opportunity segment can be scaled to any level depending on the specific interest of the analyst, whether at a global, country, basin, structural domain, play, license or reservoir level. YTF is conceptual and is commonly used as a method for scoping potential in frontier areas where there is no oil or gas production or where new play concepts are being introduced with perceived potential. However, analog content can also be substituted for any subsurface parameters where there are gaps in data in more mature reserves or resource settings (below). Volumetric method Oil & gas volumes in a conventional reservoir can be calculated using a volume equation: Recoverable volume = Gross Rock Volume * Net/Gross * Porosity * Oil or Gas Saturation * Recovery Factor / Formation Volume Factor Deterministic volumes are calculated when single values are used as input parameters to this equation, which could include analog content. Probabilistic volumes are calculated when uncertainty distributions are applied as input to all or some of the terms of the equation (see also Copula (probability theory)), which preserve dependencies between parameters. These geostatistical methods are most commonly applied to prospective resources that still need to be tested by the drill bit. Contingent resources are also characterized by volumetric methods with analog content and uncertainty distributions before significant production has occurred, where spatial distribution information may be preserved in a static reservoir model. Static models and dynamic flow models can be populated with analog reservoir performance data to increase the confidence in forecasting as the amount and quality of static geoscientific and dynamic reservoir performance data increase. Performance-based methods Once production has commenced, production rates and pressure data allow a degree of prediction of reservoir performance, which was previously characterized by substituting analog data. Analog data can still be substituted for expected reservoir performance where specific dynamic data may be missing, representing a "best technical" outcome. Reservoir simulation Reservoir simulation is an area of reservoir engineering in which computer models are used to predict the flow of fluids (typically oil, water, and gas) through porous media. The amount of oil & gas recoverable from a conventional reservoir is assessed by accurately characterising the static recoverable volumes and history matching that to dynamic flow. Reservoir performance is important because the recovery changes as the physical environment of the reservoir adjusts with every molecule extracted; the longer a reservoir has been flowing, the more accurate the prediction of remaining reserves. Dynamic simulations are commonly used by analysts to update reserves volumes, particularly in large complex reservoirs. Daily production can be matched against production forecasts to establish the accuracy of simulation models based on actual volumes of recovered oil or gas. Unlike analogs or volumetric methods above, the degree of confidence in the estimates (or the range of outcomes) increases as the amount and quality of geological, engineering and production performance data increase.
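A minimal sketch, in Python, of the probabilistic (Monte Carlo) application of the volumetric equation above. All input distributions, their parameters, the random seed and the units are illustrative assumptions, not values taken from any real field or prescribed by the PRMS; a deterministic estimate would use the same formula with single input values.

```python
# Hypothetical Monte Carlo sketch of the volumetric equation:
# Recoverable = GRV * N/G * Porosity * Saturation * RF / FVF
# All distribution parameters below are illustrative assumptions, not field data.
import numpy as np

rng = np.random.default_rng(seed=42)
n = 100_000  # number of Monte Carlo samples

grv        = rng.triangular(50e6, 80e6, 120e6, n)    # gross rock volume, m^3 (assumed)
net_gross  = rng.uniform(0.4, 0.7, n)                # net-to-gross ratio (assumed)
porosity   = rng.normal(0.22, 0.03, n).clip(0.05, 0.35)
oil_sat    = rng.uniform(0.6, 0.8, n)                # hydrocarbon saturation (assumed)
rec_factor = rng.uniform(0.25, 0.45, n)              # recovery factor (assumed)
fvf        = rng.normal(1.3, 0.05, n)                # formation volume factor (assumed)

# Apply the volumetric equation sample-by-sample
recoverable = grv * net_gross * porosity * oil_sat * rec_factor / fvf

# Industry convention: P90 = volume with a 90% chance of being met or exceeded,
# so the low case is the 10th percentile of the sampled distribution.
p90, p50, p10 = np.percentile(recoverable, [10, 50, 90])
print(f"P90 (low case):  {p90:,.0f} m^3")
print(f"P50 (mid case):  {p50:,.0f} m^3")
print(f"P10 (high case): {p10:,.0f} m^3")
```

Note the percentile convention used in the text: because P90 denotes a 90% chance of producing at least that volume, the low case corresponds to the 10th percentile of the sampled distribution and P10 to the 90th; independent sampling, as in this sketch, ignores the inter-parameter dependencies that a copula-based approach would preserve.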
These must then be compared with previous estimates, whether derived from analog, volumetric or static reservoir modelling, before reserves can be adjusted and booked. Materials balance method The materials balance method for an oil or gas field uses an equation that relates the volume of oil, water and gas that has been produced from a reservoir and the change in reservoir pressure to calculate the remaining oil & gas. It assumes that, as fluids from the reservoir are produced, there will be a change in the reservoir pressure that depends on the remaining volume of oil & gas. The method requires extensive pressure-volume-temperature analysis and an accurate pressure history of the field. It requires some production to occur (typically 5% to 10% of ultimate recovery), unless reliable pressure history can be used from a field with similar rock and fluid characteristics. Production decline curve method The decline curve method is an extrapolation of known production data to fit a decline curve and estimate future oil & gas production. The three most common forms of decline curves are exponential, hyperbolic, and harmonic. It is assumed that the production will decline on a reasonably smooth curve, and so allowances must be made for wells shut in and production restrictions. The curve can be expressed mathematically or plotted on a graph to estimate future production. It has the advantage of (implicitly) incorporating all reservoir characteristics. It requires a sufficient production history to establish a statistically significant trend, ideally when production is not curtailed by regulatory or other artificial conditions. Reserves growth Experience shows that initial estimates of the size of newly discovered oil & gas fields are usually too low. As years pass, successive estimates of the ultimate recovery of fields tend to increase. The term reserves growth refers to the typical increases (but narrowing range) of estimated ultimate recovery that occur as oil & gas fields are developed and produced. Many oil-producing nations do not reveal their reservoir engineering field data and instead provide unaudited claims for their oil reserves. The numbers disclosed by some national governments are suspected of being manipulated for political reasons. In order to achieve international goals for decarbonisation, the International Energy Agency said in 2021 that countries should no longer expand exploration or invest in projects to expand reserves, in line with the climate goals set by the Paris Agreement. Unconventional reservoirs The categories and estimation techniques framed by the PRMS above apply to conventional reservoirs, where oil & gas accumulations are controlled by hydrodynamic interactions between the buoyancy of oil & gas in water and capillary forces. Oil or gas in unconventional reservoirs is much more tightly bound to rock matrices, by forces in excess of capillary forces, and therefore requires different approaches to both extraction and resource estimation. Unconventional reservoirs or accumulations also require different means of identification and include coalbed methane (CBM), basin-centered gas (low permeability), low permeability tight gas (including shale gas) and tight oil (including shale oil), gas hydrates, natural bitumen (very high viscosity oil), and oil shale (kerogen) deposits. Ultra low permeability reservoirs exhibit a half slope on a log-plot of flow-rates against time, believed to be caused by drainage from matrix surfaces into adjoining fractures.
Such reservoirs are commonly believed to be regionally pervasive, interrupted only by regulatory or ownership boundaries, with the potential for large oil & gas volumes, which are very hard to verify. Non-unique flow characteristics in unconventional accumulations mean that commercial viability depends on the technology applied to extraction. Extrapolations from a single control point, and thereby resource estimation, are dependent on nearby producing analogs with evidence of economic viability. Under these circumstances, pilot projects may be needed to define reserves. Any other resource estimates are likely to be analog-only derived YTF volumes, which are speculative. See also Extraction of petroleum Oil in place Decline curve analysis Uncertainty Probability density function Copula (probability theory) Global strategic petroleum reserves Oil exploration Peak oil Petroleum Industry List of acronyms in oil and gas exploration and production List of natural gas fields List of oil fields List of oilfield service companies Petroleum play Stranded gas reserve Estimated ultimate recovery Energy and resources: Proven reserves Resource curse Energy security List of countries by proven oil reserves World energy resources and consumption Natural gas List of countries by natural gas proven reserves References, notes and working definitions References Notes Working definitions External links Energy Supply page on the Global Education Project website, including many charts and graphs on the world's energy supply and use Oil reserves (most recent) by country Oil reserves Peak oil Oil exploration Resource economics Petroleum industry Fossil fuels Industries (economics) Petroleum geology Petroleum production Natural gas fields
Oil and gas reserves and resource quantification
[ "Chemistry" ]
4,093
[ "Petroleum industry", "Petroleum", "Petroleum geology", "Chemical process engineering" ]
63,423,264
https://en.wikipedia.org/wiki/Marie-Anne%20Bouchiat
Marie-Anne Bouchiat-Guiochon (born 1934) is a French experimental atomic physicist whose research has included studies of neutral currents, parity violation, and hyperpolarization. She is an honorary director of research for the French National Centre for Scientific Research (CNRS). Education and career Bouchiat was a student at the École normale supérieure de jeunes filles from 1953 to 1957, and a visiting researcher at Princeton University from 1957 to 1959. She completed a doctorate in 1964; her dissertation was Étude par pompage optique de la relaxation d'atomes de rubidium. She worked as a researcher for CNRS from 1972 until her retirement in 2005, associated with the Kastler–Brossel Laboratory. Personal life Bouchiat married physicist Claude Bouchiat. Their daughter Hélène Bouchiat is a physicist. They have a son, Vincent Bouchiat, who is a physicist. Recognition Bouchiat was elected as a corresponding member of the French Academy of Sciences in 1986, and a full member in 1988. She became a member of Academia Europaea in 1993. She won the CNRS bronze and silver medals in 1966 and 1971 respectively. The French Academy of Sciences gave her their Hughes Prize in 1968, and the Ampère Prize in 1983. She is a knight in the Ordre des Palmes académiques, a commander in the Ordre national du Mérite, and a commander in the Legion of Honour. References 1934 births Living people 20th-century French physicists French women physicists Condensed matter physicists Members of the French Academy of Sciences Members of Academia Europaea Commanders of the Legion of Honour Research directors of the French National Centre for Scientific Research
Marie-Anne Bouchiat
[ "Physics", "Materials_science" ]
349
[ "Condensed matter physicists", "Condensed matter physics" ]
63,424,816
https://en.wikipedia.org/wiki/Government%20by%20algorithm
Government by algorithm (also known as algorithmic regulation, regulation by algorithms, algorithmic governance, algocratic governance, algorithmic legal order or algocracy) is an alternative form of government or social ordering where the usage of computer algorithms is applied to regulations, law enforcement, and generally any aspect of everyday life such as transportation or land registration. The term "government by algorithm" appeared in academic literature in 2013 as an alternative for "algorithmic governance". A related term, algorithmic regulation, is defined as setting the standard, monitoring and modifying behaviour by means of computational algorithms; automation of the judiciary is within its scope. In the context of blockchain, it is also known as blockchain governance. Government by algorithm raises new challenges that are not captured in the e-government literature and the practice of public administration. Some sources equate cyberocracy, which is a hypothetical form of government that rules by the effective use of information, with algorithmic governance, although algorithms are not the only means of processing information. Nello Cristianini and Teresa Scantamburlo argued that the combination of a human society and certain regulation algorithms (such as reputation-based scoring) forms a social machine. History In 1962, the director of the Institute for Information Transmission Problems of the Russian Academy of Sciences in Moscow (later Kharkevich Institute), Alexander Kharkevich, published an article in the journal "Communist" about a computer network for processing information and control of the economy. In fact, he proposed to make a network like the modern Internet for the needs of algorithmic governance (Project OGAS). This created a serious concern among CIA analysts. In particular, Arthur M. Schlesinger Jr. warned that "by 1970 the USSR may have a radically new production technology, involving total enterprises or complexes of industries, managed by closed-loop, feedback control employing self-teaching computers". Between 1971 and 1973, the Chilean government carried out Project Cybersyn during the presidency of Salvador Allende. This project was aimed at constructing a distributed decision support system to improve the management of the national economy. Elements of the project were used in 1972 to successfully overcome the traffic collapse caused by a CIA-sponsored strike of forty thousand truck drivers. Also in the 1960s and 1970s, Herbert A. Simon championed expert systems as tools for rationalization and evaluation of administrative behavior. The automation of rule-based processes was an ambition of tax agencies over many decades resulting in varying success. Early work from this period includes Thorne McCarty's influential TAXMAN project in the US and Ronald Stamper's LEGOL project in the UK. In 1993, the computer scientist Paul Cockshott from the University of Glasgow and the economist Allin Cottrell from the Wake Forest University published the book Towards a New Socialism, where they claim to demonstrate the possibility of a democratically planned economy built on modern computer technology. The Honourable Justice Michael Kirby published a paper in 1998, where he expressed optimism that the then-available computer technologies such as legal expert systems could evolve to computer systems, which will strongly affect the practice of courts.
In 2006, attorney Lawrence Lessig, known for the slogan "Code is law", wrote: [T]he invisible hand of cyberspace is building an architecture that is quite the opposite of its architecture at its birth. This invisible hand, pushed by government and by commerce, is constructing an architecture that will perfect control and make highly efficient regulation possible. Since the 2000s, algorithms have been designed and used to automatically analyze surveillance videos. In his 2006 book Virtual Migration, A. Aneesh developed the concept of algocracy, in which information technologies constrain human participation in public decision making. Aneesh differentiated algocratic systems from bureaucratic systems (legal-rational regulation) as well as market-based systems (price-based regulation). In 2013, the term algorithmic regulation was coined by Tim O'Reilly, founder and CEO of O'Reilly Media Inc.: Sometimes the "rules" aren't really even rules. Gordon Bruce, the former CIO of the city of Honolulu, explained to me that when he entered government from the private sector and tried to make changes, he was told, "That's against the law." His reply was "OK. Show me the law." "Well, it isn't really a law. It's a regulation." "OK. Show me the regulation." "Well, it isn't really a regulation. It's a policy that was put in place by Mr. Somebody twenty years ago." "Great. We can change that!" [...] Laws should specify goals, rights, outcomes, authorities, and limits. If specified broadly, those laws can stand the test of time. Regulations, which specify how to execute those laws in much more detail, should be regarded in much the same way that programmers regard their code and algorithms, that is, as a constantly updated toolset to achieve the outcomes specified in the laws. [...] It's time for government to enter the age of big data. Algorithmic regulation is an idea whose time has come. In 2017, Ukraine's Ministry of Justice ran experimental government auctions using blockchain technology to ensure transparency and hinder corruption in governmental transactions. "Government by Algorithm?" was the central theme introduced at the Data for Policy 2017 conference held on 6–7 September 2017 in London. Examples Smart cities A smart city is an urban area where collected surveillance data is used to improve various operations. Increases in computational power allow more automated decision making and replacement of public agencies by algorithmic governance. In particular, the combined use of artificial intelligence and blockchains for IoT may lead to the creation of sustainable smart city ecosystems. Intelligent street lighting in Glasgow is an example of a successful government application of AI algorithms. A study of smart city initiatives in the US shows that they require the public sector as the main organizer and coordinator, the private sector as a technology and infrastructure provider, and universities as expertise contributors. The cryptocurrency millionaire Jeffrey Berns proposed the operation of local governments in Nevada by tech firms in 2021. Berns bought 67,000 acres (271 km2) in Nevada's rural Storey County (population 4,104) for $170,000,000 (£121,000,000) in 2018 in order to develop a smart city with more than 36,000 residents that could generate an annual output of $4,600,000,000. Cryptocurrency will be allowed for payments. Blockchains, Inc.
"Innovation Zone" was canceled in September 2021 after it failed to secure enough water for the planned 36,000 residents, through water imports from a site located 100 miles away in the neighboring Washoe County. Similar water pipeline proposed in 2007 was estimated to cost $100 million and to would have taken about 10 years to develop. With additional water rights purchased from Tahoe Reno Industrial General Improvement District, "Innovation Zone" would have acquired enough water for about 15,400 homes - meaning that it would have barely covered its planned 15,000 dwelling units, leaving nothing for the rest of the projected city and its 22 million square-feet of industrial development. In Saudi Arabia, the planners of The Line assert that it will be monitored by AI to improve life by using data and predictive modeling. Reputation systems Tim O'Reilly suggested that data sources and reputation systems combined in algorithmic regulation can outperform traditional regulations. For instance, once taxi-drivers are rated by passengers, the quality of their services will improve automatically and "drivers who provide poor service are eliminated". O'Reilly's suggestion is based on control-theoreric concept of feed-back loop—improvements and disimprovements of reputation enforce desired behavior. The usage of feed-loops for the management of social systems is already been suggested in management cybernetics by Stafford Beer before. These connections are explored by Nello Cristianini and Teresa Scantamburlo, where the reputation-credit scoring system is modeled as an incentive given to the citizens and computed by a social machine, so that rational agents would be motivated to increase their score by adapting their behaviour. Several ethical aspects of that technology are still being discussed. China's Social Credit System was said to be a mass surveillance effort with a centralized numerical score for each citizen given for their actions, though newer reports say that this is a widespread misconception. Smart contracts Smart contracts, cryptocurrencies, and decentralized autonomous organization are mentioned as means to replace traditional ways of governance. Cryptocurrencies are currencies, which are enabled by algorithms without a governmental central bank. Central bank digital currency often employs similar technology, but is differentiated from the fact that it does use a central bank. It is soon to be employed by major unions and governments such as the European Union and China. Smart contracts are self-executable contracts, whose objectives are the reduction of need in trusted governmental intermediators, arbitrations and enforcement costs. A decentralized autonomous organization is an organization represented by smart contracts that is transparent, controlled by shareholders and not influenced by a central government. Smart contracts have been discussed for use in such applications as use in (temporary) employment contracts and automatic transfership of funds and property (i.e. inheritance, upon registration of a death certificate). Some countries such as Georgia and Sweden have already launched blockchain programs focusing on property (land titles and real estate ownership) Ukraine is also looking at other areas too such as state registers. Algorithms in government agencies According to a study of Stanford University, 45% of the studied US federal agencies have experimented with AI and related machine learning (ML) tools up to 2020. 
US federal agencies counted the number of artificial intelligence applications in use. 53% of these applications were produced by in-house experts. Commercial providers of the remaining applications include Palantir Technologies. In 2012, NOPD started a collaboration with Palantir Technologies in the field of predictive policing. Besides Palantir's Gotham software, other similar numerical analysis software used by police agencies (such as the NCRIC) includes SAS. In the fight against money laundering, FinCEN has employed the FinCEN Artificial Intelligence System (FAIS) since 1995. National health administration entities and organisations such as AHIMA (American Health Information Management Association) hold medical records. Medical records serve as the central repository for planning patient care and documenting communication among the patient, health care providers and professionals contributing to the patient's care. In the EU, work is ongoing on a European Health Data Space, which supports the use of health data. The US Department of Homeland Security has employed the software ATLAS, which runs on Amazon Cloud. It scanned more than 16.5 million records of naturalized Americans and flagged approximately 124,000 of them for manual analysis and review by USCIS officers regarding denaturalization. They were flagged due to potential fraud, public safety and national security issues. Some of the scanned data came from the Terrorist Screening Database and the National Crime Information Center. NarxCare is a US software platform that combines data from the prescription registries of various U.S. states and uses machine learning to generate various three-digit "risk scores" for prescriptions of medications and an overall "Overdose Risk Score", collectively referred to as Narx Scores, in a process that potentially includes EMS and criminal justice data as well as court records. In Estonia, artificial intelligence is used in its e-government to make it more automated and seamless. A virtual assistant will guide citizens through any interactions they have with the government. Automated and proactive services "push" services to citizens at key events of their lives (including births, bereavements, unemployment, ...). One example is the automated registering of babies when they are born. Estonia's X-Road system will also be rebuilt to include even more privacy control and accountability in the way the government uses citizens' data. In Costa Rica, the possible digitalization of public procurement activities (i.e. tenders for public works, ...) has been investigated. The paper discussing this possibility mentions that the use of ICT in procurement has several benefits such as increasing transparency, facilitating digital access to public tenders, reducing direct interaction between procurement officials and companies at moments of high integrity risk, increasing outreach and competition, and easier detection of irregularities. Besides using e-tenders for regular public works (construction of buildings, roads, ...), e-tenders can also be used for reforestation projects and other carbon sink restoration projects. Carbon sink restoration projects may be part of the nationally determined contributions plans in order to reach the national Paris Agreement goals. Government procurement audit software can also be used. Audits are performed in some countries after subsidies have been received. Some government agencies provide track and trace systems for services they offer.
An example is track and trace for applications done by citizens (i.e. driving license procurement). Some government services use issue tracking systems to keep track of ongoing issues. Justice by algorithm Judges' decisions in Australia are supported by the "Split Up" software when determining the percentage split of assets after a divorce. COMPAS software is used in the USA to assess the risk of recidivism in courts. According to the statement of Beijing Internet Court, China is the first country to create an internet court or cyber court. The Chinese AI judge is a virtual recreation of an actual female judge. She "will help the court's judges complete repetitive basic work, including litigation reception, thus enabling professional practitioners to focus better on their trial work". Estonia also plans to employ artificial intelligence to decide small-claim cases of less than €7,000. Lawbots can perform tasks that are typically done by paralegals or young associates at law firms. One such technology used by US law firms to assist in legal research is from ROSS Intelligence, and others vary in sophistication and dependence on scripted algorithms. Another legal technology chatbot application is DoNotPay. Algorithms in education Due to the COVID-19 pandemic in 2020, in-person final exams were impossible for thousands of students. The public high school Westminster High employed algorithms to assign grades. The UK's Department for Education also employed a statistical calculus to assign final grades in A-levels, due to the pandemic. Besides use in grading, AI-based software systems were used in preparation for college entrance exams. AI teaching assistants are being developed and used for education (e.g., Georgia Tech's Jill Watson) and there is also an ongoing debate on whether teachers can be entirely replaced by AI systems (e.g., in homeschooling). AI politicians In 2018, an activist named Michihito Matsuda ran for mayor in the Tama city area of Tokyo as a human proxy for an artificial intelligence program. While election posters and campaign material used the term robot, and displayed stock images of a feminine android, the "AI mayor" was in fact a machine learning algorithm trained using Tama city datasets. The project was backed by high-profile executives Tetsuzo Matsumoto of Softbank and Norio Murakami of Google. Michihito Matsuda came third in the election, being defeated by Hiroyuki Abe. Organisers claimed that the 'AI mayor' was programmed to analyze citizen petitions put forward to the city council in a more 'fair and balanced' way than human politicians. In 2018, Cesar Hidalgo presented the idea of augmented democracy. In an augmented democracy, legislation is done by digital twins of every single person. In 2019, the AI-powered messenger chatbot SAM participated in the discussions on social media connected to an electoral race in New Zealand. The creator of SAM, Nick Gerritsen, believes SAM will be advanced enough to run as a candidate by late 2020, when New Zealand has its next general election. In 2022, the chatbot "Leader Lars" or "Leder Lars" was nominated for The Synthetic Party to run in the 2022 Danish parliamentary election, and was built by the artist collective Computer Lars. Leader Lars differed from earlier virtual politicians by leading a political party and by not pretending to be an objective candidate. This chatbot engaged in critical discussions on politics with users from around the world.
In 2023, in the Japanese town of Manazuru, a mayoral candidate called "AI Mayer" hoped to become the first AI-powered officeholder in Japan in November 2023. This candidacy is said to be supported by a group led by Michihito Matsuda. In the 2024 United Kingdom general election, a businessman named Steve Endacott ran for the constituency of Brighton Pavilion as an AI avatar named "AI Steve", saying that constituents could interact with AI Steve to shape policy. Endacott stated that he would only attend Parliament to vote based on policies which had garnered at least 50% support. AI Steve placed last with 179 votes. Management of infection In February 2020, China launched a mobile app to deal with the coronavirus outbreak called "close-contact-detector". Users are asked to enter their name and ID number. The app is able to detect "close contact" using surveillance data (i.e. using public transport records, including trains and flights) and therefore a potential risk of infection. Every user can also check the status of three other users. To make this inquiry, users scan a Quick Response (QR) code on their smartphones using apps like Alipay or WeChat. The close contact detector can be accessed via popular mobile apps including Alipay. If a potential risk is detected, the app not only recommends self-quarantine, it also alerts local health officials. Alipay also has the Alipay Health Code which is used to keep citizens safe. This system generates a QR code in one of three colors (green, yellow, or red) after users fill in a form on Alipay with personal details. A green code enables the holder to move around unrestricted. A yellow code requires the user to stay at home for seven days and red means a two-week quarantine. In some cities such as Hangzhou, it has become nearly impossible to get around without showing one's Alipay code. In Cannes, France, monitoring software has been used on footage shot by CCTV cameras, allowing authorities to monitor compliance with local social distancing and mask-wearing rules during the COVID-19 pandemic. The system does not store identifying data, but rather alerts city authorities and police where breaches of the mask-wearing rules are spotted (allowing fines to be issued where needed). The algorithms used by the monitoring software can be incorporated into existing surveillance systems in public spaces (hospitals, stations, airports, shopping centres, ...). Cellphone data is used to locate infected patients in South Korea, Taiwan, Singapore and other countries. In March 2020, the Israeli government enabled security agencies to track mobile phone data of people suspected of having coronavirus. The measure was taken to enforce quarantine and protect those who may come into contact with infected citizens. Also in March 2020, Deutsche Telekom shared private cellphone data with the federal government agency, Robert Koch Institute, in order to research and prevent the spread of the virus. Russia deployed facial recognition technology to detect quarantine breakers. Italian regional health commissioner Giulio Gallera said that "40% of people are continuing to move around anyway", as he had been informed by mobile phone operators. In the USA, Europe and the UK, Palantir Technologies has been engaged to provide COVID-19 tracking services. Prevention and management of environmental disasters Tsunamis can be detected by tsunami warning systems. They can make use of AI. Floods can also be detected using AI systems. Wildfires can be predicted using AI systems.
Wildfire detection is possible with AI systems (e.g. through satellite data, aerial imagery, and the GPS positions of personnel phones), which can help in the evacuation of people during wildfires, investigate how householders responded in wildfires, and spot wildfires in real time using computer vision. Earthquake detection systems are now improving alongside the development of AI technology through measuring seismic data and implementing complex algorithms to improve detection and prediction rates. Earthquake monitoring, phase picking, and seismic signal detection have developed through deep-learning AI algorithms, analysis, and computational models. Locust breeding areas can be approximated using machine learning, which could help to stop locust swarms in an early phase. Reception Benefits Algorithmic regulation is supposed to be a system of governance where more exact data, collected from citizens via their smart devices and computers, is used to more efficiently organize human life as a collective. As Deloitte estimated in 2017, automation of US government work could save 96.7 million federal hours annually, with a potential savings of $3.3 billion; at the high end, this rises to 1.2 billion hours and potential annual savings of $41.1 billion. Criticism There are potential risks associated with the use of algorithms in government. Those include algorithms becoming susceptible to bias, a lack of transparency in how an algorithm may make decisions, and accountability for any such decisions. According to the 2016 book Weapons of Math Destruction, algorithms and big data are suspected to increase inequality due to opacity, scale and damage. There is also a serious concern that gaming by the regulated parties might occur: once more transparency is brought into decision making by algorithmic governance, regulated parties might try to manipulate the outcome in their own favor and even use adversarial machine learning. According to Harari, the conflict between democracy and dictatorship is seen as a conflict of two different data-processing systems; AI and algorithms may swing the advantage toward the latter by processing enormous amounts of information centrally. In 2018, the Netherlands employed an algorithmic system SyRI (Systeem Risico Indicatie) to detect citizens perceived as being at high risk of committing welfare fraud, which quietly flagged thousands of people to investigators. This caused a public protest. The district court of The Hague shut down SyRI referencing Article 8 of the European Convention on Human Rights (ECHR). The contributors of the 2019 documentary iHuman expressed apprehension of "infinitely stable dictatorships" created by government AI. Due to public criticism, the Australian government announced the suspension of the Robodebt scheme's key functions in 2019, and a review of all debts raised using the programme. In 2020, algorithms assigning exam grades to students in the UK sparked open protest under the banner "Fuck the algorithm." This protest was successful and the grades were withdrawn. In 2020, the US government software ATLAS, which runs on Amazon Cloud, sparked uproar from activists and Amazon's own employees. In 2021, the Eticas Foundation launched a database of governmental algorithms called Observatory of Algorithms with Social Impact (OASI). Algorithmic bias and transparency An initial approach towards transparency included the open-sourcing of algorithms. Software code can be looked into and improvements can be proposed through source-code-hosting facilities.
Public acceptance A 2019 poll conducted by IE University's Center for the Governance of Change in Spain found that 25% of citizens from selected European countries were somewhat or totally in favor of letting an artificial intelligence make important decisions about how their country is run. Researchers found some evidence that when citizens perceive their political leaders or security providers to be untrustworthy, disappointing, or immoral, they prefer to replace them with artificial agents, whom they consider to be more reliable. The evidence was established by survey experiments on university students of all genders. A 2021 poll by IE University indicates that 51% of Europeans are in favor of reducing the number of national parliamentarians and reallocating these seats to an algorithm. This proposal has garnered substantial support in Spain (66%), Italy (59%), and Estonia (56%). Conversely, the citizens of Germany, the Netherlands, the United Kingdom, and Sweden largely oppose the idea. The survey results exhibit significant generational differences. Over 60% of Europeans aged 25-34 and 56% of those aged 34-44 support the measure, while a majority of respondents over the age of 55 are against it. International perspectives also vary: 75% of Chinese respondents support the proposal, whereas 60% of Americans are opposed. In popular culture The novels Daemon and Freedom™ by Daniel Suarez describe a fictional scenario of global algorithmic regulation. Matthew De Abaitua's If Then imagines an algorithm supposedly based on "fairness" recreating a premodern rural economy. See also Anti-corruption Civic technology Code for America Cyberocracy Cyberpunk Cybersyn Digital divide Digital Nations Distributed ledger technology law Dutch childcare benefits scandal ERulemaking Lawbot Legal informatics Management cybernetics Multivac Predictive analytics Sharing economy Smart contract Citations General and cited references Wikipedia article: Code: Version 2.0. External links Government by Algorithm? by Data for Policy 2017 Conference Government by Algorithm by Stanford University A governance framework for algorithmic accountability and transparency by European Parliament Algorithmic Government by Zeynep Engin and Philip Treleaven, University College London Algorithmic Government by Prof. Philip C. Treleaven of University College London Artificial Intelligence for Citizen Services and Government by Hila Mehr of Harvard University The OASI Register, algorithms with social impact iHuman (Documentary, 2019) by Tonje Hessen Schei How Blockchain can transform India: Jaspreet Bindra Can An AI Design Our Tax Policy? New development: Blockchain—a revolutionary tool for the public sector, An introduction on the Blockchain's usage in the public sector by Vasileios Yfantis A bold idea to replace politicians by César Hidalgo Applications of artificial intelligence Collaboration E-government Social influence Social information processing Social networks Social systems Sociology of technology Sustainability Technological utopianism Technology in society Transhumanism
Government by algorithm
[ "Technology", "Engineering", "Biology" ]
5,327
[ "Information society", "Government by algorithm", "Genetic engineering", "Transhumanism", "Automation", "Computing and society", "nan", "Ethics of science and technology" ]
63,425,559
https://en.wikipedia.org/wiki/Caesium%20cyanide
Caesium cyanide (chemical formula: CsCN) is the caesium salt of hydrogen cyanide. It is a white solid, easily soluble in water, with a smell reminiscent of bitter almonds, and with crystals similar in appearance to sugar. Caesium cyanide has chemical properties similar to potassium cyanide and is very toxic. Production Hydrogen cyanide reacts with caesium hydroxide giving caesium cyanide and water: HCN + CsOH → CsCN + H2O. References Cyanides Caesium compounds
Caesium cyanide
[ "Chemistry" ]
121
[ "Inorganic compounds", "Inorganic compound stubs" ]
63,425,845
https://en.wikipedia.org/wiki/Digital%20religion
Digital religion is the practice of religion in the digital world, and the academic study of such religious practice. History Digital religion is now a modern subfield stemming from digital culture. In the mid-1990s, "cyber-religion" was a term that arose to describe the interface between religion and virtual reality technologies. Most scholars started documenting how religious groups moved worship online and how religious rituals were performed. By the first decade of the 21st century, the term "digital religion" became more dominant, and has often been studied in terms of religion's developments in the Web 2.0 world. It has tended to also make a distinction between "religion online" (religious practice facilitated by the digital) and "online religion" (religious practice that is transformed and takes new forms in the digital). As Heidi Campbell's work on digital creatives and the rethinking of religious authority emphasizes, it is imperative to re-frame one's spirituality in conjunction with navigating an online community. Different religions and cultures may combine digital media and religion for various reasons and through various practices to accomplish their goals. One common theme in digital religion is that they typically all have virtual religious spaces. Virtual Christianity The practice of engaging others in the Christian religion using virtual methods such as streaming services, mobile phone apps and social media platforms. Cyber-Christian spaces engage with people in congregations and organizations, as well as non-practitioners of the faith. Some use technology to hold Bible studies and worship services, and even as a Christian dating tool. Virtual Christianity is commonly referred to as Virtual Church rather than by a direct reference to religious affiliation. Virtual Buddhism The practice of engaging others in Buddhism using virtual methods such as streaming services, mobile phone apps and social media platforms. Connelly suggests that it is important to identify the positioning of Buddhists in society and in the virtual world as well. Buddhism approaches virtual reality in an interdisciplinary way. Virtual Judaism The practice of engaging others in Judaism using virtual methods such as streaming services, mobile phone apps and social media platforms. In particular the Chabad, a Jewish ultra-Orthodox movement, sheds light on a fundamentalist society interacting with new media, by negotiating between modernity and religious piety. Virtual Hikari no Wa The practice of engaging others in the Hikari no Wa religion using virtual methods such as streaming services, mobile phone apps and social media platforms. As discussed in Japanese New Religions Online: Hikari no Wa and Net Religion, the movement uses different forms of technology for different purposes, for instance for training members and leaders, communication, broadening the reach of their messages, and hopefully connecting with new converts. References Digital humanities Religious studies
Digital religion
[ "Technology" ]
571
[ "Digital humanities", "Computing and society" ]
63,425,909
https://en.wikipedia.org/wiki/Nokia%208.3%205G
The Nokia 8.3 5G is a Nokia-branded smartphone released by HMD Global, running the Android One variant of Android. It is HMD's first 5G phone, and was announced on 19 March 2020 alongside the Nokia 5.3, Nokia 1.3, and Nokia 5310 (2020). The Nokia 8.3 5G is claimed by HMD executive Juho Sarvikas to be the first smartphone compatible with every 5G network operational as of its announcement. It was released in September 2020. Design and specifications The Nokia 8.3 5G is powered by the Qualcomm Snapdragon 765G system-on-chip (SoC). Depending on model, it has either 6 or 8 GB of RAM, and 64 or 128 GB of internal storage, which can be expanded with a microSD card, up to 512 GB. In the dual-SIM model, there is space for two nano-SIMs and a microSD card. The phone weighs 220 g and is 9 mm thick, which is comparable with the Nokia Lumia 1320 and the Lumia 1520, introduced in 2013. It has a 6.81" FHD+ IPS LCD screen with 'PureDisplay' technology, a 24 MP punch-hole camera and a chin at the bottom with the Nokia logo. The Nokia 8.3 5G has a quad-camera system with ZEISS optics, consisting of a 64 MP rear camera, a 12 MP ultra-wide camera, a 2 MP macro camera, and a 2 MP depth camera. The 8.3 5G is the second Nokia Android smartphone to be advertised with a 'PureView' camera, after the Nokia 9 PureView. Nokia 8V 5G UW In March 2020, Nokia released a Verizon carrier-locked version of this phone called the Nokia 8V 5G UW. It is identical to the Nokia 8.3 except that it is Verizon locked and is not an Android One smartphone. It is also locked to Android 10 and cannot receive updates. Following the expiration of the contract and the release of unlocked Verizon-compatible Nokia phones, the price has been significantly reduced. James Bond film The James Bond film No Time to Die includes the Nokia 8.3 5G as product placement. References 8.3 5G Mobile phones introduced in 2020 Mobile phones with multiple rear cameras Mobile phones with 4K video recording PureView Discontinued smartphones
Nokia 8.3 5G
[ "Technology" ]
501
[ "Mobile technology stubs", "Mobile phone stubs" ]
63,426,102
https://en.wikipedia.org/wiki/PR%20toxin
Penicillium roqueforti toxin (PR toxin) is a mycotoxin produced by the fungus Penicillium roqueforti. In 1973, PR toxin was first partially characterized after being isolated from moldy corn on which the fungus had grown. Although its lethal dose was determined shortly after the isolation of the chemical, details of its toxic effects were not fully clarified until 1982 in a study with mice, rats, anesthetized cats and preparations of isolated rat auricles. Structure and reactivity PR toxin contains multiple functional groups, including acetoxy (CH3COO−), aldehyde (−CHO), α,β-unsaturated ketone (−C=C−CO−) and two epoxides. The aldehyde group on C-12 is directly involved in the biological activity, as its removal leads to inactivation of the compound. The two epoxide groups do not play an important role, as their removal showed no difference in activity. When exposed to air, PR toxin may decompose. How and why this happens is, however, not known. Synthesis PR toxin is derived from the 15-carbon hydrocarbon aristolochene, a sesquiterpene produced from farnesyl diphosphate in a reaction catalyzed by the enzyme aristolochene synthase. Aristolochene then gains an alcohol, a ketone, and an additional alkene, mediated by hydroxysterol oxidase and quinone oxidoreductase. Addition of the fused-epoxide oxygen by P450 monooxygenase gives eremofortin B. Epoxidation of the isopropenyl sidechain, again by P450 monooxygenase, and addition of the acetyl group by an acetyltransferase gives eremofortin A. A short-chain oxidoreductase oxidizes a methyl group on the side-chain to eremofortin C, the primary alcohol analog of PR toxin, which is then further oxidized by a short-chain alcohol dehydrogenase to give the aldehyde. Eremofortin C has been isolated from microbial sources and found to be in a spontaneous equilibrium between an open-chain hydroxy–ketone structure and a lactol form. Metabolism Different experiments have shown the effects of PR toxin on liver cells in culture (in vitro) and in the liver (in vivo). In vitro PR toxin caused an inhibition of the incorporation of amino acids. These results show that the toxin was responsible for altering the translation process. Together with some earlier experiments, this proved that PR toxin was indeed active on cell metabolism. Another interesting finding is the decreased activity of respiratory control and oxidative phosphorylation in the (isolated) mitochondria of the liver. Apparently the amount of polysomes was not the determining factor: the inhibition was not decreased by increasing the amount of polysomes. Increasing the amount of pH 5 enzymes, on the other hand, had a significant effect: a higher concentration of pH 5 enzymes made the inhibition less effective. These findings indicated that PR toxin was not altering the polysomes but in some way impairs the pH 5 enzymes. In vivo When PR toxin was directly administered to rats, protein synthesis in the liver was lower than it normally would be. This in vivo administration showed that isolated cells from the rat's liver had a much lower transcriptional capacity. The toxin did not alter the uptake of amino acids in the liver; only the translational process was affected. The toxic effect of this toxin is, as expected, closely linked to the inhibition of protein synthesis. However, the real toxic effect could be that some required proteins are not made in sufficient amounts. 
Mechanism of action Multiple experiments have shown the different effects of PR toxin: it can cause damage to the liver and kidney, can induce carcinogenicity, and can in vivo inhibit DNA replication, protein synthesis, and transcription. Most experiments on the effect of PR toxin focus on the inhibition of protein synthesis and impairment of the liver. PR toxin disrupts the transcriptional process in the liver. RNA polymerases I & II, the two main RNA polymerase systems in the liver, are affected by the toxin. The toxin needs no further enzymatic conversion to exert its effects on these systems. The liver appears to be the organ most affected by PR toxin. Toxicity The toxicity of PR toxin was measured both intraperitoneally and orally. The first determined median lethal dose of pure PR toxin administered intraperitoneally to weanling rats was 11 mg/kg. The oral median lethal dose was 115 mg/kg. The same study reported that ten minutes after an oral dose of 160 mg/kg, the animals experienced breathing problems that eventually led to death. Acute rat studies (mg/kg): oral LDLo 115; intraperitoneal LD50 11.6; intravenous LD50 8.2. Acute mouse studies (mg/kg): oral LD50 72; intraperitoneal LD50 2; intravenous LD50 2. No acute human study has yet been done, so no human LD50 values or doses are known. However, there is one case report from 1982 describing toxic effects in a human. This person was working in a factory in which blue cheese was produced. She inhaled the mold Penicillium roqueforti and developed hypersensitivity pneumonitis. Because of this lung inflammation, the person experienced, among other things, coughing, dyspnea, reduced lung volumes and hypoxemia. Antibodies against the mold were found afterwards in serum and lavage fluid. Human LD50 values therefore remain undetermined. Effects on animals Studies of the effects on animals were done on mice, rats, anesthetized cats and preparations of isolated rat auricle. Toxic effects in mice and rats included abdominal writhing, decreased motor activity and respiration rate, weakness of the hind legs and ataxia. The effects differed depending on the way PR toxin was taken up. When the median lethal dose was ingested orally, the pathology was described as a swollen, gas-filled stomach and intestines as well as edema and congestion in the lungs. The kidney showed degenerative changes as well as hemorrhage. If PR toxin was injected intraperitoneally, cats, mice and rats developed ascites fluid and edema of the lungs and scrotum, while intravenous injection produced, in the same animals, large volumes of pleural and pericardial fluid as well as lung edema. In conclusion, tissue cells and blood vessels were directly damaged by PR toxin. This caused leakage of fluid resulting, among other things, in edema of the lungs and ascites fluid. The damage to the blood vessels also resulted in increased capillary permeability. This increased permeability led to a decrease in blood volume and direct damage to the vital organs including lungs, kidneys, liver and heart. References Mycotoxins Epoxides Acetates Ketones Heterocyclic compounds with 3 rings
PR toxin
[ "Chemistry" ]
1,545
[ "Ketones", "Functional groups" ]
70,646,393
https://en.wikipedia.org/wiki/Seam-shifted%20wake
Seam-shifted wake (SSW) is an aerodynamic phenomenon involving baseballs. The term was coined in 2019 by Andrew Smith during his work on the phenomenon with Barton L. Smith (no relation) at Utah State University (USU). Nazmus Sakib and John Garrett also contributed to the early work. The USU group showed that Major League Baseball (MLB) baseball seams, when in specific locations relative to the direction of the ball, force the boundary layer to separate earlier (closer to the front of the ball) than it normally would. If this occurs on one side of the ball and not on the opposite side, a net force is produced. This force can be similar in magnitude to that caused by spin (i.e. the Magnus effect). In the particle image velocimetry image shown, the seam on the top of the ball is causing separation (wake formation) on the top closer to the front of the ball than normal. The separation on the bottom of the ball is at the normal location. The resultant movement from this effect had been noted by others before and named the "laminar effect", based on a mistaken notion that smooth portions of the ball caused a laminar boundary layer more prone to separation than a turbulent boundary layer. This was most often discussed with respect to 2-seam fastballs. Data from MLB on pitch spin and movement, which were not available in 2019, now make it clear that SSW effects are present in most pitches. Seam effects cause two-seam fastballs and changeups to develop additional arm-side movement as well as sink. Four-seam fastballs can gain extra vertical ride as well as glove-side movement. Certain sliders will "sweep" (or move arm-side) due to seam effects. Some claim benefits from SSW pitches in terms of batter outcomes. Evidence of seam-shifted wake pitches for MLB pitchers can be found by comparing spin-based movement to observed movement on MLB's Spin Leaderboard. These two values being equal indicates no seam-shifted wake, while differences between them (called deviation) are a sign of seam-shifted wake. Several extensive explanations of how SSW works and what it does to pitches are available. Laminar flow and turbulence In the game of cricket, swing bowling is an effect that occurs when the hemispherical seam on the ball is positioned such that the boundary layer on one side of the ball remains ordered and steady (i.e. laminar) while the other side becomes turbulent. The ball's trajectory deflects toward the turbulent side. Around 2019, many attempted to explain deflection of baseball trajectories by claiming smooth portions of the ball resulted in laminar flow, while seams resulted in turbulent flow. It has been shown that, were this the case in baseball, the deflection would be in the opposite direction from what is observed. The baseball seam's most important effect is to remove the boundary layer from the side on which it resides, leading to a deflection toward the seam. The reason for the difference in the effect of a cricket ball seam and a baseball seam is two-fold: 1) the baseball seam has a more complicated pattern that mostly prohibits having a seam on only one side of the ball, and 2) the baseball seam is shaped more like a ramp than a series of small bumps, as in the case of a cricket ball. Examples of SSW through the years Greg Maddux - Greg Maddux was given a ball that had a scuff on one side. 
He was known for his knee-bending 2-seam fastball; however, when he threw this ball he angled the scuff to the left side of the ball, in turn giving more movement to the right. Shohei Ohtani - Shohei Ohtani's sweeper was the most valuable sweeper in 2023 based on bWAR and runs prevented. It exhibited 3.8 inches more movement than the average sweeper in the league. Paul Skenes - Paul Skenes' "splinker" was the best pitch in MLB in 2024 based on batting average against. It was a sinker with a lower-than-average spin rate and a different spin axis, allowing more downward movement compared to the average sinker. References Baseball Aerodynamics Pitching (baseball)
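The "deviation" check described above (observed movement compared with spin-predicted movement) is simple arithmetic. The following is a minimal illustrative sketch; the function and parameter names are hypothetical and do not come from MLB's published tooling.

```python
def ssw_deviation(observed_break: float, spin_based_break: float) -> float:
    """Return the movement 'deviation' in inches: observed minus spin-predicted.

    A value near zero means the pitch moves about as its spin (Magnus effect)
    alone would predict; a large magnitude is the signature attributed to a
    seam-shifted wake.
    """
    return observed_break - spin_based_break


# Example: a sinker whose spin predicts 14.0 inches of arm-side run but which
# is observed to run 17.5 inches shows +3.5 inches of deviation.
print(ssw_deviation(observed_break=17.5, spin_based_break=14.0))  # 3.5
```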
Seam-shifted wake
[ "Chemistry", "Engineering" ]
874
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
70,646,792
https://en.wikipedia.org/wiki/Direct%20detection%20of%20dark%20matter
Direct detection of dark matter is the science of attempting to directly measure dark matter collisions in Earth-based experiments. Modern astrophysical measurements, such as from the cosmic microwave background, strongly indicate that 85% of the matter content of the universe is unaccounted for. Although the existence of dark matter is widely believed, neither the form it takes nor its precise properties have ever been determined. There are three main avenues of research to detect dark matter: attempts to make dark matter in accelerators, indirect detection of dark matter annihilation, and direct detection of dark matter in terrestrial labs. The founding principle of direct dark matter detection is that since dark matter is known to exist in the local universe, as the Earth, Solar System, and the Milky Way Galaxy carve out a path through the universe they must intercept dark matter, regardless of what form it takes. Direct detection of dark matter faces several practical challenges. The theoretical bounds on the supposed mass of dark matter are immense, spanning some 90 orders of magnitude, from roughly a zepto-electronvolt up to about a solar mass. The lower limit on the dark matter mass is constrained by the knowledge that dark matter exists in dwarf galaxies. From this knowledge a lower constraint is put on the mass of dark matter, as any less massive dark matter would have a de Broglie wavelength too large to fit inside observed dwarf galaxies. On the other end of the spectrum, the upper limit of dark matter mass is constrained experimentally; gravitational microlensing using the Kepler telescope has been used to search for MACHOs (MAssive Compact Halo Objects). Null results of this experiment exclude any dark matter candidate more massive than about a solar mass. As a result of this extremely vast parameter space, there exists a wide variety of proposed types of dark matter, in addition to a broad assortment of proposed experiments and methods to detect them. The spectrum of proposed dark matter mass is split into three broad, loosely defined categories as follows: In the range of zepto-electronvolts (zeV) to 1 eV, theories predict bosonic or field-like dark matter. The primary dark matter candidates in this range are axions, or axion-like particles. From about 1 eV to the Planck mass, dark matter is projected to be fermionic or particle-like. Favorites in this range include WIMPs, thermal relics, and sterile neutrinos. Finally, in the mass range between the Planck mass and masses on the order of the solar mass, dark matter would be a composite particle. The leading candidates for composite dark matter are primordial black holes. Bosonic / field dark matter Any dark matter candidate with a mass less than approximately 1 eV and greater than 1 zeV is projected to behave as a boson, or a field, as opposed to a more traditional particle. Any lesser mass could not fit its de Broglie wavelength into dwarf galaxies. Axions Axions are theoretical, as-yet-undiscovered subatomic particles originally proposed in 1977 to solve inconsistencies in the Standard Model, i.e. the strong CP problem. A consequence of this solution is to generate an axion field, which would in turn imply a cosmological abundance of axions that depends on the mass of the axion. If the axion mass is heavier than 5 μeV/c², then axions could account for all dark matter phenomena. One of the only experiments to detect axions as dark matter is the Axion Dark Matter Experiment (ADMX). 
Located at the University of Washington, ADMX uses a resonant microwave cavity in a strong magnetic field to convert dark matter into microwave photons by means of the Primakoff effect. Microwave cavities are simple electrical devices that are built to resonate at extremely precise frequencies to create standing microwaves inside the cavity. ADMX uses this technology to tune its microwave cavity to the resonance of axions located in the Milky Way halo. The purpose of this is to increase the interaction of axions with the high-strength eight-tesla magnetic field present, to better facilitate the Primakoff effect. The Primakoff effect is an as-yet-unproven mechanism for the production of mesons from high-energy interactions of photons with a nucleus. Axions qualify for this interaction, meaning that infamously undetectable dark matter could theoretically be converted into mundane photons. Although ADMX has yet to detect dark matter, its capabilities are promising. The experiment is capable of probing previously difficult-to-reach sections of the parameter space. The primary downside of the ADMX experiment is that the microwave cavity requires very fine tuning, meaning only a minuscule amount of the parameter space is probed at a time. Weakly interacting slim particles (WISPs) Weakly interacting slim particles (WISPs) are a broader category of particles with extremely small masses and interaction cross sections, of which axions are a member. Active neutrinos are the only WISP confirmed to exist, although they have been definitively ruled out as a dark matter candidate. In common usage, WISP is generally used to refer to any non-axion ultralight dark matter particle. Leading theories suggest that such particles would interact with the Standard Model largely through coupling to photons, and would survive to the modern era after creation in the early universe. Fermionic / particle dark matter Dark matter masses between 1 eV and the Planck mass are hypothesized to correspond to fermionic particles. Weakly interacting massive particles Weakly interacting massive particles (WIMPs) are a broad category of theoretical particles that interact not at all or only very weakly with all forces except gravity. WIMPs are members of a broader category of particles called thermal relics, particles which were created thermally in the early universe, as opposed to being created non-thermally later during a phase transition. As with all dark matter candidates, the interaction probability is extraordinarily low, which has led to the development of a variety of detection techniques. Experimental techniques Direct detection of dark matter is based upon the premise that since it is known that dark matter exists in some form, Earth must intercept some as it carves out a path through the universe. Direct detection experiments attempt to create highly sensitive systems capable of detecting these rare and weak events. Cryogenic crystal detectors Cryogenic crystal detectors use disks of germanium and silicon cooled to around 50 millikelvin. These disks are coated in either tungsten or aluminum. An interacting WIMP would in theory excite the crystal lattice, sending vibrations to the surface, which is held precisely at its superconductivity threshold. Because of this, the coating material's resistivity is highly temperature-dependent, enough so that the energy deposited by the vibration is detectable. One such detector is the Cryogenic Rare Event Search with Superconducting Thermometers (CRESST), located at the Gran Sasso National Laboratory in Assergi, Italy. 
Operating in multiple generations since 2000, CRESST has continually evolved and improved its sensitivity range, although it has not yet definitively detected dark matter. As a notable side achievement, CRESST was the first experiment to detect the alpha decay of tungsten-180. The most recent generation of CRESST has enhanced its capabilities to detect WIMP dark matter as light as 160 MeV/c². Noble gas scintillators Noble gas scintillators use the property of certain materials to scintillate, which is when a material absorbs energy from a particle and re-emits that energy as light. Of particular interest for dark matter detection is the use of noble gases, more specifically liquid xenon. The XENON series of experiments, also located at the Gran Sasso National Laboratory, is a forefront user of liquid xenon scintillators. Common across all generations of the experiment, the detector consists of a tank of liquid xenon with a gaseous layer on top. At the top and bottom of the detector is a layer of photomultiplier tubes (PMTs). When a dark matter particle collides with the liquid xenon, it rapidly releases a photon which is detected by the PMTs. To cross-reference this signal, an electric field is applied which is sufficiently large to prevent complete recombination of the electrons knocked loose by the interaction. These drift to the top of the detector and are also detected, creating two separate detections for each event. Measuring the time delay between these allows for a complete 3-D reconstruction of the interaction. The detector is also able to discriminate between electronic recoils and nuclear recoils, as the two types of events produce differing ratios of photon energy to released electron energy. The most recently completed version of the XENON experiment is XENON1T, which used 3.2 tons of liquid xenon. This experiment produced a then-record upper limit on the WIMP-nucleon cross section at a mass of 30 GeV/c². The most recent iteration of the XENON program is XENONnT, which is currently running with 8 tonnes of liquid xenon. This experiment is projected to probe even smaller WIMP-nucleon cross sections for a 50 GeV/c² WIMP mass. At this ultra-low cross section, interference from the background neutrino flux is predicted to be problematic. Crystal scintillators Crystal scintillator experiments are a middle ground between cryogenic crystal detectors and noble gas scintillators, using the crystals of the former and the scintillation properties of the latter. One such experiment that uses this technology is the DAMA/LIBRA experiment, once again located in the Gran Sasso National Laboratory in Italy. Uniquely among dark matter experiments, DAMA/LIBRA attempts to measure an annual variation of the flux of dark matter. This concept is born from the knowledge that as the Earth's orbital motion moves in and out of alignment with the Sun's motion through the Milky Way, the relative motion of a terrestrial detector with respect to the dark matter halo changes, resulting in a differing flux of dark matter. DAMA/LIBRA has claimed to see such a modulation, although the scientific community as a whole has yet to accept these results as valid. Skeptics of this result argue that it is not due to a variation of the WIMP flux, but rather due to uncontrolled seasonal changes. To test this, similar experiments, namely the Sodium-iodide with Active Background Rejection (SABRE) experiment, are being built at Gran Sasso and at a second site in Australia. 
The purpose of spreading the experiments across both hemispheres is that if the modulation at the two locations is in phase, that would positively indicate a change in the dark matter flux, whereas if the measured variations are six months out of phase, that would indicate unaccounted-for seasonal variations. Bubble chambers Bubble chambers, originally invented in 1952, are largely phased out but still have some use in WIMP dark matter detection. Bubble chambers are filled with superheated liquid held close to its phase transition. When a particle interacts with the superheated liquid, the energy it imparts is enough to trigger a phase transition, causing any charged particles to leave an ionization trail of bubbles, which are detected. One such experiment that uses a bubble chamber is PICO, at SNOLAB in Canada. PICO was formed in 2013 as a combination of two previous similar experiments, PICASSO and COUPP. PICO employs a more advanced form of bubble chamber, using individual droplets of a superheated liquid, namely Freon, that are suspended in a gel matrix. The advantage of this setup is that the individual droplets slow down the phase transition, allowing for longer periods of detector activity. PICO currently has a 2-liter and a 60-liter detector, with a new version in the range of 250-500 liters being planned. Although PICO, like all bubble chambers, has extremely low background noise, it still detects anomalous background events inconsistent with assumed dark matter characteristics. Additionally, PICO was able to rule out interactions with unwanted iodine as the cause of the previously mentioned DAMA/LIBRA experiment's claimed dark matter modulation. Sterile neutrinos A sterile neutrino is a theoretical type of neutrino that interacts only via gravity. The weak force interacts only with particles of left-handed chirality, i.e. left-handed neutrinos. Sterile neutrinos are proposed to be right-handed, meaning they would only interact with gravity. Sterile neutrinos are viable dark matter candidates because they only interact via gravity, as is predicted for dark matter. Unfortunately, most current theories predict cold dark matter, meaning dark matter candidates that are non-relativistic. Due to their mass and energy, sterile neutrinos would likely be relativistic and thus count as hot dark matter. Sterile neutrinos could still be a constituent of dark matter, but it is highly unlikely that they are the only component. Composite dark matter Dark matter candidates with masses between the Planck mass and masses on the order of the solar mass are hypothesized to be macroscopic composite objects. Masses much beyond the solar mass are ruled out observationally by the lack of gravitational microlensing events using the Kepler telescope. Primordial black hole Primordial black holes are black holes that formed very early in the universe, and not through the collapse of a star. The theory behind primordial black holes is that in the extremely early universe, less than one second after the Big Bang, random density fluctuations would cause local gravitational collapse into black holes. Since primordial black holes did not form from stellar collapse, they can have masses far below that of a solar mass, ranging from 10 micrograms to many solar masses. However, only primordial black holes with masses above 10¹¹ kg would still exist today, as any less massive would have completely evaporated via Hawking radiation by the modern era. 
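As a rough back-of-the-envelope check on the 10¹¹ kg survival threshold quoted above, the standard semiclassical evaporation-time estimate can be inverted for the minimum surviving mass (an illustrative sketch; the numbers are approximate and are not taken from a specific source cited here):

\[
\tau_{\mathrm{ev}} \approx \frac{5120\,\pi\,G^{2}M^{3}}{\hbar c^{4}}
\quad\Longrightarrow\quad
M_{\mathrm{min}} \approx \left(\frac{\tau_{\mathrm{universe}}\,\hbar c^{4}}{5120\,\pi\,G^{2}}\right)^{1/3} \approx 2\times 10^{11}\ \mathrm{kg},
\]

taking the age of the universe as roughly 13.8 billion years, which agrees with the order of magnitude stated above.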
Primordial black holes are plausible dark matter candidates; however, arguments based upon their observed abundance cast doubt on their ability to be the only constituent of dark matter. Conversely, other research groups claim that gravitational waves detected by LIGO/VIRGO are consistent with primordial black holes making up 100% of dark matter, provided a relatively large number of them were clustered within the halos of dwarf galaxies. An additional inconsistency with this claim is that the claimed primordial black hole mass could overlap with the mass range excluded by Kepler microlensing. The Gaia spacecraft, launched by the European Space Agency, is tasked with creating the largest and most detailed map of space and all objects within it ever created, including possible composite dark matter candidates. Although it is not specifically searching for dark matter, it is possible that dark matter scientists will be able to find dark matter candidates among the 1 billion objects it will catalogue during its lifetime. References Dark matter Physics beyond the Standard Model Observational cosmology
Direct detection of dark matter
[ "Physics", "Astronomy" ]
3,033
[ "Dark matter", "Unsolved problems in astronomy", "Concepts in astronomy", "Unsolved problems in physics", "Particle physics", "Exotic matter", "Physics beyond the Standard Model", "Matter" ]
70,646,864
https://en.wikipedia.org/wiki/H3S28P
H3S28P is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates the phosphorylation of the 28th serine residue of the histone H3 protein. Early mitosis phosphorylation patterns for H3S10 and H3S28 are quite similar, starting at the initiation of chromosomal condensation during prophase. Phosphorylation of H3S28 affects transcription and promotes acetylation of neighbouring histone residues. Nomenclature The name of this modification indicates the phosphorylation of serine 28 on the histone H3 protein subunit. Serine/threonine/tyrosine phosphorylation The addition of a negatively charged phosphate group can lead to major changes in protein structure, leading to the well-characterized role of phosphorylation in controlling protein function. It is not clear what structural implications histone phosphorylation has, but histone phosphorylation has clear functions as a post-translational modification. Effect of modification A substantial number of phosphorylated histone residues are associated with gene expression. Interestingly, these are often related to the regulation of proliferative genes. Phosphorylation of serines 10 and 28 of H3 and serine 32 of H2B has been associated with the regulation of epidermal growth factor (EGF)-responsive gene transcription. Depending on the environment in which it happens, the same phosphorylated residue might have drastically different consequences on chromatin structure. H3S10 and H3S28 phosphorylation is an excellent illustration of this duality: both phosphorylated residues are involved in chromatin compaction during mitosis and meiosis, as well as chromatin relaxation after transcription activation. Because of the diverse roles that these phosphorylation events play, individual residue phosphorylation cannot be examined in isolation. The impact is determined by the cellular environment and interplay with other cis or trans changes. Transcription effect Early mitosis phosphorylation patterns for H3S10 and H3S28 are quite similar, starting at the initiation of chromosomal condensation during prophase. At gene promoters, H3S28 phosphorylation is expected to remove Polycomb repressive complexes from chromatin, causing demethylation and acetylation of the nearby K27 residue, thus activating transcription. This transition is mediated by the phosphorylation of H3S28 at a site next to H3K27. It was discovered that phosphorylating H3S28 in the promoters of genes such as c-fos and globin genes controlled their activation. Acetylation effects In addition to H3K27ac, H3S28p plays a role in transcription activation. H3S28p prevents H3K27me3 from accumulating, allowing H3K27 acetylation to occur. Similarly, it was discovered that phosphorylation of H3S28 promotes K9 acetylation. Specific kinases for modification UVB radiation phosphorylates H3S10 and H3S28, while H3S28 is additionally phosphorylated by the kinases MSK1, ERK1, p38, FYN, MLTK, ERK2, and to a lesser extent JNK1 and JNK2. Histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. Post-translational modifications of histones, such as histone phosphorylation, have been shown to modify the chromatin structure by changing protein:DNA or protein:protein interactions. 
The most commonly associated histone phosphorylation occurs during cellular responses to DNA damage, when phosphorylated histone H2A separates large chromatin domains around the site of DNA breakage. Researchers investigated whether modifications of histones directly impact RNA polymerase II directed transcription. Researchers chose proteins that are known to modify histones to test their effects on transcription, and found that the stress-induced kinase MSK1 inhibits RNA synthesis. Inhibition of transcription by MSK1 was most sensitive when the template was in chromatin, since DNA templates not in chromatin were resistant to the effects of MSK1. It was shown that MSK1 phosphorylated histone H2A on serine 1, and mutation of serine 1 to alanine blocked the inhibition of transcription by MSK1. These results suggested that the acetylation of histones can stimulate transcription by suppressing an inhibitory phosphorylation by a kinase such as MSK1. Mechanism and function of modification Phosphorylation introduces a charged and hydrophilic group in the side chain of amino acids, possibly changing a protein's structure by altering interactions with nearby amino acids. Some proteins, such as p53, contain multiple phosphorylation sites, facilitating complex, multi-level regulation. Because of the ease with which proteins can be phosphorylated and dephosphorylated, this type of modification is a flexible mechanism for cells to respond to external signals and environmental conditions. Kinases phosphorylate proteins and phosphatases dephosphorylate proteins. Many enzymes and receptors are switched "on" or "off" by phosphorylation and dephosphorylation. Reversible phosphorylation results in a conformational change in the structure of many enzymes and receptors, causing them to become activated or deactivated. Phosphorylation usually occurs on serine, threonine, tyrosine and histidine residues in eukaryotic proteins. Histidine phosphorylation of eukaryotic proteins appears to be much more frequent than tyrosine phosphorylation. In prokaryotic proteins, phosphorylation occurs on serine, threonine, tyrosine, histidine, arginine or lysine residues. The addition of a phosphate (PO₄³⁻) group to a non-polar R group of an amino acid residue can turn a hydrophobic portion of a protein into a polar and extremely hydrophilic portion of a molecule. In this way protein dynamics can induce a conformational change in the structure of the protein via long-range allostery with other hydrophobic and hydrophilic residues in the protein. Epigenetic implications The post-translational modification of histone tails by either histone-modifying complexes or chromatin remodeling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large-scale projects: ENCODE and the Epigenomic Roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterized by different banding. 
Different developmental stages were profiled in Drosophila as well, with an emphasis placed on the relevance of histone modifications. A look into the data obtained led to the definition of chromatin states based on histone modifications. Certain modifications were mapped, and enrichment was seen to localize in certain genomic regions. The human genome is annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence reinforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell-specific gene regulation. Methods The histone mark can be detected in a variety of ways: 1. Chromatin immunoprecipitation sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region. 2. Micrococcal nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. Use of the micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes are seen to have enrichment of sequences. 3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses a hyperactive Tn5 transposase to highlight nucleosome localisation. References Epigenetics Post-translational modification
H3S28P
[ "Chemistry" ]
1,980
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
70,646,873
https://en.wikipedia.org/wiki/Jewish%20architecture
Jewish architecture comprises the architecture of Jewish religious buildings and other buildings that either incorporate Jewish elements in their design or are used by Jewish communities. Terminology Due to the diasporic nature of Jewish history, there is no single architectural style that is common across all Jewish cultures. Examples of buildings considered Jewish architecture include explicitly religious buildings such as synagogues and mikvehs, as well as Jewish schools. See also Synagogue architecture List of Jewish architects Jewish Architectural Heritage Foundation Sacral architecture References Architectural styles Jewish culture
Jewish architecture
[ "Engineering" ]
101
[ "Architecture stubs", "Architecture" ]
70,649,599
https://en.wikipedia.org/wiki/Tomaymycin
Tomaymycin is an antitumor and antibiotic pyrrolobenzodiazepine with the molecular formula C16H20N2O4. Tomaymycin is produced by the bacterium Streptomyces achromogenes. References Further reading Antibiotics Pyrrolobenzodiazepines O-methylated phenols
Tomaymycin
[ "Chemistry", "Biology" ]
75
[ "Antibiotics", "Biocides", "Biotechnology products" ]
70,649,628
https://en.wikipedia.org/wiki/Modal%20collapse
In modal logic, modal collapse is the condition in which every true statement is necessarily true, and vice versa; that is to say, there are no contingent truths, or to put it another way, "everything exists necessarily" (and likewise, if something does not exist, it cannot exist). In the notation of modal logic, this can be written as p → □p (for every proposition p). In the context of philosophy, the term is commonly used in critiques of ontological arguments for the existence of God and the principle of divine simplicity. For example, Gödel's ontological proof contains a theorem which, combined with the axioms of system S5, leads to modal collapse. Since some regard divine freedom as essential to the nature of God, and modal collapse as negating the concept of free will, this then leads to the breakdown of Gödel's argument. References Collapse Philosophical logic Theology Necessity
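Schematically, the collapse schema can be combined with the reflexivity axiom T to show that truth and necessary truth coincide. This is a standard observation about normal modal logics containing T, not a formula quoted from Gödel's proof itself:

\[
\vdash p \to \Box p \;\;(\text{modal collapse}), \qquad
\vdash \Box p \to p \;\;(\text{axiom T}), \qquad
\text{hence} \;\; \vdash p \leftrightarrow \Box p .
\]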
Modal collapse
[ "Mathematics" ]
184
[ "Mathematical logic stubs", "Mathematical logic", "Modal logic" ]
70,650,257
https://en.wikipedia.org/wiki/Covalent%20adaptable%20network
Covalent adaptable networks (CANs) are a type of polymer material that closely resemble thermosetting polymers (thermosets). However, they are distinguished from thermosets by the incorporation of dynamic covalent chemistry into the polymer network. When a stimulus (for example heat, light, pH, ...) is applied to the material, these dynamic bonds become active and can be broken or exchanged with other pending functional groups, allowing the polymer network to change its topology. This introduces reshaping, (re)processing and recycling into thermoset-like materials. Background Historically, polymer materials have always been subdivided in two categories based on their thermomechanical behaviour. Thermoplastic polymer materials melt upon heating and become viscous liquids, whereas thermosetting polymer materials remain solid as a result of cross-linking. Thermoplastics consist of long polymer chains that are stiff at service temperatures but become softer with increasing temperature. At low temperatures, the molecular motion of the polymer chains is limited due to chain-entanglements, resulting in a hard and glassy material. Increasing the temperature will lead to a transition from a hard to a soft material at the glass transition temperature (Tg) yielding a visco-elastic liquid. In the case of (semi-)crystalline polymer materials, viscous flow is achieved when the melting point (Tm) is reached and the intermolecular forces in the ordered crystalline domain are overcome. Thermoplastics regain their solid properties upon cooling and can thus be reshaped by polymer processing methods such as extrusion and injection moulding and they can also be recycled. Examples of thermoplastic polymers are polystyrene, polycarbonate, polyethylene, nylon, Acrylonitrile butadiene styrene (ABS), etc. Thermosets, on the other hand, are three-dimensional networks that are formed through permanent chemical cross-linking of multifunctional compounds. This is an irreversible process that results in infusible and insoluble polymer networks with superior properties compared to most thermoplastics. When a thermoset is exposed to heat, it maintains its dimensional stability and thus cannot be reshaped. These polymer materials are generally used for demanding applications (e.g. wind turbines, aerospace, etc.) that require chemical resistance, dimensional stability and good mechanical properties. Typical thermosetting materials include epoxy resins, polyester resins, polyurethanes, etc. In the framework of sustainability, the combination of the mechanical properties of thermosets with the reprocessability of thermoplastics through the introduction of dynamic bonds has been the topic of numerous research studies. The use of non-covalent interactions such as hydrogen bonding, pi-stacking or crystallization that lead to physical cross-links between polymer chains is one way of introducing dynamic cross-linking. The thermoreversible nature of the physical cross-links results in polymer materials with improved mechanical properties without losing reprocessability. The properties of these physical networks are highly dependent on the used backbone and type of non-covalent interactions, but typically they are brittle at low temperature and become elastic or rubbery above Tg. Upon further heating, the physical cross-links disappear and the material behaves as a visco-elastic liquid, allowing it to be reprocessed. These materials are also known as thermoplastic elastomers. 
Covalent adaptable networks (CANs) instead use dynamic covalent bonds that are able to undergo exchange reactions upon application of an external stimulus, typically heat or light. In absence of a stimulus, these materials behave as thermosets, showing high chemical resistance and dimensional stability, but when the stimulus is applied, the dynamic bonds become activated, enabling the network to rearrange its topology on a molecular level. As a result, these materials are able to undergo permanent deformations, enabling reshaping, reprocessing, self-healing, etc. As such, CANs can be seen as an intermediate bridge between thermosets and thermoplastics. In 2011, the research group of French researcher Ludwik Leibler developed a specific class of CANs based on an associative exchange mechanism (see subsection Classification). By adding a suitable catalyst to epoxy/acid polyester based networks, they were able to prepare a permanent epoxy network that showed a gradual viscosity decrease upon heating. This type of behaviour is typical for vitreous silica and had never before been seen in organic polymer materials. Therefore, the authors introduced the name Vitrimers for these kind of materials. Recent advancements in the field of CANs have shown their potential to someday replace conventional non-recyclable thermosetting materials. The exponential growth of publications involving CANs seen in literature indicate the increasing interest from academia. Additionally, there's also a growing interest in CANs from industry with, for example, the first vitrimer start-up company Mallinda and multiple European Union funded research projects with collaborations between academic and industry partners (such as Vitrimat, PUReSmart and NIPU-EJD). Classification CANs are currently subdivided in two groups, dissociative CANs and associative CANs, based on the underlying mechanism of the bond exchange reactions (i.e. the order in which the bond forming and breaking occurs) and their resulting temperature dependence. Dissociative CANs The exchange mechanism of dissociative CANs requires a bond-breaking event prior to the formation of a new bond (i.e. an elimination/addition pathway). Upon application of a stimulus, the equilibrium shifts to the dissociated state, resulting in a temporarily decreased cross-link density in the network. When a sufficient amount of dynamic bonds dissociate due to the equilibrium being shifted below the gel point, the material will suffer a loss of dimensional stability and show a sudden and drastic viscosity decrease.  After removal of the stimulus, the bonds reform and, in the ideal case, the original cross-link density is restored. This temporary decrease in cross-link density enables very fast topology rearrangements in dissociative CANs, such as viscous flow and stress relaxation, which allows the reprocessing of covalently cross-linked polymer networks. Additionally, dissociative CANs can be solubilized in good solvents. Associative CANs In contrast to dissociative CANs, networks in associative CANs do not depolymerize upon application of a stimulus and maintain a near constant cross-link density. Here, the exchange mechanism relies on the formation of a new bond before fragmentation of another bond (i.e. an addition/elimination pathway). This means that bond exchange occurs via a temporarily more cross-linked intermediate state. However, in practice, this small increase will often be negligible, resulting in a practically constant cross-link density. 
As a result, associative CANs typically remain insoluble in inert solvents, even at elevated temperatures, although it has become apparent that some associative CANs can be dissolved in a good solvent. In the case of Vitrimers, associative exchange is triggered by heat and the viscosity of these materials is controlled by chemical exchange reactions, leading to a linear dependence of viscosity with inverse temperature according to the Arrhenius law. The decreased viscosity caused by rapid dynamic bond exchanges enables stress relaxation and network topology rearrangements in these materials. Applications Recycling of PU foams Polyurethane (PU) foams are highly versatile engineering materials used for a wide range of applications such as mattresses, insulation, automotive, footwear and construction materials. Conventional PU foams are cross-linked materials or thermosets. PU foams can either be mechanically recycled (where PU foams are grinded and used as fillers), or chemically recycled (where PU foams are downcycled into polyols or other monomeric components via chemical degradation). However, most PU foams end up on landfills. Currently, CANs are being investigated as a replacement for conventional foams, which would allow for easier recyclability of PU waste. For example, it was shown recently that the incorporation of disulfide bonds in PU foams led to their malleability and reprocessability into elastomers. Another possible solution is the addition of catalyst to post-consumer PU, which activates the exchange of urethane bonds and makes them reprocessable . Self-healing materials Polymer networks are susceptible to damage during their use. Self-healing is a promising tool to increase the lifetime and performance of the polymer, while simultaneously reducing plastic waste. Self-healing can operate via extrinsic or intrinsic mechanisms. Extrinsic systems rely on the incorporation of small capsules containing healing agents that get released during damage/cracking and heal the material, while intrinsic systems are inherently able to restore their integrity through, for example, incorporation of dynamic bonds into the polymer network. The most known example of intrinsic self-healing is thermally healable crosslinked networks with Diels-Alder adducts, but various other chemistries have also been investigated, including transesterification, olefin metathesis, and alkoxyamine chemistry. Another promising strategy involves light-activated systems, such as photothermal and photoreversible chemistry. For photothermal systems, the healing is triggered by heating, even if light is the transient stimulus that makes the healing possible. Dynamic exchange reactions are also often activated by direct infrared heating with the assistance of photothermal fillers (e.g. carbon black, graphene, and gold nanoparticles). Self-healing materials based on direct photoreversible chemistry in principle don't involve heating. Some examples of this include the systems based on photoreversible cycloaddition that require ultraviolet (UV) irradiation, as well as photo-triggered radical reshufflings of sulfur-based dynamic covalent bonds. Nanocomposites Thermosets are currently in high demand for high-performance composites that are heavily needed in lightweight engineering and ultrahigh-performance mechanical parts. 
Applications include: packaging, remediation, energy storage, electromagnetic absorption, sensing and actuation, transportation and safety, defense systems, thermal flow control, information industry, catalysts, cosmetics, sports, etc. Such materials consist of a “soft” polymer phase that is combined with nanoparticles dispersed in the polymer phase. The shape of these nanoparticles can vary wildly, from rods to spheres to platelets, to fibres, etc. The unique thermo-responsive properties of CANs, induced by bond exchange kinetics, open interesting possibilities for the introduction of property switches based on various external effects. For example, the addition of a resistive heater for electrothermal conversion (e.g. single walled carbon nanotubes) can allow for an on-demand mechanical property switch via an electric current. Alternatively, by adding a filler like graphene oxide, light irradiation can be used for an induced photo-thermal effect allowing for switching of the mechanical properties as a response to light-irradiation. Other interesting nanoparticles for the application in CANs include clay nanosheets, graphene and cellulose. 3D printing In recent years, 3D printing, or additive manufacturing (AM), saw rapid developments as the technique became more and more popular. Currently, plastics are the most common raw material used for 3D printing due to their wide availability, diversity and light weight. The versatility of AM and its significant development resulted in its use for many applications ranging from manufacturing and medical sectors to the custom art and design sector. With the market of 3D printing expected to grow even further in the coming years, the use of CANs as a resource for AM is under investigation as a replacement for traditional thermosets, which could make up 22% of the global market for AM by the end of 2029. By replacing traditional thermoset ink with CAN-based inks, complicated 3D geometries can still be printed that behave like traditional thermosets with excellent mechanical properties at service conditions, but can later also be recycled into new ink for the next round of 3D printing. One example involved the 3D printing of an epoxy ink which is able to undergo transesterification reactions after printing. During the printing cycle, the ink is first slightly cured before being printed at high temperature into the desired 3D structure, and followed by a second curing step in an oven after printing. The printed epoxy parts can then be recycled by dissolving in ethylene glycol at high temperature and reused as ink in a new printing cycle. Chemistries used in CANs Various dynamic chemistries have already been incorporated in CANs; some of the more notable ones include transesterification, Diels-Alder exchange, imine metathesis, disulfide exchange, transamination of vinylogous urethanes, transcarbamoylation of urethanes, olefin metathesis, and trans-N-alkylation of 1,2,3-triazolium salts. References Polymers
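The Arrhenius-type temperature dependence of vitrimer viscosity mentioned in the Classification section above can be written out explicitly. This is a sketch of the standard relation with generic symbols, not notation drawn from a specific reference in this article:

\[
\eta(T) = \eta_{0}\exp\!\left(\frac{E_a}{RT}\right)
\qquad\Longleftrightarrow\qquad
\ln\eta = \ln\eta_{0} + \frac{E_a}{R}\,\frac{1}{T},
\]

where E_a is the activation energy of the bond-exchange reaction and R is the gas constant, so a plot of ln η against 1/T is linear, which is the behaviour described for vitrimers above.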
Covalent adaptable network
[ "Chemistry", "Materials_science" ]
2,765
[ "Polymers", "Polymer chemistry" ]
70,650,605
https://en.wikipedia.org/wiki/Iodide%20hydride
Iodide hydrides are mixed anion compounds containing hydride and iodide anions. Many iodide hydrides are cluster compounds, containing a hydrogen atom in a core, surrounded by a layer of metal atoms, encased in a shell of iodide. References Iodides Hydrides Mixed anion compounds
Iodide hydride
[ "Physics", "Chemistry" ]
76
[ "Ions", "Matter", "Mixed anion compounds" ]
70,651,058
https://en.wikipedia.org/wiki/List%20of%20things%20named%20after%20David%20Attenborough%20and%20his%20works
This is a list of things named after English broadcaster, biologist, natural historian and author Sir David Attenborough, and his audiovisual works. Buildings David Attenborough Building in Cambridge, which houses the Cambridge University Museum of Zoology and the Cambridge Conservation Initiative (CCI). Ships RRS Sir David Attenborough, a British polar research ship; famously involved in the Boaty McBoatface naming poll controversy. Taxonomy David Attenborough's long career working on nature documentaries and supporting conservation initiatives has made him an inspiration to naturalists and a popular choice for eponyms when naming newly described organisms. Attenborough himself has said that this is the "biggest of compliments that you could ask from any scientific community." As of 2022, more than 50 taxa (genera and species) of organisms have been named after David Attenborough. Additionally, there is at least one species named after one of his documentary series. This list presents the scientific names as originally given (their basionyms), in the chronological order in which they were published. See also List of organisms named after famous people References David Attenborough Attenborough, David Taxonomic lists
List of things named after David Attenborough and his works
[ "Biology" ]
267
[ "Lists of biota", "Taxonomy (biology)", "Taxonomic lists" ]
70,651,997
https://en.wikipedia.org/wiki/Panus%20fasciatus
Panus fasciatus (common names include hairy trumpet) is a species of fungus in the family Polyporaceae, in the genus Panus of the Basidiomycota. P. fasciatus has a fruiting body in the shape of a funnel with a velvety texture, hence the nickname "hairy trumpet." When it was examined by D. Pegler of Kew, he created a subgroup of the Lentinus fungi, called Panus, based on their hyphal systems. For this reason, Panus fasciatus is sometimes referred to as Lentinus fasciatus. Panus fasciatus has been described under numerous other names, which Pegler combined in 1965. Morphology The fungus has a unique shape, with the cap in-rolled when the fungus is young, and then developing a funnel (or infundibuliform) shape over time. It is also known for having pale brown hairs that cover the cap. In dry conditions, the stalk peels like bark but returns to normal following rain. P. fasciatus also has deeply decurrent gills, a velvety pileus, and dense hairs. The fungus can have purple gills that turn brown as they mature. The spore print is white. Reproduction Basidiospores of P. fasciatus reside on the hymenium of the gills of the fruiting body. When two germinating basidiospores of opposite mating types fuse via plasmogamy, the two nuclei remain unfused while a basidiocarp fruiting body is formed. On the gills beneath the cap, numerous basidia are formed. Karyogamy and meiosis occur and give rise to mature basidiospores. These are then released via the Buller's drop method of spore dispersal. Ecology Panus fasciatus is a wood-decaying saprotroph that feeds on rotting logs or small branches. Habitat Panus fasciatus is commonly found in drier woodland environments, amongst the grass, and beneath eucalypts, acacias, and casuarinas. It is usually exposed to sunlight most of the day. Distribution Panus fasciatus has been recorded in southern and eastern Australia, Africa (Cameroon), Oceania, Papua New Guinea, and New Caledonia; however, much of its distribution is unknown. It has been recorded in low numbers in the Jarrah forest region of Western Australia. Taxonomy The species was originally described by Miles J. Berkeley in 1840 as Lentinus fasciatus. It was later renamed by David Pegler of the Royal Botanic Gardens, Kew, in 1965 in the Australian Journal of Botany. Pegler treated Panus as a subgroup of Lentinus; however, another mycologist, Corner, considered Panus and Lentinus to be two separate genera based on their hyphal systems, so their relationship is controversial. These subgroups were identified based on morphology, but have largely held up under molecular research. Pegler's identification of P. fasciatus was based on collections from Tasmania gathered by R.C. Dunn and R. W. Lawrence. P. fasciatus was among the first fungal species to be identified in Australia. References Polyporales Fungi described in 1965 Taxa named by Miles Joseph Berkeley Fungus species
Panus fasciatus
[ "Biology" ]
670
[ "Fungi", "Fungus species" ]
70,655,814
https://en.wikipedia.org/wiki/Mycelium-based%20materials
Mycelium, a root-like structure that comprises the main vegetative growth of fungi, has been identified as an ecologically friendly substitute for a range of materials across different industries, including but not limited to packaging, fashion and building materials. Such substitutes present a biodegradable alternative (also known as a "Living Building Material") to conventional materials. Mycelium was first explored as an eco-friendly material alternative in 2006. It was popularized by Eben Bayer and Gavin McIntyre through their work developing mycelium packaging while founding their company, Ecovative, during their time at Rensselaer Polytechnic Institute. Since its inception, the material's function has diversified into many niches. Species and biological structures Mycelium-based composites require a fungus and a substrate. "Mycelium" is a term referring to the network of branching fibers, called hyphae, that are created by a fungus to grow and feed. When introduced to a substrate, the fungus penetrates it with its mycelium network, which breaks the substrate down into basic nutrients for the fungus. By this method, the fungus can grow. For mycelium-based composites, the substrate is not fully broken down during this process and is instead kept intertwined with the mycelium. The main components of fungi are chitin, polysaccharides, lipids, and proteins. Different compositional amounts of these molecules change the properties of the composites. This is also true for different substrates. Substrates with higher amounts of chitin are harder for the mycelium to break down and lead to a stiffer composite. Commonly used species of fungi to grow mycelium are aerobic basidiomycetes, which include Ganoderma sp., Pleurotus sp., and Trametes sp. Basidiomycetes have favorable properties for creating mycelium-based composites because they grow at a relatively steady and quick pace, and can use many different types of organic waste as substrates. Some characteristics in which these species differ are elasticity, water absorption, and strength. As an example, Trametes hirsuta forms a thicker outer layer of mycelium than Pleurotus ostreatus. This allows the Trametes hirsuta composite to remain flexible and stable in high-moisture environments. Additionally, Ganoderma lucidum exhibited higher elasticity, even with different types of substrates. Different combinations of fungus, substrate, and environmental conditions can all affect the properties of the resulting composite; this area of research continues to be explored as the applications for mycelium-based composites expand. Mechanical properties For most man-made materials there is a high degree of control over the processing of the final product, leading to consistent properties. There is less control in the case of mycelium-based materials; because of this, their properties vary significantly and depend not only on the processing of the material but also on the growth conditions of the mycelium. Mycelium can also be combined with other natural materials to form bio-composites. These bio-composites will have different properties than the pure materials. In the study Morphology and mechanics of fungal mycelium, samples obtained from Ecovative Design, LLC were prepared in a specific manner for mechanical testing. Their process begins with introducing the mycelium to calcium and carbohydrates in a filter patch bag, where it is allowed to grow over the course of 4–6 days.
After this, the mycelium is divided into smaller pieces in order to maintain uniform density and growth. The mycelium is then packed into molds with more growth nutrients and left for 4 days with controlled temperature, humidity, oxygen, and other factors. Once finished, the samples are taken as sheets and dried at high temperatures to deactivate the growing process. These are the samples that are subjected to mechanical testing. The mechanical tests included uniaxial tension and compression, conducted on a materials testing machine under ambient conditions. For the tensile tests, dog-bone specimens of dimensions 200 mm × 6 mm × 3.5 mm were used. Cuboid specimens of dimensions 20 mm × 20 mm × 16 mm were tested under compression. The tensile specimens were strained at a rate of 4 × 10⁻⁴ per second until failure, whereas the compressive samples were deformed at a rate of 6.25 × 10⁻³ per second to strains ranging from 2% to 20%. Manufacturing and growth techniques The first step in the manufacturing of usable mycelium-based materials is growing the raw material. Mycelium needs a wet environment with sufficient substrate to grow and, as noted earlier, substrates whose sugars are more difficult to break down lead to a tougher material, driven mostly by the chitin concentration. With a substrate that is easier to digest, the mycelium will grow faster but the resulting material has less toughness. The natural growth pattern of mycelium is a tight network that already has a leathery quality, and through compressive manufacturing these qualities can be accentuated and used for leather. For most other uses of mycelium, the fungi are harvested, dried, then chopped. In order to reconstitute it and make the material less brittle, it is rehydrated and sometimes combined with other natural materials such as flax, hemp, and many others. With the predicted growth of the mycelium-based materials market, a great deal of research is going towards optimizing growth, including differing membranes, light sources, spore densities, substrates, and substrate moisture concentrations. Applications Packaging IKEA has committed to mycelium packaging, making a deal with Ecovative in acknowledgment of the damage that comes from polystyrene packaging and the time it takes to decompose. Plastic foams can take hundreds of years to decompose, whereas mushroom-based materials can decompose in a few weeks. Ecovative grows mushroom packaging known as MycoComposite, which can be grown in less than a week; this manufacturing starts at the Ecovative Design foundry in Green Island, New York. Ecovative partners with multiple local farmers in order to source agricultural products that get turned into packaging. The agricultural materials are cleaned and sorted into molds where the fungus is added and grows around the material. Once the fungus grows throughout the mold, the final packaging is specially treated to stop the growth process. According to another company, Grown Bio, mycelium-based packaging also has advantages because of its versatility of design shapes, high shock absorbance, and good insulation properties. They use a 3D-printed reusable mold made from a biopolymer to template their products, which are then filled with agricultural waste, water, and lastly the mycelium. The entire process takes a week, and once the packaging has served its use, it breaks down and can be used as fertilizer.
In 2012 Ecovative partnered with Sealed Air, at the time a $7.6 billion global company best known for bubble wrap and other packaging, to license their process for making mycelium-based packaging material. Building materials Mycelium-based composites have not yet been widely adopted as construction replacements for bricks, synthetic foams, or wood. However, their potential for use has been studied in laboratories, and the results from experiments comparing bio-composites and current materials show that bio-composites do have some advantages over traditional materials. In order to form the structures of the composites, mycelium needs a substrate to grow into. To fabricate these mycelium-based composites outside of natural processes, options for substrates include common "left-over" materials such as wood and straw. Recycling waste products contributes to the mycelium-based composites' low cost and environmental friendliness compared with current methods and materials. The main determining factor of the composites' properties is the type of substrate used. However, the growth conditions and moisture content can also alter the composites' characteristics. To initiate the mycelium growth into the substrate, mycelium is first grown separately and then combined with water and the substrate in a heavily monitored and sterile environment. The mycelium growth can be halted by sterilizing the composite. This is necessary to prevent the mycelium from completely digesting the substrate. In a study comparing lightweight expanded clay aggregate (LECA) and expanded vermiculite (EV), two materials used to make concrete, to a mycelium-grown brick, the mycelium-grown brick was found to be a better insulator. These results are similar when comparing a different mycelium-based composite with extruded polystyrene. The thermal insulation and mechanical compression properties were found to be respectively better than and equivalent to the extruded polystyrene. However, there are some unwanted consequences of the mycelium-based composites' structure. The first is the novelty of these materials. They are not yet accepted as replacements for common construction materials because researchers are still working to understand their properties and how these properties are affected by time, environmental conditions, substrate, and mycelium species. Mycelium-based composites also have issues with water absorption. Too much water absorption will lead the composites to fail under their mechanical loads. The relationship between density and water absorption was analyzed to find whether increasing the composite's density will protect the structure in high-humidity environments. The results found that composites with a higher density were only slightly affected by the levels of humidity, and remained mechanically sound by the standard necessary for construction materials. Acoustic dampening As with other common building applications, mycelium-based materials have also been considered for the application of acoustic dampening. Some species recently under particular consideration include Pleurotus ostreatus (oyster mushrooms) and many individual species from the class Basidiomycetes, the latter being known to have mycelium bodies composed primarily of chitin. In order to construct such acoustic panels, the filamentous hyphae of the fungal body must be isolated, harvested and processed.
This can be done through careful control of humidity, temperature (85-95F), atmospheric concentration (5-7%) and chemical/hormonal additives (forskolin/10-oxo-trans-8-decenoic acid (ODA)), in order to not only increase the volume of growth but also encourage the resultant growth to consist of a higher percentage of useful biopolymer material. Fine control over the proportion of cross linkages within the resulting chitin biopolymer is also possible. To construct a panel of acoustic dampening material, the fungus can be mechanically suspended within a rigid chamber, and allowed to grow to fill the space. After the space has been filled, the mycelium is compressed and allowed to grow again into the resultant space, after which the product is dried and post processed for specific applications (embossing or decorative purposes). Studies have found that the resultant paneling, when compared to conventional acoustic dampening materials like foam, cork, felt, cotton and ceiling tiles, displayed comparable acoustic absorption in frequencies around 3000 Hz and above, while falling short in performance at frequencies below 3000 Hz. Performance of any given panel is highly dependent on the mix of substrate, species and other previously mentioned variables, and yield varying absorbance profiles. The industry niche of designing mycelium based acoustic damping panels is currently being developed by companies like Mogu, pursuing the market with their FORESTA acoustic panel system. Fashion and cosmetics Within the contemporary fashion industry there has been a push for more ethically sourced materials in order to alleviate environmental concerns. To fulfill these needs, companies like Mycoworks and Ecovative have developed sustainable materials to substitute for leather of varying thicknesses and applications. Beyond textiles, mycelium based materials have also found use for substitution in makeup wedges, eye masks and sheet masks due to the materials highly variable mechanical properties. Many different processes can be used to create said materials, with a variety of fungal species (Ganoderma spp., Perenniporia spp., Pycnoporus spp., etc.), however generally textiles and other polymeric materials are derived from a growth phase, harvesting phase and pressing phase to create the desired sheet thickness, post processing being used to differentiate the material for specific tasks. References Materials science
Mycelium-based materials
[ "Physics", "Materials_science", "Engineering" ]
2,534
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
70,657,482
https://en.wikipedia.org/wiki/Focused%20ultrasound%20for%20intracranial%20drug%20delivery
Focused ultrasound for intracranial drug delivery is a non-invasive technique that uses high-frequency sound waves (focused ultrasound, or FUS) to disrupt tight junctions in the blood–brain barrier (BBB), allowing for increased passage of therapeutics into the brain. The BBB normally blocks nearly 98% of drugs from accessing the central nervous system, so FUS has the potential to address a major challenge in intracranial drug delivery by providing targeted and reversible BBB disruption. Using FUS to enhance drug delivery to the brain could significantly improve patient outcomes for a variety of diseases including Alzheimer's disease, Parkinson's disease, and brain cancer. Ultrasound is commonly used in the medical field for imaging and diagnostic purposes. With FUS, a curved transducer, lens, or phased array is used to guide the ultrasound waves to a particular location, similar to the way a magnifying glass can focus sunlight to a single point. This strategy concentrates the acoustic energy in a small target area while having a minimal effect on the surrounding tissue. Depending on the acoustic parameters, FUS can be used to exert mechanical and/or thermal effects on target tissues, making it a versatile and promising tool for many therapeutic applications. Blood–brain barrier The BBB is a highly specialized structure composed of endothelial cells, pericytes, and astrocytic processes and is situated between the cerebral capillaries and brain parenchyma. The endothelial cells form tight junctions, creating a semi-permeable membrane that separates circulating blood from the central nervous system. Various mechanisms are used to transport molecules across the BBB. Small polar molecules such as glucose, amino acids, and nucleosides enter the brain through carrier-mediated transport. Small lipophilic molecules and gaseous molecules such as oxygen and carbon dioxide can passively diffuse through the BBB. Large molecules such as proteins and peptides use receptor-mediated or absorption-mediated endocytic transport to access the brain. The BBB is crucial for structural and functional brain connectivity and plays a major role in protecting the brain from circulating pathogens. However, the BBB is also a key impediment for intracranial drug delivery and significantly limits entry of therapeutic molecules into the brain. Generally, only lipid-soluble molecules with a low molecular weight (around 400 Da) and a positive charge can cross the BBB. Many therapeutic molecules do not have these properties and thus have difficulty entering the brain. For example, doxorubicin, a common chemotherapeutic, is ~540 Da in size and trastuzumab, a monoclonal antibody, is 148 kDa in size. The molecular weight of these molecules significantly hinders their ability to pass through the tight junctions present in the BBB. Since the BBB limits the bioavailability of therapeutics in treating neurological diseases, many strategies have been developed to circumvent the BBB. Osmotic agents disrupt the BBB by causing osmotic shrinkage of the endothelial cells and creating an osmotic pressure gradient. However, this strategy results in global rather than localized BBB opening, which can lead to neurological deficits and seizures. Intracranial injections can be used to directly deliver drugs to a specific part of the brain. While these injections limit systemic toxicities and remove the loss of drug from first-pass clearance, these procedures are invasive and can pose additional risk to the patient.
Convection enhanced delivery uses an intracranial catheter to deliver drug through the interstitial spaces of the central nervous system. This strategy is also quite invasive as it requires insertion of the catheter into the brain and the diffusion of drug from the catheter tip is limited. Radiation therapy is known to increase BBB permeability, but the temporal characteristics of radiation-induced opening are unpredictable and radiation can damage the surrounding brain tissue by causing necrosis, demyelination, and gliosis. Focused ultrasound has the potential to address the limitations posed by these strategies by offering a method for non-invasive, temporally-controlled, and localized BBB disruption. Thermal and cavitation assisted FUS-induced BBB opening Early studies that investigated the use of FUS for BBB disruption used ultrasound to increase the permeability of the blood vessels either by increasing the temperature (usually to 40-45°C) or forming gas bubbles (inertial cavitation). While these studies concluded that this method could be used to open the BBB, damage to healthy tissue was always reported. In 2004, it was discovered that the threshold for thermal BBB disruption is greater than the threshold for thermal damage, meaning thermal BBB disruption cannot be performed without causing some damage to the brain tissue. Nevertheless, ultrasound hyperthermia may still be useful for intracranial drug delivery. In vitro studies showed that moderate ultrasound hyperthermia (temperature elevations ≤ 6°C) increases endothelial cell permeability to hydrophobic molecules. FUS hyperthermia has also been used with radiation therapy to treat patients with brain tumors since FUS sensitizes tumor cells to radiation by inhibiting DNA damage repair and disrupting the protein kinase B signaling pathway. Studies have also shown that FUS can be used to both thermally ablate brain tumors and improve delivery of chemotherapy through reversible BBB opening to kill any remaining malignant cells. However, since thermal BBB disruption cannot be induced without damaging tissue, it may not be useful for non-cancer applications in which preserving healthy tissue may be essential. Cavitation occurs when gas bubbles grow, oscillate, and collapse in a fluid during exposure to ultrasound waves. There are two general types of cavitation: stable and inertial. Stable cavitation occurs when the gas bubbles oscillate, and these oscillations are sustained for many cycles of acoustic pressure. These oscillations move the surrounding fluid, generating flow around the bubble. This flow is known as microstreaming and can increase the amount of drug that passes through the openings in the tight junctions. Inertial cavitation is typically transient and results in the collapse or fragmentation of bubbles. Low acoustic pressures are used to induce stable cavitation while higher ultrasound intensities lead to inertial cavitation. Both stable and inertial cavitation apply mechanical stress to the blood vessel walls, temporarily disrupting the tight junctions between the endothelial cells. This is typically done through a push-pull action in which the bubble expansion separates the endothelial lining while the bubble contraction causes invagination of the vascular lining. 
Transiently disrupting the BBB allows for paracellular transport of molecules, and there is evidence that the physical stress induced on the vessel walls by the gas bubbles causes cellular changes that further enhance paracellular and transcellular transport of therapeutics across the BBB. However, this mechanical stress can damage the blood vessels, and inertial cavitation is generally avoided for BBB opening since it poses the greatest risk for injury. Microbubble-mediated FUS-induced BBB opening Due to the safety and efficacy issues related to thermal and cavitation-assisted FUS-induced BBB opening, research has focused on using preformed microbubbles to further enhance the effects of FUS. Microbubbles are small bubbles with diameters typically between .5 and 10 microns. The core is typically filled with gas, but microbubbles can also be loaded with active drug for therapeutic purposes. The microbubble shell is typically composed of polymers, lipids, proteins, or a combination of these components. Since these microbubbles are lipophilic, they can diffuse through the BBB. In 2001, Hynynen et al. used microbubbles in conjunction with FUS to induce BBB opening without causing tissue damage. By injecting microbubbles into the blood before sonication, they found the intensities required to disrupt the BBB were two times lower than the intensity required with FUS alone. Furthermore, they discovered that the BBB was restored 24 hours after the FUS treatment. Following this study, several groups have reported effective intracranial drug delivery using microbubbles with different FUS parameters. Many researchers have concluded that ultrasound pressure is the most important parameter to optimize to avoid injuring blood vessels and surrounding brain tissue. It has been seen that greater ultrasound pressure is required to allow a 2000 kDa drug to cross the BBB compared to a 70 kDa drug. In rodent models, it has been shown that BBB disruption with microbubbles is effective over a wide range of ultrasound frequencies from 28 kHz to 8 MHz. While the repetition frequency of the ultrasound bursts do not significantly affect BBB opening, if the burst frequency is too high, the microbubbles cannot effectively reperfuse the target area. The properties of the microbubbles themselves can affect treatment efficacy. As microbubble size and dose increase, BBB opening increases and there is a greater risk for damage. Microbubbles can also serve as effective drug carriers. Drugs can be adhered to the surface of microbubbles, encapsulated inside the microbubble, or embedded in the microbubble membrane. Ligands can also be conjugated to the surface of microbubbles to target specific sites. FUS-induced cavitation can be used to burst the microbubbles and release drug near the diseased region. This method achieves local drug release while limiting off-target toxicities. Drug-loaded microbubbles with FUS has been shown to be an attractive combination therapy for brain tumors. One study found that the ultrasound-triggered release of doxorubicin from loaded microbubbles resulted in a two-fold decrease in human glioblastoma cell survival rate compared to free doxorubicin or loaded microbubbles without FUS. Another study found encapsulating carmustine in microbubbles led to a five-fold increase in the circulating half-life and a five-fold decrease in drug accumulation in the liver. This study also found combining these carmustine-loaded microbubbles with FUS increased the median survival time by 12% in a rat glioma model. 
Due to the promising results seen with drug-loaded microbubbles and FUS for brain tumor treatment, this method may also have potential for treating other neurological disorders such as Alzheimer's and Parkinson's disease. Drug delivery and therapeutic effects FUS for intracranial drug delivery has many potential benefits for both cancer and non-cancer applications. Examples of studies that used FUS to deliver therapeutics into the brain are summarized below. Challenges and future directions While FUS has many potential advantages for intracranial drug delivery, there are several limitations. Heterogeneity in the acoustic properties of the skull leads to distortion and attenuation of the ultrasound. Intracranial structures such as gray and white matter and dense vasculature also vary between patients, so each individual may have a unique threshold for BBB opening. Patient-specific ultrasound arrays may need to be used to address this concern. After the procedure, MRI contrast agents are given to patients to measure BBB opening, so currently there is no method for intraoperative monitoring of BBB disruption. For strategies that use magnetic resonance guidance, the need for pre-procedural removal of hair, substantial operating time, and use of stereotactic frame may limit the widespread use of FUS for intracranial drug delivery. Treatment protocols and dosing schedules for many therapeutic molecules still need to be better understood to ensure off-target toxicities are limited. Finally, many studies investigating FUS for intracranial drug delivery use animals with healthy BBBs, but several neurological diseases break down the BBB, so current animal models may not be very representative. Despite the limitations, FUS is a promising tool for improving the treatment of many disorders. FUS is currently being studied to facilitate intracranial gene delivery for the treatment of Alzheimer's disease. A similar approach is also being studied for Parkinson's disease. The ability to disrupt the BBB could also allow for delivery of therapeutics to the motor cortex to treat amyotrophic lateral sclerosis. There may also be applications of FUS for both neuromodulation and targeted drug delivery in epilepsy, Alzheimer's disease, Parkinson's disease, depression, and traumatic brain injury. In the future, research efforts will focus on addressing the challenges and limitations of FUS to develop safe and effective therapies for the most challenging brain conditions. See also Drug delivery to the brain References Drug delivery devices Routes of administration Cancer treatments
Focused ultrasound for intracranial drug delivery
[ "Chemistry" ]
2,636
[ "Pharmacology", "Drug delivery devices", "Routes of administration" ]
70,658,453
https://en.wikipedia.org/wiki/Neil%20J.%20Calkin
Neil J. Calkin (born 29 March 1961) is a professor at Clemson University in the Algebra and Discrete Mathematics group of the School of Mathematical and Statistical Sciences. His interests are in combinatorial and probabilistic methods, mainly as applied to number theory. Together with Herbert Wilf he founded The Electronic Journal of Combinatorics in 1994. He and Wilf developed the Calkin–Wilf tree and the associated Calkin–Wilf sequence. Biography Neil Calkin was born 29 March 1961, in Hartford, Connecticut and moved to the UK around the age of 3. He grew up there and studied mathematics at Trinity College Cambridge before moving to Canada in 1984 to study in the Department of Combinatorics and Optimization at the University of Waterloo where he was awarded a PhD (1988) for his thesis "Sum-Free Sets and Measure Spaces" written under the supervision of Ian Peter Goulden. He was the Zeev Nehari Visiting Assistant Professor of Mathematics at Carnegie Mellon University (1988—1991), an assistant professor at Georgia Tech (1991—1997), and joined the Algebra and Discrete Mathematics group of the School of Mathematical and Statistical Sciences at Clemson University in 1997. Calkin has an Erdős number of 1. He was one of the last people to collaborate with Erdős and once said of him, "One of my greatest regrets is that I didn't know him when he was a million times faster than most people. When I knew him he was only hundreds of times faster." Selected papers Books References External links Neil Calkin at the Mathematics Genealogy Project Neil J. Calkin's Publications 20th-century English mathematicians 21st-century English mathematicians Alumni of Trinity College, Cambridge University of Waterloo alumni Clemson University faculty Mathematicians from South Carolina Combinatorial game theorists Recreational mathematicians Mathematics popularizers Combinatorialists Probability theorists British number theorists 1961 births Living people
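The Calkin–Wilf sequence mentioned above can be made concrete with a short sketch (not part of the source article). It uses Newman's recurrence, under which every positive rational number appears exactly once, corresponding to reading the Calkin–Wilf tree breadth-first.

```python
# Sketch of the Calkin-Wilf enumeration of the positive rationals via Newman's
# recurrence: x_{n+1} = 1 / (2*floor(x_n) - x_n + 1), starting from x_1 = 1.
from fractions import Fraction
from math import floor

def calkin_wilf(n):
    """Return the first n terms of the Calkin-Wilf sequence as exact fractions."""
    x = Fraction(1)
    terms = []
    for _ in range(n):
        terms.append(x)
        x = 1 / (2 * floor(x) - x + 1)
    return terms

print([str(q) for q in calkin_wilf(9)])
# ['1', '1/2', '2', '1/3', '3/2', '2/3', '3', '1/4', '4/3']
```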
Neil J. Calkin
[ "Mathematics" ]
379
[ "Recreational mathematics", "Combinatorialists", "Recreational mathematicians", "Combinatorics" ]
70,658,563
https://en.wikipedia.org/wiki/Offshore%20wind%20port
An offshore wind port is any of several distinct types of port facility used to support the manufacturing, construction and operation of an offshore wind power project. Offshore wind turbine components are larger than onshore wind components, and handling such large components requires special equipment. Transport of components between manufacturing and assembly facilities needs to be minimized. As a result, a number of offshore wind port facilities have been built in areas with a high concentration of offshore wind developments. For large offshore wind farm projects, some offshore wind ports have become strategic hubs of the industry's supply chain. The Port of Esbjerg in Denmark is considered the world's largest offshore wind port. Types Small oceanic ports These are small port facilities used to launch survey vessels in the early stages of an offshore wind farm development. Manufacturing ports Large offshore wind turbine components are difficult to transport over land, so locating a manufacturing facility at a port is more desirable. Subcomponents and materials may be brought in by road or rail. After components are built, they are typically shipped to a marshaling port for final assembly. Marshaling ports Marshaling ports (also known as staging ports) are used to collect and store wind turbine components prior to loading them onto wind turbine installation vessels. They are preferably located where there is unrestricted air draft to the wind farm site. Operating and maintenance ports Operating and maintenance ports house the facilities and vessels required for ongoing operation and maintenance of offshore wind farms. This may include parts warehouses, offices, and training facilities. By region Europe The six leading offshore wind ports in Europe service wind farms in the North Sea. Their respective countries signed the Esbjerg Declaration in 2022, in which they agreed to coordinate supply chain activities to optimize the manufacture and delivery of wind turbine components. Port of Esbjerg, (DK), the world's largest offshore wind port Port of Ostend, (BE) Groningen Seaports/Eemshaven, (NL) Niedersachsen Ports/Cuxhaven, (DE) Nantes/Saint-Nazaire, (FR) Humber/Port of Hull (UK) United States As of 2021, offshore wind power in the United States was described as a "burgeoning" industry. At that time, a number of ports were proposing to build or convert facilities to handle the large components needed to build potential offshore wind farms. Among those on the East Coast, from north to south, are: Salem Harbor (MA) Port of New Bedford (MA) Brayton Point Commerce Center/Mount Hope Bay (MA) Port of Providence (RI) State Pier, Port of New London (CT) Arthur Kill Terminal (NY) Port of Paulsboro (NJ) New Jersey Wind Port, the "first offshore wind port" in the United States, which broke ground in late 2021 Port of Baltimore (MD) - Sparrows Point Portsmouth Marine Terminal (VA) References Infrastructure Industrial buildings and structures Wind power
Offshore wind port
[ "Engineering" ]
599
[ "Construction", "Infrastructure" ]
70,659,220
https://en.wikipedia.org/wiki/Stable%20isotope%20composition%20of%20amino%20acids
The stable isotope composition of amino acids refers to the abundance of heavy and light non-radioactive isotopes of carbon (13C and 12C), nitrogen (15N and 14N), and other elements within these molecules. Amino acids are the building blocks of proteins. They are synthesized from alpha-keto acid precursors that are in turn intermediates of several different pathways in central metabolism. Carbon skeletons from these diverse sources are further modified before transamination, the addition of an amino group that completes amino acid biosynthesis. Bonds to heavy isotopes are stronger than bonds to light isotopes, making reactions involving heavier isotopes proceed slightly slower in most cases. This phenomenon, known as a kinetic isotope effect, gives rise to isotopic differences between reactants and products that can be detected using isotope ratio mass spectrometry. Amino acids are synthesized via a variety of pathways with reactions containing different, unknown isotope effects. Because of this, the 13C content of amino acid carbon skeletons varies considerably between the amino acids. There is also an isotope effect associated with transamination, which is apparent from the abundance of 15N in some amino acids. Because of these properties, amino acid isotopes record useful information about the organisms that produce them. Variations in metabolism between different taxonomical groups give rise to characteristic patterns of 13C enrichment in their amino acids. This allows the sources of carbon in food webs to be identified. The isotope effect associated with transamination also makes amino acid nitrogen isotopes a useful tool to study the structure of food webs. Repeated transamination by consumers results in a predictable increase in the abundance of 15N as amino acids are transferred up food chains. Together, these applications, among others in ecology, demonstrate the utility of stable isotopes as tracers of environmental processes that are difficult to measure directly. Isotopic fractionation in reaction networks To explain the wide range of isotopic compositions observed among the amino acids, it is necessary to consider how isotopes are sorted between starting materials, intermediates, and products in reaction networks. Amino acid biosynthesis pathways contain both reversible and irreversible reactions, as well as branch points where one intermediate can react to form two different products. The following examples adapted from Hayes (2001) illustrate the isotopic consequences of these network structures. Linear irreversible network In the following reaction network, A is irreversibly converted to an intermediate B, which irreversibly reacts to form C: A → B → C, where the first step carries the flux φAB and generates product of composition δb with fractionation factor αb/A, and the second step carries the flux φBC and generates product of composition δc with fractionation factor αc/B. The pools of A, B, and C have delta values defined as δA, δB, and δC respectively. These values are related to the ratio of heavy to light isotopes in each pool, and are the conventional means by which scientists express the isotopic composition of materials. Importantly, δB is distinct from δb listed on the diagram, as δb is the isotopic composition of B produced from A before it mixes with the pool of B. The isotopic compositions of the pools and products are related through fractionation factors that reflect the kinetic isotope effects (KIEs) associated with each reaction. For A → B, the fractionation factor is defined as αb/A = (δb + 1000)/(δA + 1000), with the delta values expressed in per mil (‰). Rearranging for δb gives δb = αb/A(δA + 1000) − 1000 = αb/AδA + εb/A, in which εb/A = 1000(αb/A − 1). In many cases, αb/A < 1 and εb/A < 0.
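Before continuing the derivation, these definitions can be checked numerically. The sketch below uses made-up values (δA = −25‰ and αb/A = 0.980, i.e. εb/A = −20‰); it is an illustration of the relations just stated, not data from the literature.

```python
# Illustration of the fractionation-factor definitions above, per mil notation.
def delta_product(delta_a_permil, alpha):
    """Exact composition of the instantaneous product b: alpha*(delta_A + 1000) - 1000."""
    return alpha * (delta_a_permil + 1000.0) - 1000.0

delta_a = -25.0                       # permil, assumed reactant composition
alpha_ba = 0.980                      # assumed fractionation factor (normal KIE)
eps_ba = 1000.0 * (alpha_ba - 1.0)    # = -20 permil

print(delta_product(delta_a, alpha_ba))   # -44.5  (exact)
print(delta_a + eps_ba)                   # -45.0  (the delta_A + eps approximation)
```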
This is consistent with a normal kinetic isotope effect in which the product is slightly depleted in a heavy isotope relative to the reactant. If the isotope effect is small, as is typical for C and N, αb/A ≈ 1 and δb ≈ δA + εb/A. From this, we can see that the product produced from A will be depleted by roughly |εb/A| ‰ relative to the starting material. At steady state, the mass flux of material entering pool B must equal the flux leaving pool B. In other words, the amounts of heavy and light isotopes entering and exiting the pool must be identical, so δc = δb ≈ δA + εb/A. Since there is no flux of material out of pool C, its delta value is also equal to δb, i.e. δC ≈ δA + εb/A. This analysis shows that the end product of a linear, irreversible reaction network has an isotopic composition determined solely by the composition of the starting material and the KIE of the first reaction in the network. Network with branch points At branch points, two or more separate reactions compete for the same reactant. This affects the isotopic composition of all products downstream of the branch point. To illustrate this, consider a network in which A is converted to B, and B can react to form either C or D. Here, the flux of material into pool B (φAB) is balanced by two fluxes, one into pool C and the other into pool D (φBC and φBD respectively). The mass balance for the heavier isotope in this system is represented by φABδb = φBCδc + φBDδd. Define fC = φBC / (φBC + φBD) = φBC/φAB as the fractional yield of C. Dividing through by φAB gives δb = fCδc + (1 − fC)δd. Applying the approximation introduced in the previous section, δb ≈ δA + εb/A. Further, δc ≈ δB + εc/B and δd ≈ δB + εd/B. Substituting these relations into the mass balance and solving for δB gives δB ≈ δA + εb/A − fCεc/B − (1 − fC)εd/B. The isotopic composition of pool B is clearly dependent on the fractional yield of C. Since there are no fluxes out of pools C or D, δC = δc, δD = δd. Thus, the isotopic compositions of these pools are offset from δB by εc/B and εd/B respectively. Example There is great variation in the carbon isotope composition of amino acids within a single organism. In cyanobacteria, Macko et al. observed a ~30‰ range in δ13C values amongst the amino acids. Amino acids produced from the same precursors also had widely varying compositions. It is difficult to explain these trends because of limited data on the kinetic isotope effects associated with reactions that synthesize amino acid carbon skeletons. Nevertheless, some insights can be gained by applying the logic above to the reaction networks responsible for amino acid biosynthesis. Consider the amino acids synthesized from pyruvate. Pyruvate is produced during glycolysis and can be decarboxylated by pyruvate dehydrogenase to generate acetyl groups. These acetyl groups enter the citric acid cycle as acetyl-CoA or can be used to synthesize lipids. There is a large kinetic isotope effect associated with this reaction, so the remaining pyruvate pool becomes enriched in 13C relative to the acetyl groups. This enriched pyruvate can be transaminated to produce alanine. In the experiments by Macko et al., alanine indeed had a δ13C value slightly higher than that of cyanobacterial photosynthate. Valine is synthesized by the addition of a 13C-depleted acetyl group to pyruvate. Consistent with this mechanism, Takano et al. found valine to be depleted in 13C relative to alanine in anaerobic methanotrophic archaea. However, in cyanobacteria, Macko et al. observed a higher δ13C value for valine than alanine.
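To make the branch-point result above concrete before returning to the amino acid examples, the sketch below evaluates δB for a few fractional yields. The ε values are hypothetical placeholders chosen only for illustration; the same qualitative reasoning is applied to the α-ketoisovalerate branch point discussed next.

```python
# Steady-state composition of pool B at a branch point (all values in permil):
# delta_B ~= delta_A + eps_bA - f_C*eps_cB - (1 - f_C)*eps_dB
def delta_pool_b(delta_a, eps_ba, eps_cb, eps_db, f_c):
    return delta_a + eps_ba - f_c * eps_cb - (1.0 - f_c) * eps_db

delta_a = -25.0                              # permil, assumed
eps_ba, eps_cb, eps_db = -2.0, -15.0, -5.0   # permil, hypothetical KIEs
for f_c in (0.1, 0.5, 0.9):
    d_b = delta_pool_b(delta_a, eps_ba, eps_cb, eps_db, f_c)
    print(f"f_C = {f_c:.1f}: delta_B = {d_b:+.1f}, delta_C = {d_b + eps_cb:+.1f}, delta_D = {d_b + eps_db:+.1f}")
```

With these placeholder numbers, routing more of the flux toward C (larger fC) shifts pool B and both products toward heavier compositions, illustrating how strongly the fractional yield controls the outcome.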
This could be due to the branch point at the intermediate α-ketoisovalerate, which can be transaminated to produce valine or further acetylated to generate leucine. There may be different isotope effects associated with the addition of an amino or acetyl group at position C-2 in α-ketoisovalerate. As discussed above, the isotopic consequences of this branch point would depend on the relative rates of leucine vs valine production. One would also expect relative depletion of 13C in leucine because its synthesis requires the addition of another isotopically light acetyl group. In Escherichia coli, the carboxyl carbon in leucine (derived from acetyl-CoA) has a δ13C value roughly 13‰ lower than that of the entire molecule. Curiously, the same depletion is not observed in photoautotrophs. Further, there is little consistency in the δ13C of most amino acids between cyanobacteria and eukaryotic photoautotrophs. These discrepancies demonstrate the limits of our understanding of the mechanisms that set amino acid isotopic compositions. Regardless, isotopic variations between different taxa have been used to great effect in ecology. Applications Tracing nutrient sources in food webs Amino acids are a key nutrient in ecosystems. Some are essential to animals, meaning that these organisms cannot synthesize them de novo. Instead, animals rely on their diet to acquire these molecules, creating strong interdependencies between animals and organisms with complete amino acid synthesis capabilities. In a study of bacteria and archaea at Antarctica's McMurdo Dry Valleys, the distribution of 13C between their amino acids reflected the biosynthetic pathways employed by these organisms. Autotrophs and heterotrophs had distinct isotopic fingerprints, as did organisms that employed alternatives to the citric acid cycle to ferment or produce acetate. Plants, fungi, and bacteria are also distinguishable by their amino acid carbon isotopes. The compositions of the essential amino acids, which have more complex biosynthetic pathways, are particularly informative. Lysine, isoleucine, leucine, threonine, and valine all had significantly different δ13C values between at least two of these groups. It is important to note that the fungi and bacteria in this study were grown on amino acid-free media to ensure that all the amino acids were synthesized by the organisms of interest. Bacteria and fungi can also scavenge amino acids from the environment, complicating the interpretation of data from field samples. Nevertheless, researchers have successfully used these differences to identify the sources of amino acids in food webs. Terrestrial and marine producers in a mangrove forest had different patterns of 13C enrichment in their amino acids. Fishes from a coral reef with diets containing different carbon sources also had variable amino acid δ13C values. Furthermore, one study observed distinct amino acid isotopic compositions for desert C3, C4, and CAM plants. These applications in diverse ecosystems highlight the versatility of compound-specific amino acid isotope analysis. Placing organisms in food webs Human domination of the biosphere has threatened global biodiversity, with uncertain consequences for ecosystems that provide food, clean air and water, and other valuable ecosystem services. Understanding the impacts of biodiversity loss on ecosystem function requires knowledge of the interactions between organisms within both the same and different positions in a food web (i.e. trophic levels). 
Food webs can have very complex structures. In many ecosystems, organisms at trophic levels higher than herbivores consume a variable combination of prey and producers, exhibiting different forms of omnivory. The loss of predator species can have a cascading effect on all organisms at lower trophic levels. Networks with more omnivores that consume species at multiple trophic levels may be more resilient to these top-down effects. Together, these factors demonstrate that a food web's structure affects its sensitivity to reductions in biodiversity, highlighting the importance of food web studies. Amino acid isotopes are an important tool used in this field. The abundance of 15N in some amino acids reflects an organism's position in a food web. This is due to the ways organisms metabolize different amino acids when they are consumed. Trophic amino acids (TrAAs) are first deaminated, meaning that the amino group is removed to produce an alpha-keto acid carbon skeleton. This reaction breaks a C-N bond, causing the amino acid to become more enriched in 15N due to a kinetic isotope effect. For instance, glutamate, a representative TrAA, has a δ15N value that increases by 8‰ with each trophic level. In contrast, the first reaction in the metabolism of source amino acids (SrcAAs) is not deamination. An example is phenylalanine, which is first converted to tyrosine in a reaction that breaks no C-N bonds. Thus, there is little variation in the δ15N values of SrcAAs between trophic levels. Their isotopic composition instead resembles that of the species at the base of the food web. Though these trends are confounded by some environmental effects, they have been used to infer an organism's trophic position. References Amino acids Isotopes
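As a rough illustration of how the glutamate–phenylalanine contrast described above is used, the sketch below follows the commonly used compound-specific trophic position equation. Its form and constants (the producer-level offset β and the per-level enrichment factor) are typical literature values rather than values given in this article, and they differ between ecosystems; the input δ15N values are hypothetical.

```python
# Compound-specific trophic position estimate from glutamate (trophic) and
# phenylalanine (source) d15N values, all in permil. beta and the enrichment
# factor are assumed, commonly cited aquatic values and vary between systems.
def trophic_position(d15n_glu, d15n_phe, beta=3.4, enrichment=7.6):
    return (d15n_glu - d15n_phe - beta) / enrichment + 1.0

# Hypothetical consumer: glutamate strongly enriched, phenylalanine near the producers.
print(round(trophic_position(d15n_glu=22.0, d15n_phe=3.0), 2))   # ~3.05, i.e. a secondary consumer
```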
Stable isotope composition of amino acids
[ "Physics", "Chemistry" ]
2,632
[ "Amino acids", "Biomolecules by chemical classification", "Isotopes", "Nuclear physics" ]
70,659,357
https://en.wikipedia.org/wiki/Smash%20%28app%29
Smash (stylized as smash.) is a Japanese video on demand mobile application. It offers original content such as music, dramas, animated series, and variety shows, all in vertical video format optimized for viewing on smartphones. It is notable for featuring exclusive content from popular Japanese and Korean pop groups, such as the AKB48 Group, the Sakamichi Series, Hey! Say! JUMP and BTS. It is officially available for use in Japan, the United States, and South Korea. Its major shareholders include the Korean entertainment company Hybe. As of October 2021, the app has over 1.63 million downloads. References See also Showroom (streaming service) Streaming television Mobile applications Video on demand services Internet properties established in 2020
Smash (app)
[ "Technology" ]
153
[ "Mobile software stubs", "Mobile technology stubs" ]
70,660,219
https://en.wikipedia.org/wiki/Precrastination
Precrastination, defined as the act of completing tasks immediately, often at the expense of increased effort or diminished quality of outcomes, is a phenomenon observed in certain individuals. This approach is often adopted to avoid the anxiety and stress associated with last-minute work and procrastination. Precrastination is considered an unhealthy behavior pattern and is accompanied by symptoms such as conscientiousness, eagerness to please, and high energy. People who precrastinate may try to find shortcuts to be more efficient and productive, but this may result in the application of non-effective energy management and cause the person to fulfill their tasks to an incomplete or insufficient degree. Precrastinators may be more likely to act impulsively instead of carefully planning ahead. Etymology The word Precrastination is a blend of pre- (before) and -crastinus (until next day). It is most likely a play on the word “procrastination”, which has an opposite definition. Research Rosenbaum et al. coined the term "precrastination" in 2014. David A. Rosenbaum is a Professor in the Department of Psychology at the University of California, Riverside. The study found evidence of human precrastination and coined the term precrastination. The Bucket Experiments The paper titled "Pre-crastination: Hastening Subgoal Completion at the Expense of Extra Physical Effort" consisted of nine different experiments that can be termed The Bucket Experiments. The goal of the research was “to get a better understanding of the evaluation of different kinds of costs in action planning”. While confirming the belief that individuals would rather carry a load weight a shorter distance rather than an equally heavy load of weight a longer distance they found contradictory results. Most university students irrationally carried the closer bucket over a long distance, rather than the further bucket over a short distance to the endpoint. When interviewed, the participants answered that they wanted to "get the task done as quickly as possible". These surprising results from the first three experiments changed the goal of research to "describing and providing a theoretical interpretation of this astonishing phenomenon". The first three experiments The Bucket Experiments were conducted using Pennsylvania State University students as participants and started with a set of three experiments that involved having a participant walk down an alleyway and from there pick up one of two weighted buckets and carry it a distance further until a stop line. Contrary to the experimenters’ expectations, participants would rather pick up and carry the bucket that was closer to them than pick up the bucket closer to the stop line, meaning that the participant would expend more energy to complete the task than necessary. When participants were questioned about their choice, they expressed the sentiment of wanting to get the task done as soon as possible. Testing the new phenomenon The fourth experiment was used to confirm the finding of the previous three while reducing statistical variability by moving the starting position of the buckets six feet further from the participants. Previous results were upheld and the statistical variability for the conclusion of the approach distance being the primary factor for picking up the bucket from the left or right was also upheld. 
The fifth and sixth experiments were used to remove the aspect of foot-hand coordination from the task, this was done by placing the participants in wheelchairs while they completed the task. The results from this experiment were in line with the rest of the experiments. In experiment seven, the researcher investigated “the possibility that participants actually preferred long carrying distances to short carrying distances”. This was done by altering the total distance that a left or right bucket would need to be carried, rather than just altering the approach and carry distances of the buckets. The result of this experiment was that the theory that participants preferred long carrying distances wasn't supported and that the students still cared about the total distance of the task (approach + carrying distance). Suggesting that when deciding task methodology, both approach and total distance are considered. For the eighth experiment, the experimenters test the hypothesis that “participants may have been attracted to the nearer object, grabbing it without considering the farther object because their attention was grabbed by the object that was closer at hand”. This attention hypothesis was tested by placing a screen at the end of the alleyway that told the participants to wait before starting the task, this could be for a two or four-second interval. After this an okay message would appear and the participants could complete the task at their leisure. The researchers argue that the screen “was to get participants to look down the alley so they would see the far as well as the near bucket.” and that their rationale for showing the wait message “was to discourage participants from impulsively initiating their excursion into the alley”. Ultimately there were no significant deviations from the results of the previous experiments, meaning that the close-object preference was replicated. Finally, the ninth experiment was to confirm that there was a noticeable difference in physical exertion between the two options for carrying distances and that the more labor-intensive option would be avoided. This was done by having light and heavy bucket options for the task while repeating the setup of experiments one through three. The results from experiment nine confirmed a general preference for the lighter bucket. Thereby confirming in the researchers’ mind that exertion did matter to participants, and despite this would engage in close-object preference when performing the other experiments. Even when it would result in a longer carrying distance, and thereby more exertion. Further research Sequencing preferences In April 2018 Lisa R. Fournier, Alexandra M. Stubblefield, Brian P. Dyre, and David A. Rosenbaum published a research article titled ‘Starting or finishing sooner? Sequencing preferences in object transfer tasks. This article examined task sequencing in regard to object transferring tasks in which either of two possible tasks could easily or logically come before the other task. This was done in hopes of whether precrastination would generalize a consistent logic of which of the two tasks to do first. For the experiment, they tested various conditions that differed in regard to “which task can be started sooner, finished sooner, or bring one’s current state closer to the goal state”. The testing of the experiment was to have participants transfer ping pong balls from two original buckets, one bucket at a time, into a bowl that was placed at the end of a corridor. 
The number of balls in the buckets, as well as the bucket distances along the corridor, differed between trials. An important factor for this experiment is the fact that the bucket selection didn't affect the total time to complete both of the transport tasks together. Also, participants walked, in grand total, an equal total distance and carried the buckets the same total distance regardless of which bucket was originally chosen. The conclusions from the experiments can be summed up with this statement extracted from the publication “We found that the chosen task order was strongly affected by which task or sub-goal could be started sooner." and thus replicating the tendency to engage in precrastination that was first observed in 2014 Rosenbaum et al. The end conclusion of the article differed from the 2014 Rosenbaum study by stating that, instead of precrastination being fueled by the wish to complete sub-goals as quickly as possible, precrastination is more likely the result of the desire to initiate the sub-goals as soon as possible. Precrastination and cognitive load On November 30 of 2018 Lisa R. Fournier, Emily Coder, Clark Kogan, Nisha Raghunath, Ezana Taddese, and David A. Rosenbaum published a research article titled ‘Which task will we choose first? Precrastination and cognitive load in task ordering’. The article “examined the generality of this recently discovered phenomenon by extending the methods used to study it, mainly to test the hypothesis that precrastination is motivated by cognitive load reduction.” The researchers experimented using the basic experimental setup from The Bucket Experiments, with a few alterations. One, instead of participants having to pick up and carry one of two possible buckets along a corridor, participants in this set of experiments were instructed to pick up both buckets that were placed in the workspace. Two, the participants had to carry both buckets at the same time in order to complete the task. Due to the task requiring the participants to pass by the location of the first bucket twice, once while walking to the second bucket and once more while walking back toward the end goal, it was determined that the action of picking up the first bucket on the first walk constituted a display of precrastination. Three, the addition of having half the participants memorize digit lists on top of performing the base physical action task mentioned previously. Four, the experiment was broken into two separate experiments. In the first experiment, the objects to be carried were buckets with golf balls, with possible variations of the number of golf balls in the near bucket versus the far bucket. In the second experiment, the objects to be carried were cups that were either full or half full of water. The conclusion of the research is split into two parts. For experiment one the researchers had the following statement: “The results of Experiment 1 showed that precrastination generalizes to the ordering of tasks (or task subgoals)." This result in another finding that replicates and extends the original findings of Rosenbaum et al. from 2014. The extension is seen with the findings that participants with a memory load showed a high probability of selecting the near-bucket-first something that wasn't the case for participants without a memory load. This means that while previous research is upheld, there is additional support to the claim that precrastination is the result of cognitive load reduction. 
The following quote highlights the researchers' interpretation of their results for experiment two: "Participants in the water cup transport task largely abandoned the near-object-first preference". They believe this was the result of the participants having to "pay a great deal of extra attention to the task if they started with the near cup". This interpretation supports the hypothesis that precrastination is the result of a reduction of cognitive load: when engaging in precrastination would instead increase the cognitive load, precrastination does not occur. End-state comfort and precrastination In January 2019, David A. Rosenbaum and Kyle S. Sauerberger published the research paper 'End-state comfort meets pre-crastination', with the goal of resolving an apparent conflict between the observed phenomena of precrastination and end-state comfort. End-state comfort can generally be defined as the tendency to act in physically awkward ways for the sake of comfortable or easy-to-control final postures. Through the course of the paper, the researchers determine that the two phenomena are in fact not contradictory and can interact to determine the final course of action. Studying precrastination in animals Precrastination has also been observed in animal behavior, as evidenced by Edward A. Wasserman and Stephen J. Brzykcy publishing their paper 'Pre-crastination in the pigeon' in 2014. In order to test the possibility of pigeons displaying precrastination, the researchers devised a basic two-alternative forced-choice task, where the pigeons had to at one time or another switch from pecking the center of three horizontally aligned buttons to pecking one of two side buttons. For the differing trials, left or right was cued by red or green colors. The pigeons' center-to-side button switch could be completed sooner (in the second step) or later (in the third step) in the sequence, with the difference between these two sequences resulting in no effect on the effort required to complete the task, the distance traveled over the course of the task, or the reward received from the task. As a result of this experiment, the researchers determined that pigeons displayed precrastination. This is due to the fact that despite the lack of any difference in the reward given for doing so, all of the pigeons "quickly came to peck the side location in Step 2 on which the upcoming star would next be presented in Step 3". Explanations Evolutionary perspective Precrastination often involves hasty and sudden behavior as well as split-second decision-making. Evolution played an important role in making precrastination the "default response option", as animals' quick reactions and decisions might have been significant to their survival. The brain evolved to anticipate certain situations and to approach the situation in an appropriate way. Subsequently, the adapted brain's ability to adjust and adapt to different contexts in a rapid manner and within a short timeframe allowed certain animals to have an advantage over others. A study from 2014, conducted by Wasserman and Brzykcy, showed further evidence that precrastination can be explained by evolution. The findings indicated that pigeons precrastinated. As the evolutionary ancestors of pigeons and humans went separate ways roughly 300 million years ago, this supports the evolutionary theory that the tendency to precrastinate may have emerged from a common ancestor. The suggested reasons why common ancestors should precrastinate can be linked to survival as well.
Grabbing the lowest fruit from the tree, eating the grain while it is still nearby and getting food while it is still available, can be crucial for survival. Working memory off-loading perspective An alternative explanation for precrastination is related to decreasing the load on one's working memory. When a task is done immediately, it can be mentally checked off and does not need to be stored in one's working memory anymore. In contrast, when waiting to do a task the task-related information needs to be continuously remembered which can be taxing for one's working memory. Working memory is defined as the active manipulation and maintenance of short-term memory. In contrast to long-term memory, it has a limited capacity of about five to nine units and is readily accessible. In accordance to the limited capacity, Dr. Rosenbaum hypothesized that in order to off-load items stored within the working memory, people will try and complete certain tasks, even if this is at the expense of more physical effort. Further evidence for this perspective is shown in an experiment done by Lisa Fournier and co-authors in 2018. A similar task to The Bucket Experiments was given to participants, however, half of the participants were given an extra mental load to maintain while completing the task. This extra mental load was in the form of remembering a list of numbers that they had to recall after the task was completed. Recalling the list of numbers specifically targeted increases the mental load of the working memory of the participants. Dr. Fournier found that the participants with the extra mental load were ninety percent more likely to precrastinate during the task. It shows that people tend to structure their behavior in ways that decrease personal cognitive effort, which supports the idea of a trade-off between physical effort and cognitive load. In order to reduce the cognitive load, people are willing to put in extra physical effort. This suboptimal behavior is found within The Bucket Experiments. Individuals often chose to carry the closest bucket first, despite the fact that such action results in an increased physical effort required for competition of the task. One critique to this theory is that lifting a bucket does not tax the working memory that much, so it is not clear if this is the only thing that is contributing to precrastination. CLEAR hypothesis The working memory off-loading perspective was expanded on by the cognitive-load-reduction (CLEAR) hypothesis created by Rachel VonderHaar and coauthors in 2019. This hypothesis is aimed at explaining how the completion of tasks is ordered mentally. According to the CLEAR hypothesis, there is a strong drive to reduce the cognitive load, therefore tasks that are most efficient in doing so will be prioritized. This hypothesis takes a broader view of the cognitive load of a task and suggests that people will do what they can to free up cognitive resources. The study by VonderHaar et al. in 2019 specifically showed that when given the choice in what order to perform two different tasks, one being a physical task and the other being a cognitive task, participants were more likely to choose to undertake the cognitive task before the physical one. The CLEAR hypothesis includes off-loading working memory as well as freeing up other cognitive resources such as attention. This is illustrated by a study conducted by Raghunath et al. about precrastination and attention. 
The setup was similar to that of the bucket experiment, but then with half-full or full cups of water. This cup experiment showed that precrastination is dependent on the cognitive demands of the tasks. When the full cup of water was placed earlier and the half-cup later on, precrastination decreased as the full cup required more attention. This showed that when an earlier task requires more attention than a later one, the prevalence of precrastination decreases. The same was not seen when the weight of the buckets was adjusted. While precrastinating would increase the physical effort even more, participants still choose to do so. People who always precrastinated in the cup experiment even when the cognitive load was higher, mentioned that they did so out of habit. Reward perspective A simpler perspective is related to the rewards associated with completing a task. The reward center in the brain is activated when a task is completed that required less effort. The perspective suggests that individuals will perform tasks earlier as it is rewarded in the brain. Tasks that are completed earlier are more attractive, as this results in an instant reward. However, when a task requires a delay, this also delays the reward. Consequences As of 2022, not a lot is known about the positive and negative effects of precrastination. A possible positive effect that we derive from precrastinating is that we may relieve our working memory by getting a task done as soon as possible, thus making cognitive space for more important decisions. Another positive consequence may be that we are able to gain a lot of information as quickly as possible about the costs and benefits of task-related behaviors. Completing a task right away also enables us to feel an instant sense of accomplishment. However, precrastination can also lead to negative consequences due to behaving in an overall less efficient or irrational way. Making a rushed decision can also lead to completing a task incorrectly. Correlates Precrastination has been associated with different character traits. Impulsivity was expected to be among them because it has been evidenced to be a strong correlate of the contrary concept of procrastination. Results provided by the research of Sauerberger et al. showed that this trait might be unrelated to precrastination. In his published dissertation, further correlates were listed that have been found throughout studies of his research team. A negative correlation has been established between the trait of clumsiness and precrastination. Three of the big five personality traits have been investigated in more detail– conscientiousness, neuroticism, and agreeableness. Conscientiousness, neuroticism and agreeableness Conscientiousness has been shown to be positively associated with precrastination. Organization, responsibility, and productiveness are characteristics of it according to the big five inventory. Ego resilience, which resembles conscientiousness in many aspects, is also positively correlated to precrastination. Another trait is neuroticism for which the results showed no significant correlation. Depression, which is one of its features in the big five inventory next to anxiety and emotionality, has the strongest correlation with precrastination. Sauerberger stated that no observable correlation with neuroticism might be due to other factors, such as the participants' awareness of no consequence following their action and no feeling of uncertainty involved in the experiments as seen in real-life situations. 
He commented that there might be a correlation. A correlation with agreeableness has been proven. The relationship might be either positive or negative depending on the context, for example giving precrastinaters exhibit less prosocial behavior on planes but increased prosocial behavior on the freeway. According to the big five inventory, the main facets of this trait are compassion, respect, and trust. References Habits Anxiety Motivation Psychological stress Time management Waste of resources
Precrastination
[ "Physics", "Biology" ]
4,168
[ "Behavior", "Ethology", "Physical quantities", "Time", "Motivation", "Time management", "Spacetime", "Human behavior", "Habits" ]
70,660,540
https://en.wikipedia.org/wiki/Alternative%20abiogenesis%20scenarios
A scenario is a set of related concepts pertinent to the origin of life (abiogenesis), such as the iron-sulfur world. Many alternative abiogenesis scenarios have been proposed by scientists in a variety of fields from the 1950s onwards in an attempt to explain how the complex mechanisms of life could have come into existence. These include hypothesized ancient environments that might have been favourable for the origin of life, and possible biochemical mechanisms. A scenario The biochemist Nick Lane has proposed a possible scenario for the origin of life that integrates much of the available evidence from biochemistry, geology, phylogeny, and experimentation: Environments Many environments have been proposed for the origin of life. Fluctuating salinity: dilute and dry-down Harold Blum noted in 1957 that if proto-nucleic acid chains spontaneously form duplex structures, then there is no way to dissociate them. The Oparin-Haldane hypothesis addresses the formation, but not the dissociation, of nucleic acid polymers and duplexes. However, nucleic acids are unusual because, in the absence of counterions (low salt) to neutralize the high charges on opposing phosphate groups, the nucleic acid duplex dissociates into single chains. Early tides, driven by a close moon, could have generated rapid cycles of dilution (high tide, low salt) and concentration (dry-down at low tide, high salt) that exclusively promoted the replication of nucleic acids through a process dubbed tidal chain reaction (TCR). This theory has been criticized on the grounds that early tides may not have been so rapid, although regression from current values requires an Earth–Moon juxtaposition at around two Ga, for which there is no evidence, and early tides may have been approximately every seven hours. Another critique is that only 2–3% of the Earth's crust may have been exposed above the sea until late in terrestrial evolution. The tidal chain reaction theory has mechanistic advantages over thermal association/dissociation at deep-sea vents because it requires that chain assembly (template-driven polymerization) takes place during the dry-down phase, when precursors are most concentrated, whereas thermal cycling needs polymerization to take place during the cold phase, when the rate of chain assembly is lowest and precursors are likely to be more dilute. Hot freshwater lakes Jack W. Szostak suggested that geothermal activity provides greater opportunities for the origination of life in open lakes where there is a buildup of minerals. In 2010, based on spectral analysis of sea and hot mineral water, Ignat Ignatov and Oleg Mosin demonstrated that life may have predominantly originated in hot mineral water. Hot mineral water that contains hydrogen carbonate and calcium ions has the most optimal range. This case is similar to the origin of life in hydrothermal vents, but with hydrogen carbonate and calcium ions in hot water. At a pH of 9–11, the reactions can take place in seawater. According to Melvin Calvin, certain reactions of condensation-dehydration of amino acids and nucleotides in individual blocks of peptides and nucleic acids can take place in the primary hydrosphere with pH 9–11 at a later evolutionary stage. Some of these compounds like hydrocyanic acid (HCN) have been proven in the experiments of Miller. This is the environment in which the stromatolites have been created. David Ward described the formation of stromatolites in hot mineral water at the Yellowstone National Park. 
In 2011, Tadashi Sugawara created a protocell in hot water. Geothermal springs Bruce Damer and David Deamer argue that cell membranes cannot be formed in salty seawater, and must therefore have originated in freshwater environments like pools replenished by a combination of geothermal springs and rainfall. Before the continents formed, the only dry land on Earth would be volcanic islands, where rainwater would form ponds where lipids could form the first stages towards cell membranes. During multiple wet-dry cycles, biopolymers would be synthesized and are encapsulated in vesicles after condensation. Zinc sulfide and manganese sulfide in these ponds would have catalyzed organic compounds by abiotic photosynthesis. Experimental research at geothermal springs successfully synthesized polymers and were encapsulated in vesicles after exposure to UV light and multiple wet-dry cycles. At temperatures of 60 to 80 °C at geothermal fields, biochemical reactions can occur. These predecessors of true cells are assumed to have behaved more like a superorganism rather than individual structures, where the porous membranes would house molecules which would leak out and enter other protocells. Only when true cells had evolved would they gradually adapt to saltier environments and enter the ocean. 6 of the 11 biochemical reactions of the rTCA cycle can occur in hot metal-rich acidic water which suggests metabolic reactions might have originated in this environment, this is consistent with the enhanced stability of RNA phosphodiester, aminoacyl-tRNA bonds, and peptides in acidic conditions. Cycling between supercritical and subcritical CO2 at tectonic fault zones might have led to peptides integrating with and stabilizing lipid membranes. This is suggested to have driven membrane protein evolution, as it shown that a selected peptide (H-Lys-Ser-Pro-Phe-Pro-Phe-Ala-Ala-OH) causes the increase of membrane permeability to water. David Deamer and Bruce Damer states that the prebiotic chemistry does not require ultraviolet irradiation as the chemistry could also have occurred under shaded areas that protected biomolecules from photolysis. Deep sea alkaline vents Nick Lane believes that no known life forms could have utilized zinc-sulfide based photosynthesis, lightning, volcanic pyrite synthesis, or UV radiation as a source of energy. Rather, he instead suggests that deep sea alkaline vents is more likely to have been a source energy for early cellular life. Serpentinization at alkaline hydrothermal vents produce methane and ammonia. Mineral particles that have similar properties to enzymes at deep sea vents would catalyze organic compounds out of dissolved CO2 within seawater. Porous rock might have promoted condensation reactions of biopolymers and act as a compartment of membranous structures, however it is unknown about how it could promote coding and metabolism. Acetyl phosphate, which is readily synthesized from thioacetate, can promote aggregation of adenosine monophosphate of up to 7 monomers which is considered energetically favored in water due to interactions between nucleobases. Acetyl phosphate can stabilize aggregation of nucleotides in the presence of Na+ and could possibly promote polymerization at mineral surfaces or lower water activity. An external proton gradient within a membrane would have been maintained between the acidic ocean and alkaline seawater. The descendants of the last universal common ancestor, bacteria and archaea, were probably methanogens and acetogens. 
The earliest microfossils, dated to be 4.28 to 3.77 Ga, were found at hydrothermal vent precipitates. These microfossils suggest that early cellular life began at deep sea hydrothermal vents. Exergonic reactions at these environments could have provided free energy that promoted chemical reactions conducive to prebiotic biomolecules. Nonenzymatic reactions of glycolysis and the pentose phosphate pathway can occur in the presence of ferrous iron at 70 °C, the reactions produce erythrose 4-phosphate, an amino acid precursor and ribose 5-phosphate, a nucleotide precursor. Pyrimidines are shown to be synthesized from the reaction between aspartate and carbamoyl phosphate at 60 °C and in the presence of metals, it is suggested that purines could be synthesized from the catalysis of metals. Adenosine monophosphate are also shown to be synthesized from adenine, monopotassium phosphate or pyrophosphate, and ribose at silica at 70 °C. Reductive amination and transamination reactions catalyzed by alkaline hydrothermal vent mineral and metal ions produce amino acids. Long chain fatty acids can be derived from formic acid or oxalic acid during Fischer-Tropsch-type synthesis. Carbohydrates containing an isoprene skeleton can be synthesized from the formose reaction. Isoprenoids incorporated into fatty acid vesicles can stabilize the vesicles, which are suggested to have driven the divergence of bacterial and archaeal lipids. Volcanic ash in the ocean Geoffrey W. Hoffmann has argued that a complex nucleation event as the origin of life involving both polypeptides and nucleic acid is compatible with the time and space available in the primary oceans of Earth. Hoffmann suggests that volcanic ash may provide the many random shapes needed in the postulated complex nucleation event. This aspect of the theory can be tested experimentally. Gold's deep-hot biosphere In the 1970s, Thomas Gold proposed the theory that life first developed not on the surface of the Earth, but several kilometers below the surface. It is claimed that the discovery of microbial life below the surface of another body in our Solar System would lend significant credence to this theory. Radioactive beach hypothesis Zachary Adam claims that tidal processes that occurred during a time when the Moon was much closer may have concentrated grains of uranium and other radioactive elements at the high-water mark on primordial beaches, where they may have been responsible for generating life's building blocks. According to computer models, a deposit of such radioactive materials could show the same self-sustaining nuclear reaction as that found in the Oklo uranium ore seam in Gabon. Such radioactive beach sand might have provided sufficient energy to generate organic molecules, such as amino acids and sugars from acetonitrile in water. Radioactive monazite material also has released soluble phosphate into the regions between sand-grains, making it biologically "accessible." Thus amino acids, sugars, and soluble phosphates might have been produced simultaneously, according to Adam. Radioactive actinides, left behind in some concentration by the reaction, might have formed part of organometallic complexes. These complexes could have been important early catalysts to living processes. 
John Parnell has suggested that such a process could provide part of the "crucible of life" in the early stages of any early wet rocky planet, so long as the planet is large enough to have generated a system of plate tectonics which brings radioactive minerals to the surface. As the early Earth is thought to have had many smaller plates, it might have provided a suitable environment for such processes. The hypercycle In the early 1970s, Manfred Eigen and Peter Schuster examined the transient stages between the molecular chaos and a self-replicating hypercycle in a prebiotic soup. In a hypercycle, the information storing system (possibly RNA) produces an enzyme, which catalyzes the formation of another information system, in sequence until the product of the last aids in the formation of the first information system. Mathematically treated, hypercycles could create quasispecies, which through natural selection entered into a form of Darwinian evolution. A boost to hypercycle theory was the discovery of ribozymes capable of catalyzing their own chemical reactions. The hypercycle theory requires the existence of complex biochemicals, such as nucleotides, which do not form under the conditions proposed by the Miller–Urey experiment. Iron–sulfur world In the 1980s, Wächtershäuser and Karl Popper postulated the iron–sulfur world hypothesis for the evolution of pre-biotic chemical pathways. It traces today's biochemistry to primordial reactions which synthesize organic building blocks from gases. Wächtershäuser systems have a built-in source of energy: iron sulfides such as pyrite. The energy released by oxidising these metal sulfides can support synthesis of organic molecules. Such systems may have evolved into autocatalytic sets constituting self-replicating, metabolically active entities predating modern life forms. Experiments with sulfides in an aqueous environment at 100 °C produced a small yield of dipeptides (0.4% to 12.4%) and a smaller yield of tripeptides (0.10%). However, under the same conditions, dipeptides were quickly broken down. Several models postulate a primitive metabolism, allowing RNA replication to emerge later. The centrality of the Krebs cycle (citric acid cycle) to energy production in aerobic organisms, and in drawing in carbon dioxide and hydrogen ions in biosynthesis of complex organic chemicals, suggests that it was one of the first parts of the metabolism to evolve. Concordantly, geochemists Szostak and Kate Adamala demonstrated that non-enzymatic RNA replication in primitive protocells is only possible in the presence of weak cation chelators like citric acid. This provides further evidence for the central role of citric acid in primordial metabolism. Russell has proposed that "the purpose of life is to hydrogenate carbon dioxide" (as part of a "metabolism-first", rather than a "genetics-first", scenario). The physicist Jeremy England has argued from general thermodynamic considerations that life was inevitable. An early version of this idea was Oparin's 1924 proposal for self-replicating vesicles. In the 1980s and 1990s came Wächtershäuser's iron–sulfur world theory and Christian de Duve's thioester models. More abstract and theoretical arguments for metabolism without genes include Freeman Dyson's mathematical model and Stuart Kauffman's collectively autocatalytic sets in the 1980s. Kauffman's work has been criticized for ignoring the role of energy in driving biochemical reactions in cells. 
A multistep biochemical pathway like the Krebs cycle did not just self-organize on the surface of a mineral; it must have been preceded by simpler pathways. The Wood–Ljungdahl pathway is compatible with self-organization on a metal sulfide surface. Its key enzyme unit, carbon monoxide dehydrogenase/acetyl-CoA synthase, contains mixed nickel-iron-sulfur clusters in its reaction centers and catalyzes the formation of acetyl-CoA. However, prebiotic thiolated and thioester compounds are thermodynamically and kinetically unlikely to accumulate in the presumed prebiotic conditions of hydrothermal vents. One possibility is that cysteine and homocysteine may have reacted with nitriles from the Strecker reaction, forming catalytic thiol-rich polypeptides. It has been suggested that the iron-sulfur world hypothesis and RNA world hypothesis are not mutually exclusive as modern cellular processes do involve both metabolites and genetic molecules. Zinc world Armen Mulkidjanian's zinc world (Zn-world) hypothesis extends Wächtershäuser's pyrite hypothesis. The Zn-world theory proposes that hydrothermal fluids rich in H2S interacting with cold primordial ocean (or Darwin's "warm little pond") water precipitated metal sulfide particles. Oceanic hydrothermal systems have a zonal structure reflected in ancient volcanogenic massive sulfide ore deposits. They reach many kilometers in diameter and date back to the Archean. Most abundant are pyrite (FeS2), chalcopyrite (CuFeS2), and sphalerite (ZnS), with additions of galena (PbS) and alabandite (MnS). ZnS and MnS have a unique ability to store radiation energy, e.g. from ultraviolet light. When replicating molecules were originating, the primordial atmospheric pressure was high enough (>100 bar) to precipitate near the Earth's surface, and ultraviolet irradiation was 10 to 100 times more intense than now; hence the photosynthetic properties mediated by ZnS provided the right energy conditions for the synthesis of informational and metabolic molecules and the selection of photostable nucleobases. The Zn-world theory has been filled out with evidence for the ionic constitution of the interior of the first protocells. In 1926, the Canadian biochemist Archibald Macallum noted the resemblance of body fluids such as blood and lymph to seawater; however, the inorganic composition of all cells differ from that of modern seawater, which led Mulkidjanian and colleagues to reconstruct the "hatcheries" of the first cells combining geochemical analysis with phylogenomic scrutiny of the inorganic ion requirements of modern cells. The authors conclude that ubiquitous, and by inference primordial, proteins and functional systems show affinity to and functional requirement for K+, Zn2+, Mn2+, and . Geochemical reconstruction shows that this ionic composition could not have existed in the ocean but is compatible with inland geothermal systems. In the oxygen-depleted, CO2-dominated primordial atmosphere, the chemistry of water condensates near geothermal fields would resemble the internal milieu of modern cells. Therefore, precellular evolution may have taken place in shallow "Darwin ponds" lined with porous silicate minerals mixed with metal sulfides and enriched in K+, Zn2+, and phosphorus compounds. Clay The clay hypothesis was proposed by Graham Cairns-Smith in 1985. It postulates that complex organic molecules arose gradually on pre-existing, non-organic replication surfaces of silicate crystals in contact with an aqueous solution. 
The clay mineral montmorillonite has been shown to catalyze the polymerization of RNA in aqueous solution from nucleotide monomers, and the formation of membranes from lipids. In 1998, Hyman Hartman proposed that "the first organisms were self-replicating iron-rich clays which fixed carbon dioxide into oxalic acid and other dicarboxylic acids. This system of replicating clays and their metabolic phenotype then evolved into the sulfide rich region of the hot spring acquiring the ability to fix nitrogen. Finally phosphate was incorporated into the evolving system which allowed the synthesis of nucleotides and phospholipids." Biochemistry Different forms of life with variable origin processes may have appeared quasi-simultaneously in the early Earth. The other forms may be extinct, having left distinctive fossils through their different biochemistry. Metabolism-like reactions could have occurred naturally in early oceans, before the first organisms evolved. Some of these reactions can produce RNA, and others resemble two essential reaction cascades of metabolism: glycolysis and the pentose phosphate pathway, that provide essential precursors for nucleic acids, amino acids and lipids. Fox proteinoids In trying to uncover the intermediate stages of abiogenesis mentioned by Bernal, Sidney Fox in the 1950s and 1960s studied the spontaneous formation of peptide structures under plausibly early Earth conditions. In one of his experiments, he allowed amino acids to dry out as if puddled in a warm, dry spot in prebiotic conditions: In an experiment to set suitable conditions for life to form, Fox collected volcanic material from a cinder cone in Hawaii. He discovered that the temperature was over 100 °C just beneath the surface of the cinder cone, and suggested that this might have been the environment in which life was created—molecules could have formed and then been washed through the loose volcanic ash into the sea. He placed lumps of lava over amino acids derived from methane, ammonia and water, sterilized all materials, and baked the lava over the amino acids for a few hours in a glass oven. A brown, sticky substance formed over the surface, and when the lava was drenched in sterilized water, a thick, brown liquid leached out. He found that, as they dried, the amino acids formed long, often cross-linked, thread-like, submicroscopic polypeptides. Protein amyloid An origin-of-life theory based on self-replicating beta-sheet structures has been put forward by Maury in 2009. The theory suggest that self-replicating and self-assembling catalytic amyloids were the first informational polymers in a primitive pre-RNA world. The main arguments for the amyloid hypothesis is based on the structural stability, autocatalytic and catalytic properties, and evolvability of beta-sheet based informational systems. Such systems are also error correcting and chiroselective. First protein that condenses substrates during thermal cycling: thermosynthesis The thermosynthesis hypothesis considers chemiosmosis more basal than fermentation: the ATP synthase enzyme, which sustains chemiosmosis, is the currently extant enzyme most closely related to the first metabolic process. The thermosynthesis hypothesis does not even invoke a pathway: ATP synthase's binding change mechanism resembles a physical adsorption process that yields free energy. The result would be convection which would bring a continual supply of reactants to the protoenzyme. 
The described first protein may be simple in the sense that it requires only a short sequence of conserved amino acid residues, a sequent sufficient for the appropriate catalytic cleft. Pre-RNA world: The ribose issue and its bypass A different type of nucleic acid, such as peptide nucleic acid, threose nucleic acid or glycol nucleic acid, could have been the first to emerge as a self-reproducing molecule, later replaced by RNA. Larralde et al., say that "the generally accepted prebiotic synthesis of ribose, the formose reaction, yields numerous sugars without any selectivity". They conclude that "the backbone of the first genetic material could not have contained ribose or other sugars because of their instability", meaning that the ester linkage of ribose and phosphoric acid in RNA is prone to hydrolysis. Pyrimidine ribonucleosides and nucleotides have been synthesized by reactions which by-pass the free sugars, and are assembled stepwise using nitrogenous or oxygenous chemistries. Sutherland has demonstrated high-yielding routes to cytidine and uridine ribonucleotides from small 2 and 3 carbon fragments such as glycolaldehyde, glyceraldehyde or glyceraldehyde-3-phosphate, cyanamide and cyanoacetylene. A step in this sequence allows the isolation of enantiopure ribose aminooxazoline if the enantiomeric excess of glyceraldehyde is 60% or greater. This can be viewed as a prebiotic purification step. Ribose aminooxazoline can then react with cyanoacetylene to give alpha cytidine ribonucleotide. Photoanomerization with UV light allows for inversion about the 1' anomeric centre to give the correct beta stereochemistry. In 2009 they showed that the same simple building blocks allow access, via phosphate controlled nucleobase elaboration, to 2',3'-cyclic pyrimidine nucleotides directly, which can polymerize into RNA. Similar photo-sanitization can create pyrimidine-2',3'-cyclic phosphates. Autocatalysis Autocatalysts are substances that catalyze the production of themselves and therefore are "molecular replicators." The simplest self-replicating chemical systems are autocatalytic, and typically contain three components: a product molecule and two precursor molecules. The product molecule joins the precursor molecules, which in turn produce more product molecules from more precursor molecules. The product molecule catalyzes the reaction by providing a complementary template that binds to the precursors, thus bringing them together. Such systems have been demonstrated both in biological macromolecules and in small organic molecules. It has been proposed that life initially arose as autocatalytic chemical networks. Julius Rebek and colleagues combined amino adenosine and pentafluorophenyl esters with the autocatalyst amino adenosine triacid ester (AATE). One product was a variant of AATE which catalyzed its own synthesis. This demonstrated that autocatalysts could compete within a population of entities with heredity, a rudimentary form of natural selection. Synthesis based on hydrogen cyanide A research project completed in 2015 by John Sutherland and others found that a network of reactions beginning with hydrogen cyanide and hydrogen sulfide, in streams of water irradiated by UV light, could produce the chemical components of proteins and lipids, as well as those of RNA, while not producing a wide range of other compounds. The researchers used the term "cyanosulfidic" to describe this network of reactions. 
Simulated chemical pathways In 2020, chemists described possible chemical pathways from nonliving prebiotic chemicals to complex biochemicals that could give rise to living organisms, based on a new computer program named AllChemy. Viral origin Evidence for a "virus first" hypothesis, which may support theories of the RNA world, was suggested in 2015. One of the difficulties for the study of the origins of viruses is their high rate of mutation; this is particularly the case in RNA retroviruses like HIV. A 2015 study compared protein fold structures across different branches of the tree of life, where researchers can reconstruct the evolutionary histories of the folds and of the organisms whose genomes code for those folds. They argue that protein folds are better markers of ancient events as their three-dimensional structures can be maintained even as the sequences that code for those begin to change. Thus, the viral protein repertoire retain traces of ancient evolutionary history that can be recovered using advanced bioinformatics approaches. Those researchers think that "the prolonged pressure of genome and particle size reduction eventually reduced virocells into modern viruses (identified by the complete loss of cellular makeup), meanwhile other coexisting cellular lineages diversified into modern cells." The data suggest that viruses originated from ancient cells that co-existed with the ancestors of modern cells. These ancient cells likely contained segmented RNA genomes. A computational model (2015) has shown that virus capsids may have originated in the RNA world and served as a means of horizontal transfer between replicator communities. These communities could not survive if the number of gene parasites increased, with certain genes being responsible for the formation of these structures and those that favored the survival of self-replicating communities. The displacement of these ancestral genes between cellular organisms could favor the appearance of new viruses during evolution. Viruses retain a replication module inherited from the prebiotic stage since it is absent in cells. So this is evidence that viruses could originate from the RNA world and could also emerge several times in evolution through genetic escape in cells. Encapsulation without a membrane Polyester droplets Tony Jia and Kuhan Chandru have proposed spontaneously-forming membraneless polyester droplets in early cellularization before the innovation of lipid vesicles. Protein function within and RNA function in the presence of certain polyester droplets was shown to be preserved within the droplets. The droplets have scaffolding ability, by allowing lipids to assemble around them; this may have prevented leakage of genetic materials. Proteinoid microspheres Fox observed in the 1960s that proteinoids could form cell-like structures named "proteinoid microspheres". The amino acids had combined to form proteinoids, which formed small globules. These were not cells; their clumps and chains were reminiscent of cyanobacteria, but they contained no functional nucleic acids or other encoded information. Colin Pittendrigh stated in 1967 that "laboratories will be creating a living cell within ten years", a remark that reflected the typical contemporary naivety about the complexity of cell structures. Jeewanu protocell A further protocell model is the Jeewanu. 
First synthesized in 1963 from simple minerals and basic organics while exposed to sunlight, it is reported to have some metabolic capabilities, the presence of a semipermeable membrane, amino acids, phospholipids, carbohydrates and RNA-like molecules. However, the nature and properties of the Jeewanu remains to be clarified. Electrostatic interactions induced by short, positively charged, hydrophobic peptides containing 7 amino acids in length or fewer can attach RNA to a vesicle membrane, the basic cell membrane. RNA-DNA world In 2020, coevolution of a RNA-DNA mixture based on diamidophosphate was proposed. The mixture of RNA-DNA sequences, called chimeras, have weak affinity and form weaker duplex structures. This is advantageous in an abiotic scenario and these chimeras have been shown to replicate RNA and DNA – overcoming the "template-product" inhibition problem, where a pure RNA or pure DNA strand is unable to replicate non-enzymatically because it binds too strongly to its partners. This could lead to an abiotic cross-catalytic amplification of RNA and DNA. A continuous chemical reaction network in water and under high-energy radiation can generate precursors for early RNA. In 2022, evolution experiments of self-replicating RNA showed how RNA may have evolved to diverse complex molecules in RNA world conditions. The RNA evolved to a "replicator network comprising five types of RNAs with diverse interactions" such as cooperation for replication of other members (multiple coexisting host and parasite lineages). See also RNP world First universal common ancestor References Sources Origin of life
Alternative abiogenesis scenarios
[ "Biology" ]
6,143
[ "Biological hypotheses", "Origin of life" ]
70,660,722
https://en.wikipedia.org/wiki/Stimuli-responsive%20drug%20delivery%20systems
Conventional drug delivery is limited by the inability to control dosing, target specific sites, and achieve targeted permeability. Traditional methods of delivering therapeutics to the body experience challenges in achieving and maintaining maximum therapeutic effect while avoiding the effects of drug toxicity. Many drugs that are delivered orally or parenterally do not include mechanisms for sustained release, and as a result they require higher and more frequent dosing to achieve any therapeutic effect for the patient. As a result, the field of drug delivery systems developed into a large focus area for pharmaceutical research to address these limitations and improve quality of care for patients. Within the broad field of drug delivery, the development of stimuli-responsive drug delivery systems has created the ability to tune drug delivery systems to achieve more controlled dosing and targeted specificity based on material response to exogenous and endogenous stimuli. Endogenous stimuli consist of chemical, biological, and physical stimuli that occur naturally in the body, such as changes in pH, temperature, enzymatic action, pressure, and shear forces. More specifically, endogenous chemical stimuli include environmental pH, redox reactions, and chemical gradients, each of which are typically out of physiological range or unique to a specific or diseased tissue, which provides the ability to achieve target specificity using these particular stimuli for release. Researchers have worked to develop numerous types of drug delivery systems that harness a response to endogenous chemical stimuli to achieve targeted delivery and controlled release of drug into a specific environment. These chemically responsive drug delivery systems can be created using a wide variety of materials and carriers, including lipid, protein, or polymeric materials to create degradable scaffolds or depots and micelles and nanoparticles. An example of this includes the engineering of biopolymeric nanospheres that are triggered to release an encapsulated therapeutic when they enter the tumor microenvironment due to the drop in pH associated with the tumor microenvironment. Many of these systems rely on the application and manipulation of click chemistry to achieve stimulated response The field of endogenous chemical-responsive systems has developed greatly within the last 20 years and continues to grow as researchers determine new applications for the field, including the development of chemically responsive systems for diagnostic purposes. Endogenous chemical stimuli-responsive drug delivery systems are important in the field of drug delivery because of their ability to harness chemical phenomena within the body to overcome traditional therapeutic release limitations such as temporal release and tissue permeability. These drug delivery systems can be applied both as diagnostic and treatment tools for diseases like cancer to achieve long-term action and maximize the therapeutic effect. History While the study of drug delivery methods and techniques has been around for centuries, the modern field of drug delivery we know today was not introduced until the 1960s, when the concept of controlled drug delivery systems was introduced by Judah Folkman, MD of Harvard. 
He first introduced the idea of a prolonged drug release system as a means of constant rate delivery while experimenting with anesthetic gases and arterio-venous shunts on mice This inspired the formation of a company called ALZA by a chemist named Alejandro Zaffaroni, whose primary focus was on the development of drug carrying systems that would increase the specificity and efficacy of drugs. The introduction of this concept led to the development of the field we know today, with macro scale delivery devices being developed in the 1970s and 1980s before moving into more focused development of microscale and nanoscale devices in the late 1980s onward. The concept of stimuli-responsive drug delivery systems can actually be seen as ahead of this time, since the first pH-responsive drug coating was used in the late 1950s in Europe. These coatings were used on drugs delivered to the stomach, so that they would protonate and dissolve at low pH to release drug. The development of stimuli-responsive drug carriers was not popularized until the mid-1980s by researchers at Utah University, who created thermally-responsive drug delivery systems. Since the eruption of this field, substantial research has been conducted to tune stimuli-responsive drug delivery systems despite several limitations. As of 2013, a redox-responsive therapy targeting metastatic breast cancer had been approved by the FDA but was not yet currently in use. Much work is still being done to continue the development of this field in hopes of one day making stimuli-responsive drug delivery systems commonplace in medical practice. Type of stimuli and their mechanisms of action pH-responsive pH responsive drug delivery systems respond to the environmental pH of a tissue, which, when existing within a certain acidic range, can lead to structural and chemical changes of the drug delivery system. These changes can include conformational changes and surface interactions that can lead to the degradation or swelling/shrinking of the drug carrier. pH responsive drug delivery systems are possible because of the tendency of diseased or cancerous tissues to maintain a lower pH value than is physiologically normal due to high rates of tumor cell metabolism (normal: 7.4, lower range: 5.0-6.5). These systems are governed by hydrophilic and hydrophobic interactions of self-assembled drug carriers within a certain pH range. These hydrophilic and hydrophobic interactions can cause the destabilization of these systems, which lead to conformational changes that cause the drug carrier to breakdown or degrade. As a result, the drug is released from the system. pH responsive drug delivery systems are typically synthesized from pH-responsive polymers that have been conjugated with ionic residues that change charge based on the pH of the environment. Systems used with pH-responsive polymers include implantable hydrogels and micro- and nanoparticles. pH-responsive drug delivery systems are particularly suitable for the design of chemotherapeutic delivery systems due to the naturally low pH found in tumor microenvironments, but can be applied in other disease settings where the pH of the varies from physiological pH. The highly targeted and controlled release ability, as well as their broad applications, make pH-responsive drug delivery systems some of the most well-researched and sought after clinical solutions in stimuli-responsive drug delivery. 
Redox-responsive Redox responsive drug delivery systems rely on the natural reduction-oxidation reactions that occur in the body and the availability of reducing or oxidative-agents in the extracellular and intracellular space. In redox-stimulated responses, drug carriers enter the intracellular space through endocytosis and are destabilized by intracellular concentrations of reducing agents, leading to their disassembly and the delivery of the therapeutic intracellularly. For example, the use of redox stimulated drug delivery is primarily attributed to the high intracellular concentration of glutathione (GSH) as compared to the much lower extracellular concentration of GSH. GSH acts as a reducing agent in redox reactions, allowing it to cleave bonds like disulfide bonds. The increased level of GSH in tumor cells combined with its ability to cleave disulfide bonds has led to the development of drug delivery systems, such as polymeric micelles, synthesized with disulfide bonds that are subsequently cleaved by intracellular GSH, which cause the breakdown of the micelle and the intracellular release of the encapsulated therapeutic. Gradient-responsive Gradient responsive drug delivery systems are stimulated to deliver therapeutics through contact with an endogenous chemical gradient. When the system comes into contact with its specific chemical gradient, increased concentration of the chemical can lead to the conformational change or degradation of a drug carrier to allow drug release. Gradient responsive systems also include gradients created by pH or redox reactants. Applications pH-responsive drug delivery systems are very popular subjects of research for their variability in application. Deviations from physiological pH occur in numerous disease states including infection, inflammation, and cancer, which makes this stimulus one of the most widely researched in the field of endogenous chemically responsive drug delivery systems. Applications of pH-responsive drug delivery systems include the synthesis of pH-responsive polymers into carriers like hydrogels, micelles, and micro- and nanovesicles. pH-responsive polymers can be selected for certain applications based on characteristics like the drug concentration, number of ionizable groups, and the type of carrier being used. Examples of widely used pH-responsive polymers include but are not limited to: poly(acrylamide), poly(acrylic acid), and poly(methacrylic acid) Redox-responsive drug delivery systems are also widely studied for a variety of applications, in particular their use in targeting cancer due to the increased levels of GSH in cancerous cells. Redox-responsive drug delivery systems are also used in the delivery of DNA and siRNA for gene therapy. Redox-responsive drug carriers are primarily synthesized as micelles or polymersomes and are highly stable because they contain a high amount of cross-linking Gradient-responsive drug delivery systems do not have a substantial body of research as of yet. The primary applications of gradient-responsive drug delivery systems are usually referenced as pH or redox gradients, as opposed to the gradients of other hormones or factors found naturally in the body. Aside from pH and redox gradients, there are no published works on gradient-responsive drug delivery systems. Limitations There are many limitations that exist within the field of endogenous chemically responsive drug delivery systems that prevent many of these products from approved to be used in a clinical setting. 
One of the primary challenges of endogenous chemically responsive drug delivery is the inability to address or overcome patient heterogeneity. Patient heterogeneity describes the naturally-occurring difference between patient biology, such as differences in tumor pH for the same cancer and blood concentrations of redox reagents. The targeting of many chemical properties in pathological tissues is also restricted by a small range of fluctuation of the chemical property, which prevents researchers from being able to safely and specifically target that diseased tissue and stimulus due to small windows of sensitivity that have yet to be optimized. Other important limitations to consider include the formulation of the drug carrier, which can affect the clearance rate and biodistribution of the drug carrier and decrease therapeutic efficacy due to size, shape, or effective penetration of the tissue by the drug carrier. Biocompatibility and toxicity of the drug carrier formulation also poses a significant challenge to the development of the field, so future studies need to be conducted using inherently biocompatible materials to ensure feasibility and safety of these proposed delivery systems. Finally, cost of production and the scalability of the creation of stimuli responsive drug delivery systems remains an enormous barrier between the development and clinical use of these delivery systems. References Drug delivery devices
Stimuli-responsive drug delivery systems
[ "Chemistry" ]
2,190
[ "Pharmacology", "Drug delivery devices" ]
70,660,763
https://en.wikipedia.org/wiki/Patricia%20Ann%20Ferguson
Patricia Ann Ferguson (12 February 1936, Dundee – 24 March 2022, Auchterarder) was a Scottish civil engineer. She was chairwoman of Fife Health Board. She studied at the High School of Dundee and St Margaret's School for Girls, and as a mature student, the University of St Andrews and the Royal Military College of Science, having been approved by Denis Healey. She was a chief civil engineer constructing oil platforms. She was a competitive showjumper. References 1936 births 2022 deaths Scottish civil engineers Engineers from Dundee People educated at the High School of Dundee Alumni of the University of St Andrews 20th-century Scottish women engineers 20th-century Scottish engineers 21st-century Scottish women engineers 21st-century Scottish engineers
Patricia Ann Ferguson
[ "Engineering" ]
149
[ "Civil engineering", "Civil engineering stubs" ]
70,661,484
https://en.wikipedia.org/wiki/Action%20bias
Action bias is the psychological phenomenon where people tend to favor action over inaction, even when there is no indication that doing so would point towards a better result. It is an automatic response, similar to a reflex or an impulse and is not based on rational thinking. One of the first appearances of the term "action bias" in scientific journals was in a 2000 paper by Patt and Zechenhauser titled "Action Bias and Environmental Decisions", where its relevance in politics was expounded. Overview People tend to have a preference for well-justified actions. The term “action bias” refers to the subset of such voluntary actions that one takes even when there is no explicitly good reason to do so. In the case of a decision with both positive and negative outcomes, action will be taken in favor of achieving an apparent advantageous final result, which is preferred over inactivity. If besides gains, losses occur or resources are redistributed adversely, this will be neglected in the decision-making process. Its opposite effect is the omission bias. Theories Multiple different theories as to why people prefer action over inaction have been suggested. Humans might naturally aspire to act since it is perceived as being most beneficial, even though it can occasionally worsen the outcome of the action. Inaction may be perceived as an inferior alternative to action. This view can be explained from an evolutionary perspective since early action proved to be adaptive in terms of survival, becoming a reinforced behavioral pattern. Even though living circumstances for people have changed beyond the need to prefer action over inaction to ensure survival, this bias persists in modern society as actions cause visible positive outcomes more so than omissions of actions, a link which becomes reinforced. There is a general tendency to reward action and punish inaction. As shown by operant conditioning, rewards are more efficient in increasing the display of a behavior than punishments are in decreasing the likelihood of the display of a behavior. This results in humans choosing action rather than inaction. Engaging in action can also serve as means of signalling and emphasizing one's productivity to others which is rewarded by societal praise more than positive results originating from inactivity. Action also provides the doer with the impression of having control over a situation, which creates a feeling of personal security. This is in contrast to inaction, which is more readily linked with feelings of regret in face of the lack of praise and even possible punishments for it. The outcome associated with each action or inaction also affects future decisions, since the link is inevitably and immediately reinforced or punished each time a behavior is carried out; only a neutral outcome does not contribute to learning. Another reason for the existence of the bias might be that people develop the decision heuristic of taking action but then transfer it to an inappropriate context, resulting in action bias. Real-world effects In politics In politics, action bias is manifested by politicians not taking action on issues such as global warming, but wishing to appear to do so, so they make statements, which are not actually actions, and offer relatively ineffective proposals and implementations. Actions and promises of future actions are not taken primarily to bring about an impactful change, but rather to showcase that one is working on it and progressing. 
The symbolic power and external image of the action is far more powerful than its true benefit for change. In medicine In the field of medicine, action bias can occur in diagnosis and subsequent treatment, which is, among other things, a problem caused by specific diagnostic criteria. If a patient does not meet enough criteria or happens to meet exactly enough criteria, a premature diagnosis or misdiagnosis may be the result. This leads to the patient not receiving satisfactory or needed treatment. One way to counteract the action bias is to use a broader range of tests or to get a second opinion from colleagues and technical experts from relevant fields before making a final diagnosis. In medical decision-making there is the predisposition of professionals to interfere, even if not interfering would be a better option. Here, the action bias takes the name of intervention bias and its existence has been proven by many studies in the medical community. Action bias occurs among patients as well. When equally presented by a physician with the options of either taking medicine or just resting, most patients greatly prefer taking the medicine. This preference prevails even when patients are warned that the medicine could cause certain side effects or when they are explicitly told that there would be no effect in taking the medicine. Causes The causes of intervention bias in medicine are most likely an interplay of two other biases researched in humans: self-interest bias and confirmation bias. Another reason for intervention bias can be found in the fear of malpractice cases, as possible charges can be pressed. The self-interest bias occurs if a person shows self-serving behaviors and justifies those in favor of their own interests. Medical intervention is partly guided by the financial self-interest of practitioners and the health-care industry. Industry-sponsored studies and analyses can lead to conflicts of interest and biased interpretations of the results. The specialists then make questionable decisions and defend already biased information. Doctors seem to be more satisfied when they have a greater involvement in their patients' treatment, which means that the amount of intervention is closely linked to career happiness and personal gratification. Confirmation bias influences human decision making as sources that confirm one's pre-existing hypotheses are incorporated more readily and preferably than any challenging ideas. Those studies and assessments that justify and promote medical intervention are given more emphasis. Data that contradicts the reviewer's assumptions are either ignored or their own experience and evaluation are viewed as more reliable for the practitioner. Impacts Due to the action bias, medical intervention becomes less objective, the physician's primary focus can no longer be the best possible therapy for the patient, possible therapies may be implemented without proper, tailored testing. Other consequences include incorrect and biased medical advice, and additionally physical harm to the patient and collapse of health care systems. Although physicians also have the choice to wait and see if the symptoms subside or intensify and then perform a follow-up check, which would be temporary inaction, instead it is common to perform direct testing and prescription of medication. In sports According to some psychologists, the goalkeeper shows an action bias in over 90 percent of the penalty kicks in soccer by diving to either the left or the right. 
These theorists assert that it is more effective to stand still, or to wait and see which direction the ball is kicked before moving, because guessing wrong will almost guarantee giving up a goal. Researchers surmise that goalkeepers take the risk of guessing because "action" is preferred by their teammates, and success will bring social recognition and other rewards. However, this analysis ignores game theory and the dynamics of the sport. Because the penalty spot is only 12 yards away, the goalposts are 24 feet apart, the crossbar is 8 feet high, and the ball will be struck with great force, the goalkeeper cannot stand still and wait for the ball to be struck, because they will not have time to reach it. The ball could go to any of the four corners. To have a chance to make a save, the optimum strategy is to guess the target location and begin moving before the opponent's foot touches the ball. This context negates the claim of bias, because the goalkeeper is expected to put in the visible effort to make a save and actively prevent a goal, rather than arrive too late by waiting for directional certainty. Some penalty takers counter this strategy by rolling or chipping the ball down the middle, which is called a Panenka penalty, after the Czech player who made it famous in the UEFA Euro 1976 final. Action bias is also influenced by previous outcomes. If a team loses a match, the coach is more likely to choose action, by changing some of the players, than inaction, even though this might not necessarily lead to a better performance. As expressed by one coach, "Just because I can do something doesn't mean that I should, or that that activity is relevant." In economics and management Action bias also influences decision-making in the field of economics and management. In situations of economic downturn, central banks and governments come under pressure to take action, as they face increased scrutiny from the public. Because they are expected to fix the situation, action is seen as more appropriate than inaction. Even if the outcome is not successful, public figures can avoid criticism more easily by taking action. In cases of good economic performance, the authorities are more inclined towards an omission bias, as they do not wish to be accused of making wrong choices that might destroy the current equilibrium. The action/omission bias can be seen in other similar scenarios, such as investors changing their portfolio, switching a company's strategy, applying for a different job, or moving to a different city. At the macroeconomic level, the action/omission bias comes into play when discussing changes in policy-related variables, such as interest rates, tax rates, and various types of expenditure. In environmental decision-making The effect of action bias in environmental policy decisions has been investigated by Anthony Patt and Richard Zeckhauser. They argued that action bias is more likely to lead to nonrational decision-making in this domain because of uncertainty, the delayed effects of actions, contributions coming from many parties, a lack of effective markets, unclear objectives, and few strong incentives. The study concluded that the value of a decision is influenced by one's perceived involvement, individual susceptibility to action bias, as well as framing and context, leading to the occurrence of action bias in environmental policies. Other types Utility-based The utility-based action bias is a type of action bias that underlies purposive behavior.
It works by comparing the advantages of possible effects of different actions and, as a result, it selects the action that will lead to the outcome with the highest utility value. The values of different options are then predicted and compared, and the action with the most chance of reward will be chosen. Advantages of this bias include finding the most beneficial option available in the environment. The main disadvantage is that the subject needs to test the environment through trial and error in order to identify the utility value of each action. When unfamiliar with a new environment, the person will often choose an action that was proven advantageous in a previous situation. If this is not suitable in the current scenario, the utility value of the action decreases, and the person will opt for a different action, even though changing strategy might not be entirely beneficial. The utility-based action bias is the opposite of the goal-based action selection, which aims at completing a goal without taking into consideration the utility value of the actions performed. Unlike the utility-based action bias, not all possible actions are compared. Once an action that leads to the goal is found, the other options are disregarded, making it a less time-consuming strategy. Single The term single-action bias was coined by Elke U. Weber when she took notice of farmers’ reactions to climate change. Decision-makers tend to take one action to lower a risk that they are concerned about, but they are much less likely to take additional steps that would provide risk reduction. The single action taken is not always the most effective one. Although the reason for this phenomenon is not yet fully confirmed, presumably the first action suffices in reducing the feeling of worry, which is why further action is often not taken. For example, Weber found that farmers in the early 1990s who started to worry about the consequences of global warming either changed something in their production practice, their pricing, or lobbied for government interventions. What they generally did not do is engage in more than one of those actions. This again shows that undertaking a single action possibly fulfills one's need to do something; this could prevent further action. In the end, the single-action bias improves a person's self-image and eliminates cognitive dissonance by giving the false impression that they have been contributing to the greater good. Another example of single-action bias is house owners that live in coastal regions that are likely to be flooded due to sea level rise (SLR). They can take small actions by piling resources or making sandbags in case of flooding or bigger actions by taking out flood insurance, elevating their homes or moving into a region that is less at risk for flooding. The first smaller action they take (making sandbags) takes away their anxiety about possible flooding and thereby makes it less likely to take actions that might have a better outcome in the long run, such as moving into another region. An option to eliminate the single-action bias is to have group discussions, in which people suggest different ideas to find a solution. This would give the individual more alternatives to solve the problem. Elimination Awareness of the action bias can help to carefully think about the consequences of inaction versus action in a certain situation. This leads to the process not being as impulsive as before and includes logical thinking which facilitates choosing the most efficient outcome. 
Inaction, in some situations, can enhance patience and self-control. New contexts that encourage making thoughtful decisions or checking for an overview of possibilities can also be beneficial. In medical contexts, full disclosure about the effects of action, especially negative side effects of medication, and inaction during treatment can lead to a lower effect of the action bias. The percentage of people choosing medication goes even lower (10%) when a doctor actively discourages the use of medication. See also Is–ought problem References Behavioral concepts Rationalism
Action bias
[ "Biology" ]
2,772
[ "Behavior", "Behavioral concepts", "Behaviorism" ]
64,775,179
https://en.wikipedia.org/wiki/Equalism%20%28socio-economic%20theory%29
Equalism is a socioeconomic theory based upon the idea that emerging technologies will put an end to social stratification through the even distribution of resources in the technological singularity era. It originates from the work of Inessa Lee, a futurist writer and ideologist of the California Transhumanist Party, published in "The Transhumanism Handbook" by Newton Lee. Theory According to equalism theory, social stratification is a result of the uneven distribution of resources on the planet. It is the major reason for human suffering, riots and wars. Therefore, in order to end poverty and achieve world peace, wealth and resources should be distributed evenly. Technology and science are the tools with which equalism pursues this purpose. AI governance will put an end to socioeconomic inequality and diseases. Equalism is the highest stage of capitalist development. It is part of the transhumanist revolution, bloodless and inevitable, like technological progress itself. References Transhumanism
Equalism (socio-economic theory)
[ "Technology", "Engineering", "Biology" ]
200
[ "Genetic engineering", "Transhumanism", "Ethics of science and technology" ]
64,776,609
https://en.wikipedia.org/wiki/Construction%20of%20Gothic%20cathedrals
The construction of Gothic cathedrals was an ambitious, expensive, and technically demanding aspect of life in the Late Middle Ages. From the late 11th century until the Renaissance, largely in Western Europe, Gothic cathedral construction required substantial funding, highly skilled workers, and engineering solutions for complex technical problems. Completion of a new cathedral often took at least half a century, yet many took longer or were rebuilt after fires or other damage. Because construction could take so long, many cathedrals were built in stages and reflect different aspects of the Gothic style. Motivation The 11th to 13th century brought unprecedented population growth and prosperity to northern Europe, particularly to the large cities, and particularly to those cities on trading routes. The old Romanesque cathedrals were too small for the population, and city leaders wanted visible symbols of their new wealth and prestige. The frequent fires in old cathedrals were also a reason for constructing a new building, as with Chartres Cathedral, Rouen Cathedral, Bourges Cathedral, and numerous others. Finance Bishops, like Maurice de Sully of Notre-Dame de Paris, usually contributed a substantial sum. Wealthy parishioners were invited to give a percentage of their income or estate in exchange for the right to be buried under the floor of the cathedral. In 1263 Pope Urban IV offered Papal indulgences, or the remission of the temporal effects of sin for one year, to wealthy donors who made large contributions. For less wealthy church members, contributions in kind, such as a few days' labour, the use of their oxen for transportation, or donations of materials were welcomed. The sacred relics of saints held by the cathedrals were displayed to attract pilgrims, who were also invited to make donations. Sometimes relics were taken in a procession to other towns to raise money. The guilds of the various professions in the town, such as the bakers, fur merchants and drapers, frequently made donations, and in exchange small panels of the stained glass windows in the new cathedral windows illustrated their activities. Master builders and masons The key figure in the construction of a cathedral was the master builder or master mason, who was the architect in charge of all aspects of the construction. One example was Gautier de Varinfroy, master builder of Évreux Cathedral. His contract, signed in 1253 with the master of the cathedral and Chapter of Évreux, paid him fifty pounds a year. He was required to live in Évreux, and to never be absent from the construction site for more than two months. Master masons were members of a particularly influential guild, the Corporation of Masons, the best-organized and most secretive of the medieval guilds. The names of the master masons of Early Gothic architecture are sometimes unknown, but later master masons, such as Godwin Gretysd, builder of Westminster Abbey for King Edward the Confessor, and Pierre de Montreuil, who worked on Notre-Dame de Paris and the Abbey of Saint-Denis, became very prominent. Eudes de Montreuil, the master mason for Louis IX of France, advised him on all architectural matters, and accompanied the king on his disastrous Seventh Crusade. The skill of masonry was frequently passed from father to son. A famous family of cathedral-builders was that of Peter Parler, born in 1325, who worked on Prague Cathedral, and was followed in his position by his son and grandson. 
The Parlers' work influenced European cathedrals as far away as Spain. Master masons frequently travelled to see other projects, and consulted with one another on technical issues. They also became wealthy. While the salary of an average mason or carpenter was the equivalent of twelve pounds a year, the master mason William Wynford received the equivalent of three thousand pounds a year. The master mason was responsible for all aspects of the building site, including preparing the plans, selecting the materials, coordinating the work of the craftsmen, and paying the labourers. He also needed a substantial knowledge of Christian theology, as he had to consult regularly with the bishop and canons about the religious functions of the building. The epitaph of the master mason Pierre de Montreuil of Notre-Dame de Paris described him appropriately as a "doctor of stones". The tomb of Hugues Libergier, master mason of Reims Cathedral, also depicts him in the robes of a doctor of theology. Plans Master masons usually first made a model of the building, either of papier-mâché, wood, plaster or stone, to present to the bishop and canons. The plans for certain parts were sometimes drawn or inscribed in full scale on the floor in the crypt or other portion of the worksite, where they could be easily consulted. The original plans of Prague Cathedral were rediscovered in the 19th century and were used to complete the building. Materials Building a cathedral required enormous quantities of stone, as well as timber for the scaffolding and iron for reinforcement. Stone Sometimes stone from earlier buildings was recycled, as at Beauvais Cathedral, but usually new stone had to be quarried, and in most cases the quarries were a considerable distance from the cathedral site. In a few cases, such as Lyon and Chartres cathedrals, the quarries were owned by the cathedral. In other cases, such as Tours Cathedral and Amiens Cathedral, the builders purchased the rights to extract all the stone needed from a quarry for a certain period of time. The stones were usually extracted and roughly trimmed at the quarry, and then taken by road, or preferably shipped, to the building site. For some early English cathedrals, some stone was shipped from Normandy, whose quarries produced an exceptionally fine pale-coloured stone – Caen stone. The preferred building stone in the Île-de-France was limestone. As soon as they were cut, the stones gradually developed a coating of calcination, which protected them. The stone was carved at the quarry so the calcination could develop before being shipped to the building site. When the facade of Notre-Dame de Paris was cleaned of soot and grime in the 1960s, the original white colour was exposed. On land, stones were often moved by oxen; some shipments required as many as twenty teams of two oxen each. The oxen were particularly important in the construction of Laon Cathedral, moving all the stones to the top of a steep hill. They were honoured for their work by statues of sixteen oxen placed on the towers of the cathedral. Each stone at the construction site carried three mason's marks, placed on the side which would not be visible in the finished cathedral. The first indicated its quarry of origin; the second indicated the position of the stone and the direction it should face; and the third was the signature mark of the stone carver, so the master mason could evaluate the quality. 
These marks allowed modern historians to trace the work of individual stone carvers from cathedral to cathedral. Wood An enormous amount of wood was used in Gothic construction for scaffolding, platforms, hoists, and beams. Durable hard woods such as oak and walnut were often used, which led to a shortage of these trees and eventually to the practice of using softer pine for scaffolds and reusing old scaffolding from worksite to worksite. Iron Iron was also used in early Gothic construction for the reinforcement of walls, windows and vaults. Because the iron rusted and deteriorated, causing walls to fail, it was gradually replaced by other more durable forms of support, such as flying buttresses. The most visible use of iron was to reinforce the glass of rose windows and other large stained glass windows, making possible their enormous size (fourteen metres in diameter at Strasbourg Cathedral) and their intricate designs. Standardization As the period advanced, the materials gradually became more standardized, and certain parts, such as columns and cornices and building blocks, could be produced in series for use at multiple sites. Full-size templates were used to make complex structures such as vaults or the pieces of ribbed columns, which could be reused at different sites. The result of these efforts was a remarkable degree of precision. The stone columns of the triforium of the apse of Chartres Cathedral have a maximum variation of plus or minus . Excess materials and stone chips were not wasted. Instead of building walls of solid stone, walls were often built with two smooth stone faces and an interior filled with stone rubble. Construction site Cathedrals were traditionally built from east to west. If a new building was replacing an older cathedral, the choir at the east end of the old cathedral was demolished first, to begin construction of the new building, while the nave to the west was left standing for religious services. Once the new choir was completed and sanctified, the rest of the old cathedral was gradually torn down. The walls and pillars, timber scaffolding and roof were built first. Once the roof was in place, and the walls were reinforced with buttresses, the construction of the vaults could begin. One of the most complex steps was the construction of the rib vaults, which covered the nave and choir. Their slender ribs directed the weight of the vaults to thin columns leading down to the large support pillars below. The first step in the construction of a vault was the construction of a wooden scaffold up to the level of the top of the supporting columns. Next, a precise wooden frame was constructed on top of the scaffold in the exact shape of the ribs. The stone segments of the ribs were then carefully laid into the frame and cemented. When the ribs were all in place, the keystone was placed at the apex where they converged. Once the keystone was in place, the ribs could stand alone. Workers then filled in the compartments between the ribs with a thin layer of small fitted pieces of brick or stone. The framework was removed. Once the compartments were finished, their interior surface, visible from below, was plastered and then painted, and the vault was complete. This process required a team of specialized workers. These included the hewers who cut the stone, the posers who set the stones in place, and the layers who cemented the pieces together. These craftsmen worked alongside the carpenters who built the complex scaffolds and models.
Work continued six days a week, from sunrise until sundown, except Sundays and religious holidays. The stones were finished at the site and put into place, following the drawings by the master mason displayed in his workshop on the site, or sometimes inscribed on the floor of the cathedral itself. Some of these plans can still be seen on the floor of Lyon Cathedral. Since the Gothic cathedrals were the tallest buildings built in Europe since the Roman Empire, new technologies were needed to lift the stones up to the highest levels. A variety of cranes were developed. These included the treadmill crane, a type of hoist powered by one or more men walking inside a large treadmill (). The wheel varied in size from , and allowed a single man to hoist a weight of up to . During the winter, the construction on the site was usually shut down. To prevent the rain or snow from damaging the unfinished masonry, it was usually covered with fertilizer. Only the sculptors and stone-cutters, in their workshops, were able to continue work. Workers The stone-cutters, mortar-makers, carpenters and other workers were highly skilled but usually illiterate. They were managed by foremen who reported to the master mason. The foremen used tools such as the compass to measure and enlarge the plans to full size, and levels using lead in glass tubes to assure that the blocks were level. The stone dressers used similar tools to make sure the surfaces were flat and the edges were at precise right angles. Their tools were frequently shown in the medieval miniature drawings of the period (see gallery). Crypt Crypts, with underground vaults, were usually part of the foundation of the building, and were built first. Many Gothic cathedrals, like Notre-Dame de Paris and Chartres, were built on the sites of Romanesque cathedrals, and often used the same foundations and crypt. In Romanesque times the crypt was used to keep sacred relics, and often had its own chapels and, as in the 11th-century crypt of the first Chartres Cathedral, a deep well. The Romanesque crypt of Chartres Cathedral was greatly enlarged in the 11th century; it is U-shaped and long. It survived the fire in the 12th century which destroyed the Romanesque cathedral, and was used as the foundation for the new Gothic cathedral. The walls of the crypt chapels were painted with Gothic murals. The names of the master masons were frequently inscribed on the walls of cathedral crypts. Windows and stained glass The stained glass windows were an essential element of the cathedral, filling the interior with coloured light. They grew larger and larger over the course of the Gothic period, until they filled the entire walls beneath the vaults. In the early Gothic period the windows were relatively small, and the glass was thick and densely coloured, giving the light a mysterious quality which contrasted strongly with the dark interiors. In the later period, the builders installed much larger windows, and frequently used grey-or white coloured glass, or grisaille, which made the interior much brighter. The windows themselves were made by two different groups of craftsmen, usually at different locations. The colored glass was made at workshops located near forests, because an enormous amount of firewood was needed to melt the glass. The molten glass was colored with metal oxides, and then blown into a bubble, which was cut and flattened into small sheets. The glass sheets were then transferred to the workshop of the window-maker, usually close to the cathedral site. 
There a full-size precise drawing of the window was made on a large table, with the colors indicated. The craftsmen cracked off small pieces of colored glass to fill in the design. When it was complete, the pieces of glass were fitted into slots of thin lead strips, and then the strips were soldered together. The faces and other details were painted onto the glass in vitreous enamel colours, which were fired in a kiln to fuse the paint to the glass. The sections were then fixed into the stone mullions of the window and reinforced with iron bars. In the later Gothic periods, the windows were larger and were painted with more sophisticated techniques. They were sometimes covered with a thin layer of coloured glass, and this layer was carefully scratched to achieve finer shading, giving the images greater realism. This was called flashed glass. Gradually the windows came more and more to resemble paintings, but lost some of the vivid contrast and richness of color of early Gothic glass. Sculpture Sculpture was an essential element of the Gothic cathedral. Its purpose was to illustrate the personages, stories and messages of the Bible to the ordinary church-goers, of whom the great majority were illiterate. Figurative sculpture was common on the tympana of Romanesque churches, but in Gothic architecture it gradually spread across the entire facade and the transepts, and even the interior of the facade. The sculptors did not select the subject matter of their work. Church doctrine specified that the themes would be selected by the church fathers, not the artists. Nonetheless, the Gothic artists over time began to add figures with increasingly realistic features and expressive faces, and gradually the statues became more lifelike and separate from the walls. An early innovative feature used by Gothic sculptors was the statue-column. At Saint-Denis, twenty statues of the apostles supported the central portal, literally portraying them as "pillars of the church." The originals were destroyed during the French Revolution, and only fragments remain. The idea was adapted for the west porch of Chartres Cathedral (about 1145), possibly by the same sculptors. Traces of paint on the sculpture of Gothic cathedral portals indicate that it was likely originally painted in bright colors. This effect has been recreated at Amiens Cathedral using projected light. Towers and bells Towers were an important feature of a Gothic cathedral; they symbolized the aspiration toward heaven. The traditional Gothic arrangement, following Saint-Denis near Paris, was two towers of equal size on the west facade, flanking a porch with three portals. In Normandy and England, a central tower was often added over the meeting point of the transept and the main body of the church, as at Salisbury Cathedral. Since the towers were usually built last, sometimes long after other parts of the building, they were often built entirely or partly in different styles. The south tower of Chartres Cathedral is in large part still the original Romanesque tower, built in the 1140s. The north tower, built at the same time, was struck by lightning and was rebuilt in the 16th century in the Flamboyant style. Towers also had the practical purpose of serving as watch towers and, more importantly, of housing bells which chimed the hour and rang to celebrate important events, when the King was in attendance, or for funerals and periods of mourning.
Notre-Dame de Paris was originally equipped with ten bells, eight in the north tower and two, the largest, in the south tower. The principal bell, or bourdon, called Emmanuel, was installed in the south tower in the 15th century, and is still in place. It required the strength of eleven men, pulling on ropes from a chamber below, to ring that single bell. The ringing from the bells was so loud that the bell-ringers were deafened for several hours afterwards. See also Gothic architecture Gothic cathedrals and churches Notes and citations Bibliography Construction Architectural history Architectural styles European architecture Architecture in England Medieval French architecture
Construction of Gothic cathedrals
[ "Engineering" ]
3,573
[ "Construction", "Architectural history", "Architecture" ]
64,777,637
https://en.wikipedia.org/wiki/Probability%20and%20statistics
Probability and statistics are two closely related fields in mathematics that are sometimes combined for academic purposes. They are covered in multiple articles and lists: Probability Statistics Glossary of probability and statistics Notation in probability and statistics Timeline of probability and statistics Publications named for both fields include the following: Brazilian Journal of Probability and Statistics Counterexamples in Probability and Statistics Probability and Mathematical Statistics Theory of Probability and Mathematical Statistics References Probability and statistics Mathematics-related lists
Probability and statistics
[ "Mathematics" ]
88
[ "Applied mathematics", "Probability and statistics" ]
64,777,898
https://en.wikipedia.org/wiki/Kersti%20Hermansson
Kersti Hermansson (born in 1951) is a Professor of Inorganic Chemistry at Uppsala University. Education and professional career She completed her PhD on "The Electron Distribution in the Bound Water Molecule" in 1984. From 1984 to 1986, she held a postdoctoral fellowship from the Swedish Research Council with Dr. E. Clementi at IBM-Kingston, USA. From 1986 to 1988, she was a Högskolelektor (senior lecturer) in Inorganic Chemistry at Uppsala University. In 1988, she became a docent of Inorganic Chemistry at Uppsala University. In 1996, she became a Biträdande professor (associate professor). Since 2000, she has been a professor of Inorganic Chemistry at Uppsala University. During part of this period (2008–2013), she was also a part-time guest professor at KTH Stockholm. Research Her research focuses on condensed-matter chemistry, including the investigation of chemical bonding and the development of quantum chemical methods. Awards She received several prizes for her research: "Letterstedska priset" from the Swedish Royal Academy of Sciences (KVA) (1987) "Oskarspriset" from Uppsala University (1988) "Norblad-Ekstrand" medal in gold from the Swedish Chemical Society (2003) Member of Kungl. Vetenskapssamhället (Academia regia scientiarum Upsaliensis, KVSU), Uppsala (since 1988) Member of Royal Society of Science (since 2002) Member of Royal Swedish Academy of Sciences (since 2007) Adjunct professor at the Kasetsart University, Bangkok (2005) Honorary guest professor at the Department of Ion Physics and Applied Physics, Innsbruck University (since June 2009) References Academic staff of Uppsala University Quantum chemistry Living people Swedish Royal Academies Kersti Hermansson IBM Fellows 1951 births
Kersti Hermansson
[ "Physics", "Chemistry" ]
350
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", " molecular", "Atomic", " and optical physics" ]
64,778,731
https://en.wikipedia.org/wiki/Propulsion%20Cryogenics%20%26%20Advanced%20Development
Propulsion Cryogenics & Advanced Development (PCAD) was a NASA rocket engine development project from 2005 to 2010 that included methalox engines (liquid methane and liquid oxygen). Ablative engines were developed by KT Engineering and Aerojet. A regeneratively cooled engine was developed by XCOR/ATK. References NASA programs
Propulsion Cryogenics & Advanced Development
[ "Astronomy" ]
72
[ "Outer space stubs", "Outer space", "Astronomy stubs" ]
64,778,751
https://en.wikipedia.org/wiki/Yongsong%20Huang
Yongsong Huang is a Chinese-American organic geochemist, biogeochemist and astrobiologist, and is a professor of Earth, Environmental, and Planetary Sciences at Brown University. He researches the development of lipid biomarkers and their isotopic ratios as quantitative proxies for paleoclimate and paleoenvironmental studies, and the subsequent application of these proxies to study the mechanisms controlling climate change and the environmental response to climate change at a variety of time scales. Education Huang received a B.Sc. in geochemistry from the University of Science and Technology of China in 1984, then received an M.S. in Analytical Chemistry from Sichuan University. He earned his first Ph.D. in petroleum geochemistry from the Chinese Academy of Sciences in 1990, and earned a second Ph.D. in organic geochemistry from the University of Bristol in 1997, as a student of Geoffrey Eglinton. Career and research Huang is an organic geochemist, biogeochemist and astrobiologist. After graduating from the University of Bristol, he joined the lab of Katherine H. Freeman at Pennsylvania State University as a postdoctoral research associate. He periodically worked as a guest investigator with Timothy Eglinton at the Woods Hole Oceanographic Institution during his postdoc. In 2000, Huang joined the faculty of Brown University, where he was awarded tenure in 2012. Huang's primary fields are organic geochemistry, geochemistry, and paleoclimatology. He is particularly well known for his work developing organic geochemical proxies of climate change and reconstructing past climates from sediments. According to Scopus, he has published 187 research articles so far with 9,681 citations and has an h-index of 55. Notable student advisees Juzhi Hou (Ph.D. 2008) William D'Andrea (Ph.D. 2008) Jaime Toney (Ph.D. 2011) Elizabeth Thomas (Ph.D. 2014) Editorial activities 2000 to present: Member of the editorial board for the Journal of Paleolimnology. Academic honors 1991-1992: British Royal Society Queen's Fellowship. 2001: Salamon Award, Brown University 2009: Hans Fellow, Germany 2011-2012: Teagle Fellow, Brown University Selected works Journal articles Northern hemisphere controls on tropical southeast African climate during the past 60,000 years. Climate change as the dominant control on glacial-interglacial variations in C3 and C4 plant abundance. References Living people American geochemists Brown University faculty 21st-century American scientists American astrobiologists Year of birth missing (living people)
Yongsong Huang
[ "Chemistry" ]
528
[ "Geochemists", "American geochemists" ]
64,781,131
https://en.wikipedia.org/wiki/2-Bromodeschloroketamine
2-Bromodeschloroketamine (also known as 2-Br-2'-Oxo-PCM and bromoketamine) is a chemical compound of the arylcyclohexylamine class, which is an analog of the dissociative anesthetic drug ketamine in which the chlorine atom has been replaced with a bromine atom. It is used in scientific research as a comparison or control compound in studies into the metabolism of ketamine and norketamine, and has also been sold online alongside arylcyclohexylamine designer drugs, though it is unclear whether bromoketamine has similar pharmacological activity. See also 2-Fluorodeschloroketamine 3-Fluorodeschloroketamine Deschloroketamine Methoxyketamine Trifluoromethyldeschloroketamine References Arylcyclohexylamines Designer drugs Ketones Dissociative drugs 2-Bromophenyl compounds
2-Bromodeschloroketamine
[ "Chemistry" ]
220
[ "Ketones", "Functional groups", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
64,781,650
https://en.wikipedia.org/wiki/ELMo
ELMo (embeddings from language model) is a word embedding method for representing a sequence of words as a corresponding sequence of vectors. It was created by researchers at the Allen Institute for Artificial Intelligence and the University of Washington, and was first released in February 2018. It is a bidirectional LSTM which takes character-level inputs and produces word-level embeddings, and was trained on a corpus of about 30 million sentences and 1 billion words. The architecture of ELMo accomplishes a contextual understanding of tokens. Deep contextualized word representation is useful for many natural language processing tasks, such as coreference resolution and polysemy resolution. ELMo was historically important as a pioneer of self-supervised generative pretraining followed by fine-tuning, where a large model is trained to reproduce a large corpus, then the large model is augmented with additional task-specific weights and fine-tuned on supervised task data. It was an instrumental step in the evolution towards transformer-based language modelling. Architecture ELMo is a multilayered bidirectional LSTM on top of a token embedding layer. The outputs of the embedding layer and of all the LSTM layers, concatenated together, form the representation of each token. The input text sequence is first mapped by an embedding layer into a sequence of vectors. Then two parts are run in parallel over it. The forward part is a 2-layered LSTM with 4096 units and 512 dimension projections, and a residual connection from the first to the second layer. The backward part has the same architecture, but processes the sequence back-to-front. The outputs from all 5 components (embedding layer, two forward LSTM layers, and two backward LSTM layers) are concatenated and multiplied by a linear matrix ("projection matrix") to produce a 512-dimensional representation per input token. ELMo was pretrained on a text corpus of 1 billion words. The forward part is trained by repeatedly predicting the next token, and the backward part is trained by repeatedly predicting the previous token. After the ELMo model is pretrained, its parameters are frozen, except for the projection matrix, which can be fine-tuned to minimize loss on specific language tasks. This is an early example of the pretraining-fine-tune paradigm. The original paper demonstrated this by improving the state of the art on six benchmark NLP tasks. Contextual word representation The architecture of ELMo accomplishes a contextual understanding of tokens. For example, the first forward LSTM of ELMo would process each input token in the context of all previous tokens, and the first backward LSTM would process each token in the context of all subsequent tokens. The second forward LSTM would then incorporate those to further contextualize each token. Deep contextualized word representation is useful for many natural language processing tasks, such as coreference resolution and polysemy resolution. For example, consider the sentence "She went to the bank to withdraw money." In order to represent the token "bank", the model must resolve its polysemy in context. The first forward LSTM would process "bank" in the context of "She went to the", which would allow it to represent the word as a location that the subject is going towards. The first backward LSTM would process "bank" in the context of "to withdraw money", which would allow it to disambiguate the word as referring to a financial institution.
The second forward LSTM can then process "bank" using the representation vector provided by the first backward LSTM, thus allowing it to represent the word as a financial institution that the subject is going towards. Historical context ELMo is one link in a historical evolution of language modelling. Consider a simple problem of document classification, where we want to assign a label (e.g., "spam", "not spam", "politics", "sports") to a given piece of text. The simplest approach is the "bag of words" approach, where each word in the document is treated independently, and its frequency is used as a feature for classification. This was computationally cheap but ignored the order of words and their context within the sentence. GloVe and Word2Vec built upon this by learning fixed vector representations (embeddings) for words based on their co-occurrence patterns in large text corpora. Like BERT (but unlike static embeddings such as Word2Vec and GloVe), ELMo word embeddings are context-sensitive, producing different representations for words that share the same spelling. It was trained on a corpus of about 30 million sentences and 1 billion words. Bidirectional LSTMs had previously been used for contextualized word representation; ELMo applied the idea at a large scale, achieving state-of-the-art performance. After the 2017 publication of the Transformer architecture, the architecture of ELMo was changed from a multilayered bidirectional LSTM to a Transformer encoder, giving rise to BERT. BERT has the same pretrain-fine-tune workflow, but uses a Transformer for parallelizable training. References Machine learning Natural language processing Natural language processing software Computational linguistics
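The layered bidirectional design described in the Architecture section can be illustrated with a small sketch. The following is a scaled-down, illustrative PyTorch model, not the reference AllenNLP implementation: it substitutes a word-level embedding table for ELMo's character convolutions, omits the residual connection, and uses small, arbitrary dimensions.

```python
# Illustrative sketch of an ELMo-style encoder (simplified, not the original model).
import torch
import torch.nn as nn

class TinyELMo(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hidden_dim=64, out_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Two stacked forward LSTM layers and two stacked backward LSTM layers.
        self.fwd1 = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.fwd2 = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.bwd1 = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.bwd2 = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # Projection applied to the concatenation of all five components.
        self.project = nn.Linear(emb_dim + 4 * hidden_dim, out_dim)

    def forward(self, token_ids):                       # (batch, seq_len)
        e = self.embed(token_ids)                        # context-free embeddings
        f1, _ = self.fwd1(e)                             # left-to-right, layer 1
        f2, _ = self.fwd2(f1)                            # left-to-right, layer 2
        rev = torch.flip(e, dims=[1])                    # right-to-left pass
        b1, _ = self.bwd1(rev)
        b2, _ = self.bwd2(b1)
        b1, b2 = torch.flip(b1, dims=[1]), torch.flip(b2, dims=[1])
        # Each token's vector mixes the embedding with all forward and backward
        # LSTM layers, so it depends on both left and right context.
        return self.project(torch.cat([e, f1, f2, b1, b2], dim=-1))

tokens = torch.randint(0, 1000, (1, 7))                  # a toy 7-token sentence
print(TinyELMo()(tokens).shape)                           # torch.Size([1, 7, 128])
```

The essential point the sketch demonstrates is that the final vector for each token combines the context-free embedding with forward and backward LSTM states, so the same word receives different vectors in different sentences.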
ELMo
[ "Technology", "Engineering" ]
1,087
[ "Machine learning", "Computational linguistics", "Natural language processing", "Artificial intelligence engineering", "Natural language and computing" ]
64,783,246
https://en.wikipedia.org/wiki/Fixed%20prayer%20times
Fixed prayer times, praying at dedicated times during the day, are common practice in major world religions such as Islam, Judaism, and Christianity. Islam Muslims pray five times a day, with their prayers being known as Fajr (dawn), Dhuhr (after midday), Asr (afternoon), Maghrib (sunset), Isha (nighttime), facing towards Mecca. The direction of prayer is called the qibla; the early Muslims initially prayed in the direction of Jerusalem before this was changed to Mecca in 624 CE, about a year after Muhammad's migration to Medina. The times of the five prayers are fixed intervals defined by daily astronomical phenomena. For example, the Maghrib prayer can be performed at any time after sunset and before the disappearance of the red twilight from the west. In a mosque, the muezzin broadcasts the call to prayer at the beginning of each interval. Because the start and end times for prayers are related to the solar diurnal motion, they vary throughout the year and depend on the local latitude and longitude when expressed in local time. In modern times, various religious or scientific agencies in Muslim countries produce annual prayer timetables for each locality, and electronic clocks capable of calculating local prayer times have been created. In the past, some mosques employed astronomers called muwaqqits who were responsible for regulating the prayer times using mathematical astronomy. Judaism Jewish law requires Jews to pray thrice a day; the morning prayer is known as Shacharit, the afternoon prayer is known as Mincha, and the evening prayer is known as Maariv. According to Jewish tradition, the prophet Abraham introduced Shacharit, the prophet Isaac introduced Mincha, and the prophet Jacob introduced Maariv. Jews historically prayed in the direction of the Temple in Jerusalem, where the "presence of the transcendent God (shekhinah) [resided] in the Holy of Holies of the Temple". In the Hebrew Bible, it is written that when the prophet Daniel was in Babylon, he "went to his house where he had windows in his upper chamber open to Jerusalem; and he got down upon his knees three times a day and prayed and gave thanks before his God, as he had done previously" (cf. ). After the Temple's destruction, Jews have continued to pray facing Jerusalem in hope of the coming of the Messiah whom they await. Christianity From the time of the early Church, the practice of seven fixed prayer times has been taught, which traces itself to the Prophet David in . In the Apostolic Tradition, Hippolytus instructed Christians to pray seven times a day, "on rising, at the lighting of the evening lamp, at bedtime, at midnight" and "the third, sixth and ninth hours of the day, being hours associated with Christ's Passion (i.e. 9 a.m., 12 p.m., 3 p.m.)." Christians attended two liturgies on the Lord's Day, worshipping communally in both a morning service and an evening service, with the purpose of reading the Scriptures and celebrating the Eucharist. Throughout the rest of the week, Christians assembled at the church every day for "the main hours of prayer": morning prayer (which became known as Lauds) and evening prayer (which became known as Vespers), while praying at the other fixed prayer times privately (which included praying the Lord's Prayer at 9 a.m., 12 p.m. and 3 p.m.); monastics came to gather together to corporately pray all of the canonical hours communally.
This practice of seven fixed prayer times was done in the bodily positions of prostration and standing, which continues today in some denominations, especially those of Oriental Christianity. Oriental Orthodox Christians (such as Copts, Armenians and Syriacs), as well as certain Oriental Protestant denominations (such as the Mar Thoma Syrian Church), use a breviary such as the Agpeya and Shehimo to pray the canonical hours seven times a day while facing ad orientem, in anticipation of the Second Coming of Jesus; this Christian practice has its roots in , in which King David prays to God seven times a day. In the Indian Christian and Syriac Christian tradition, these canonical hours are known as Vespers (Ramsho [6 pm]), Compline (Soutoro [9 pm]), Nocturns (Lilio [12 am]), Matins (Sapro [6 am]), third hour prayer (Tloth sho`in [9 am]), sixth hour prayer (Sheth sho`in [12 pm]), and ninth hour prayer (Tsha' sho`in [3 pm]). In the Coptic Christian and Ethiopian Christian tradition, these seven canonical hours are known as the First Hour (Prime [6 am]), the Third Hour (Terce [9 am]), the Sixth Hour (Sext [12 pm]), the Ninth Hour (None [3 pm]), the Eleventh Hour (Vespers [6 pm]), the Twelfth Hour (Compline [9 pm]), and the Midnight office [12 am]; monastics pray an additional hour known as the Vigil. Church bells are tolled at these hours to enjoin the faithful to prayer. At the very minimum, Orthodox Christians are to pray before meals and thrice daily: in the morning, at noon, and in the evening (cf. ). In the Indian Orthodox tradition, those who are unable to pray the canonical hour of a certain fixed prayer time may recite the Qauma. In Western Christianity and Eastern Orthodox Christianity, the practice of praying the canonical hours at fixed prayer times came to be observed mainly by monastics and clergy, though today the Catholic Church encourages the laity to pray the Liturgy of the Hours, and in the Lutheran Churches and the Anglican Communion, breviaries such as The Brotherhood Prayer Book and the Anglican Breviary, respectively, are used to pray the Daily Office; the Methodist tradition has emphasized the praying of the canonical hours as an "essential practice" in being a disciple of Jesus, with the Order of Saint Luke, a Methodist religious order, printing The Book of Offices and Services to serve this end. In Anabaptist Christianity, Mennonites (especially Old Order Mennonites and Conservative Mennonites) and Amish have family prayer every morning and evening, which is done kneeling; the Christenpflicht prayer book is used for this purpose. Bible readings may follow, often after the evening prayer; to this end, the Tägliches Manna devotional is used by many Anabaptists. Some traditions have historically placed a cross on the eastern wall of their houses, which they face during these seven fixed prayer times. Before praying, Oriental Orthodox Christians and Oriental Protestant Christians wash their hands, face and feet in order to be clean before God and to present their best to him; shoes are removed in order to acknowledge that one is offering prayer before a holy God. In these Christian denominations, and in many others as well, it is customary for women to wear a headcovering when praying. There exist watches that indicate the seven fixed prayer times. Mandaeism In Mandaeism, the daily prayer or brakha consists of a set of prayers recited three times per day. Mandaeans stand facing north while reciting daily prayers.
Unlike in Islam and Coptic Orthodox Christianity, prostration is not practiced. Mandaean priests recite rahma prayers three times every day, while laypeople also recite the Rushuma (signing prayer) and Asut Malkia ("Healing of Kings") daily. The three prayer times in Mandaeism are: dawn (sunrise) noontime (the "seventh hour") evening (sunset) Baháʼí Faith Followers of the Baháʼí Faith must choose either a short, medium, or long prayer each day to fulfill the requirement of the daily obligatory prayer. Reciting these prayers is considered one of the Baháʼí's most important obligations. The short prayer can only be said between noon and sunset, while the medium prayer must be said three times during the day: once between sunrise and noon, once between noon and sunset, and once in the two hours following sunset. The long prayer is not bound by a fixed prayer time. The text of these prayers is taken from the writings of the religion's founder, Baháʼu'lláh. Sikhism Initiated Sikhs are obligated to perform five daily prayers at varying times during the day, from the collection of Nitnem prayers. In the morning, typically right after waking and bathing, the Japji Sahib, Jaap Sahib, and Tav Prasad Savaiye prayers are recited. In the evening, the Sodar Rahras Sahib is recited, and before bed the Kirtan Sohila is recited. See also Buddhist devotion Cetiya Daily devotional Mealtime Prayer Notes References Further reading External links Shehimo - Indian Orthodox Christian prayer times Agpeya - Coptic Christian prayer times Salah - Islamic prayer times Islamic prayer times calculator worldwide Muslim five time prayers Jewish practices Christian prayer Salah Time in religion Muslim Prayer Times
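As a rough illustration of the astronomical dependence noted in the Islam section, the sketch below estimates local sunset (the start of the Maghrib interval) from latitude and day of year using a standard solar-declination approximation. It is only an approximation and is not how official timetables are produced; real calculations also account for longitude, time zone, the equation of time, atmospheric refraction, and the conventions of the issuing authority.

```python
# Rough, illustrative sunset estimate in local solar time (hours after solar midnight).
import math

def approx_sunset_solar_time(latitude_deg, day_of_year):
    # Approximate solar declination for the given day of the year (degrees).
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Sunset hour angle: cos(H) = -tan(latitude) * tan(declination).
    cos_h = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(decl))
    cos_h = max(-1.0, min(1.0, cos_h))          # clamp for polar day or polar night
    hour_angle = math.degrees(math.acos(cos_h))
    return 12.0 + hour_angle / 15.0             # 15 degrees of hour angle per hour

# Example: mid-northern latitude (45 N) near the June solstice (about day 172).
print(round(approx_sunset_solar_time(45.0, 172), 2))  # roughly 19.7 in solar time
```

The same hour-angle relation, run for every day of the year, is what makes the prayer intervals shift with the seasons and with latitude, as the article notes.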
Fixed prayer times
[ "Physics" ]
1,898
[ "Spacetime", "Time in religion", "Physical quantities", "Time" ]
64,783,551
https://en.wikipedia.org/wiki/Vadym%20Slyusar
Vadym Slyusar (born 15 October 1964, in the village of Kolotii, Reshetylivka Raion, Poltava region, Ukraine) is a Soviet and Ukrainian scientist, Professor, Doctor of Technical Sciences, Honored Scientist and Technician of Ukraine, and founder of the tensor-matrix theory of digital antenna arrays (DAAs), N-OFDM and other theories in the fields of radar systems, smart antennas for wireless communications and digital beamforming. Scientific results N-OFDM theory In 1992 Vadym Slyusar patented the first optimal demodulation method for N-OFDM signals after the Fast Fourier transform (FFT). The history of N-OFDM signal theory started from this patent. In this regard, W. Kozek and A. F. Molisch wrote in 1998 about N-OFDM signals with the sub-carrier spacing , that "it is not possible to recover the information from the received signal, even in the case of an ideal channel." But in 2001 Vadym Slyusar proposed non-orthogonal frequency digital modulation (N-OFDM) as an alternative to OFDM for communications systems. The next publication of V. Slyusar about this method, in July 2002, has priority over the conference paper of I. Darwazeh and M.R.D. Rodrigues (September 2003) regarding SEFDM. The description of the method of optimal processing for N-OFDM signals without an FFT of the ADC samples was submitted for publication by V. Slyusar in October 2003. Slyusar's N-OFDM theory inspired numerous investigations by other scientists in this area. Tensor-matrix theory of digital antenna array In 1996 V. Slyusar proposed the column-wise Khatri–Rao product to estimate four coordinates of signal sources at a digital antenna array. The alternative concept of the matrix product, which uses row-wise splitting of matrices with a given quantity of rows (the face-splitting product), was proposed by V. Slyusar in 1996 as well. From these results the tensor-matrix theory of digital antenna arrays and new matrix operations evolved (such as the block face-splitting product, the generalized face-splitting product, the matrix derivative of the face-splitting product, etc.), which are also used in artificial intelligence and machine learning systems to minimize convolution and tensor sketch operations, in popular natural language processing models, and in hypergraph models of similarity. The face-splitting product and its properties are also used for multidimensional smoothing with P-splines and for the generalized linear array model in statistics, in two- and multidimensional approximations of data. Theory of odd-order I/Q demodulators The theory of odd-order I/Q demodulators, which was proposed by V. Slyusar in 2014, started from his investigations of the tandem scheme of two-stage signal processing for the design of an I/Q demodulator and the concept of multistage I/Q demodulators in 2012. As a result, Slyusar "presents a new class of I/Q demodulators with odd order derived from the even order I/Q demodulator which is characterized by linear phase-frequency relation for wideband signals". Results in other fields of research V. Slyusar produced numerous theoretical works realized in several experimental radar stations with DAAs, which were successfully tested. He investigated electrically small antennas and new constructions of such antennas, developed the theory of metamaterials, and proposed new ideas for the application of augmented reality and artificial intelligence to combat vehicles. V. Slyusar has 68 patents and 850 publications in the areas of digital antenna arrays for radars and wireless communications.
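As an informal illustration (not drawn from Slyusar's publications), the sketch below shows the row-wise face-splitting product next to the column-wise Khatri–Rao product mentioned above; the matrices and sizes are arbitrary examples chosen for demonstration.

```python
# Row-wise (face-splitting) and column-wise (Khatri-Rao) Kronecker-type products.
import numpy as np

def face_splitting(A, B):
    """Row-wise Kronecker product: row i of the result is kron(A[i], B[i])."""
    assert A.shape[0] == B.shape[0]
    return np.vstack([np.kron(A[i], B[i]) for i in range(A.shape[0])])

def khatri_rao(A, B):
    """Column-wise Kronecker product: column j of the result is kron(A[:, j], B[:, j])."""
    assert A.shape[1] == B.shape[1]
    return np.column_stack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])])

A = np.arange(6).reshape(2, 3)        # 2 x 3
B = np.arange(8).reshape(2, 4)        # 2 x 4
print(face_splitting(A, B).shape)     # (2, 12): the row count is preserved
print(khatri_rao(A.T, B.T).shape)     # (12, 2): the transposed, column-wise counterpart
```

The two operations are transposes of one another, which is why the row-wise product is sometimes described as the transposed Khatri–Rao product.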
Life data 1981–1985 – student of the Orenburg Air Defense Higher Military School. His scientific career started at this time; he published his first scientific report in 1985. June 1992 – defended his dissertation for a candidate degree (Techn. Sci.) at the Council of the Military Academy of Air Defense of the Land Forces (Kyiv). A significant stage in the recognition of Vadym Slyusar's scientific results was the defense of his dissertation for a doctoral degree (Techn. Sci.) in 2000. Professor – since 2005; Honored Scientist and Technician of Ukraine – 2008. Since 1996 – works at the Central Scientific Research Institute of Armament and Military Equipment of the Armed Forces of Ukraine (Kyiv). Military rank – Colonel. Since 2003 – participates in Ukraine–NATO cooperation as head of national delegations, a point of contact, and a national representative within expert groups of the NATO Conference of National Armaments Directors, and as a technical member of Research Task Groups (RTG) of the NATO Science and Technology Organisation (STO). Since 2009 – member of the editorial board of Izvestiya Vysshikh Uchebnykh Zavedenii. Radioelektronika. Selected awards Honored Scientist and Technician of Ukraine (2008) Soviet and Ukrainian military medals Gallery See also Digital antenna array N-OFDM Face-splitting product of matrix Tensor random projections References External links Personal Website Few Inventors of Vadym Slyusar – Ukrainian Patents Data Base. «Науковці України – еліта держави». Том VI, 2020. – С. 216 Who's Who in the World 2013. - P. 2233 Who's Who in the World 2014 People from Poltava Oblast Soviet systems scientists Soviet computer scientists Soviet Army officers Soviet Air Defence Force officers Ukrainian colonels Ukrainian electronics engineers Ukrainian mathematicians 20th-century Ukrainian scientists 1964 births Systems engineers Soviet inventors Soviet military engineers Ukrainian military officers Ukrainian inventors Ukrainian computer scientists 21st-century Ukrainian scientists Radar signal processing Living people
Vadym Slyusar
[ "Engineering" ]
1,201
[ "Systems engineers", "Systems engineering" ]
64,783,876
https://en.wikipedia.org/wiki/ST%20Camelopardalis
ST Camelopardalis, abbreviated ST Cam, is a carbon star in the constellation of Camelopardalis. It has a radius of . Its apparent magnitude ranges from 6.3 to 8.5, so under excellent observing conditions, it could be very faintly visible to the naked eye when it is near maximum brightness. In 1902, Thomas William Backhouse announced that ST Cam was a low amplitude variable star with a long or irregular period. It was given its variable star designation in 1912. ST Cam is a semiregular variable star. It is doubly periodic, with the two pulsation periods P0 and P1 being equal to 368.6 and 201 days respectively. ST Cam is an AGB star, in the process of expelling its red giant envelope into space. Line emission from the 115 GHz rotational transition of carbon monoxide was detected in 1987 by Olofson et al. The width of the emission line indicated that ST Cam is surrounded by a circumstellar envelope expanding at 10 km/sec. Bergeat and Chevallier (2005) analyzed later molecular spectroscopy results, and derived an envelope expansion velocity of 9 km/sec, and a mass loss rate of per year. Broadband emission from dust in the envelope was spatially resolved by the IRAS satellite in its 60 micron data. Dust was detected out to a distance of 3.1 arc minutes from the star, or about 1.8 light years assuming a distance to ST Cam of 600 pc. References Carbon stars Camelopardalis Camelopardalis, ST Semiregular variable stars 022552 030243 Durchmusterung objects
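As a back-of-the-envelope check of the quoted figure (an illustrative calculation, not taken from the cited study), the small-angle approximation converts the 3.1 arc-minute dust extent into a physical size, assuming the stated distance of 600 pc.

```python
# Small-angle conversion of the observed dust extent into a physical distance.
import math

distance_pc = 600.0
angle_rad = (3.1 / 60.0) * math.pi / 180.0    # 3.1 arcmin expressed in radians
extent_pc = distance_pc * angle_rad           # small-angle: size = distance * angle
extent_ly = extent_pc * 3.2616                # 1 parsec is about 3.26 light years
print(round(extent_pc, 2), "pc, about", round(extent_ly, 1), "light years")
# -> 0.54 pc, about 1.8 light years, matching the value quoted above
```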
ST Camelopardalis
[ "Astronomy" ]
341
[ "Camelopardalis", "Constellations" ]
64,784,077
https://en.wikipedia.org/wiki/Mode%20%28video%20game%29
Mode (1996) is the second of two published full motion video (FMV) interactive multimedia CD-ROM based collaborations between writer and director Jeff Green and the Animatics Multimedia Corporation. History The success and international acclaim of their first title, Midnight Stranger in 1994, led to a production partnership and publishing arrangement with Corel Corporation, an expanded production budget, and the creation of an associated web site version called "Club Mode". Mode was intended to be the launch vehicle for an expansion of Corel's CD Home Collection into the multimedia entertainment business; however, after a scathing review in PC Gamer magazine, Corel never fully released Mode, the division responsible for the CD Home Collection was shut down, and its software assets were sold to Hoffmann + Associates of Toronto. Plot Experienced from the player's point-of-view (POV), users find themselves at the start of the game crashing an exclusive high-society fashion and art party on the top floor of an unnamed hotel hosted by anarchist artist Vito Brevis. The primary mystery at the heart of the event is Vito's true intentions. He appears cynical and anarchistic, but hidden information can be found that implies that he is actually part of a secret cabal that is using this event as some kind of supernatural ritual. The gameplay progresses through emotion-based interactions with other characters that are also attending the party, with the goal of creating in the player a feeling of shared intimacy with those characters. Gameplay Like Midnight Stranger, Mode is video-based and also uses an emotional continuum Mood Bar for interaction with the in-story characters rather than a text interface or an itemized set of options for each interaction. In Midnight Stranger, the Mood Bar had been proven to simulate some of the frustration and uncertainty of dealing with other people in social situations, and provide a more realistic role playing experience, and so was used similarly in Mode. The red, left end of the bar represents a negative response ("no", "I disagree", "I don't like that"), the blue central part of the band represents a neutral response ("I don't know", "I don't care", "I have no opinion"), and the green right end of the bar represents a positive response ("yes", "I agree", "I like that"). While the bar is a smooth colour gradient that shows no clear demarcations, giving the illusion of infinite choice, practically there were usually only a few possible pathways from any given bar, with varying percentages of the bar devoted to the choices depending on circumstance. Mode had more material than Midnight Stranger, and was distributed on 3 CD-ROMs; however, it continued to use the technique of embedding small frames of video into full screen still frames to save disk space — usually the head and shoulders of the speaking character being the only part of the frame that moved. This embedded video technique allowed many times more interactive material to be included on the disks than could have been done with full-screen video (even if highly compressed), and thus allowed greater freedom in storytelling and a more diverse cast within the technical limitations of the CD-ROM format.
This approach was both lauded and criticized in published reviews, since it was a clever solution yet often created distracting disjoints between character motion and the framing image. Cast and crew Writer and director — Jeff Green Producer — Alfredo Coppola, Animatics Multimedia Corporation Publisher — Corel Corporation Crew — Video Producer: Jeff Lively; Multimedia Producer: Don Aker; Art Director: Tero Hollo; Music: Tim Kohout, Dan Proulx; Director Of Photography: Scott Plante, Visualeyes Productions; Associate Producers: Christine Doyle, Krista Thompson; Production Manager: Sonja Kruitwagen; Production Coordinator: Eileen Keefe; Sound Technician: Eldy Gouthro; Digital Media Manipulator: Warren Hodgson; Animation and Effects: Teddy MacLeod; Digitizing: James Hall; Graphic Designer: Jim Dicker; Gaffer: John Oliver; Key Grip: Michael Tien; Props: ; Props Assistant: Kent Osgood; Wardrobe Coordinator: Celine Richardson; Head Dresser: Sheila Snapper; Makeup Artist: Samantha Caldwell; Hair Stylist: Lawrence Finnie; Body Painting: Jef (Krunchie) Harris, Sylvie Rochon Cast — Chip Chuipka (Vito Brevis), Natalie Grey (Charity Flame), Daniel Richer (Riel Attaychek), Quinn (Ed Cheminoma), W. David Watson (Jack Neetch), Gloria David (Mia Tesla), Alexandre Lincourt (Hercule Anaste), Katja Dolleato (Bela), Luigi Saracino (Killer Klown), Nathalie Coutu (The Dome Lady), Kent Rowe (Tuba), Cynthia Smith (Sheeva) Additional performances — The Ambient People (Denise, Hound, Robin, Alex, Chrystine), Johnathan Parker, The James Sisters, Dirt Farm, Tim Kohout and friends. Body Paint Models — Vanessa LaBelle, Meriel Meeks, Karan Friis, Jamie Durie Barrett Palmer Models — Nicole Harod, Sara Ducharme, Erin Downie, Parnille Holt Sawaya, Silvia DiNardo, Isabelle Mousseau, Sandra Guerard, Angela Selkirk, Alana Husson, Maganda Lumbu, Erin Binks, Joni-Sarah White Awards 1996 — Mode was a finalist for the Macromedia People's Choice Award, 1996 Macromedia International User's Conference 1997 — Club Mode was a finalist at the International Digital Media Awards, Multimedia '97 Reception Mode was intended to be the launch vehicle for an expansion of Corel's CD Home Collection into the multimedia entertainment business; however, it received a scathing review in PC Gamer magazine. After this review, Corel never fully released Mode. Mode did receive many favourable reviews at the time, and has since developed an international fan base. Edited playthroughs by one player, posted on YouTube between 2012 and 2014, have collectively received over 100,000 views. References External links Primary web site of Jeff Green 1996 video games Full motion video based games Interactive movie video games Multimedia works Single-player video games Video games developed in Canada
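The Mood Bar mechanic described in the Gameplay section can be illustrated with a hypothetical sketch; the function, labels, and proportions below are invented for illustration and are not drawn from the game's actual code.

```python
# Hypothetical illustration: mapping a click on the continuous red-to-green Mood Bar
# onto a small set of discrete response branches, with the share of the bar given to
# each branch varying from scene to scene.
def mood_bar_response(position, branches):
    """position: 0.0 (far left, negative) .. 1.0 (far right, positive).
    branches: list of (share_of_bar, response_label) pairs summing to 1.0."""
    edge = 0.0
    for share, label in branches:
        edge += share
        if position <= edge:
            return label
    return branches[-1][1]   # fall through to the last branch for position == 1.0

# One scene might weight the neutral band heavily, another hardly at all.
scene = [(0.30, "negative"), (0.40, "neutral"), (0.30, "positive")]
print(mood_bar_response(0.12, scene))   # negative
print(mood_bar_response(0.55, scene))   # neutral
print(mood_bar_response(0.95, scene))   # positive
```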
Mode (video game)
[ "Technology" ]
1,332
[ "Multimedia", "Multimedia works" ]