Dataset columns: id (int64, 580 to 79M), url (string, length 31 to 175), text (string, length 9 to 245k), source (string, length 1 to 109), categories (string, 160 classes), token_count (int64, 3 to 51.8k)
28,162
https://en.wikipedia.org/wiki/List%20of%20nearest%20stars
This list covers all known stars, white dwarfs, brown dwarfs, and sub-brown dwarfs within 20 light-years of the Sun. So far, 131 such objects have been found. Only 22 are bright enough to be visible without a telescope, which requires the star to appear at least as bright as the dimmest brightness visible to the naked eye from Earth, apparent magnitude 6.5. The known 131 objects are bound in 94 stellar systems. Of those, 103 are main sequence stars: 80 red dwarfs and 23 "typical" stars having greater mass. Additionally, astronomers have found 6 white dwarfs (stars that have exhausted all fusible hydrogen), 21 brown dwarfs, as well as 1 sub-brown dwarf, WISE 0855−0714 (possibly a rogue planet). The closest system is Alpha Centauri, with Proxima Centauri as the closest star in that system, at 4.2465 light-years from Earth. The brightest, most massive and most luminous object among those 131 is Sirius A, which is also the brightest star in Earth's night sky; its white dwarf companion Sirius B is the hottest object among them. The largest object within 20 light-years is Procyon. The Solar System, and the other stars/dwarfs listed here, are currently moving within (or near) the Local Interstellar Cloud, roughly across. The Local Interstellar Cloud is, in turn, contained inside the Local Bubble, a cavity in the interstellar medium about across. It contains Ursa Major and the Hyades star cluster, among others. The Local Bubble also contains the neighboring G-Cloud, which contains the stars Alpha Centauri and Altair. In the galactic context, the Local Bubble is a small part of the Orion Arm, which contains most of the stars that we can see without a telescope. The Orion Arm is one of the spiral arms of our Milky Way galaxy. Astrometrics The easiest way to determine the distance of a star from the Sun at these ranges is parallax, which measures how much stars appear to move against background objects over the course of Earth's orbit around the Sun. As a parsec (parallax second) is defined as the distance at which an object would appear to move exactly one second of arc against background objects, stars less than 5 parsecs away will have measured parallaxes of over 0.2 arcseconds, or 200 milliarcseconds. Determining past and future positions relies on accurate astrometric measurements of their parallax and total proper motions (how far they move across the sky due to their actual velocity relative to the Sun), along with spectroscopically determined radial velocities (their speed directly towards or away from us, which combined with proper motion defines their true movement through space relative to the Sun). Both of these measurements are subject to increasingly significant errors over very long time spans, especially over the several-thousand-year time spans it takes for stars to noticeably move relative to each other. Based on results from the Gaia telescope's second data release from April 2018, an estimated 694 stars will approach the Solar System to less than 5 parsecs in the next 15 million years. Of these, 26 have a good probability of coming within and another 7 within . This number is likely much higher, given the sheer number of stars that would need to be surveyed; a star that approached the Solar System 10 million years ago, moving at a typical Sun-relative 20–200 kilometers per second, would be 600–6,000 light-years from the Sun at the present day, with millions of stars closer to the Sun.
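To make the parallax relationship above concrete, here is a minimal Python sketch converting a measured parallax into a distance. It is an illustration only: the conversion factor of 3.2616 light-years per parsec and the rounded Proxima Centauri parallax of 0.768 arcseconds are assumed example values, chosen to reproduce the roughly 4.25 light-year distance quoted in the text.

```python
# Distance from parallax: d [parsecs] = 1 / p [arcseconds].
# A star closer than 5 pc therefore shows a parallax larger than 0.2 arcsec (200 mas).

LY_PER_PARSEC = 3.2616  # assumed conversion constant, light-years per parsec

def distance_from_parallax(parallax_arcsec: float) -> tuple[float, float]:
    """Return (distance in parsecs, distance in light-years)."""
    d_pc = 1.0 / parallax_arcsec
    return d_pc, d_pc * LY_PER_PARSEC

# Proxima Centauri's parallax is roughly 0.768 arcseconds (illustrative value).
print(distance_from_parallax(0.768))   # ~ (1.30 pc, 4.25 ly)
print(distance_from_parallax(0.200))   # exactly 5 pc, the boundary mentioned above
```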
The closest encounter to the Sun so far predicted is the low-mass orange dwarf star Gliese 710 / HIP 89825, with roughly 60% of the mass of the Sun. It is currently predicted to pass ( au) from the Sun in million years from the present, close enough to significantly disturb the Solar System's Oort cloud. List The classes of the stars and brown dwarfs are shown in the color of their spectral types (these colors are derived from conventional names for the spectral types and do not necessarily represent the star's observed color). Many brown dwarfs are not listed by visual magnitude but by near-infrared J-band apparent magnitude, because of how dim (and often invisible) they are in the visible bands (U, B or V). Absolute magnitude (with the photometric band denoted in subscript) is the magnitude an object would have at a standard distance of 10 parsecs, across imaginary empty space devoid of all its sparse dust and gas. Some of the parallaxes and resultant distances are rough measurements. Distant future and past encounters Over long periods of time, the slow independent motions of stars change both their relative positions and their distances from the observer. This can cause other currently distant stars to fall within a stated range, which may be readily calculated and predicted using accurate astrometric measurements of parallax and total proper motions, along with spectroscopically determined radial velocities. Although extrapolations can be made into the past or future, they are subject to increasingly significant cumulative errors over very long periods. Inaccuracies of these measured parameters make determining the true minimum distances of any encountering stars or brown dwarfs fairly difficult. One of the first stars known to approach the Sun particularly closely is Gliese 710. The star, whose mass is roughly half that of the Sun, is currently 62 light-years from the Solar System. It was first noticed in 1999 using data from the Hipparcos satellite, and was estimated to pass less than from the Sun in 1.4 million years. With the release of Gaia's observations of the star, this estimate has since been refined to a much closer , close enough to significantly disturb objects in the Oort cloud, which extends from the Sun. Gaia's third data release has provided updated values for many of the candidates in the table below.
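As a worked illustration of the absolute-magnitude convention described above (apparent magnitude rescaled to a standard 10-parsec distance), the following sketch applies the usual distance-modulus relation M = m − 5·log10(d / 10 pc), ignoring extinction. The Sirius A figures are rounded, commonly quoted values used here only as an assumed example.

```python
import math

def absolute_magnitude(apparent_mag: float, distance_pc: float) -> float:
    """Absolute magnitude from apparent magnitude and distance in parsecs,
    through the idealized dust-free space described in the text."""
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

# Sirius A: apparent magnitude about -1.46 at about 2.64 pc (rounded example values).
print(round(absolute_magnitude(-1.46, 2.64), 2))   # ~ 1.43
```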
See also Interstellar travel Location of Earth The Magnificent Seven Nearby Stars Database Solar System#Galactic context Stars in fiction Related lists List of stars with resolved images List of brightest stars List of star systems within 20–25 light-years List of star systems within 25–30 light-years List of star systems within 30–35 light-years List of star systems within 35–40 light-years List of star systems within 40–45 light-years List of star systems within 45–50 light-years List of star systems within 50–55 light-years List of star systems within 55–60 light-years List of star systems within 60–65 light-years List of star systems within 65–70 light-years List of star systems within 70–75 light-years List of star systems within 75–80 light-years List of star systems within 80–85 light-years List of star systems within 85–90 light-years List of star systems within 90–95 light-years List of star systems within 95–100 light-years List of nearest giant stars List of nearest supergiants List of nearest bright stars Historical brightest stars List of nearest exoplanets List of nearest terrestrial exoplanet candidates List of nearest stars by spectral type List of nearby stellar associations and moving groups List of star-forming regions in the Local Group Lists of stars List of Solar System objects by greatest aphelion List of trans-Neptunian objects List of nearest known black holes Notes References External links "The 100 nearest star systems", Research Consortium on Nearby Stars The dynamics of the closest stars Nearest Stars 3D View Table 4 "The Census of Stars and Brown Dwarfs within 8 Parsecs of the Sun" in nearest stars and brown dwarfs Local Bubble nearest stars and brown dwarfs Articles containing video clips nearest stars and brown dwarfs
List of nearest stars
Physics,Astronomy
1,582
22,121,664
https://en.wikipedia.org/wiki/Audience%20reception
Also known as reception analysis, audience reception theory has come to be widely used as a way of characterizing the wave of audience research which occurred within communications and cultural studies during the 1980s and 1990s. On the whole, this work has adopted a "culturalist" perspective, has tended to use qualitative (and often ethnographic) methods of research and has tended to be concerned, one way or another, with exploring the active choices, uses and interpretations made of media materials, by their consumers. Can also be known as reception theory, in which producers encode with a desired response, then the audience decode. Origins Audience reception theory can be traced back to work done by British Sociologist Stuart Hall and his communication model first revealed in an essay titled "Encoding/Decoding." Hall proposed a new model of mass communication which highlighted the importance of active interpretation within relevant codes. Hall's model of communication moved away from the view that the media had the power to directly cause a certain behavior in an individual, while at the same time holding onto the role of media as an agenda-setting function. Hall's model put forward three central premises: (1) the same event can be encoded in more than one way; (2) the message contains more than one possible reading; and (3) understanding the message can be a problematic process, regardless of how natural it may seem. In "Encoding/Decoding", Hall addressed the issue of how people make sense of media texts, and presented three hypothetical methods of decoding. Hall often used examples involving televised media to explain his ideas. Hall argued that the dominant ideology is typically inscribed as the "preferred reading" in a media text, but that this is not automatically adopted by readers. The social situations of readers/viewers/listeners may lead them to adopt different stances."Dominant" readings are produced by those whose social situation favours the preferred reading; 'negotiated' readings are produced by those who inflect the preferred reading to take account of their social position; and "oppositional" readings are produced by those whose social position puts them into direct conflict with the preferred reading. The encoding/decoding model invites analysts to categorize readings as "dominant", "negotiated" or "oppositional". This set of three presupposes that the media text itself is a vehicle of dominant ideology and that it hegemonically strives to get readers to accept the existing social order, with all its inequalities and oppression of underprivileged social groups. Audience reception also has roots in uses and gratifications, structuralism, and post-structuralism. Encoding/decoding model Since the early days of cultural studies-oriented interest in processes of audience meaning-making, the scholarly discussion about "readings" has leaned on two sets of polar opposites that have been invoked to explain the differences between the meaning supposedly encoded into and now residing in the media text and the meanings actualized by audiences from that text. One framework of explanation has attempted to position readings on an ideological scale from "dominant" through "negotiated", to "oppositional", while another has relied on the semiotic notion of "polysemy", frequently without identifying or even mentioning its logical "other": the "monosemic" reading. 
Often these two frameworks have been used within the same argument, with no attempt made to distinguish "polysemic" from "oppositional" readings: in the literature one often encounters formulations which imply that if a TV programme triggers a diversity of meanings in different audience groups, this programme can then be called "polysemic", and the actualized meanings "oppositional". Audience analysis Audiences can be groups or individuals targeted by and often built by media industries. Audience can be active (constantly filtering or resisting content) or passive (complying and vulnerable). Audience analysis emphasizes the diversity of responses to a given popular culture artifact by examining as directly as possible how given audiences actually understand and use popular culture texts. Three kinds of research make up most audience research: (1) broad surveys and opinion polls (like the famous Nielsen ratings, but also those done by advertisers and by academic researchers) that cover a representative sample of many consumers. (2) small, representative focus groups brought in to react to and discuss a pop culture text. (3) in-depth ethnographic participant observation of a given audience, in which, for example, a researcher actually lives with and observes the TV viewing habits of a household over a substantial period of time, or travels on the road with a rock band. Each approach has strengths and weaknesses, and sometimes more than one approach is used as a check on the others. Audience analysis tries to isolate variables like region, race, ethnicity, age, gender, and income in an effort to see how different social groups tend to construct different meanings for the same text. In media studies, there are two models used to construct audience reception. These models are defined as (1) The effects/hypodermic model and (2) the uses and gratification model. The effects model focuses on what the media does to audiences, influences is based on the message conveyed within the media. The uses and gratification model emphasizes what the audience does with the media presented to them, here influence lies with the consumer. The "ethnographic turn" contributed to the maturing of the field as contexts of consumption are now recognized as having significant impact upon the processes of the interpretation of media. Sometimes characterized as the "active audience" approach, this paradigm has attracted criticism for the apparent jettisoning of the influence of cultural power, diminishing the authority of the text while elevating the influence of context. Nevertheless, developments in this vein have deepened our understanding of the significant relationship between media texts and the production of identity. Repeatedly, audience studies and fan studies have recorded the ways in which media texts are utilized and often re-made in the creative production and reproduction of self-identity. Reception theory Reception theory emphasizes the reader's reception of a literary text or media. This approach to textual analysis focuses on the scope for negotiation and opposition on the part of the audience. This means that a "text"—be it a book, movie, or other creative work—is not simply passively accepted by the audience, but that the reader / viewer interprets the meanings of the text based on their individual cultural background and life experiences. In essence, the meaning of a text is not inherent within the text itself, but is created within the relationship between the text and the reader. 
A basic acceptance of the meaning of a specific text tends to occur when a group of readers have a shared cultural background and interpret the text in similar ways. It is likely that the less shared heritage a reader has with the artist, the less he/she will be able to recognize the artist's intended meaning, and it follows that if two readers have vastly different cultural and personal experiences, their reading of a text will vary greatly. Rooted in literary and cultural studies, reception theory emerged in the late 1960s as a response to traditional approaches to media analysis, which focused primarily on the intentions of the author or producer. Instead of treating media as passive vessels of meaning, reception theory recognizes that meaning is not fixed but is actively negotiated by the audience. It acknowledges that individuals bring their own beliefs, experiences, and cultural contexts to their interpretations of media texts. Thus, the reception of media texts can vary widely, reflecting the diverse interpretations and experiences of different audience members. One of the central tenets of reception theory is the concept of polysemy - the idea that media texts can have multiple meanings that are not dictated by the author or producer but are generated through interaction with the audience. This challenges the notion of a singular, authoritative interpretation, as it recognizes that each viewer or reader may interpret a text differently based on their own subjectivities and social contexts. For example, a film may be received as a comedy by some viewers, while others might perceive it as a tragedy, depending on their own experiences and cultural backgrounds. Reception theory further highlights the complex nature of media consumption, as audiences are not passive recipients but active participants in the construction of meaning. Various factors such as age, gender, race, class, and education can influence readers' or viewers' interpretations of media texts. Moreover, reception theory suggests that texts are not necessarily absorbed in their entirety, but rather selectively received and interpreted based on the audience's interests and preferences. This selective reception reinforces the idea that audiences actively engage with media texts and shape their meanings based on their own needs and desires. Reception theory also underscores the importance of considering the social context in which media texts are consumed. The social and cultural experiences of individuals can shape their understanding and interpretation of media messages. For example, the reception of a television show or movie in one culture may differ significantly from how it is received in another culture, due to differences in cultural norms, values, and social structures. Understanding the social context is thus crucial in understanding the different ways in which media texts are interpreted and received. In conclusion, reception theory provides a valuable framework for understanding the complex relationship between media texts and their audiences. It emphasizes the active role of the audience in constructing meaning and challenges the notion of a single, authoritative interpretation. By recognizing the multiplicity of meanings generated through audience reception and considering the influence of individual subjectivities and social contexts, reception theory helps shed light on the diverse ways in which media texts are interpreted and understood. 
References External links “Audience Analysis.” Cultural Politics: Popular Culture. Participations. Journal of Audience and Reception Studies. Reception theory Further reading Hall, Stuart. Encoding/Decoding.” Hill, Andrew. “Investigating Audience Reception of Electroacoustic Audio-visual Compositions: Developing an Effective Methodology.” eContact! 12.4 — Perspectives on the Electroacoustic Work / Perspectives sur l’œuvre électroacoustique (August 2010). Montréal: CEC. Wilson, Karina (Ed.) Audience Theory.” Media Know All, 2009. Human communication
Audience reception
Biology
2,063
54,709,700
https://en.wikipedia.org/wiki/Oil%20purification
Oil purification (transformer, turbine, industrial, etc.) removes oil contaminants in order to prolong oil service life. Contaminants of industrial oils Contaminants and various impurities get into industrial oils during storage and operation. The most common contaminants are: water; solid particles (like soot and dirt); gases; asphalt-resinous paraffin deposits; acids; oil sludge; organometallic compounds; unsaturated hydrocarbons; polyaromatic hydrocarbons; additive remains; products of oil decomposition. Methods of oil purification Industrial oils are purified through sedimentation, filtration, centrifugation, vacuum treatment and adsorption purification. Sedimentation is the precipitation of solid particles and water to the bottom of oil tanks under gravity. The main drawback of this process is that it is slow. Filtration is a partial removal of solid particles through a filter medium. Oil filtration systems generally use multistage filtration with coarse and fine filters. Centrifugation is the separation of oil from water, or oil from solid particles, by centrifugal force. Vacuum treatment degasses and dehydrates industrial oil. This method is well suited for removing dispersed and dissolved water, as well as dissolved gases. Adsorption purification, in contrast to the methods mentioned above, does not remove solid particles and gases, but it is effective at removing water, oil sludge and aging products. This process uses adsorbents of natural or artificial origin: bleaching clays, synthetic aluminosilicates, silica gels, zeolites, etc. The difference between purification and regeneration of industrial oil Often the terms "oil purification" and "oil regeneration" are used synonymously, although in fact they are not the same. Oil purification cleans oil of contaminants. It can be used independently or as a part of oil regeneration. Oil regeneration also removes aging products (with the help of adsorbents) and stabilizes the oil with additives. Regenerated oil is free of carcinogenic products of oil aging and is stabilized with additives. References Oils Recycling
Oil purification
Chemistry
451
34,081,893
https://en.wikipedia.org/wiki/Epidemiology%20of%20childhood%20obesity
Prevalence of childhood obesity has increased worldwide. The World Health Organization (WHO) estimated that 39 million children younger than 5 years of age were overweight or had obesity in 2020, and that 340 million children between 5 and 19 were overweight or had obesity in 2016. If the trend had continued at the same rate as seen after the year 2000, more children would have been expected to have obesity than moderate or severe undernutrition in 2022. However, the COVID-19 pandemic will most likely affect the prevalence of both undernutrition and obesity. It was reported in 2010 that the prevalence of childhood obesity during the past two to three decades had, much like in the United States, increased in most other industrialized nations, excluding Russia and Poland. Between the early 1970s and late 1990s, the prevalence of childhood obesity doubled or tripled in Australia, Brazil, Canada, Chile, Finland, France, Germany, Greece, Japan, the UK, and the USA. A 2010 article from the American Journal of Clinical Nutrition analyzed global prevalence in preschool children (less than 5 years old) from 144 countries. Cross-sectional surveys from the 144 countries were used, and overweight and obesity were defined as preschool children with values >3 SDs from the mean. They found an estimated 42 million obese children under the age of five in the world, of which close to 35 million lived in developing countries. Additional findings included worldwide prevalence of childhood overweight and obesity increasing from 4.2% (95% CI: 3.2%, 5.2%) in 1990 to 6.7% (95% CI: 5.6%, 7.7%) in 2010, and it was expected to rise to 9.1% (95% CI: 7.3%, 10.9%), an estimated 60 million overweight and obese children, in 2020. Family and the public view Children are often viewed as a vulnerable population, needing more attention from government policies and family. The media also portrays this in shows and movies, which can negatively affect parents of children with obesity by placing blame and responsibility solely on the parents. United States Childhood obesity in the United States has been a serious problem among children and adolescents and can cause serious health problems among youth. According to the CDC, as of 2015–2016, 18.5% of children and adolescents in the United States have obesity, which affects approximately 13.7 million children and adolescents. It affects children of all ages, and some ethnic groups more than others: 25.8% of Hispanic, 22.0% of non-Hispanic black, and 14.1% of non-Hispanic white children are affected by obesity. Prevalence has remained high over the past three decades across most age, sex, racial/ethnic, and socioeconomic groups; it represents a three-fold increase from one generation ago and is expected to continue rising. Prevalence of pediatric obesity also varies by state. The highest rates of childhood obesity are found in the southeastern states, of which Mississippi was found to have the highest rates of overweight/obese children, 44.5%/21.9% respectively. The western states were found to have the lowest prevalence, such as Utah (23.1%) and Oregon (9.6%). From 2003 to 2007, there was a twofold increase in states reporting a prevalence of pediatric obesity greater than or equal to 18%. Oregon was the only state showing a decline from 2003 to 2007 (a decline of 32%), and, using children in Oregon as a reference group, obesity in children in Illinois, Tennessee, Kentucky, West Virginia, Georgia, and Kansas has doubled.
The likelihood of obesity in children was found to increase significantly with decreasing levels of household income, lower neighborhood access to parks or sidewalks, increased television viewing time, and increased recreational computer time. Black and Hispanic children are more likely to be obese compared to white children (blacks OR = 1.71 and Hispanics OR = 1.76). Prevalence For the 2015–2016 period, the CDC found that the prevalence of obesity for children aged 2–19 years old in the U.S. was 18.5%. Current trends show that children aged 12–19 years old have obesity levels 2.2 percentage points higher than children 6–11 years old (20.6% vs. 18.4%), and children 6–11 years old have obesity levels 4.5 percentage points higher than children aged 2–5 years old (18.4% vs. 13.9%). Boys 6–19 years old have a prevalence of obesity 6.1 percentage points higher than boys aged 2–5 years old (20.4% vs. 14.3%), while girls aged 12–19 years old have a prevalence of obesity 7.4 percentage points greater than girls aged 2–5 years old (20.9% vs. 13.5%). A 2010 NCHS Data Brief published by the CDC found notable trends in the prevalence of childhood obesity. The prevalence of obesity among boys from households with an income at or above 350% of the poverty level was found to be 11.9%, while among boys from households with an income below 130% of the poverty level it was 21.1%. The same trend followed in girls: girls with a household income at or above 350% of the poverty level had an obesity prevalence of 12.0%, while girls with a household income below 130% of the poverty level had a 19.3% prevalence. These trends were not consistent when stratified according to race. “The relationship between income and obesity prevalence is significant among non-Hispanic white boys; 10.2% of those living in households with income at or above 350% of the poverty level are obese compared with 20.7% of those in households below 130% of the poverty level.” The same trend follows in non-Hispanic white girls (10.6% of those living at or above 350% of the poverty level are obese, and 18.3% of those living below 130% of the poverty level are obese). There is no significant trend in prevalence by income level for either boys or girls among non-Hispanic black and Mexican-American children and adolescents. “In fact, the relationship does not appear to be consistent; among Mexican-American girls, although the difference is not significant, 21.0% of those living at or above 350% of the poverty level are obese compared with 16.2% of those living below 130% of the poverty level.” Additional findings also include that the majority of children and adolescents with obesity are not low-income children. The majority of non-Hispanic white children and adolescents with obesity also live in households with income levels at or above 130% of the poverty level: approximately 7.5 million such children live in households with income at or above 130% of the poverty level, compared to about 4.5 million in households with income below 130% of the poverty level. Incidence Identifying the incidence of age-related onset of obesity is vital to understanding when intervention opportunities are most important. Similarly, identifying the incidence of childhood obesity within a given race, ethnicity, and socioeconomic status can also help delineate other areas of intervention opportunity for certain populations. A systematic review on the incidence of childhood obesity found that childhood obesity in the U.S. declines with age.
The age- and sex-related incidence of obesity was found to be "4.0% for infants 0–1.9 years, 4.0% for preschool-aged children 2.0–4.9 years, 3.2% for school-aged children 5.0–12.9 years, and 1.8% for adolescents 13.0–18.0 years." When the incidence of childhood obesity was isolated for the socioeconomically disadvantaged, or for racial/ethnic minority groups, obesity incidence was found to be "4.0% at ages 0–1.9 years, 4.1% at 2.0–4.9 years, 4.4% at 5.0–12.9 years, and 2.2% at 13.0–18.0 years." Based on the 2015–2016 National Health and Nutrition Examination Survey (NHANES), researchers at Duke University found that the incidence of childhood obesity is on the rise, with a notable rise in preschool boys (2.0–4.9 years) and girls aged 16.0–19.0 years old. The Duke University researchers also discovered that although it had been believed that obesity in children had been declining in recent years, obesity in children at all ages has actually been increasing. See also Epidemiology of obesity References Childhood Obesity Childhood obesity
Epidemiology of childhood obesity
Environmental_science
1,792
883,683
https://en.wikipedia.org/wiki/Tropospheric%20Emission%20Spectrometer
Tropospheric Emission Spectrometer or TES was a satellite instrument designed to measure the state of the earth's troposphere. Overview TES was a high-resolution infrared Fourier Transform spectrometer and provided key data for studying tropospheric chemistry, troposphere-biosphere interaction, and troposphere-stratosphere exchanges. It was built for NASA by the Jet Propulsion Laboratory, California Institute of Technology in Pasadena, California. It was successfully launched into polar orbit by a Delta II 7920-10L rocket aboard NASA's third Earth Observing Systems spacecraft (EOS-Aura) at 10:02 UTC on July 15, 2004. Originally planned as a 5-year mission, it was decommissioned after almost 14 years on January 31, 2018. References External links NASA JPL's TES page NASA Aura TES page Spectrometers Earth observation satellite sensors
Tropospheric Emission Spectrometer
Physics,Chemistry
186
1,812,426
https://en.wikipedia.org/wiki/Gires%E2%80%93Tournois%20etalon
In optics, a Gires–Tournois etalon (also known as a Gires–Tournois interferometer) is a transparent plate with two reflecting surfaces, one of which has very high reflectivity, ideally unity. Due to multiple-beam interference, light incident on a Gires–Tournois etalon is (almost) completely reflected, but has an effective phase shift that depends strongly on the wavelength of the light. The complex amplitude reflectivity of a Gires–Tournois etalon is given by r = \frac{-r_1 + e^{-i\delta}}{1 - r_1 e^{-i\delta}}, where r_1 is the complex amplitude reflectivity of the first surface and \delta = \frac{4\pi}{\lambda} n t \cos\theta_t is the round-trip phase, in which n is the index of refraction of the plate, t is the thickness of the plate, \theta_t is the angle of refraction the light makes within the plate, and \lambda is the wavelength of the light in vacuum. Nonlinear effective phase shift Suppose that r_1 is real. Then |r|^2 = 1, independent of \delta. This indicates that all the incident energy is reflected and the intensity is uniform. However, the multiple reflections cause a nonlinear phase shift \Phi. To show this effect, we assume r_1 is real and r_1 = \sqrt{R}, where R is the intensity reflectivity of the first surface. Define the effective phase shift \Phi through r = e^{i\Phi}. One obtains \tan\frac{\Phi}{2} = -\frac{1+\sqrt{R}}{1-\sqrt{R}} \tan\frac{\delta}{2}. For R = 0, there is no reflection from the first surface and the resultant nonlinear phase shift is equal to the round-trip phase change (\Phi = -\delta), a linear response. However, as can be seen, when R is increased, the nonlinear phase shift \Phi gives a nonlinear response to \delta and shows step-like behavior. The Gires–Tournois etalon has applications in laser pulse compression and the nonlinear Michelson interferometer. Gires–Tournois etalons are closely related to Fabry–Pérot etalons. This can be seen by examining the total reflectivity of a Gires–Tournois etalon when the reflectivity of its second surface becomes smaller than 1. In these conditions the property |r| = 1 is no longer observed: the reflectivity starts exhibiting a resonant behavior which is characteristic of Fabry–Pérot etalons. References (An interferometer useful for pulse compression of a frequency modulated light pulse.) Gires–Tournois Interferometer in RP Photonics Encyclopedia of Laser Physics and Technology Optical components Interferometers
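The step-like behavior of the effective phase shift described above can be visualized numerically. The sketch below is an illustration only: it evaluates the reconstructed reflectivity formula for an ideal etalon (back-surface reflectivity equal to 1) and takes the argument of the complex reflectivity as the effective phase; the intensity reflectivities and round-trip phases in the loop are arbitrary example inputs.

```python
import cmath
import math

def gt_reflectivity(delta: float, r1: float) -> complex:
    """Complex amplitude reflectivity of an ideal Gires-Tournois etalon
    (back-surface reflectivity = 1); r1 is the front-surface amplitude reflectivity."""
    return (-r1 + cmath.exp(-1j * delta)) / (1 - r1 * cmath.exp(-1j * delta))

def effective_phase(delta: float, R: float) -> float:
    """Effective phase shift Phi, defined through r = exp(i*Phi)."""
    return cmath.phase(gt_reflectivity(delta, math.sqrt(R)))

for R in (0.0, 0.5, 0.9):                       # example intensity reflectivities
    samples = [round(effective_phase(d, R), 2)
               for d in (0.5, 1.5, 2.5, 3.0)]   # example round-trip phases in radians
    print(R, samples)   # |r| stays 1, but Phi(delta) steepens toward a step as R grows
```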
Gires–Tournois etalon
Materials_science,Technology,Engineering
451
5,310,500
https://en.wikipedia.org/wiki/Ethinamate
Ethinamate (Valamin, Valmid) is a short-acting carbamate-derivative sedative-hypnotic medication used to treat insomnia. Regular use leads to drug tolerance, and it is usually not effective for more than 7 days. Prolonged use can lead to dependence. Ethinamate has been replaced by other medicines (particularly benzodiazepines), and it is not available in the Netherlands, the United States or Canada. It is a schedule IV substance in the United States. Synthesis Ethinamate (1-ethynylcyclohexanone carbamate) is synthesized by combining acetylene with cyclohexanone to make 1-ethynylcyclohexanol, and then transforming this into a carbamate by the subsequent reaction with phosgene, and later with ammonia. Some lithium metal or similar is used to make the acetylene react with the cyclohexanone in the first step. References Hypnotics Sedatives Carbamates Ethynyl compounds GABAA receptor positive allosteric modulators
Ethinamate
Biology
238
47,250,261
https://en.wikipedia.org/wiki/Pseudofusicoccum%20kimberleyense
Pseudofusicoccum kimberleyense is an endophytic fungus that might be a canker pathogen, specifically for Adansonia gibbosa (baobab). It was isolated from said trees, as well as surrounding ones, in the Kimberley (Western Australia). References Further reading Sakalidis, Monique L., Giles E. StJ Hardy, and Treena I. Burgess. "Endophytes as potential pathogens of the baobab species Adansonia gregorii: a focus on the Botryosphaeriaceae." Fungal Ecology 4.1 (2011): 1–14. Sakalidis, Monique L., et al. "Pathogenic Botryosphaeriaceae associated with Mangifera indica in the Kimberley region of Western Australia." European journal of plant pathology 130.3 (2011): 379–391. Burgess, T. I., et al. "Movement of pathogens between horticultural crops and endemic trees in the Kimberleys." (2009): 36. External links MycoBank Botryosphaeriales Fungus species
Pseudofusicoccum kimberleyense
Biology
230
7,541,586
https://en.wikipedia.org/wiki/Mission%20specialist
Mission specialist (MS) is a term for a specific position held by astronauts who are tasked with conducting a range of scientific, medical, or engineering experiments during a spaceflight mission. These specialists were usually assigned to a specific field of expertise that was related to the goals of the particular mission they were assigned to. Mission specialists were highly trained individuals who underwent extensive training in preparation for their missions. They were required to have a broad range of skills, including knowledge of science and engineering, as well as experience in operating complex equipment in a zero-gravity environment. During a mission, mission specialists were responsible for conducting experiments, operating equipment, and performing spacewalks to repair or maintain equipment outside the spacecraft. They also played a critical role in ensuring the safety of the crew by monitoring the spacecraft's systems and responding to emergencies as needed. The role of mission specialist was an important one in the Space Shuttle program, as they were instrumental in the success of the program's many scientific and engineering missions. Many of the advances in science and technology that were made during this period were made possible by the hard work and dedication of the mission specialists who worked tirelessly to push the boundaries of what was possible in space. References Astronauts
Mission specialist
Biology
246
63,140,623
https://en.wikipedia.org/wiki/Ingrid%20Burke
Ingrid C. "Indy" Burke is the Carl W. Knobloch, Jr. Dean at the Yale School of Forestry & Environmental Studies. She is the first female dean in the school's 116 year history. Her area of research is ecosystem ecology with a primary focus on carbon cycling and nitrogen cycling in semi-arid rangeland ecosystems. She teaches on subjects relating to ecosystem ecology, and biogeochemistry. Early life and education Burke received her B.S in biology from Middlebury College and her Ph.D in botany from the University of Wyoming. At Middlebury College, Burke was planning on becoming an English major, but after taking a science class where they examined the role of photosynthesis in aquatic environments she became fascinated by the topic of environmental science. Soon after taking this class, Burke decided to switch her major to biology after realizing that she could spend her life working outside and be able to solve scientific mysteries as a profession. After her time at Middlebury College she started a Ph.D. track at Dartmouth College. Here she planned on studying a phenomenon known as “fir waves,” where rows of balsam fir trees die collectively, forming arresting patterns across the landscape, but after her advisor moved to work at the University of Wyoming, Burke decided to move as well. After finishing her Ph.D, she moved to Colorado State University where she started her professional career. Career and research Burke's career as an environmental scientist began with a job teaching at Colorado State University in 1987 in the Natural Resource Ecology Laboratory. She became an associate professor in the Department of Forest Sciences at Colorado State University in 1994. In 2008 she began teaching at the University of Wyoming where she earned a spot as the director of the Haub School of Environment and Natural Resources. She worked there until 2016 when she became the Carl W. Knobloch, Jr. Dean at the Yale School of Forestry & Environmental Studies. Burke is also on the board of directors at The Conservation Fund. Burke has published over 150 peer reviewed articles, chapter, books and reports including the investigation of a significant project titled, "A Regional Assessment of Land Use Effects on Ecosystem Structure and Function in the Central Grasslands" from 1996-1999. This project had major implications for understanding and managing ecosystems in the central United States. Selected publications The Importance of Land-Use Legacies to Ecology and Conservation (2003) BioScience, Vol 53, Issue 1, 77–88 Texture, Climate, and Cultivation Effects on Soil Organic Matter Content in U.S. Grassland Soils (1989) Soil Science Society of America Journal, Vol. 53 No. 3, 800-805 Global-Scale Similarities in Nitrogen Release Patterns During Long-Term Decomposition (2007) Science, Vol. 315, Issue 5810, 361-364 ANPP Estimates From NDVI for the Central Grasslands Region of The United States (1997) Ecology, Vol. 78, No 3, 953-958 Interactions Between Individual Plant Species and Soil Nutrient Status in Shortgrass Steppe (1995) Ecology, Vol. 76, No 4, 45-52 additional publications can be found on her Google Scholar profile. Notable awards and honors Her awards and honors include: 2019 Fellow, Ecological Society of America, for advancing our understanding of ecosystem processes, in particular nitrogen and carbon cycling in grasslands. 
2018 Fellow, Connecticut Academy of Science and Engineering 2012 Promoting Intellectual Engagement Award, University of Wyoming 2010 Fellow, American Association for the Advancement of Sciences 2008 USDA Agricultural Research Service, Rangeland Resources Unit: Award for Enhancing Collaborative Research Partnerships 2005 Colorado State University Honors Professor 2004–2005 National Academy of Sciences Education Fellow in the Life Science 2001-2008 University Distinguished Teaching Scholar, Colorado State University 2000 Mortar Board Rose Award, Colorado State University 1993–‘98 National Science Foundation Presidential Faculty Fellow Award References Living people Year of birth missing (living people) Middlebury College alumni University of Wyoming alumni Yale University faculty Colorado State University faculty University of Wyoming faculty American botanists Biogeochemists Fellows of the American Association for the Advancement of Science
Ingrid Burke
Chemistry
815
7,255,075
https://en.wikipedia.org/wiki/Brouwer%20Award%20%28Division%20on%20Dynamical%20Astronomy%29
The Brouwer Award is awarded annually by the Division on Dynamical Astronomy of the American Astronomical Society for outstanding lifetime achievement in the field of dynamical astronomy. The prize is named for Dirk Brouwer. Recipients Source: Division on Dynamical Astronomy of the American Astronomical Society See also List of astronomy awards Prizes named after people References Astronomy prizes American science and technology awards Awards established in 1976 American Astronomical Society
Brouwer Award (Division on Dynamical Astronomy)
Astronomy,Technology
82
14,098,809
https://en.wikipedia.org/wiki/Nautical%20time
Nautical time is a maritime time standard established in the 1920s to allow ships on high seas to coordinate their local time with other ships, consistent with a long nautical tradition of accurate celestial navigation. Nautical time divides the globe into 24 nautical time zones with hourly clock offsets, spaced at 15 degrees by longitudinal coordinate, with no political deviation. Nautical timekeeping dates back to the early 20th century as a standard way to keep time at sea, although it largely only applied to military fleets pre–World War 2. This time-keeping method is only used for radio communications and to account for slight inaccuracies that using Greenwich Mean Time (GMT) may lead to during navigation of the high seas. It is typically only used for trans-oceanic travel, as captains will often not change the timekeeping for short distances such as channels or inland seas. History of nautical time Establishment Prior to 1920, ships kept solar time on the high seas by setting their clocks at night or at the morning sight so that, given the ship's speed and direction, it would be 12 o'clock when the sun crossed the ship's meridian. The establishment of nautical standard times, nautical standard time zones and the nautical date line were recommended by the Anglo-French Conference on Time-keeping at Sea in 1917. The conference recommended that the standard apply to all ships, both military and civilian. These zones were adopted by all major fleets between 1920 and 1925 but not by many independent merchant ships until World War II. Letter suffixes Around 1950, a letter suffix was added to the zone description, assigning Z to the zero zone, and A–M (except J) to the east and N–Y to the west (J may be assigned to local time in non-nautical applications – zones M and Y have the same clock time but differ by 24 hours: a full day). These can be vocalized using the NATO phonetic alphabet which pronounces the letter Z as Zulu, leading to the use of the term "Zulu Time" for Greenwich Mean Time, or UT1 from January 1, 1972 onward. Zone Z runs from 7°30′W to 7°30′E longitude, while zone A runs from 7°30′E to 22°30′E longitude, etc. These nautical letters have been added to some time zone maps, like the World Time Zone Map published by HM Nautical Almanac Office (NAO), which extended the letters by adding an asterisk (*), a dagger (†) or a dot (•) for areas that do not use a nautical time zone (areas that have a half-hour or quarter-hour offset, and areas that have an offset greater than 12 hours), and a section sign (§) for areas that do not have a legal standard time (the Greenland ice sheet and Antarctica). The United Kingdom specifies UTC−3 for the claimed British Antarctic Territory. Preference for GMT over UTC In maritime usage, GMT retains its historical meaning of UT1, the mean solar time at Greenwich, which is empirically adjusted to track unpredictable variations in the Earth's rotational period. UTC, atomic time at Greenwich, makes these adjustments on a coarser granularity than GMT. Establishing latitude by local observations of solar position requires determination of the latitude on Earth where the Sun is directly overhead at the time when the observation is taken. 
Thus the coarseness of UTC in determining solar time makes it inaccurate in establishing the reference latitude of solar meridian, differing by as much as 0.9 seconds from UT1, creating an error of of a minute of longitude at all latitudes and which is at the equator but less at higher latitudes, varying roughly by the cosine of the latitude. However, the time correction DUT1 can be added to UTC to correct it to within 50 milliseconds of UT1, reducing the error to only . Solar position can also be established by celestial observation of more distant stars taken at nighttime, but this also involves a calendrical correction due to the Earth's elliptical orbit around the Sun; establishing solar position by observations of bright solar objects, such as Venus (also with its own elliptical orbit), involves yet further complexity. Today In practice, nautical times are used only for radio communication, etc. Aboard the ship, e.g. for scheduling work and meal times, the ship may use a suitable time of its own choosing. The captain is permitted to change his or her clocks at a chosen time following the ship's entry into another time zone, typically at midnight. Ships on long-distance passages change time zone on board in this fashion. On short passages the captain may not adjust clocks at all, even if they pass through different time zones, for example between the UK and continental Europe. Passenger ships often use both nautical and on-board time zones on signs. When referring to time tables and when communicating with land, the land time zone must be employed. Application The nautical time zone system is analogous to the terrestrial time zone system for use on high seas. Under the system time changes are required for changes of longitude in one-hour steps. The one-hour step corresponds to a time zone width of 15° longitude. The 15° gore that is offset from GMT or UT1 (not UTC) by twelve hours is bisected by the nautical date line into two 7°30′ gores that differ from GMT by ±12 hours. A nautical date line is implied but not explicitly drawn on time zone maps. It follows the 180th meridian except where it is interrupted by territorial waters adjacent to land, forming gaps: it is a pole-to-pole dashed line. Time on a ship's clocks and in a ship's log had to be stated along with a "zone description", which was the number of hours to be added to zone time to obtain GMT, hence zero in the Greenwich time zone, with negative numbers from −1 to −12 for time zones to the east and positive numbers from +1 to +12 to the west (hours, minutes, and seconds for nations without an hourly offset). These signs are different from those given in the List of UTC time offsets because ships must obtain GMT from zone time, not zone time from GMT. Nautical day Up to late 1805 the Royal Navy used three days: nautical, civil (or "natural"), and astronomical. For example, a nautical day of 10 July, would commence at noon on 9 July civil reckoning and end noon on 10 July civil reckoning, with pm coming before am. The astronomical day of 10 July, would commence at noon of 10 July civil reckoning and ended at noon on 11 July. The astronomical day was brought into use following the introduction of The Nautical Almanac in 1767, and the British Admiralty issued an order ending the use of the nautical day on 11 October 1805. The US did not follow suit until 1848, while many foreign vessels carried on using it until the 1880s. References Meridians (geography) Horology Time scales Time Time zones
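The zone arithmetic described above (15-degree zones centred on multiples of 15 degrees of longitude, a zone description giving the hours to add to zone time to obtain GMT, and letter suffixes Z at Greenwich, A to M eastward with J skipped, N to Y westward) can be sketched in code. This is an illustrative reading of those rules, not an implementation of any official standard; in particular, the two half-zones at the 180th meridian are folded to a single plus or minus 12 here for simplicity.

```python
def nautical_zone(longitude_deg: float) -> tuple[int, str]:
    """Return (zone description, letter suffix) for a longitude in degrees
    (east positive). Zone description = hours added to zone time to get GMT:
    negative for zones to the east, positive for zones to the west."""
    offset = round(longitude_deg / 15.0)          # nearest whole-hour zone
    offset = max(-12, min(12, offset))            # fold the 180-degree half-zones
    zone_description = -offset                    # east zones get negative descriptions

    if offset == 0:
        letter = "Z"
    elif offset > 0:                              # east of Greenwich: A-M, skipping J
        letter = "ABCDEFGHIKLM"[offset - 1]
    else:                                         # west of Greenwich: N-Y
        letter = "NOPQRSTUVWXY"[-offset - 1]
    return zone_description, letter

print(nautical_zone(0.0))     # (0, 'Z')  Greenwich zone, 7°30′W to 7°30′E
print(nautical_zone(10.0))    # (-1, 'A') zone from 7°30′E to 22°30′E
print(nautical_zone(-75.0))   # (5, 'R')  a zone five hours west of Greenwich
```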
Nautical time
Physics,Astronomy
1,439
5,076,875
https://en.wikipedia.org/wiki/Chip%20PC%20Technologies
Chip PC Technologies is a developer and manufacturer of thin client solutions and management software for server-based computing; where in a network architecture applications are deployed, managed and can be fully executed on the server. History Chip PC was founded in 2000 by Aviv Soffer and Ora Meir Soffer and raised its first round of financing from R.H. Technologies Ltd. (), an electronics contract manufacturing group. In 2005 Elbit Systems acquired 20% of the company. Later, the company established partnerships with Dell, which distributes its products, and Microsoft. In June 2007, it raised NIS 26 million in stocks, bonds, and warrants in an IPO on the Tel Aviv Stock Exchange. In November 2007, the company won Europe's largest Thin client tender thus far, to supply 20,000 Thin client PC's and management software to RZF, the tax authority of the State of North Rhine-Westphalia in Germany. Overview Chip PC supplies thin clients to Multinational & Public sector organizations, recently winning 1st place in an independent Thin-Clients Evaluation among 26 thin clients from 9 vendors worldwide. Among Chip PC customers are top organizations from various verticals, such as Healthcare, Finance, Defense (Israeli Navy), Government (US Police), and Education. Although the company's main target markets are enterprises and large organizations, it modifies and customizes models to fit other markets; such as the Networked home, SOHO (Small-Office-Home-Office), Point of sale and others. See also Mini PC Jack PC References External links Computer hardware companies Computer systems companies Electronics companies of Israel Computer terminals Thin clients Rehovot
Chip PC Technologies
Technology
334
24,065,014
https://en.wikipedia.org/wiki/Behind%20the%20Mirror
Behind the Mirror: A Search for a Natural History of Human Knowledge () is a 1973 book by the ethologist Konrad Lorenz. Lorenz shows the essentials of a lifetime's work and summarizes it into his very own philosophy: evolution is the process of growing perception of the outer world by living nature itself. Stepping from simple to higher organized organisms he shows how they benefit from information processing. The methods mirrored by organs have been created in the course of evolution as the natural history of this organism. Lorenz uses the mirror as a simplified model of our brain reflecting the part of information from the outside world it is able to "see". The backside of the mirror was created by evolution to gather as much information as needed to better survive. The book gives a hypothesis how consciousness was "invented" by evolution. One of the key positions of the book included the criticism of Immanuel Kant, arguing that the philosopher failed to realize that knowledge, as mirrored by the human mind is the product of evolutionary adaptations. Kant has maintained that our consciousness or our description and judgments about the world could never mirror the world as it really is so we can not simply take in the raw data that the world provides nor impose our forms on the world. Lorenz disputed this, saying it is inconceivable that - through chance mutations and selective retention - the world fashioned an instrument of cognition that grossly misleads man about such world. He said that we can determine the reliability of the mirror by looking behind it. Summary Lorenz summarizes his life's work into his own philosophy: Evolution is the process of growing perception of the outer world by living nature itself. Stepping from simple to higher organized organisms, Lorenz shows how they gain and benefit from information. The methods mirrored by organs have been created in the course of evolution as the natural history of this organism. In the book, Lorenz uses the mirror as a simple model of the human brain that reflects the part of the stream of information from the outside world it is able to "see". He argued that merely looking outward into the mirror ignores the fact that the mirror has a non-reflecting side, which is also a part and parcel of reality. The backside of the mirror was created by evolution to gather as much information as needed to better survive. The picture in the mirror is what we see within our mind. Within our cultural evolution we have extended this picture in the mirror by inventing instruments that transform the most needed of the invisible to something visible. The back side of the mirror is acting for itself as it processes the incoming information to improve speed and effectiveness. By that human inventions like logical conclusions are always in danger to be manipulated by these hardwired prejudices in our brain. The book gives a hypothesis how consciousness was invented by evolution. Main topics Fulguratio, the flash of lightning, denotes the act of creation of a totally new talent of a system, created by the combination of two other systems with talents much less than those of the new system. The book shows the "invention" of a feedback loop by this process. Imprinting, is the phase-sensitive learning of an individual that is not reversible. It's a static program run only once. Habituation is the learning method to distinguish important information from unimportant by analysing its frequency of occurrence and its impact. 
Conditioning by reinforcement, occurs when an event following a response causes an increase in the probability of that response occurring in the future. The ability to do this kind of learning is hardwired in our brain and is based on the principle of causality. The discovery of causality (which is a substantial element of science and Buddhism) was a major step of evolution in analysing the outer world. Pattern matching is the abstraction of different appearances into the identification of being one object and is available only in highly organized creatures Exploratory behaviour is the urge of the highest developed creatures on earth to go on with learning after maturity and leads to self-exploration which is the base for consciousness. See also Evolutionary Epistemology Karl Popper References Further reading Gerhard Medicus (2017, chapter 3). Being Human – Bridging the Gap between the Sciences of Body and Mind, Berlin VWB External links http://www.britannica.com/EBchecked/topic/58757/Behind-the-Mirror-A-Search-for-a-Natural-History-of-Human-Knowledge Comment 1973 non-fiction books Philosophy books Epistemology literature Works by Konrad Lorenz Philosophy of biology Ethology Human evolution books
Behind the Mirror
Biology
943
592,897
https://en.wikipedia.org/wiki/Hellinger%E2%80%93Toeplitz%20theorem
In functional analysis, a branch of mathematics, the Hellinger–Toeplitz theorem states that an everywhere-defined symmetric operator on a Hilbert space with inner product \langle \cdot , \cdot \rangle is bounded. By definition, an operator A is symmetric if \langle Ax , y \rangle = \langle x , Ay \rangle for all x, y in the domain of A. Note that symmetric everywhere-defined operators are necessarily self-adjoint, so this theorem can also be stated as follows: an everywhere-defined self-adjoint operator is bounded. The theorem is named after Ernst David Hellinger and Otto Toeplitz. This theorem can be viewed as an immediate corollary of the closed graph theorem, as self-adjoint operators are closed. Alternatively, it can be argued using the uniform boundedness principle. The proof relies on the symmetry assumption, and therefore on the inner product structure. Also crucial is the fact that the given operator A is defined everywhere (and, in turn, the completeness of Hilbert spaces). The Hellinger–Toeplitz theorem reveals certain technical difficulties in the mathematical formulation of quantum mechanics. Observables in quantum mechanics correspond to self-adjoint operators on some Hilbert space, but some observables (like energy) are unbounded. By Hellinger–Toeplitz, such operators cannot be everywhere defined (but they may be defined on a dense subset). Take for instance the quantum harmonic oscillator. Here the Hilbert space is L2(R), the space of square integrable functions on R, and the energy operator H is defined by (assuming the units are chosen such that ℏ = m = ω = 1) [Hf](x) = -\frac{1}{2}\frac{d^2 f}{dx^2}(x) + \frac{1}{2} x^2 f(x). This operator is self-adjoint and unbounded (its eigenvalues are 1/2, 3/2, 5/2, ...), so it cannot be defined on the whole of L2(R). References Reed, Michael and Simon, Barry: Methods of Mathematical Physics, Volume 1: Functional Analysis. Academic Press, 1980. See Section III.5. Theorems in functional analysis Hilbert spaces
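To illustrate the unboundedness invoked in the quantum-mechanics example above: in the Hermite-function (number) basis the harmonic-oscillator Hamiltonian acts diagonally with eigenvalues n + 1/2, so its norm on any finite truncation grows without bound. The sketch below is only a numerical illustration of that statement, with arbitrary truncation sizes.

```python
import numpy as np

# In the Hermite-function basis, H is diagonal with entries n + 1/2
# (units with hbar = m = omega = 1, as in the text).
for dim in (5, 50, 500):                      # arbitrary truncation sizes
    H = np.diag(np.arange(dim) + 0.5)
    operator_norm = np.linalg.norm(H, 2)      # largest singular value of the truncation
    print(dim, operator_norm)                 # grows like dim - 0.5: no uniform bound
```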
Hellinger–Toeplitz theorem
Physics,Mathematics
422
229,528
https://en.wikipedia.org/wiki/Delta%20operator
In mathematics, a delta operator is a shift-equivariant linear operator Q : K[x] → K[x] on the vector space of polynomials in a variable x over a field K that reduces degrees by one. To say that Q is shift-equivariant means that if g(x) = f(x + a), then (Qg)(x) = (Qf)(x + a). In other words, if f is a "shift" of g, then Qf is also a shift of Qg, and has the same "shifting vector" a. To say that an operator reduces degree by one means that if f is a polynomial of degree n, then Qf is either a polynomial of degree n − 1, or, in case n = 0, Qf is 0. Sometimes a delta operator is defined to be a shift-equivariant linear transformation on polynomials in x that maps x to a nonzero constant. Seemingly weaker than the definition given above, this latter characterization can be shown to be equivalent to the stated definition when K has characteristic zero, since shift-equivariance is a fairly strong condition. Examples The forward difference operator (Δf)(x) = f(x + 1) − f(x) is a delta operator. Differentiation with respect to x, written as D, is also a delta operator. Any operator of the form \sum_{k \ge 1} c_k D^k (where D^n(ƒ) = ƒ^{(n)} is the nth derivative) with c_1 \ne 0 is a delta operator. It can be shown that all delta operators can be written in this form. For example, the difference operator given above can be expanded as Δ = e^D − 1 = \sum_{k \ge 1} \frac{D^k}{k!}. The generalized derivative of time scale calculus, which unifies the forward difference operator with the derivative of standard calculus, is a delta operator. In computer science and cybernetics, the term "discrete-time delta operator" (δ) is generally taken to mean a difference operator (δf)(t) = \frac{f(t + \Delta t) - f(t)}{\Delta t}, the Euler approximation of the usual derivative with a discrete sample time \Delta t. The delta-formulation obtains a significant number of numerical advantages compared to the shift-operator at fast sampling. Basic polynomials Every delta operator Q has a unique sequence of "basic polynomials", a polynomial sequence p_0(x), p_1(x), p_2(x), \ldots defined by three conditions: p_0(x) = 1; p_n(0) = 0 for all n ≥ 1; and (Q p_n)(x) = n p_{n-1}(x). Such a sequence of basic polynomials is always of binomial type, and it can be shown that no other sequences of binomial type exist. If the first two conditions above are dropped, then the third condition says this polynomial sequence is a Sheffer sequence—a more general concept. See also Pincherle derivative Shift operator Umbral calculus References External links Linear algebra Polynomials Finite differences
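As a concrete check of the definitions above, the following SymPy sketch (an illustration, not part of any standard library API beyond SymPy itself) applies the forward difference operator to polynomials: it reduces degree by one, commutes with shifts, and the falling factorials x(x-1)...(x-n+1) satisfy Δp_n = n·p_(n-1), the defining relation of the basic polynomials of Δ.

```python
from sympy import symbols, expand, degree

x, a = symbols('x a')

def forward_difference(p):
    """Forward difference operator: (Delta p)(x) = p(x + 1) - p(x)."""
    return expand(p.subs(x, x + 1) - p)

def falling_factorial(var, n):
    """x(x-1)...(x-n+1), the basic polynomials of the forward difference operator."""
    result = 1
    for k in range(n):
        result *= (var - k)
    return expand(result)

p = 3*x**4 + x + 7
print(degree(p, x), degree(forward_difference(p), x))        # 4 -> 3: degree drops by one

# Shift-equivariance: applying Delta and then shifting by a equals shifting, then Delta.
lhs = forward_difference(p.subs(x, x + a))
rhs = forward_difference(p).subs(x, x + a)
print(expand(lhs - rhs) == 0)                                # True

# Basic-polynomial relation: Delta p_n = n * p_(n-1) for falling factorials.
n = 4
print(expand(forward_difference(falling_factorial(x, n))
             - n*falling_factorial(x, n - 1)) == 0)          # True
```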
Delta operator
Mathematics
456
71,289
https://en.wikipedia.org/wiki/Extended%20Backus%E2%80%93Naur%20form
In computer science, extended Backus–Naur form (EBNF) is a family of metasyntax notations, any of which can be used to express a context-free grammar. EBNF is used to make a formal description of a formal language such as a computer programming language. They are extensions of the basic Backus–Naur form (BNF) metasyntax notation. The earliest EBNF was developed by Niklaus Wirth, incorporating some of the concepts (with a different syntax and notation) from Wirth syntax notation. Today, many variants of EBNF are in use. The International Organization for Standardization adopted an EBNF Standard, ISO/IEC 14977, in 1996. According to Zaytsev, however, this standard "only ended up adding yet another three dialects to the chaos" and, after noting its lack of success, also notes that the ISO EBNF is not even used in all ISO standards. Wheeler argues against using the ISO standard when using an EBNF and recommends considering alternative EBNF notations such as the one from the W3C Extensible Markup Language (XML) 1.0 (Fifth Edition). This article uses EBNF as specified by the ISO for examples applying to all EBNFs. Other EBNF variants use somewhat different syntactic conventions. Basics EBNF is a code that expresses the syntax of a formal language. An EBNF consists of terminal symbols and non-terminal production rules which are the restrictions governing how terminal symbols can be combined into a valid sequence. Examples of terminal symbols include alphanumeric characters, punctuation marks, and whitespace characters. The EBNF defines production rules where sequences of symbols are respectively assigned to a nonterminal: digit excluding zero = "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" ; digit = "0" | digit excluding zero ; This production rule defines the nonterminal digit which is on the left side of the assignment. The vertical bar represents an alternative and the terminal symbols are enclosed with quotation marks followed by a semicolon as terminating character. Hence a digit is a 0 or a digit excluding zero that can be 1 or 2 or 3 and so forth until 9. A production rule can also include a sequence of terminals or nonterminals, each separated by a comma: twelve = "1", "2" ; two hundred one = "2", "0", "1" ; three hundred twelve = "3", twelve ; twelve thousand two hundred one = twelve, two hundred one ; Expressions that may be omitted or repeated can be represented through curly braces { ... }: positive integer = digit excluding zero, { digit } ; In this case, the strings 1, 2, ..., 10, ..., 10000, ... are correct expressions. To represent this, everything that is set within the curly braces may be repeated arbitrarily often, including not at all. An option can be represented through squared brackets [ ... ]. That is, everything that is set within the square brackets may be present just once, or not at all: integer = "0" | [ "-" ], positive integer ; Therefore, an integer is a zero (0) or a positive integer that may be preceded by an optional minus sign. EBNF also provides, among other things, the syntax to describe repetitions (of a specified number of times), to exclude some part of a production, and to insert comments in an EBNF grammar. Table of symbols The following represents a proposed ISO/IEC 14977 standard, by R. S. Scowen, page 7, tables 1 and 2. Examples Syntax diagram EBNF Even EBNF can be described using EBNF. Consider below grammar (using conventions such as "-" to indicate set disjunction, "+" to indicate one or more matches, and "?" 
for optionality): letter = "A" | "B" | "C" | "D" | "E" | "F" | "G" | "H" | "I" | "J" | "K" | "L" | "M" | "N" | "O" | "P" | "Q" | "R" | "S" | "T" | "U" | "V" | "W" | "X" | "Y" | "Z" | "a" | "b" | "c" | "d" | "e" | "f" | "g" | "h" | "i" | "j" | "k" | "l" | "m" | "n" | "o" | "p" | "q" | "r" | "s" | "t" | "u" | "v" | "w" | "x" | "y" | "z" ; digit = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" ; symbol = "[" | "]" | "{" | "}" | "(" | ")" | "<" | ">" | "'" | '"' | "=" | "|" | "." | "," | ";" | "-" | "+" | "*" | "?" | "\n" | "\t" | "\r" | "\f" | "\b" ; character = letter | digit | symbol | "_" | " " ; identifier = letter , { letter | digit | "_" } ; S = { " " | "\n" | "\t" | "\r" | "\f" | "\b" } ; terminal = "'" , character - "'" , { character - "'" } , "'" | '"' , character - '"' , { character - '"' } , '"' ; terminator = ";" | "." ; term = "(" , S , rhs , S , ")" | "[" , S , rhs , S , "]" | "{" , S , rhs , S , "}" | terminal | identifier ; factor = term , S , "?" | term , S , "*" | term , S , "+" | term , S , "-" , S , term | term , S ; concatenation = ( S , factor , S , "," ? ) + ; alternation = ( S , concatenation , S , "|" ? ) + ; rhs = alternation ; lhs = identifier ; rule = lhs , S , "=" , S , rhs , S , terminator ; grammar = ( S , rule , S ) * ; Pascal A Pascal-like programming language that allows only assignments can be defined in EBNF as follows: (* a simple program syntax in EBNF - Wikipedia *) program = 'PROGRAM', white space, identifier, white space, 'BEGIN', white space, { assignment, ";", white space }, 'END.' ; identifier = alphabetic character, { alphabetic character | digit } ; number = [ "-" ], digit, { digit } ; string = '"' , { all characters - '"' }, '"' ; assignment = identifier , ":=" , ( number | identifier | string ) ; alphabetic character = "A" | "B" | "C" | "D" | "E" | "F" | "G" | "H" | "I" | "J" | "K" | "L" | "M" | "N" | "O" | "P" | "Q" | "R" | "S" | "T" | "U" | "V" | "W" | "X" | "Y" | "Z" ; digit = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" ; white space = ? white space characters ? ; all characters = ? all visible characters ? ; For example, a syntactically correct program then could be: PROGRAM DEMO1 BEGIN A:=3; B:=45; H:=-100023; C:=A; D123:=B34A; BABOON:=GIRAFFE; TEXT:="Hello world!"; END. The language can easily be extended with control flows, arithmetical expressions, and Input/Output instructions. Then a small, usable programming language would be developed. Advantages over BNF Any grammar defined in EBNF can also be represented in BNF, though representations in the latter are generally lengthier. E.g., options and repetitions cannot be directly expressed in BNF and require the use of an intermediate rule or alternative production defined to be either nothing or the optional production for option, or either the repeated production of itself, recursively, for repetition. The same constructs can still be used in EBNF. The BNF uses the symbols (<, >, |, ::=) for itself, but does not include quotes around terminal strings. This prevents these characters from being used in the languages, and requires a special symbol for the empty string. In EBNF, terminals are strictly enclosed within quotation marks ("..." or '...'). The angle brackets ("<...>") for nonterminals can be omitted. BNF syntax can only represent a rule in one line, whereas in EBNF a terminating character, the semicolon character “;” marks the end of a rule. 
Furthermore, EBNF includes mechanisms for enhancements, defining the number of repetitions, excluding alternatives, comments, etc. Conventions As examples, the following syntax rules illustrate the facilities for expressing repetition: aa = "A"; bb = 3 * aa, "B"; cc = 3 * [aa], "C"; dd = {aa}, "D"; ee = aa, {aa}, "E"; ff = 3 * aa, 3 * [aa], "F"; gg = {3 * aa}, "G"; hh = (aa | bb | cc), "H"; Terminal strings defined by these rules are as follows: aa: A bb: AAAB cc: C AC AAC AAAC dd: D AD AAD AAAD AAAAD etc. ee: AE AAE AAAE AAAAE AAAAAE etc. ff: AAAF AAAAF AAAAAF AAAAAAF gg: G AAAG AAAAAAG etc. hh: AH AAABH CH ACH AACH AAACH Extensibility According to the ISO 14977 standard EBNF is meant to be extensible, and two facilities are mentioned. The first is part of EBNF grammar, the special sequence, which is arbitrary text enclosed with question marks. The interpretation of the text inside a special sequence is beyond the scope of the EBNF standard. For example, the space character could be defined by the following rule: space = ? ASCII character 32 ?; The second facility for extension is using the fact that parentheses in EBNF cannot be placed next to identifiers (they must be concatenated with them). The following is valid EBNF: something = foo, ( bar ); The following is not valid EBNF: something = foo ( bar ); Therefore, an extension of EBNF could use that notation. For example, in a Lisp grammar, function application could be defined by the following rule: function application = list( symbol, { expression } ); Related work The W3C publishes an EBNF notation. The W3C used a different EBNF to specify the XML syntax. The British Standards Institution published a standard for an EBNF: BS 6154 in 1981. The IETF uses augmented BNF (ABNF), specified in . See also Meta-II – An early compiler writing tool and notation Phrase structure rules – The direct equivalent of EBNF in natural languages Regular expression Spirit Parser Framework References External links ISO/IEC 14977 : 1996(E) BNF/EBNF variants – A table by Pete Jinks comparing several syntaxes Compiler construction Formal languages Metalanguages
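To make the digit / positive integer / integer rules shown earlier in this article concrete, here is a minimal hand-written recognizer in Python. It is an illustrative sketch added here; it mirrors those rules directly and is not part of the ISO standard or of any particular EBNF tooling.

# Recognizer for the EBNF rules given above:
#   digit excluding zero = "1" | ... | "9" ;
#   digit                = "0" | digit excluding zero ;
#   positive integer     = digit excluding zero, { digit } ;
#   integer              = "0" | [ "-" ], positive integer ;

def is_integer(s: str) -> bool:
    if s == "0":                              # integer = "0" | ...
        return True
    if s.startswith("-"):                     # optional minus: [ "-" ]
        s = s[1:]
    # positive integer = digit excluding zero, { digit }
    return len(s) > 0 and s[0] in "123456789" and all(c in "0123456789" for c in s[1:])

for text in ["0", "12", "-3047", "007", "-", "--5", ""]:
    print(f"{text!r:8} -> {is_integer(text)}")
# '0', '12' and '-3047' match the grammar; '007', '-', '--5' and '' do not.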
Extended Backus–Naur form
Mathematics
2,715
31,663,277
https://en.wikipedia.org/wiki/Magnetic%20chip%20detector
A magnetic chip detector is an electronic instrument that attracts ferromagnetic particles (mostly iron chips). It is mainly used in aircraft engine oil and helicopter gearbox chip detection systems. Chip detectors can provide an early warning of an impending engine failure and thus greatly reduce the cost of an engine overhaul. Operation Chip detectors consist of small plugs which can be installed in an engine oil filter, oil sump or aircraft drivetrain gearboxes. Over a period of time, engine wear and tear causes small metal chips to break loose from engine parts and circulate in the engine oil. The detector houses magnets incorporated into an electric circuit. Magnetic lines of force attract ferrous particles. Collection of these particles continues until the insulated air gap between the magnets (two-magnet configuration) or between the magnet and housing (one-magnet configuration) is bridged, effectively closing the circuit. The result is an electronic signal for remote indication. Thus, a warning light on the instrument panel illuminates, indicating the presence of metal chips. Chip detectors may be positioned in the application with a self-closing valve/adapter through either a bayonet or threaded interface. As the chip detector is disengaged from the valve, the valve closes, minimizing any fluid loss from the system. The chip detectors used on aircraft are inspected at every 'A check' and higher. They may also be inspected at specified intervals, such as every 30–40 hours for an engine unit and 100 hours for an APU unit. References Aircraft instruments
Magnetic chip detector
Technology,Engineering
310
56,438,714
https://en.wikipedia.org/wiki/Euphorbia%20obtusifolia
The scientific name Euphorbia obtusifolia has been used for at least three species of Euphorbia: Euphorbia obtusifolia is a synonym of Euphorbia terracina , native from Macaronesia through Hungary and the Mediterranean to the Arabian Peninsula Euphorbia obtusifolia is an illegitimate name that has been applied to: Euphorbia lamarckii – of which it is a synonym; native to the western Canary Islands (Tenerife, La Gomera, La Palma and El Hierro); also known by the synonym Euphorbia broussonetii Euphorbia regis-jubae – with which it has been confused; native to the eastern Canary Islands (Gran Canaria, Lanzarote and Fuerteventura), west Morocco and north-western Western Sahara References Set index articles on plants obtusifolia
Euphorbia obtusifolia
Biology
185
727,277
https://en.wikipedia.org/wiki/John%20Thompson%20Dorrance
John Thompson Dorrance (November 11, 1873 – September 21, 1930) was an American chemist who discovered a method to create condensed soup, and was president of the Campbell Soup Company from 1914 to 1930. Early life Born in Bristol, Pennsylvania, he earned a bachelor of science degree from the Massachusetts Institute of Technology, where he was a member of Sigma Alpha Epsilon fraternity, and a doctor of philosophy from the University of Göttingen in Germany. He turned down offers to join the faculty at Cornell University and Columbia University to pursue work with his uncle. Career A nephew of the general manager of the Joseph Campbell Preserve Company, he went to work there in 1897 and invented condensed soup. Dorrance went on to become the president of Campbell Soup Company from 1914 to 1930, eventually buying out the Campbell family. He turned the business into one of America's longest-lasting brands. He was succeeded by his brother, Arthur Dorrance. Personal life In 1906 he married Ethel Mallinckrodt, with whom he had five children. Death Dorrance died on September 21, 1930, of heart disease at his home in Cinnaminson Township, New Jersey. He was buried in West Laurel Hill Cemetery in Bala Cynwyd, Pennsylvania. His estate in Radnor Township, Pennsylvania is now the home of Cabrini University. Following Dorrance's death, there was significant litigation over his domicile for purposes of estate and inheritance tax. The Supreme Court of Pennsylvania held that he was domiciled in Pennsylvania, and the Supreme Court of New Jersey held that he was domiciled in New Jersey, and his estate was required to pay estate tax to both states. The estate sought relief in the United States Supreme Court, but the request for review was denied. Legacy In 2012, Dorrance was elected into the New Jersey Hall of Fame. See also Dorrance Mansion References External links West Laurel Hill Cemetery web site 1873 births 1930 deaths American food chemists American food industry business executives Campbell Soup Company people Massachusetts Institute of Technology alumni People from Bristol, Pennsylvania People from Cinnaminson Township, New Jersey University of Göttingen alumni Businesspeople from Pennsylvania Dorrance family Sigma Alpha Epsilon members
John Thompson Dorrance
Chemistry
447
16,907,616
https://en.wikipedia.org/wiki/Electromagnetically%20induced%20grating
Electromagnetically induced grating (EIG) is an optical interference phenomenon where an interference pattern is used to build a dynamic spatial diffraction grating in matter. EIGs are dynamically created by light interference on optically resonant materials and rely on population inversion and/or optical coherence properties of the material. They were first demonstrated with population gratings on atoms. EIGs can be used for purposes of atomic/molecular velocimetry, to probe material optical properties such as coherence and population lifetimes, and for switching and routing of light. Related but different effects are thermally induced gratings and photolithography gratings. Writing, reading and phase-matching conditions for EIG diffraction Figure 1 shows a possible beam configuration to write and read an EIG. The period of the grating is controlled by the angle between the writing beams. The writing and reading frequencies are not necessarily the same. EB is referred to as the "backward" reading beam and ER is the signal obtained by diffraction on the grating. The phase-matching condition for the EIG in the plane-wave approximation is a simple geometric relation between the beam angles (defined as in Fig. 2), the frequencies of the writing (W, W') and reading (R) beams, and n, the effective index of refraction of the medium. Types of EIG Matter gratings The writing lasers form a grating by modulating the density of matter or by localizing matter (trapping) in the regions of maxima (or minima) of the writing interference fields. A thermal grating is an example. Matter gratings have slow dynamics (milliseconds) compared to population and phase gratings (potentially nanoseconds and faster). Population gratings The writing lasers are resonant with optical transitions in the matter and the grating is formed by optical pumping (see Fig. 3). This type of grating can be easily tuned to produce multiple orders of diffraction. Coherence gratings A grating where the writing lasers form a coherent matter pattern. An example is a pattern of electromagnetically induced transparency. Applications Usually two lasers at an angle are used to build an EIG. The EIG is used to diffract a third laser, to monitor the behavior of the underlying substrate where the EIG was written, or to serve as a switch for one of the lasers involved in the process. See also Atomic coherence Electromagnetically induced transparency Bragg's law Optical lattice Spectral hole burning Kerr effect Stimulated Raman spectroscopy References Wave mechanics Quantum mechanics Nonlinear optics
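The dependence of the grating period on the writing geometry can be illustrated with the generic two-beam interference relation Λ = λ / (2 sin(θ/2)), where θ is the full angle between the two writing beams. This formula and the 780 nm example wavelength are standard interference results used here purely for illustration; they are not quoted from the article above.

import math

def grating_period(wavelength_nm: float, crossing_angle_deg: float) -> float:
    # Fringe spacing of two plane waves of equal wavelength crossing at the
    # given full angle (standard two-beam interference; illustrative only).
    theta = math.radians(crossing_angle_deg)
    return wavelength_nm / (2 * math.sin(theta / 2))

for angle in (1, 5, 20, 90):
    print(f"{angle:3d} deg -> period = {grating_period(780, angle):8.1f} nm")
# Smaller crossing angles give coarser gratings; larger angles give finer ones.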
Electromagnetically induced grating
Physics
547
53,989,897
https://en.wikipedia.org/wiki/List%20of%20investigational%20sex-hormonal%20agents
This is a list of investigational sex-hormonal agents, or sex-hormonal agents that are currently under development for clinical use but are not yet approved. Chemical/generic names are listed first, with developmental code names, synonyms, and brand names in parentheses. This list was last comprehensively updated sometime between May 2017 and September 2021. It is likely to become outdated with time. Androgenics Androgen receptor agonists EC586 – oral prodrug of testosterone (androgen/anabolic steroid) with improved pharmacokinetics Selective androgen receptor modulators DT-200 (G-100192, GLPG-0492) – selective androgen receptor modulator for Duchenne muscular dystrophy Enobosarm (ostarine; GTx-024, MK-2866; S-22; VERU-024) – selective androgen receptor modulator for breast cancer GSK-2881078 – selective androgen receptor modulator for cachexia OPK-88004 (LY-2452473; OPK-88004; TT-701) – selective androgen receptor modulator for benign prostatic hyperplasia, erectile dysfunction, and prostate cancer VK-5211 (LGD-4033; ligandrol) – selective androgen receptor modulator for hip fracture and muscle atrophy Vosilasarm (EP-0062, RAD-140; testolone) – selective androgen receptor modulator for breast cancer Androgen receptor antagonists Bavdegalutamide (AVR-110) – androgen receptor antagonist for prostate cancer Clascoterone (CB-03-01, Breezula, Winlevi) – androgen receptor antagonist for topical treatment of scalp hair loss Deutenzalutamide (deuterated enzalutamide; HC-1119) – androgen receptor antagonist for prostate cancer Pruxelutamide (GT-0918; proxalutamide) – androgen receptor antagonist for prostate cancer Pyrilutamide (KX-826) – androgen receptor antagonist for topical treatment of androgen-dependent scalp hair loss and acne Spironolactone (Aldactone) – androgen receptor antagonist for systemic treatment of acne Atypical androgen receptor antagonists Dimethylcurcumin (ASC-J9) – androgen receptor degradation enhancer for topical acne treatment EPI-7386 – N-terminal domain androgen receptor antagonist for prostate cancer [https://www.prnewswire.com/news-releases/essa-pharma-announces-nomination-of-epi-7386-as-lead-clinical-candidate-in-metastatic-castration-resistant-prostate-cancer-300820449.html Androgen synthesis inhibitors Seviteronel (VT-464) – CYP17A1 inhibitor (androgen synthesis inhibitor) for prostate cancer and breast cancer Estrogenics Estrogen receptor agonists EC508 – oral prodrug of estradiol (estrogen) with improved pharmacokinetics Erteberel (LY-500307, SERBA-1) – selective ERβ agonist for schizophrenia Estetrol (Donesta) – estrogen for menopausal symptoms and other indications Selective estrogen receptor modulators Acolbifene (EM-652, SCH-57068) – selective estrogen receptor modulator for breast cancer Afimoxifene (4-hydroxytamoxifen; 4-OHT; TamoGel) – selective estrogen receptor modulator for topical treatment of breast cancer and hyperplasia Amcenestrant (SAR-439859; SERD '859) – selective estrogen receptor modulator and selective estrogen receptor degrader for breast cancer Camizestrant (AZ14066724, AZD-9833) – selective estrogen receptor modulator and selective estrogen receptor degrader for breast cancer Endoxifen (4-hydroxy-N-desmethyltamoxifen) – selective estrogen receptor modulator for breast cancer Giredestrant (GDC-9545; RG-6171) – selective estrogen receptor modulator and selective estrogen receptor degrader for breast cancer Imlunestrant (LY-3484356) – selective estrogen receptor modulator and selective estrogen receptor degrader for breast cancer and 
endometrial cancer Rintodestrant (G1T-48) – selective estrogen receptor modulator and selective estrogen receptor degrader for breast cancer Estrogen receptor antagonists Fulvestrant-3 boronic acid (ZB716) – estrogen receptor antagonist (antiestrogen) for breast cancer Estrogen synthesis inhibitors Estradiol sulfamate (E2MATE, J995, PGL-2, PGL-2001, ZK-190628) – steroid sulfatase inhibitor (estrogen "activation" inhibitor) for endometriosis Leflutrozole (BGS-649) – aromatase inhibitor (estrogen synthesis inhibitor) for male hypogonadism Progestogenics Progesterone receptor agonists Hydroxyprogesterone caproate (LPCN-1107) – oral progesterone receptor agonist (progestogen/progestin) for prevention of preterm labor VOLT-02 – water-soluble conjugate of progesterone and neurosteroid for traumatic brain injury and gynecological disorders Selective progesterone receptor modulators Telapristone (CDB-4124, Proellex, Proellex-V, Progenta) – selective progesterone receptor modulator for breast cancer, endometriosis, and uterine fibroids Vilaprisan (BAY-1002670) – selective progesterone receptor modulator for endometriosis and uterine fibroids Progesterone receptor antagonists Onapristone (AR-18, IVV-1001, ZK-299, ZK-98299) – progesterone receptor antagonist (antiprogestogen) for prostate cancer GnRH/gonadotropins Kisspeptin receptor agonists MVT-602 (RVT-602, TAK-448) – small-molecule kisspeptin receptor agonist for female infertility and hypogonadism Neurokinin/tachykinin receptor antagonists Elinzanetant (BAY-3427080; GSK-1144814; NT-814) – small-molecule NK1 receptor and NK3 receptor antagonist for hot flashes and "sex hormone disorders" Mixed/combinations Androgen and progesterone receptor modulators 11β-Methyl-19-nortestosterone dodecylcarbonate (CDB-4754) – dual androgen/anabolic steroid and progestin for use as a male birth control pill Dimethandrolone undecanoate (CDB-4521) – dual androgen/anabolic steroid and progestin for use as a male birth control pill Androgen and estrogen receptor modulators Acolbifene/prasterone (Femivia) – selective estrogen receptor modulator and dehydroepiandrosterone supplement for hot flashes Androgen, estrogen, and progesterone receptor modulators Ethinylestradiol/drospirenone/prasterone – estrogen, progestogen, and dehydroepiandrosterone combination for female birth control See also List of investigational drugs References External links AdisInsight - Springer 2011 Medicines in Development for Women - PhRMA Sex-hormonal agents, investigational Dynamic lists Experimental sex-hormone agents Hormonal agents
List of investigational sex-hormonal agents
Chemistry
1,690
61,522,806
https://en.wikipedia.org/wiki/Silvana%20Cardoso
Silvana Cardoso is a Portuguese fluid dynamicist working in Britain. She is professor of Fluid Mechanics and the Environment at the University of Cambridge and a fellow of Pembroke College, Cambridge. She leads the Fluids and the Environment research group at the Department of Chemical Engineering and Biotechnology. Her research focuses on fluid mechanics and environmental science, in particular the interaction of natural convection and chemical kinetics, including: turbulent plumes and thermals in the environment, such as the BP oil disaster in the Gulf of Mexico, the 2010 eruptions of Eyjafjallajökull in Iceland, the Fukushima Daiichi nuclear disaster in Japan and oceanic methane releases; flow and reaction in porous media, e.g. the spreading of carbon dioxide in geological storage at the Sleipner gas field in the North Sea; cool flames and thermo-kinetic explosions, as occurred in the crash of TWA flight 800; and self-assembling porous precipitate structures, such as chemical gardens and submarine hydrothermal vents. She is on the International Advisory Panel of the journal Chemical Engineering Science and the Editorial Board of Chemical Engineering Journal. In 2016 she was awarded the Davidson medal of the Institution of Chemical Engineers (IChemE). Recent press interest in her work has included pieces on whether natural geochemical reactions can delay or prevent the spreading of carbon dioxide in subsurface aquifers used for carbon capture and storage, the possible melting of oceanic methane hydrate deposits owing to climate change, and the importance to astrobiology of brinicles on Jupiter's moon, Europa. References External links Page at Cambridge Fluids Network regarding fluid mechanics research at Cambridge Page at the Department of Chemical Engineering and Biotechnology Page at Pembroke College Page at University of Cambridge Living people British women academics Academics of the University of Cambridge British chemical engineers Fellows of Pembroke College, Cambridge People from Porto 21st-century Portuguese scientists Portuguese engineers Fluid dynamicists University of Porto alumni Portuguese women scientists 21st-century British women scientists Portuguese chemical engineers Year of birth missing (living people)
Silvana Cardoso
Chemistry
408
38,380,056
https://en.wikipedia.org/wiki/OU%20Puppis
OU Puppis (OU Pup) is a chemically peculiar class A0 (white main-sequence) star in the constellation Puppis. Its apparent magnitude is about 4.9 and it is approximately 188 light-years away based on parallax. It is an α2 CVn variable, ranging from 4.93 to 4.86 magnitudes with a period of 0.92 of a day. Its spectrum has unusually strong lines of silicon, chromium, and strontium, making it an Ap star. Unlike the majority of star pairs, the number attached to the Bayer designation 'L' is generally a subscript: L1. Its better-known companion L2 Puppis is similarly represented. References Puppis A-type main-sequence stars Ap stars Alpha2 Canum Venaticorum variables Puppis, L1 Puppis, OU CD-44 3223 034899 2746 056022
OU Puppis
Astronomy
191
56,430,061
https://en.wikipedia.org/wiki/Market-Adjusted%20Performance%20Indicator
The Market-Adjusted Performance Indicator (MAPI) measures the performance of a company’s management using a relative performance indicator designed to capture management performance as holistically as possible by covering both short-term success and long-term impact. The MAPI is an important element for targeted corporate governance. Bengt Holmström, with his economic research and his findings, for which he was awarded the Nobel Prize for Economics in 2016, laid the theoretical foundation for the application of a relative performance indicator. It states that top management should be incentivised with a long-term relative performance indicator for its variable compensation. In the context of a research project of the University of Zurich under the direction of Ernst Fehr, the MAPI was developed and implemented with the consultancy firm Fehr Advice & Partners. To do this, a listed company’s total shareholder return (TSR) is compared with the TSR of a customised, relevant peer group. This way external market shocks, for which the management should be neither rewarded nor penalised, can be excluded. The difference between the TSR of the company and that of its peer group provides insights into the actual performance of the CEO and top management. This makes management performance transparent. Ernst Fehr and Adriano B. Lucatelli calculated the MAPI for all the firms in the Swiss Performance Index. The compensation model of the Liechtensteinische Landesbank is mainly based on the concept of the MAPI. References Business intelligence Business terms Corporate governance Financial ratios Metrics Organizational performance management
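The article gives no explicit formula, but the comparison it describes can be sketched in a few lines. In the Python illustration below, the way TSR is computed and the use of a simple average over the peer group are assumptions made only for this example; they are not taken from the published methodology.

# Illustrative sketch only: the article describes MAPI as the difference between a
# company's total shareholder return (TSR) and the TSR of a customised peer group.
# The TSR formula and the simple average over peers used here are assumptions,
# not details of the actual MAPI methodology.

def tsr(price_start: float, price_end: float, dividends: float) -> float:
    # Total shareholder return over the measurement period.
    return (price_end + dividends - price_start) / price_start

def mapi(company: tuple, peers: list) -> float:
    # Company TSR minus the (here: average) peer-group TSR.
    peer_avg = sum(tsr(*p) for p in peers) / len(peers)
    return tsr(*company) - peer_avg

company = (100.0, 112.0, 3.0)                 # start price, end price, dividends
peers = [(50.0, 53.0, 1.0), (200.0, 210.0, 4.0), (80.0, 82.0, 2.0)]
print(f"MAPI = {mapi(company, peers):+.2%}")  # positive: outperformed the peer group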
Market-Adjusted Performance Indicator
Mathematics
309
26,592,185
https://en.wikipedia.org/wiki/Actino-pnp%20RNA%20motif
The Actino-pnp RNA motif is a conserved structure found in Actinomycetota that apparently lies in the 5' untranslated regions of genes predicted to encode exoribonucleases. The RNA element's function is likely analogous to that of an RNA structure found upstream of polynucleotide phosphorylase genes in E. coli and related enterobacteria. In this latter system, the polynucleotide phosphorylase gene regulates its own expression levels by a feedback mechanism that involves its activity upon the RNA structure. However, the E. coli RNA appears to be structurally unrelated to the Actino-pnp motif. References External links Cis-regulatory RNA elements
Actino-pnp RNA motif
Chemistry
148
28,148,660
https://en.wikipedia.org/wiki/Circular%20prime
A circular prime is a prime number with the property that the number generated at each intermediate step when cyclically permuting its (base 10) digits will be prime. For example, 1193 is a circular prime, since 1931, 9311 and 3119 are all also prime. A circular prime with at least two digits can only consist of combinations of the digits 1, 3, 7 or 9, because having 0, 2, 4, 6 or 8 as the last digit makes the number divisible by 2, and having 0 or 5 as the last digit makes it divisible by 5. The complete listing of the smallest representative prime from all known cycles of circular primes (the single-digit primes and repunits are the only members of their respective cycles) is 2, 3, 5, 7, R2, 13, 17, 37, 79, 113, 197, 199, 337, 1193, 3779, 11939, 19937, 193939, 199933, R19, R23, R317, R1031, R49081, R86453, R109297, R270343, R5794777 and R8177207, where Rn is a repunit prime with n digits. There are no other circular primes up to 10^23. A type of prime related to the circular primes are the permutable primes, which are a subset of the circular primes (every permutable prime is also a circular prime, but not necessarily vice versa). Other bases The complete listing of the smallest representative prime from all known cycles of circular primes in base 12 is (using inverted two and three for ten and eleven, respectively) 2, 3, 5, 7, Ɛ, R2, 15, 57, 5Ɛ, R3, 117, 11Ɛ, 175, 1Ɛ7, 157Ɛ, 555Ɛ, R5, 115Ɛ77, R17, R81, R91, R225, R255, R4ᘔ5, R5777, R879Ɛ, R198Ɛ1, R23175, and R311407, where Rn is a repunit prime in base 12 with n digits. There are no other circular primes in base 12 up to 12^12. In base 2, only Mersenne primes can be circular primes, since any 0 permuted to the one's place results in an even number. References External links Circular prime at The Prime Glossary Circular prime at World of Numbers a related sequence (the circular primes are a subsequence of this one) Circular, Permutable, Truncatable and Deletable Primes Absolute Primes (including circular primes), Numberphile video Base-dependent integer sequences Classes of prime numbers
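The definition lends itself to a direct computational check. The following Python sketch (added here for illustration; it uses a simple trial-division primality test rather than anything optimized) tests every cyclic rotation of a number's decimal digits:

def is_prime(n: int) -> bool:
    # Simple trial-division primality test (adequate for small illustrative inputs).
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def is_circular_prime(n: int) -> bool:
    # A number is a circular prime if every cyclic rotation of its digits is prime.
    s = str(n)
    return all(is_prime(int(s[i:] + s[:i])) for i in range(len(s)))

print([n for n in range(2, 400) if is_circular_prime(n)])
# [2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, 97, 113, 131, 197, 199, 311, 337, 373]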
Circular prime
Mathematics
597
22,505,558
https://en.wikipedia.org/wiki/History%20of%20email%20spam
The history of email spam reaches back to the mid-1990s when commercial use of the internet first became possible - and marketers and publicists began to test what was possible. Very soon email spam was ubiquitous, unavoidable and repetitive. This article details significant events in the history of spam, and the efforts made to limit it. Background Commercialization of the internet and integration of electronic mail as an accessible means of communication has another face - the influx of unwanted information and mails. As the internet started to gain popularity in the early 1990s, it was quickly recognized as an excellent advertising tool. At practically no cost, a person can use the internet to send an email message to thousands of people. These unsolicited junk electronic mails came to be called 'Spam'. The history of spam is intertwined with the history of electronic mail. While the linguistic significance of the usage of the word 'spam' is attributed to the British comedy troupe Monty Python in a now legendary sketch from their Flying Circus TV series, in which a group of Vikings sing a chorus of "SPAM, SPAM, SPAM..." at increasing volumes, the historic significance lies in it being adopted to refer to unsolicited commercial electronic mail sent to a large number of addresses, in what was seen as drowning out normal communication on the internet. The "first spam email" in 1978 The first known spam electronic mail (although not yet called email), was sent on May 3, 1978 to several hundred users on ARPANET. It was an advertisement for a presentation by Digital Equipment Corporation for their DECSYSTEM-20 products sent by Gary Thuerk, a marketer of theirs. The reaction to it was almost universally negative, and for a long time there were no further instances. USENET The name "spam" was actually first applied, in April 1993, not to an email, but to unwanted postings on Usenet newsgroup network. Richard Depew accidentally posted 200 messages to news.admin.policy and in the aftermath readers of this group were making jokes about the accident, when one person referred to the messages as “spam”, coining the term that would later be applied to similar incidents over email. On January 18, 1994, the first large-scale deliberate USENET spam occurred. A message with the subject “Global Alert for All: Jesus is Coming Soon” was cross-posted to every available newsgroup. Its controversial message sparked many debates all across USENET. In April 1994 the first commercial USENET spam arrived. Two lawyers from Phoenix, Canter and Siegel, hired a programmer to post their "Green Card Lottery- Final One?" message to as many newsgroups as possible. What made them different was that they did not hide the fact that they were spammers. They were proud of it, and thought it was great advertising. They even went on to write the book "How to Make a Fortune on the Information Superhighway : Everyone’s Guerrilla Guide to Marketing on the internet and Other On-Line Services". They planned on opening a consulting company to help other people post similar advertisements, but it never took off. The 1990s MAPS ("Mail Abuse Prevention System") was founded in 1996. Dave Rand and Paul Vixie, well known internet software engineers, had started keeping a list of IP addresses which had sent out spam or engaged in other behavior they found objectionable. The list became known as the Real-time Blackhole List (RBL). Many network managers wanted to use the RBL to block unwanted email. 
Thus, Rand and Vixie created a DNS-based distribution scheme which quickly became popular. Spam was already becoming a serious concern, leading in late 1997 to the MAPS "blackhole list", which allowed mail servers to block mail coming from spam sources. Others started DNS-based blacklists of open relays. Alan Hodgson started Dorkslayers in September 1998. By November 1998 he was forced to close, since his upstream BCTel considered the open relay scanning to be abusive. The successor ORBS project was then moved to Alan Brown in New Zealand. Al Iverson of Radparker started the RRSS around May 1999. By September 1999 that project was folded into the MAPS group of DNS-based lists as the RSS. In August 1999, MAPS listed the ORBS mail servers, since the ORBS relay testing was thought to be abusive. 2000, spam becomes a serious problem The SpamAssassin spam-filtering system was first uploaded to SourceForge.net on April 20, 2001 by creator Justin Mason. In May 2000 the ILOVEYOU computer worm travelled by email to tens of millions of Windows personal computers. Although not spam, its impact highlighted how pervasive email had become. In June 2001, ORBS was sued in New Zealand, and shortly thereafter closed down. In August 2002, Paul Graham published an influential paper, "A Plan for Spam", describing a spam-filtering technique using improved Bayesian filtering; variants of this were soon implemented in a number of products, including server-side email filters such as DSPAM, SpamAssassin, and SpamBayes. 2003, the fight to control spam In June 2003 Meng Weng Wong started the SPF-discuss mailing list and posted the very first version of the "Sender Permitted From" proposal, which would later become the Sender Policy Framework, a simple email-validation system designed to detect email spoofing as part of the solution to spam. The CAN-SPAM Act of 2003 was signed into law by President George W. Bush on December 16, 2003, establishing the United States' first national standards for the sending of commercial email and requiring the Federal Trade Commission (FTC) to enforce its provisions. The backronym CAN-SPAM derives from the bill's full name: "Controlling the Assault of Non-Solicited Pornography And Marketing Act of 2003". It plays on the word "canning" (putting an end to) spam, as in the usual term for unsolicited email of this type; as well as a pun in reference to the canned SPAM food product. The bill was sponsored in Congress by Senators Conrad Burns and Ron Wyden. In January 2004 Bill Gates of Microsoft announced that "spam will soon be a thing of the past." In May 2004, Howard Carmack of Buffalo, New York was sentenced to 3½ to 7 years for sending 800 million messages, using stolen identities. In May 2003 he also lost a $16 million civil lawsuit to EarthLink. On September 27, 2004, Nicholas Tombros pleaded guilty to charges and became the first spammer to be convicted under the CAN-SPAM Act of 2003. He was sentenced in July 2007 to three years' probation, six months' house arrest, and fined $10,000. On November 4, 2004, Jeremy Jaynes, rated the 8th-most prolific spammer in the world, according to Spamhaus, was convicted of three felony charges of using servers in Virginia to send thousands of fraudulent emails. The court recommended a sentence of nine years' imprisonment, which was imposed in April 2005 although the start of the sentence was deferred pending appeals. Jaynes claimed to have an income of $750,000 a month from his spamming activities.
On February 29, 2008 the Supreme Court of Virginia overturned his conviction. On November 8, 2004, Nick Marinellis of Sydney, Australia, was sentenced to 4⅓ to 5¼ years for sending Nigerian 419 emails. On December 31, 2004, British authorities arrested Christopher Pierson in Lincolnshire, UK and charged him with malicious communication and causing a public nuisance. On January 3, 2005, he pleaded guilty to sending hoax emails to relatives of people missing following the Asian tsunami disaster. 2005 On July 25, 2005, Russian spammer Vardan Kushnir, who is believed to have spammed every single Russian internet user, was found dead in his Moscow apartment, having suffered numerous blunt-force blows to the head. It is believed that Kushnir's murder was unrelated to his spamming activities. On November 1, 2005, David Levi, 29, of Lytham, England was sentenced to four years for conspiracy to defraud by sending emails pretending to be from eBay. His brother Guy Levi, 22, was sentenced to 21 months after pleading guilty to conspiracy to defraud, and four others were each sentenced to six months for money laundering. On November 16, 2005, Peter Francis-Macrae of Cambridgeshire, described as Britain's most prolific spammer, was sentenced to six years in prison. 2006 In January 2006, James McCalla was ordered to pay $11.2 billion to an ISP in Iowa, U.S. and barred from using the internet for 3 years for sending 280 million email messages. In court, he was not represented by an attorney. On June 28, 2006, IronPort released a study which found 80% of spam emails originating from zombie computers. The report also found 55 billion daily spam emails in June 2006, a large increase from 35 billion daily spam emails in June 2005. The study used SenderData which represents 25% of global email traffic and data from over 100,000 ISP's, universities, and corporations. On August 8, 2006, AOL announced the intention of digging up the garden of the parents of spammer Davis Wolfgang Hawke in search of buried gold and platinum. AOL had been awarded a US$12.8 million judgment in May 2005 against Hawke, who had gone into hiding. The permission for the search was granted by a judge after AOL proved that the spammer had bought large amounts of gold and platinum. In July, 2007, AOL decided not to proceed. On October 12, 2006, Brian Michael McMullen, 22, of East Pittsburgh, Pennsylvania, U.S., was sentenced to three years' supervised release, five months' home detention and ordered to pay restitution in the amount of $11,848.55 for violating the CAN-SPAM Act of 2003. On October 27, 2006, the Federal Court of Australia fined Clarity1 A$4.5 million (US$3.4 million; euro2.7 million) and its director Wayne Mansfield A$1 million (US$760,000; euro600,000) for sending unsolicited emails in the first conviction under Australia's Spam Act of 2003. In November 2006, Christopher William Smith (aka Chris "Rizler" Smith) was convicted on 9 counts for offenses related to Smith's spamming. 2007 On January 16, 2007, an Azusa, California man was convicted by a jury in United States District Court for the Central District of California in Los Angeles in United States v. Goodin, U.S. District Court, Central District of California, 06-110, under the CAN-SPAM Act of 2003 (the first conviction under that Act). He was sentenced to and began serving a 70-month sentence on June 11, 2007. 
On May 30, 2007, notorious spammer Robert Soloway was arrested after having been indicted by a federal grand jury on 35 charges including mail fraud, wire fraud, email fraud, identity theft, and money laundering. If convicted, he could face decades behind bars. Bail was initially denied although he was released to a half way house in September. On March 14, 2008, Robert Soloway reached an agreement with federal prosecutors, two weeks before his scheduled trial on 40 charges. Soloway pleaded guilty to three charges - felony mail fraud, fraud in connection with email, and failing to file a 2005 tax return. In exchange, federal prosecutors dropped all other charges. Soloway faced up to 26 years in prison on the most serious charge, and up to $625,000 total in fines. On 22 July 2008 Robert Soloway was sentenced four years in federal prison. On June 25, 2007, two men were each convicted on eight counts including conspiracy, fraud, money laundering, and transportation of obscene materials in U.S. District Court in Phoenix, Arizona. The prosecution is the first of its kind under the CAN-SPAM Act of 2003, according to a release from the Department of Justice. One count for each under the act was for falsifying headers, the other was for using domain names registered with false information. The two had been sending millions of hard-core pornography spam emails. The two men were sentenced to five years in prison and ordered to forfeit US$1.3 million. 2008 On July 20, 2008, Eddie Davidson "the Spam King" walked away from a federal prison camp in Florence, Colorado. He was subsequently found dead in Arapahoe County, Colorado, after reportedly killing his wife and three-year-old daughter, in an apparent murder-suicide. August 19: A survey on Marshal Limited's website (an email and internet content security company) showed that 29% of the 622 respondents had bought something from a spam email. Other studies, one by Forrester Research in 2004, which surveyed 6,000 active Web users, reported 20 percent had bought something from spam, while a 2005 study by Mirapoint and the Radicati Group showed 11%, and 57% indicated that clicking on a link in spam caused them to receive more spam than before. A 2007 study by Endai Worldwide (an email marketing company) showed 16% had bought something from spam. In response to the Marshal study, the Download Squad started their own study. With 289 respondents, only 2.1% indicated they had ever bought something from a spam email. November 11: McColo, a San Jose, California-based hosting provider identified as hosting spamming organizations, was cut off by its internet providers. It is estimated that McColo hosted the machines responsible for 75 percent of spam sent worldwide. McColo's upstream service was severed on Tuesday, November 11; that same afternoon, organizations tracking spam noted a sharp decrease in the volume being sent; some as much as a half. See also Laura Betterly, one of the first bulk commercial emailers History of email History of email marketing History of Gmail References Further reading Coverage of the history of spam. About the first mass-mailed email spam. spam Spamming
History of email spam
Technology
2,970
11,857,892
https://en.wikipedia.org/wiki/Potable%20water%20diving
Potable water diving is diving inside a tank that is used for potable water. This is usually done for inspection and cleaning tasks. A person who is trained to do this work may be described as a potable water diver. The risks to the diver associated with potable water diving are related to the access, confined spaces and outlets for the water. The risk of contamination of the water is managed by isolating the diver in a clean dry-suit and helmet or full-face mask which are decontaminated before the dive. Scope Divers can inspect water storage tanks, towers and clearwells without draining them or taking them out of service. The work is classified as commercial diving and diver qualifications, equipment and dive team composition will generally be regulated. Using a specially equipped pump or airlift system the diver can remove loose sediment without damaging painted surfaces. This allows the chlorine in the system to function more effectively. Divers are an effective means to clean and inspect potable water storage tanks because all of the maintenance can be done while the tank remains in-service and full of water, though it may be necessary to close all inlet and outlet valves during the operation as they may present an unacceptable pressure difference hazard, and most of the interior surfaces of the tank can be easily accessed. Hazards Diving in a confined space presents specific hazards related to the possibility of an unbreathable atmosphere above the water surface, and tight access openings. Large tanks and water towers present hazards of access by ladder and working at height. The recovery of an unconscious diver can be complicated by inaccessibility and special extrication equipment will be needed on site to deal with this possibility. Diving teams may require confined space training and working at heights certification and must follow the appropriate standards or code of practice for this work. The diver should wear a diving harness, connected to a safety rope, so that in case of an emergency the dive tender can pull the diver up. Diving contractors always need to check the safety legislation appropriate to their local jurisdiction, and perform a job safety analysis for the specific site. Differential pressure hazards are also usually present in operational storage tanks, and a lockout-tagout procedure for outlets is normally required to minimise the risk. Equipment Diving in potable water uses the same type of equipment that would be used for diving in contaminated water, and for a similar reason. It is necessary to prevent contamination, but in this case it is the diving medium which must not be contaminated, as decontamination takes place before the diver enters the water. The equipment used should be dedicated to this application to minimise the contamination risk. On the other hand, a leak into the suit is of little consequence. Wireless communications do not work well in metal and concrete structures, so hard-wired diver telephone systems are the standard. Umbilicals should have as little place to trap contaminants as reasonably practicable - umbilicals held together by twisting the components like laid rope are preferred to umbilicals held together by tape or a casing. Gas supply and the control point for communications and gas control may have to be at some distance from the access opening, so communications between team members is important. A hoist system is often necessary as a means for recovering an unconscious diver from the enclosed space of the tank. 
Simple tripod frames are commonly used to support the hoist system over the access opening. Other hoisting systems may be used, providing that they do not unduly risk contaminating the water. The diver's harness must be suitable for lifting the diver out of the water without further injury, in a posture that allows the diver to be hoisted out through the access opening. Regional requirements In the USA commercial diving operations require at least one trained tender, a diver, and a supervisor. In some other countries a standby diver is required at all professional diving operations. Surface-supplied air with two-way voice communications with the diver and a safety rope are preferred and in some jurisdictions may be obligatory. In the US the Occupational Safety & Health Administration regulation 29 CFR Part 1910, Subpart T allows scuba with a rope for basic communications. References Underwater diving procedures Commercial diving Water supply Underwater work
Potable water diving
Chemistry,Engineering,Environmental_science
853
64,197,495
https://en.wikipedia.org/wiki/Andrew%20Anagnost
Andrew Anagnost is the President and CEO of Autodesk, having been appointed to the positions in 2017. He took over the positions from Carl Bass, who resigned in February 2017. Before the promotion, he had served in various other roles for the company since joining in 1997. He holds degrees from California State University, Northridge and Stanford University. Early life and education Anagnost grew up in Van Nuys, California and initially dropped out of high school. After issues with legal and educational authorities, his family helped him enroll in a new high school and he went on to graduate. Afterwards, Anagnost went on to earn a bachelor's degree in mechanical engineering from California State University, Northridge (CSUN) in 1987. His mother, sister, and brother also graduated from CSUN. During his bachelor's degree, Anagnost completed an internship at Lockheed Martin. He later obtained an MS in engineering science and a PhD in aeronautical engineering with a minor in computer science from Stanford University. Career Following graduation from his bachelor's, Anagnost initially worked as a composites structure engineer and propulsion installation engineer at Lockheed Martin, where he had previously interned. He left the position to pursue his further education at Stanford, leading to a position at the NASA Ames Research Center as a National Research Council post-doctoral fellow. Finding the aeronautics business 'too slow', he joined the Exa Corporation in Boston in 1992, before joining Autodesk as a product manager in 1997. Early on in his career at Autodesk, he led development of the company’s manufacturing products and increased the revenue of Autodesk Inventor five-fold to more than $500 million. Working his way up through the company over the years, he achieved the position of Chief Marketing Officer and SVP of the Business Strategy & Marketing. In these roles, he was credited with Autodesk's transition to software as a service, as well as the adoption of cloud computing. Following the resignation of Carl Bass, Anagnost was appointed as interim-CEO together with Amar Hanspal, the Chief Product Officer. Following a four month search, Anagnost was permanently appointed as President and CEO of the company. In this role, Anagnost has pushed for a refocus of the company on software for construction, leading to the demise of some other company ventures and a workforce slash which saw 1,200 employees lose their job at the company. As part of the new focus, Autodesk acquired construction tech start-ups PlanGrid for $875 million, the company's biggest ever acquisition, and BuildingConnected for $275 million in 2018. Additionally, since becoming CEO, the company's share price has nearly tripled and Autodesk has reached a market value of $41.1B, entering the Forbes Global 2000 and Fortune 500. Personal life and philanthropy Growing up, Anagnost's dream job was to work on space ships for NASA. He enjoys reading science fiction novels, with The Fountains of Paradise being one of his favorite works, and is a fan of both the Star Wars and Star Trek franchises. In 2018, Anagnost was one of the judges in the Annual Engineering Showcase at his alma mater CSUN and hosted a talk at the university. The following year he was rewarded with the 2019 Distinguished Alumni Award from CSUN. 
That same year, Anagnost and his wife donated $300,000 to the university to establish the Teresa Sendra-Anagnost Memorial Scholarship Endowment in honor of his mother, who died in 2011 after suffering complications from cardiac surgery. The endowment supports outstanding students in the university's College of Engineering and Computer Science through funding of their education. Autodesk also donated $1 million to CSUN in 2020 to support the founding and construction of a Center for Integrated Design and Advanced Manufacturing at the university. References Living people American computer businesspeople American technology chief executives California State University, Northridge alumni Stanford University School of Engineering alumni Autodesk people Year of birth missing (living people)
Andrew Anagnost
Technology
819
11,847,675
https://en.wikipedia.org/wiki/Symphonie
The Symphonie satellites (2 satellites orbited) were the first communications satellites built by France and Germany (and the first to use three-axis stabilization in geostationary orbit with a bipropellant propulsion system) to provide geostationary orbit injection and station-keeping during their operational lifetime. After the launch of the second flight model, they comprised the first complete telecommunications satellite system (including an on-orbit spare and a dedicated ground control segment). They were the result of a program of formal cooperation between France and Germany. 1963–1970: Beginnings January 22, 1963: Signing by President Charles de Gaulle and Chancellor Konrad Adenauer of the Élysée Treaty, an agreement for Franco-German cooperation. Start of preliminary studies in France (SAROS project) and in Germany (Olympia project) of communications satellites. June 1967: Both countries sign an intergovernmental convention concerning the launch and exploitation of an experimental telecommunication satellite (Symphonie) and the development and construction of earth stations necessary for control of the satellites. Formation of a Franco-German board of directors and executive committee. The committee is headed by two executive secretaries – one German and the other French. Belgium joins the program. 1967–1968: A Request for Proposals is launched for the Symphonie satellite, which was answered by two Franco-German consortia. The leaders are, respectively, Nord Aviation (which was to become Aérospatiale after merging with Sud Aviation) for the CIFAS consortium (Consortium Industriel Franco-Allemand pour le satellite Symphonie) and Matra Space for the competing consortium. The CIFAS consortium was selected after the evaluation of bids and undertook, according to the terms of the consultation, a rounding-out of the various roles of the French and German firms in charge of electronic technology. 1969: Beginning of a preliminary definition phase of the satellite, and negotiation of the contract and main subcontracts. Establishment of the industrial project team in Les Mureaux (Nord Aviation) and the client-project group in Brétigny-sur-Orge (CNES). Production of mission specifications, satellite specifications and specifications for the control and exploitation segments. Industrial Organization Within the bilateral (CNES – GfW) French-German contract, and under industrial prime contractorship of the CIFAS consortium (which was a European economic interest grouping under French law) composed of six companies (three French and three German), their responsibilities were as follows: Aérospatiale (France) Consortium leader and host of the integrated project team at its centre at Les Mureaux. Structures, and thermal-control subsystems and manufacture of all associated panels, mechanisms, thermal hardware and antenna reflectors (Cannes Space Centre). Manufacture of the cold gas attitude control system, harness and pyrotechnics (Les Mureaux). Integration of mechanical and thermal models (Cannes). Integration of the electrical identification model and the first flight model, Symphonie-A (Les Mureaux). Messerschmitt-Bölkow-Blohm (MBB) (Germany) Attitude and Orbit Control Subsystem (AOCS) (Ottobrunn, near Munich). Manufacture of the hot-gas (bi-propellant) thruster system (Ottobrunn and Lampoldshausen). Apogee motor (bi-propellant) subsystem (Ottobrunn and Lampoldshausen). Mechanical ground-support equipment for integration and transport. Contribution of electrical test sets. 
Integration of the qualification prototype and the second flight model, Symphonie-B (Ottobrunn). Thomson-CSF (France) Super high frequency (SHF)-antenna subsystem for telecommunications payload and VHF-antenna subsystem for the TT&C (Meudon). Manufacture of the TT&C system (Gennevilliers and Vélizy-Villacoublay). Manufacture of equipment for telecommunications transponders, local oscillators and frequency conversion. Electronic test system (EGSE level 1) for ground testing (integration phase and preparation for flight). Siemens AG (Germany) SHF C-band telecommunications transponder subsystem (Munich). Manufacture of equipment for telecommunications transponders, receiving section and intermediary frequency amplification (Munich). Contribution of electrical test set. SAT (France) Solar-array subsystem (Paris and Lannion). Manufacture of telemetry encoder (Paris). Contribution to electrical test sets. AEG-Telefunken (Germany) Regulated electric power supply subsystem (Wedel, near Hamburg). Manufacture of equipment for the telecommunications transponders, transmission section (Backnang – near Stuttgart – and Ulm). Manufacture of SHF modulators and demodulators for onboard telemetry and telecommand. Contribution to electrical test sets. Other major contributions The six CIFAS companies participated in the integrated project team with detached personnel, headed by Pierre Madon (Aérospatiale). Belgium officially contributed to the project; its industrial presence included the Ateliers de Constructions Electriques de Charleroi (ACEC) and the space division of ETCA, supplier of the DC-DC converters for the electric power supply; and SAIT for the EGSE test computers. French and German equipment manufacturers contributed under contract to the consortium members (notably Sodern, SAFT, Crouzet and Starec in France and Teldix and VFW in Germany). Major test facilities used for the qualification and acceptance tests (space environment simulation): SOPEMEA (a subsidiary of CNES in Toulouse) and IABG in (Ottobrunn). Calibration of telecommunications performance: Centre National d'Etudes des Telecommunications (CNET, in La Turbie). 1970: Satellite development 1970–1971: Beginning of the Symphonie satellite development program, with a contract signed by General Robert Aubinière (Director General of CNES) and Dr. Mayer (representing the German ministry) Bundesministerium für Wissenschaft und Forschung (BMWF). The CIFAS consortium (organized as a European economic interest grouping and whose administrator at the time was Charles Cristofini) went through several restructurings with the creation of Thomson-CSF, MBB (Messerschmitt Bolkow Blohm), AEG Telefunken and Aérospatiale. 1972: The failure of the Europa II launch vehicle and the abandonment of the program (which had been led by the ELDO) triggers a crisis; it is uncertain if development should continue or, if so, how the satellites will be deployed. After some governmental hesitation, the program continues. The satellites will be launched by the American Thor Delta 2914 satellite-launch vehicles, at the cost of a restrictive agreement; any commercial use of Symphonie is forbidden by the U.S. State Department. 1973: Integration of the test and qualification models of the satellite. 1974: Integration in Les Mureaux of the first flight model (Symphonie-A) and delivery of the satellite. Launch and lifespan 1974: Symphonie-A was successfully launched from the Kennedy Space Center on December 19, 1974, at 2:39 a.m. UT. (9:39 pm on December 18 local time). 
1975: (January 12) President Valéry Giscard d'Estaing of France and German Chancellor Helmut Schmidt exchange their New Year greetings live in a videoconference, via Symphonie-A in geostationary orbit. Symphonie-A is the first geostationary telecommunications satellite built and operated in Europe; some of its technology is groundbreaking. 1975: After its integration at MBB in Ottobrunn and delivery, Symphonie-B is launched from the Kennedy Space Center on August 27, 1975, at 1:42 a.m. UT.(8:42 pm on August 26, local time) 1975: The two satellites are positioned in geostationary orbit at 11.5° west, perfectly fulfilling their mission (two coverage zones, Euro-African and America, can fully benefit from 4 wideband transponders of 90 MHz each); they are the stars of the 1975 Geneva Telecom Show. 1977–1979: For two years beginning in June 1977, Symphonie-A is repositioned over the Indian Ocean at 49° east, where it carries out experiments with India and China. February 4–7, 1980: An international colloquium is held in Berlin concerning the technical and operational results of the program. Among the presentations, Professor Hubert Curien (then-president of CNES) declared in brief, "Symphonie is the father of Ariane"; it served as the catalyst for the European decision to develop a heavy launch vehicle. August 12, 1983: Symphonie-A makes its final manoeuvre to a graveyard orbit, and is de-activated after years of service. December 19, 1984: Exactly ten years after the launch of Symphonie-A, Symphonie-B is also deactivated and placed in a graveyard orbit after nine years of active service. The Symphonie satellites operated successfully for double their expected lifespans, performing hundreds of experiments and expanding the horizons of telecommunications in space. Uses Symphonie was the forerunner for numerous telecommunications services. Its prohibition on commercial use may have paradoxically induced a larger program for experimentation of space telecommunications than ever before – both in the number of participating countries and diversity of field applications. As an example of the extent of its use, 40 countries participated in links via Symphonie A and B (east-west and north-south) – from Quebec to Argentina, from Finland to Reunion Island and from China to Indonesia. The Symphonie A and B experiments may be divided into two types: Humanitarian, cultural and educational experiments. Technical and scientific experiments. To these types operational experiments may be added, notably for links between metropolitan France and its overseas departments for telephony and television via satellite. From this viewpoint Symphonie was a forerunner of the French national programs Telecom-1 & 2 and TDF 1 & 2, and the German programs TV-SAT and DFS Kopernikus. The wideband transponders, with their operational flexibility, made it possible to test all-access techniques (single or multiple) and modulation: FDMA (frequency sharing), TDMA (time sharing) and SSMA (spread spectrum). Symphonie terrestrial stations with antennas of various diameters from 16 to 2.2 meters (fixed, portable and mobile) contributed to the renown of the programme around the world. + Several demonstrations were: Links between United Nations headquarters in New York and Geneva and the UN Blue Helmet squadrons in Jerusalem and Ismaïlia – inaugurating the future communications mode VSAT (very small aperture terminal), using small-diameter ground antennas. Educational television in Africa, particularly in Côte d'Ivoire and Gabon. 
Intercultural exchanges via teleconferencing, telerehabilitation and telemedicine, notably between France and Quebec. Occasional tele-transmission services (emergency links to disaster areas for the Red Cross, sports reporting and so on). High-speed, bidirectional links between computers – a forerunner of transcontinental data communications and the Internet. Synchronization of atomic clocks on an intercontinental scale, to obtain a very high stability of universal time – a forerunner to navigation and positioning satellites GPS and Galileo. Regional-level tests for mixed analog and digital television and radio broadcasting (now used in many countries – for example Iran, India and China). One opportunity to demonstrate Symphonie's utility in 1978 was not used; it could have been utilized in Kolwezi (the intervention of French troops in Zaire to protect Europeans living in Katanga), if the French chiefs of staff had followed the above-mentioned UN example rather than calling upon logistical support from the United States. After Symphonie Symphonie's ten years of service have been credited with developing the maturity and reliability of space technology, at a time when telecommunications operators were thinking in terms of cables and ground microwave links. After Intelsat (a pioneer in intercontinental telephony), Symphonie led to the development of regional systems with a number of applications (including tele-distribution, tele-education and reliable radio-electrical access) for use in isolated areas with no ground infrastructure and low population density. The Symphonie program was also a training program; it trained engineers, operators and satellite users, who acquired their expertise through the program and distributed it on the European and international level. Afterwards, new European programs followed and enabled Europe to attain excellence in the field of space telecommunications. The technical success of this precursor program, the demonstration in orbit of the quality of technology born in Europe and the diverse uses benefiting many countries and communities make Symphonie one of the major bases of Europe's success in space. On the industrial level, it helped launch Europe into major space programs and spurred an industrial restructuring which transformed national industries into European groups. Most of Symphonie's industrial partners contributed to the genesis of the Spacebus programs, and to commercial applications in space communications and direct-to-home TV broadcasting. Firsts Symphonie was the: First three-axis stabilized communications satellite in geostationary orbit with a bipropellant rocket propulsion system (to ensure geostationary orbit injection and orbit control during its entire lifespan). First European communications satellite system. Satellites Symphonie A (aka Symphonie 1, COSPAR 1974-101A), launched 19 December 1974 at 2:39 UT from Kennedy Space Center (Cape Canaveral) LC-17B aboard Delta 2914 rocket to geostationary orbit. Mass of satellite 400 kg, 230 kg in orbit. Planned lifetime was 5 years. Decommissioned August 12, 1983 (moved to graveyard orbit). Mission duration: years. Symphonie B (aka Symphonie 2, COSPAR 1975-077A), launched 27 August 1975 at 1:42 UT from Kennedy Space Center (Cape Canaveral) LC-17A aboard Delta 2914 rocket to geostationary orbit. Mass of satellite 400 kg, 230 kg in orbit. Planned lifetime was 5 years. Decommissioned December 19, 1984 (moved to graveyard orbit). Mission duration: 9 years. 
See also French version of this article Sources "80 years of passion: the Cannes Centre from 1919 to 1999", Editions Version Latine 1999, France. External links 1969 to 1975, the first steps of Symphonie, Space Corner, Eurospace Jean-Jacques Dechezelles, Technical presentation of the Symphonie Satellite, Cannes-aero-patrimoine Footnotes Satellites orbiting Earth Spacebus European space programmes France–Germany relations
Symphonie
Engineering
3,124
787,850
https://en.wikipedia.org/wiki/Two-phase%20commit%20protocol
In transaction processing, databases, and computer networking, the two-phase commit protocol (2PC, tupac) is a type of atomic commitment protocol (ACP). It is a distributed algorithm that coordinates all the processes that participate in a distributed atomic transaction on whether to commit or abort (roll back) the transaction. This protocol (a specialised type of consensus protocol) achieves its goal even in many cases of temporary system failure (involving either process, network node, communication, etc. failures), and is thus widely used. However, it is not resilient to all possible failure configurations, and in rare cases, manual intervention is needed to remedy an outcome. To accommodate recovery from failure (automatic in most cases) the protocol's participants use logging of the protocol's states. Log records, which are typically slow to generate but survive failures, are used by the protocol's recovery procedures. Many protocol variants exist that primarily differ in logging strategies and recovery mechanisms. Though usually intended to be used infrequently, recovery procedures compose a substantial portion of the protocol, due to many possible failure scenarios to be considered and supported by the protocol. In a "normal execution" of any single distributed transaction (i.e., when no failure occurs, which is typically the most frequent situation), the protocol consists of two phases: The commit-request phase (or voting phase), in which a coordinator process attempts to prepare all the transaction's participating processes (named participants, cohorts, or workers) to take the necessary steps for either committing or aborting the transaction and to vote, either "Yes": commit (if the transaction participant's local portion execution has ended properly), or "No": abort (if a problem has been detected with the local portion), and The commit phase, in which, based on voting of the participants, the coordinator decides whether to commit (only if all have voted "Yes") or abort the transaction (otherwise), and notifies the result to all the participants. The participants then follow with the needed actions (commit or abort) with their local transactional resources (also called recoverable resources; e.g., database data) and their respective portions in the transaction's other output (if applicable). The two-phase commit (2PC) protocol should not be confused with the two-phase locking (2PL) protocol, a concurrency control protocol. Assumptions The protocol works in the following manner: one node is a designated coordinator, which is the master site, and the rest of the nodes in the network are designated the participants. The protocol assumes that: there is stable storage at each node with a write-ahead log, no node crashes forever, the data in the write-ahead log is never lost or corrupted in a crash, and any two nodes can communicate with each other. The last assumption is not too restrictive, as network communication can typically be rerouted. The first two assumptions are much stronger; if a node is totally destroyed then data can be lost. The protocol is initiated by the coordinator after the last step of the transaction has been reached. The participants then respond with an agreement message or an abort message depending on whether the transaction has been processed successfully at the participant. Basic algorithm Commit request (or voting) phase The coordinator sends a query to commit message to all participants and waits until it has received a reply from all participants. 
The participants execute the transaction up to the point where they will be asked to commit. They each write an entry to their undo log and an entry to their redo log.
Each participant replies with an agreement message (participant votes Yes to commit), if the participant's actions succeeded, or an abort message (participant votes No to commit), if the participant has experienced a failure that makes it impossible to commit.

Commit (or completion) phase

Success
If the coordinator received an agreement message from all participants during the commit-request phase:
The coordinator sends a commit message to all the participants.
Each participant completes the operation, and releases all the locks and resources held during the transaction.
Each participant sends an acknowledgement to the coordinator.
The coordinator completes the transaction when all acknowledgements have been received.

Failure
If any participant votes No during the commit-request phase (or the coordinator's timeout expires):
The coordinator sends a rollback message to all the participants.
Each participant undoes the transaction using the undo log, and releases the resources and locks held during the transaction.
Each participant sends an acknowledgement to the coordinator.
The coordinator undoes the transaction when all acknowledgements have been received.

Message flow
Coordinator                                        Participant
                 QUERY TO COMMIT
                 ------------------------------->
                 VOTE YES/NO                       prepare*/abort*
                 <-------------------------------
commit*/abort*   COMMIT/ROLLBACK
                 ------------------------------->
                 ACKNOWLEDGEMENT                   commit*/abort*
                 <-------------------------------
end
An * next to the record type means that the record is forced to stable storage.

Disadvantages
The greatest disadvantage of the two-phase commit protocol is that it is a blocking protocol. If the coordinator fails permanently, some participants will never resolve their transactions: after a participant has sent an agreement message in response to the commit-request message from the coordinator, it will block until a commit or rollback is received. A two-phase commit protocol cannot dependably recover from a failure of both the coordinator and a cohort member during the commit phase. If only the coordinator had failed, and no cohort members had received a commit message, it could safely be inferred that no commit had happened. If, however, both the coordinator and a cohort member failed, it is possible that the failed cohort member was the first to be notified, and had actually done the commit. Even if a new coordinator is selected, it cannot confidently proceed with the operation until it has received an agreement from all cohort members, and hence must block until all cohort members respond.

Implementing the two-phase commit protocol

Common architecture
In many cases the 2PC protocol is distributed in a computer network. It is easily distributed by implementing multiple dedicated 2PC components similar to each other, typically named transaction managers (TMs; also referred to as 2PC agents or Transaction Processing Monitors), that carry out the protocol's execution for each transaction (e.g., The Open Group's X/Open XA). The databases involved in a distributed transaction, the participants, both the coordinator and the other participants, register with close (nearby) TMs (typically residing on the same network nodes as the participants) for terminating that transaction using 2PC. Each distributed transaction has an ad hoc set of TMs, the TMs to which the transaction participants register. 
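To make the voting and completion phases above concrete, here is a minimal single-process sketch of the coordinator's decision rule. It is illustrative only, not a real networked implementation: there is no message passing, logging, recovery, or timeout handling, and the class and variable names (Coordinator, Participant, Vote) are invented for this example.

# Minimal sketch of the 2PC decision rule (illustrative; no networking, logging,
# or recovery). A transaction commits only if every participant votes Yes.
from enum import Enum

class Vote(Enum):
    YES = "yes"
    NO = "no"

class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit   # whether the local work succeeded
        self.state = "working"

    def prepare(self):
        # Commit-request phase: persist undo/redo records (omitted here) and vote.
        self.state = "prepared" if self.can_commit else "aborted"
        return Vote.YES if self.can_commit else Vote.NO

    def commit(self):
        self.state = "committed"       # apply the redo log, release locks

    def abort(self):
        self.state = "aborted"         # roll back using the undo log

class Coordinator:
    def __init__(self, participants):
        self.participants = participants

    def run(self):
        # Phase 1: collect votes from all participants.
        votes = [p.prepare() for p in self.participants]
        # Phase 2: commit only if all voted Yes, otherwise roll everyone back.
        if all(v is Vote.YES for v in votes):
            for p in self.participants:
                p.commit()
            return "committed"
        for p in self.participants:
            p.abort()
        return "aborted"

if __name__ == "__main__":
    txn = Coordinator([Participant("db1"), Participant("db2", can_commit=False)])
    print(txn.run())                                      # "aborted": db2 voted No
    print([(p.name, p.state) for p in txn.participants])

Note that the blocking problem discussed above is exactly what this sketch hides: in a real deployment every step is a message that can be lost, and a participant that has voted Yes must wait, holding its locks, until it learns the coordinator's decision.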
A leader, the coordinator TM, exists for each transaction to coordinate 2PC for it, typically the TM of the coordinator database. However, the coordinator role can be transferred to another TM for performance or reliability reasons. Rather than exchanging 2PC messages among themselves, the participants exchange the messages with their respective TMs. The relevant TMs communicate among themselves to execute the 2PC protocol schema above, "representing" the respective participants, for terminating that transaction. With this architecture the protocol is fully distributed (does not need any central processing component or data structure), and scales up with number of network nodes (network size) effectively. This common architecture is also effective for the distribution of other atomic commitment protocols besides 2PC, since all such protocols use the same voting mechanism and outcome propagation to protocol participants. Protocol optimizations Database research has been done on ways to get most of the benefits of the two-phase commit protocol while reducing costs by protocol optimizations and protocol operations saving under certain system's behavior assumptions. Presumed abort and presumed commit Presumed abort or Presumed commit are common such optimizations. An assumption about the outcome of transactions, either commit, or abort, can save both messages and logging operations by the participants during the 2PC protocol's execution. For example, when presumed abort, if during system recovery from failure no logged evidence for commit of some transaction is found by the recovery procedure, then it assumes that the transaction has been aborted, and acts accordingly. This means that it does not matter if aborts are logged at all, and such logging can be saved under this assumption. Typically a penalty of additional operations is paid during recovery from failure, depending on optimization type. Thus the best variant of optimization, if any, is chosen according to failure and transaction outcome statistics. Tree two-phase commit protocol The Tree 2PC protocol (also called Nested 2PC, or Recursive 2PC) is a common variant of 2PC in a computer network, which better utilizes the underlying communication infrastructure. The participants in a distributed transaction are typically invoked in an order which defines a tree structure, the invocation tree, where the participants are the nodes and the edges are the invocations (communication links). The same tree is commonly utilized to complete the transaction by a 2PC protocol, but also another communication tree can be utilized for this, in principle. In a tree 2PC the coordinator is considered the root ("top") of a communication tree (inverted tree), while the participants are the other nodes. The coordinator can be the node that originated the transaction (invoked recursively (transitively) the other participants), but also another node in the same tree can take the coordinator role instead. 2PC messages from the coordinator are propagated "down" the tree, while messages to the coordinator are "collected" by a participant from all the participants below it, before it sends the appropriate message "up" the tree (except an abort message, which is propagated "up" immediately upon receiving it or if the current participant initiates the abort). The Dynamic two-phase commit (Dynamic two-phase commitment, D2PC) protocol is a variant of Tree 2PC with no predetermined coordinator. It subsumes several optimizations that have been proposed earlier. 
Agreement messages (Yes votes) start to propagate from all the leaves, each leaf sending its message when it completes its tasks on behalf of the transaction (becomes ready). An intermediate (non-leaf) node becomes ready and sends an agreement message to the last (single) neighboring node from which an agreement message has not yet been received. The coordinator is determined dynamically by racing agreement messages over the transaction tree, at the place where they collide. They collide either at a transaction tree node, which becomes the coordinator, or on a tree edge. In the latter case, one of the edge's two nodes is elected as coordinator (either one). D2PC is time-optimal (among all the instances of a specific transaction tree, and any specific Tree 2PC protocol implementation; all instances have the same tree; each instance has a different node as coordinator): by choosing an optimal coordinator, D2PC commits both the coordinator and each participant in the minimum possible time, allowing the earliest possible release of locked resources in each transaction participant (tree node).

See also
Three-phase commit protocol
Paxos algorithm
Raft algorithm
Two Generals' Problem

References

Data management Transaction processing
Two-phase commit protocol
Technology
2,395
1,136,901
https://en.wikipedia.org/wiki/167%20%28number%29
167 (one hundred [and] sixty-seven) is the natural number following 166 and preceding 168.

In mathematics
167 is the 39th prime number, an emirp, an isolated prime, a Chen prime, a Gaussian prime, a safe prime, and an Eisenstein prime with no imaginary part and a real part of the form 3n − 1.
167 is the smallest number which requires six terms when expressed using the greedy algorithm as a sum of squares, 167 = 144 + 16 + 4 + 1 + 1 + 1, although by Lagrange's four-square theorem its non-greedy expression as a sum of squares can be shorter, e.g. 167 = 121 + 36 + 9 + 1.
167 is a full reptend prime in base 10, since the decimal expansion of 1/167 repeats the following 166 digits: 0.005988023952095808383233532934131736526946107784431137724550898203592814371257485029940119760479041916167664670658682634730538922155688622754491017964071856287425149700...
167 is a highly cototient number, as it is the smallest number k with exactly 15 solutions to the equation x − φ(x) = k. It is also a strictly non-palindromic number.
167 is the smallest multi-digit prime such that the product of its digits is equal to the number of digits times the sum of its digits, i.e., 1 × 6 × 7 = 3 × (1 + 6 + 7).
167 is the smallest positive integer d such that the imaginary quadratic field Q(√−d) has class number 11.

External links
Prime curiosities: 167

References

Integers
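As a quick computational illustration (a verification sketch added here, not part of the source article), the following checks two of the claims above: that the decimal period of 1/167 is 166 (so 167 is a full reptend prime in base 10) and that the greedy sum-of-squares expansion of 167 has six terms.

# Sketch verifying two properties of 167 claimed above.
from math import isqrt

def decimal_period(p):
    """Length of the repeating block of 1/p, i.e. the multiplicative order of 10 mod p."""
    rem, k = 10 % p, 1
    while rem != 1:
        rem = (rem * 10) % p
        k += 1
    return k

def greedy_squares(n):
    """Greedy expansion of n as a sum of squares, taking the largest square each time."""
    terms = []
    while n > 0:
        s = isqrt(n) ** 2
        terms.append(s)
        n -= s
    return terms

print(decimal_period(167))    # 166 -> 167 is a full reptend prime in base 10
print(greedy_squares(167))    # [144, 16, 4, 1, 1, 1] -> six terms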
167 (number)
Mathematics
404
1,716,947
https://en.wikipedia.org/wiki/Disease%20management%20%28health%29
Disease management is defined as "a system of coordinated healthcare interventions and communications for populations with conditions in which patient self-care efforts are significant." For people who can access healthcare practitioners or peer support, disease management is the process whereby persons with long-term conditions (and often family/friend/carer) share knowledge, responsibility and care plans with practitioners and/or peers. To be effective it requires whole system implementation with community social support networks, a range of satisfying occupations and activities relevant to the context, clinical professionals willing to act as partners or coaches, and on-line resources which are verified and relevant to the country and context. Knowledge sharing, knowledge building and a learning community are integral to the concept of disease management. It is a population health strategy as well as an approach to personal health. It may reduce healthcare costs and/or improve quality of life for individuals by preventing or minimizing the effects of disease, usually a chronic condition, through knowledge, skills, enabling a sense of control over life (despite symptoms of disease), and integrative care. On the other hand, it may increase health care costs by causing high implementation costs and promoting the use of costly health care interventions. History Disease management has evolved from managed care, specialty capitation, and health service demand management, and refers to the processes and people concerned with improving or maintaining health in large populations. It is concerned with common chronic illnesses, and the reduction of future complications associated with those diseases. Illnesses that disease management would concern itself with would include: coronary heart disease, chronic obstructive pulmonary disease (COPD), kidney failure, hypertension, heart failure, obesity, diabetes mellitus, asthma, cancer, arthritis, clinical depression, sleep apnea, osteoporosis, and other common ailments. Industry In the United States, disease management is a large industry with many vendors. Major disease management organizations based on revenues and other criteria include Accordant (a subsidiary of Caremark), Alere (now including ParadigmHealth and Matria Healthcare), Caremark (excluding its Accordant subsidiary), Evercare, Health Dialog, Healthways, LifeMasters (now part of StayWell), LifeSynch (formerly Corphealth), Magellan, McKesson Health Solutions, and MedAssurant. Disease management is of particular importance to health plans, agencies, trusts, associations and employers that offer health insurance. A 2002 survey found that 99.5% of enrollees of Health Maintenance Organization/Point Of Service (HMO/POS) plans are in plans that cover at least one disease management program. A Mercer Consulting study indicated that the percentage of employer-sponsored health plans offering disease management programs grew to 58% in 2003, up from 41% in 2002. It was reported that $85 million was spent on disease management in the United States in 1997, and $600 million in 2002. Between 2000 and 2005, the compound annual growth rate of revenues for disease management organizations was 28%. In 2000, the Boston Consulting Group estimated that the U.S. market for outsourced disease management could be $20 billion by 2010; however, in 2008 the Disease Management Purchasing Consortium estimated that disease management organization revenues would be $2.8 billion by 2010. 
As of 2010, a study using National Ambulatory Medical Care Survey data estimated that 21.3% of patients in the U.S. with at least one chronic condition use disease management programs. Yet, management of chronic conditions is responsible for more than 75% of all health care spending. During the 2000s, payers have then embraced disease management in many other world regions. In Europe, notable examples include Germany and France. In Germany, the first national disease management program for diabetes enrolled patients in 2003. They are funded and operated by individual sickness funds that in turn contract with regular health care providers. In France, the program Sophia for diabetic patients was introduced in 2008. It is financed and operated as a single national program by statutory health insurance, which has contracted with a private provider for support services. The introduction of these programs was in part facilitated by support from international organizations or firms and study trips or other forms of exchange with Anglo-Saxon countries. Process The underlying premise of disease management is that when the right tools, experts, and equipment are applied to a population, labor costs (specifically: absenteeism, presenteeism, and direct insurance expenses) can be minimized in the near term, or resources can be provided more efficiently. The general idea is to ease the disease path, rather than cure the disease. Improving quality and activities for daily living are first and foremost. Improving cost, in some programs, is a necessary component, as well. However, some disease management systems believe that reductions in longer-term problems may not be measureable today, but may warrant continuation of disease management programs until better data is available in 10–20 years. Most disease management vendors offer return on investment (ROI) for their programs, although over the years there have been dozens of ways to measure ROI. Responding to this inconsistency, an industry trade association, the Care Continuum Alliance, convened industry leaders to develop consensus guidelines for measuring clinical and financial outcomes in disease management, wellness and other population-based programs. Contributing to the work were public and private health and quality organizations, including the federal Agency for Healthcare Research and Quality, the National Committee for Quality Assurance, URAC, and the Joint Commission. The project produced the first volume of a now four-volume Outcomes Guidelines Report, which details industry-consensus approaches to measuring outcomes. Tools include web-based assessment tools, clinical guidelines, health risk assessments, outbound and inbound call-center-based triage, best practices, formularies, and numerous other devices, systems and protocols. Experts include actuaries, physicians, pharmacists, medical economists, nurses, nutritionists, physical therapists, statisticians, epidemiologists, and human resources professionals. Equipment can include mailing systems, web-based applications (with or without interactive modes), monitoring devices, or telephonic systems. Effectiveness Possible biases When disease management programs are voluntary, studies of their effectiveness may be affected by a self-selection bias; that is, a program may "attract enrollees who were [already] highly motivated to succeed". 
At least two studies have found that people who enroll in disease management programs differ significantly from those who do not on baseline clinical, demographic, cost, utilization and quality parameters. To minimize any bias in estimates of the effectiveness of disease management due to differences in baseline characteristics, randomized controlled trials are better than observational studies. Even if a particular study is a randomized trial, it may not provide strong evidence for the effectiveness of disease management. A 2009 review paper examined randomized trials and meta-analyses of disease management programs for heart failure and asserted that many failed the PICO process and Consolidated Standards of Reporting Trials: "interventions and comparisons are not sufficiently well described; that complex programs have been excessively oversimplified; and that potentially salient differences in programs, populations, and settings are not incorporated into analyses." Medicare Section 721 of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 authorized the Centers for Medicare and Medicaid Services (CMS) to conduct what became the "Medicare Health Support" project to examine disease management. Phase I of the project involved disease management companies (such as Aetna Health Management, CIGNA Health Support, Health Dialog Services Corp., Healthways, and McKesson Health Solutions) chosen by a competitive process in eight states and the District of Columbia. The project focused on people with diabetes or heart failure who had relatively high Medicare payments; in each location, approximately 20,000 such people were randomly assigned to an intervention group and 10,000 were randomly assigned to a control group. CMS set goals in the areas of clinical quality and beneficiary satisfaction, and negotiated with the disease management programs for a target of 5% savings in Medicare costs. The programs started between August 2005 and January 2006. What is now the Care Continuum Alliance praised the project as "the first-ever national pilot integrating sophisticated care management techniques into the Medicare fee-for-service program". An initial evaluation of Phase I of the project by RTI International appeared in June 2007 which had "three key participation and financial findings": Medicare expenditures for the intervention group were higher than those of the comparison group by the time the pilots started. Within the intervention group, participants had lower Medicare payments (i.e., tended to be healthier) than non-participants. The "fees paid to date far exceed any savings produced." DMAA focused on another finding of the initial evaluation, the "high levels of satisfaction with chronic disease management services among beneficiaries and physicians". One commentary noted that the project "can only be observational" since "equivalence was not achieved at baseline". Another commentary claimed that the project was "in big trouble". A paper on the six-month evaluation, published in fall 2008, concluded that "Results to date indicate limited success in achieving Medicare cost savings or reducing acute care utilization". In December 2007, CMS changed the financial threshold from 5% savings to budget neutrality, a change that DMAA "hailed". In January 2008, however, CMS decided to end Phase I because it claimed that the statutory authority had run out. Four U.S. senators wrote a letter to CMS to reverse its decision. 
DMAA decried the termination of Phase I and called upon CMS to start Phase II as soon as possible. Among other criticisms of the project, the disease management companies claimed that Medicare "signed up patients who were much sicker than they had expected," failed to transmit information on patients' prescriptions and laboratory results to them in a timely fashion, and disallowed the companies from selecting patients most likely to benefit from disease management. By April 2008, CMS had spent $360 million on the project. The individual programs ended between December 2006 and August 2008. The results of the program were published in The New England Journal of Medicine in November 2011. Comparing the 163,107 patients randomized to the intervention group with the 79,310 patients randomized to the control group, the researchers found that "disease-management programs did not reduce hospital admissions or emergency room visits, as compared with usual care." Furthermore, there was "no demonstrable savings in Medicare expenditures," with the net fees for disease management ranging from 3.8% to 10.9% per patient per month. The researchers suggested that the findings might be explained by the severity of chronic disease among the patients studied, delays in patients' receiving disease management after hospitalizations, and lack of integration between health coaches and the patients' primary care providers. Other studies Studies that have reviewed other studies on the effectiveness of disease management include the following: A 2004 Congressional Budget Office analysis concluded that published studies "do not provide a firm basis for concluding that disease management programs generally reduce total costs". The report caused the disease management industry to "scrambl[e] to build a better business case for their services". A 2005 review of 44 studies on disease management found a positive return on investment (ROI) for congestive heart failure and multiple disease conditions, but inconclusive, mixed, or negative ROI for diabetes, asthma, and depression management programs. The lead author, of Cornell University and Thomson Medstat, was quoted as saying that the paucity of research conducted on the ROI of disease management was "a concern because so many companies and government agencies have adopted disease management to manage the cost of care for people with chronic conditions." A 2007 RAND summary of 26 reviews and meta-analyses of small-scale disease management programs, and 3 evaluations of population-based disease management programs, concluded that "Payers and policy makers should remain skeptical about vendor claims [concerning disease management] and should demand supporting evidence based on transparent and scientifically sound methods." In specific: Disease management improved "clinical processes of care" (e.g., adherence to evidence-based guidelines) for congestive heart failure, coronary artery disease, diabetes, and depression. There was inconclusive evidence, insufficient evidence, or evidence for no effect of disease management on health-related behaviors. Disease management led to better disease control for congestive heart failure, coronary artery disease, diabetes, and depression. There was inconclusive evidence, insufficient evidence, or evidence for no effect of disease management on clinical outcomes (e.g., "mortality and functional status"). 
Disease management reduced hospital admission rates for congestive heart failure, but increased health care utilization for depression, with inconclusive or insufficient evidence for the other diseases studied. In the area of financial outcomes, there was inconclusive evidence, insufficient evidence, evidence for no effect, or evidence for increased costs. Disease management increased patient satisfaction and health-related quality of life in congestive heart failure and depression, but the evidence was insufficient for the other diseases studied. A subsequent letter to the editor claimed that disease management might nevertheless "satisfy buyers today, even if academics remain unconvinced". A 2008 systematic review and meta-analysis concluded that disease management for COPD "modestly improved exercise capacity, health-related quality of life, and hospital admissions, but not all-cause mortality". A 2009 review of 27 studies "could not draw definitive conclusions about the effectiveness or cost-effectiveness of... asthma disease-management programs" for adults. A Canadian systematic review published in 2009 found that home telehealth in chronic disease management may be cost-saving but that "the quality of the studies was generally low." Researchers from The Netherlands systematically reviewed 31 papers published 2007–09 and determined that the evidence that disease management programs for four diseases reduce healthcare expenditures is "inconclusive." A meta-analysis of randomized trials published through 2009 estimated that disease management for diabetes has "a clinically moderate but significant impact on hemoglobin A1C levels," with an absolute mean difference of 0.51% between experimental and control groups. A 2011 "meta-review" (systematic review of meta-analyses) of heart failure disease management programs found them to be of "mixed quality" in that they did not report important characteristics of the studies reviewed. A 2015 systematic review of randomised controlled trials examining the impact of chronic disease management programmes for adults with asthma, found that having a coordinated program approach with multiple health care professionals compared with usual care can positively impact on the severity of asthma, can improve lung function and also perceived quality of life. Studies not reviewed in the aforementioned papers include the following: A U.K. study published in 2007 found certain improvements in the care of patients with coronary artery disease and heart failure (e.g., better management of blood pressure and cholesterol) if they received nurse-led disease management instead of usual care. In a 2007 Canadian study, people were randomized to receive or not receive disease management for heart failure for a period of six months. Emergency department visits, hospital readmissions, and all-cause deaths were no different in the two groups after 2.8 years of follow-up. A 2008 U.S. study found that nurse-led disease management for patients with heart failure was "reasonably cost-effective" per quality-adjusted life year compared with a "usual care group". A 2008 study from the Netherlands compared no disease management with "basic" nurse-led disease management with "intensive" nurse-led disease management for patients discharged from the hospital with heart failure; it detected no significant differences in hospitalization and death for the three groups of patients. 
A retrospective cohort study from 2008 found that disease management did not increase the use of drugs recommended for patients after a heart attack. Of 15 care coordination (disease management) programs followed for two years in a 2008 study, "few programs improved patient behaviors, health, or quality of care" and "no program reduced gross or net expenditures". After 18 months, a 2008 Florida study found "virtually no overall impacts on hospital or emergency room (ER) use, Medicare expenditures, quality of care, or prescription drug use" for a disease management program. With minor exceptions, a paper published in 2008 did not find significant differences in outcomes among people with asthma randomly assigned to telephonic disease management, augmented disease management (including in-home respiratory therapist visits), or traditional care. A 2009 review by the Centers for Medicare and Medicaid Services of 35 disease management programs that were part of demonstration projects between 1999 and 2008 found that relatively few improved quality in a budget-neutral manner. In a 2009 randomized trial, high- and moderate-intensity disease management did not improve smoking cessation rates after 24 months compared with drug therapy alone. A randomized trial published in 2010 determined that disease management reduced a composite score of emergency department visits and hospitalizations among patients discharged from Veterans Administration hospitals for chronic obstructive pulmonary disease. A 2011 post-hoc analysis of the study's data estimated that the intervention produced a net cost savings of $593 per patient. A Spanish study published in 2011 randomized 52 people hospitalized for heart failure to follow-up with usual care, 52 to home visits, 52 to telephone follow-up, and 52 to an in-hospital heart failure unit. After a median of 10.8 months of follow-up, there were no significant differences in hospitalization or mortality among the four groups. Among 18- to 64-year-old people with chronic diseases receiving Medicaid, telephone-based disease management in one group of members did not reduce ambulatory care visits, hospitalizations, or expenditures relative to a control group. Furthermore, in this 2011 study, the group receiving disease management had a lower decrease in emergency department visits than the group not receiving disease management. See also Ambulatory care sensitive conditions Chronic care management Expert Patient Programme Disaboom Medical case management References Further reading Todd, Warren E., and David B. Nash. Disease management: a systems approach to improving patient outcomes. Chicago: American Hospital Pub., 1997. Couch, James B. The health care professional's guide to disease management: patient-centered care for the 21st century. Gaithersburg, MD: Aspen Publishers, 1998. Patterson, Richard. Changing patient behavior: improving outcomes in health and disease management. San Francisco: Jossey-Bass, 2001. Disease management for nurse practitioners. Springhouse, PA: Springhouse, 2002. Howe, Rufus S. The disease manager's handbook. Sudbury, MA: Jones and Bartlett, 2005. Huber, Diane. Disease management: a guide for case managers. St. Louis: Elsevier Saunders, 2005. Nuovo, Jim, editor. Chronic disease management. New York, NY: Springer, 2007. Evidence-based nursing guide to disease management. Philadelphia: Lippincott Williams & Wilkins, 2009. External links Australian Disease Management Association Care Continuum Alliance. 
Advancing the Population Health Improvement Model. Center for Managing Chronic Disease. University of Michigan Disease Management: A collection of articles from MANAGED CARE magazine Disease Management Association of India Disease Management: Findings from Leading State Programs by Ben Wheatley (AcademyHealth State Coverage Initiatives Issue Brief, Vol. III, No. 3, December 2002) Disease Management in Medicare: Data Analysis and Benefit Design Issues by Dan L. Crippen (Testimony before the Special Committee on Aging, United States Senate, September 19, 2002) Disease Management Purchasing Consortium International, Inc. Evaluating ROI in State Disease Management Programs by Thomas W. Wilson (AcademyHealth State Coverage Initiatives Issue Brief, Vol. IV, No. 5, November 2003) Square peg in a round hole? Disease management in traditional Medicare. Special Committee on Aging, U.S. Senate, November 4, 2003. Health informatics Health care management Health care quality
Disease management (health)
Biology
4,081
41,970,839
https://en.wikipedia.org/wiki/USA-239
USA-239, also known as GPS IIF-3, GPS SVN-65, and Navstar-67, is an American navigation satellite which forms part of the Global Positioning System. It was the third of twelve Block IIF satellites to be launched. Built by Boeing, USA-239 was launched by United Launch Alliance at 12:10 UTC on 4 October 2012, atop a Delta IV carrier rocket, flight number D361, flying in the Medium+(4,2) configuration. The launch took place from Space Launch Complex 37B at the Cape Canaveral Air Force Station, and placed USA-239 directly into medium Earth orbit. The rocket's second stage failed to provide the expected full thrust in all three of its burns due to a leak above the narrow throat portion of the thrust chamber; however, the stage had enough propellant margin to put the satellite into the correct orbit. As of 18 February 2014, USA-239 was in an orbit with a perigee of , an apogee of , a period of 717.96 minutes, and 54.87 degrees of inclination to the equator. It is used to broadcast the PRN 24 signal, and operates in slot 1 of plane A of the GPS constellation. The satellite has a design life of 15 years and a mass of . As of 2019, it remained in service. References Spacecraft launched in 2012 GPS satellites USA satellites Spacecraft launched by Delta IV rockets
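As a rough cross-check of the quoted 717.96-minute period (a back-of-the-envelope sketch, not from the article; the standard value of Earth's gravitational parameter used below is an assumed constant), Kepler's third law recovers the implied semi-major axis, which matches the usual GPS medium Earth orbit altitude.

# Back-of-the-envelope sketch: semi-major axis implied by the quoted period,
# via Kepler's third law a = (mu * T^2 / (4 * pi^2))**(1/3).
import math

MU_EARTH = 3.986004418e14          # m^3/s^2, standard gravitational parameter (assumed)
T = 717.96 * 60.0                  # quoted orbital period, in seconds

a = (MU_EARTH * T ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
print(f"semi-major axis ~ {a / 1000:.0f} km")   # ~26,560 km, i.e. roughly 20,200 km altitude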
USA-239
Technology
293
1,016,556
https://en.wikipedia.org/wiki/Induced%20seismicity
Induced seismicity refers to typically minor earthquakes and tremors caused by human activity that alters the stresses and strains on Earth's crust. Most induced seismicity is of a low magnitude. A few sites regularly have larger quakes, such as The Geysers geothermal plant in California, which averaged two M4 events and 15 M3 events every year from 2004 to 2009. The Human-Induced Earthquake Database (HiQuake) documents all reported cases of induced seismicity proposed on scientific grounds and is the most complete compilation of its kind. Results of ongoing multi-year research on induced earthquakes by the United States Geological Survey (USGS) published in 2015 suggested that most of the significant earthquakes in Oklahoma, such as the 1952 magnitude 5.7 El Reno earthquake, may have been induced by deep injection of wastewater by the oil industry. The large number of seismic events in oil and gas extraction states like Oklahoma is caused by the increasing volume of wastewater injected underground, a by-product of the extraction process. "Earthquake rates have recently increased markedly in multiple areas of the Central and Eastern United States (CEUS), especially since 2010, and scientific studies have linked the majority of this increased activity to wastewater injection in deep disposal wells."
Induced seismicity can also be caused by the injection of carbon dioxide as the storage step of carbon capture and storage, which aims to sequester carbon dioxide captured from fossil fuel production or other sources in Earth's crust as a means of climate change mitigation. This effect has been observed in Oklahoma and Saskatchewan. Though safe practices and existing technologies can be used to reduce the risk of induced seismicity due to injection of carbon dioxide, the risk is still significant if the storage is large in scale. The consequences of induced seismicity could include the disruption of pre-existing faults in the Earth's crust as well as compromised seal integrity of the storage locations.
The seismic hazard from induced seismicity can be assessed using techniques similar to those used for natural seismicity, although accounting for non-stationary seismicity. Earthquake shaking from induced earthquakes appears to be similar to that observed in natural tectonic earthquakes, or may be stronger at shorter distances. This means that ground-motion models derived from recordings of natural earthquakes, which are often more numerous in strong-motion databases than data from induced earthquakes, may be used with minor adjustments. Subsequently, a risk assessment can be performed, taking into account the increased seismic hazard and the vulnerability of the exposed elements at risk (e.g. the local population and the building stock). Finally, the risk can, theoretically at least, be mitigated, either through reductions to the hazard or through a reduction to the exposure or the vulnerability.

Causes
There are many ways in which induced seismicity has been seen to occur. In the 2010s, some energy technologies that inject or extract fluid from the Earth, such as oil and gas extraction and geothermal energy development, have been found or suspected to cause seismic events. Some energy technologies also produce wastes that may be managed through disposal or storage by injection deep into the ground. For example, waste water from oil and gas production and carbon dioxide from a variety of industrial processes may be managed through underground injection. 
Artificial lakes The column of water in a large and deep artificial lake alters in-situ stress along an existing fault or fracture. In these reservoirs, the weight of the water column can significantly change the stress on an underlying fault or fracture by increasing the total stress through direct loading, or decreasing the effective stress through the increased pore water pressure. This significant change in stress can lead to sudden movement along the fault or fracture, resulting in an earthquake. Reservoir-induced seismic events can be relatively large compared to other forms of induced seismicity. Though understanding of reservoir-induced seismic activity is very limited, it has been noted that seismicity appears to occur on dams with heights larger than . The extra water pressure created by large reservoirs is the most accepted explanation for the seismic activity. When the reservoirs are filled or drained, induced seismicity can occur immediately or with a small time lag. The first case of reservoir-induced seismicity occurred in 1932 in Algeria's Oued Fodda Dam. The 6.3 magnitude 1967 Koynanagar earthquake occurred in Maharashtra, India with its epicenter, fore- and aftershocks all located near or under the Koyna Dam reservoir. 180 people died and 1,500 were left injured. The effects of the earthquake were felt away in Bombay with tremors and power outages. During the beginnings of the Vajont Dam in Italy, there were seismic shocks recorded during its initial fill. After a landslide almost filled the reservoir in 1963, causing a massive flooding and around 2,000 deaths, it was drained and consequently seismic activity was almost non-existent. On August 1, 1975, a magnitude 6.1 earthquake at Oroville, California, was attributed to seismicity from a large earth-fill dam and reservoir recently constructed and filled. The filling of the Katse Dam in Lesotho, and the Nurek Dam in Tajikistan is an example. In Zambia, Kariba Lake may have provoked similar effects. The 2008 Sichuan earthquake, which caused approximately 68,000 deaths, is another possible example. An article in Science suggested that the construction and filling of the Zipingpu Dam may have triggered the earthquake. Some experts worry that the Three Gorges Dam in China may cause an increase in the frequency and intensity of earthquakes. Mining Mining affects the stress state of the surrounding rock mass, often causing observable deformation and seismic activity. A small portion of mining-induced events are associated with damage to mine workings and pose a risk to mine workers. These events are known as rock bursts in hard rock mining, or as bumps in underground coal mining. A mine's propensity to burst or bump depends primarily on depth, mining method, extraction sequence and geometry, and the material properties of the surrounding rock. Many underground hardrock mines operate seismic monitoring networks in order to manage bursting risks, and guide mining practices. Seismic networks have recorded a variety of mining-related seismic sources including: Shear slip events (similar to tectonic earthquakes) which are thought to have been triggered by mining activity. Notable examples include the 1980 Bełchatów earthquake and the 2014 Orkney earthquake. Implosional events associated with mine collapses. The 2007 Crandall Canyon mine collapse and the Solvay Mine Collapse are examples of these. 
Explosions associated with routine mining practices, such as drilling and blasting, and unintended explosions such as the Sago mine Disaster. Explosions are generally not considered "induced" events since they are caused entirely by chemical payloads. Most earthquake monitoring agencies take careful measures to identify explosions and exclude them from earthquake catalogs. Fracture formation near the surface of excavations, which are usually small magnitude events only detected by dense in-mine networks. Slope failures, the largest example being the Bingham Canyon Landslide. Waste disposal wells Injecting liquids into waste disposal wells, most commonly in disposing of produced water from oil and natural gas wells, has been known to cause earthquakes. This high-saline water is usually pumped into salt water disposal (SWD) wells. The resulting increase in subsurface pore pressure can trigger movement along faults, resulting in earthquakes. One of the first known examples was from the Rocky Mountain Arsenal, northeast of Denver. In 1961, waste water was injected into deep strata, and this was later found to have caused a series of earthquakes. The 2011 Oklahoma earthquake near Prague, of magnitude 5.8, occurred after 20 years of injecting waste water into porous deep formations at increasing pressures and saturation. On September 3, 2016, an even stronger earthquake with a magnitude of 5.8 occurred near Pawnee, Oklahoma, followed by nine aftershocks between magnitudes 2.6 and 3.6 within hours. Tremors were felt as far away as Memphis, Tennessee, and Gilbert, Arizona. Mary Fallin, the Oklahoma governor, declared a local emergency and shutdown orders for local disposal wells were ordered by the Oklahoma Corporation Commission. Results of ongoing multi-year research on induced earthquakes by the United States Geological Survey (USGS) published in 2015 suggested that most of the significant earthquakes in Oklahoma, such as the 1952 magnitude 5.5 El Reno earthquake may have been induced by deep injection of waste water by the oil industry. Prior to April 2015 however, the Oklahoma Geological Survey's position was that the quake was most likely due to natural causes and was not triggered by waste injection. This was one of many earthquakes which have affected the Oklahoma region. Since 2009, earthquakes have become hundreds of times more common in Oklahoma with magnitude 3 events increasing from 1 or 2 per year to 1 or 2 per day. On April 21, 2015, the Oklahoma Geological Survey released a statement reversing its stance on induced earthquakes in Oklahoma: "The OGS considers it very likely that the majority of recent earthquakes, particularly those in central and north-central Oklahoma, are triggered by the injection of produced water in disposal wells." Hydrocarbon extraction and storage Large-scale fossil fuel extraction can generate earthquakes. Induced seismicity can be also related to underground gas storage operations. The 2013 September–October seismic sequence occurred 21 km off the coast of the Valencia Gulf (Spain) is probably the best known case of induced seismicity related to Underground Gas Storage operations (the Castor Project). In September 2013, after the injection operations started, the Spanish seismic network recorded a sudden increase of seismicity. More than 1,000 events with magnitudes () between 0.7 and 4.3 (the largest earthquake ever associated with gas storage operations) and located close the injection platform were recorded in about 40 days. 
Due to significant public concern, the Spanish Government halted the operations. By the end of 2014, the Spanish government definitively terminated the concession of the UGS plant. Since January 2015, about 20 people who took part in the transaction and approval of the Castor Project have been indicted. Groundwater extraction The changes in crustal stress patterns caused by the large-scale extraction of groundwater have been shown to trigger earthquakes, as in the case of the 2011 Lorca earthquake. Geothermal energy Enhanced geothermal systems (EGS), a new type of geothermal power technology that does not require natural convective hydrothermal resources, are known to be associated with induced seismicity. EGS involves pumping fluids at pressure to enhance or create permeability through the use of hydraulic fracturing techniques. Hot dry rock (HDR) EGS actively creates geothermal resources through hydraulic stimulation. Depending on the rock properties, and on injection pressures and fluid volume, the reservoir rock may respond with tensile failure, as is common in the oil and gas industry, or with shear failure of the rock's existing joint set, as is thought to be the main mechanism of reservoir growth in EGS efforts. HDR and EGS systems are currently being developed and tested in Soultz-sous-Forêts (France), Desert Peak and the Geysers (U.S.), Landau (Germany), and Paralana and Cooper Basin (Australia). Induced seismicity events at the Geysers geothermal field in California have been strongly correlated with injection data. The test site at Basel, Switzerland, has been shut down due to induced seismic events. In November 2017, a Mw 5.5 earthquake struck the city of Pohang (South Korea), injuring several people and causing extensive damage. The proximity of the seismic sequence to an EGS site, where stimulation operations had taken place only a few months before the earthquake, raised the possibility that this earthquake had been anthropogenic. According to two different studies, it seems plausible that the Pohang earthquake was induced by EGS operations. Researchers at MIT believe that seismicity associated with hydraulic stimulation can be mitigated and controlled through predictive siting and other techniques. With appropriate management, the number and magnitude of induced seismic events can be decreased, significantly reducing the probability of a damaging seismic event. Induced seismicity in Basel led to suspension of its HDR project. A seismic hazard evaluation was then conducted, which resulted in the cancellation of the project in December 2009. Hydraulic fracturing Hydraulic fracturing is a technique in which high-pressure fluid is injected into low-permeability reservoir rocks in order to induce fractures and increase hydrocarbon production. This process is generally associated with seismic events that are too small to be felt at the surface (with moment magnitudes ranging from −3 to 1), although larger-magnitude events are not excluded. For example, several cases of larger-magnitude events (M > 4) have been recorded in Canada in the unconventional resources of Alberta and British Columbia. Carbon capture and storage Risk analysis Operation of technologies involving long-term geologic storage of waste fluids has been shown to induce seismic activity in nearby areas, and correlation of periods of seismic dormancy with minima in injection volumes and pressures has even been demonstrated for fracking wastewater injection in Youngstown, Ohio.
Of particular concern to the viability of carbon dioxide storage from coal-fired power plants and similar endeavors is that the scale of intended CCS projects is much larger, in both injection rate and total injection volume, than any current or past operation that has already been shown to induce seismicity. As such, extensive modeling must be done of future injection sites in order to assess the risk potential of CCS operations, particularly in relation to the effect of long-term carbon dioxide storage on shale caprock integrity, as the potential for fluid leaks to the surface might be quite high for moderate earthquakes. However, the potential of CCS to induce large earthquakes and CO2 leakage remains a controversial issue. Monitoring Since geological sequestration of carbon dioxide has the potential to induce seismicity, researchers have developed methods to monitor and model the risk of injection-induced seismicity in order to better manage the risks associated with this phenomenon. Monitoring can be conducted with measurements from an instrument such as a geophone to measure the movement of the ground. Generally, a network of instruments is used around the site of injection, although many current carbon dioxide injection sites use no monitoring devices. Modelling is an important technique for assessing the potential for induced seismicity, and two primary models are used: physical and numerical. A physical model uses measurements from the early stages of a project to forecast how the project will behave once more carbon dioxide is injected. A numerical model, on the other hand, uses numerical methods to simulate the physics of what is happening within the reservoir. Both modelling and monitoring are useful tools for quantifying, better understanding and mitigating the risks associated with injection-induced seismicity. Failure mechanisms due to fluid injection To assess induced seismicity risks associated with carbon storage, one must understand the mechanisms behind rock failure. The Mohr-Coulomb failure criterion describes shear failure on a fault plane. Most generally, failure will happen on existing faults due to several mechanisms: an increase in shear stress, a decrease in normal stress or a pore pressure increase. The injection of supercritical CO2 will change the stresses in the reservoir as it expands, causing potential failure on nearby faults. Injection of fluids also increases the pore pressures in the reservoir, triggering slip on existing rock weakness planes. The latter is the most common cause of induced seismicity due to fluid injection. The Mohr-Coulomb failure criterion states that τc = τ0 + μ(σn − P), with τc the critical shear stress leading to failure on a fault, τ0 the cohesive strength along the fault, σn the normal stress, μ the friction coefficient on the fault plane and P the pore pressure within the fault. When τc is attained, shear failure occurs and an earthquake can be felt. This process can be represented graphically on a Mohr's circle.
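A minimal numerical sketch of this criterion follows (not from the source; the stress values are arbitrary assumptions chosen only to show how a rising pore pressure can push a fault from stability to failure):

```python
# Illustrative Mohr-Coulomb check: does a pore-pressure increase bring a fault to failure?
# All stress values (in MPa) are assumptions for demonstration only, not field data.

def critical_shear_stress(tau0, mu, sigma_n, pore_pressure):
    """tau_c = tau0 + mu * (sigma_n - P): shear stress needed for slip."""
    return tau0 + mu * (sigma_n - pore_pressure)

tau0 = 1.0       # cohesive strength along the fault (MPa)
mu = 0.6         # friction coefficient on the fault plane
sigma_n = 50.0   # normal stress acting on the fault (MPa)
tau = 20.0       # shear stress actually acting on the fault (MPa)

for pore_pressure in (10.0, 20.0, 30.0):
    tau_c = critical_shear_stress(tau0, mu, sigma_n, pore_pressure)
    state = "slip (induced event)" if tau >= tau_c else "stable"
    print(f"P = {pore_pressure:4.1f} MPa -> tau_c = {tau_c:4.1f} MPa, {state}")
```

Raising the pore pressure lowers the effective normal stress and hence the critical shear stress, which is the mechanism described above.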
Comparison of risks due to CCS versus other injection methods While there is risk of induced seismicity associated with carbon capture and storage underground on a large scale, it is currently a much less serious risk than other injection types. Wastewater injection, hydraulic fracturing, and secondary recovery after oil extraction have all contributed significantly more to induced seismic events than carbon capture and storage in the last several years. To date, there have not been any major seismic events associated with carbon injection, whereas seismic occurrences caused by the other injection methods have been recorded. One such example is the massively increased induced seismicity in Oklahoma, USA, caused by the injection of huge volumes of wastewater into the Arbuckle Group sedimentary rock. Electromagnetic pulses It has been shown that high-energy electromagnetic pulses can trigger the release of energy stored by tectonic movements by increasing the rate of local earthquakes within 2–6 days after emission by the EMP generators. The energy released is approximately six orders of magnitude larger than the energy of the EM pulses. The release of tectonic stress by these relatively small triggered earthquakes equals 1–17% of the stress released by a strong earthquake in the area. It has been proposed that strong EM impacts could control seismicity, since during the periods of the experiments, and for a long time afterwards, the seismicity dynamics were much more regular than usual. Risk analysis Risk factors Risk is defined as the probability of being impacted by an event in the future. Seismic risk is generally estimated by combining the seismic hazard with the exposure and vulnerability at a site or over a region. The hazard from earthquakes depends on the proximity to potential earthquake sources, the rates of occurrence of earthquakes of different magnitudes for those sources, and the propagation of seismic waves from the sources to the site of interest. Hazard is then represented in terms of the probability of exceeding some level of ground shaking at a site. Earthquake hazards can include ground shaking, liquefaction, surface fault displacement, landslides, tsunamis, and uplift/subsidence for very large events (ML > 6.0). Because induced seismic events are, in general, smaller than ML 5.0 and of short duration, the primary concern is ground shaking. Ground shaking Ground shaking can result in both structural and nonstructural damage to buildings and other structures. It is commonly accepted that structural damage to modern engineered structures happens only in earthquakes larger than ML 5.0. In seismology and earthquake engineering, ground shaking can be measured as peak ground velocity (PGV), peak ground acceleration (PGA) or spectral acceleration (SA) at a building's period of excitation. In regions of historical seismicity where buildings are engineered to withstand seismic forces, moderate structural damage is possible, and very strong shaking can be perceived, when PGA is greater than 18–34% of g (the acceleration of gravity). In rare cases, nonstructural damage has been reported in earthquakes as small as ML 3.0. For critical facilities like dams and nuclear plants, the acceptable levels of ground shaking are lower than those for buildings. Probabilistic seismic hazard analysis Extended reading – An Introduction to Probabilistic Seismic Hazard Analysis (PSHA) Probabilistic Seismic Hazard Analysis (PSHA) is a probabilistic framework that accounts for the probabilities of earthquake occurrence and of ground motion propagation. Using the framework, the probability of exceeding a certain level of ground shaking at a site can be quantified, taking into account all the possible earthquakes (both natural and induced).
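The following sketch (not from the source; the a- and b-values, the magnitude threshold and the 50-year window are illustrative assumptions) shows how such a calculation can look: a Gutenberg-Richter recurrence model (see "Recurrence pattern" below) supplies an annual rate of events above a chosen magnitude, and a Poisson assumption converts that rate into an exceedance probability over a time window. A full PSHA would additionally integrate over source locations and ground motion prediction equations.

```python
import math

# Toy hazard-style calculation; all parameter values are illustrative assumptions.
a, b = 3.0, 1.0     # Gutenberg-Richter parameters for a hypothetical source
m = 5.0             # magnitude of interest
t_years = 50        # exposure window in years

annual_rate = 10 ** (a - b * m)                   # expected number of M >= m events per year
p_exceed = 1 - math.exp(-annual_rate * t_years)   # Poisson probability of at least one such event

print(f"Annual rate of M >= {m}: {annual_rate:.3f} per year")
print(f"P(at least one M >= {m} event in {t_years} years): {p_exceed:.1%}")
```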
PSHA methodology is used to determine seismic loads for building codes in both the United States and Canada, and increasingly in other parts of the world, as well as to protect dams and nuclear plants from damage due to seismic events. Calculating Seismic Risk Earthquake source characterization Understanding the geological background of the site is a prerequisite for seismic hazard estimation. Formations of the rocks, subsurface structures, locations of faults, state of stresses and other parameters that contribute to possible seismic events are considered. Records of past earthquakes at the site are also taken into account. Recurrence pattern The magnitudes of earthquakes occurring at a source generally follow the Gutenberg-Richter relation, which states that the number of earthquakes decreases exponentially with increasing magnitude: log10 N = a − bM, where M is the magnitude of seismic events, N is the number of events with magnitudes bigger than M, a is the rate parameter and b is the slope. The parameters a and b vary for different sources. In the case of natural earthquakes, historical seismicity is used to determine these parameters. Using this relationship, the number and probability of earthquakes exceeding a certain magnitude can be predicted under the assumption that earthquakes follow a Poisson process. However, the goal of this analysis is to determine the possibility of future earthquakes. For induced seismicity, in contrast to natural seismicity, earthquake rates change over time as a result of changes in human activity, and hence are quantified as non-stationary processes with varying seismicity rates over time. Ground motions At a given site, the ground motion describes the seismic waves that would have been observed at that site with a seismometer. In order to simplify the representation of an entire seismogram, PGV (peak ground velocity), PGA (peak ground acceleration), spectral acceleration (SA) at different periods, earthquake duration and Arias intensity (IA) are some of the parameters that are used to represent ground shaking. Ground motion propagation from the source to a site for an earthquake of a given magnitude is estimated using ground motion prediction equations (GMPEs) that have been developed based on historical records. Since historical records are scarce for induced seismicity, researchers have provided modifications to GMPEs for natural earthquakes in order to apply them to induced earthquakes. Seismic hazard The PSHA framework uses the distributions of earthquake magnitudes and ground motion propagation to estimate the seismic hazard – the probability of exceeding a certain level of ground shaking (PGA, PGV, SA, IA, etc.) in the future. Depending on the complexity of the probability distributions, either numerical methods or simulations (such as the Monte Carlo method) may be used to estimate seismic hazard. In the case of induced seismicity, the seismic hazard is not constant, but varies with time due to changes in the underlying seismicity rates. Exposure and vulnerability In order to estimate seismic risk, the hazard is combined with the exposure and vulnerability at a site or in a region. For example, if an earthquake occurs where there are no humans or structures, there would be no human impacts despite any level of seismic hazard. Exposure is defined as the set of entities (such as buildings and people) that exist at a given site or in a region.
Vulnerability is defined as the potential of impact to those entities, for example structural or non-structural damage to a building, or loss of well-being and life for people. Vulnerability can also be represented probabilistically using vulnerability or fragility functions. A vulnerability or fragility function specifies the probability of impact at different levels of ground shaking. In regions like Oklahoma, without a lot of historical natural seismicity, structures are not engineered to withstand seismic forces and as a result are more vulnerable, even at low levels of ground shaking, compared to structures in tectonic regions like California and Japan. Seismic risk Seismic risk is defined as the probability of exceeding a certain level of impact in the future. For example, it may estimate the probability of exceeding a moderate level of damage to a building in the future. Seismic hazard is combined with the exposure and vulnerability to estimate seismic risk. While numerical methods may be used to estimate risk at one site, simulation-based methods are better suited to estimating seismic risk for a region with a portfolio of entities, in order to correctly account for the correlations in ground shaking and impacts. In the case of induced seismicity, the seismic risk varies over time due to changes in the seismic hazard. Risk Mitigation Induced seismicity can cause damage to infrastructure and has been documented to damage buildings in Oklahoma. It can also lead to brine and CO2 leakages. It is easier to predict and mitigate seismicity caused by explosions. Common mitigation strategies include constraining the amount of dynamite used in a single explosion and the locations of the explosions. For injection-related induced seismicity, however, it is still difficult to predict when and where induced seismic events will occur, as well as their magnitudes. Since induced seismic events related to fluid injection are unpredictable, they have garnered more attention from the public. Induced seismicity is only part of the chain reaction from industrial activities that worry the public. Impressions of induced seismicity differ greatly between different groups of people. The public tends to feel more negatively towards earthquakes caused by human activities than natural earthquakes. Two major parts of public concern are related to damage to infrastructure and the well-being of humans. Most induced seismic events are below M 2 and are not able to cause any physical damage. Nevertheless, when the seismic events are felt and cause damage or injuries, questions arise from the public as to whether it is appropriate to conduct oil and gas operations in those areas. Public perceptions may vary based on the population and tolerance of local people. For example, in the seismically active Geysers geothermal area in Northern California, which is a rural area with a relatively small population, the local population tolerates earthquakes up to M 4.5. Actions have been taken by regulators, industry and researchers. On October 6, 2015, people from industry, government, academia, and the public gathered together to discuss how effective it was to implement a traffic light system or protocol in Canada to help manage risks from induced seismicity. Risk assessment and tolerance for induced seismicity, however, are subjective and shaped by different factors like politics, economics, and understanding from the public. Policymakers often have to balance the interests of industry with the interests of the population.
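As a rough illustration of how the hazard, exposure and vulnerability pieces above combine into a risk number (not from the source; the lognormal fragility parameters and the discretized hazard values are assumptions chosen only for demonstration):

```python
import math

def fragility(pga_g, median_g, beta):
    """Lognormal fragility curve: probability of damage given a PGA level (in g)."""
    return 0.5 * (1.0 + math.erf(math.log(pga_g / median_g) / (beta * math.sqrt(2.0))))

# Assumed fragility of a structure not engineered for seismic loads (illustrative only).
median_g, beta = 0.30, 0.5

# Assumed, coarsely discretized hazard: probability of experiencing each PGA level
# over the period of interest (a real analysis integrates a continuous hazard curve).
hazard = {0.05: 0.30, 0.10: 0.10, 0.20: 0.03, 0.40: 0.005}

risk = sum(p * fragility(pga, median_g, beta) for pga, p in hazard.items())
print(f"Approximate probability of damage over the period: {risk:.2%}")
```

For a portfolio of buildings, the same fragility evaluation would be repeated inside a simulation (e.g. Monte Carlo) so that the correlations in ground shaking and impacts noted above are preserved.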
In these situations, seismic risk estimation serves as a critical tool for quantifying future risk, and can be used to regulate earthquake-inducing activities until the seismic risk reaches the maximum level acceptable to the population. Traffic Light System One of the methods suggested to mitigate seismic risk is a Traffic Light System (TLS), also referred to as a Traffic Light Protocol (TLP), which is a calibrated control system that provides continuous, real-time monitoring and management of ground shaking from induced seismicity at specific sites. A TLS was first implemented in 2005 at an enhanced geothermal plant in Central America. For oil and gas operations, the most widely implemented version is modified from the system used in the UK. Normally there are two types of TLS. The first sets several thresholds, usually earthquake local magnitudes (ML) or ground motions, ranging from small to large. If the induced seismicity reaches the smaller thresholds, modifications of the operations are implemented by the operators and the regulators are informed. If the induced seismicity reaches the larger thresholds, operations are shut down immediately. The second type of traffic light system sets only one threshold. If this threshold is reached, the operations are halted. This is also called a "stop light system" (a minimal code sketch of this threshold logic is given below). Thresholds for the traffic light system vary between and within countries, depending on the area. However, the traffic light system is not able to account for future changes in seismicity. It may take time for changes in human activities to mitigate the seismic activity, and it has been observed that some of the largest induced earthquakes have occurred after stopping fluid injection. Nuclear explosions Nuclear explosions can cause seismic activity, but according to the USGS, the resulting seismic activity is less energetic than the original nuclear blast and generally does not produce large aftershocks. Nuclear explosions may instead release the elastic strain energy that was stored in the rock, strengthening the initial blast shockwave. U.S. National Research Council report A 2013 report from the U.S. National Research Council examined the potential for energy technologies—including shale gas recovery, carbon capture and storage, geothermal energy production, and conventional oil and gas development—to cause earthquakes. The report found that only a very small fraction of injection and extraction activities among the hundreds of thousands of energy development sites in the United States has induced seismicity at levels noticeable to the public. However, although scientists understand the general mechanisms that induce seismic events, they are unable to accurately predict the magnitude or occurrence of these earthquakes due to insufficient information about the natural rock systems and a lack of validated predictive models at specific energy development sites. The report noted that hydraulic fracturing has a low risk for inducing earthquakes that can be felt by people, but underground injection of wastewater produced by hydraulic fracturing and other energy technologies has a higher risk of causing such earthquakes. In addition, carbon capture and storage—a technology for storing excess carbon dioxide underground—may have the potential for inducing seismic events, because significant volumes of fluids are injected underground over long periods of time.
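The two-threshold traffic-light logic described above can be summarized in a few lines of code (a minimal sketch, not any regulator's actual implementation; the magnitude thresholds are purely illustrative assumptions, since real values vary between and within countries):

```python
# Toy traffic-light decision for induced-seismicity management.
# Threshold magnitudes are illustrative assumptions, not regulatory values.

AMBER_ML = 1.5  # modify operations and inform the regulator
RED_ML = 2.5    # shut down operations immediately

def traffic_light(event_ml):
    if event_ml >= RED_ML:
        return "red: halt operations immediately"
    if event_ml >= AMBER_ML:
        return "amber: modify operations and inform the regulator"
    return "green: continue normal operations"

for ml in (0.8, 1.7, 3.1):
    print(f"ML {ml}: {traffic_light(ml)}")
```

A single-threshold "stop light system" would keep only the red branch.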
List of induced seismic events Table References Further reading External links The Human-Induced Earthquake Database Map of reservoir-induced earthquakes at International Rivers WEBINAR: Yes, Humans Really Are Causing Earthquakes – IRIS Consortium One-year seismic hazard forecast for the Central and Eastern United States from induced and natural earthquakes – United States Geological Survey, 2016 (with maps) Induced Earthquakes – United States Geological Survey website Seismology Man-made disasters
Induced seismicity
Biology
5,882
13,830,818
https://en.wikipedia.org/wiki/Murray%20Batchelor
Murray Thomas Batchelor (born 27 August 1961) is an Australian mathematical physicist. He is best known for his work in mathematical physics and theoretical physics. Academic career Batchelor was educated at Chatham Public School and Chatham High School (Taree, New South Wales). He completed an Honours degree in Theoretical Physics at the University of New South Wales in 1983, graduating with 1st class honours and a University Medal. Batchelor completed a PhD in Mathematics at the Australian National University in 1987. His first postdoctoral research position was at the Lorentz Institute in Leiden. After a time as a postdoctoral research fellow in mathematics at the University of Melbourne he took up an Australian Research Council QEII Fellowship at the Australian National University. He then was awarded two successive ARC Senior Research Fellowships, followed by an ARC Professorial Fellowship in 2003. Batchelor served as Head of the Department of Theoretical Physics from mid-2005 to March 2013. He has held visiting positions at a number of universities, including the University of Oxford, the University of Tokyo and Institut Henri Poincaré. He held a Visiting Fellowship at All Souls College, Oxford during Michaelmas Term 2013. During his career, Batchelor has published over 150 peer-reviewed papers. He is a Fellow of the Australian Mathematical Society, the Australian Institute of Physics and the Institute of Physics (UK). Batchelor was Editor-in-Chief of Journal of Physics A: Mathematical and Theoretical. Prior to this he served as Mathematical Physics Section Editor (2007–2008) and as a member of the Editorial Board (2005–2006). He is currently Topical Reviews Editor (2014-). In 2008 Batchelor was awarded an Honorary Professorship at Chongqing University, China. He took up a full-time position there in 2013 under the 1000 Talents Plan. He is a General Council Member of the Asia-Pacific Center for Theoretical Physics. He holds a part-time position at the Australian National University jointly between the Department of Theoretical Physics in the Research School of Physics and Engineering and the Mathematical Sciences Institute. However, Batchelor has also shown an interest in ancient and modern stromatolites, which has led him on a number of field trips to outback Australia, including to the Pilbara Craton and to Hamelin Pool Marine Nature Reserve. In 2018 he led research into the stones used to build Buckingham Palace, determining that they were made from 200 million year old microbes. Public roles Batchelor served on the Mathematical and Information Sciences and Technology Assessment Panel for the Australian Research Quality Framework (2007). Awards University Medal, UNSW - 1983 Pawsey Medal, Australian Academy of Science - 1997 Australian Mathematical Society Medal - 1998 ARC Professorial Fellowship - 2003 Visiting Fellowship, All Souls College - 2013 1000 Talent Professor, China - 2013 Selected bibliography References External links Personal home page (ANU) Journal of Physics A: Mathematical and Theoretical Academic staff of the Australian National University Mathematical physicists Australian physicists Living people 1961 births People from Taree University of New South Wales alumni Theoretical physicists Fellows of the Australian Institute of Physics Australian National University alumni
Murray Batchelor
Physics
626
67,036,499
https://en.wikipedia.org/wiki/Bedtime%20procrastination
Bedtime procrastination is a psychological phenomenon that involves needlessly and voluntarily delaying going to bed, despite foreseeably being worse off as a result. Bedtime procrastination can occur due to losing track of time, or as an attempt to enjoy control over the nighttime due to a perceived lack of control over the events of the daytime; this latter phenomenon has recently been called revenge bedtime procrastination, a term which originated on the Chinese social media platform Weibo in 2014. Bedtime procrastination has been linked to shorter sleep duration, poorer sleep quality and higher fatigue during the day. One of the main factors in bedtime procrastination is human behaviour. Origin The "revenge" prefix is believed to have been added first in China in the late 2010s, possibly relating to the 996 working hour system (72 hours per week), since many feel that it is the only way they can take any control over their daytime self. The term "bedtime procrastination" became popular following a 2014 study from the Netherlands. Writer Daphne K. Lee popularised the term in a Twitter post using the phrase "revenge bedtime procrastination" (報復性熬夜), describing it as "a phenomenon in which people who don't have much control over their daytime life refuse to sleep early in order to regain some sense of freedom during late night hours." Bedtime procrastination is now defined in multiple ways, such as "going to bed later than planned" and "delaying sleep." Causes An individual may procrastinate sleep due to a variety of causes. The person may not consciously be avoiding sleep, but rather continuing to complete activities they perceive as more enjoyable than sleep (such as watching television or browsing social media). There are many distractions in the 21st century; obtaining distractions to delay sleep is much easier than in earlier decades. Problematic smartphone use directly causes bedtime procrastination. People who extensively use a smartphone are more likely to delay their bedtime because they find it hard to stop using the phone and keep getting distracted by it before going to sleep. These people enjoy the temporary satisfaction of smartphone use and want more time to entertain themselves. In addition, bedtime procrastination plays a mediating role between smartphone addiction and depression and anxiety. Habitual smartphone overuse results in bedtime procrastination, and shorter sleep duration and lower sleep quality may trigger many negative emotions responsible for depression and anxiety. Statistics show that disturbed sleep patterns are increasingly common. In 2013, an estimated 40% of U.S. adults slept less than the recommended amount. In Belgium, where data was collected for the study, 30% of adults reported difficulty sleeping, and 13% reported taking sleeping pills. A 2014 study of Dutch individuals concluded that low self-regulation could cause bedtime procrastination. Due to COVID-19, 40% more people have experienced sleeping problems. A 2021 study found that boredom also leads to bedtime procrastination. Boredom increases inattention, which leads to increased bedtime procrastination. Another 2014 study of 145 people found that 43% of the self-labelled bedtime procrastinators did not have a set bedtime or routine. This study suggests that inattention is a major factor in bedtime procrastination, because explicit awareness need not be active when procrastinating.
People do not procrastinate intentionally, but as a result of poor self-regulation. A 2018 study of 19 people identified three bedtime procrastination themes: deliberate procrastination, mindless procrastination and strategic delay. Deliberate procrastination results from a person consciously believing they deserve more time for themselves, causing them to intentionally stay up later. Mindless procrastination results from losing track of time during one's daily tasks and consequently staying up later without intending to. Strategic delay results from purposely staying up late in order to fall asleep more easily. Strategic delay has also been found to be linked with undiagnosed insomnia. A 2022 cross-cultural study evaluated 210 employees in the United States and 205 employees in China. The results show that off-time work-related smartphone use may provoke bedtime procrastination. The negative impact of smartphone use on bedtime procrastination is more significant in the United States than in China. The research shows that employees in the United States have a more resistant attitude than employees in China when it comes to work after hours, resulting in greater self-control depletion and a higher likelihood of bedtime procrastination. Researchers have also found that bedtime procrastination's main causes are low self-control and increased stress. Psychological influences Bedtime procrastinators engaged in more leisure and social activities in the three hours before bedtime. High and low procrastinators spend similar amounts of time watching TV and using computers. In the three hours before bedtime, high bedtime procrastinators spent 79.5 minutes on their phones, while low bedtime procrastinators spent 17.6 minutes on their phones. People who stayed up late reported more symptoms of depression and anxiety, lower sleep quality, and a higher risk of insomnia than those who went to bed earlier. Research from a survey of 317 participants in 2022 has shown that people's subjective perception of time is associated with bedtime procrastination. Sleep time perceived as the end of the day prompts people to think about the rest of their time. In the research, people who procrastinate before sleep often use their evening time to enjoy their favorite activities as a reward for the hard work of the day, focusing on immediate rewards and immediate benefits. Bedtime procrastination causes people to feel that time is passing quickly, which can lead to anxiety and stress. For people who do not sleep well, bedtime is an abominable time. Sleep can become a task and a burden that increases people's worry about getting enough sleep, leading to nervousness and increased psychological stress. This can lead to a variety of negative health outcomes, including fatigue, mood swings, and difficulty concentrating. Women, students, and "night owls" (later chronotypes) are most likely to experience bedtime procrastination. People with high daytime stress levels are more prone to bedtime procrastination. Bedtime procrastination comes in many other forms as well, such as delaying going to sleep (sleep procrastination) and delaying trying to fall asleep once in bed (while-in-bed procrastination). One third of Chinese students showed signs of sleep procrastination. Signs and symptoms According to researchers, there are three key factors that differentiate between bedtime procrastination and staying up late: The individual experiencing bedtime procrastination must be decreasing their overall sleep time every night.
There must be no reason for them to stay up late (such as location or sickness). The individual must be aware that the loss in sleep is impacting them negatively, but they do not care to change their routine. People with higher cell phone use report more signs of bedtime procrastination. The media environment creates the atmosphere for sleep procrastination by providing plenty of fun pastimes before lights out. Consequences A person who experiences bedtime procrastination is likely to face effects related to the delayed sleep. A meta-analysis found that greater bedtime procrastination was associated with poorer sleep quality, shorter sleep duration, and increased fatigue throughout the day. Bedtime procrastination results in poor sleep quality and can be a sign of poor self-regulation. Bedtime procrastinators are more likely to lose willpower, lose control of themselves, and fidget all the time, and can easily fall into a state of low interest, high dissatisfaction, and high distraction. Bedtime procrastination can cause sleep deprivation, which leads to slow thinking, low attention levels, poor memory, poor decision making, stress, anxiety, and irritation. If sleep deprivation is not treated quickly, long-term consequences can include heart disease, diabetes, obesity, a weakened immune system, pain, hormone issues, and mental health issues. Bedtime procrastination can lead to short sleep, which can increase the risk of psychosis and may cause people to suffer from depression. People who have bedtime procrastination suffer from sleep disturbance and need medication to fall asleep. Bedtime procrastination can also lead to naps throughout the day to compensate for lack of sleep. Prevention Media use interventions as treatment strategies for sleep insufficiency have been targeted mainly at reducing the volume of media use. This might not be a feasible scenario for the contemporary and future media user, given the immense proliferation of media and the experience of being connected 24/7. Using a self-control perspective on electronic media use and bedtime procrastination could provide novel ways of approaching this issue. As the endpoint of media use (which often implies getting ready to go to bed) is dependent on the level of self-control, strategies aimed at improving self-control could be a valuable avenue for future exploration. It is highly important to prevent bedtime procrastination because getting the right amount of sleep is essential for the human body to function properly. The most common consequences of lack of sleep are grogginess, lack of concentration and mood swings, and there are some long-term detrimental effects on both physical and mental health. Here are a few ways to prevent bedtime procrastination: Turning off electronic devices at least one hour before bed. In a darker environment, humans produce the sleep hormone melatonin; therefore, people should limit the light they receive before going to sleep. Taking a hot shower or bath to reduce stress. Writing down thoughts, feelings, and experiences that stood out throughout the day. Maintaining a regular wake-up time and bedtime, including on non-working days. Setting a bedtime routine. Snacking on nuts, seeds, and pulses, which are sources of tryptophan, which helps produce melatonin. Avoiding alcohol or caffeine late in the afternoon or evening. Taking melatonin supplements (with caution). Managing one's time by doing things early in the day to avoid staying up late and losing essential sleep time.
Taking Vitamin D and magnesium supplements that may help induce sleep. Setting boundaries at work. Reducing internet use. Practicing time management and priority-setting skills. Using a method called mental contrasting with implementation intentions (MCII) developed by Gabriele Oettingen. References External links Earliest known reference to the term Study on why people delay their bedtimes Bedtime procrastination from smartphone addiction Sleep disorders
Bedtime procrastination
Biology
2,221
61,725,043
https://en.wikipedia.org/wiki/Remote%20Utilities
Remote Utilities is remote desktop software that allows a user to remotely control another computer through a proprietary protocol, see the remote computer's desktop, and operate its keyboard and mouse. The program utilizes the client-server model and consists of two primary components: the Host, which is installed on the remote computer, and the Viewer, which is installed on the local PC. Other modules include the Agent, Remote Utilities Server (RU Server) and the portable Viewer. Features and architecture Remote Utilities provides full control over the remote system and allows the user to view the remote computer without disrupting its user. The connection is established via an IP address or the Internet ID, and an IP filtering system allows access to be restricted to certain IP addresses. It has the following connection modes: Full control View only File transfer Task manager Terminal Inventory manager RDP Integration Text chat Remote registry Screen recorder Execute Power control Send message Voice and Video chat Remote settings The Internet-ID The Internet-ID technology became available in Remote Utilities starting with version 5.0. It allows the user to bypass software and hardware firewalls and NAT devices when setting up a remote connection over the Internet. Remote Utilities Agent Remote Utilities Agent, introduced with the release of version 5.1, is a program module for spontaneous support that runs without installation or administrative privileges. Remote Utilities Server Remote Utilities Server (RU Server) is a program module which serves as a self-hosted replacement for Remote Utilities' hosted relay servers. RU Server was made available with the Remote Utilities version 5.1 release. The most recent version of RU Server as of December 22, 2021, is 3.1.0.0. History The developer company Remote Utilities, formerly known as Usoris Systems, was founded in 2009. The predecessor project, Remote Office Manager, was started in 2004 and was available for free download and use from 2004 until early 2010. The current name, Remote Utilities, was given to version 4.3 in mid-2010 as part of a rebranding effort. After version 4.3, Remote Utilities released version 5.0 in 2011 with a major update. On 27 April 2012 there was a minor update, version 5.2, which included new features, a free license, and an updated licensing model. Operating system support Remote Utilities was initially developed for Microsoft Windows. It currently supports Windows, macOS (viewer only), Linux (viewer only), iOS (viewer only) and Android (viewer only). Remote Utilities has also developed applications for iOS and Android devices allowing users to control computers remotely with their phones. Reception Following Remote Utilities' launch, the software received consecutive positive reviews in PC World by editors in 2011 and 2012. It was featured in TechRadar's list of the best free remote desktop software in 2019. References External links Official website Remote desktop Computer access control Windows remote administration software MacOS remote administration software
Remote Utilities
Engineering
568
58,888,319
https://en.wikipedia.org/wiki/Alison%20Davenport
Alison Jean Davenport is the Professor of Corrosion Science at the School of Metallurgy and Materials, University of Birmingham. Education Davenport studied the Natural Sciences Tripos at the University of Cambridge, where she was a member of King's College, Cambridge. She remained there for her graduate studies, earning her PhD in 1987. Her PhD was in metallurgy, investigating the oxide layers that form on top of metals. Research and career Davenport spent eight years as a staff scientist at Brookhaven National Laboratory, looking at synchrotron X-ray techniques for corrosion and passivation of alloys. In 1995 Davenport joined the University of Manchester. She was Associate Editor of the Journal of the Electrochemical Society between 1995 and 1997. She has carried out several experiments at the Diamond Light Source and is a member of the I18 working group. She was appointed to a position at the University of Birmingham, where she looked at the relationship between alloy microstructures and localised corrosion chemistry. She developed X-ray micro-tomography to study the growth of small cracks, allowing her to understand the transition from pits to cracks in metals. She studies the relationship between microstructure and corrosion in stainless steel, titanium and aluminium. She looked at the impact of grain boundary crystallography on intergranular corrosion. Davenport uses X-ray imaging to study corrosion. This information informs life-time prediction models. She works with synchrotron facilities to develop in situ characterisation techniques to understand the mechanisms of corrosion. Davenport leads an Engineering and Physical Sciences Research Council (EPSRC) consortium to develop synchrotron methods to look at nuclear waste storage. She has served as an international consultant on nuclear waste storage. She collaborated with Owen Addison on how corrosion impacts biomedical implants. Her group monitor the atmospheric corrosion of stainless steel alloys and have found that morphology is very sensitive to relative humidity and residual ferrite. They identified how bipolar plates corrode in proton-exchange membrane fuel cells. Awards and honours In 2003 Davenport won the NACE International H. H. Uhlig Award for outstanding efforts in corrosion education. In 2008 she chaired the Gordon Research Conference on aqueous corrosion. She was made a member of the Innovate UK Advanced Materials Leadership Council and the Government of the United Kingdom expert group on materials science. She was appointed a professor at the University of Birmingham in 2015. In 2016 she delivered the Birmingham Metallurgical Association lecture. She is on the working group of the Collaborative Computational Project in Tomographic Imaging. She is part of the Institute of Materials, Minerals and Mining and is involved with their women in materials science activities. She was Head of the School of Metallurgy and Materials at the University of Birmingham (2016–2022). Davenport was appointed an Officer of the Order of the British Empire (OBE) for services to electrochemistry in the 2018 Birthday Honours. References British materials scientists Women materials scientists and engineers Alumni of the University of Cambridge Academics of the University of Birmingham Year of birth missing (living people) Living people
Alison Davenport
Materials_science,Technology
609
5,061,419
https://en.wikipedia.org/wiki/Vinca%20alkaloid
Vinca alkaloids are a set of anti-mitotic and anti-microtubule alkaloid agents originally derived from the periwinkle plant Catharanthus roseus (basionym Vinca rosea) and other vinca plants. They block beta-tubulin polymerization in a dividing cell. Sources The Madagascan periwinkle Catharanthus roseus L. is the source for a number of important natural products, including catharanthine and vindoline and the vinca alkaloids it produces from them: leurosine and the chemotherapy agents vinblastine and vincristine, all of which can be obtained from the plant. The newer semi-synthetic chemotherapeutic agent vinorelbine is used in the treatment of non-small-cell lung cancer and is not known to occur naturally. However, it can be prepared either from vindoline and catharanthine or from leurosine, in both cases by synthesis of anhydrovinblastine, which "can be considered as the key intermediate for the synthesis of vinorelbine." The leurosine pathway uses the Nugent–RajanBabu reagent in a highly chemoselective de-oxygenation of leurosine. Anhydrovinblastine is then reacted sequentially with N-bromosuccinimide and trifluoroacetic acid followed by silver tetrafluoroborate to yield vinorelbine. Applications Vinca alkaloids are used in chemotherapy for cancer. They are a class of cell cycle–specific cytotoxic drugs that work by inhibiting the ability of cancer cells to divide: Acting upon tubulin, they prevent it from forming into microtubules, a necessary component for cellular division. The vinca alkaloids thus prevent microtubule polymerization, as opposed to the mechanism of action of taxanes. Vinca alkaloids are now produced synthetically and used as drugs in cancer therapy and as immunosuppressive drugs. These compounds include vinblastine, vincristine, vindesine, and vinorelbine. Additional researched vinca alkaloids include vincaminol, vineridine, and vinburnine. Vinpocetine is a semi-synthetic derivative of vincamine (sometimes described as "a synthetic ethyl ester of apovincamine"). Minor vinca alkaloids include minovincine, methoxyminovincine, minovincinine, vincadifformine, desoxyvincaminol, and vincamajine. References External links Chemotherapeutic vinca alkaloids Mitotic inhibitors Plant toxins
Vinca alkaloid
Chemistry
578
66,876,349
https://en.wikipedia.org/wiki/Spiridoula%20Matsika
Spiridoula Christos Matsika (born 1971) is a Greek theoretical chemist. She was elected as a fellow of the American Physical Society in 2014. Education Spiridoula Christos Matsika was born in 1971 in Greece; she attended the National and Kapodistrian University of Athens for her bachelor's degree in chemistry, graduating in 1994. She completed her PhD at the Ohio State University, graduating in 2000 under the supervision of Russell M. Pitzer. Following the completion of her PhD, she was a postdoctoral researcher at Johns Hopkins University under David Yarkony for three years. Career In 2003 she was hired at Temple University as an assistant professor in its College of Science and Technology. She was promoted to associate professor in 2009 and full professor in 2014. Awards and honors In 2005 she was awarded the National Science Foundation CAREER Award. She was awarded an Alexander von Humboldt Foundation fellowship in 2013. In 2014 she was elected as a fellow of the American Physical Society "for her contributions to understanding the dynamics of excited molecules around conical intersections and method development to calculate such at the highest levels of theory". References Living people 1971 births National and Kapodistrian University of Athens alumni Ohio State University alumni Temple University faculty Fellows of the American Physical Society Theoretical chemists Greek women scientists Greek chemists
Spiridoula Matsika
Chemistry
264
5,491,283
https://en.wikipedia.org/wiki/Sealant
Sealant is a substance used to block the passage of fluids through openings in materials, a type of mechanical seal. In building construction sealant is sometimes synonymous with caulk (especially if acrylic latex or polyurethane based) and also serves the purposes of blocking dust, sound and heat transmission. Sealants may be weak or strong, flexible or rigid, permanent or temporary. Sealants are not adhesives, but some have adhesive qualities and are called adhesive-sealants or structural sealants. History Sealants were first used in prehistory, in the broadest sense, as mud, grass and reeds to seal dwellings from the weather, such as the daub in wattle and daub and thatching. Natural sealants and adhesive-sealants included plant resins such as pine pitch and birch pitch, bitumen, wax, tar, natural gum, clay (mud) mortar, lime mortar, lead, blood and egg. In the 17th century, glazing putty made with linseed oil and chalk was first used to seal window glass; later, other drying oils were also used to make oil-based putties. In the 1920s, polymers such as acrylic polymers, butyl polymers and silicone polymers were first developed and used in sealants. By the 1960s, synthetic-polymer-based sealants were widely available. Function Sealants, despite not having great strength, convey a number of properties. They seal top structures to the substrate, and are particularly effective in waterproofing processes by keeping moisture out of (or in) the components in which they are used. They can provide thermal and acoustical insulation, and may serve as fire barriers. They may have electrical properties as well. Sealants can also be used for simple smoothing or filling. They are often called upon to perform several of these functions at once. A caulking sealant has three basic functions: it fills a gap between two or more substrates; it forms a barrier due to the physical properties of the sealant itself and by adhesion to the substrate; and it maintains sealing properties for the expected lifetime, service conditions, and environments. The sealant performs these functions by way of correct formulation to achieve specific application and performance properties. Other than adhesives, however, there are few functional alternatives to the sealing process. Soldering or welding can perhaps be used as alternatives in certain instances, depending on the substrates and the relative movement that the substrates will see in service. However, the simplicity and reliability offered by organic elastomers usually make them the clear choice for performing these functions. Types A sealant may be a viscous material with little or no flow, which stays where it is applied, or it can be thin and runny, allowing it to penetrate the substrate by means of capillary action. Anaerobic acrylic sealants (generally referred to as impregnants) are the most desirable, as they cure in the absence of air, unlike surface sealants, which require air as part of their cure mechanism. Once applied, a sealant changes state to become solid and is used to prevent the penetration of air, gas, noise, dust, fire, smoke, or liquid from one location through a barrier into another. Typically, sealants are used to close small openings that are difficult to shut with other materials, such as concrete, drywall, etc. Desirable properties of sealants include insolubility, corrosion resistance, and adhesion.
Uses of sealants vary widely, and sealants are used in many industries, for example the construction, automotive and aerospace industries. Sealants can be categorized according to varying criteria, e.g. the reactivity of the product in the ready-to-use condition or its mechanical behavior after installation. Often the intended use or the chemical basis is used to classify sealants, too. A typical classification system for the most commonly used sealants is shown below. Types of sealants fall between the higher-strength, adhesive-derived sealers and coatings at one end, and extremely low-strength putties, waxes, and caulks at the other. Putties and caulks serve only one function – i.e., to take up space and fill voids. Sealants may be based on silicone; a number of other chemistries are also in common use. Common areas of use Aerospace sealants Firewall Sealants – a two-component firewall sealant intended for use as a coating, sealant or filleting material in the construction, repair and maintenance of aircraft; especially useful where fire resistance, exposure to phosphate ester fluids, and/or exposure to extreme temperatures, −65 °F (−54 °C) to 400 °F (204 °C), are major considerations. Fuel Tank Sealants – high-temperature, fuel-resistant sealants intended for use on integral fuel tanks, with excellent resistance to other fluids such as water, alcohols, synthetic oils and petroleum-based hydraulic fluids. Access Door Sealants – access door sealants intended for use on integral fuel tanks and pressurized cabins, with low adhesion characteristics and excellent resistance to other fluids such as water, alcohols, synthetic oils and petroleum-based hydraulic fluids. Windshield Sealants – sealants useful in a variety of applications where quick setting is desired, for example windshield sealing, repair caulks and adhesives. Comparison with adhesives The main difference between adhesives and sealants is that sealants typically have lower strength and higher elongation than adhesives do. When sealants are used between substrates having different thermal coefficients of expansion or differing elongation under stress, they need to have adequate flexibility and elongation. Sealants generally contain inert filler material and are usually formulated with an elastomer to give the required flexibility and elongation. They usually have a paste consistency to allow filling of gaps between substrates. Low shrinkage after application is often required. Sealants also typically require a sufficient compression set, especially when the sealant is a foam gasket. Many adhesive technologies can be formulated into sealants. References Seals (mechanical) Materials
Sealant
Physics
1,266
28,868,086
https://en.wikipedia.org/wiki/Quadrel
Quadrel is a puzzle video game developed by Loriciels and released in June 1991. It was released for MS-DOS, Amiga, Atari ST, and Amstrad CPC. Gameplay The game consists of a series of screens composed of patterns created by criss-crossing lines. The object of the game is to color the image entirely using the four or fewer colors available in each screen. The same color cannot be used to fill in adjacent shapes, requiring the player to form a strategy in order to complete each screen. Each screen is treated as a separate game. Two modes of play are available. The basic mode is a timed challenge for a single player; the player is scored by the number of seconds they take to complete each screen. The main mode is for two players or a single player against the computer. In this mode the number of times each color may be used is limited. The player who cannot take their turn, due to the placement of colors on the screen or the lack of appropriate colors available to use, loses. Reception The game received poor to mixed reviews, varying from 37% to 76%. References External links 1991 video games Amiga games Amstrad CPC games Atari ST games DOS games Loriciel games NP-complete problems Puzzle video games Single-player video games Video games developed in France
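The adjacent-shapes rule is essentially a map-coloring constraint. Below is a minimal sketch (not from the game's code; the region layout and color assignment are invented for illustration) of checking whether an assignment of colors to regions respects that rule:

```python
# Toy check of Quadrel's core rule: adjacent regions may not share a color.
# The adjacency map and color assignment are invented examples.

adjacency = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C"},
}

assignment = {"A": "red", "B": "blue", "C": "green", "D": "red"}

def is_valid(adjacency, assignment):
    return all(
        assignment[region] != assignment[neighbor]
        for region, neighbors in adjacency.items()
        for neighbor in neighbors
    )

print(is_valid(adjacency, assignment))  # True: no two adjacent regions share a color
```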
Quadrel
Mathematics
264
15,721,662
https://en.wikipedia.org/wiki/Kathryn%20Uhrich
Kathryn Uhrich (born 1965) is Dean of the College of Natural and Agricultural Sciences at the University of California, Riverside, and founder of Polymerix Corporation. She has received many awards for her research and work, including the ACS Buck-Whitney Award and the Sioux Award. She was elected a fellow of both the National Academy of Inventors and the American Chemical Society in 2014. Research Her research mainly focuses on biodegradable polymers for use in dental and medical applications. These polymers consist of esters, amides and anhydrides, all of which are susceptible to hydrolysis, thus ensuring the breakdown of the polymer in the body's watery milieu. The oldest version of aspirin came from Hippocrates in the fifth century BC, while the latest version, PolyAspirin, comes from Uhrich's lab at Rutgers University. PolyAspirin consists of anhydrides and esters that hydrolytically degrade into the active ingredient in aspirin (salicylic acid). Her research was highlighted in "Aspirin: The Remarkable Story of a Wonder Drug" by Diarmuid Jeffreys. Although the polymer was originally designed for biodegradable sutures, PolyAspirin is now undergoing clinical trials as a material for a new type of cardiac stent. This biodegradable stent controls the inflammation effects occurring after angioplasty, called restenosis, and disappears when no longer needed. Uhrich has collaborated with Professor Michael Tchikindas in the Rutgers Food Science department to investigate PolyAspirin and other plant-based polymers as a method for prevention of biofilm formation by microbes such as E. coli and Salmonella in food. In 1997, Uhrich first patented PolyAspirin. All of Uhrich's inventions were originally licensed to Polymerix Corporation in 2000, to develop biodegradable polymerized drugs, and are now being licensed through Rutgers. The technology includes more efficient delivery to targeted areas such as orthopedic implants, coronary stents and arthritic joints. Uhrich has at least 16 patents in the US and 160 patent applications pending worldwide, all of which are coordinated by Rutgers OCLTT. Uhrich's second research line is on polymeric micelles. Like soap, these polymers have a hydrophilic 'head' and a hydrophobic 'tail'. These molecules form a spherical particle in which a hydrophobic drug molecule can be packed. Uhrich's research group investigates two general classes of nanoscale polymeric micelles: amphiphilic star-like macromolecules (ASMs) and amphiphilic scorpion-like macromolecules (AScMs); both systems facilitate drug transport. ASMs behave as unimolecular micelles, where four polymer particles are covalently bound. AScMs consist of part of the star-like macromolecules, and must first aggregate to form micellar structures. Because AScMs are easier to synthesize and have similar properties, the polymers are undergoing further proof-of-principle research in gene delivery of siRNA and plasmid DNA with Professor Charlie Roth. Also, the anionic (negatively charged) scorpion-like molecules inhibit cellular uptake of oxidized LDL, the 'bad' cholesterol in the body. This type of LDL is usually incorporated in macrophages, resulting in foam cell formation and the formation of an atherosclerotic plaque which narrows or blocks the arteries. Contrary to most anti-atherosclerotic drugs, the anionic polymer only targets LDL particles and not HDL particles. The delivery of these polymeric particles is now undergoing investigation with Professor Prabhas Moghe.
Thirdly, her group is interested in micro-sized striped patterns of protein (such as serum albumin, immunoglobulin G, laminin and other growth factors) on biocompatible polymeric substrates (such as poly(methylmethacrylate) or PMMA). These proteins promote neuron cell growth, but are not always large enough to bridge the gap caused by injury and restore function to the nerve. Thus, Uhrich investigates the optimal dimensions for promoting neuronal growth, in conjunction with Professors Helen Buettner, Martin Grumet and David Shreiber, and the most effective patterning method to generate protein gradients. More recently, Uhrich's group has been collaborating with Professor Sally Meiners of UMDNJ to create nerve guidance conduits from biodegradable polymers. Awards 2014, Fellow, American Chemical Society 2014, Fellow, National Academy of Inventors 2013, Sioux Award 2013, Common Pathways Award, New Jersey Association for Biomedical Research 2007, Blavatnik Awards for Young Scientists, Finalist, New York Academy of Science 2006, Hall of Fame: Technology (Polymerix), New Jersey Technology Council 2005, New Jersey's Top Pharmaceutical Companies (Polymerix), NJBiz 2005, ACS Buck-Whitney Award 2004, Outstanding Scientist – New Jersey Association for Biomedical Research 2003, Thomas Alva Edison Patent Award: Medical/Technology Transfer, Research and Development Council of New Jersey 2003, Fellow, American Institute for Medical and Biological Engineering 2000–2004, National Science Foundation CAREER Award 1996–1998, Johnson & Johnson Discovery Award, Johnson & Johnson Inc. Education Grand Forks Central High School University of North Dakota, B.S. 1986 Cornell University, M.S. 1989 Cornell University, Ph.D. 1992 Professional career researcher in the division of Lithographic Materials and Chemical Engineering, AT&T Bell Laboratories, 1992–1993 researcher in the Corporate Research Laboratories, Eastman Kodak Company, 1990 researcher for the Energy Research Center, 1984–1986 References Cornell University alumni Living people American organic chemists Polymer scientists and engineers Rutgers University faculty 1965 births American women chemists University of North Dakota alumni Fellows of the American Institute for Medical and Biological Engineering American women academics 21st-century American women Chemists from North Dakota
Kathryn Uhrich
Chemistry
1,241
65,219,139
https://en.wikipedia.org/wiki/Radium%20sulfate
Radium sulfate (or radium sulphate) is an inorganic compound with the formula RaSO4 and an average molecular mass of 322.088 g/mol. This white salt is the least soluble of all known sulfate salts. It was formerly used in radiotherapy and smoke detectors, but these uses have been phased out in favor of less hazardous alternatives. Properties Radium sulfate crystallizes in the same structure as barium sulfate. It forms crystals in the orthorhombic crystal system, with a unit cell of dimensions a = 9.13 Å, b = 5.54 Å and c = 7.31 Å. The unit cell volume is 369.7 Å3. The distance from the radium ion to oxygen is 2.96 Å, and the sulfur-to-oxygen bond length in the sulfate ion is 1.485 Å. In this compound the ionic radius of the radium ion is 1.66 Å, and its coordination number is ten. Radium sulfate can form solid solutions with the sulfates of strontium, barium or lead. References Radium compounds Sulfates
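The reported cell volume can be checked directly from the orthorhombic lattice parameters quoted above, since for an orthorhombic cell the volume is simply a × b × c. The short script below is an illustrative check only; the use of Python and the variable names are not part of the cited crystallographic work:

```python
# Volume of an orthorhombic unit cell: V = a * b * c.
a, b, c = 9.13, 5.54, 7.31  # lattice parameters in angstroms, as quoted above

volume = a * b * c
print(f"V = {volume:.1f} A^3")  # prints ~369.7, matching the reported cell volume
```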
Radium sulfate
Chemistry
223
32,754,474
https://en.wikipedia.org/wiki/Gould%20polynomials
In mathematics the Gould polynomials Gn(x; a,b) are polynomials introduced by H. W. Gould and named by Roman in 1984. They are given by where so References Polynomials
Gould polynomials
Mathematics
40
31,899,711
https://en.wikipedia.org/wiki/1-Chloro-1%2C1-difluoroethane
1-Chloro-1,1-difluoroethane (HCFC-142b) is a haloalkane with the chemical formula CH3CClF2. It belongs to the hydrochlorofluorocarbon (HCFC) family of man-made compounds that contribute significantly to both ozone depletion and global warming when released into the environment. It is primarily used as a refrigerant, where it is also known as R-142b and by trade names including Freon-142b. Physicochemical properties 1-Chloro-1,1-difluoroethane is a highly flammable, colorless gas under most atmospheric conditions. It has a boiling point of -10 °C. Its critical temperature is near 137 °C. Applications HCFC-142b is used as a refrigerant, as a blowing agent for foam plastics production, and as feedstock to make polyvinylidene fluoride (PVDF). It was introduced to replace the chlorofluorocarbons (CFCs) that were initially undergoing a phase-out per the Montreal Protocol, but HCFCs themselves still have significant ozone-depleting potential. As of 2020, HCFCs have been replaced by non-ozone-depleting HFCs in many applications. In the United States, the EPA stated that HCFCs could be used in "processes that result in the transformation or destruction of the HCFCs", such as using HCFC-142b as a feedstock to make PVDF. HCFCs could also be used in equipment that was manufactured before January 1, 2010. The point of these new regulations was to phase out HCFCs in much the same way that CFCs were phased out. HCFC-142b production in non-Article 5 countries such as the United States was banned on January 1, 2020, under the Montreal Protocol. Production history According to the Alternative Fluorocarbons Environmental Acceptability Study (AFEAS), global production of HCFC-142b in 2006 (excluding India and China, which did not report production data) was 33,779 metric tons, and production increased by 34% from 2006 to 2007. For the most part, concentrations of HCFCs in the atmosphere match the emission rates that were reported by industries. The exception is HCFC-142b, whose atmospheric concentration has been higher than the reported emission rates suggest it should be. Environmental effects The concentration of HCFC-142b in the atmosphere grew to over 20 parts per trillion by 2010. It has an ozone depletion potential (ODP) of 0.07. This is low compared to the ODP=1 of trichlorofluoromethane (CFC-11, R-11), which also grew about ten times more abundant in the atmosphere by 1985 (prior to introduction of HCFC-142b and the Montreal Protocol). HCFC-142b is also a minor but potent greenhouse gas. It has an estimated lifetime of about 17 years and a 100-year global warming potential ranging from 2,300 to 5,000. This compares to the GWP=1 of carbon dioxide, which had a much greater atmospheric concentration near 400 parts per million in 2020. See also IPCC list of greenhouse gases List of refrigerants References Refrigerants Hydrochlorofluorocarbons Greenhouse gases Ozone-depleting chemical substances
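As a rough, illustrative aside (not taken from the sources cited above), a 100-year global warming potential converts an emitted mass of gas into an equivalent mass of CO2 by simple multiplication. The sketch below applies the GWP range quoted above to an assumed 1-tonne release of HCFC-142b; the release size is a hypothetical example value:

```python
# CO2-equivalent emissions: emitted mass multiplied by the GWP over the same time horizon.
mass_tonnes = 1.0               # hypothetical release of HCFC-142b, in tonnes
gwp_low, gwp_high = 2300, 5000  # 100-year GWP range quoted above

print(f"1 t of HCFC-142b is roughly {mass_tonnes * gwp_low:,.0f} "
      f"to {mass_tonnes * gwp_high:,.0f} t CO2-equivalent")
```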
1-Chloro-1,1-difluoroethane
Chemistry,Environmental_science
709
17,392,720
https://en.wikipedia.org/wiki/Journal%20of%20the%20Experimental%20Analysis%20of%20Behavior
The Journal of the Experimental Analysis of Behavior is a peer-reviewed academic journal of psychology that was established in 1958 by B. F. Skinner and Charles Ferster. JEAB publishes empirical research related to the experimental analysis of behavior and is published by Wiley-Blackwell on behalf of the Society for the Experimental Analysis of Behavior. The current editor-in-chief is Mark Galizio (University of North Carolina, Wilmington). The 2022 impact factor is 2.7. The mission of the Journal of the Experimental Analysis of Behavior (JEAB) is "the original publication of experiments relevant to the behavior of individual organisms." See also Journal of Applied Behavior Analysis (JABA) Behavior Modification (journal) Society for the Experimental Analysis of Behavior References External links Behaviorism journals Wiley-Blackwell academic journals English-language journals Academic journals established in 1958 Experimental psychology journals
Journal of the Experimental Analysis of Behavior
Biology
175
72,182,853
https://en.wikipedia.org/wiki/Rye%20rust
There are several rusts (Pucciniales syn. Uredinales) which affect rye (Secale cereale) including: Puccinia spp.: Stem rust (Puccinia graminis) Leaf rust (Puccinia triticina) Crown rust (Puccinia coronata) See also List of rye diseases Rye diseases Fungal plant pathogens and diseases
Rye rust
Biology
79
6,248,586
https://en.wikipedia.org/wiki/Cistrome
In simple words, the cistrome refers to a collection of regulatory elements of a set of genes, including transcription factor binding sites and histone modifications. More specifically, it is "the set of cis-acting targets of a trans-acting factor on a genome-wide scale, also known as the in vivo genome-wide location of transcription factor binding sites or histone modifications". The term cistrome is a portmanteau of cistr (from cistron) and -ome (from genome). The term cistrome was coined by investigators at the Dana–Farber Cancer Institute and Harvard Medical School. Technologies such as chromatin immunoprecipitation combined with microarray analysis ("ChIP-on-chip") or with massively parallel DNA sequencing ("ChIP-Seq") have greatly facilitated the definition of the cistrome of transcription factors and other chromatin-associated proteins. References Further reading Molecular genetics
Cistrome
Chemistry,Biology
189
20,646,279
https://en.wikipedia.org/wiki/Wedgie
A wedgie is the act of forcibly pulling a person's underpants upwards from the back. The act is often performed as a school prank or a form of bullying. Wedgies are commonly featured in popular works, either as a form of low comedy or as a behaviour representative of bullying. In such works, briefs are usually the type of underpants that are worn by the victim. A wedgie can also occur in artistic gymnastics, particularly among females who wear leotards. Dangers Wedgies, especially when performed on males, can be dangerous, potentially causing testicular or scrotal damage. An incident in 2004 involving a ten-year-old boy required reattachment of a testicle to the scrotum. Variations As a prank or form of bullying, there are a number of variants of the normal, or traditional, wedgie. It is impractical to list every variant, as the names and processes can be rather subjective; however, there are a few better-known variants of the wedgie. The melvin is a variant where the victim's underpants are pulled up from the front, to cause injury, or at least severe pain, to the victim's genitals. The atomic wedgie entails hoisting the waistband of the receiver's underwear up and over their head. The hanging wedgie is a variant in which the victim is hung by their underpants, elevated above the ground. The ripping wedgie involves the tearing of the victim's underpants, sometimes ripping off a portion (such as the waistband) of them, or forcibly removing the garment entirely. See also List of practical joke topics Happy corner Indian burn Wedgies, a 2008 mini series-focused block on Cartoon Network References External links Harassment and bullying Undergarments Practical jokes
Wedgie
Biology
364
224,136
https://en.wikipedia.org/wiki/Abilene%20paradox
The Abilene paradox is a collective fallacy, in which a group of people collectively decide on a course of action that is counter to the preferences of most or all individuals in the group, while each individual believes it to be aligned with the preferences of most of the others. It involves a breakdown of group communication in which each member mistakenly believes that their own preferences are counter to the group's, and therefore does not raise objections. They even go so far as to state support for an outcome they do not want. A common phrase related to the Abilene paradox is a desire to not "rock the boat". As in groupthink, group members jointly decide on a course of action that they would not choose as individuals. However, while in groupthink individuals undergo self-deception and distortion of their own views (driven by, for example, not wanting to suffer in anticipation of a future they sense they cannot avoid by speaking out), in the Abilene paradox individuals are unable to perceive the views or preferences of others, or to manage an agreement. Overview The term was introduced by management expert Jerry B. Harvey in his 1974 article "The Abilene Paradox: The Management of Agreement". The name of the phenomenon comes from an anecdote that Harvey uses in the article to elucidate the paradox, in which a Texas family takes a long, hot drive to Abilene for a dinner that none of them actually wanted, each member having agreed to the trip only because they believed the others wished to go. The Abilene paradox consists of five components: The first component refers to mutual agreement of a group that the current situation is not acceptable. However, on the individual level, the members may be satisfied with the existing setting after they have compared it with proposed alternatives. The second component stands for ineffective communication within the group, when several members express considerable support for a decision because they assume it is the desire of others. This process of communication reinforces assumptions that individual thoughts are a minority in the group. The third component of the Abilene paradox is the vocalisation of group sentiment which arose from inaccurate assumptions or incorrect interpretation of the "signals" given by other members. The fourth component refers to the decision-maker's reflections on the actions taken, usually in the form of questions such as: "Why did we do this?", "How can we justify our decision to others?". The fifth component refers to the defeat of the group leader to poor decision making in order to avoid making similar decisions in the future. There are several factors that may indicate the presence of the Abilene paradox in the decision-making process: Leaders who publicly do not fear the unknown. Such arrogance leads them to go along as they do not possess sufficient understanding of complex problems. Rather, they stick to the "that sounds good to me" attitude. A group with no-conflict or no-debate type of decision-making. When such views are supported in the cohort, the lack of diverse opinions becomes the foundation for mismanagement of agreement. This can be seen in the emergence of the "I will go along with that" attitude. Overriding leaders and a strong organisational culture. While a strong leader and a solid organisation may be a powerful asset, they may also intimidate subordinates to the point of submission. This results in an inclination to support the more dominant ideas. Lack of diversity and pluralistic perspective in a group. Homogeneous groups tend to be conformist. Such groups tend to achieve consensus rather than searching for the "right" decision. 
Recognition of a dysfunctional decision-making environment. Management in this environment has lost control, as the directional prerogative of management has succumbed to wanting to be liked by avoiding conflict. The feeling of a "messiah" in the organisation and action anxiety on the part of management. When the group handles complex tasks, there is usually one person or a small cohort within the group who has the required expertise to manage the situation. As a result, there is a tendency to acquiesce to them. The development of a "spiral of silence" in the organisation. The spiral of silence occurs when one's perception of the majority opinion in the organisation suppresses one's willingness to express any challenging opinion against the most visible point of view. Research Based on an online experiment with more than 600 participants, being prosocial and generally caring about the implications of one's actions on others (measured by the social value orientation measure) has been shown to increase the likelihood that an individual finds themselves in an Abilene paradox with others, especially if they are not the first to have a say. A study at Makerere University Business School described a case of the Abilene paradox in a decision-making process in 2006: The institution was in a dispute with its parent institution, Makerere University, over its status as an independent university. A meeting of the MUBS Academic Staff Association (MUBASA) was called to discuss the issue, and the attendees voted to support MUBS council's decision to sue the Ministry of Education for interfering in a high court pronouncement. Each member of the association was to contribute towards the legal costs. By interviewing 68 employees, the researcher found that the majority of them never considered it a solution but thought that others strongly supported the idea of starting the trial. Chen and Chang conducted a study of the effects, causes, and influences, if any, of the Abilene paradox on their elementary school; the study involved twelve faculty members. Results of this Abilene paradox study showed a negative effect on the school’s operation, through poor communication, inadequate interaction, isolation, exclusion, and rising gossip. Applications of the concept The theory is often used to help explain poor group decisions, especially notions of the superiority of "rule by committee". For example, Harvey cited the Watergate scandal as a potential instance of the Abilene paradox in action. The Watergate scandal occurred in the United States in the 1970s when many high officials of the Nixon administration colluded in the cover-up and perhaps the execution of a break-in at the Democratic National Committee headquarters in Washington, D.C. Harvey quotes several people indicted for the cover-up as indicating that they had personal qualms about the decision but feared to voice them. In one instance, campaign aide Herbert Porter said that he "was not one to stand up in a meeting and say that this should be stopped", a decision that he attributed to "the fear of the group pressure that would ensue, of not being a team player". 
Another notable real-world example of the Abilene paradox can be seen during and in the aftermath of the 1989 Hillsborough disaster in the United Kingdom and its cover-up by the authorities, which was characterised by individually hesitant, but otherwise compliant, government agents and by a narrative and available information moulded and manipulated by the state. Another frequently cited example is the Challenger disaster, though in that case researchers use both the concepts of groupthink and the Abilene paradox as possible explanations of the events. The phenomenon of the Abilene paradox can also be used in information systems development, to conceptualise and operationalise the relationship between systems analysts, users, and other organisational stakeholders in situations of illusory agreement. Related concepts and explanations Other theories add to the Abilene paradox’s explanation of poor decision-making in groups, notably such phenomena as groupthink and pluralistic ignorance. The concept of groupthink posits that individuals correctly perceive the preferences of others, undergo some form of motivated reasoning, which distorts their true preferences, and then willingly choose to conform; hence, they generally feel positively about the resulting group decisions. The success of groupthink also hinges on the long-term homogeneity of the group, which seeks to keep that same cohesiveness and therefore to avoid all potential conflict. However, while groupthink, to some extent, depends on the ability of individuals to perceive the attitudes and desires of others, the Abilene paradox hinges on the inability to gauge the true wants and intentions of group members. The concept of pluralistic ignorance, on the other hand, is defined as the situation where an individual underestimates the extent to which their views are shared by the other members of the group or organisation. In some ways, pluralistic ignorance can be considered a factor inciting situations where the Abilene paradox occurs: individuals' inability to correctly estimate the share of potential supporters leads to the assumption of 'the worst-case scenario' and to the in-advance mitigation of the potential risks of dealing with opponents. Some researchers consider pluralistic ignorance to be a wider-ranging concept: while both groupthink and the Abilene paradox are usually discussed as detriments to successful group decision-making, pluralistic ignorance is sometimes evaluated neutrally. See also Argumentum ad populum Asch conformity experiments Design by committee Elephant in the room False consensus effect Group polarization Groupshift Keynesian beauty contest Moving the goalposts Peer pressure Pluralistic ignorance Prediction market Preference falsification Prisoner's dilemma Pseudoconsensus Special interests Spiral of silence The Wisdom of Crowds References Further reading Harvey, Jerry B. (1988). The Abilene Paradox and Other Meditations on Management. Lexington, Mass: Lexington Books. Harvey, Jerry B. (1996). The Abilene Paradox and Other Meditations on Management (paperback). San Francisco: Jossey-Bass. Harvey, Jerry B. (1999). How Come Every Time I Get Stabbed in the Back, My Fingerprints Are on the Knife?. San Francisco: Jossey-Bass. External links Abilene Paradox (Documentary film by Peter J. Jordan, 1984) Conformity Decision-making paradoxes Abilene, Texas Eponymous paradoxes Management Fallacies
Abilene paradox
Biology
1,988
50,495,886
https://en.wikipedia.org/wiki/Aspergillus%20creber
Aspergillus creber is a species of fungus in the genus Aspergillus. It is from the Versicolores section. The species was first described in 2012. Growth and morphology A. creber has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References Further reading creber Fungi described in 2012 Fungus species
Aspergillus creber
Biology
106
614,984
https://en.wikipedia.org/wiki/Richard%20Rado
Richard Rado FRS (28 April 1906 – 23 December 1989) was a German-born British mathematician whose research concerned combinatorics and graph theory. He was Jewish and left Germany to escape Nazi persecution. He earned two PhDs: in 1933 from the University of Berlin, and in 1935 from the University of Cambridge. He was interviewed in Berlin by Lord Cherwell for a scholarship given by the chemist Sir Robert Mond which provided financial support to study at Cambridge. After he was awarded the scholarship, Rado and his wife left for the UK in 1933. He was appointed Professor of Mathematics at the University of Reading in 1954 and remained there until he retired in 1971. Contributions Rado made contributions in combinatorics and graph theory including 18 papers with Paul Erdős. In graph theory, the Rado graph, a countably infinite graph containing all countably infinite graphs as induced subgraphs, is named after Rado. He rediscovered it in 1964 after previous works on the same graph by Wilhelm Ackermann, Erdős, and Alfréd Rényi. In combinatorial set theory, the Erdős–Rado theorem extends Ramsey's theorem to infinite sets. It was published by Erdős and Rado in 1956. Rado's theorem is another Ramsey-theoretic result concerning systems of linear equations, proved by Rado in his thesis. The Milner–Rado paradox, also in set theory, states the existence of a partition of an ordinal into subsets of small order-type; it was published by Rado and E. C. Milner in 1965. The Erdős–Ko–Rado theorem can be described either in terms of set systems or hypergraphs. It gives an upper bound on the number of sets in a family of finite sets, all the same size, that all intersect each other. Rado published it with Erdős and Chao Ko in 1961, but according to Erdős it was originally formulated in 1938. In matroid theory, Rado proved a fundamental result of transversal theory by generalizing the Marriage Theorem for matchings between sets S and X to the case where X has a matroid structure and matchings must match to an independent set in the matroid on X. The Klarner–Rado Sequence is named after Rado and David A. Klarner. Awards and honours In 1972, Rado was awarded the Senior Berwick Prize. References Further reading "Richard Rado", The Times (London), 2 January 1990, p. 12. 1906 births 1989 deaths 20th-century British mathematicians 20th-century German mathematicians Fellows of the Royal Society Jewish emigrants from Nazi Germany to the United Kingdom Set theorists Graph theorists Humboldt University of Berlin alumni Alumni of Fitzwilliam College, Cambridge
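The Erdős–Ko–Rado bound mentioned above can be verified by brute force for very small cases. The sketch below is illustrative only, is feasible just for tiny n and k, and is not connected to Rado's own proofs; it finds the largest pairwise-intersecting family of k-element subsets of {1, ..., n} and compares it with the theorem's bound C(n−1, k−1) for n ≥ 2k:

```python
from itertools import combinations
from math import comb

def max_intersecting_family(n, k):
    """Size of the largest pairwise-intersecting family of k-subsets of {1,...,n}."""
    sets = [frozenset(c) for c in combinations(range(1, n + 1), k)]
    best = 0
    for r in range(1, len(sets) + 1):
        # Is there any family of r sets in which every pair of sets intersects?
        if any(all(a & b for a, b in combinations(fam, 2))
               for fam in combinations(sets, r)):
            best = r
        else:
            break  # if no family of size r exists, none of size r + 1 can exist
    return best

for n, k in [(4, 2), (5, 2), (6, 2)]:
    print(n, k, max_intersecting_family(n, k), comb(n - 1, k - 1))  # the last two agree
```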
Richard Rado
Mathematics
561
1,272,465
https://en.wikipedia.org/wiki/Drop%20tube
In physics and materials science, a drop tower or drop tube is a structure used to produce a controlled period of weightlessness for an object under study. Air bags, polystyrene pellets, and magnetic or mechanical brakes are sometimes used to arrest the fall of the experimental payload. In other cases, high-speed impact with a substrate at the bottom of the tower is an intentional part of the experimental protocol. Not all such facilities are towers: NASA Glenn's Zero Gravity Research Facility is based on a vertical shaft, extending to below ground level. Typical operation For a typical materials science experiment, a sample of the material under study is loaded into the top of the drop tube, which is filled with inert gas or evacuated to create a low-pressure environment. Following any desired preprocessing (e.g. induction heating to melt a metal alloy), the sample is released to fall to the bottom of the tube. During its flight or upon impact the sample can be characterized with instruments such as cameras and pyrometers. Drop towers are also commonly used in combustion research. For this work, oxygen must be present and the payload may be enclosed in a drag shield to isolate it from high-speed "wind" as the apparatus accelerates toward the bottom of the tower. See a video of a microgravity combustion experiment in the NASA Glenn Five Second Drop Facility at . Fluid physics experiments and development and testing of space-based hardware can also be conducted using a drop tower. Sometimes, the ground-based research performed with a drop tower serves as a prelude to more ambitious, in-flight investigation; much longer periods of weightlessness can be achieved with parabolic-flight-path aircraft or with space-based laboratories aboard the Space Shuttle or the International Space Station. The duration of free-fall produced in a drop tube depends on the length of the tube and its degree of internal evacuation. The 105-meter drop tube at Marshall Space Flight Center produces 4.6 seconds of weightlessness when it is fully evacuated. In the drop facility Fallturm Bremen at University of Bremen a catapult can be used to throw the experiment upwards to prolong the weightlessness from 4.74 to nearly 9.3 seconds. Negating the physical space needed for the initial acceleration, this technique doubles the effective period of weightlessness. The NASA Glenn Research Center has a 5 second drop tower (The Zero Gravity Facility) and a 2.2 second drop tower (The 2.2 Second Drop Tower). Much of the operating cost of a drop tower is due to the need for evacuation of the drop tube, to eliminate the effect of aerodynamic drag. Alternatively the experiment is placed inside an outer box (the drag shield) for which, due to its weight, during its fall the reduction of acceleration due to air drag is less. Historical uses Though the story may be apocryphal, Galileo is popularly thought to have used the Leaning Tower of Pisa as a drop tower to demonstrate that falling bodies accelerate at the same constant rate regardless of their mass. Drop towers called shot towers were once useful for making lead shot. A short period of weightlessness allows molten lead to solidify into a quasi-perfect sphere by the time it reaches the floor of the tower. 
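For a fully evacuated tube, the free-fall duration follows from elementary kinematics, t = sqrt(2h/g). The short check below is illustrative only (only the 105 m height and the quoted duration come from the text above); it reproduces the roughly 4.6 seconds cited for the Marshall drop tube and shows why a catapult, which adds an upward leg of the same height, roughly doubles the time:

```python
from math import sqrt

g = 9.81  # m/s^2, standard gravity

def free_fall_time(height_m):
    """Time to fall the given height from rest, ignoring drag (evacuated tube)."""
    return sqrt(2 * height_m / g)

t = free_fall_time(105)                                # Marshall's 105 m drop tube
print(f"drop only: {t:.2f} s")                         # about 4.6 s
print(f"with catapult (up and down): {2 * t:.2f} s")   # roughly double
```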
List of drop towers The vacuum-dynamic stand in Academician V.P.Makeyev State Rocket Centre NASA Glenn Research Center 2.2 Second Drop Tower NASA Glenn Research Center Zero-G Research Facility Micro-Gravity Laboratory of Japan (MGLAB) (Closed June 2010) HASTIC 50m Drop Tower, Sapporo Fallturm Bremen Applied Dynamics Laboratories drop tower for spinning spacecraft Experimental drop tube of the metallurgy department of Grenoble NASA Marshall Space Flight Center Drop Tube Facility is currently mothballed Dryden Drop Tower, Portland State University, Maseeh College of Engineering and Computer Science National Microgravity Laboratory (China) The pagoda at the Royal Botanic Gardens, Kew was used during the Second World War as a drop tower for testing bomb designs. Queensland University of Technology drop tower, Brisbane (Closed 2014) See also Glenn Research Center Marshall Space Flight Center Magnetic levitation Microgravity Environment Shot tower Weightlessness References External links Laboratory equipment Towers Weightlessness
Drop tube
Engineering
856
29,140,406
https://en.wikipedia.org/wiki/OnDemand
OnDemand is a brand name of a London-based video-on-demand company owned by the On Demand Group, which provides mobile video services such as pay-per-view to over 25 million subscribers. Their product is sold through middleware to smart TV companies such as Panasonic. They have teamed up with Hollywood companies such as Twentieth Century Fox Home Entertainment to supply their services across multiple platforms for TV, broadband and mobile, and across multiple territories in partnership with companies such as Virgin Media. Together with Inview Technology they are providing viewers with access to movies on a transactional video-on-demand (TVOD) basis, and a library of on-demand TV content available on an SVOD basis including TV series, children's programming, classic movies and music videos. See also 10-foot user interface Enhanced TV Second screen C-Cast Home theater PC Interactive television Hotel television systems Hybrid Broadcast Broadband TV List of digital distribution platforms for mobile devices Non-linear media Over-the-top content TV Genius Smartphone Tivoization References External links Official Website Information appliances Digital television Interactive television Internet broadcasting Streaming television Multimedia Peercasting Streaming media systems Video on demand services Television technology Television terminology English brands
OnDemand
Technology
246
356,200
https://en.wikipedia.org/wiki/Muller%27s%20ratchet
In evolutionary genetics, Muller's ratchet (named after Hermann Joseph Muller, by analogy with a ratchet effect) is a process which, in the absence of recombination (especially in an asexual population), results in an accumulation of irreversible deleterious mutations. This happens because in the absence of recombination, and assuming reverse mutations are rare, offspring bear at least as much mutational load as their parents. Muller proposed this mechanism as one reason why sexual reproduction may be favored over asexual reproduction, as sexual organisms benefit from recombination and consequent elimination of deleterious mutations. The negative effect of accumulating irreversible deleterious mutations may not be prevalent in organisms which, while they reproduce asexually, also undergo other forms of recombination. This effect has also been observed in those regions of the genomes of sexual organisms that do not undergo recombination. Etymology Although Muller discussed the advantages of sexual reproduction in his 1932 talk, it does not contain the word "ratchet". Muller first introduced the term "ratchet" in his 1964 paper, and the phrase "Muller's ratchet" was coined by Joe Felsenstein in his 1974 paper, "The Evolutionary Advantage of Recombination". Explanation Asexual reproduction compels genomes to be inherited as indivisible blocks so that once the least mutated genomes in an asexual population begin to carry at least one deleterious mutation, no genomes with fewer such mutations can be expected to be found in future generations (except as a result of back mutation). This results in an eventual accumulation of mutations known as genetic load. In theory, the genetic load carried by asexual populations eventually becomes so great that the population goes extinct. Also, laboratory experiments have confirmed the existence of the ratchet and the consequent extinction of populations in many organisms (under intense drift and when recombinations are not allowed) including RNA viruses, bacteria, and eukaryotes. In sexual populations, the process of genetic recombination allows the genomes of the offspring to be different from the genomes of the parents. In particular, progeny (offspring) genomes with fewer mutations can be generated from more highly mutated parental genomes by putting together mutation-free portions of parental chromosomes. Also, purifying selection, to some extent, unburdens a loaded population when recombination results in different combinations of mutations. Among protists and prokaryotes, a plethora of supposedly asexual organisms exists. More and more are being shown to exchange genetic information through a variety of mechanisms. In contrast, the genomes of mitochondria and chloroplasts do not recombine and would undergo Muller's ratchet were they not as small as they are (see Birdsell and Wills [pp. 93–95]). Indeed, the probability that the least mutated genomes in an asexual population end up carrying at least one (additional) mutation depends heavily on the genomic mutation rate and this increases more or less linearly with the size of the genome (more accurately, with the number of base pairs present in active genes). However, reductions in genome size, especially in parasites and symbionts, can also be caused by direct selection to get rid of genes that have become unnecessary. Therefore, a smaller genome is not a sure indication of the action of Muller's ratchet. 
In sexually reproducing organisms, nonrecombining chromosomes or chromosomal regions such as the mammalian Y chromosome (with the exception of multicopy sequences which do engage in intrachromosomal recombination and gene conversion) should also be subject to the effects of Muller's ratchet. Such nonrecombining sequences tend to shrink and evolve quickly. However, this fast evolution might also be due to these sequences' inability to repair DNA damage via template-assisted repair, which is equivalent to an increase in the mutation rate for these sequences. Ascribing cases of genome shrinkage or fast evolution to Muller's ratchet alone is not easy. Muller's ratchet relies on genetic drift, and turns faster in smaller populations because in such populations deleterious mutations have a better chance of fixation. Therefore, it sets the limits to the maximum size of asexual genomes and to the long-term evolutionary continuity of asexual lineages. However, some asexual lineages are thought to be quite ancient; Bdelloid rotifers, for example, appear to have been asexual for nearly 40 million years. However, rotifers were found to possess a substantial number of foreign genes from possible horizontal gene transfer events. Furthermore, a vertebrate fish, Poecilia formosa, seems to defy the ratchet effect, having existed for 500,000 generations. This has been explained by maintenance of genomic diversity through parental introgression and a high level of heterozygosity resulting from the hybrid origin of this species. Calculation of the fittest class In 1978, John Haigh used a Wright–Fisher model to analyze the effect of Muller's ratchet in an asexual population. If the ratchet is operating, the fittest class (the least-loaded individuals) is small and prone to extinction by the effect of genetic drift. In his paper Haigh derives the equation that gives the number of individuals carrying k mutations when the population is at its stationary distribution: n_k = N e^(-u/s) (u/s)^k / k!, where n_k is the number of individuals carrying k mutations, N is the population size, u is the mutation rate and s is the selection coefficient. Thus, the number of individuals in the fittest class (k = 0) is: n_0 = N e^(-u/s). In an asexual population that suffers from the ratchet, the fittest class is small and goes extinct after a few generations. This is called a click of the ratchet. Following each click, the rate of accumulation of deleterious mutations increases, ultimately resulting in the extinction of the population. The antiquity of recombination and Muller's ratchet It has been argued that recombination was an evolutionary development as ancient as life on Earth. Early RNA replicators capable of recombination may have been the ancestral sexual source from which asexual lineages could periodically emerge. Recombination in the early sexual lineages may have provided a means for coping with genome damage. Muller's ratchet under such ancient conditions would likely have impeded the evolutionary persistence of the asexual lineages that were unable to undergo recombination. Muller's ratchet and mutational meltdown Since deleterious mutations are harmful by definition, accumulation of them would result in loss of individuals and a smaller population size. Small populations are more susceptible to the ratchet effect and more deleterious mutations would be fixed as a result of genetic drift. This creates a positive feedback loop which accelerates extinction of small asexual populations. This phenomenon has been called mutational meltdown. 
It appears that mutational meltdown due to Muller’s ratchet can be avoided by a little bit of sex as in the common apomictic asexual flowering plant Ranunculus auricomus. See also Evolution of sexual reproduction Genetic hitchhiking Hill–Robertson effect References External links xkcd webcomic explaining Muller's ratchet and recombination through the evolution of Internet memes Evolutionary biology concepts Genetics concepts Population genetics
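A minimal numerical sketch of Haigh's result quoted above, under the stated assumptions: at the stationary distribution the expected size of the mutation-free class is N·e^(-u/s), which shrinks rapidly as the deleterious mutation rate u rises relative to the selection coefficient s. The parameter values below are arbitrary illustrations, not figures taken from Haigh's paper:

```python
from math import exp

def fittest_class_size(N, u, s):
    """Expected number of mutation-free individuals at stationarity: N * exp(-u/s)."""
    return N * exp(-u / s)

N = 10_000   # population size (arbitrary example value)
s = 0.02     # selection coefficient against each deleterious mutation (arbitrary)
for u in (0.02, 0.1, 0.2):  # genomic deleterious mutation rates (arbitrary)
    print(f"u = {u:>4}: n0 is about {fittest_class_size(N, u, s):,.1f}")
# When n0 is small, genetic drift can easily eliminate the fittest class,
# producing a 'click' of the ratchet.
```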
Muller's ratchet
Biology
1,541
50,615,433
https://en.wikipedia.org/wiki/Department%20of%20Information%20and%20Communications%20Technology
The Department of Information and Communications Technology (DICT) () is the executive department of the Philippine government responsible for the planning, development and promotion of the country's information and communications technology (ICT) agenda in support of national development. History Predecessor The Commission on Information and Communications Technology, a preceding agency, was created on January 12, 2004, by virtue of Executive Order No. 269, signed by President Gloria Macapagal Arroyo, as a transitory measure to the creation of a Department of Information and Communications Technology (DICT). The CICT was composed of the National Computer Center (NCC), the Telecommunications Office (TELOF), and all other operating units of the Department of Transportation and Communications (DOTC) dealing with communications. The National Telecommunications Commission (NTC) and the Philippine Postal Corporation (PhilPost) were also attached to the CICT for policy coordination. The CICT took over the functions of the Information Technology and Electronic Commerce Council (ITECC), which was subsequently abolished through Executive Order No. 334 on July 20, 2004. Restructuring Executive Order No. 454, signed on August 16, 2005, transferred the NTC back to the DOTC. According to EO 454, the transfer "will streamline bureaucracy operations." While the reasons for the transfer were unclear, there were discussions that placing the NTC under the CICT would be a bureaucratic anomaly, since it is unusual for a commission to fall under another commission. Executive Order No. 603, signed on February 17, 2007, transferred the TELOF and all other operating units of the CICT dealing with communications back to the DOTC. According to EO 603, the transfer "is necessitated by the present demands of national development and concomitant development projects as it will streamline bureaucracy operations and effectively promote fast, efficient and reliable networks of communication system and services." The transfer of the TELOF to the DOTC left the CICT with just two agencies—the NCC and the PhilPost. Executive Order No. 648, signed on August 6, 2007, but published only on December 24, 2008, transferred the NTC back to the CICT. Executive Order No. 780, signed on January 29, 2009, transferred the TELOF and all other operating units of the DOTC dealing with communications back to the CICT, thereby returning the CICT to its original composition. Initial efforts Several bills in the Philippine Congress have been filed creating a Department of Information and Communications Technology (DICT), which would transform the CICT into an executive department. In the House of Representatives, a consolidated bill, House Bill No. 4300, was approved on third and final reading on August 5, 2008, and transmitted to the Senate on August 11, 2008. In the Senate, a consolidated bill, Senate Bill No. 2546, was approved by the Senate Committee on Science and Technology on August 19, 2008, but had not made it past second reading by the time Congress adjourned session on February 5, 2010, which means the bill is as good as dead. It will have to be refiled in both the House of Representatives and the Senate in the next Congress. With the failure of Congress to pass the DICT Bill, the legal basis of the CICT remains an executive order, which means the next President can abolish the CICT. Executive Order No. 47 was signed by President Benigno Aquino III on June 23, 2011. 
The order states that: "Reorganizing, renaming and transferring the Commission on Information and Communications Technology and its attached agencies to the Department of Science and Technology, directing the implementation thereof and for other purposes." Furthermore, "the positions of Chairman and Commissioners of the CICT are hereby abolished." BPO stakeholders were surprised by the order and unhappy with the change. Creation The law creating the DICT, Republic Act No. 10844 or "An Act Creating the Department of Information and Communications Technology", was signed on May 20, 2016, during the administration of President Aquino III. Several agencies from other executive departments, notably from the Department of Transportation and Communications (DOTC), dealing with communications functions and responsibilities would either be abolished or transferred to the newly created department. The DOTC would then be renamed "Department of Transportation." The law provides for a 6-month transition period “for the full implementation of the transfer of functions, assets and personnel.” The law took effect on June 9, 2016, which marked the establishment of the DICT. Abolished agencies The functions of the following government agencies have been transferred to the DICT: All operating units of the DOTC with functions and responsibilities dealing with communications Information and Communications Technology Office (ICTO) National Computer Center (NCC) National Computer Institute (NCI) National Telecommunications Training Institute (NTTI) Telecommunications Office (TELOF) List of secretaries of information and communications technology Agencies Attached agencies The following agencies are attached to the DICT for purposes of policy and program coordination: Cybercrime Investigation and Coordination Center (CICC) National Privacy Commission (NPC) National Telecommunications Commission (NTC) Organizational Structure The Department is currently headed by a Secretary, with the following Undersecretaries and Assistant Secretaries: Undersecretary for Special Concerns Undersecretary for Infostructure Management, Cybersecurity and Upskilling Undersecretary for Support Services Undersecretary for e-Government Undersecretary for ICT Industry Development Assistant Secretary for Planning and Procurement Assistant Secretary for Regional Development Assistant Secretary for Infostructure Management Assistant Secretary for Consumer Protection Assistant Secretary for Management Information Systems Service Assistant Secretary for Legal Affairs See also Executive Departments of the Philippines References Philippines Philippines Information technology in the Philippines Information and communications technology Philippines, Information and Communications Technology 2016 establishments in the Philippines Information and Communications Technology
Department of Information and Communications Technology
Technology
1,187
48,826,266
https://en.wikipedia.org/wiki/Robert%20Emerson%20%28scientist%29
Robert Emerson (November 4, 1903 – February 4, 1959) was an American scientist noted for his discovery that plants have two distinct photosynthetic reaction centres. Family Emerson was born in 1903 in New York City, the son of Dr. Haven Emerson, Health Commissioner of New York City, and Grace Parrish Emerson, the sister of Maxfield Parrish. Emerson was the brother of John Haven Emerson the inventor of the iron lung. He married Claire Garrison, and they had three sons, and a daughter. Career Emerson received a master's degree in 1929 from Harvard, and received his doctorate from the University of Berlin working in the laboratory of Otto Warburg. Thomas Hunt Morgan invited him to join the Biology Division at the California Institute of Technology where he worked from 1930 to 1937, and again for a year in 1941 and 1945. From 1942 to 1945 he worked on producing rubber from the guayule shrub for the American Rubber Company. In 1947 he moved to the Botany Department of the University of Illinois, where he remained for the rest of his life. Experimental results Emerson's first "important" result was the quantification of the ratio of chlorophyll molecules to oxygen molecules produced by photosynthesis. Emerson and William Arnold found that "the yield per flash reached a maximum when just 1 out of 2500 chlorophylls absorbed a quantum". Next, in 1939, Emerson demonstrated that between 8 and 12 quanta of light were needed to produce one molecule of oxygen. These results were controversial, as they contradicted Warburg who reported 4, then 3, and finally 2 quanta. This dispute was settled after the death of both men, and it is now agreed that Emerson was correct, and the accepted modern value is 8 – 10 quanta. In 1957, Emerson reported results that are now called the Emerson effect, the excess rate of photosynthesis after chloroplasts are simultaneously exposed to light of wavelength 670 nm (red light), and 700 nm (far red light). These results were later shown to be the first experimental demonstration that there are two photosynthetic reaction centres in plants. Death Emerson died in the crash of American Airlines Flight 320 in New York City in 1959. References 1903 births 1959 deaths Accidental deaths in New York (state) 20th-century American botanists California Institute of Technology faculty Harvard University alumni Researchers of photosynthesis Scientists from New York City Victims of aviation accidents or incidents in 1959 Victims of aviation accidents or incidents in the United States University of Illinois Urbana-Champaign faculty
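The significance of the measured quantum requirement can be illustrated with a back-of-the-envelope energy budget. In the sketch below, the roughly 467 kJ stored per mole of O2 (taken here as the heat of combustion of glucose, about 2805 kJ/mol, divided by six) and the 680 nm wavelength are assumptions chosen for illustration, not figures from Emerson's papers:

```python
# Rough photosynthetic energy efficiency implied by different quantum requirements.
h, c, N_A = 6.626e-34, 2.998e8, 6.022e23  # Planck constant, speed of light, Avogadro
wavelength = 680e-9                       # m, red light (assumed for illustration)
energy_stored = 467e3                     # J per mol O2 (~2805 kJ/mol glucose / 6)

photon_mol_energy = h * c / wavelength * N_A  # about 176 kJ per mole of photons
for quanta in (2, 4, 8, 10):
    efficiency = energy_stored / (quanta * photon_mol_energy)
    print(f"{quanta:>2} quanta per O2 -> about {efficiency:.0%} efficient")
# Very low quantum requirements imply implausibly high efficiencies;
# Emerson's 8-10 quanta correspond to roughly 27-33%.
```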
Robert Emerson (scientist)
Chemistry
516
2,435,876
https://en.wikipedia.org/wiki/Fictitious%20play
In game theory, fictitious play is a learning rule first introduced by George W. Brown. In it, each player presumes that the opponents are playing stationary (possibly mixed) strategies. At each round, each player thus best responds to the empirical frequency of play of their opponent. Such a method is of course adequate if the opponent indeed uses a stationary strategy, while it is flawed if the opponent's strategy is non-stationary. The opponent's strategy may for example be conditioned on the fictitious player's last move. History Brown first introduced fictitious play as an explanation for Nash equilibrium play. He imagined that a player would "simulate" play of the game in their mind and update their future play based on this simulation; hence the name fictitious play. In terms of current use, the name is a bit of a misnomer, since each play of the game actually occurs. The play is not exactly fictitious. Convergence properties In fictitious play, strict Nash equilibria are absorbing states. That is, if at any time period all the players play a Nash equilibrium, then they will do so for all subsequent rounds. (Fudenberg and Levine 1998, Proposition 2.1) In addition, if fictitious play converges to any distribution, those probabilities correspond to a Nash equilibrium of the underlying game. (Proposition 2.2) Therefore, the interesting question is, under what circumstances does fictitious play converge? The process will converge for a 2-person game if: Both players have only a finite number of strategies and the game is zero sum (Robinson 1951) The game is solvable by iterated elimination of strictly dominated strategies (Nachbar 1990) The game is a potential game (Monderer and Shapley 1996-a,1996-b) The game has generic payoffs and is 2 × N (Berger 2005) Fictitious play does not always converge, however. Shapley (1964) proved that in the game pictured here (a nonzero-sum version of Rock, Paper, Scissors), if the players start by choosing (a, B), the play will cycle indefinitely. Terminology Berger (2007) states that "what modern game theorists describe as 'fictitious play' is not the learning process that George W. Brown defined in his 1951 paper": Brown's "original version differs in a subtle detail..." in that modern usage involves the players updating their beliefs simultaneously, whereas Brown described the players updating alternatingly. Berger then uses Brown's original form to present a simple and intuitive proof of convergence in the case of two-player nondegenerate ordinal potential games. The term "fictitious" had earlier been given another meaning in game theory. Von Neumann and Morgenstern [1944] defined a "fictitious player" as a player with only one strategy, added to an n-player game to turn it into a (n + 1)-player zero-sum game. References Berger, U. (2005) "Fictitious Play in 2xN Games", Journal of Economic Theory 120, 139–154. Berger, U. (2007) "Brown's original fictitious play", Journal of Economic Theory 135:572–578 Brown, G.W. (1951) "Iterative Solutions of Games by Fictitious Play" In Activity Analysis of Production and Allocation, T. C. Koopmans (Ed.), New York: Wiley. Fudenberg, D. and D.K. Levine (1998) The Theory of Learning in Games Cambridge: MIT Press. Monderer, D., and Shapley, L.S. (1996-a) "Potential Games", Games and Economic Behavior 14, 124-143. Monderer, D., and Shapley, L.S. (1996-b) "Fictitious Play Property for Games with Identical Interests ", Journal of Economic Theory 68, 258–265. Nachbar, J. 
(1990) "Evolutionary Selection Dynamics in Games: Convergence and Limit Properties", International Journal of Game Theory 19, 59–89. von Neumann and Morgenstern (1944), Theory of Games and Economic Behavior, Princeton and Woodstock: Princeton University Press. Robinson, J. (1951) "An Iterative Method of Solving a Game", Annals of Mathematics 54, 296–301. Shapley L. (1964) "Some Topics in Two-Person Games" In Advances in Game Theory M. Dresher, L.S. Shapley, and A.W. Tucker (Eds.), Princeton: Princeton University Press. External links Game-Theoretic Solution to Poker Using Fictitious Play Game theory
Fictitious play
Mathematics
955
7,384,821
https://en.wikipedia.org/wiki/Guerbet%20reaction
The Guerbet reaction, named after Marcel Guerbet (1861–1938), is an organic reaction that converts a primary alcohol into its β-alkylated dimer alcohol with loss of one equivalent of water. The process is of interest because it converts simple inexpensive feedstocks into more valuable products. Its main disadvantage is that the reaction produces mixtures. Scope and applications The original 1899 publication concerned the conversion of n-butanol to 2-ethylhexanol. 2-Ethylhexanol is, however, more easily prepared by alternative methods (from butyraldehyde by aldol condensation). Instead, the Guerbet reaction is mainly applied to fatty alcohols to afford oily products, which are called Guerbet alcohols. They are of commercial interest as components of cosmetics, plasticizers, and related applications. The reaction is conducted in the temperature range 180-360 °C, often in a sealed reactor. The reaction requires alkali metal hydroxides or alkoxides. Catalysts such as Raney nickel are required to facilitate the hydrogen transfer steps. While the Guerbet reaction is traditionally (and commercially) focused on fatty alcohols, it has been investigated for the dimerization of ethanol to butanol. Organometallic catalysts have been investigated. A small amount of the diene 1,7-octadiene is required as a hydrogen acceptor. Mechanism The reaction mechanism is a four-step sequence. In the first step the alcohol is oxidized to the aldehyde. These intermediates then react in an aldol condensation to give an α,β-unsaturated aldehyde, which the hydrogenation catalyst then reduces to the alcohol. The Cannizzaro reaction is a competing reaction, in which two aldehyde molecules react by disproportionation to form the corresponding alcohol and carboxylic acid. Another side reaction is the Tishchenko reaction. See also Oxo alcohols - a different reaction which gives similar products Guerbet alcohols 2-Ethyl-1-butanol 2-Ethylhexanol 2-Propylheptan-1-ol 2-Butyl-1-octanol References External links A Review of Guerbet Chemistry Anthony J. O’Lenick, Jr. https://web.archive.org/web/20110209074739/http://www.zenitech.com/ Link Condensation reactions Name reactions Fatty alcohols
Guerbet reaction
Chemistry
532
30,272,415
https://en.wikipedia.org/wiki/Entoloma%20myrmecophilum
Entoloma myrmecophilum is a species of fungus in the Entolomataceae family. It is found across Europe. It was described by Henri Romagnesi in 1978 as Rhodophyllus myrmecophilus, before being changed to its current name as the consensus has been to use the genus name Entoloma rather than Rhodophyllus. References External links Entolomataceae Fungi of Europe Fungi described in 1978 Fungus species
Entoloma myrmecophilum
Biology
101
24,062,542
https://en.wikipedia.org/wiki/Noncommutative%20harmonic%20analysis
In mathematics, noncommutative harmonic analysis is the field in which results from Fourier analysis are extended to topological groups that are not commutative. Since locally compact abelian groups have a well-understood theory, Pontryagin duality, which includes the basic structures of Fourier series and Fourier transforms, the major business of non-commutative harmonic analysis is usually taken to be the extension of the theory to all groups G that are locally compact. The case of compact groups is understood, qualitatively and after the Peter–Weyl theorem from the 1920s, as being generally analogous to that of finite groups and their character theory. The main task is therefore the case of G that is locally compact, not compact and not commutative. The interesting examples include many Lie groups, and also algebraic groups over p-adic fields. These examples are of interest and frequently applied in mathematical physics, and contemporary number theory, particularly automorphic representations. What to expect is known as the result of basic work of John von Neumann. He showed that if the von Neumann group algebra of G is of type I, then L2(G) as a unitary representation of G is a direct integral of irreducible representations. It is parametrized therefore by the unitary dual, the set of isomorphism classes of such representations, which is given the hull-kernel topology. The analogue of the Plancherel theorem is abstractly given by identifying a measure on the unitary dual, the Plancherel measure, with respect to which the direct integral is taken. (For Pontryagin duality the Plancherel measure is some Haar measure on the dual group to G, the only issue therefore being its normalization.) For general locally compact groups, or even countable discrete groups, the von Neumann group algebra need not be of type I and the regular representation of G cannot be written in terms of irreducible representations, even though it is unitary and completely reducible. An example where this happens is the infinite symmetric group, where the von Neumann group algebra is the hyperfinite type II1 factor. The further theory divides up the Plancherel measure into a discrete and a continuous part. For semisimple groups, and classes of solvable Lie groups, a very detailed theory is available. See also Selberg trace formula Langlands program Kirillov orbit theory Discrete series representation Zonal spherical function References "Noncommutative harmonic analysis: in honor of Jacques Carmona", Jacques Carmona, Patrick Delorme, Michèle Vergne; Publisher Springer, 2004 Yurii I. Lyubich. Introduction to the Theory of Banach Representations of Groups. Translated from the 1985 Russian-language edition (Kharkov, Ukraine). Birkhäuser Verlag. 1988. Notes Topological groups Duality theories
Noncommutative harmonic analysis
Mathematics
572
30,525,122
https://en.wikipedia.org/wiki/Mate%20choice%20copying
Mate-choice copying, or non-independent mate choice, occurs when an individual of an animal species copies another individual's mate choice. In other words, non-independent mate choice occurs when an individual's sexual preferences become socially biased toward the mate choices of other individuals. This behavior is speculated to be one of the driving forces of sexual selection and the evolution of male traits. It is also hypothesized that mate-choice copying can induce speciation due to the selective pressure for certain preferred male qualities. Moreover, mate-choice copying is one form of social learning in which animals behave differently depending on what they observe in their surrounding environment. In other words, the animals tend to process the social stimuli they receive by observing the behavior of their conspecifics and to execute a behavior similar to what they observed. Mate-choice copying has been found in a wide variety of different species, including (but not limited to): invertebrates, like the common fruit fly (Drosophila melanogaster); fish, such as guppies (Poecilia reticulata) and ocellated wrasse; birds, like the black grouse; and mammals, such as the Norway rat (Rattus norvegicus) and humans. Most studies have focused on females, but male mate-choice copying has also been found in sailfin mollies (Poecilia latipinna) and humans. Mechanism Visual copying Mate-choice copying requires a highly developed form of social recognition by which the observer (i.e. copier) female recognizes the demonstrator (i.e. chooser) female when mating with a target male and later recognizes the target male to mate with it. Though it might seem simple, observer females actually do not copy the choice of any haphazard demonstrator female; instead, they copy based on their perception of the demonstrator female's quality. In guppies (Poecilia reticulata), for instance, females are more likely to copy the mate choice of a larger-sized fish than to copy the mate choice of a fish of the same or a smaller size. Besides immediate copying based on visual cues, it has been hypothesized that observer females tend to later choose other males with the same qualities as those of the target male the demonstrator mated with. However, it is not known whether this generalization of preference holds true or whether the observer's inability to discriminate the target male from other similar-looking males accounts for the behavior. Interestingly, in some instances, an observer female tends to copy a demonstrator female's choice only in the same geographical region (i.e. location) in which it has observed the demonstrator sexually interact with a target male; if the observer female is presented with the same target male in a different location, there is less likelihood that the observer will execute the same mate choice. Olfactory copying In some cases, a direct, visual observation of the sexual interaction between the demonstrator and the target is not necessary; female rodents, for instance, use olfactory stimuli as a reference to whether the target male has been chosen by other females or not. A female rodent may choose to mate with a target male if there is a smell of other females associated with this male’s urine, as an indication that it has been mated with by other fellow females. 
Neurobiology As mentioned earlier, mate choice copying is a developed form of social recognition that requires highly efficient cognitive processes: the observer female must not only identify the demonstrator female and the target male but also execute a suitable behavior (i.e. copying) in response to the observed stimulus. In other words, the execution of mate choice copying is an intricate behavior that most likely involves coordinated function between the endocrine system, the digestive system, the nervous system, and the reproductive system. In addition to sex hormones, neurotransmitters such as oxytocin (OT) and arginine-vasopressin (AVP) are involved in mediating social recognition of the demonstrator and the target, as well as the sexual approach to target males. OT has proven to be of particular importance in mediating mate-choice copying, as OT gene-knockout female mice fail to recognize the demonstrator female and the target male. Moreover, OT gene-knockout mice have shown significantly decreased sexual interest in males, even when these males had previously been observed mating with demonstrator females. Such results are likely attributable to OT's indispensable role in stimulating sexual arousal and feelings of trust in the female mice; the absence of OT hindered the knockout female mice from trusting the demonstrator female's mating choice and from experiencing a general sexual attraction to males. Further research has also shown that OT itself is regulated by estrogen and testosterone as part of the estrous cycles that female mice go through. Evolutionary origin Benefits Mate-choice copying has evolved to eliminate the possible costs—including time and energy—of mate choice. The persistence of mate-choice copying across various species is attributed to differences among females in their ability to choose a desirable male with good-quality genes. In other words, not all females have the same capability of making good decisions when it comes to mate choice. Therefore, mate-choice copying has evolved through social learning to guide those females—including naive ones—toward choosing a desirable male, allowing only good-quality genes to be propagated in the population over time. For instance, naïve female mice that have just entered the estrus cycle for the first time might choose a male if its urine is associated with the smell of other, older females in the estrus cycle. Mate-choice copying thus reduces the error frequency in mate choice among inexperienced females, increasing the relative fitness of the copying females. Another example can be seen in the black grouse, Tetrao tetrix, where naive females in their first breeding season tend to mate later than experienced females so that the former can copy the choice of the latter. Mate-choice copying also becomes effective when females are constrained by time (i.e. if the breeding season is soon to end), in which case females tend to copy each other's choices to avoid the time-consuming choice process that might otherwise cost them the chance to mate at all. Mate-choice copying is also effective at reducing stress in females of monogamous species, such as Gouldian finches (Erythrura gouldiae), that would otherwise have had to mate with a less desirable, poor-quality male.
Another hypothesis that has also been proposed is that game theory applies to mate-choice copying: females choose whether to be an observer or a demonstrator based on the abundance of each in the population. A female might tend to become an observer in a population where demonstrators are more abundant, to increase its chances of having access to a high-quality male, and vice versa. Although mate-choice copying, in theory, reduces the relative fitness of those males that are not chosen, it also reduces their risk of injury and possible death from the aggressive courtship contests in which they would otherwise have engaged with the chosen, high-quality males. Some evidence has shown that in species where females display cryptic mate choice, males tend to display the reverse of mate choice copying, avoiding mating with females that have been visually observed mating with higher-quality rival males. Such mate-choice behavior is displayed by a male mainly to avoid wasting its energy on a sexual interaction that might not increase its relative fitness if the female uses the rival's sperm to fertilize its eggs. There are also instances where the males of a species are the choosier sex, due to their higher parental investment in the offspring than that of females; an example where males practice mate-choice copying is the sailfin molly (Poecilia latipinna). Costs There is little evidence on the fitness costs of mate choice copying; however, it has been suggested that depending solely on social cues to choose a potential mate is not always advantageous. In fact, it might in some cases lead to mating with an unfit, poor-quality male that has been chosen maladaptively by demonstrator females. Moreover, in species where males display mate-choice copying, such as Atlantic mollies (Poecilia mexicana), the demonstrator male may behave as described by the deception hypothesis, in which the demonstrator male pretends to mate with an undesirable female to deceive the observer male into choosing that female. Such deceitful behavior is facilitated by the demonstrator's ability to change its behavior when it senses the presence of the observer, as well as by the observer's inability to recognize the demonstrator's behavior as deceitful. Consequently, the observer male mates with an undesirable, poor-quality female, negatively affecting the survival of the observer male's offspring and, in turn, its own relative fitness. Alternative hypotheses Researchers have suggested other, alternative hypotheses that might explain why females display nonindependent mate choice; these hypotheses include kin-associated genetic preferences, common environmental effects, consexual cueing, and associative learning. Kin-associated genetic preferences The proponents of this hypothesis argue that females tend to choose to mate with the same target male because of their shared innate preferences for the traits the target male carries. In other words, the genetic similarity of these females due to kinship is reflected in their mate-choice behavior, which researchers might otherwise view as a mere act of social facilitation. Common environmental effects Some females tend to make the same mate choice due to shared environmental factors rather than mate-choice copying.
For instance, the distribution of food resources might limit females' ability to forage for potential mates in more distant regions; therefore, all females in such a confined region might end up mating with the same male because it holds the greatest potential among its rivals, not because it was targeted by demonstrator females. Another influential environmental factor is predation; females threatened by predation would avoid foraging for a mate and, instead, mate with the male with the best-quality traits in their confined region. In most cases this best-quality male might be the same male. Consexual cueing In polygamous species such as fallow deer (Dama dama), an outsider female deer (i.e. a female that is not part of the harem) might choose to mate with the harem's dominant male because the female is attracted to being part of the harem's large group of females rather than being attracted to the dominant male itself. Aside from mate choice copying, being part of a large female group would provide such an outsider female with protection, company, and food resources. Associative learning Sometimes, nonindependent mate choice is not a direct copying of an observed mating preference; instead, it can be the result of an association that the observer female constructs between mating with a target male and receiving a desired reward. For instance, in species where males present females with a nuptial gift as a prerequisite for mating, observer females are more likely to associate mating with the same target male with the nuptial gift they might receive. Such an association might then lead the observer female to mate with the same target male the demonstrator has mated with. Even though there is not a lot of evidence to support this hypothesis, it offers a plausible explanation as to why females of a species might exhibit nonindependent mate choice. References External links "The role of model female quality in the mate choice copying behaviour of sailfin mollies". NCBI.nlm.nih.gov. "Mate Choice Copying in Humans". Springerlink.com. "Beauty is in the eye of your friends". Newscientist.com. Mating Learning
Mate choice copying
Biology
2,499
23,288,360
https://en.wikipedia.org/wiki/Phylloporus%20hyperion
Phylloporus hyperion is a species of fungus in the family Boletaceae. External links Index Fungorum hyperion Fungus species
Phylloporus hyperion
Biology
31
43,215,883
https://en.wikipedia.org/wiki/Black-spotted%20false%20shieldback
The black-spotted false shieldback (Aroegas nigroornatus) is a species of katydid that is only known from the male holotype collected from Mpumalanga, South Africa. It is threatened by livestock grazing and changing weather patterns disturbing its microhabitat. References Tettigoniidae Endemic insects of South Africa Critically endangered insects Insects described in 1916 Species known from a single specimen
Black-spotted false shieldback
Biology
86
211,828
https://en.wikipedia.org/wiki/Columba%20%28constellation%29
Columba is a faint constellation designated in the late sixteenth century, remaining in official use, with its rigid limits set in the 20th century. Its name is Latin for dove. It takes up 1.31% of the southern celestial hemisphere and is just south of Canis Major and Lepus. History Early 3rd century BC: Aratus's astronomical poem Phainomena (lines 367–370 and 384–385) mentions faint stars where Columba is now but does not fit any name or figure to them. 2nd century AD: Ptolemy lists 48 constellations in the Almagest. While Columba is not yet among them, several stars south of Canis Major listed in this work will eventually become part of Columba. c. 150–215 AD: Clement of Alexandria wrote in his Logos Paidogogos"Αἱ δὲ σφραγῖδες ἡμῖν ἔστων πελειὰς ἢ ἰχθὺς ἢ ναῦς οὐριοδρομοῦσα ἢ λύρα μουσική, ᾗ κέχρηται Πολυκράτης, ἢ ἄγκυρα ναυτική," (= "[when recommending symbols for Christians to use], let our seals be a dove or a fish or a ship running in a good wind or a musical lyre ... or a ship's anchor ..."), with no mention of stars or astronomy. 1592 AD: Petrus Plancius first depicted Columba on the small celestial planispheres of his large wall map to differentiate the 'unformed stars' of the large constellation Canis Major. Columba is also shown on his smaller world map of 1594 and on early Dutch celestial globes. Plancius named the constellation Columba Noachi ("Noah's Dove"), referring to the dove that gave Noah the information that the Great Flood was receding. This name is found on early 17th-century celestial globes and star atlases. 1603: Frederick de Houtman listed Columba as "De Duyve med den Olijftack" ("the dove with the olive branch") 1603: Bayer's sky atlas Uranometria was published. It includes Columba as Columba Noachi. 1624: Bartschius listed Columba in his Usus Astronomicus as "Columba Nohae". 1662: Caesius published Coelum Astronomico-Poeticum, including an inaccurate Latin translation of the above text of Clement of Alexandria: it mistranslated "ναῦς οὐριοδρομοῦσα" as Latin "Navis coelestis cursu in coelum tendens" ("Ship of the sky following a course in the sky"), perhaps misunderstanding "οὐριο-" as "up in the air or sky" by analogy with οὐρανός = "sky". 1679: Halley mentioned Columba in his work Catalogus Stellarum Australium from his observations on St. Helena. 1679: Augustin Royer published a star atlas that showed Columba as a constellation. c.1690: Hevelius's Prodromus Astronomiae showed Columba but did not list it as a constellation. 1712 (pirated) and 1725 (authorized): Flamsteed's work Historia Coelestis Britannica showed Columba but did not list it as a constellation. 1757 or 1763: Lacaille listed Columba as a constellation and catalogued its stars. 1889: Richard H. Allen, misled by Caesius's mistranslation, wrote that the Columba asterism may have been invented in Roman/Greek times, but with a footnote saying that it may have been another star group. 2019: OSIRIS-REx students discovered a black hole in the constellation Columba, based on observing X-ray bursts. Features Stars Columba is rather inconspicuous with the brightest star, Alpha Columbae, being only of magnitude 2.7. This, a blue-white star, has a pre-Bayer, traditional, Arabic name Phact (meaning ring dove) and is 268 light-years from Earth. The only other named star is Beta Columbae, which has the alike-status name Wazn. It is an orange-hued giant star of magnitude 3.1, 87 light-years away. The constellation contains the runaway star μ Columbae. Exoplanet NGTS-1b and its star NGTS-1 are in Columba. 
General radial velocity Columba contains the solar antapex – the point on the celestial sphere opposite the direction of the Solar System's net motion. Deep-sky objects The globular cluster NGC 1851 appears in Columba at 7th magnitude; it lies in a far part of our galaxy, 39,000 light-years away, and can be resolved in medium-sized amateur telescopes (under good conditions) by observers south of about latitude +40°N. See also Columba (Chinese astronomy) IAU-recognized constellations Citations References Princeton University Press, Princeton. External links The Deep Photographic Guide to the Constellations: Columba The clickable Columba Star Tales – Columba Lost Stars, by Morton Wagman, publ. McDonald & Woodward Publishing Company, First printing September 2003, page 110 Southern constellations Constellations listed by Petrus Plancius
Columba (constellation)
Astronomy
1,172
452,577
https://en.wikipedia.org/wiki/Free%20body%20diagram
In physics and engineering, a free body diagram (FBD; also called a force diagram) is a graphical illustration used to visualize the applied forces, moments, and resulting reactions on a free body in a given condition. It depicts a body or connected bodies with all the applied forces and moments, and reactions, which act on the body(ies). The body may consist of multiple internal members (such as a truss), or be a compact body (such as a beam). A series of free bodies and other diagrams may be necessary to solve complex problems. Sometimes, in order to calculate the resultant force graphically, the applied forces are arranged as the edges of a polygon of forces or force polygon (see below). Free body A body is said to be "free" when it is singled out from other bodies for the purposes of dynamic or static analysis. The object does not have to be "free" in the sense of being unforced, and it may or may not be in a state of equilibrium; rather, it is not fixed in place and is thus "free" to move in response to forces and torques it may experience. Figure 1 shows, on the left, green, red, and blue widgets stacked on top of each other, and for some reason the red cylinder happens to be the body of interest. (It may be necessary to calculate the stress to which it is subjected, for example.) On the right, the red cylinder has become the free body. In figure 2, the interest has shifted to just the left half of the red cylinder, and so now it is the free body on the right. The example illustrates the context sensitivity of the term "free body". A cylinder can be part of a free body, it can be a free body by itself, and, as it is composed of parts, any of those parts may be a free body in itself. Figures 1 and 2 are not yet free body diagrams. In a completed free body diagram, the free body would be shown with the forces acting on it. Purpose Free body diagrams are used to visualize forces and moments applied to a body and to calculate reactions in mechanics problems. These diagrams are frequently used both to determine the loading of individual structural components and to calculate internal forces within a structure. They are used by most engineering disciplines, from biomechanics to structural engineering. In the educational environment, a free body diagram is an important step in understanding certain topics, such as statics, dynamics and other forms of classical mechanics. Features A free body diagram is not a scaled drawing; it is a diagram. The symbols used in a free body diagram depend upon how a body is modeled. Free body diagrams consist of: A simplified version of the body (often a dot or a box) Forces shown as straight arrows pointing in the direction they act on the body Moments shown as curved arrows, or as vectors with two arrowheads, pointing in the direction they act on the body One or more reference coordinate systems By convention, reactions to applied forces are shown with hash marks through the stem of the vector The number of forces and moments shown depends upon the specific problem and the assumptions made. Common assumptions are neglecting air resistance and friction and assuming rigid body action. In statics all forces and moments must balance to zero; the physical interpretation is that if they do not, the body is accelerating and the principles of statics do not apply. In dynamics the resultant forces and moments can be non-zero. Free body diagrams may not represent an entire physical body. Portions of a body can be selected for analysis.
This technique allows calculation of internal forces by making them appear external, so that they can be analyzed. This can be done multiple times to calculate internal forces at different locations within a physical body. For example, consider a gymnast performing the iron cross: modeling the ropes and person allows calculation of the overall forces (body weight, neglecting rope weight, breezes, buoyancy, electrostatics, relativity, rotation of the earth, etc.). Then, removing the person and showing only one rope gives the direction of the force in that rope. Then, looking only at the person, the forces on the hands can be calculated. Next, looking only at the arm, the forces and moments at the shoulders can be calculated, and so on until the component to be analyzed is reached. Modeling the body A body may be modeled in three ways: a particle. This model may be used when rotational effects are zero or of no interest, even though the body itself may be extended. The body may be represented by a small symbolic blob and the diagram reduces to a set of concurrent arrows. A force on a particle is a bound vector. rigid extended. Stresses and strains are of no interest but rotational effects are. A force arrow should lie along the line of force, but where along the line is irrelevant. A force on an extended rigid body is a sliding vector. non-rigid extended. The point of application of a force becomes crucial and has to be indicated on the diagram. A force on a non-rigid body is a bound vector. Some use the tail of the arrow to indicate the point of application. Others use the tip. What is included An FBD represents the body of interest and the external forces acting on it. The body: This is usually a schematic depending on the body—particle/extended, rigid/non-rigid—and on what questions are to be answered. Thus, if rotation of the body and torque are under consideration, an indication of the size and shape of the body is needed. For example, the brake dive of a motorcycle cannot be found from a single point, and a sketch with finite dimensions is required. The external forces: These are indicated by labelled arrows. In a fully solved problem, a force arrow is capable of indicating: the direction and the line of action; the magnitude; the point of application; and, by a hash through the stem of the arrow, that it is a reaction as opposed to an applied force. Often a provisional free body is drawn before everything is known. The purpose of the diagram is to help to determine the magnitude, direction, and point of application of external loads. When a force is originally drawn, its length may not indicate the magnitude. Its line may not correspond to the exact line of action. Even its orientation may not be correct. External forces known to have negligible effect on the analysis may be omitted after careful consideration (e.g. buoyancy forces of the air in the analysis of a chair, or atmospheric pressure on the analysis of a frying pan). External forces acting on an object may include friction, gravity, normal force, drag, tension, or a human force due to pushing or pulling. When in a non-inertial reference frame (see coordinate system, below), fictitious forces, such as the centrifugal pseudoforce, are appropriate. At least one coordinate system is always included, and chosen for convenience. Judicious selection of a coordinate system can make defining the vectors simpler when writing the equations of motion or statics. The x direction may be chosen to point down the ramp in an inclined plane problem, for example.
In that case the friction force only has an x component, and the normal force only has a y component. The force of gravity would then have components in both the x and y directions: mgsin(θ) in the x and mgcos(θ) in the y, where θ is the angle between the ramp and the horizontal. Exclusions A free body diagram should not show: Bodies other than the free body. Constraints. (The body is not free from constraints; the constraints have just been replaced by the forces and moments exerted on the body.) Forces exerted by the free body. (A diagram showing the forces exerted both on and by a body is likely to be confusing since all the forces will cancel out. By Newton's 3rd law if body A exerts a force on body B then B exerts an equal and opposite force on A. This should not be confused with the equal and opposite forces that are necessary to hold a body in equilibrium.) Internal forces. (For example, if an entire truss is being analyzed, the forces between the individual truss members are not included.) Velocity or acceleration vectors. Analysis In an analysis, a free body diagram is used by summing all forces and moments (often accomplished along or about each of the axes). When the sum of all forces and moments is zero, the body is at rest or moving and/or rotating at a constant velocity, by Newton's first law. If the sum is not zero, then the body is accelerating in a direction or about an axis according to Newton's second law. Forces not aligned to an axis Determining the sum of the forces and moments is straightforward if they are aligned with coordinate axes, but it is more complex if some are not. It is convenient to use the components of the forces, in which case the symbols ΣFx and ΣFy are used instead of ΣF (the variable M is used for moments). Forces and moments that are at an angle to a coordinate axis can be rewritten as two vectors that are equivalent to the original (or three, for three dimensional problems)—each vector directed along one of the axes (Fx) and (Fy). Example: A block on an inclined plane A simple free-body diagram, shown above, of a block on a ramp, illustrates this. All external supports and structures have been replaced by the forces they generate. These include: mg: the product of the mass of the block and the constant of gravitation acceleration: its weight. N: the normal force of the ramp. Ff: the friction force of the ramp. The force vectors show the direction and point of application and are labelled with their magnitude. It contains a coordinate system that can be used when describing the vectors. Some care is needed in interpreting the diagram. The normal force has been shown to act at the midpoint of the base, but if the block is in static equilibrium its true location is directly below the centre of mass, where the weight acts because that is necessary to compensate for the moment of the friction. Unlike the weight and normal force, which are expected to act at the tip of the arrow, the friction force is a sliding vector and thus the point of application is not relevant, and the friction acts along the whole base. Polygon of forces In the case of two applied forces, their sum (resultant force) can be found graphically using a parallelogram of forces. To graphically determine the resultant force of multiple forces, the acting forces can be arranged as edges of a polygon by attaching the beginning of one force vector to the end of another in an arbitrary order. 
Then the vector value of the resultant force would be determined by the missing edge of the polygon. In the diagram, the forces P1 to P6 are applied to the point O. The polygon is constructed starting with P1 and P2 using the parallelogram of forces (vertex a). The process is repeated (adding P3 yields the vertex b, etc.). The remaining edge of the polygon O-e represents the resultant force R. Kinetic diagram In dynamics a kinetic diagram is a pictorial device used in analyzing mechanics problems when there is determined to be a net force and/or moment acting on a body. They are related to and often used with free body diagrams, but depict only the net force and moment rather than all of the forces being considered. Kinetic diagrams are not required to solve dynamics problems; their use in teaching dynamics is argued against by some in favor of other methods that they view as simpler. They appear in some dynamics texts but are absent in others. See also Classical mechanics Force field analysis – applications of force diagram in social science Kinematic diagram Physics Shear and moment diagrams Strength of materials References Sources Notes External links Mechanics Diagrams Structural analysis
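The inclined-plane resolution and the polygon-of-forces construction described above can be made concrete with a small numerical sketch. This is not part of the article: the mass, ramp angle, friction coefficient, and example force vectors below are made-up illustrative values.

```python
import math

# Block on a ramp: x axis points down the slope, y axis is normal to it.
m, g, theta = 2.0, 9.81, math.radians(30)   # assumed mass [kg], gravity, ramp angle
mu = 0.3                                    # assumed coefficient of static friction

W_x = m * g * math.sin(theta)               # weight component along the slope
W_y = m * g * math.cos(theta)               # weight component into the slope
N = W_y                                     # normal force balances the y component
F_f = min(mu * N, W_x)                      # static friction resists sliding, capped at mu*N
F_net_x = W_x - F_f                         # nonzero -> the block accelerates (Newton's 2nd law)
print(f"N = {N:.2f} N, friction = {F_f:.2f} N, net force along slope = {F_net_x:.2f} N")

# Polygon of forces: the resultant of several coplanar forces is their vector sum.
forces = [(3.0, 0.0), (0.0, 4.0), (-1.0, -1.0)]    # arbitrary example force vectors [N]
resultant = tuple(map(sum, zip(*forces)))
print("resultant:", resultant)                     # (2.0, 3.0)
```

With these particular values the friction limit μN is smaller than the down-slope weight component, so the net force is nonzero and the block slides; in a statics problem the sums would instead come out to zero.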
Free body diagram
Physics,Engineering
2,430
78,045,164
https://en.wikipedia.org/wiki/Neknampur%20Lake
Neknampur Lake, also known as Ibrahim Bagh Cheruvu, is a lake in Hyderabad, India. It was once part of a water reservoir network that was used for irrigation and providing drinking water in the surrounding areas. History The lake was first dug up in the late 16th century by Ibrahim Qutb Shah, the fourth ruler of Golconda, and later flooded by his grandson Abdullah Qutb Shah. The construction was entrusted to Neknam Khan, one of Shah's courtiers. Rather than using water from the adjacent Musi, Neknam Khan commissioned channels to fill the lake from water bodies behind the Golconda Fort. Neknampur Lake is one of the three major lakes that were created during the reign of Quli Qutub Shah alongside Ibrahimpatnam Lake and Hussainsagar. There was a proposal by Greater Hyderabad Municipal Corporation to use the lake to dump sewage from surrounding housing colonies. The lake is today divided into two parts known as Chinna Cheruvu, which is smaller, and Pedda Cheruvu, which is larger. The Chinna Cheruvu has been partially restored and converted into a scenic spot whereas the Pedda Cheruvu continues to struggle with pollution. The lake is polluted with various chemicals and also used as a garbage dump by the residential colonies surrounding it. Encroachments and illegal structures surrounding the lake were demolished by government authorities. However these structures are being illegally rebuilt by the encroachers. Restoration efforts The lake was gradually occupied by land grabbers and converted into a dump yard for construction debris, garbage, sewage discharge and covered in water hyacinth. At one stage, the surface area of the lake was less than . Efforts to restore the lake were undertaken in 2016 with the help of NGOs based in Hyderabad. The restoration and rejuvenation of the lake included cleaning the lake and floating wetland treatment to tackle the growth of water hyacinth. Contaminants were removed using plants and with the use of microorganisms. NITI Aayog has recognised these efforts and "it has been identified as a role model for 'best restoration practices' in the country." Neknampur Lake restoration "has been recognised as a role model in the 'watershed development' category along with four other projects" in India. According to Niti Aayog, there has been a 90% reduction in Biochemical Oxygen Demand (BOD) of the lake. Centre for Science and Environment (CSE) has also recognised Neknampur Lake "as the best model of lake restoration in India." Reference Nature conservation in India Lakes of Hyderabad, India Water conservation in India Ecological restoration Water pollution in India
Neknampur Lake
Chemistry,Engineering
555
3,423,275
https://en.wikipedia.org/wiki/BURS
BURS (bottom-up rewrite system) theory tackles the problem of taking a complex expression tree or intermediate language term and finding a good translation to machine code for a particular architecture. Implementations of BURS often employ dynamic programming to solve this problem. BURS can also be applied to the problem of designing an instruction set for an application-specific instruction set processor. References A. V. Aho, M. Ganapathi, and S. W. K. Tjiang. Code generation using tree matching and dynamic programming. ACM Transactions on Programming Languages and Systems, 11(4):491-516, October 1989. Robert Giegerich and Susan L. Graham, editors. Code Generation - Concepts, Tools, Techniques. Workshops in Computing. Springer-Verlag, Berlin, Heidelberg, New York, 1992. External links https://strategoxt.org/Transform/BURG - short description of BURG including additional references to BURS and BURG Computer languages
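The dynamic-programming idea behind BURS-style instruction selection can be sketched as follows. This is an illustrative toy, not a real BURS generator: the tree shape, the two rewrite rules, the instruction names, and the costs are all invented for the example, and a full BURS labeller would track the cheapest cost for every nonterminal at each node rather than a single "register" result.

```python
# Toy bottom-up, dynamic-programming tiling of an expression tree.
from functools import lru_cache

# Expression trees as nested tuples: ("add", left, right), ("const", n), ("reg", name)
expr = ("add", ("reg", "r1"), ("add", ("reg", "r2"), ("const", 4)))

@lru_cache(maxsize=None)
def best(node):
    """Return (cost, instruction tuple) for the cheapest cover of this subtree."""
    op = node[0]
    if op in ("reg", "const"):
        return 0, ()                        # already available as a register / immediate
    _, left, right = node
    lcost, lcode = best(left)
    rcost, rcode = best(right)
    # Hypothetical rule 1: add-immediate, cost 1, matches add(x, const)
    if right[0] == "const":
        return lcost + 1, lcode + (f"ADDI {right[1]}",)
    # Hypothetical rule 2: register-register add, cost 2
    return lcost + rcost + 2, lcode + rcode + ("ADD",)

cost, code = best(expr)
print(cost, list(code))     # 3 ['ADDI 4', 'ADD']
```

The memoized bottom-up traversal is the dynamic-programming element mentioned in the article: each subtree's cheapest cover is computed once and reused wherever that subtree appears.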
BURS
Technology
203
23,774,477
https://en.wikipedia.org/wiki/C2H4S2
{{DISPLAYTITLE:C2H4S2}} The molecular formula C2H4S2 (molar mass: 92.18 g/mol, exact mass: 91.97544 u) may refer to: Isomers of dithietane 1,2-Dithietane 1,3-Dithietane
C2H4S2
Chemistry
73
89,246
https://en.wikipedia.org/wiki/Curve
In mathematics, a curve (also called a curved line in older texts) is an object similar to a line, but that does not have to be straight. Intuitively, a curve may be thought of as the trace left by a moving point. This is the definition that appeared more than 2000 years ago in Euclid's Elements: "The [curved] line is […] the first species of quantity, which has only one dimension, namely length, without any width nor depth, and is nothing else than the flow or run of the point which […] will leave from its imaginary moving some vestige in length, exempt of any width." This definition of a curve has been formalized in modern mathematics as: A curve is the image of an interval to a topological space by a continuous function. In some contexts, the function that defines the curve is called a parametrization, and the curve is a parametric curve. In this article, these curves are sometimes called topological curves to distinguish them from more constrained curves such as differentiable curves. This definition encompasses most curves that are studied in mathematics; notable exceptions are level curves (which are unions of curves and isolated points), and algebraic curves (see below). Level curves and algebraic curves are sometimes called implicit curves, since they are generally defined by implicit equations. Nevertheless, the class of topological curves is very broad, and contains some curves that do not look as one may expect for a curve, or even cannot be drawn. This is the case of space-filling curves and fractal curves. For ensuring more regularity, the function that defines a curve is often supposed to be differentiable, and the curve is then said to be a differentiable curve. A plane algebraic curve is the zero set of a polynomial in two indeterminates. More generally, an algebraic curve is the zero set of a finite set of polynomials, which satisfies the further condition of being an algebraic variety of dimension one. If the coefficients of the polynomials belong to a field , the curve is said to be defined over . In the common case of a real algebraic curve, where is the field of real numbers, an algebraic curve is a finite union of topological curves. When complex zeros are considered, one has a complex algebraic curve, which, from the topological point of view, is not a curve, but a surface, and is often called a Riemann surface. Although not being curves in the common sense, algebraic curves defined over other fields have been widely studied. In particular, algebraic curves over a finite field are widely used in modern cryptography. History Interest in curves began long before they were the subject of mathematical study. This can be seen in numerous examples of their decorative use in art and on everyday objects dating back to prehistoric times. Curves, or at least their graphical representations, are simple to create, for example with a stick on the sand on a beach. Historically, the term was used in place of the more modern term . Hence the terms and were used to distinguish what are today called lines from curved lines. For example, in Book I of Euclid's Elements, a line is defined as a "breadthless length" (Def. 2), while a line is defined as "a line that lies evenly with the points on itself" (Def. 4). Euclid's idea of a line is perhaps clarified by the statement "The extremities of a line are points," (Def. 3). Later commentators further classified lines according to various schemes. 
For example: Composite lines (lines forming an angle) Incomposite lines Determinate (lines that do not extend indefinitely, such as the circle) Indeterminate (lines that extend indefinitely, such as the straight line and the parabola) The Greek geometers had studied many other kinds of curves. One reason was their interest in solving geometrical problems that could not be solved using standard compass and straightedge construction. These curves include: The conic sections, studied in depth by Apollonius of Perga The cissoid of Diocles, studied by Diocles and used as a method to double the cube. The conchoid of Nicomedes, studied by Nicomedes as a method to both double the cube and to trisect an angle. The Archimedean spiral, studied by Archimedes as a method to trisect an angle and square the circle. The spiric sections, sections of tori studied by Perseus as sections of cones had been studied by Apollonius. A fundamental advance in the theory of curves was the introduction of analytic geometry by René Descartes in the seventeenth century. This enabled a curve to be described using an equation rather than an elaborate geometrical construction. This not only allowed new curves to be defined and studied, but it enabled a formal distinction to be made between algebraic curves that can be defined using polynomial equations, and transcendental curves that cannot. Previously, curves had been described as "geometrical" or "mechanical" according to how they were, or supposedly could be, generated. Conic sections were applied in astronomy by Kepler. Newton also worked on an early example in the calculus of variations. Solutions to variational problems, such as the brachistochrone and tautochrone questions, introduced properties of curves in new ways (in this case, the cycloid). The catenary gets its name as the solution to the problem of a hanging chain, the sort of question that became routinely accessible by means of differential calculus. In the eighteenth century came the beginnings of the theory of plane algebraic curves, in general. Newton had studied the cubic curves, in the general description of the real points into 'ovals'. The statement of Bézout's theorem showed a number of aspects which were not directly accessible to the geometry of the time, to do with singular points and complex solutions. Since the nineteenth century, curve theory is viewed as the special case of dimension one of the theory of manifolds and algebraic varieties. Nevertheless, many questions remain specific to curves, such as space-filling curves, Jordan curve theorem and Hilbert's sixteenth problem. Topological curve A topological curve can be specified by a continuous function from an interval of the real numbers into a topological space . Properly speaking, the curve is the image of However, in some contexts, itself is called a curve, especially when the image does not look like what is generally called a curve and does not characterize sufficiently For example, the image of the Peano curve or, more generally, a space-filling curve completely fills a square, and therefore does not give any information on how is defined. A curve is closed or is a loop if and . A closed curve is thus the image of a continuous mapping of a circle. A non-closed curve may also be called an open curve. If the domain of a topological curve is a closed and bounded interval , the curve is called a path, also known as topological arc (or just ). 
A curve is simple if it is the image of an interval or a circle by an injective continuous function. In other words, if a curve is defined by a continuous function with an interval as a domain, the curve is simple if and only if any two different points of the interval have different images, except, possibly, if the points are the endpoints of the interval. Intuitively, a simple curve is a curve that "does not cross itself and has no missing points" (a continuous non-self-intersecting curve). A plane curve is a curve for which is the Euclidean plane—these are the examples first encountered—or in some cases the projective plane. A is a curve for which is at least three-dimensional; a is a space curve which lies in no plane. These definitions of plane, space and skew curves apply also to real algebraic curves, although the above definition of a curve does not apply (a real algebraic curve may be disconnected). A plane simple closed curve is also called a Jordan curve. It is also defined as a non-self-intersecting continuous loop in the plane. The Jordan curve theorem states that the set complement in a plane of a Jordan curve consists of two connected components (that is the curve divides the plane in two non-intersecting regions that are both connected). The bounded region inside a Jordan curve is known as Jordan domain. The definition of a curve includes figures that can hardly be called curves in common usage. For example, the image of a curve can cover a square in the plane (space-filling curve), and a simple curve may have a positive area. Fractal curves can have properties that are strange for the common sense. For example, a fractal curve can have a Hausdorff dimension bigger than one (see Koch snowflake) and even a positive area. An example is the dragon curve, which has many other unusual properties. Differentiable curve Roughly speaking a is a curve that is defined as being locally the image of an injective differentiable function from an interval of the real numbers into a differentiable manifold , often More precisely, a differentiable curve is a subset of where every point of has a neighborhood such that is diffeomorphic to an interval of the real numbers. In other words, a differentiable curve is a differentiable manifold of dimension one. Differentiable arc In Euclidean geometry, an arc (symbol: ⌒) is a connected subset of a differentiable curve. Arcs of lines are called segments, rays, or lines, depending on how they are bounded. A common curved example is an arc of a circle, called a circular arc. In a sphere (or a spheroid), an arc of a great circle (or a great ellipse) is called a great arc. Length of a curve If is the -dimensional Euclidean space, and if is an injective and continuously differentiable function, then the length of is defined as the quantity The length of a curve is independent of the parametrization . In particular, the length of the graph of a continuously differentiable function defined on a closed interval is which can be thought of intuitively as using the Pythagorean theorem at the infinitesimal scale continuously over the full length of the curve. More generally, if is a metric space with metric , then we can define the length of a curve by where the supremum is taken over all and all partitions of . A rectifiable curve is a curve with finite length. A curve is called (or unit-speed or parametrized by arc length) if for any such that , we have If is a Lipschitz-continuous function, then it is automatically rectifiable. 
Moreover, in this case, one can define the speed (or metric derivative) of at as and then show that Differential geometry While the first examples of curves that are met are mostly plane curves (that is, in everyday words, curved lines in two-dimensional space), there are obvious examples such as the helix which exist naturally in three dimensions. The needs of geometry, and also for example classical mechanics are to have a notion of curve in space of any number of dimensions. In general relativity, a world line is a curve in spacetime. If is a differentiable manifold, then we can define the notion of differentiable curve in . This general idea is enough to cover many of the applications of curves in mathematics. From a local point of view one can take to be Euclidean space. On the other hand, it is useful to be more general, in that (for example) it is possible to define the tangent vectors to by means of this notion of curve. If is a smooth manifold, a smooth curve in is a smooth map . This is a basic notion. There are less and more restricted ideas, too. If is a manifold (i.e., a manifold whose charts are times continuously differentiable), then a curve in is such a curve which is only assumed to be (i.e. times continuously differentiable). If is an analytic manifold (i.e. infinitely differentiable and charts are expressible as power series), and is an analytic map, then is said to be an analytic curve. A differentiable curve is said to be if its derivative never vanishes. (In words, a regular curve never slows to a stop or backtracks on itself.) Two differentiable curves and are said to be equivalent if there is a bijective map such that the inverse map is also , and for all . The map is called a reparametrization of ; and this makes an equivalence relation on the set of all differentiable curves in . A arc is an equivalence class of curves under the relation of reparametrization. Algebraic curve Algebraic curves are the curves considered in algebraic geometry. A plane algebraic curve is the set of the points of coordinates such that , where is a polynomial in two variables defined over some field . One says that the curve is defined over . Algebraic geometry normally considers not only points with coordinates in but all the points with coordinates in an algebraically closed field . If C is a curve defined by a polynomial f with coefficients in F, the curve is said to be defined over F. In the case of a curve defined over the real numbers, one normally considers points with complex coordinates. In this case, a point with real coordinates is a real point, and the set of all real points is the real part of the curve. It is therefore only the real part of an algebraic curve that can be a topological curve (this is not always the case, as the real part of an algebraic curve may be disconnected and contain isolated points). The whole curve, that is the set of its complex point is, from the topological point of view a surface. In particular, the nonsingular complex projective algebraic curves are called Riemann surfaces. The points of a curve with coordinates in a field are said to be rational over and can be denoted . When is the field of the rational numbers, one simply talks of rational points. For example, Fermat's Last Theorem may be restated as: For , every rational point of the Fermat curve of degree has a zero coordinate. Algebraic curves can also be space curves, or curves in a space of higher dimension, say . They are defined as algebraic varieties of dimension one. 
They may be obtained as the common solutions of at least polynomial equations in variables. If polynomials are sufficient to define a curve in a space of dimension , the curve is said to be a complete intersection. By eliminating variables (by any tool of elimination theory), an algebraic curve may be projected onto a plane algebraic curve, which however may introduce new singularities such as cusps or double points. A plane curve may also be completed to a curve in the projective plane: if a curve is defined by a polynomial of total degree , then simplifies to a homogeneous polynomial of degree . The values of such that are the homogeneous coordinates of the points of the completion of the curve in the projective plane and the points of the initial curve are those such that is not zero. An example is the Fermat curve , which has an affine form . A similar process of homogenization may be defined for curves in higher dimensional spaces. Except for lines, the simplest examples of algebraic curves are the conics, which are nonsingular curves of degree two and genus zero. Elliptic curves, which are nonsingular curves of genus one, are studied in number theory, and have important applications to cryptography. See also Coordinate curve Crinkled arc Curve fitting Curve orientation Curve sketching Differential geometry of curves Gallery of curves Index of the curve List of curves topics List of curves Osculating circle Parametric surface Path (topology) Polygonal curve Position vector Vector-valued function Infinite-dimensional vector function Winding number Notes References Euclid, commentary and trans. by T. L. Heath Elements Vol. 1 (1908 Cambridge) Google Books E. H. Lockwood A Book of Curves (1961 Cambridge) External links Famous Curves Index, School of Mathematics and Statistics, University of St Andrews, Scotland Mathematical curves A collection of 874 two-dimensional mathematical curves Gallery of Space Curves Made from Circles, includes animations by Peter Moses Gallery of Bishop Curves and Other Spherical Curves, includes animations by Peter Moses The Encyclopedia of Mathematics article on lines. The Manifold Atlas page on 1-manifolds. Metric geometry Topology General topology
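As a small numerical companion to the arc-length definition given earlier in the article (the length is the supremum of polygonal approximations, or the integral of the speed for a continuously differentiable parametrization), here is a hedged sketch; the choice of curve (one turn of a helix) and the number of subdivision points are arbitrary.

```python
import math

def arc_length(gamma, a, b, n=100_000):
    """Approximate the length of a parametric curve gamma: [a, b] -> R^k by
    summing the lengths of n straight chords (a rectifiable-curve style estimate)."""
    length, prev = 0.0, gamma(a)
    for i in range(1, n + 1):
        cur = gamma(a + (b - a) * i / n)
        length += math.dist(prev, cur)
        prev = cur
    return length

# One turn of a helix (cos t, sin t, t); the exact length is 2*pi*sqrt(2).
helix = lambda t: (math.cos(t), math.sin(t), t)
print(arc_length(helix, 0.0, 2 * math.pi), 2 * math.pi * math.sqrt(2))
```

The chord sum converges to the exact value as the partition is refined, which is exactly how the length of a rectifiable curve is defined in the metric-space setting above.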
Curve
Physics,Mathematics
3,365
9,446,798
https://en.wikipedia.org/wiki/Abhyankar%27s%20lemma
In mathematics, Abhyankar's lemma (named after Shreeram Shankar Abhyankar) allows one to kill tame ramification by taking an extension of a base field. More precisely, Abhyankar's lemma states that if A, B, C are local fields such that A and B are finite extensions of C, with ramification indices a and b, and B is tamely ramified over C and b divides a, then the compositum AB is an unramified extension of A. See also Finite extensions of local fields References . Theorem 3, page 504. . , p. 279. . Theorems in algebraic geometry Lemmas in algebra Algebraic number theory Theorems in abstract algebra
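A compact symbolic restatement of the lemma as phrased above may be useful; the notation e(·/·) for the ramification index is ours and is not taken from the article.

```latex
% Abhyankar's lemma in the notation of the text above
A, B \text{ finite extensions of a local field } C,\quad
e(B/C) \mid e(A/C),\quad B/C \text{ tamely ramified}
\;\Longrightarrow\; AB/A \text{ is unramified, i.e. } e(AB/A) = 1.
```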
Abhyankar's lemma
Mathematics
153
11,420,712
https://en.wikipedia.org/wiki/C0343%20RNA
The C0343 RNA is a bacterial non-coding RNA, 74 nucleotides in length, that is found between the ydaN and dbpA genes in the genomes of Escherichia coli, Shigella flexneri, Salmonella enterica and Salmonella typhimurium. This ncRNA was originally identified in E. coli using high-density oligonucleotide probe arrays (microarrays). The function of this ncRNA is unknown. FnrS RNA was later found to be transcribed from the same intergenic region as C0343 RNA. See also C0299 RNA C0465 RNA C0719 RNA References External links Non-coding RNA
C0343 RNA
Chemistry
147
66,570,719
https://en.wikipedia.org/wiki/Kapteyn%20series
Kapteyn series is a series expansion of analytic functions on a domain in terms of the Bessel function of the first kind. Kapteyn series are named after Willem Kapteyn, who first studied such series in 1893. Let be a function analytic on the domain with . Then can be expanded in the form where The path of the integration is the boundary of . Here , and for , is defined by Kapteyn's series are important in physical problems. Among other applications, the solution of Kepler's equation can be expressed via a Kapteyn series: Relation between the Taylor coefficients and the coefficients of a function Let us suppose that the Taylor series of reads as Then the coefficients in the Kapteyn expansion of can be determined as follows. Examples The Kapteyn series of the powers of are found by Kapteyn himself: For it follows (see also ) and for Furthermore, inside the region , See also Schlömilch's series References Series expansions
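The Kepler-equation application mentioned above can be checked numerically. The classical result (whose displayed formula did not survive in the text) is that the eccentric anomaly E solving E − e·sin E = M admits the Kapteyn-type expansion E = M + 2·Σ_{n≥1} J_n(n e)·sin(n M)/n, convergent for eccentricities below the Laplace limit (≈0.66). The sketch below uses SciPy's Bessel functions; the eccentricity, mean anomaly, and truncation order are illustrative choices.

```python
import math
from scipy.special import jv   # Bessel function of the first kind, J_v(x)

def eccentric_anomaly(M, e, terms=30):
    """Kepler's equation E - e*sin(E) = M solved via the Bessel/Kapteyn series."""
    return M + 2.0 * sum(jv(n, n * e) * math.sin(n * M) / n
                         for n in range(1, terms + 1))

M, e = 1.0, 0.3                      # mean anomaly [rad] and eccentricity (illustrative)
E = eccentric_anomaly(M, e)
print(E, E - e * math.sin(E) - M)    # the residual should be close to zero
```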
Kapteyn series
Mathematics
208
14,129,293
https://en.wikipedia.org/wiki/Neuroblastoma%20RAS%20viral%20oncogene%20homolog
NRAS is an enzyme that in humans is encoded by the NRAS gene. It was discovered by a small team of researchers led by Robin Weiss at the Institute of Cancer Research in London. It was the third RAS gene to be discovered, and was named NRAS for its initial identification in human neuroblastoma cells. Function The N-ras proto-oncogene is a member of the Ras gene family. It is mapped on chromosome 1, and it is activated in HL60, a promyelocytic leukemia line. The order of nearby genes is as follows: cen—CD2—NGFB—NRAS—tel. The mammalian Ras gene family consists of the Harvey and Kirsten Ras genes (HRAS and KRAS), an inactive pseudogene of each (c-Hras2 and c-Kras1) and the N-Ras gene. They differ significantly only in the C-terminal 40 amino acids. These Ras genes have GTP/GDP binding and GTPase activity, and their normal function may be as G-like regulatory proteins involved in the normal control of cell growth. The N-Ras gene specifies two main transcripts of 2 kb and 4.3 kb. The difference between the two transcripts is a simple extension through the termination site of the 2 kb transcript. The N-Ras gene consists of seven exons (-I, I, II, III, IV, V, VI). The smaller 2 kb transcript contains the VIa exon, and the larger 4.3 kb transcript contains the VIb exon, which is just a longer form of the VIa exon. Both transcripts encode identical proteins, as they differ only in the 3′ untranslated region. Mutations Mutations that change amino acid residues 12, 13 or 61 activate the potential of N-ras to transform cultured cells and are implicated in a variety of human tumors, e.g. melanoma. As a drug target Binimetinib (MEK162) has undergone a phase III clinical trial for NRAS Q61-mutant melanoma. References Further reading External links GeneReviews/NCBI/NIH/UW entry on Noonan syndrome Oncogenes
Neuroblastoma RAS viral oncogene homolog
Chemistry
459
52,450,593
https://en.wikipedia.org/wiki/Dinosterane
Dinosterane is a steroidal alkane, also known as 4α,23,24-trimethylcholestane. It is used in geochemistry as a biomarker and is interpreted as an indication of dinoflagellate presence, because its precursor dinosterol occurs at high levels in extant dinoflagellate species and is rare in other taxa, although dinosterol has also been shown to be produced by a single species of marine diatom. History of use as a biomarker A 1984 study established the dinoflagellate origin of dinosterane based on the distributions of modern dinoflagellates and the abundance of dinosterane in sediment. In 1993, dinosteranes were discovered in a section of the Bristol Trough that was dated to the Rhaetian Age. Because these dinosteranes were co-deposited with dinoflagellate cysts, and because microfossil abundance tracked hydrocarbon abundance, the dinosterane was associated with marine dinoflagellates. This was the first stratigraphic evidence for Mesozoic dinoflagellates. In 1998, dinosteranes were found in high relative abundance in samples from the Lükati Formation, which were collected from the Kopli quarry in Estonia. This evidence was used to place the origin of dinoflagellates as early as the Early Cambrian, much earlier than the Bristol Trough studies had been able to. Characterisation Dinosterane's mass spectrum shows a highly increased abundance of the m/z = 98 ion compared to 24-ethyl-4α-methylcholestane, which is likely due to preferential cleavage of the C-22,23 bond. References Hydrocarbons Cholestanes Biomarkers
Dinosterane
Chemistry,Biology
370
996,107
https://en.wikipedia.org/wiki/Universal%20coefficient%20theorem
In algebraic topology, universal coefficient theorems establish relationships between homology groups (or cohomology groups) with different coefficients. For instance, for every topological space , its integral homology groups: completely determine its homology groups with coefficients in , for any abelian group : Here might be the simplicial homology, or more generally the singular homology. The usual proof of this result is a pure piece of homological algebra about chain complexes of free abelian groups. The form of the result is that other coefficients may be used, at the cost of using a Tor functor. For example it is common to take to be , so that coefficients are modulo 2. This becomes straightforward in the absence of 2-torsion in the homology. Quite generally, the result indicates the relationship that holds between the Betti numbers of and the Betti numbers with coefficients in a field . These can differ, but only when the characteristic of is a prime number for which there is some -torsion in the homology. Statement of the homology case Consider the tensor product of modules . The theorem states there is a short exact sequence involving the Tor functor Furthermore, this sequence splits, though not naturally. Here is the map induced by the bilinear map . If the coefficient ring is , this is a special case of the Bockstein spectral sequence. Universal coefficient theorem for cohomology Let be a module over a principal ideal domain (e.g., or a field.) There is also a universal coefficient theorem for cohomology involving the Ext functor, which asserts that there is a natural short exact sequence As in the homology case, the sequence splits, though not naturally. In fact, suppose and define: Then above is the canonical map: An alternative point-of-view can be based on representing cohomology via Eilenberg–MacLane space where the map takes a homotopy class of maps from to to the corresponding homomorphism induced in homology. Thus, the Eilenberg–MacLane space is a weak right adjoint to the homology functor. Example: mod 2 cohomology of the real projective space Let , the real projective space. We compute the singular cohomology of with coefficients in using the integral homology, i.e. . Knowing that the integer homology is given by: We have , so that the above exact sequences yield In fact the total cohomology ring structure is Corollaries A special case of the theorem is computing integral cohomology. For a finite CW complex , is finitely generated, and so we have the following decomposition. where are the Betti numbers of and is the torsion part of . One may check that and This gives the following statement for integral cohomology: For an orientable, closed, and connected -manifold, this corollary coupled with Poincaré duality gives that . Universal coefficient spectral sequence There is a generalization of the universal coefficient theorem for (co)homology with twisted coefficients. For cohomology we have Where is a ring with unit, is a chain complex of free modules over , is any -bimodule for some ring with a unit , is the Ext group. The differential has degree . Similarly for homology for Tor the Tor group and the differential having degree . Notes References Allen Hatcher, Algebraic Topology, Cambridge University Press, Cambridge, 2002. . A modern, geometrically flavored introduction to algebraic topology. The book is available free in PDF and PostScript formats on the author's homepage. Jerome Levine. “Knot Modules. 
I.” Transactions of the American Mathematical Society 229 (1977): 1–50. https://doi.org/10.2307/1998498 External links Universal coefficient theorem with ring coefficients Homological algebra Theorems in algebraic topology
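The short exact sequences described in the prose above, whose displayed formulas did not survive extraction, are standard and can be restated as follows. This is the usual textbook form (as in Hatcher's Algebraic Topology, cited above), not a new claim.

```latex
% Homology version: integral homology related to homology with coefficients in an abelian group A
0 \longrightarrow H_i(X;\mathbf{Z}) \otimes A \longrightarrow H_i(X;A)
  \longrightarrow \operatorname{Tor}\bigl(H_{i-1}(X;\mathbf{Z}),\,A\bigr) \longrightarrow 0

% Cohomology version over a principal ideal domain R, with an R-module A
0 \longrightarrow \operatorname{Ext}^{1}_{R}\bigl(H_{i-1}(X;R),\,A\bigr) \longrightarrow H^{i}(X;A)
  \longrightarrow \operatorname{Hom}_{R}\bigl(H_{i}(X;R),\,A\bigr) \longrightarrow 0
```

Both sequences split, though not naturally, exactly as stated in the article.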
Universal coefficient theorem
Mathematics
787
24,361,819
https://en.wikipedia.org/wiki/Carinthia%20Statistical%20Region
The Carinthia Statistical Region () is a statistical region in northern Slovenia along the border with Austria. The region is difficult to access and is poorly connected with the central part of Slovenia. The environment has been strongly affected by heavy industry in the valleys. The importance of agriculture is shown by the fact that the farms in the region are among the largest in the country. More than 90% of farms in the region are engaged in breeding livestock. Farm owners in the region have the youngest average age in Slovenia (53 years); they average eight years younger than farm owners in the Coastal–Karst Statistical Region. In 2013 the registered unemployment rate was higher than the national average. The difference between the registered unemployment rate for men and women was the highest among the statistical regions: for women it was 7 percentage points higher than for men. The share of five-year survivals among new enterprises was the highest here (59% of all new enterprises in 2012). Cities and towns The Carinthia Statistical Region includes 5 cities and towns, the largest of which is Slovenj Gradec. Municipalities The Carinthia Statistical Region comprises the following 12 municipalities: Črna na Koroškem Dravograd Mežica Mislinja Muta Podvelka Prevalje Radlje ob Dravi Ravne na Koroškem Ribnica na Pohorju Slovenj Gradec Vuzenica Demographics The population in 2020 was 70,755. Economy Employment structure: 46.6% services, 49.6% industry, 3.8% agriculture. Tourism The region attracts only 1.1% of the total number of tourists in Slovenia, most being from Slovenia (66.7%). Transportation Length of motorways: 0 km Length of other roads: 1,620.7 km Sources Slovenian regions in figures 2014 References Statistical regions of Slovenia
Carinthia Statistical Region
Mathematics
379
6,592,539
https://en.wikipedia.org/wiki/Computer%20Aided%20Verification
In computer science, the International Conference on Computer-Aided Verification (CAV) is an annual academic conference on the theory and practice of computer-aided formal analysis of software and hardware systems, broadly known as formal methods. Among the important results originally published in CAV are techniques in model checking, such as Counterexample-Guided Abstraction Refinement and partial order reduction. It is often ranked among the top conferences in computer science. The first CAV was held in 1989 in Grenoble, France. The CAV proceedings (1989-present) are published by Springer Science+Business Media. They have been open access since 2018. The annual CAV Award was established in 2008. The list of recipients and citations can be found at https://i-cav.org/cav-award/. See also List of computer science conferences Symposium on Logic in Computer Science European Joint Conferences on Theory and Practice of Software References External links bibliography for CAV at DBLP Conference proceedings Theoretical computer science conferences Logic conferences
Computer Aided Verification
Technology
208
39,319,146
https://en.wikipedia.org/wiki/Lentiviral%20vector%20in%20gene%20therapy
The use of lentiviral vectors in gene therapy is a method by which genes can be inserted, modified, or deleted in organisms using lentiviruses. Lentiviruses are a family of viruses that are responsible for diseases like AIDS, which infect by inserting DNA into their host cells' genome. Many such viruses have been the basis of research using viruses in gene therapy, but the lentivirus is unique in its ability to infect non-dividing cells, and therefore has a wider range of potential applications. Lentiviruses can become endogenous (ERV), integrating their genome into the host germline genome, so that the virus is henceforth inherited by the host's descendants. Scientists use the lentivirus' mechanisms of infection to achieve a desired outcome in gene therapy. Lentiviral vectors in gene therapy have been pioneered by Luigi Naldini. The lentivirus is a retrovirus, meaning it has a single-stranded RNA genome with a reverse transcriptase enzyme. Lentiviruses also have a viral envelope with protruding glycoproteins that aid in attachment to the host cell's outer membrane. The virus contains a reverse transcriptase molecule that performs reverse transcription of the viral genetic material upon entering the cell. Within the viral genome are RNA sequences that code for specific proteins that facilitate the incorporation of the viral sequences into the host cell genome. The "gag" gene codes for the structural components of the viral nucleocapsid proteins: the matrix (MA/p17), the capsid (CA/p24) and the nucleocapsid (NC/p7) proteins. The "pol" domain codes for the reverse transcriptase and integrase enzymes. Lastly, the "env" domain of the viral genome encodes the glycoproteins and envelope on the surface of the virus. There are multiple steps involved in the infection and replication of a lentivirus in a host cell. In the first step the virus uses its surface glycoproteins for attachment to the outer surface of a cell. More specifically, lentiviruses attach to the CD4 glycoproteins on the surface of a host's target cell. The viral material is then injected into the host cell's cytoplasm. Within the cytoplasm, the viral reverse transcriptase enzyme performs reverse transcription of the viral RNA genome to create a viral DNA genome. The viral DNA is then sent into the nucleus of the host cell where it is incorporated into the host cell's genome with the help of the viral enzyme integrase. From then on, the host cell transcribes the entire viral RNA and expresses the structural viral proteins, in particular those that form the viral capsid and the envelope. The lentiviral RNA and the viral proteins then assemble and the newly formed virions leave the host cell when enough are made. Two methods of gene therapy using lentiviruses have been proposed. In the ex vivo methodology, cells are extracted from a patient and then cultured. A lentiviral vector carrying therapeutic transgenes is then introduced to the culture to infect the cells. The now modified cells continue to be cultured until they can be infused into the patient. In vivo gene therapy is the injection of viral vectors containing transgenes directly into the patient. Designing a lentivirus vector Lentiviruses are modified to act as a vector to insert beneficial genes into cells. Unlike other retroviruses, which cannot penetrate the nuclear envelope and can therefore only act on cells while they are undergoing mitosis, lentiviruses can infect cells whether or not they are dividing (shown to be largely due to the capsid protein). 
Many cell types, like neurons, do not divide in adult organisms, so lentiviral gene therapy is a good candidate for treating conditions that affect those cell types. Some experimental applications of lentiviral vectors have been done in gene therapy in order to cure diseases like diabetes mellitus, murine haemophilia A, prostate cancer, chronic granulomatous disease, and vascular diseases. HIV-derived lentiviral vectors have been widely developed for their ability to target specific genes through the coactivator PSIP1. This target specificity allows for the development of lentiviral gene vectors that do not carry the risk of randomly inserting themselves into normally functioning genes. As HIV is pathogenic, it must be genetically modified to remove its disease-causing properties and its ability to replicate itself. This can be achieved by deleting viral genes that are unnecessary for transduction of therapeutic transgenes. It has been proposed that by targeting the "gag" and "env" domains, enough of the HIV-1 genome can be deleted without losing its effectiveness in gene therapy while minimizing viral genes integrated into the patient. Genes may also be replaced rather than disrupted as another method to reduce the risks associated with the use of HIV-1. Other lentiviruses such as Feline immunodeficiency virus and Equine infectious anemia virus have been developed for use in gene therapy and are of interest due to their inability to cause serious disease in human hosts. Equine infectious anemia virus in particular has been shown to perform somewhat better than HIV-1 in hematopoietic stem cells. Insertional mutagenesis Historically, lentiviral vectors included strong viral promoters which had a side effect of insertional mutagenesis, nuclear DNA mutations that affect the function of a gene. These strong viral promoters were shown to be the main cause of cancer formation. As a result, viral promoters have been replaced by cellular promoters and regulatory sequences. Contrast with other viral vectors As mentioned, lentiviruses have the unique ability to infect non-dividing cells. Beyond that, there are several other properties that distinguish lentiviral vectors from other viral vectors. Such properties are important to consider when determining whether lentiviruses are appropriate for a given treatment. Gammaretroviruses Gammaretroviruses are retroviruses like lentiviruses. Murine leukemia viruses (MLVs) were among the first to be investigated for their use in gene therapy. However, recent research has favored lentiviruses for their ability to integrate into non-dividing cells. More practically, gammaretroviruses have an affinity for integrating themselves near oncogene promoters, which poses an added risk of tumor formation. MLVs may be replication competent, meaning they can replicate in the host cell. These replication-competent viruses offer stable gene transfer and tumor- and tissue-specific targeting. Adenoviruses In gene therapy, adenoviruses differ from lentiviruses in many ways, some of which provide advantages over lentiviruses. Transduction efficiency is higher in adenoviruses compared to lentiviruses. Secondly, most human cells have receptors for adenoviruses, likely as a result of the wide variety of adenovirus diseases in humans. As adenoviruses frequently infect humans, this could build an immune response in the body. Such a response can reduce the efficiency of adenoviral vector therapies and can result in adverse reactions such as inflammation of tissues. 
Research has been conducted to exploit this immune response to target cancerous cells and to develop vaccines. Hybrid adenovirus-retroviruses (specifically MLVs) have also been developed to exploit the benefits of MLVs and adenoviruses. Applications Severe combined immunodeficiency disease The ADA-deficient variant of severe combined immunodeficiency (SCID) was treated highly successfully in a multi-year study reported in 2021. Over 95% of treated patients continued to be event-free after 36 months, and 100% of patients survived this normally lethal disease. A self-inactivating lentiviral vector, EFS-ADA LV, was used to insert a functional ADA gene into autologous CD34+ hematopoietic stem and progenitor cells (HSPCs). Vascular transplants In a study designed to enhance the outcomes of vascular transplant through vascular endothelial cell gene therapy, the third generation of lentivirus was shown to be effective in the delivery of genes to venous grafts and transplants in procedures like coronary artery bypass. Because the virus has been adapted to lose most of its genome, the virus becomes safer and more effective in transplanting the required genes into the host cell. A drawback to this therapy, as explained in the study, is that long-term gene expression may require the use of promoters, which can aid in greater transgene expression. The researchers accomplished this by the addition of self-inactivating plasmids and by creating a more universal tropism through pseudotyping with a vesicular stomatitis virus glycoprotein. Chronic granulomatous disease In chronic granulomatous disease (CGD), immune functioning is deficient as a result of the mutations in components of the nicotinamide adenine dinucleotide phosphate oxidase (NADPH oxidase) enzyme in phagocyte cells, which catalyzes the production of superoxide free radicals. If this enzyme becomes deficient, the phagocytes cannot effectively kill the engulfed bacteria, so granulomas can form. A study performed in mice emphasizes the use of lineage-specific lentiviral vectors to express a normal version of one of the mutant CGD proteins, allowing white blood cells to make a functional version of the NADPH oxidase. Scientists developed this strain of lentivirus by transfecting 293T cells with virus pseudotyped with the vesicular stomatitis virus G protein. The viral vector's responsibility was to increase the production of a functional NADPH oxidase gene in these phagocytic cells. They did this to create an affinity for myeloid cells. Prostate cancer With prostate cancer, the lentivirus is transformed by being bound to trastuzumab to attach to androgen-sensitive LNCaP and castration-resistant C4-2 human prostate cancer cell lines. These two cell lines are primarily responsible for secretion of excess human epidermal growth factor receptor 2 (HER-2), a receptor protein linked to prostate cancer. By attaching to these cells and changing their genomes, the lentivirus can slow down, and even kill, the cancer-causing cells. Researchers achieved the specificity of the vector by manipulating the Fab region of the viral genome and pseudotyping it with the Sindbis virus. Haemophilia A Haemophilia A has also been studied in gene therapy with a lentiviral vector in mice. The vector targets the haematopoietic cells in order to increase the amount of factor VIII, which is affected in haemophilia A. But this continues to be a subject of study as the lentivirus vector was not completely successful in achieving this goal. 
They did this by transfecting the virus into 293T cells, creating a 2bF8-expressing generation of viral vectors. Rheumatoid arthritis Studies have also found that injection of a lentiviral vector with IL-10-expressing genes in utero in mice can suppress, and prevent, rheumatoid arthritis and create new cells with constant gene expression. This contributes to the data on stem cells and in utero inoculation of viral vectors for gene therapy. The targets for the viral vector in this study were the synovial cells. Normally functioning synovial cells produce TNFα and IL-1. Diabetes mellitus Like many of the in utero studies, the lentiviral vector gene therapy for diabetes mellitus is more effective in utero as the stem cells that become affected by the gene therapy create new cells with the new gene introduced by the viral intervention. The vector targets the cells within the pancreas to add insulin-secreting genes to help control diabetes mellitus. Vectors were cloned using a cytomegalovirus promoter and then co-transfected into 293T cells. Neurological disease As mature neurons do not divide, lentiviruses are ideal for division-independent gene therapy. Studies of lentiviral gene therapy have been conducted on patients with advanced Parkinson's disease and aging-related atrophy of neurons in primates. See also Retinal gene therapy using lentiviral vectors References Further reading External links The Place of Retroviruses in Biology Synthesis of Gag and Gag-Pro-Pol Proteins in Retroviruses About: Retroviruses Resource Overview Applied genetics Gene delivery Lentiviruses
Lentiviral vector in gene therapy
Chemistry,Biology
2,628
428,364
https://en.wikipedia.org/wiki/Phishing
Phishing is a form of social engineering and a scam where attackers deceive people into revealing sensitive information or installing malware such as viruses, worms, adware, or ransomware. Phishing attacks have become increasingly sophisticated and often transparently mirror the site being targeted, allowing the attacker to observe everything while the victim navigates the site, and transverses any additional security boundaries with the victim. As of 2020, it is the most common type of cybercrime, with the FBI's Internet Crime Complaint Center reporting more incidents of phishing than any other type of cybercrime. The term "phishing" was first recorded in 1995 in the cracking toolkit AOHell, but may have been used earlier in the hacker magazine 2600. It is a variation of fishing and refers to the use of lures to "fish" for sensitive information. Measures to prevent or reduce the impact of phishing attacks include legislation, user education, public awareness, and technical security measures. The importance of phishing awareness has increased in both personal and professional settings, with phishing attacks among businesses rising from 72% in 2017 to 86% in 2020, already rising to 94% in 2023. Types Email phishing Phishing attacks, often delivered via email spam, attempt to trick individuals into giving away sensitive information or login credentials. Most attacks are "bulk attacks" that are not targeted and are instead sent in bulk to a wide audience. The goal of the attacker can vary, with common targets including financial institutions, email and cloud productivity providers, and streaming services. The stolen information or access may be used to steal money, install malware, or spear phish others within the target organization. Compromised streaming service accounts may also be sold on darknet markets. This type of social engineering attack can involve sending fraudulent emails or messages that appear to be from a trusted source, such as a bank or government agency. These messages typically redirect to a fake login page where users are prompted to enter their credentials. Spear phishing Spear phishing is a targeted phishing attack that uses personalized messaging, especially e‑mails, to trick a specific individual or organization into believing they are legitimate. It often utilizes personal information about the target to increase the chances of success. These attacks often target executives or those in financial departments with access to sensitive financial data and services. Accountancy and audit firms are particularly vulnerable to spear phishing due to the value of the information their employees have access to. The Russian government-run Threat Group-4127 (Fancy Bear) (GRU Unit 26165) targeted Hillary Clinton's 2016 presidential campaign with spear phishing attacks on over 1,800 Google accounts, using the domain to threaten targeted users. A study on spear phishing susceptibility among different age groups found that 43% of youth aged 18–25 years and 58% of older users clicked on simulated phishing links in daily e‑mails over 21 days. Older women had the highest susceptibility, while susceptibility in young users declined during the study, but remained stable among older users. Voice phishing (Vishing) Voice over IP (VoIP) is used in vishing or voice phishing attacks, where attackers make automated phone calls to large numbers of people, often using text-to-speech synthesizers, claiming fraudulent activity on their accounts. 
The attackers spoof the calling phone number to appear as if it is coming from a legitimate bank or institution. The victim is then prompted to enter sensitive information or connected to a live person who uses social engineering tactics to obtain information. Vishing takes advantage of the public's lower awareness and trust in voice telephony compared to email phishing. SMS phishing (smishing) SMS phishing or smishing is a type of phishing attack that uses text messages from a cell phone or smartphone to deliver a bait message. The victim is usually asked to click a link, call a phone number, or contact an email address provided by the attacker. They may then be asked to provide private information, such as login credentials for other websites. The difficulty in identifying illegitimate links can be compounded on mobile devices due to the limited display of URLs in mobile browsers. Smishing can be just as effective as email phishing, as many smartphones have fast internet connectivity. Smishing messages may also come from unusual phone numbers. Page hijacking Page hijacking involves redirecting users to malicious websites or exploit kits through the compromise of legitimate web pages, often using cross site scripting. Hackers may insert exploit kits such as MPack into compromised websites to exploit legitimate users visiting the server. Page hijacking can also involve the insertion of malicious inline frames, allowing exploit kits to load. This tactic is often used in conjunction with watering hole attacks on corporate targets. Quishing A relatively new trend in online scam activity is "quishing". The term is derived from "QR" (Quick Response) codes and "phishing", as scammers exploit the convenience of QR codes to trick users into giving up sensitive data, by scanning a code containing an embedded malicious web site link. Unlike traditional phishing, which relies on deceptive emails or websites, quishing uses QR codes to bypass email filters and increase the likelihood that victims will fall for the scam, as people tend to trust QR codes and may not scrutinize them as carefully as a URL or email link. The bogus codes may be sent by email, social media, or in some cases hard copy stickers are placed over legitimate QR codes on such things as advertising posters and car park notices. When victims scan the QR code with their phone or device, they are redirected to a fake website designed to steal personal information, login credentials, or financial details. As QR codes become more widely used for things like payments, event check-ins, and product information, quishing is emerging as a significant concern for digital security. Users are advised to exercise caution when scanning unfamiliar QR codes and ensure they are from trusted sources, although the UK's National Cyber Security Centre rates the risk as far lower than other types of lure. Techniques Link manipulation Phishing attacks often involve creating fake links that appear to be from a legitimate organization. These links may use misspelled URLs or subdomains to deceive the user. In the following example URL, , it can appear to the untrained eye as though the URL will take the user to the example section of the yourbank website; this URL points to the "yourbank" (i.e. phishing subdomain) section of the example website (fraudster's domain name). Another tactic is to make the displayed text for a link appear trustworthy, while the actual link goes to the phisher's site. 
To check the destination of a link, many email clients and web browsers will show the URL in the status bar when the mouse is hovering over it. However, some phishers may be able to bypass this security measure. Internationalized domain names (IDNs) can be exploited via IDN spoofing or homograph attacks to allow attackers to create fake websites with visually identical addresses to legitimate ones. These attacks have been used by phishers to disguise malicious URLs using open URL redirectors on trusted websites. Even digital certificates, such as SSL, may not protect against these attacks as phishers can purchase valid certificates and alter content to mimic genuine websites or host phishing sites without SSL. Social engineering Phishing often uses social engineering techniques to trick users into performing actions such as clicking a link or opening an attachment, or revealing sensitive information. It often involves pretending to be a trusted entity and creating a sense of urgency, like threatening to close or seize a victim's bank or insurance account. An alternative technique to impersonation-based phishing is the use of fake news articles to trick victims into clicking on a malicious link. These links often lead to fake websites that appear legitimate, but are actually run by attackers who may try to install malware or present fake "virus" notifications to the victim. History Early history Early phishing techniques can be traced back to the 1990s, when black hat hackers and the warez community used AOL to steal credit card information and commit other online crimes. The term "phishing" is said to have been coined by Khan C. Smith, a well-known spammer and hacker, and its first recorded mention was found in the hacking tool AOHell, which was released in 1994. AOHell allowed hackers to impersonate AOL staff and send instant messages to victims asking them to reveal their passwords. In response, AOL implemented measures to prevent phishing and eventually shut down the warez scene on their platform. 2000s In the 2000s, phishing attacks became more organized and targeted. The first known direct attempt against a payment system, E-gold, occurred in June 2001, and shortly after the September 11 attacks, a "post-9/11 id check" phishing attack followed. The first known phishing attack against a retail bank was reported in September 2003. Between May 2004 and May 2005, approximately 1.2 million computer users in the United States suffered losses caused by phishing, totaling approximately . Phishing was recognized as a fully organized part of the black market, and specializations emerged on a global scale that provided phishing software for payment, which were assembled and implemented into phishing campaigns by organized gangs. The United Kingdom banking sector suffered from phishing attacks, with losses from web banking fraud almost doubling in 2005 compared to 2004. In 2006, almost half of phishing thefts were committed by groups operating through the Russian Business Network based in St. Petersburg. Email scams posing as the Internal Revenue Service were also used to steal sensitive data from U.S. taxpayers. Social networking sites are a prime target of phishing, since the personal details in such sites can be used in identity theft; In 2007, 3.6 million adults lost due to phishing attacks. The Anti-Phishing Working Group reported receiving 115,370 phishing email reports from consumers with US and China hosting more than 25% of the phishing pages each in the third quarter of 2009. 
2010s Phishing in the 2010s saw a significant increase in the number of attacks. In 2011, the master keys for RSA SecurID security tokens were stolen through a phishing attack. Chinese phishing campaigns also targeted high-ranking officials in the US and South Korean governments and military, as well as Chinese political activists. According to Ghosh, phishing attacks increased from 187,203 in 2010 to 445,004 in 2012. In August 2013, Outbrain suffered a spear-phishing attack, and in November 2013, 110 million customer and credit card records were stolen from Target customers through a phished subcontractor account. CEO and IT security staff subsequently fired. In August 2014, iCloud leaks of celebrity photos were based on phishing e-mails sent to victims that looked like they came from Apple or Google. In November 2014, phishing attacks on ICANN gained administrative access to the Centralized Zone Data System; also gained was data about users in the system - and access to ICANN's public Governmental Advisory Committee wiki, blog, and whois information portal. Fancy Bear was linked to spear-phishing attacks against the Pentagon email system in August 2015, and the group used a zero-day exploit of Java in a spear-phishing attack on the White House and NATO. Fancy Bear carried out spear phishing attacks on email addresses associated with the Democratic National Committee in the first quarter of 2016. In August 2016, members of the Bundestag and political parties such as Linken-faction leader Sahra Wagenknecht, Junge Union, and the CDU of Saarland were targeted by spear-phishing attacks suspected to be carried out by Fancy Bear. In August 2016, the World Anti-Doping Agency reported the receipt of phishing emails sent to users of its database claiming to be official WADA, but consistent with the Russian hacking group Fancy Bear. In 2017, 76% of organizations experienced phishing attacks, with nearly half of the information security professionals surveyed reporting an increase from 2016. In the first half of 2017, businesses and residents of Qatar were hit with over 93,570 phishing events in a three-month span. In August 2017, customers of Amazon faced the Amazon Prime Day phishing attack, when hackers sent out seemingly legitimate deals to customers of Amazon. When Amazon's customers attempted to make purchases using the "deals", the transaction would not be completed, prompting the retailer's customers to input data that could be compromised and stolen. In 2018, the company block.one, which developed the EOS.IO blockchain, was attacked by a phishing group who sent phishing emails to all customers aimed at intercepting the user's cryptocurrency wallet key, and a later attack targeted airdrop tokens. 2020s Phishing attacks have evolved in the 2020s to include elements of social engineering, as demonstrated by the July 15, 2020, Twitter breach. In this case, a 17-year-old hacker and accomplices set up a fake website resembling Twitter's internal VPN provider used by remote working employees. Posing as helpdesk staff, they called multiple Twitter employees, directing them to submit their credentials to the fake VPN website. Using the details supplied by the unsuspecting employees, they were able to seize control of several high-profile user accounts, including those of Barack Obama, Elon Musk, Joe Biden, and Apple Inc.'s company account. The hackers then sent messages to Twitter followers soliciting Bitcoin, promising to double the transaction value in return. 
The hackers collected 12.86 BTC (about $117,000 at the time). Anti-phishing There are anti-phishing websites which publish exact messages that have been recently circulating the internet, such as FraudWatch International and Millersmiles. Such sites often provide specific details about the particular messages. As recently as 2007, the adoption of anti-phishing strategies by businesses needing to protect personal and financial information was low. There are several different techniques to combat phishing, including legislation and technology created specifically to protect against phishing. These techniques include steps that can be taken by individuals, as well as by organizations. Phone, web site, and email phishing can now be reported to authorities, as described below. User training Effective phishing education, including conceptual knowledge and feedback, is an important part of any organization's anti-phishing strategy. While there is limited data on the effectiveness of education in reducing susceptibility to phishing, much information on the threat is available online. Simulated phishing campaigns, in which organizations test their employees' training by sending fake phishing emails, are commonly used to assess their effectiveness. One example is a study by the National Library of Medicine, in which an organization received 858,200 emails during a 1-month testing period, with 139,400 (16%) being marketing and 18,871 (2%) being identified as potential threats. These campaigns are often used in the healthcare industry, as healthcare data is a valuable target for hackers. These campaigns are just one of the ways that organizations are working to combat phishing. Nearly all legitimate e-mail messages from companies to their customers contain an item of information that is not readily available to phishers. Some companies, for example PayPal, always address their customers by their username in emails, so if an email addresses the recipient in a generic fashion ("Dear PayPal customer") it is likely to be an attempt at phishing. Furthermore, PayPal offers various methods to determine spoof emails and advises users to forward suspicious emails to their spoof@PayPal.com domain to investigate and warn other customers. However it is unsafe to assume that the presence of personal information alone guarantees that a message is legitimate, and some studies have shown that the presence of personal information does not significantly affect the success rate of phishing attacks; which suggests that most people do not pay attention to such details. Emails from banks and credit card companies often include partial account numbers, but research has shown that people tend to not differentiate between the first and last digits. A study on phishing attacks in game environments found that educational games can effectively educate players against information disclosures and can increase awareness on phishing risk thus mitigating risks. The Anti-Phishing Working Group, one of the largest anti-phishing organizations in the world, produces regular report on trends in phishing attacks. Technical approaches A wide range of technical approaches are available to prevent phishing attacks reaching users or to prevent them from successfully capturing sensitive information. Filtering out phishing mail Specialized spam filters can reduce the number of phishing emails that reach their addressees' inboxes. 
These filters use a number of techniques including machine learning and natural language processing approaches to classify phishing emails, and reject email with forged addresses. Browsers alerting users to fraudulent websites Another popular approach to fighting phishing is to maintain a list of known phishing sites and to check websites against the list. One such service is the Safe Browsing service. Web browsers such as Google Chrome, Internet Explorer 7, Mozilla Firefox 2.0, Safari 3.2, and Opera all contain this type of anti-phishing measure. Firefox 2 used Google anti-phishing software. Opera 9.1 uses live blacklists from Phishtank, cyscon and GeoTrust, as well as live whitelists from GeoTrust. Some implementations of this approach send the visited URLs to a central service to be checked, which has raised concerns about privacy. According to a report by Mozilla in late 2006, Firefox 2 was found to be more effective than Internet Explorer 7 at detecting fraudulent sites in a study by an independent software testing company. An approach introduced in mid-2006 involves switching to a special DNS service that filters out known phishing domains. To mitigate the problem of phishing sites impersonating a victim site by embedding its images (such as logos), several site owners have altered the images to send a message to the visitor that a site may be fraudulent. The image may be moved to a new filename and the original permanently replaced, or a server can detect that the image was not requested as part of normal browsing, and instead send a warning image. Augmenting password logins The Bank of America website was one of several that asked users to select a personal image (marketed as SiteKey) and displayed this user-selected image with any forms that request a password. Users of the bank's online services were instructed to enter a password only when they saw the image they selected. The bank has since discontinued the use of SiteKey. Several studies suggest that few users refrain from entering their passwords when images are absent. In addition, this feature (like other forms of two-factor authentication) is susceptible to other attacks, such as those suffered by Scandinavian bank Nordea in late 2005, and Citibank in 2006. A similar system, in which an automatically generated "Identity Cue" consisting of a colored word within a colored box is displayed to each website user, is in use at other financial institutions. Security skins are a related technique that involves overlaying a user-selected image onto the login form as a visual cue that the form is legitimate. Unlike the website-based image schemes, however, the image itself is shared only between the user and the browser, and not between the user and the website. The scheme also relies on a mutual authentication protocol, which makes it less vulnerable to attacks that affect user-only authentication schemes. Still another technique relies on a dynamic grid of images that is different for each login attempt. The user must identify the pictures that fit their pre-chosen categories (such as dogs, cars and flowers). Only after they have correctly identified the pictures that fit their categories are they allowed to enter their alphanumeric password to complete the login. 
Unlike the static images used on the Bank of America website, a dynamic image-based authentication method creates a one-time passcode for the login, requires active participation from the user, and is very difficult for a phishing website to correctly replicate because it would need to display a different grid of randomly generated images that includes the user's secret categories. Monitoring and takedown Several companies offer banks and other organizations likely to suffer from phishing scams round-the-clock services to monitor, analyze and assist in shutting down phishing websites. Automated detection of phishing content is still below accepted levels for direct action, with content-based analysis reaching between 80% and 90% of success so most of the tools include manual steps to certify the detection and authorize the response. Individuals can contribute by reporting phishing to both volunteer and industry groups, such as cyscon or PhishTank. Phishing web pages and emails can be reported to Google. Multi-factor authentication Organizations can implement two factor or multi-factor authentication (MFA), which requires a user to use at least 2 factors when logging in. (For example, a user must both present a smart card and a password). This mitigates some risk, in the event of a successful phishing attack, the stolen password on its own cannot be reused to further breach the protected system. However, there are several attack methods which can defeat many of the typical systems. MFA schemes such as WebAuthn address this issue by design. Legal responses On January 26, 2004, the U.S. Federal Trade Commission filed the first lawsuit against a Californian teenager suspected of phishing by creating a webpage mimicking America Online and stealing credit card information. Other countries have followed this lead by tracing and arresting phishers. A phishing kingpin, Valdir Paulo de Almeida, was arrested in Brazil for leading one of the largest phishing crime rings, which in two years stole between and . UK authorities jailed two men in June 2005 for their role in a phishing scam, in a case connected to the U.S. Secret Service Operation Firewall, which targeted notorious "carder" websites. In 2006, Japanese police arrested eight people for creating fake Yahoo Japan websites, netting themselves () and the FBI detained a gang of sixteen in the U.S. and Europe in Operation Cardkeeper. Senator Patrick Leahy introduced the Anti-Phishing Act of 2005 to Congress in the United States on March 1, 2005. This bill aimed to impose fines of up to $250,000 and prison sentences of up to five years on criminals who used fake websites and emails to defraud consumers. In the UK, the Fraud Act 2006 introduced a general offense of fraud punishable by up to ten years in prison and prohibited the development or possession of phishing kits with the intention of committing fraud. Companies have also joined the effort to crack down on phishing. On March 31, 2005, Microsoft filed 117 federal lawsuits in the U.S. District Court for the Western District of Washington. The lawsuits accuse "John Doe" defendants of obtaining passwords and confidential information. March 2005 also saw a partnership between Microsoft and the Australian government teaching law enforcement officials how to combat various cyber crimes, including phishing. Microsoft announced a planned further 100 lawsuits outside the U.S. in March 2006, followed by the commencement, as of November 2006, of 129 lawsuits mixing criminal and civil actions. 
AOL reinforced its efforts against phishing in early 2006 with three lawsuits seeking a total of under the 2005 amendments to the Virginia Computer Crimes Act, and Earthlink has joined in by helping to identify six men subsequently charged with phishing fraud in Connecticut. In January 2007, Jeffrey Brett Goodin of California became the first defendant convicted by a jury under the provisions of the CAN-SPAM Act of 2003. He was found guilty of sending thousands of emails to AOL users, while posing as the company's billing department, which prompted customers to submit personal and credit card information. Facing a possible 101 years in prison for the CAN-SPAM violation and ten other counts including wire fraud, the unauthorized use of credit cards, and the misuse of AOL's trademark, he was sentenced to serve 70 months. Goodin had been in custody since failing to appear for an earlier court hearing and began serving his prison term immediately. Notable incidents 2016–2021 literary phishing thefts See also Clickjacking Trojan Horse References External links Anti-Phishing Working Group Center for Identity Management and Information Protection – Utica College Plugging the "phishing" hole: legislation versus technology () – Duke Law & Technology Review Example of a Phishing Attempt with Screenshots and Explanations – StrategicRevenue.com A Profitless Endeavor: Phishing as Tragedy of the Commons – Microsoft Corporation Database for information on phishing sites reported by the public – PhishTank The Impact of Incentives on Notice and Take-down − Computer Laboratory, University of Cambridge (PDF, 344 kB) Confidence tricks Cybercrime Deception Fraud Identity theft Internet terminology Organized crime activity Social engineering (security) Spamming Types of cyberattacks
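As a toy illustration of the machine-learning filtering idea described under "Filtering out phishing mail" above (not part of the article; the training messages and labels below are invented), a minimal bag-of-words classifier might look like this in Python with scikit-learn:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set; a real filter would be trained on large labeled corpora.
texts = [
    "verify your account now or it will be suspended",
    "click here to confirm your password immediately",
    "agenda for tomorrow's project meeting attached",
    "lunch on thursday? let me know what works",
]
labels = ["phishing", "phishing", "legitimate", "legitimate"]

# Bag-of-words features feeding a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["please confirm your account password now"]))  # expected: ['phishing']

Real deployments combine text classification of this kind with sender-reputation checks, URL analysis, and address-forgery rejection rather than relying on message wording alone.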
Phishing
Technology
5,200
1,343,748
https://en.wikipedia.org/wiki/Transition%20radiation%20detector
A transition radiation detector (TRD) is a particle detector using the γ-dependent threshold of transition radiation in a stratified material. It contains many layers of materials with different indices of refraction. At each interface between materials, the probability of transition radiation increases with the relativistic gamma factor. Thus particles with a large γ give off many photons, and those with a small γ give off few. For a given energy, this allows a discrimination between a lighter particle (which has a high γ and therefore radiates) and a heavier particle (which has a low γ and radiates much less). The passage of the particle is observed through many thin layers of material placed in air or gas. The radiated photon deposits energy by the photoelectric effect, and the signal is detected as ionization. Usually materials with a low atomic number are preferred for the radiator, while for detecting the photons materials with a high atomic number are used to get a high cross section for the photoelectric effect. TRD detectors are used in the ALICE and ATLAS experiments at the Large Hadron Collider. The ALICE TRD operates together with a large TPC (Time Projection Chamber) and TOF (Time of Flight counter) to perform particle identification in ion collisions. The ATLAS TRD is called the TRT (Transition Radiation Tracker), which also serves as a tracker, measuring particles' trajectories simultaneously. Particle detectors
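To make the gamma-factor argument above concrete, here is an illustrative Python sketch (not from the article; the rest masses are standard values, and the quoted radiation onset of roughly γ ≈ 1000 is only a typical order of magnitude, not a property of any specific detector):

# Lorentz factor gamma = E / (m c^2) for particles of the same total energy.
MASSES_MEV = {"electron": 0.511, "pion": 139.57, "proton": 938.27}  # rest masses in MeV/c^2

def lorentz_factor(total_energy_gev, particle):
    """Return gamma for a particle of the given total energy (in GeV)."""
    return total_energy_gev * 1000.0 / MASSES_MEV[particle]

for name in MASSES_MEV:
    print(name, round(lorentz_factor(2.0, name), 1))
# At 2 GeV: electron ~ 3914, pion ~ 14.3, proton ~ 2.1; only the electron sits far
# above a typical transition-radiation onset (gamma of order 1000), which is what
# a TRD exploits to separate electrons from heavier hadrons of the same energy.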
Transition radiation detector
Physics,Technology,Engineering
278
5,445,548
https://en.wikipedia.org/wiki/Tellurium%20trioxide
Tellurium trioxide (TeO3) is an inorganic chemical compound of tellurium and oxygen. In this compound, tellurium is in the +6 oxidation state. Polymorphs There are two forms, yellow-red α-TeO3 and grey, rhombohedral, β-TeO3 which is less reactive. α-TeO3 has a structure similar to FeF3 with octahedral TeO6 units that share all vertices. Preparation α-TeO3 can be prepared by heating orthotelluric acid, Te(OH)6, at over 300 °C. The β-TeO3 form can be prepared by heating α-TeO3 in a sealed tube with O2 and H2SO4. α-TeO3 is unreactive to water but is a powerful oxidising agent when heated. With alkalis it forms tellurates. α-TeO3 when heated loses oxygen to form firstly Te2O5 and then TeO2. References Oxides Tellurium(VI) compounds Interchalcogens
Tellurium trioxide
Chemistry
229
59,318,599
https://en.wikipedia.org/wiki/Windisch%E2%80%93Kolbach%20unit
°WK or degrees Windisch-Kolbach is a unit for measuring the diastatic power of malt, named after the German brewer Wilhelm Windisch and the Luxembourg brewer Paul Kolbach. It is a common unit in beer brewing (especially in Europe) that measures the ability of enzymes in malt to reduce starch to sugar (maltose). It is defined as the amount of maltose formed by 100 g of malt in 30 min at 20 °C. Degrees Lintner is a unit used in the United States for the same purpose. The conversion is as follows: °Lintner = (°WK + 16) / 3.5, or equivalently °WK = (3.5 × °Lintner) − 16. 334 °WK = 3.014×10⁻⁷ katal References W. Diemair: Analytik der Lebensmittel Nachweis und Bestimmung von Lebensmittel-Inhaltsstoffen, Springer Verlag, Berlin/Heidelberg/New York 1967, S. 255–256. Units of measurement Brewing
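A small Python sketch of the conversion quoted above (assuming the commonly cited linear relation between °WK and °Lintner; the constants reflect that assumption rather than a normative definition):

def wk_to_lintner(wk):
    """Convert degrees Windisch-Kolbach to degrees Lintner."""
    return (wk + 16.0) / 3.5

def lintner_to_wk(lintner):
    """Convert degrees Lintner to degrees Windisch-Kolbach."""
    return 3.5 * lintner - 16.0

print(wk_to_lintner(334.0))  # 100.0 degrees Lintner
print(lintner_to_wk(100.0))  # 334.0 degrees WK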
Windisch–Kolbach unit
Mathematics
194
46,644,107
https://en.wikipedia.org/wiki/Sofituzumab%20vedotin
Sofituzumab vedotin (INN; development code DMUC5754A) is a monoclonal antibody designed for the treatment of ovarian cancer. This drug was developed by Genentech/Roche. Sofituzumab vedotin is an antibody-drug conjugate that targets MUC16, a protein that is overexpressed in several types of cancer including ovarian and pancreatic cancer. The conjugate consists of a humanized anti-MUC16 antibody linked to the cytotoxic agent MMAE, which is released after internalization by the cancer cell. In addition to its direct cytotoxic effect, sofituzumab vedotin may also mediate antitumor activity through signal transduction inhibition, antibody-dependent cellular cytotoxicity, and complement-dependent cytotoxicity. Clinical trials have shown promising results in the treatment of ovarian and pancreatic cancer. References Monoclonal antibodies for tumors Antibody-drug conjugates Experimental cancer drugs
Sofituzumab vedotin
Biology
224
1,582,812
https://en.wikipedia.org/wiki/Hepeviridae
Hepeviridae is a family of viruses. Human, pig, wild boar, sheep, cow, camel, monkey, some rodents, bats and chickens serve as natural hosts. There are two genera in the family. Diseases associated with this family include hepatitis, with a high mortality rate during pregnancy; avian hepatitis E virus is the cause of hepatitis-splenomegaly (HS) syndrome among chickens. Taxonomy The following genera are assigned to the family: Orthohepevirus Piscihepevirus A third genus, Insecthepevirus, has been proposed. This proposed genus contains one species, Sogatella furcifera hepe-like virus. Another species, Crustacea hepe-like virus 1, has been isolated from a prawn (Macrobrachium rosenbergii). Structure Viruses in the family Hepeviridae are non-enveloped, with icosahedral and spherical geometries, and T=1 symmetry. The diameter is around 32–34 nm. Genomes are linear and non-segmented, around 7.2 kb in length. The genome has three open reading frames. Evolution This has been studied by examining the ORF1 and the capsid proteins. The ORF1 protein appears to be related to members of the Alphatetraviridae, a family in the "Alpha-like" supergroup of viruses, while the capsid protein is related to that of the chicken astrovirus capsid, a member of the "Picorna-like" supergroup. This suggests that a recombination event at some point in the past between at least two distinct viruses gave rise to the ancestor of this family. This recombination event occurred at the junction of the structural and non-structural proteins. Life cycle Entry into the host cell is achieved by attachment of the virus to host receptors, which mediates clathrin-mediated endocytosis. Replication follows the positive-stranded RNA virus replication model. Positive-stranded RNA virus transcription is the method of transcription. Translation takes place by leaky scanning. Human, pig, wild boar, monkey, cow, sheep, camel, some rodents, bat and chicken serve as the natural hosts. Transmission routes are zoonosis and fomite. References External links ICTV Online (10th) Report Hepeviridae Viralzone: Hepeviridae Virus families Riboviria
Hepeviridae
Biology
487
77,771,208
https://en.wikipedia.org/wiki/Hydnellum%20nothofagacearum
Hydnellum nothofagacearum is a species of mushroom in the family Bankeraceae. It was described by James K. Douch and Jerry A. Cooper in 2024. The specific epithet refers to Nothofagaceae, with which these fungi are associated. The type locality is in Nelson Lakes National Park, New Zealand. See also Fungi of Australia References External links Fungi described in 2024 Fungus species nothofagacearum Fungi of New Zealand
Hydnellum nothofagacearum
Biology
94
78,285,950
https://en.wikipedia.org/wiki/F-Yang%E2%80%93Mills%20equations
In differential geometry, the -Yang–Mills equations (or -YM equations) are a generalization of the Yang–Mills equations. Its solutions are called -Yang–Mills connections (or -YM connections). Simple important cases of -Yang–Mills connections include exponential Yang–Mills connections using the exponential function for and -Yang–Mills connections using as exponent of a potence of the norm of the curvature form similar to the -norm. Also often considered are Yang–Mills–Born–Infeld connections (or YMBI connections) with positive or negative sign in a function involving the square root. This makes the Yang–Mills–Born–Infeld equation similar to the minimal surface equation. F-Yang–Mills action functional Let be a strictly increasing function (hence with ) and . Let: Since is a function, one can also consider the following constant: Let be a compact Lie group with Lie algebra and be a principal -bundle with an orientable Riemannian manifold having a metric and a volume form . Let be its adjoint bundle. is the space of connections, which are either under the adjoint representation invariant Lie algebra–valued or vector bundle–valued differential forms. Since the Hodge star operator is defined on the base manifold as it requires the metric and the volume form , the second space is usually used. The -Yang–Mills action functional is given by: For a flat connection (with ), one has . Hence is required to avert divergence for a non-compact manifold , although this condition can also be left out as only the derivative is of further importance. F-Yang–Mills connections and equations A connection is called -Yang–Mills connection, if it is a critical point of the -Yang–Mills action functional, hence if: for every smooth family with . This is the case iff the -Yang–Mills equations are fulfilled: For a -Yang–Mills connection , its curvature is called -Yang–Mills field. A -Yang–Mills connection/field with: is just an ordinary Yang–Mills connection/field. (or for normalization) is called (normed) exponential Yang–Mills connection/field. In this case, one has . The exponential and normed exponential Yang–Mills action functional are denoted with and respectively. is called -Yang–Mills connection/field. In this case, one has . Usual Yang–Mills connections/fields are exactly the -Yang–Mills connections/fields. The -Yang–Mills action functional is denoted with . or is called Yang–Mills–Born–Infeld connection/field (or YMBI connection/field) with negative or positive sign respectively. In these cases, one has and respectively. The Yang–Mills–Born–Infeld action functionals with negative and positive sign are denoted with and respectively. The Yang–Mills–Born–Infeld equations with positive sign are related to the minimal surface equation: Stable F-Yang–Mills connection Analogous to (weakly) stable Yang–Mills connections, one can define (weakly) stable -Yang–Mills connections. A -Yang–Mills connection is called stable if: for every smooth family with . It is called weakly stable if only holds. A -Yang–Mills connection, which is not weakly stable, is called unstable. For a (weakly) stable or unstable -Yang–Mills connection , its curvature is furthermore called a (weakly) stable or unstable -Yang–Mills field. Properties For a Yang–Mills connection with constant curvature, its stability as Yang–Mills connection implies its stability as exponential Yang–Mills connection. Every non-flat exponential Yang–Mills connection over with and: is unstable. 
Every non-flat Yang–Mills–Born–Infeld connection with negative sign over with and: is unstable. All non-flat -Yang–Mills connections over with are unstable. This result includes the following special cases: All non-flat Yang–Mills connections with positive sign over with are unstable. James Simons presented this result without written publication during a symposium on "Minimal Submanifolds and Geodesics" in Tokyo in September 1977. All non-flat -Yang–Mills connections over with are unstable. All non-flat Yang–Mills–Born–Infeld connections with positive sign over with are unstable. For , every non-flat -Yang–Mills connection over the Cayley plane is unstable. Literature See also Bi-Yang–Mills equations, modification of the Yang–Mills equation References External links F-Yang-Mills equation at the nLab Differential geometry Mathematical physics Partial differential equations
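Because the displayed formulas of this article did not survive extraction, the following LaTeX sketch restates the action functional and its Euler–Lagrange equation in the form they are usually written; the normalization of F and the factor 1/2 are assumptions, not recovered from the text:

% Assumed conventions: F strictly increasing with F(0) = 0, A a connection on the
% principal bundle with curvature form F_A, and d_A^* the codifferential induced by d_A.
\mathrm{YM}_F(A) = \int_M F\!\left(\tfrac{1}{2}\lVert F_A\rVert^2\right)\,\mathrm{dvol}_g ,
\qquad
\mathrm{d}_A^{*}\!\left(F'\!\left(\tfrac{1}{2}\lVert F_A\rVert^2\right) F_A\right) = 0 .
% Under these conventions, F(t) = t recovers the ordinary Yang-Mills equations,
% F(t) = e^t gives the exponential case, and F(t) = \sqrt{1 + 2t} - 1 gives a
% Born-Infeld-type case with positive sign.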
F-Yang–Mills equations
Physics,Mathematics
931
11,405,282
https://en.wikipedia.org/wiki/Glutamate-1-semialdehyde
Glutamate-1-semialdehyde is a molecule formed by the reduction of tRNA-bound glutamate, catalyzed by glutamyl-tRNA reductase. It is isomerized by glutamate-1-semialdehyde 2,1-aminomutase to give aminolevulinic acid in the biosynthesis of porphyrins, including heme and chlorophyll. See also Glutamate-5-semialdehyde References Aldehydes Gamma-Amino acids
Glutamate-1-semialdehyde
Chemistry,Biology
120
57,276,841
https://en.wikipedia.org/wiki/List%20of%20rail%20accidents%20%281920%E2%80%931929%29
This is a list of rail accidents from 1920 to 1929. 1920 March 9 – United Kingdom – A Lancashire and Yorkshire Railway freight train separates at Pendlebury, Lancashire. The rear portion runs away, pushing the banking locomotive downhill where it is derailed by catch points. March 14 – United States – Bellows Falls, Vermont The crew of a southbound freight incorrectly reads the train order, confusing "Bartonsville" for "Bellows Falls". Instead of waiting at Bartonsville, they instead proceed south, and collide with a northbound passenger train at Williams River. At least six people are killed. March – United States – Deerfield, Illinois. A locomotive boiler explodes killing one and injuring three. April 12 – United States – New York City: On the 6th Av. elevated line of the IRT company, one train takes a crossover into a track occupied by another knocking one car down to the street. One person is killed and 12 injured. April 24 - India - A collision on the Oudh and Rohilkhand Railway, east of Delhi, kills at least 150 as the wreckage is set alight by the gas installation aboard and burns fiercely. After the fire, pools of molten silver are found in the vicinity "resulting from the melting of the hoards of rupees many of the Indians carried." May 3 – France – A special Riviera Express running from Nice on the PLM railway during a partial strike derails at Les Laumes – Alesia station. Two people were killed: the regular driver and an engineering student who was learning the job to substitute for striking drivers. May 17 – British India – A passenger train starting from Bombay (now Mumbai) collides with a freight train, killing 23 and injuring 17, all in third class. May 20 – Spain – A freight train and a passenger train collide at Neon Daroston, killing 40 people. May 25 - United States - La Joya, New Mexico - Santa Fe passenger train No. 808 derails south of Albuquerque, on a track on ground made soft by high water. The fireman and engineer are killed and about 30 passengers injured. "All of the cars are reported to be on their sides in the water. A special train with doctors and nurses had been ordered from Seccorro [sic], and the wrecker ordered from Belen, which will also take all available doctors from there. The train left El Paso this morning." July 16 – Spain – A freight and passenger train collide between Barcelona and Tortosa, killing twenty people. October 7 – British India – During a labour dispute on the Madras Railway, the Madras-Bangalore Mail (which would now be Chennai-Bengaluru) derails due to a sabotaged track, killing 13 people and injuring 15; 60 coolies are arrested. October 8 – Italy – The Italian State Railways express to Milan is stopped by signals on the bridge from Venice to Mestre, but the signals behind are not set. A train from Trieste crashes into the rear killing 25 and injuring 20. October 9 – France – At Houilles in the Paris suburbs, unbraked cars separate from the rear of a freight train, roll downhill and derail, blocking the adjacent track with wreckage which was struck by a suburban train killing 47. October 27 – Romania – At Lufany, an inexperienced railwayman's error cause two passenger trains to collide, killing about 50 and injuring 200 or more. October – Russia – At Pogranichny, the mail train from Vladivostok to Harbin, China wrecks killing about 100. December 14 – British India – A mail train and goods train collide at Bommidi, killing 30 people and injuring 35. 
1921 January 26 – United Kingdom – Abermule train collision: a head-on collision kills 17 people after improper, confused procedures resulted in the tablet from an incoming train being returned to its driver, who did not read it and assumed it was the following tablet that would give him permission to depart. January – Russia – On a mixed train from Novgorod, a consignment of flammable liquid explodes at Luga, killing 68 people. "Benzine Tank Explodes; 68 Persons Are Killed", Oklahoma City (OK) Times, January 18, 1921, p. 2 February 13 – United States – More than 50 people were injured in Brooklyn, New York when two Long Island Rail Road trains collide after one train operator missed a stop signal. February 27 – United States – Porter, Indiana: Over 37 people were killed when the Canadian on the Michigan Central Railroad and the Interstate Express on the New York Central Railroad crash at a cross track. The Michigan Central train, bound for Toronto, Montreal, and Quebec City, overshot a block signal and was derailed by a derailing device. The New York Central train crashed into the already wrecked Michigan Central train at . April 1 – United States – In Georgetown, Kentucky an unknown man was killed by a train. In June 2017 the John Doe was identified as Frank Haynes of Bronston, Kentucky. June 18 – Netherlands – A passenger train is derailed at Amsterdam Weesperpoort. June 25 – France – On a bridge over the Ancre River at Beaumont-Hamel, on the Chemins de Fer du Nord, a derailment begins with the luggage van at the rear of the train and spreads to the rear three passenger cars, which fall down an embankment; 25 people died and 60 were injured. June 25 – British India – Near Amroha on the Delhi-to-Moradabad line of the Oudh and Rohilkhand Railway, a length of the line is breached by flooding within the space of an hour. The locomotive and two front cars of the next train fall into the water, killing 42 people. July 8 – United Kingdom – A 12-car Great Eastern Railway goods train without continuous brakes is running north on the East London Railway when a coupling breaks. Four goods wagons and the rear brake van separate from the train, and run back southward, downhill. Realizing this, the guard in the brake van applies brakes, but not in time. As the runaway cars approach Wapping station, they cross a track circuit boundary backwards, and the next northbound train receives a false clear signal. This is a New Cross to Shoreditch passenger train of the Metropolitan Railway. The resulting collision kills two railwaymen and injures 16 people. July 25 – British India – A mail train from Rangoon (now Yangon) to Mandalay, both now in Myanmar, about into its journey, collides at night with a goods train between Tawwi and Pein Za Loke; 104 people are killed and 48 injured. August 8 – United Kingdom – At Selby railway station, a sidelong collision of two passenger trains caused by driver error; 17 injured. August 27 – Italy – On the Italian State Railways, a train from Ladispoli to Rome collides with a shunting locomotive; 29 are killed and over 100 injured. September 10 – France – On the PLM railway, the section from Bourg-en-Bresse to Lyon had not yet been reopened after one of two tracks was removed to repair World War I damage elsewhere. Consequently a train from Strasbourg to Lyon uses a side track at Les Échets, but runs too fast over the switch and derails killing 38. September 18 – Norway – Nidareid train disaster in Trondheim. 
Confusion and unfortunate circumstances lead to a head-on collision between two passenger trains killing six. October 5 – France – Two passenger trains collide due to a signalman's error in the Batignolles Tunnel, Paris; at least 28 people were killed in the ensuing fire. November 19 – British India – 64 Mappila prisoners die of asphyxiation while being transported on a prison car with its ventilation blocked by paint. December 5 – United States – Bryn Athyn Train Wreck in Woodmont, Pennsylvania: Two local passenger trains on the Philadelphia and Reading Railway collide and catch fire, killing 27 people after signals were ignored. 1922 March 23 – United States – Azusa, California: A passenger train derails after hitting one of the city's steamrollers. The engineer and fireman are killed while the steamroller driver jumps to save his own life. May 1 – United States – Alton, Illinois, a Chicago and Alton Railroad passenger train strikes a fire engine on its way to a fire, at a grade crossing at 9th and Piasa Streets. The driver and officer on the fire engine seat are injured while two other firefighters jump off. The fire engine, only a year old, was squeezed between the moving passenger train and a parked coal car, and was beyond repair. The broken pieces of the fire engine had to be hauled away in a truck, and a new fire engine had to be purchased to replace it. June 27 – Germany – Berlin: Following a huge demonstration in the Lustgarten against the assassins of Walther Rathenau, the suburban trains are so overwhelmed with passengers that some people ride outside on the running boards, and dozens of them on one train are struck when a door on another train swings open between stations. Casualties were variously reported as 29 killed and 60 injured, and as 15 killed and over 100 injured. July 2 – United States – 1922 Winslow Junction train derailment: On the Philadelphia and Reading Railway's Atlantic City Railroad, at shortly before 11:30pm, Train 33 with Philadelphia and Reading Railway Engine No. 349 speeds through an open switch at approximately and derails, killing seven and injuring 89. July 11 – Spain – Paredes de Nava: A head-on collision between the Asturias Mail and an express from Galicia kills 32, including both engine crews, and seriously injures 19. July 31 - United States - Laurel, Maryland: Two freight trains collide; one engineer slightly injured. August 1 – France – Miélan: Two trains carrying pilgrims from Moulins to Lourdes collide when the first one stalls climbing a hill and then runs backwards, apparently due to a brake system failure. Forty people are killed. August 5 – United States – Missouri Pacific Railroad train 32, a local passenger train northbound from Hoxie, Arkansas, to St. Louis, was told at Riverside to proceed to Wickes, Missouri and take the siding while northbound express train 4 (from Texas to St. Louis) and southbound express 1 (the Sunshine Special from St. Louis to Texas) go past. Halfway there, it stopped for water at Sulphur Springs, Missouri. Train 4, while in motion, received an order to stop at Cliff Cave (after Wickes) to let southbound train 1 past. While reading the order, the engineer missed seeing the block signal at Sulphur Springs and crashed into train 32 killing 34 and injuring about 170, mostly in the local, in the worst rail accident ever in Missouri. August 21 – United Kingdom – A South Eastern and Chatham Railway passenger train leaves , Kent against signals and collides with another train killing three. 
December 4 - United States - Near Shenandoah, Iowa: Of 150 passengers aboard Wabash Railroad train No. 14, 130 "were more or less seriously injured" when several cars derail north of Shenandoah, at 8:50 pm. Three day coaches and smokers were turned over on their sides while the engine and baggage car remained on the rails. December 13 – United States – Humble, Texas: Traveling at moderate speed, Houston East & West Texas Railway passenger train No. 28, bound for Shreveport, sideswipes a light engine at Humble Station breaking off the boiler check valve on the engine; twenty-two are killed and 11 injured when high-pressure steam enters the first three passenger coaches. The cause is attributed to watchman error. 1923 January 14 – British Ceylon (now Sri Lanka) – Anuradhapura: The Jaffna Mail wrecks at a washout during severe storms; the locomotive, tender, brake van, travelling post office, and one passenger car fall into the floodwaters, killing 39 people. February 13 – United Kingdom – A London and North Eastern Railway express passenger train overruns signals and runs into the rear of a freight train at , Nottinghamshire killing three people. February 18 – France – A train from Paris to Strasbourg on the Chemins de fer de l'Est collides with a freight train, killing 27 people. March 30 – United States – Columbus, Ohio: A westbound Big Four Flyer en route from Boston to Cincinnati hits an automobile at a grade crossing killing the 3 occupants of the car, the engineer, the fireman, and an editor for the Warren Democrat; another 14 were injured. April 15 – United Kingdom – A Great Western Railway freight train collides head-on with a passenger train at Curry Rivel, Somerset due to a signalman's error injuring nine. July 2 – Romania – At Vinty-Leanca, between Ploești and Bezeu (now Ploiești and Buzău), a shunter's error diverts a mail train from Bucharest to Jassy into a siding where it crashes into a stationary goods train killing 63 people and injuring 100. July 3 – United States – A passenger train in New Mexico derails killing both of the engineers and both of the firemen, and injuring 45. July 5 – United Kingdom – A freight train and an express passenger train collide at , Yorkshire, killing four people. July 6 – New Zealand – Ongarue railway disaster: A southbound express runs into a mudslide killing 17. A railway worker in charge of a gang also dies at the scene of cerebral haemorrhage – verified from news reports of the day. July 31 – Germany – A night express from Hamburg is run in two sections due to a heavy passenger load. At Kreiensen, between Hildesheim and Göttingen, the advance section stops due to engine trouble, and the driver of the following section misses seeing a signal because of something in his eye and crashes into the rear of the leading section. The resulting crash and fire kills 47 people. August 13 – United States – Colorado and Southern Railway train 609 collides head-on with Santa Fe train 6 in Fowler, Colorado, killing 5 people and injuring at least 5. September 1 – Japan – Nebukawa Station accident: A landslide caused by the 1923 Great Kantō earthquake hits Nebukawa Station and an approaching train. 112 passengers were killed and thirteen injured. September 8 – USSR – An express train derails at Omsk killing 82 and injuring 150. September 27 – United States – Glenrock train wreck: A Chicago, Burlington and Quincy Railroad passenger train falls through a bridge washaway at Cole Creek, killing 30 of the train's 66 passengers. 
This was the worst railroad accident in Wyoming's history. October 27 – Canada – Canadian Pacific Railway train 4, seven colonist passenger cars derail near Savanne, Ontario as a result of a broken rail. December 23 – United Kingdom – A London and North Eastern Railway express passenger train overruns signals at Belford, Northumberland and collides with a locomotive. 1924 March 14 – India – Near Bareilly, on the Oudh and Rohilkhand Railway, a tropical cyclone blew five cars of a train off a bridge leaving two submerged in a river. One early report said 18 bodies were found before the submerged cars were searched; another report estimated forty to fifty total dead. April 23 – Switzerland – Two passenger trains collided head-on at due to a pointsman's error and the driver of one of the trains passing a danger signal. A lack of interlocking was a major contributory factor. Fifteen people were killed. April 26 – United Kingdom – A London, Midland and Scottish Railway electric multiple unit overruns signals and crashes into the rear of an excursion train at station, London. May 2 – USSR – On its inaugural run, the Lenin Express from Odessa to Moscow derails, possibly due to sabotage, and several cars fell down an embankment. Many people were killed and injured but nothing was reported in the Soviet papers. July 28 – United Kingdom – A passenger train overruns signals and collides with another at station, Edinburgh, Lothian killing five. August 19 – French West Africa – Due to a flood, the Paporah Bridge on the Dakar–Niger Railway collapses with a train on it, killing 29 people. August 19 – British India – At Montgomery (now Sahiwal, Pakistan) on the North Western State Railway, two trains collide, killing 107 people and injuring 104; the assistant stationmaster of an adjacent station was arrested for criminal negligence. October – USSR – On the line from Moscow to Ivanovo and Vasenensk, a mixed train carrying passengers and gasoline is destroyed by fire. It was said that of 200 people on board only 27 survived, but the Soviet authorities suppressed the story. November 3 – United Kingdom – Lytham rail crash: The lead tyre of a locomotive suddenly fractures causing the train to derail and strike a bridge and a signal box killing fourteen. 1925 January 13 – Germany – An express train from Berlin to Cologne suddenly encounters fog and the driver passes signals without realizing it. The train crashes into the rear of a Ruhr local standing at Herne, smashing through the fourth-class cars at the rear; 32 people were killed and 57 injured, all on the local. January 13 – Germany – On the same day (and in fact at almost the same time) as the Herne disaster, a similar collision occurs at Hattingen in the Ruhr killing three and injuring twelve. January 30 – Ireland – Owencarrow Viaduct disaster: - A train is blown off a viaduct in Donegal in winds approaching killing four. February 27 - Canada - A Canadian Pacific Railway passenger train collides with a train operating a snowplow near Lachute, Quebec. Three crew members on the snowplow train were killed. April 9 – Spain – On the line from Barcelona to Tarrasa, two trains collide on a sharp curve near a tunnel at Las Planas crushing several cars against the wall which killed 25 and seriously injured 46. 
May 1 – Poland – A German express train from Königsberg (now Kaliningrad, Russia) to Berlin crossing the Polish Corridor derails on a sharp curve between Swarożyn (Swaroschin) and Starogard Gdański (Preußisch Stargard), sending the locomotive and six cars down an embankment; 26 people are killed and 12 seriously injured, mostly in first class, and because the train doors were locked while in Poland, passengers remain locked into the undamaged cars for another two hours. Germany accused Poland of poor maintenance while Poland accused Germany of sabotaging its own train to discredit Poland. June 9 – Australia – near Traveston, South East Queensland. The Rockhampton Mail train derails on a high timber trestle bridge, killing ten people and injuring 48 when a passenger car and the luggage van plunged off the bridge, and another passenger car was pulled on its side. It resulted in baggage cars being specially built for passenger trains and ended, for a time, the use of goods vehicles on passenger trains. June 16 – United States – Rockport train wreck: A special seven-car Delaware, Lackawanna and Western Railroad passenger train from Chicago to Hoboken, New Jersey encounters road debris that had been washed onto a grade crossing by a torrential rainstorm. The train derails, and two cars land adjacent to the locomotive, with escaping steam scalding numerous passengers; 51 were killed. The passengers were German-Americans traveling to Bremen, Germany, via the SS Republic. June 18 – United Kingdom – A Metropolitan Railway electric locomotive collides with carriages at Baker Street, London injuring six people. August 20 – United States – Two passenger trains collide head-on on the Denver & Rio Grande Western Railroad near Granite, Colorado killing two and injuring 117. The cause was determined to be human error and a blistering report followed: "It would be difficult to imagine a more inherently dangerous system, or lack of system, for the operation of trains...". August 22 – Isle of Man – A train hauled by No.3 Pender runs into station with insufficient braking power as the brakeman was left behind at , killing the driver. Vacuum brakes were introduced on the Isle of Man Railway as a result of the accident. October 2 – United States – The Chesapeake and Ohio Railway's long Church Hill Tunnel in Richmond, Virginia, collapses on a work train, killing four and trapping steam locomotive 231 and 10 flat cars. Rescue efforts resulted in further collapse and the tunnel was sealed with the train and unrecovered victims entombed within. October 26 – United States – On the St. Louis–San Francisco Railway, the Sunnyland passenger train from St. Louis to Kansas City derails due to a broken rail at and tumbles down an embankment as it approaches Victoria, Mississippi killing 21. 1926 March 14 – Costa Rica – El Virilla train accident: A train fell off a bridge over the Río Virilla between Heredia and Tibás, resulting in 248 deaths and 93 wounded. May 24 – Germany – A train crashes into the rear of an excursion train standing at the platform at Munich East station killing 33 people and injuring about 100. The driver of the second train was convicted and sentenced to five months in prison. May 26 – United Kingdom – During the General Strike of 1926, a London and North Eastern Railway passenger train is deliberately derailed by miners south of , Northumberland. 
May 26 – Australia – Caulfield, Victoria: Caulfield railway accident, a night-time collision of a six-car electric multiple unit with another six-car electric multiple unit at Caulfield Railway Station resulted in three deaths and numerous injuries. June 7 – Spain – Barcelona: The famous architect Antoni Gaudí was run over by a tram and died a few days later. June 9 – South Africa – At Salt River, near Cape Town, a train derails due to a coupling lodged in the track. The rear cars broke away and two of them hit an overbridge killing 17 people and injuring about 40 or 50. July 3 – France – At Achères-la-Forêt on the Chemins de fer de l'Est, a train from Le Havre to Paris takes a turnout due to urgent repairs to the main line. The driver misses the speed restriction and derails killing 20, including the driver himself, and injuring 98. August 7 – United Kingdom – On an LNER 6-car electric multiple unit train completing a loop service from Newcastle via Monkseaton, after leaving Heaton station the driver ties down the control and dead man's handle with two handkerchiefs, leaves his driving position, leans out of the cab window to look backwards, strikes a bridge support, and is dragged out of the train and killed. Realizing the train was overshooting Manors station, the guard applies brakes, but too late to prevent a collision with a freight train. The front cars were lightly loaded, but 16 passengers were injured, including a young "courting couple" in the frontmost compartment—whom the driver was presumably trying to watch, but the official report declined to speculate. August 13 – United States – Calverton, New York – Long Island Rail Road's Shelter Island Express train jumps the tracks and crashes into the Golden's Pickle Works factory, resulting in six deaths. August 19 – Germany – A Berlin-to-Cologne express derails on an embankment due to sabotage of the track. The locomotive and seven cars fall down the embankment and two cars are telescoped; 21 people died. Two men were convicted and sentenced to death. August 30 – United Kingdom – A passenger train collides with a charabanc on a level crossing at Naworth, Cumberland due to errors by the crossing keeper and a lack of interlocking between signals and the gates, killing nine. September 1 – Spain – The Barcelona-to-Valencia mail train runs into a landslide and derails between L'Ametlla de Mar and L'Ampolla killing 25 and injuring 50. September 5 – United States – Granite, Colorado – Denver & Rio Grande Western Railroad's Scenic Limited running southeast, exceeds the rated speed for the track and crashes into the Arkansas River, resulting in 30 deaths and 54 injuries. The locomotive, tender, and six cars plunged into the river. Crash reports indicate the engineer was attempting to make up time since the train was running 25 minutes late. September 8 – United Kingdom – The driver of a passenger train loses control on greasy rails and the train overruns buffers at . September 9 - Netherlands - 1926 Voorschoten train crash: A passenger train derails near , South Holland. Two crew and two passengers are killed. September 13 – Australia – Murulla railway accident: Goods wagons on a siding uncouple, roll down a slope and crash into an oncoming mail train, resulting in 27 deaths and 37 injuries. It would remain the worst train crash in New South Wales history for just under 51 years until the Granville rail disaster of 1977, which left 84 people dead in Australia's worst rail disaster. 
September 23 – Japan – A Tokyo-Shimonoseki limited express derails at Hataga river bridge in eastern Hiroshima, in an incident caused by heavy rain and flooding, killing 34 and injuring 39. October 4 – Switzerland – Ricken Tunnel railway accident: A freight train gets stuck on an incline on the Ricken Tunnel between Kaltbrunn and Wattwil. Due to poor tunnel ventilation, the locomotive's exhaust gases kill 9. November 5 – United Kingdom – A milk train divides near , Hampshire. Because the guard fails to protect the train, a passenger train runs into it. One person was killed. November 19 – United Kingdom – A defective private-owner coal wagon derails at , Yorkshire. Further wagons derail and partially collapse a signal post. A passing express passenger train collides with the signal post, ripping out the side of the carriages. Eleven people were killed. November 24 – United Kingdom – A London, Midland and Scottish Railway passenger train overruns signals at Upney, Essex and rear-ends another passenger train injuring 604 people. December 8 – China – A passenger and freight train collide at Machungho on the South Manchuria Railway, killing 25 and seriously injuring 54. December 11 – China – A passenger and freight train collide on the South Manchuria Railway, killing 25 and seriously injuring 54, this time at Tieling. December 23 – United States – Rockmart, Georgia. The Northbound Ponce de Leon crashes head-on into the Southbound Royal Palm, resulting in 19 deaths and 115 injuries. It was remembered later on as the world-famous folk song, "Wreck of the Royal Palm" by Vernon Dalhart. December 28 – United Kingdom – Elliot Junction rail accident: On the joint line of the North British and Caledonian Railways, a major snowstorm led to many delays, the derailment of a freight train, and a collapse of telegraph lines that left the block signalling inoperative. Rather than staff being called out to assist trains with hand signals and detonators, drivers were told to proceed with caution. One driver, moving at about in very poor visibility, crashes into a standing train, killing his fireman and 21 passengers. 1927 January 9 – United States – Savannah, New York – The eighth section of the eastbound Twentieth Century Limited rear-ended the seventh section of the Century in heavy fog, killing one and injuring fifty-four. February 13 – United Kingdom – Hull Paragon rail accident: One signalman operated his lever too early, defeating the interlocking mechanism, just as another signalman operated the wrong lever. The resulting head-on collision killed twelve. February 27 – United Kingdom – An express passenger train collides with a light engine near Penistone, Derbyshire due to an error by the driver of the light engine. May 15 – Canada – Three trainmen were killed and many passengers injured when the Canadian Pacific Railway eastbound express train runs into a rockslide and derails near Nipigon, Ontario. July 6 – Argentina – Tragedy of Alpatacal: A trainload of Chilean army cadets were traveling from their school in Santiago, Chile to attend the dedication of a monument in Buenos Aires. At Alpatacal in Mendoza Province, the special was signaled with detonators to stop and wait for the Internacional coming from Buenos Aires. The train fails to stop and hits the other train, killing thirty and injuring hundreds. 
July 27 – South Africa – Heidelberg: On a single-track line, the driver of a southbound goods train from Roodekop (near Johannesburg) apparently forgets that the staff he is carrying only allows him to proceed as far as a newly added side track, where he is to wait for a northbound train from Durban. Altogether 29 people were killed, some of them by exposure to the winter weather while waiting for rescue, and 54 injured. August 20 – United Kingdom – A passenger train derails due to poor track at , Kent. The locomotive is repaired and returned to service on 23 August, but is involved in another accident the next day. August 24 – United Kingdom – Sevenoaks railway accident: Water in the tanks of a locomotive on the Southern Railway sloshes violently and derails the train, killing 13. August 25 – France – On the Chemin de fer du Montenvers, the rack railway from Chamonix-Mont-Blanc to the Hotel de Montenvers by the Mer de Glace glacier, a train runs away downhill due to operating errors by the crew. The first car derails, breaks away, and falls into a ravine, killing 16 to 20 people. October 2 - United States - Nine are injured when three coaches of the Peoria Limited of the Illinois Traction System derail on a curve at Edwardsville, Illinois, and crash into the porch of the Vanze Hotel. Two passengers are taken to St. Elizabeth’s Hospital at Venice, Illinois, while others are given treatment at the hotel. Motorman W. M. Nave, who was bruised, stated that the brakes failed to function as the trainset approached the curve near the hotel. October 26 - Yugoslavia - A freight train derailed while crossing a bridge between the Bradina and Brđani stations (near Konjic in what is now Bosnia and Herzegovina). Eight cars fell into the river. Two train operators were killed and two more injured. The Vienna correspondent of the Exchange Telegraph stated “that 260 persons are reported to have been killed when a passenger train plunged over a precipice between Sarajevo and Mostar.” December 3 – Canada – An eastbound Canadian Pacific Railway passenger train collides with the derailed cars of a westbound Canadian National Railways freight train on the CNR's double-track line near Port Credit (now part of Mississauga, Ontario). 1928 January 22 – British India – Between Hayaghat (now in Darbhanga District, Bihar) and Kishanpur (now in Samastipur District, Bihar), a locomotive breaks away from the train behind it, which then crashes hard into it; two cars derail and fall down an embankment. At least thirty people died. January 28 – British India – A mail train from Mandalay to Rangoon (now Yangon), both now in Myanmar, is derailed by sabotage to the track at a bridge between Yindsikkon (now Yin Taik Kone) and Kyauktaga. The locomotive and four cars fall into the river below, killing 54 people and injuring at least 30. Dacoits were suspected of the crime and an Indian man was sentenced to death, but his conviction was quashed. March 12 – British Ceylon (now Sri Lanka) – Kalutara: An express from Galle to Colombo crashes head-on at speed into an ordinary passenger train, killing 25 or 28 people and injuring 41. Several railwaymen are found responsible. June 4 – China,Feng-tien,Mukden – Huanggutun incident. June 10 – Germany – The locomotive and four cars of the Munich to Frankfurt fall down an embankment after derailing at Veitsbronn killing 22. June 27 – United Kingdom – Darlington rail crash, head-on collision kills 25. 
July 2 – United Kingdom – A London, Midland and Scottish Railway freight train derails at , Wigtownshire killing the crew of both locomotives. July 9 – United Kingdom – B2X class locomotive No. B210 sideswipes an electric multiple unit at when the driver misreads signals. Two people are killed and nine injured, six seriously. August 6 - United States - Two passenger trains derail in Mounds, Illinois. The derailment is caused by a large pipe lying on the tracks. Reports indicate 8–9 people dead and around 200 injured. August 17 – United Kingdom – A London and North Eastern Railway express passenger train hits a lorry on a level crossing near , Cambridgeshire and derails. August 24 – United States – 1928 Times Square derailment: The last two cars of a downtown express train on the IRT Broadway–Seventh Avenue Line of the New York City subway derail at a switch, killing 18 people and injuring about 100. August 27 – United Kingdom – A London, Midland and Scottish Railway passenger train overruns the buffers at , London, injuring 30 people. September 10 – Czechoslovakia – Between Zaječí and Břeclav, both now in Czechia, an express passenger train from Paris to Bucharest collided with a freight train, killing 21 people and injuring at least 29. October 13 – United Kingdom – Charfield railway disaster: A Leeds-to-Bristol night mail train failed to stop at signals and collided with a freight train being moved into a siding. The mail train derailed and then collided with another freight train on the main line. Gas lighting on the passenger coaches of the mail train caused an intense fire, destroying four coaches. An estimated 16 died and 41 were injured, according to the official report. October 25 – United Kingdom – A London, Midland and Scottish Railway express passenger train rear-ends a freight train near , Dumfriesshire due to a signalman's error. Four people are killed and five injured. October 26 – Romania – At Reșca, a fast train to Bucharest is incorrectly diverted onto a track occupied by the Simplon Orient Express. Two cars of the diverted train telescope and almost everyone in them is killed; 34 were killed altogether. In the aftermath there were complaints that the station staff were so unhelpful that the passengers had to telegraph for rescue. Several railwaymen are punished by firing or suspension. 1929 January 17 – United States – near Aberdeen, Maryland: A Pennsylvania Railroad train bound for Baltimore rear-ends a freight, then a third train hits the derailed freight killing five and injuring 38. An unlit semaphore stop signal was invisible in heavy fog. Bandleader Fletcher Henderson, traveling with several of his musicians, was among the injured but still conducted an engagement in Baltimore that night. January – United Kingdom – An express passenger train overruns signals at , Gloucestershire and collides with a freight train killing three. February 2 – United Kingdom – Due to a signalman's error, a passenger train is diverted into the bay platform at , Renfrewshire and crashes into a horsebox. Many people were injured. February 12 – United Kingdom – A London Midland and Scottish Railway express passenger train collides head-on at , Derbyshire due to a signalman's error killing two. June 9 – United Kingdom – A London and North Eastern Railway steam railcar 220 Waterwitch overruns signals at Marshgate Junction, Yorkshire and stops on the main line where it is struck by an express passenger train. 
August 7 - Canada - An automobile driver caused the derailment of a Canadian Pacific Railway train near Tweed, Ontario, which killed one of the crew. August 25 – Germany – Buir: The D29, running from Paris to Warsaw, derails some west of Buir station, near the town of Düren. Due to construction work, the train was supposed to be diverted to a siding, but the train driver received wrong instructions in Düren and noticed the signal too late, entering the siding at instead of . 13 passengers were killed and 40 injured. This led to the introduction of the La, the German railways' book of temporary speed restrictions on the network, and the distant signals indicating to expect the home signal showing to slow down if necessary. September 23 – USSR – A train from Moscow (now in Russia) to Siberia derails at Zuevka, between Kursk (now in Russia) and Kharkiv (now in Ukraine); at least 30 were killed. October 4 – United Kingdom – The driver of a freight train passed a danger signal at , London. An express passenger train ran into it. November 20 – United Kingdom – Bath Green Park runaway: A freight train runs away and crashes in the goods yard, killing the driver and two railway employees in the yard, and severely injuring the fireman. The runaway was caused by the crew being overcome by fumes while travelling through Combe Down Tunnel. See also List of London Underground accidents References Sources External links Railroad train wrecks 1907–2007 Rail accidents 1900–1949 20th-century railway accidents
List of rail accidents (1920–1929)
Technology
7,670
7,550,102
https://en.wikipedia.org/wiki/Dance%20organ
A dance organ is a mechanical organ designed to be used in a dance hall or ballroom. Originated and popularized in Paris, it is intended for use indoors as dance organs tend to be quieter than the similar fairground organ. History Dance organs were principally used in mainland Europe. In their earliest days before the First World War they were used in France, Spain, Belgium and the Netherlands. After the First World War their use waned apart from in Belgium and the Netherlands where they became a mainstream form of music at public venues until the Second World War. The dance organ came into its own during the early 1900s, with many large instruments built by Gavioli and Marenghi. In the early 1910s the firm of Mortier began expanding the sound-schemes of these instruments with a variety of novel pipework and percussion adapted to the new emerging styles of early 20th century popular music. Other manufacturers such as Hooghuys and Fasano followed suit. Many instruments with older style sound schemes from Gavioli and Marenghi were modernized by Mortier and others either partially or entirely. In Antwerp, Arthur Bursens built several hundred small roll and book-operated café orchestrions under the trade names "Ideal" and "Arburo" (a combination of Arthur Bursens and (Gustav) Roels). Roels was an early business partner, later succeeded by Frans de Groof. Bursens primarily catered to the smaller cafés in the Antwerp area, which often lacked the space or income to justify a larger Mortier or Decap dance organ. Patrons would drop a coin into a wallbox, allowing one tune from a roll that typically contained three or four tunes to play. At the end of the tune, the roll would rewind, ready to play from the beginning again. To meet the demand for the latest popular hits, multi-tune rolls were frequently produced. By the early 1920s Mortier was the predominant brand, closely followed by Gaudin of Paris - successors to Marenghi. Throughout the 1920s the sound-schemes of the instruments constantly evolved to keep up with the trends of jazz-age dance music. Facade styles also followed the fashions of the era, progressing naturally from Art Nouveau towards the Art Deco of the 1920s and 1930s. In the 1930s the dominance of Mortier was matched by the instruments from the firm of Gebroeders Decap Antwerpen (Dutch for Decap Brothers Antwerp). By the end of the 1930s both Mortier and Decap had reached their zenith in both art-deco facade design and musical abilities. Dance organs came in every size. More compact versions were used in cafes and smaller public venues where they bridged the gap between orchestrions and the giant dance organs. Just like many cafe coin pianos and orchestrions, some of the smaller instruments were set up so that they could be coin-operated remotely. After the Second World War Decap Herentals and Decap Antwerp made further developments to include use of the latest technology and instrumentation ideas. Hammond organ tone generators were incorporated, following the trend of popular music towards electronic instruments and creating a partial replacement for tone generation via conventional pipework. In the 21st century dance organs are still being built by a small number of manufacturers. 
Modern technology in all its varied forms is frequently adapted, with the result that many new instruments are Wi-Fi and MIDI operable, have tones electronically generated to modern standards, percussion with dynamic playing capability, karaoke systems, volume control and other improvements. Instrumentation It is important to note that the dance organ developed to closely follow the new emerging styles of popular music. The earliest organs musically aim to replicate and replace a small dance orchestra playing musical styles of the late 1800s—early 1900s and the sounds of the bal-musette. The musical rhythms are mainly the older formal dances such as waltz, two-step, polka etc. During these early years the instruments were, in addition to loud performance, capable of soft solo playing with distinct solo pipework voices coupled with swell shutters in order to handle the characteristic gentle ballroom dances of the early 1900s such as the valse tres-lente, the valse boston (or Cross-step waltz) and the Hesitation waltz styles. By the 1910s the harmonic complexity of popular music had started to move away from the 19th century model. With the development of the foxtrot and one-step during the mid-1910s dance organs adapted with provision for greater musical chromaticism to provide higher degrees of musical flexibility. In the late 1910s with the emergence of jazz there grew a need for more complex percussion and this was mirrored with organs acquiring xylophones and extended jazz percussion. The jazz music of the '20s was based predominantly around brass instruments and the saxophone and its variants in particular. At this point the dance organ acquired many novel pipework-generated sounds of its own. The structure of bands moved to the big band format and dance organ capabilities and musical arrangements followed accordingly. In the 1930s the natural progress continued with further extended percussion to cope with the trends for Latin American rumba and other new rhythms in popular music. After the Second World War the developments already made with regard to percussion capabilities were adequate for the various post Second World War Latin dance rhythms such as the mambo and cha-cha-cha. With rhythm being an important part of any dance, dance organs generally have many more percussion instruments than other types of mechanical organs. For visual effect in addition to the facade itself there are often complex lighting effects. Some instruments have further visual interest provided by displaying automatically operated accordions, visible percussion and occasionally dummy saxophones rigged so that they appear to be playing, but actually the sax sound is made by a rank of reed pipes within the pipework case. The pinnacle of the organ builder's craft was exemplified by organs such as the robot band, of which seven were built by Decap of Antwerp in the early 1950s. Built in Decap's trademark art deco style, each organ featured three highly articulated robots, one of which played percussion, one played a saxophone and the last played a conceptualised and flattened brass instrument. Unlike many dance organs, robot organs usually used a hidden Hammond organ played mechanically, rather than musical pipes, for most of the melodic line. A visible accordion is also played mechanically, sometimes by one of the robots. The robot drummer turned to align his drumsticks with snare drums, cymbal or tempo block as required, his foot playing a hi-hat. 
The other two robots, normally seated, would stand when they were required to perform. Their shoulders would rise and fall and their cheeks would puff out as they played to emulate blowing into their instrument. In addition, the wind players would tap a foot in time to the beat of the music. As a piece de resistance, the two wind players also take a bow at the end of a music roll. The principal chronology of manufacturers of dance organs is essentially: Gavioli (Paris), Marenghi (Paris), Mortier (Antwerp), Hooghuys (Grammont), Fasano (Antwerp), Decap (Antwerp) and Bursens (Hoboken, Antwerp) See also Organ (music) Barrel Organ Museum Haarlem, Netherlands Barrel organ Fairground organ Street organ Organ grinder Mortier References Organs (music) Mechanical musical instruments French musical instruments Music in Paris
Dance organ
Physics,Technology
1,498
26,084,762
https://en.wikipedia.org/wiki/Hemovanadin
Hemovanadin is a pale green vanabin protein found in the blood cells, called vanadocytes, of ascidians (sea squirts) and other organisms (particularly sea organisms). It is one of the few known vanadium-containing proteins. The German chemist Martin Henze first detected vanadium in ascidians (sea squirts) in 1911. Unlike hemocyanin and hemoglobin, hemovanadin is not an oxygen carrier. References Metalloproteins Blood proteins Vanadium compounds Ascidiacea
Hemovanadin
Chemistry
121
1,065,128
https://en.wikipedia.org/wiki/David%20King%20%28chemist%29
Sir David Anthony King (born 12 August 1939) is a South African-born British chemist, academic, and head of the Climate Crisis Advisory Group (CCAG). King first taught at Imperial College, London, the University of East Anglia, and was then Brunner Professor of Physical Chemistry (1974–1988) at the University of Liverpool. He held the 1920 Chair of Physical Chemistry at the University of Cambridge from 1988 to 2006, and was Master of Downing College, Cambridge, from 1995 to 2000: he is now emeritus professor. While at Cambridge, he was successively a fellow of St John's College, Downing College, and Queens' College. Moving to the University of Oxford, he was Director of the Smith School of Enterprise and the Environment from 2008 to 2012, and a Fellow of University College, Oxford, from 2009 to 2012. He was additionally President of Collegio Carlo Alberto in Turin, Italy (2008–2011), and Chancellor of the University of Liverpool (2010–2013). Outside of academia, King was Chief Scientific Adviser to the UK Government and Head of the Government Office for Science from 2000 to 2007. He was then senior scientific adviser to UBS, a Swiss investment bank and financial services company, from 2008 to 2013. From 2013 to 2017, he returned to working with the UK Government as Special Representative for Climate Change to the Foreign Secretary. He was also Chairman of the government's Future Cities Catapult from 2013 to 2016. Early life and education King was born on 12 August 1939 in South Africa, son of Arnold Tom Wallis King, of Johannesburg, director of a paint company, and Patricia Mary Bede, née Vardy. His elder brother, Michael Wallis King (born 1937), was director of the FirstRand bank and vice-chair of the multinational mining company Anglo American plc. King was educated at St John's College, an all-boys private school in Johannesburg. He studied at University of the Witwatersrand, graduating with a Bachelor of Science (BSc) degree and then a Doctor of Philosophy (PhD) degree in 1963. Academic career After his PhD, King moved to the United Kingdom where he was a Shell Scholar at Imperial College, London, from 1963 to 1966. He was then a lecturer in the School of Chemical Sciences of the University of East Anglia from 1966 to 1974. He was appointed Brunner Professor of Physical Chemistry at the University of Liverpool in 1974. He was a member of the National Executive of the Association of University Teachers from 1970 until 1978, and served as its president for the 1976/77 academic year. In 1988, King was appointed 1920 Professor of Physical Chemistry at the University of Cambridge. He subsequently served as Head of the University's Department of Chemistry from 1993 to 2000, and was its director of research from 2005 to 2011. When he first moved to Cambridge in 1988, he was elected a Fellow of St John's College, Cambridge. He moved from St John's when he was elected Master of Downing College, Cambridge, in 1995. He stepped down as Master in 2000, and was then a Fellow of Queens' College, Cambridge, from 2001 to 2008. From 2008 to 2012, King was Director of the Smith School of Enterprise and the Environment at the University of Oxford. He was also a Fellow of University College, Oxford, from 2009 to 2012. He was President of Collegio Carlo Alberto in Turin, Italy, from 2008 to 2011, and was Chancellor of the University of Liverpool from 2010 to 2013. Research King has published over 500 papers on his research in chemical physics and on science and policy. 
During his time at Cambridge, King had, together with Gabor Somorjai and Gerhard Ertl, shaped the discipline of surface science and helped to explain the underlying principles of heterogeneous catalysis. However, the 2007 Nobel Prize in Chemistry was awarded to Ertl alone. Career outside academia King was the Chief Scientific Adviser to the UK Government and Head of the Government Office for Science from October 2000 to 31 December 2007, under prime ministers Tony Blair and Gordon Brown. In that time, he raised the profile of the need for governments to act on climate change and was instrumental in creating the £1 billion Energy Technologies Institute. In 2008 he co-authored The Hot Topic on this subject. During his tenure as Chief Scientific Adviser, he raised public awareness for climate change and initiated several foresight studies. As director of the government's Foresight Programme, he created an in-depth horizon scanning process which advised government on a wide range of long-term issues, from flooding to obesity. He also chaired the government's Global Science and Innovation Forum from its inception. King advised the government on issues including: the foot-and-mouth disease epidemic 2001; post 9/11 risks to the UK; GM foods; energy provision; and innovation and wealth creation. He was heavily involved in the government's Science and Innovation Strategy 2004–2014. He suggested that scientists should honour a Hippocratic Oath for Scientists. In April 2008, King joined UBS, a Swiss investment bank, as senior science advisor. He left UBS to return to the UK government when he was appointed the Foreign Secretary's Special Representative for Climate Change in September 2013. From 2013 to 2016, King was the first chairman of the Future Cities Catapult, a government-funded body conducting research into smart cities. In May 2020, in response to the COVID-19 pandemic, King formed and led Independent SAGE, a committee of unpaid experts which acts as a "shadow" of the UK government's SAGE group to address concerns of lack of transparency and political influence on that body. Views Climate change In his role as scientific advisor to the UK government King was outspoken on the subject of climate change, saying "I see climate change as the greatest challenges facing Britain and the World in the 21st century" and "climate change is the most severe problem we are facing today – more serious even than the threat of terrorism". He strongly supports the work of the IPCC, saying in 2004 that the 2001 synthesis report "is the best current statement on the state of play of the science of climate change, and that really does represent 1,000 scientists". King criticised the Bush administration for what he saw as its failures in climate change policy, saying it is "failing to take up the challenge of global warming". In 2004, King gave evidence to a House of Commons select committee confirming his view that "on a global and geological scale that climate change is the most serious problem we are faced with this century", and illustrated it with a statement that "Fifty-five million years ago was a time when there was no ice on the earth; the Antarctic was the most habitable place for mammals". 
The Independent on Sunday reported that King had at a later event compared current and projected carbon dioxide levels with the record over the past 60 million years, and in an indirect quote suggested King implied that Antarctica was likely to be the world's "only" habitable continent by the end of this century if global warming remains unchecked. At the end of the 2007 programme "The Great Global Warming Swindle", broadcast on Channel 4, Fred Singer ridiculed the reported view of the "chief scientist"; King's complaint to Ofcom that the programme was unfair and had not given a chance to clarify was upheld, despite Channel 4's arguments that King was not named and had not challenged earlier reporting. King became head of the Climate Crisis Advisory Group in 2021, basing public meetings on a similar format to Independent SAGE, and publishing reports advising emission cuts and carbon dioxide removal. He promotes the CCAG's 4R planet pathway: Reducing emissions; Removing the excess greenhouse gases (GHGs) already in the atmosphere; Repairing ecosystems; strengthening local and global Resilience against inevitable climate impacts. Food production King told The Independent newspaper in February 2007 "he agreed that organic food was no safer than chemically-treated food" and openly supported a study by the Manchester Business School that implicated organic farming practices in unfavourable CO2 comparisons with conventional chemical farming. In an article published in The Guardian in February 2009, King is quoted as saying that "future historians might look back on our particular recent past and see the Iraq war as the first of the conflicts of this kind – the first of the resource wars" and that this was "certainly the view" (that the invasion was motivated by a desire to secure energy supplies) he held at the time of the invasion, along with "quite a few people in government". Energy King is a strong supporter of nuclear electricity generation, arguing that it is a safe, technically feasible solution that can help to reduce emissions from the utilities sector now, while the development of alternative low-carbon solutions is incentivised. In the transport sector, King has warned governments that conventional oil resources are more scarce than they believe and that peak oil might approach sooner than expected. Moreover, he has criticised first generation biofuels due to the effect on food prices and subsequent effect on the developing world. He strongly supports second generation biofuels, however, which are manufactured from inedible biomass such as corn stover, wood chips or straw. These biofuels are not made from food sources (see food vs fuel). King is a member of the Global Apollo Programme and headed its public launch in 2015. The programme calls for multinational research into reducing the cost of low-carbon electricity generation. Humanism King is a Distinguished Supporter of Humanists UK. Covid response In July 2020 King advocated for school closures in the UK until covid cases were reduced to 1 in a million. Honours and awards King was knighted in the 2003 New Year honours. In 2009, he was made a Chevalier of the Légion d'Honneur by the French government. In 1991 he received the BVC Medal and Prize, awarded by the British Vacuum Council. He was elected a Fellow of the Royal Society (FRS) in 1991, a Foreign Fellow of the American Academy of Arts and Sciences in 2002, and an Honorary Fellow of the Royal Academy of Engineering (HonFREng) in 2006. 
In media King appears in the film The Age of Stupid, released in February 2009, talking about Hurricane Katrina. He was portrayed by David Calder in the 2021 BBC television film The Trick. Personal life By his first marriage, which ended in divorce, King has two sons. In 1983, he married, secondly, charity administrator and former head of a commercial law team, Jane Margaret, daughter of general practitioner Hans Eugen Lichtenstein, OBE, of Llandrindod Wells, Powys, Wales, a Holocaust survivor from a family that owned leather goods shops and an umbrella factory in Berlin. They have a son and a daughter. Books published Sir David King, Gabrielle Walker, The Hot Topic: how to tackle global warming and still keep the lights on, Bloomsbury London 2008 Oliver Inderwildi, Sir David King, Energy, Transport & the Environment, 2012, Springer London New York Heidelberg References Biographical links David King interviewed by Alan Macfarlane 27 November 2009 (video) Sir David King at the Smith School of Enterprise and the Environment, University of Oxford Sir David King at the Department of Chemistry, University of Cambridge BBC's biography of Sir David King David King's article on climate change at www.chinadialogue.net 'Profile: Professor Sir David King' by Alison Benjamin, The Guardian, 27 November 2007. Sir David King: Building a Sustainable Future Lecture presented at the Royal Institute of British Architecture 2007 (Video) British physical chemists Fellows of the Royal Society Masters of Downing College, Cambridge Members of the University of Cambridge Department of Chemistry 1939 births Living people Knights Bachelor Knights of the Legion of Honour Academics of the University of East Anglia Chief Scientific Advisers to HM Government Place of birth missing (living people) Presidents of the British Science Association Fellows of Queens' College, Cambridge Global Apollo Programme Academics of the University of Liverpool University of the Witwatersrand alumni Fellows of the American Academy of Arts and Sciences Fellows of St John's College, Cambridge Fellows of University College, Oxford Honorary Fellows of the Royal Academy of Engineering Academics of Imperial College London Chancellors of the University of Liverpool Professors of Physical Chemistry (Cambridge)
David King (chemist)
Chemistry
2,492
7,481,025
https://en.wikipedia.org/wiki/IWARP
iWARP is a computer networking protocol that implements remote direct memory access (RDMA) for efficient data transfer over Internet Protocol networks. Contrary to some accounts, iWARP is not an acronym. Because iWARP is layered on Internet Engineering Task Force (IETF)-standard congestion-aware protocols such as Transmission Control Protocol (TCP) and Stream Control Transmission Protocol (SCTP), it makes few requirements on the network, and can be successfully deployed in a broad range of environments. History In 2007, the IETF published five Request for Comments (RFCs) that define iWARP: RFC 5040 A Remote Direct Memory Access Protocol Specification is layered over Direct Data Placement Protocol (DDP). It defines how RDMA Send, Read, and Write operations are encoded using DDP into headers on the network. RFC 5041 Direct Data Placement over Reliable Transports is layered over MPA/TCP or SCTP. It defines how received data can be directly placed into an upper layer protocols receive buffer without intermediate buffers. RFC 5042 Direct Data Placement Protocol (DDP) / Remote Direct Memory Access Protocol (RDMAP) Security analyzes security issues related to iWARP DDP and RDMAP protocol layers. RFC 5043 Stream Control Transmission Protocol (SCTP) Direct Data Placement (DDP) Adaptation defines an adaptation layer that enables DDP over SCTP. RFC 5044 Marker PDU Aligned Framing for TCP Specification defines an adaptation layer that enables preservation of DDP-level protocol record boundaries layered over the TCP reliable connected byte stream. These RFCs are based on the RDMA Consortium's specifications for RDMA over TCP. The RDMA Consortium's specifications are influenced by earlier RDMA standards, including Virtual Interface Architecture (VIA) and InfiniBand (IB). Since 2007, the IETF has published three additional RFCs that maintain and extend iWARP: RFC 6580 IANA Registries for the Remote Direct Data Placement (RDDP) Protocols published in 2012 defines IANA registries for Remote Direct Data Placement (RDDP) error codes, operation codes, and function codes. RFC 6581 Enhanced Remote Direct Memory Access (RDMA) Connection Establishment published in 2011 fixes shortcomings with iWARP connection setup. RFC 7306 Remote Direct Memory Access (RDMA) Protocol Extensions published in 2014 extends RFC 5040 with atomic operations and RDMA Write with Immediate Data. Protocol The main component in the iWARP protocol is the Direct Data Placement Protocol (DDP), which permits the actual zero-copy transmission. DDP itself does not perform the transmission; the underlying protocol (TCP or SCTP) does. However, TCP does not respect message boundaries; it sends data as a sequence of bytes without regard to protocol data units (PDU). In this regard, DDP itself may be better suited for SCTP, and indeed the IETF proposed a standard RDMA over SCTP. To run DDP over TCP requires a tweak known as marker PDU aligned (MPA) framing to guarantee boundaries of messages. Furthermore, DDP is not intended to be accessed directly. Instead, a separate RDMA protocol (RDMAP) provides the services to read and write data. Therefore, the entire RDMA over TCP specification is really RDMAP over DDP over either MPA/TCP or SCTP. All of these protocols can be implemented in hardware. Unlike IB, iWARP only has reliable connected communication, as this is the only service that TCP and SCTP provide. The iWARP specification omits other features of IB, such as Send with Immediate Data operations. 
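In practice, applications do not build DDP segments themselves; they drive these RDMAP operations through a verbs-style interface (see Interfaces below). The following minimal sketch, in C against the OpenFabrics librdmacm/libibverbs libraries, shows roughly how a client might post an RDMA Write over an iWARP connection. It is an illustration of the programming model only, not part of the iWARP specification: the peer address and port, and the remote buffer address and rkey, are placeholders that a real application would learn out of band, and error handling is kept to a bare minimum.

/* Hedged sketch: one RDMA Write over an iWARP connection via librdmacm/libibverbs.
 * Assumed build command: cc iwarp_write.c -lrdmacm -libverbs
 * Peer address/port and the remote (address, rkey) pair are placeholders. */
#include <stdio.h>
#include <stdint.h>
#include <rdma/rdma_cma.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct rdma_addrinfo hints = { .ai_port_space = RDMA_PS_TCP }, *res;
    /* Resolve the (hypothetical) iWARP peer. */
    if (rdma_getaddrinfo("192.0.2.10", "7471", &hints, &res)) { perror("getaddrinfo"); return 1; }

    struct ibv_qp_init_attr qp_attr = {
        .cap = { .max_send_wr = 4, .max_recv_wr = 4, .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RC,     /* iWARP exposes a reliable connected service */
        .sq_sig_all = 1,
    };
    struct rdma_cm_id *id;
    /* Create the connection identifier and its queue pair in one call. */
    if (rdma_create_ep(&id, res, NULL, &qp_attr)) { perror("create_ep"); return 1; }

    /* Register the local buffer so the rNIC can DMA directly from it. */
    char buf[64] = "hello over iWARP";
    struct ibv_mr *mr = rdma_reg_msgs(id, buf, sizeof buf);

    if (rdma_connect(id, NULL)) { perror("connect"); return 1; }

    /* Remote buffer coordinates: placeholders for values the peer advertised out of band. */
    uint64_t remote_addr = 0;   /* peer's buffer address */
    uint32_t rkey        = 0;   /* peer's steering tag / rkey */

    struct ibv_sge sge = { .addr = (uintptr_t)buf, .length = sizeof buf, .lkey = mr->lkey };
    struct ibv_send_wr wr = {
        .opcode = IBV_WR_RDMA_WRITE,   /* carried as an RDMAP Write / DDP tagged message */
        .sg_list = &sge, .num_sge = 1, .send_flags = IBV_SEND_SIGNALED,
        .wr.rdma = { .remote_addr = remote_addr, .rkey = rkey },
    }, *bad;
    if (ibv_post_send(id->qp, &wr, &bad)) { perror("post_send"); return 1; }

    /* Busy-poll for the work completion; placement at the peer is done by its rNIC
     * without involving the remote CPU. */
    struct ibv_wc wc;
    while (ibv_poll_cq(id->send_cq, 1, &wc) == 0)
        ;
    printf("RDMA Write completed with status %d\n", wc.status);

    rdma_disconnect(id);
    rdma_dereg_mr(mr);
    rdma_destroy_ep(id);
    rdma_freeaddrinfo(res);
    return 0;
}

On an iWARP rNIC the posted work request travels as an RDMAP Write inside DDP tagged messages over MPA/TCP (or SCTP), which is the layering described above.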
With RFC 7306, the IETF is working to reduce these omissions. Implementation Because a kernel implementation of the TCP stack can be seen as a bottleneck, the protocol is typically implemented in hardware RDMA network interface controllers (rNICs). As simple data losses are rare in tightly coupled network environments, the error-correction mechanisms of TCP may be performed by software while the more frequently performed communications are handled strictly by logic embedded on the rNIC. Similarly, connections are often established entirely by software and then handed off to the hardware. Furthermore, the handling of iWARP specific protocol details is typically isolated from the TCP implementation, allowing rNICs to be used for both as RDMA offload and TCP offload (in support of traditional sockets based TCP/IP applications). The portion of the hardware implementation used for implementing the TCP protocol is known as the TCP Offload Engine (TOE). TOE itself does not prevent copying on the reception side, and must be combined with RDMA hardware for zero-copy results. The RDMA / TCP specification is a set of different wire protocols intended to be implemented in hardware (though it seems feasible to emulate it in software for compatibility but without the performance benefits). Interfaces iWARP is a protocol, not an implementation, but defines protocol behavior in terms of the operations that are legal for the protocol, known as Verbs. As such, iWARP does not have any single standard programming interface. However, programming interfaces tend to very closely correspond to the Verbs. Several programmatic interfaces have been proposed, including OpenFabrics Verbs, Network Direct, uDAPL, kDAPL, IT-API, and RNICPI. Implementations of some of these interfaces are available for different platforms, including Windows and Linux. Services available Networking services implemented over iWARP include those offered in the OpenFabrics Enterprise Distribution (OFED) by the OpenFabrics Alliance for Linux operating systems, and by Microsoft Windows via Network Direct. NVMe over Fabrics (NVMEoF) iSCSI Extensions for RDMA (iSER) Server Message Block Direct (SMB Direct) Sockets Direct Protocol (SDP) SCSI RDMA Protocol (SRP) Network File System over RDMA (NFS over RDMA) GPUDirect Vendors Popular vendors of iWarp enabled equipment include: Chelsio Marvell Bloombase See also RDMA over Converged Ethernet References External links OpenFabrics Alliance at the University of New Hampshire InterOperability Laboratory — Testing on iWARP devices Remote Direct Data Placement Charter (IETF) MPI-SCTP: Using the Stream Control Transmission Protocol for parallel programs written using the Message Passing Interface (2008-09-01) SMB2 Remote Direct Memory Access (RDMA) Transport Protocol (2017-06-01) Supercomputers Computer networks
IWARP
Technology
1,342
226,829
https://en.wikipedia.org/wiki/Four-velocity
In physics, in particular in special relativity and general relativity, a four-velocity is a four-vector in four-dimensional spacetime that represents the relativistic counterpart of velocity, which is a three-dimensional vector in space. Physical events correspond to mathematical points in time and space, the set of all of them together forming a mathematical model of physical four-dimensional spacetime. The history of an object traces a curve in spacetime, called its world line. If the object has mass, so that its speed is necessarily less than the speed of light, the world line may be parametrized by the proper time of the object. The four-velocity is the rate of change of four-position with respect to the proper time along the curve. The velocity, in contrast, is the rate of change of the position in (three-dimensional) space of the object, as seen by an observer, with respect to the observer's time. The value of the magnitude of an object's four-velocity, i.e. the quantity obtained by applying the metric tensor \(g\) to the four-velocity \(\mathbf{U}\), that is \(\|\mathbf{U}\|^2 = \mathbf{U}\cdot\mathbf{U} = g_{\mu\nu}U^{\mu}U^{\nu}\), is always equal to \(\pm c^2\), where \(c\) is the speed of light. Whether the plus or minus sign applies depends on the choice of metric signature. For an object at rest its four-velocity is parallel to the direction of the time coordinate with \(U^0 = c\). A four-velocity is thus the normalized future-directed timelike tangent vector to a world line, and is a contravariant vector. Though it is a vector, addition of two four-velocities does not yield a four-velocity: the space of four-velocities is not itself a vector space. Velocity The path of an object in three-dimensional space (in an inertial frame) may be expressed in terms of three spatial coordinate functions \(x^i(t)\) of time \(t\), where \(i\) is an index which takes values 1, 2, 3. The three coordinates form the 3d position vector, written as a column vector \(\mathbf{x}(t) = (x^1(t),\ x^2(t),\ x^3(t))^\mathsf{T}\). The components of the velocity \(\mathbf{u}\) (tangent to the curve) at any point on the world line are \(\mathbf{u} = \frac{d\mathbf{x}}{dt}\). Each component is simply written \(u^i = \frac{dx^i}{dt}\). Theory of relativity In Einstein's theory of relativity, the path of an object moving relative to a particular frame of reference is defined by four coordinate functions \(x^\mu(\tau)\), where \(\mu\) is a spacetime index which takes the value 0 for the timelike component, and 1, 2, 3 for the spacelike coordinates. The zeroth component is defined as the time coordinate multiplied by \(c\), \(x^0 = ct\). Each function depends on one parameter τ called its proper time. As a column vector, \(\mathbf{X} = (x^0(\tau),\ x^1(\tau),\ x^2(\tau),\ x^3(\tau))^\mathsf{T}\). Time dilation From time dilation, the differentials in coordinate time \(t\) and proper time \(\tau\) are related by \(dt = \gamma(u)\,d\tau\), where the Lorentz factor, \(\gamma(u) = \frac{1}{\sqrt{1 - u^2/c^2}}\), is a function of the Euclidean norm \(u\) of the 3d velocity vector \(\mathbf{u}\): \(u = \|\mathbf{u}\| = \sqrt{(u^1)^2 + (u^2)^2 + (u^3)^2}\). Definition of the four-velocity The four-velocity is the tangent four-vector of a timelike world line. The four-velocity at any point of the world line is defined as \(\mathbf{U} = \frac{d\mathbf{X}}{d\tau}\), where \(\mathbf{X}\) is the four-position and \(\tau\) is the proper time. The four-velocity defined here using the proper time of an object does not exist for world lines for massless objects such as photons travelling at the speed of light; nor is it defined for tachyonic world lines, where the tangent vector is spacelike. 
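As a brief worked illustration of this definition (the numbers are chosen for the example and are not from the article): consider a particle moving along the x-axis at constant speed \(u = 0.6c\). Using the component form derived in the next section, \(\gamma(u) = \frac{1}{\sqrt{1 - 0.6^2}} = 1.25\), so \(\mathbf{U} = (\gamma c,\ \gamma u,\ 0,\ 0) = (1.25c,\ 0.75c,\ 0,\ 0)\), and with the (−, +, +, +) signature \(\eta_{\mu\nu}U^{\mu}U^{\nu} = -(1.25c)^2 + (0.75c)^2 = -c^2\), confirming that the magnitude is the same fixed constant whatever the particle's speed.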
Components of the four-velocity
The relationship between the time $t$ and the coordinate time $x^0$ is defined by
$$x^0 = ct.$$
Taking the derivative of this with respect to the proper time $\tau$, we find the velocity component $U^\mu$ for $\mu = 0$:
$$U^0 = \frac{dx^0}{d\tau} = c \frac{dt}{d\tau} = \gamma c,$$
and, taking the derivative with respect to proper time for the other 3 components, we get the velocity component $U^\mu$ for $\mu = 1, 2, 3$:
$$U^i = \frac{dx^i}{d\tau} = \frac{dx^i}{dt} \frac{dt}{d\tau} = \gamma u^i,$$
where we have used the chain rule and the relationships
$$u^i = \frac{dx^i}{dt}, \qquad \frac{dt}{d\tau} = \gamma.$$
Thus, we find for the four-velocity
$$\mathbf{U} = \gamma \left( c, \mathbf{u} \right).$$
Written in standard four-vector notation this is:
$$\mathbf{U} = \left( U^0, U^1, U^2, U^3 \right) = \left( \gamma c, \gamma \mathbf{u} \right),$$
where $\gamma c$ is the temporal component and $\gamma \mathbf{u}$ is the spatial component. In terms of the synchronized clocks and rulers associated with a particular slice of flat spacetime, the three spacelike components of four-velocity define a traveling object's proper velocity $\gamma \mathbf{u} = d\mathbf{x}/d\tau$, i.e. the rate at which distance is covered in the reference map frame per unit proper time elapsed on clocks traveling with the object.

Unlike most other four-vectors, the four-velocity has only 3 independent components instead of 4. The factor $\gamma$ is a function of the three-dimensional velocity $\mathbf{u}$. When certain Lorentz scalars are multiplied by the four-velocity, one then gets new physical four-vectors that have 4 independent components. For example:
Four-momentum: $\mathbf{P} = m_0 \mathbf{U}$, where $m_0$ is the rest mass
Four-current density: $\mathbf{J} = \rho_0 \mathbf{U}$, where $\rho_0$ is the rest charge density
Effectively, the $\gamma$ factor combines with the Lorentz scalar term to make the 4th independent component, $m = \gamma m_0$ and $\rho = \gamma \rho_0$.

Magnitude
Using the differential of the four-position in the rest frame, the magnitude of the four-velocity can be obtained by the Minkowski metric with signature $(-, +, +, +)$:
$$\|\mathbf{U}\|^2 = \eta_{\mu\nu} U^\mu U^\nu = \eta_{\mu\nu} \frac{dX^\mu}{d\tau} \frac{dX^\nu}{d\tau} = -c^2;$$
in short, the magnitude of the four-velocity for any object is always a fixed constant:
$$\|\mathbf{U}\|^2 = -c^2.$$
In a moving frame, the same norm is:
$$\|\mathbf{U}\|^2 = -\left( \gamma c \right)^2 + \left( \gamma u \right)^2,$$
so that:
$$-c^2 = -\left( \gamma c \right)^2 + \left( \gamma u \right)^2,$$
which reduces to the definition of the Lorentz factor.

See also
Four-acceleration
Four-momentum
Four-force
Four-gradient
Algebra of physical space
Congruence (general relativity)
Hyperboloid model
Rapidity

Remarks

References

Four-vectors
Four-velocity
Physics
1,029
31,602,537
https://en.wikipedia.org/wiki/Presidents%20of%20the%20American%20Chemical%20Society
Presidents of the American Chemical Society: John W. Draper (1876) J. Lawrence Smith (1877) Samuel William Johnson (1878) T. Sterry Hunt (1879) Frederick A. Genth (1880) Charles F. Chandler (1881) John W. Mallet (1882) James C. Booth (1883) Albert B. Prescott (1886) Charles Anthony Goessmann (1887) T. Sterry Hunt (1888) Charles F. Chandler (1889) Henry B. Nason (1890) George F. Barker (1891) George C. Caldwell (1892) Harvey W. Wiley (1893) Edgar Fahs Smith (1895) Charles B. Dudley (1896) Charles E. Munroe (1898) Edward W. Morley (1899) William McMurtrie (1900) Frank W. Clarke (1901) Ira Remsen (1902) John H. Long (1903) Arthur Amos Noyes (1904) Francis P. Venable (1905) William F. Hillebrand (1906) Marston T. Bogert (1907) Willis R. Whitney (1909) Wilder D. Bancroft (1910) Alexander Smith (1911) Arthur D. Little (1912) Theodore W. Richards (1914) Charles H. Herty (1915) Julius Stieglitz (1917) William H. Nichols (1918) William A. Noyes (1920) Edgar Fahs Smith (1921) Edward C. Franklin (1923) Leo H. Baekeland (1924) James Flack Norris (1925) George D. Rosengarten (1927) Samuel W. Parr (1928) Irving Langmuir (1929) William McPherson (1930) Moses Gomberg (1931) Lawrence V. Redman (1932) Arthur B. Lamb (1933) Charles L. Reese (1934) Roger Adams (1935) Edward Bartow (1936) Edward R. Weidlein (1937) Frank C. Whitmore (1938) Charles A. Kraus (1939) Samuel C. Lind (1940) William Lloyd Evans (1941) Harry N. Holmes (1942) Per K. Frolich (1943) Thomas Midgley Jr. (1944) Carl S. Marvel (1945) Bradley Dewey (1946) W. Albert Noyes Jr. (1947) Charles A. Thomas (1948) Linus Pauling (1949) Ernest H. Volwiler (1950) N. Howell Furman (1951) Edgar C. Britton (1952) Farrington Daniels (1953) Harry L. Fisher (1954) Joel H. Hildebrand (1955) John C. Warner (1956) Roger J. Williams (1957) Clifford F. Rassweiler (1958) John C. Bailar Jr. (1959) Albert L. Elder (1960) Arthur C. Cope (1961) Karl Folkers (1962) Henry Eyring (1963) Maurice H. Arveson (1964) Charles C. Price (1965) William J. Sparks (1966) Charles G. Overberger (1967) Robert W. Cairns (1968) Wallace R. Brode (1969) Byron Riegel (1970) Melvin Calvin (1971) Max Tishler (1972) Alan C. Nixon (1973) Bernard S. Friedman (1974) William J. Bailey (1975) Glenn T. Seaborg (1976) Henry A. Hill (1977) Anna J. Harrison (1978) Gardner W. Stacy (1979) James D. D'Ianni (1980) Albert C. Zettlemoyer (1981) Robert W. Parry (1982) Fred Basolo (1983) Warren D. Niederhauser (1984) Ellis K. Fields (1985) George C. Pimentel (1986) Mary L. Good (1987) Gordon L. Nelson (1988) Clayton F. Callis (1989) Paul G. Gassman (1990) S. Allen Heininger (1991) Ernest L. Eliel (1992) Helen M. Free (1993) Ned D. Heindel (1994) Brian M. Rushton (1995) Ronald Breslow (1996) Paul S. Anderson (1997) Paul H.L. Walter (1998) Edel Wasserman (1999) Daryle H. Busch (2000) Attila E. Pavlath (2001) Eli M. Pearce (2002) Elsa Reichmanis (2003) Charles P. Casey (2004) William F. Carroll Jr. (2005) Elizabeth Ann Nalley (2006) Catherine T. Hunt (2007) Bruce E. Bursten (2008) Thomas H. Lane (2009) Joseph Francisco (2010) Nancy B. Jackson (2011) Bassam Z. Shakhashiri (2012) Marinda Li Wu (2013) Thomas J. Barton (2014) Diane Grob Schmidt (2015) Donna J. Nelson (2016) Allison A. Campbell (2017) Peter K. Dorhout (2018) Bonnie A. Charpentier (2019) Luis Echegoyen (2020) H.N. Cheng (2021) Angela K. Wilson (2022) Judith Giordan (2023) Mary K. Carroll (2024) Dorothy J. Phillips (2025) Rigoberto Hernandez (2026) References American Chemical Society American Chemical Society
Presidents of the American Chemical Society
Chemistry
1,080
14,488,528
https://en.wikipedia.org/wiki/Turbojet%20train
A turbojet train is a train powered by turbojet engines. Like a jet aircraft, but unlike a gas turbine locomotive, the train is propelled by the jet thrust of the engines, rather than by its wheels. Only a handful of jet-powered trains have been built, for experimental research in high-speed rail. Turbojet trains have been built with the engines incorporated into a railcar combining both propulsion and passenger accommodation, rather than as separate locomotives hauling passenger coaches. As turbojet engines are most efficient at high speeds, the experimental research has focused on applications for high-speed passenger services, rather than the heavier trains (with more frequent stops) used for freight services.

M-497
The first attempt to use turbojet engines on a railroad was made in 1966 by the New York Central Railroad (NYCR), a company with operations throughout the Great Lakes region. They streamlined a Budd Rail Diesel Car, added two General Electric J47-19 jet engines, and nicknamed it the M-497 Black Beetle. Testing was performed on a length of the normal NYCR system – a virtually arrow-straight layout of regular existing track between Butler, Indiana, and Stryker, Ohio. On July 23, 1966, the train reached a speed of 183.68 mph (295.6 km/h).

LIMRV
In the early 1970s, the U.S. Federal Railroad Administration developed the Linear Induction Motor Research Vehicle (LIMRV), meant to test the use of linear induction motors. The LIMRV was a specialized wheeled vehicle, running on standard-gauge railroad track. Speed was limited due to the length of the track and vehicle acceleration rates. One stage of research saw the addition of two Pratt & Whitney J52 jet engines to propel the LIMRV. Once the LIMRV had accelerated to the desired velocity, the engines were throttled back so that the thrust equaled the drag. On 14 August 1974, using the jet engines, the LIMRV achieved a world record speed of about 410 km/h (255 mph) for vehicles on conventional rail.

SVL
In 1970, researchers in the USSR developed the Skorostnoy Vagon-Laboratoriya ("high-speed laboratory railcar", SVL) turbojet train. The SVL was able to reach a speed of 249 km/h (155 mph). The researchers placed jet engines on an ER22 railcar, normally part of an electric-powered multiple unit train. The SVL had a mass of 54.4 tonnes (including 7.4 tonnes of fuel) and was about 28 m long. If the research had been successful, there was a plan to use the turbojet-powered vehicle to pull a "Russian troika" express service. As of 2014, the train still exists in a dilapidated and unmaintained state, while the research project has been honoured with a monument made from the front of the railcar, outside a railcar factory in Tver, a city in western Russia.

See also
Aérotrain – a contemporary French hovercraft train, also powered by a jet engine
Aerowagon
Schienenzeppelin – a German propeller-driven railcar of 1929
Turboshaft

Notes

References

External links
A collection of photographs of the ER22 turbojet locomotive

Gas turbine multiple units Jet engines Railcars of Russia High-speed trains of Russia Experimental locomotives
Turbojet train
Technology
634
1,244,926
https://en.wikipedia.org/wiki/Graveyard%20orbit
A graveyard orbit, also called a junk orbit or disposal orbit, is an orbit that lies away from common operational orbits. One significant graveyard orbit is a supersynchronous orbit well beyond geosynchronous orbit. Some satellites are moved into such orbits at the end of their operational life to reduce the probability of colliding with operational spacecraft and generating space debris.

Overview
A graveyard orbit is used when the change in velocity required to perform a de-orbit maneuver is too large. De-orbiting a geostationary satellite requires a delta-v of about 1,500 m/s, whereas re-orbiting it to a graveyard orbit only requires about 11 m/s. For satellites in geostationary orbit and geosynchronous orbits, the graveyard orbit is a few hundred kilometers beyond the operational orbit. The transfer to a graveyard orbit beyond geostationary orbit requires the same amount of fuel as a satellite needs for about three months of stationkeeping. It also requires reliable attitude control during the transfer maneuver. While most satellite operators plan to perform such a maneuver at the end of their satellites' operational lives, through 2005 only about one-third succeeded. Given the economic value of the positions at geosynchronous altitude, unless premature spacecraft failure precludes it, satellites are moved to a graveyard orbit prior to decommissioning.

According to the Inter-Agency Space Debris Coordination Committee (IADC), the minimum perigee altitude $\Delta H$ beyond the geostationary orbit is:
$$\Delta H = 235\ \text{km} + \left( 1000 \cdot C_R \cdot \frac{A}{m} \right)\ \text{km},$$
where $C_R$ is the solar radiation pressure coefficient and $A/m$ is the aspect area [m²] to mass [kg] ratio of the satellite. This formula includes about 200 km for the GEO-protected zone to also permit orbit maneuvers in GEO without interference with the graveyard orbit. Another 35 km of tolerance must be allowed for the effects of gravitational perturbations (primarily solar and lunar). The remaining part of the equation considers the effects of the solar radiation pressure, which depends on the physical parameters of the satellite.

In order to obtain a license to provide telecommunications services in the United States, the Federal Communications Commission (FCC) requires all geostationary satellites launched after March 18, 2002, to commit to moving to a graveyard orbit at the end of their operational lives. U.S. government regulations require a boost, $\Delta H$, of about 300 km. In 2023 DISH received the first-ever fine by the FCC for failing to de-orbit its EchoStar VII satellite according to the terms of its license.

A spacecraft moved to a graveyard orbit will typically be passivated. Uncontrolled objects in a near-geostationary Earth orbit (GEO) exhibit a 53-year cycle of orbital inclination due to the interaction of the Earth's tilt with the lunar orbit. The orbital inclination varies by ±7.4°, at up to 0.8° per year.
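As a rough illustration of the IADC formula above (a sketch only: the function name, the radiation-pressure coefficient, the aspect area, and the mass below are made-up example values, not taken from any real satellite), the required perigee raise can be computed as follows.

```python
def graveyard_perigee_raise_km(c_r, area_m2, mass_kg):
    """Minimum perigee altitude above GEO per the IADC guideline:
    235 km + 1000 * C_R * (A / m) km."""
    return 235.0 + 1000.0 * c_r * (area_m2 / mass_kg)

# Hypothetical satellite: C_R = 1.5, 20 m^2 aspect area, 2,000 kg mass
print(graveyard_perigee_raise_km(1.5, 20.0, 2000.0))  # 250.0 km above GEO
```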
Disposal orbit
While the standard geosynchronous satellite graveyard orbit results in an expected orbital lifetime of millions of years, the increasing number of satellites, the launch of microsatellites, and the FCC approval of large megaconstellations of thousands of satellites for launch by 2022 necessitate new approaches to deorbiting to assure earlier removal of the objects once they have reached end-of-life. In contrast to GEO graveyard orbits, which require three months' worth of fuel (a delta-v of 11 m/s) to reach, large satellite networks in LEO require orbits that passively decay into the Earth's atmosphere. For example, both OneWeb and SpaceX have committed to the FCC regulatory authorities that decommissioned satellites will decay to a lower orbit (a disposal orbit) where the satellite's orbital altitude would decay due to atmospheric drag and then naturally reenter the atmosphere and burn up within one year of end-of-life.

See also
List of orbits
SNAP-10A – nuclear reactor satellite, remaining in a sub-synchronous Earth orbit for an expected 4,000 years
Spacecraft cemetery, in the Pacific Ocean

Notes

References

derelict Astrodynamics Earth orbits Spacecraft retirement
Graveyard orbit
Technology,Engineering
804