Dataset columns: id (int64, 580 to 79M); url (string, lengths 31 to 175); text (string, lengths 9 to 245k); source (string, lengths 1 to 109); categories (string, 160 classes); token_count (int64, 3 to 51.8k)
6,210,127
https://en.wikipedia.org/wiki/Communication%20software
Communication software is used to provide remote access to systems and exchange files and messages in text, audio and/or video formats between different computers or users. This includes terminal emulators, file transfer programs, chat and instant messaging programs, as well as similar functionality integrated within MUDs. The term is also applied to software operating a bulletin board system, but seldom to that operating a computer network or Stored Program Control exchange. History E-mail was introduced in the early 1960s as a way for multiple users of a time-sharing mainframe computer to communicate. Basic text chat functionality has existed on multi-user computer systems and bulletin board systems since the early 1970s. In the 1980s, a terminal emulator was a piece of software necessary to log into mainframes and thus access e-mail. Prior to the rise of the Internet, computer files were exchanged over dialup lines, requiring ways to send binary files over communication systems that were primarily intended for plain text; programs implementing special transfer modes followed various de facto standards, most notably Kermit. Chat In 1985 the first decentralized chat system, Bitnet Relay, was created, while Minitel probably provided the largest chat system at the same time. In August 1988 Internet Relay Chat followed. CU-SeeMe was the first chat system equipped with a video camera. Instant messaging featuring a buddy list and the notion of online presence was introduced by ICQ in 1996. In the days of the Internet boom, web chats were very popular, too. Chatting is a real-time conversation or message exchange that takes place in public or in private groupings called chat rooms. Some chatrooms have moderators who will trace and block offensive comments and other kinds of abuse. Based on visual representation, chats are divided into text-based chat rooms, such as IRC and Bitnet Relay Chat; 2D chats, which support graphic smilies; and 3D chats, in which the conversation takes place in a 3D graphic surrounding. References Internet New media Multimedia
Communication software
Technology
403
49,059,554
https://en.wikipedia.org/wiki/Clay%20chemistry
Clay chemistry is an applied subdiscipline of chemistry which studies the chemical structures, properties and reactions of or involving clays and clay minerals. It is a multidisciplinary field, involving concepts and knowledge from inorganic and structural chemistry, physical chemistry, materials chemistry, analytical chemistry, organic chemistry, mineralogy, geology and others. The study of the chemistry (and physics) of clays and clay minerals is of great academic and industrial relevance as they are among the most widely used industrial minerals, being employed as raw materials (ceramics, pottery, etc.), adsorbents, catalysts, additives, mineral charges, medicines, building materials and others. The unique properties of clay minerals, including their nanometric-scale layered construction, the presence of fixed and interchangeable charges, the possibility of adsorbing and hosting (intercalating) molecules, the ability to form stable colloidal dispersions, and the possibility of tailored surface and interlayer chemical modification, make the study of clay chemistry a very important and extremely varied field of research. Many distinct fields and knowledge areas are impacted by the physico-chemical behavior of clay minerals, from environmental sciences to chemical process engineering, from pottery to nuclear waste management. Their cation exchange capacity (CEC) is of great importance in the balance of the most common cations in soil (Na+, K+, NH4+, Ca2+, Mg2+) and pH control, with direct impact on soil fertility. It also plays an important role in the fate of most Ca2+ arriving from land (river water) into the seas. The ability to change and control the CEC of clay minerals offers a valuable tool in the development of selective adsorbents, with applications as varied as chemical sensors or pollution-cleaning substances for contaminated water, for example. The understanding of the reactions of clay minerals with water (intercalation, adsorption, colloidal dispersion, etc.) is indispensable for the ceramic industry (plasticity and flow control of ceramic raw mixtures, for example). Those interactions also influence a great number of mechanical properties of soils, and are carefully studied by building and construction engineering specialists. The interactions of clay minerals with organic substances in the soil also play a vital role in the fixation of nutrients and fertility, as well as in the fixation or leaching of pesticides and other contaminants. Some clay minerals (kaolinite) are used as carrier material for fungicides and insecticides. The weathering of many rock types produces clay minerals as some of its last products. The understanding of these geochemical processes is also important for the understanding of the geological evolution of landscapes and the macroscopic properties of rocks and sediments. The presence of clay minerals on Mars, detected by the Mars Reconnaissance Orbiter in 2009, was further strong evidence of the existence of water on the planet in previous geological eras. The possibility of dispersing nanometric-scale clay mineral particles into a polymer matrix, with the formation of an inorganic-organic nanocomposite, has prompted a large resurgence in the study of these minerals since the late 1990s. In addition, the study of clay chemistry is also of great relevance to the chemical industry, as many clay minerals are used as catalysts, catalyst precursors or catalyst substrates in a number of chemical processes, such as automotive catalysts and oil cracking catalysts.
See also References Pottery Bricks Clay Chemistry Geochemistry Soil-based building materials
Clay chemistry
Chemistry
702
11,740,955
https://en.wikipedia.org/wiki/Thioformaldehyde
Thioformaldehyde is the organosulfur compound with the formula CH2S. It is the simplest thioaldehyde. This compound is not observed in the condensed state (solid or liquid) because it oligomerizes to 1,3,5-trithiane, which is a stable colorless compound with the same empirical formula. Despite the instability of these concentrated forms, thioformaldehyde as a dilute gas has been extensively studied. For these purposes, it is produced by thermal decomposition of dimethyl disulfide. The molecule has been observed in the interstellar medium and has attracted much attention for its fundamental nature. The tendency of thioformaldehyde to form chains and rings is a manifestation of the double bond rule. Although thioformaldehyde tends to oligomerize, many metal complexes are known. One example is Os(SCH2)(CO)2(PPh3)2. References Thioaldehydes
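For concreteness, the chemistry described above can be written out as equations (standard textbook forms, not quoted from the article): the oligomerization to 1,3,5-trithiane is the trimerization 3 CH2S → (CH2S)3, which is why the trimer shares the empirical formula CH2S, and the dilute gas-phase monomer is generated by pyrolysis of dimethyl disulfide, CH3S–SCH3 → CH2=S + CH3SH, with methanethiol as the co-product.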
Thioformaldehyde
Chemistry
208
5,360,782
https://en.wikipedia.org/wiki/Robert%20Abel%20and%20Associates
Robert Abel and Associates (RA&A) was a pioneering American production company specializing in television commercials made with computer graphics. Founded by Robert Abel and Con Pederson in 1971, RA&A was especially known for its art direction and won many Clio Awards. Abel and his team created some of the most advanced and impressive computer-animated works of their time, including full ray-traced renders and fluid character animation at a time when such things were largely unknown. A variety of high-profile television advertisements, graphics sequences for motion pictures (including The Andromeda Strain and Tron), and work on laserdisc video games such as Cube Quest put Abel and his team on the map in the early 1980s. The company was also originally commissioned to create the visual effects for Star Trek: The Motion Picture, but was subsequently taken off the project for mishandling funds. The company was also notable for its work on The Jacksons' 1981 music video "Can You Feel It". RA&A was located on the southwest corner of Highland Avenue and Romaine in the heart of Hollywood, California. RA&A closed in 1987 following an ill-fated merger with the now-defunct Omnibus Computer Graphics, Inc., a company which had been based in Toronto. Many people who worked at RA&A went on to other ground-breaking projects, including the founding of Wavefront Technologies, Rhythm & Hues and other studios. Many RA&A people went on to win Academy Awards. References External links The Bob Abel Project Rob Abel info at OSU A demo reel of work from 1982 1971 establishments in California 1987 disestablishments in California Design companies established in 1971 Design companies disestablished in 1987 Defunct companies based in Greater Los Angeles Defunct organizations based in Hollywood, Los Angeles Film production companies of the United States American animation studios Visual effects companies Digital media
Robert Abel and Associates
Technology
377
14,156,867
https://en.wikipedia.org/wiki/PSMD10
26S proteasome non-ATPase regulatory subunit 10, or gankyrin, is an enzyme that in humans is encoded by the PSMD10 gene. First isolated in 1998 by Tanaka et al., gankyrin is an oncoprotein that is a component of the 19S regulatory cap of the proteasome. Structurally, it contains a 33-amino acid ankyrin repeat that forms a series of alpha helices. It plays a key role in regulating the cell cycle via protein-protein interactions with the cyclin-dependent kinase CDK4. It also binds closely to the E3 ubiquitin ligase MDM2, which is a regulator of the degradation of p53 and retinoblastoma protein, both transcription factors involved in tumor suppression and found mutated in many cancers. Gankyrin also has an anti-apoptotic effect and is overexpressed in certain types of tumor cells such as hepatocellular carcinoma. Function The 26S proteasome is a multicatalytic proteinase complex with a highly ordered structure composed of 2 complexes, a 20S core and a 19S regulator. The 20S core is composed of 4 rings of 28 non-identical subunits; 2 rings are composed of 7 alpha subunits and 2 rings are composed of 7 beta subunits. The 19S regulator is composed of a base, which contains 6 ATPase subunits and 2 non-ATPase subunits, and a lid, which contains up to 10 non-ATPase subunits. Proteasomes are distributed throughout eukaryotic cells at a high concentration and cleave peptides in an ATP/ubiquitin-dependent process in a non-lysosomal pathway. An essential function of a modified proteasome, the immunoproteasome, is the processing of class I MHC peptides. This gene encodes a non-ATPase subunit of the 19S regulator. Two transcripts encoding different isoforms have been described. Pseudogenes have been identified on chromosomes 3 and 20. Clinical significance The proteasome and its subunits are of clinical significance for at least two reasons: (1) a compromised complex assembly or a dysfunctional proteasome can be associated with the underlying pathophysiology of specific diseases, and (2) they can be exploited as drug targets for therapeutic interventions. More recently, effort has been made to consider the proteasome for the development of novel diagnostic markers and strategies. An improved and comprehensive understanding of the pathophysiology of the proteasome should lead to clinical applications in the future. The proteasomes form a pivotal component of the ubiquitin–proteasome system (UPS) and corresponding cellular Protein Quality Control (PQC). Protein ubiquitination and subsequent proteolysis and degradation by the proteasome are important mechanisms in the regulation of the cell cycle, cell growth and differentiation, gene transcription, signal transduction and apoptosis. Subsequently, a compromised proteasome complex assembly and function lead to reduced proteolytic activities and the accumulation of damaged or misfolded protein species. Such protein accumulation may contribute to the pathogenesis and phenotypic characteristics in neurodegenerative diseases, cardiovascular diseases, inflammatory responses and autoimmune diseases, and systemic DNA damage responses leading to malignancies.
Several experimental and clinical studies have indicated that aberrations and deregulations of the UPS contribute to the pathogenesis of several neurodegenerative and myodegenerative disorders, including Alzheimer's disease, Parkinson's disease and Pick's disease, amyotrophic lateral sclerosis (ALS), Huntington's disease, Creutzfeldt–Jakob disease, and motor neuron diseases, polyglutamine (PolyQ) diseases, muscular dystrophies and several rare forms of neurodegenerative diseases associated with dementia. As part of the ubiquitin–proteasome system (UPS), the proteasome maintains cardiac protein homeostasis and thus plays a significant role in cardiac ischemic injury, ventricular hypertrophy and heart failure. Additionally, evidence is accumulating that the UPS plays an essential role in malignant transformation. UPS proteolysis plays a major role in responses of cancer cells to stimulatory signals that are critical for the development of cancer. Accordingly, gene expression by degradation of transcription factors, such as p53, c-jun, c-Fos, NF-κB, c-Myc, HIF-1α, MATα2, STAT3, sterol-regulated element-binding proteins and androgen receptors, is controlled by the UPS and thus involved in the development of various malignancies. Moreover, the UPS regulates the degradation of tumor suppressor gene products such as adenomatous polyposis coli (APC) in colorectal cancer, retinoblastoma (Rb) and von Hippel–Lindau tumor suppressor (VHL), as well as a number of proto-oncogenes (Raf, Myc, Myb, Rel, Src, Mos, ABL). The UPS is also involved in the regulation of inflammatory responses. This activity is usually attributed to the role of proteasomes in the activation of NF-κB, which further regulates the expression of pro-inflammatory cytokines such as TNF-α, IL-1β, IL-8, adhesion molecules (ICAM-1, VCAM-1, P-selectin) and prostaglandins and nitric oxide (NO). Additionally, the UPS also plays a role in inflammatory responses as a regulator of leukocyte proliferation, mainly through proteolysis of cyclins and the degradation of CDK inhibitors. Lastly, autoimmune disease patients with SLE, Sjögren syndrome and rheumatoid arthritis (RA) predominantly exhibit circulating proteasomes, which can be applied as clinical biomarkers. Interactions PSMD10 has been shown to interact with: Mdm2, PAAF1, and PSMC4. References Further reading External links Proteins Oncogenes
PSMD10
Chemistry
1,288
580
https://en.wikipedia.org/wiki/Astronomer
An astronomer is a scientist in the field of astronomy who focuses on a specific question or field outside the scope of Earth. Astronomers observe astronomical objects, such as stars, planets, moons, comets and galaxies – in either observational (by analyzing the data) or theoretical astronomy. Examples of topics or fields astronomers study include planetary science, solar astronomy, the origin or evolution of stars, or the formation of galaxies. A related but distinct subject is physical cosmology, which studies the Universe as a whole. Types Astronomers typically fall under either of two main types: observational and theoretical. Observational astronomers make direct observations of celestial objects and analyze the data. In contrast, theoretical astronomers create and investigate models of things that cannot be observed. Because it takes millions to billions of years for a system of stars or a galaxy to complete a life cycle, astronomers must observe snapshots of different systems at unique points in their evolution to determine how they form, evolve, and die. They use this data to create models or simulations to theorize how different celestial objects work. Further subcategories under these two main branches of astronomy include planetary astronomy, astrobiology, stellar astronomy, astrometry, galactic astronomy, extragalactic astronomy, or physical cosmology. Astronomers can also specialize in certain specialties of observational astronomy, such as infrared astronomy, neutrino astronomy, x-ray astronomy, and gravitational-wave astronomy. Academic History Historically, astronomy was more concerned with the classification and description of phenomena in the sky, while astrophysics attempted to explain these phenomena and the differences between them using physical laws. Today, that distinction has mostly disappeared and the terms "astronomer" and "astrophysicist" are interchangeable. Professional astronomers are highly educated individuals who typically have a PhD in physics or astronomy and are employed by research institutions or universities. They spend the majority of their time working on research, although they quite often have other duties such as teaching, building instruments, or aiding in the operation of an observatory. The American Astronomical Society, which is the major organization of professional astronomers in North America, has approximately 8,200 members (as of 2024). This number includes scientists from other fields such as physics, geology, and engineering, whose research interests are closely related to astronomy. The International Astronomical Union comprises about 12,700 members from 92 countries who are involved in astronomical research at the PhD level and beyond (as of 2024). Contrary to the classical image of an old astronomer peering through a telescope through the dark hours of the night, it is far more common to use a charge-coupled device (CCD) camera to record a long, deep exposure, allowing a more sensitive image to be created because the light is added over time. Before CCDs, photographic plates were a common method of observation. Modern astronomers spend relatively little time at telescopes, usually just a few weeks per year. Analysis of observed phenomena, along with making predictions as to the causes of what they observe, takes the majority of observational astronomers' time. Activities and graduate degree training Astronomers who serve as faculty spend much of their time teaching undergraduate and graduate classes. 
Most universities also have outreach programs, including public telescope time and sometimes planetariums, as a public service to encourage interest in the field. Those who become astronomers usually have a broad background in physics, mathematics, sciences, and computing in high school. Taking courses that teach how to research, write, and present papers are part of the higher education of an astronomer, while most astronomers attain both a Master's degree and eventually a PhD degree in astronomy, physics or astrophysics. PhD training typically involves 5-6 years of study, including completion of upper-level courses in the core sciences, a competency examination, experience with teaching undergraduates and participating in outreach programs, work on research projects under the student's supervising professor, completion of a PhD thesis, and passing a final oral exam. Throughout the PhD training, a successful student is financially supported with a stipend. Amateur astronomers While there is a relatively low number of professional astronomers, the field is popular among amateurs. Most cities have amateur astronomy clubs that meet on a regular basis and often host star parties. The Astronomical Society of the Pacific is the largest general astronomical society in the world, comprising both professional and amateur astronomers as well as educators from 70 different nations. As with any hobby, most people who practice amateur astronomy may devote a few hours a month to stargazing and reading the latest developments in research. However, amateurs span the range from so-called "armchair astronomers" to the highly ambitious people who own science-grade telescopes and instruments with which they are able to make their own discoveries, create astrophotographs, and assist professional astronomers in research. See also List of astronomers List of women astronomers List of Muslim astronomers List of French astronomers List of Hungarian astronomers List of Russian astronomers and astrophysicists List of Slovenian astronomers References Sources External links American Astronomical Society European Astronomical Society International Astronomical Union Astronomical Society of the Pacific Space's astronomy news Astronomy Science occupations
Astronomer
Astronomy
1,022
23,482,063
https://en.wikipedia.org/wiki/SN%202009gj
SN 2009gj was a supernova located approximately 60 million light years away from Earth. It was discovered on June 20, 2009, by New Zealand amateur astronomer and dairy farmer Stuart Parker. See also List of supernovae History of supernova observation List of supernova remnants List of supernova candidates References External links Light curves on the Open Supernova Catalog Supernovae Sculptor (constellation) 20090620
SN 2009gj
Chemistry,Astronomy
85
3,518,914
https://en.wikipedia.org/wiki/Stress%20migration
Stress migration is a failure mechanism that often occurs in integrated circuit metallization (aluminum, copper). Voids form as a result of vacancy migration driven by the hydrostatic stress gradient. Large voids may lead to an open circuit or an unacceptable resistance increase that impedes IC performance. Stress migration is often referred to as stress voiding, stress-induced voiding or SIV. High temperature processing of copper dual damascene structures leaves the copper with a large tensile stress due to a mismatch in the coefficients of thermal expansion of the materials involved. The stress can relax with time through the diffusion of vacancies, leading to the formation of voids and ultimately open circuit failures. References Semiconductor device fabrication
Stress migration
Materials_science
141
1,244,992
https://en.wikipedia.org/wiki/Moment%20problem
In mathematics, a moment problem arises as the result of trying to invert the mapping that takes a measure μ to the sequence of moments m_n = ∫ x^n dμ(x). More generally, one may consider m_n = ∫ M_n(x) dμ(x) for an arbitrary sequence of functions M_n. Introduction In the classical setting, μ is a measure on the real line, and M is the sequence {x^n : n = 0, 1, 2, ...}. In this form the question appears in probability theory, asking whether there is a probability measure having specified mean, variance and so on, and whether it is unique. There are three named classical moment problems: the Hamburger moment problem in which the support of μ is allowed to be the whole real line; the Stieltjes moment problem, for [0, +∞); and the Hausdorff moment problem for a bounded interval, which without loss of generality may be taken as [0, 1]. The moment problem also extends to complex analysis as the trigonometric moment problem in which the Hankel matrices are replaced by Toeplitz matrices and the support of μ is the complex unit circle instead of the real line. Existence A sequence of numbers m_n is the sequence of moments of a measure μ if and only if a certain positivity condition is fulfilled; namely, the Hankel matrices H_n, with entries (H_n)_{ij} = m_{i+j}, should be positive semi-definite. This is because a positive-semidefinite Hankel matrix corresponds to a linear functional Λ such that Λ(x^n) = m_n and Λ(f^2) ≥ 0 (non-negative for sums of squares of polynomials). Assume Λ can be extended to R[x]*. In the univariate case, a non-negative polynomial can always be written as a sum of squares, so the linear functional Λ is positive for all the non-negative polynomials in the univariate case. By Haviland's theorem, the linear functional has a measure form, that is Λ(x^n) = ∫ x^n dμ(x). A condition of similar form is necessary and sufficient for the existence of a measure supported on a given interval [a, b]. One way to prove these results is to consider the linear functional φ that sends a polynomial P(x) = Σ_k a_k x^k to Σ_k a_k m_k. If the m_k are the moments of some measure μ supported on [a, b], then evidently φ(P) ≥ 0 for any polynomial P that is non-negative on [a, b] (call this condition (*)). Vice versa, if (*) holds, one can apply the M. Riesz extension theorem and extend φ to a functional on the space of continuous functions with compact support C_c([a, b]), so that φ(f) ≥ 0 for any non-negative f in that space (condition (**)). By the Riesz representation theorem, (**) holds iff there exists a measure μ supported on [a, b] such that φ(f) = ∫ f dμ for every such f. Thus the existence of the measure is equivalent to (*). Using a representation theorem for positive polynomials on [a, b], one can reformulate (*) as a condition on Hankel matrices. Uniqueness (or determinacy) The uniqueness of μ in the Hausdorff moment problem follows from the Weierstrass approximation theorem, which states that polynomials are dense under the uniform norm in the space of continuous functions on [0, 1]. For the problem on an infinite interval, uniqueness is a more delicate question. There are distributions, such as log-normal distributions, which have finite moments for all the positive integers but where other distributions have the same moments. Formal solution When the solution exists, it can be formally written using derivatives of the Dirac delta function as dμ(x) = Σ_n ((−1)^n / n!) m_n δ^(n)(x) dx. The expression can be derived by taking the inverse Fourier transform of its characteristic function. Variations An important variation is the truncated moment problem, which studies the properties of measures with a fixed finite set of first moments m_0, ..., m_N (for a finite N). Results on the truncated moment problem have numerous applications to extremal problems, optimisation and limit theorems in probability theory. Probability The moment problem has applications to probability theory.
The following result is commonly used: by checking Carleman's condition, we know that the standard normal distribution is a determinate measure; thus we have the following form of the central limit theorem: if the moments of a sequence of random variables converge to the moments of the standard normal distribution, then the sequence converges in distribution to the standard normal. See also Carleman's condition Hamburger moment problem Hankel matrix Hausdorff moment problem Moment (mathematics) Stieltjes moment problem Trigonometric moment problem Notes References (translated from the Russian by N. Kemmer) Mathematical analysis Hilbert spaces Probability problems Moment (mathematics) Mathematical problems Real algebraic geometry Optimization in vector spaces
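As a minimal computational sketch of the Hamburger positivity condition described above (illustrative only; the function name, tolerance and test sequences are my own choices, not from the article), one can check whether a truncated sequence of numbers passes the Hankel-matrix test:

```python
import numpy as np

def is_moment_sequence(moments, tol=1e-9):
    """Truncated Hamburger check: the Hankel matrix H[i, j] = m_(i+j)
    built from m_0, ..., m_(2n) must be positive semi-definite."""
    m = np.asarray(moments, dtype=float)
    n = (len(m) - 1) // 2
    H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    return bool(np.all(np.linalg.eigvalsh(H) >= -tol))

# Moments of the standard normal distribution: m_k = (k-1)!! for even k, 0 for odd k.
print(is_moment_sequence([1, 0, 1, 0, 3, 0, 15]))  # True
print(is_moment_sequence([1, 0, -1]))              # False: a second moment cannot be negative
```

Passing this test for every truncation is the positivity condition for the full Hamburger problem; it says nothing about determinacy (uniqueness), which requires extra criteria such as Carleman's condition.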
Moment problem
Physics,Mathematics
779
11,422,090
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD82
In molecular biology, snoRNA U82 (also known as SNORD82 or Z25) is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA. snoRNA U82/Z25 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. snoRNA U82 has been identified in both humans and mice: it is located in the fifth intron of the nucleolin gene in both species. Two additional snoRNAs (C/D box snoRNA U20 and the H/ACA snoRNA U23 ) are also encoded within the introns of the nucleolin gene. U82 is predicted to guide the 2'O-ribose methylation of 18S ribosomal RNA (rRNA) residue A1678. Another, different snoRNA, named U82 has been predicted in the introns of L3 ribosomal protein gene (RPL3) in humans and cows. However, the expression of this snoRNA could not be confirmed by northern blotting or Reverse transcription polymerase chain reaction (RT-PCR) and it should not be confused with this snoRNA located in the nucleolin gene. References External links Small nuclear RNA
Small nucleolar RNA SNORD82
Chemistry
383
243,390
https://en.wikipedia.org/wiki/Propinquity
In social psychology, propinquity (from Latin propinquitas, "nearness") is one of the main factors leading to interpersonal attraction. It refers to the physical or psychological proximity between people. Propinquity can mean physical proximity, a kinship between people, or a similarity in nature between things ("like-attracts-like"). Two people living on the same floor of a building, for example, have a higher propinquity than those living on different floors, just as two people with similar political beliefs possess a higher propinquity than those whose beliefs strongly differ. Propinquity is also one of the factors, set out by Jeremy Bentham, used to measure the amount of (utilitarian) pleasure in a method known as felicific calculus. Propinquity effect The propinquity effect is the tendency for people to form friendships or romantic relationships with those whom they encounter often, forming a bond between subject and friend. Workplace interactions are frequent, and this frequent interaction is often a key reason why close relationships readily form in this type of environment. In other words, relationships tend to form between those who have a high propinquity. It was first theorized by psychologists Leon Festinger, Stanley Schachter, and Kurt Back in what came to be called the Westgate studies conducted at MIT (1950). The propinquity effect is typically represented with an Euler diagram in which U = universe, A = set A, B = set B, and S = similarity; the sets are basically any relevant subject matter about a person, persons, or non-persons, depending on the context. Propinquity can be more than just physical distance. Residents of an apartment building living near one of the building's stairways, for example, tend to have more friends from other floors than those living further from the stairway. The propinquity effect is usually explained by the mere exposure effect, which holds that the more exposure a stimulus gets, the more likeable it becomes. There is one requirement for the mere exposure effect to reinforce the propinquity effect: the exposure must be positive. If a resident has repeatedly negative experiences with a person, then the propinquity effect is far less likely to occur (Norton, Frost, & Ariely, 2007). In a study on interpersonal attraction (Piercey and Piercey, 1972), 23 graduate psychology students, all from the same class, underwent 9 hours of sensitivity training in two groups. Students were given pre- and post-tests to rate their positive and negative attitudes toward each class member. Members of the same sensitivity training group rated each other higher in the post-test than they rated members of the other group in both the pre- and post-test, and members of their own group in the pre-test. The results indicated that the 9 hours of sensitivity training increased the exposure of students in the same group to each other, and thus they became more likeable to each other. Propinquity is one of the effects used to study group dynamics. For example, a British study of immigrant Irish women observed how they interacted with their new environments (Ryan, 2007). This study showed that there were certain people with whom these women became friends much more easily than others, such as classmates, workplace colleagues, and neighbours, as a result of shared interests, common situations, and constant interaction.
For women who still felt out of place when they began life in a new place, giving birth to children allowed for different ties to be formed, ones with other mothers. Having slightly older children participating in activities such as school clubs and teams also allowed social networks to widen, giving the women a stronger support base, emotional or otherwise. Types Various types of propinquity exist: industry/occupational propinquity, in which similar people working in the same field or job tend to be attracted to one another; residential propinquity, in which people living in the same area or within neighborhoods of each other tend to come together; and acquaintance propinquity, a form of proximity in existence when friends tend to have a special bond of interpersonal attraction. Many studies have been performed in assessing various propinquities and their effect on marriage. Virtual propinquity The introduction of instant messaging and video conferencing has reduced the effects of propinquity. Online interactions have facilitated instant and close interactions with people despite a lack of material presence. This allows a notional "virtual propinquity" to work on virtual relationships where people are connected virtually. However, research that came after the development of the internet and email has shown that physical distance is still a powerful predictor of contact, interaction, friendship, and influence. In popular culture William Shakespeare's King Lear, Act 1 Scene 1 Page 5 LEAR: 'Let it be so. Thy truth then be thy dower. For by the sacred radiance of the sun, The mysteries of Hecate and the night, By all the operation of the orbs From whom we do exist and cease to be— Here I disclaim all my paternal care, Propinquity, and property of blood, And as a stranger to my heart and me Hold thee from this for ever. The barbarous Scythian, Or he that makes his generation messes To gorge his appetite, shall to my bosom Be as well neighbored, pitied, and relieved As thou my sometime daughter.' "Love is a Science", a 1959 short story by humorist Max Shulman, features a girl named Zelda Gilroy assuring her science lab tablemate, Dobie Gillis, that he would eventually come to love her through the influence of propinquity, as their similar last names would put them in proximity throughout school. "Love is a Science" was adapted into a 1959 episode of the Shulman-created TV sitcom The Many Loves of Dobie Gillis, featuring Dobie as its main character and Zelda as a semi-regular, and a 1988 made-for-TV movie based on the series, Bring Me the Head of Dobie Gillis, portrayed Dobie and Zelda as being married. "Propinquity (I've Just Begun To Care)" is a song by Mike Nesmith. It was first recorded by Nesmith in 1968 while he was with The Monkees, though this version was not released until the 1990s. The first released version was by the Nitty Gritty Dirt Band on their album Uncle Charlie & His Dog Teddy, and Nesmith released a new version on his solo album Nevada Fighter. On page 478 of Jonathan Franzen's 2010 novel Freedom, Walter attributes his inability to stop having sex with Lalitha to their "daily propinquity". On page 150 in Michael Ondaatje's novel The English Patient, "He said later it was propinquity. Propinquity in the desert. It does that here, he said. He loved the word – the propinquity of water, the propinquity of two or three bodies in a car driving the Sand Sea for six hours." 
In Ian Fleming's 1957 James Bond novel Diamonds Are Forever, Felix Leiter tells Bond "Nothing propinks like propinquity." In William Faulkner's 1936 novel Absalom, Absalom!, Rosa, in explaining to Quentin why she agreed to marry Sutpen, states, "I don't plead propinquity: the fact that I, a woman young and at the age for marrying and in a time when most of the young men whom I would have known ordinarily were dead on lost battlefields, that I lived for two years under the same roof with him." In Ryan North's webcomic Dinosaur Comics, T-Rex discusses propinquity. In the P. G. Wodehouse novel Right Ho, Jeeves, Bertie asks, "What do you call it when two people of opposite sexes are bunged together in close association in a secluded spot meeting each other every day and seeing a lot of each other?" to which Jeeves replies, "Is 'propinquity' the word you wish, sir?" Bertie: "It is. I stake everything on propinquity, Jeeves." In Ernest Thompson Seton's short story "Arnaux: the Chronicle of a Homing Pigeon," published in Animal Heroes (1905): "Pigeon marriages are arranged somewhat like those of mankind. Propinquity is the first thing: force the pair together for a time and let nature take its course." See also References External links Propinquity Effect Human Mate Selection – An Exploration of Assortive Mating Preferences – (has two pages of propinquity studies) Interpersonal relationships
Propinquity
Biology
1,833
19,872,687
https://en.wikipedia.org/wiki/Ferroelectric%20liquid%20crystal%20display
Ferroelectric liquid-crystal display (FLCD) is a display technology based on the ferroelectric properties of chiral smectic liquid crystals as proposed in 1980 by Clark and Lagerwall. Reportedly discovered in 1975, several companies pursued the development of FLCD technologies, notably Canon and Central Research Laboratories (CRL), along with others including Seiko, Sharp, Mitsubishi and GEC. Canon and CRL pursued different technological approaches with regard to the switching of display cells, these providing the individual pixels or subpixels, and the production of intermediate pixel intensities between full transparency and full opacity, these differing approaches being adopted by other companies seeking to develop FLCD products. Development By 1985, Seiko had already demonstrated a colour FLCD panel able to display a 10-inch diagonal still image with a resolution of . By 1993, Canon had delivered the first commercial application of the technology in its EZPS Japanese-language desktop publishing system in the form of a 15-inch monochrome display with a reported cost of around £2,000, and the company demonstrated a 21-inch 64-colour display and a 24-inch 16-greyscale display, both with a resolution and able to show "GUI software with multiple windows". Other applications included projectors, viewfinders and printers. The FLCD did not make many inroads as a direct view display device. Manufacturing of larger FLCDs was problematic making them unable to compete against direct view LCDs based on nematic liquid crystals using the Twisted nematic field effect or In-Plane Switching. Today, the FLCD is used in reflective microdisplays based on Liquid Crystal on Silicon technology. Using ferroelectric liquid crystal (FLC) in FLCoS technology allows a much smaller display area which eliminates the problems of manufacturing larger area FLC displays. Additionally, the dot pitch or pixel pitch of such displays can be as low as 6 μm giving a very high resolution display in a small area. To produce color and grey-scale, time multiplexing is used, exploiting the sub-millisecond switching time of the ferroelectric liquid crystal. These microdisplays find applications in 3D head mounted displays (HMDs), image insertion in surgical microscopes, and electronic viewfinders where direct-view LCDs fail to provide more than 600 ppi resolution. Ferroelectric LCoS also finds commercial uses in Structured illumination for 3D-Metrology and Super-resolution microscopy. Some commercial products use FLCD. High switching speed allows building optical switches and shutters in printer heads. References Display technology Liquid crystal displays
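For scale (an illustrative calculation based on the figures above, not a statement from the article): a pixel pitch of 6 μm corresponds to roughly 25,400 μm per inch ÷ 6 μm per pixel ≈ 4,200 pixels per inch, which is why FLCoS microdisplays can comfortably exceed the roughly 600 ppi quoted as the practical limit for direct-view LCDs.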
Ferroelectric liquid crystal display
Engineering
541
49,504,684
https://en.wikipedia.org/wiki/Torricelli%27s%20experiment
Torricelli's experiment was invented in Pisa in 1643 by the Italian scientist Evangelista Torricelli (1608-1647). The purpose of his experiment was to demonstrate that a vacuum can exist and that the column of mercury in the tube is supported by atmospheric pressure. Context For much of human history, the pressure of gases like air was ignored, denied, or taken for granted, but as early as the 6th century BC, Greek philosopher Anaximenes of Miletus claimed that all things are made of air that is simply changed by varying levels of pressure. He could observe water evaporating and changing to a gas and felt that this applied even to solid matter. More condensed air made colder, heavier objects, and expanded air made lighter, hotter objects. This was akin to how gases become less dense when warmer and more dense when cooler. Aristotle stated in some writings that "nature abhors a vacuum" and also that air has no mass/weight. The popularity of that philosopher kept this the dominant view in Europe for two thousand years. Even Galileo accepted it, believing that the pull of vacuum creates a siphon and that the pull can be overcome if the siphon is high enough. In the 17th century, Evangelista Torricelli conducted experiments with mercury that allowed him to measure the presence of air. He would dip a glass tube, closed at one end, into a bowl of mercury and raise the closed end out of it, keeping the open end submerged. The weight of the mercury would pull it down, leaving a partial vacuum at the far end. This validated his belief that air/gas has mass, creating pressure on things around it. The discovery led Torricelli to conclude that the surrounding air has weight and presses on everything it touches; this test was essentially the first documented pressure gauge. In 1647 Valerianus Magnus published his Demonstratio ocularis, in which he claims to have proved the existence of the vacuum in the court of the king of Poland, Ladislaus IV, in Warsaw by means of an experiment identical to that carried out by Torricelli three years earlier. Three months after Magnus, Blaise Pascal published his Expériences nouvelles touchant le vide, giving details of his first barometric experiments. Pascal went farther than Torricelli, having his brother-in-law try the experiment at different altitudes on a mountain and finding, indeed, that the farther down in the ocean of atmosphere, the higher the pressure. Procedure The experiment uses a simple barometer tube: mercury is poured in until about 75% of the tube is filled, and any air bubbles are removed by inverting the tube several times. The tube is then topped up with clean mercury until it is completely full, and placed inverted in the dish full of mercury. This causes the mercury in the tube to fall until the difference in height between the mercury surface in the dish and the mercury in the tube is about 760 mm. Even when the tube is shaken or tilted, this height difference is unaffected, because it is maintained by the atmospheric pressure. Conclusion Torricelli concluded that the mercury column in the tube is supported by the atmospheric pressure acting on the surface of the mercury in the dish. He also stated that the changes of liquid level from day to day are caused by the variation of atmospheric pressure. The empty space in the tube is called the Torricellian vacuum.
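As a quick consistency check (an illustrative calculation, not part of the original article, using the standard density of mercury ρ ≈ 13,595 kg/m³ and g = 9.80665 m/s²): the hydrostatic pressure of a 760 mm column of mercury is P = ρgh = 13,595 kg/m³ × 9.80665 m/s² × 0.760 m ≈ 101,325 Pa, i.e. one standard atmosphere, which is why 760 mmHg is used to define standard atmospheric pressure, consistent with the unit relationships listed below.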
Unit relationships: 760 mmHg = 1 atm; 1 atm = 1,013 mbar or hPa; 1 mbar or hPa = 0.7502467 mmHg; 1 pascal = 1 newton per square metre (SI unit); 1 hectopascal = 100 pascals. References 1643 in science Science and technology in Italy Physics experiments Pressure
Torricelli's experiment
Physics
752
14,717,810
https://en.wikipedia.org/wiki/Gas%20Turbine%20Research%20Establishment
Gas Turbine Research Establishment (GTRE) is a laboratory of the Defence Research and Development Organisation (DRDO). Located in Bengaluru, its primary function is research and development of aero gas-turbines for military aircraft. As a spin-off effect, GTRE has also been developing marine gas-turbines. It was initially known as GTRC (Gas Turbine Research Centre), created in 1959 in No.4 BRD Air Force Station, Kanpur, Uttar Pradesh. In November 1961 it was brought under DRDO, renamed GTRE and moved to Bengaluru, Karnataka. GTRE has consistently faced criticism for failing to develop an indigenous jet engine for fighter aircraft. Products Principal achievements of Gas Turbine Research Establishment include: Design and development of India's "first centrifugal type 10 kN thrust engine" between 1959 and 1961. Design and development of a "1700K reheat system" for the Orpheus 703 engine to boost its power. The redesigned system was certified in 1973. Successful upgrade of the reheat system of the Orpheus 703 to 2000K. Improvement of the Orpheus 703 engine by replacing "the front subsonic compressor stage" with a "transonic compressor stage" to increase the "basic dry thrust" of the engine. Design and development of a "demonstrator" gas turbine engine—GTX 37-14U—for fighter aircraft. Performance trials commenced in 1977 and the "demonstrator phase" was completed in 1981. The GTX 37-14U was "configured" and "optimized" to build a "low by-pass ratio jet engine" for "multirole performance aircraft". This engine was dubbed GTX 37-14U B. GTX Kaveri The GTX-35VS Kaveri engine was intended to power production models of the HAL Tejas. Defending the program, GTRE mentioned reasons for the delay including: Non-availability of a state-of-the-art wind tunnel facility in India The technology restrictions imposed by the US by placing it on the "entities" list Both hurdles having been cleared, GTRE intended to continue work on the AMCA (future generation fighter aircraft). This program was abandoned in 2014. Kaveri Marine Gas Turbine (KMGT) Kaveri Marine Gas Turbine is a design spin-off from the Kaveri engine, designed for Indian combat aircraft. Using the core of the Kaveri engine, GTRE added a low-pressure compressor and turbine as a gas generator and designed a free power turbine to generate shaft power for maritime applications. The involvement of the Indian Navy in the development and testing of the engine has given a tremendous boost to the programme. The base frame for KMGT was developed by private player Larsen & Toubro (L&T). Ghatak engine The engine for DRDO Ghatak will be a 52-kilonewton dry variant of the Kaveri aerospace engine and will be used in the UCAV (Unmanned Combat Aerial Vehicle). The Government of India has cleared funding of ₹2,650 crore ($394 million) for the project. Manik Engine Small Turbofan Engine (STFE), also known as the Manik engine, is a 4.5 kN thrust turbofan engine developed by GTRE to power the Nirbhay series cruise missile and under-development UAVs, long-range anti-ship and land-attack cruise missile systems. In October 2022, STFE was successfully flight tested. DRDO is currently searching for a private production partner to mass-produce the Manik engine. It is estimated that 300 units will be produced over the course of five years. This amount could be allocated to the GTRE-identified industries. An Expression of Interest (EOI) will first identify two industries to supply three engines each over the course of eighteen months. After that, an RFI for mass production quantities will be issued.
In April 2024, the DRDO-designed Indigenous Technology Cruise Missile (ITCM), which incorporates the Manik engine, was successfully tested. In July 2024, ABI Showatech India Pvt Ltd was awarded the contract to supply the Casting Vane Low-Pressure Turbine (LPNGV) subcomponent of the engine as part of the cruise missile programme. The low pressure turbine is "responsible for extracting energy from the exhaust gases to drive the fan and other compressor stages." The current STFE production plant is located near Thiruvananthapuram International Airport in Kerala for limited series production for testing purposes of the Nirbhay cruise missile. Testing The KMGT was tested on the Marine Gas Turbine test bed, an Indian Navy facility at Vishakhapatnam. The engine has been tested to its potential of 12 MW at ISA SL 35 °C conditions, a requirement of the Navy to propel SNF class ships, such as the Rajput class destroyers. Manufacturing The Ministry of Defence (MoD) has awarded Azad Engineering Limited a contract to serve as a production agency for engines designed by the Gas Turbine Research Establishment. Assembling and manufacturing what is known as an Advanced Turbo Gas Generator (ATGG) engine is the focus of the present long-term contract. This is meant to power various defence applications, such as the gas turbine engine that powers the Indian Army's fleet of infantry combat vehicles (ICVs) and tanks, the marine gas turbine engine (MGTE) for upcoming Indian Navy warships, and the GTX-35VS Kaveri turbofan engine for the Tejas fighter. By early 2026, Azad must begin delivering its first batch of fully integrated engines. Using components including a 4-stage axial flow compressor, an annular combustor, a single-stage axial flow uncooled turbine, and a fixed exit area nozzle, the engine is built in a single-spool turbojet configuration. Azad Engineering will be essential to GTRE as a single-source industry partner. In 2024, discussions began between Safran, a French defence and aerospace company, and DRDO's Aeronautical Development Agency and GTRE for future technology transfer and manufacturing of jet engines for India's 5th generation Advanced Medium Combat Aircraft (AMCA) programme. Industry collaboration For the Combat Aircraft Engine Development Program, PTC Industries Limited, a titanium recycling and aerospace component forging company, took up a developmental contract for essential components on 6 December 2022. GTRE is expanding PTC Industries' capacity to produce vital titanium alloy aero engine and aircraft parts through investment casting – hot isostatic pressing technology. In cooperation with GTRE, a prototype of the Engine Bevel Pinion Housing has already been developed. Jet engine development criticism GTRE has been frequently criticised for its failure to develop an indigenous jet engine for fighter aircraft, a project the laboratory has been working on since 1982. As of 2023, GTRE has not been able to overcome its engine development issues regarding metallurgy for turbine blades and other engine blade technologies, and the lack of a flying testbed and wind tunnel to validate engines above a 90 kilonewton (kN) thrust.
References External links Gas Turbine Research Establishment Gas Turbine Research Establishment (GTRE) Defence Research and Development Organisation laboratories Research institutes in Bengaluru Engineering research institutes Aircraft engine manufacturers of India Gas turbine manufacturers Marine engine manufacturers Research institutes in Lucknow Engine manufacturers of India 1959 establishments in Mysore State
Gas Turbine Research Establishment
Engineering
1,490
57,585,688
https://en.wikipedia.org/wiki/Paul%20Chirik
Paul James Chirik (born June 13, 1973) is an American chemist known for his work in sustainable chemistry using Earth-abundant metals like iron, cobalt, and nickel to surpass the performance of more exotic elements traditionally used in catalysis. He is the Edwards S. Sanford Professor of Chemistry and chair of the chemistry department at Princeton University. Academic career Chirik received his B.S. in chemistry from Virginia Tech, studying organometallic chemistry under the advisement of Joseph Merola. He earned his Ph.D. in 2000 at the California Institute of Technology studying polymerization and hydrometallation chemistry with John E. Bercaw. After his postdoctoral work at the Massachusetts Institute of Technology, Chirik joined the chemistry faculty at Cornell University until 2011, when he was named the Edwards S. Sanford Professor of Chemistry at Princeton University. Research Chirik’s multidisciplinary research seeks to transform traditional catalysis, which relies on exotic metals like platinum and rhodium to drive chemical reactions. Instead, Chirik uses alternative, Earth-abundant metals like iron and cobalt, developing techniques that allow these metals to mimic or surpass the performance of exotics. An important example of this research was published in 2021 in Nature Chemistry, in which Chirik detailed a route to recyclable plastics through a molecule he discovered called oligocyclobutane. This molecule can be “unzipped” back to its original monomer by employing an iron catalyst in a process known as depolymerization. Another major focus of Chirik’s lab is improving the process surrounding iron- and cobalt-based catalysis cross-coupling for carbon-carbon bond formation, an essential technology used by the pharmaceutical industry to develop new therapies. Chirik publishes regularly on this technology, with recent papers on cobalt-catalyzed cross-coupling and the addition of halides to a reduced-iron pincer complex to create an improved pathway for a desired end product. In 2022, Chirik was among the first chemists in the nation to receive a Gordon and Betty Moore Foundation exploration-phase grant in green chemistry based on his proposal for iron catalysts for a biorenewable hydrocarbon future. He has been the editor-in-chief of the journal Organometallics since 2015. Awards and honors Named Fellow in the American Association for the Advancement of Science, 2023 Gabor Somorjai Award for Creative Research in Catalysis, 2021 The Linus Pauling Award, 2020 Paul N. Rylander Award, 2020 The Arthur C. Cope Scholar Award, American Chemical Society, 2009 David and Lucile Packard Fellowship in Science and Engineering, 2004 Editor-in-Chief, Organometallics, 2015 to present References Living people Virginia Tech alumni California Institute of Technology alumni American organic chemists 21st-century American chemists 1973 births Scientists from Philadelphia Princeton University faculty Cornell University faculty
Paul Chirik
Chemistry
596
78,439,040
https://en.wikipedia.org/wiki/Darcula
Darcula is a phishing-as-a-service (PhaaS) Chinese-language platform which has been used against organizations (government, airlines) and services (postal, financial) in over 100 countries. Darcula offers cybercriminals more than 20,000 counterfeit domains (to spoof brands) and over 200 templates. Darcula uses iMessage and RCS (Rich Communication Services) to steal credentials from Android and iPhone users. References Cybercrime Mobile malware
Darcula
Technology
105
48,870,697
https://en.wikipedia.org/wiki/Gamma-object
In mathematics, a Γ-object of a pointed category C is a contravariant functor from Γ to C. The basic example is Segal's so-called Γ-space, which may be thought of as a generalization of simplicial abelian group (or simplicial abelian monoid). More precisely, one can define a Gamma space as an O-monoid object in an infinity-category. The notion plays a role in the generalization of algebraic K-theory that replaces an abelian group by something higher. Notes References Category theory
Gamma-object
Mathematics
121
27,176,781
https://en.wikipedia.org/wiki/Magnetohydrodynamic%20turbulence
Magnetohydrodynamic turbulence concerns the chaotic regimes of magnetofluid flow at high Reynolds number. Magnetohydrodynamics (MHD) deals with a quasi-neutral fluid with very high conductivity. The fluid approximation implies that the focus is on macro length and time scales which are much larger than the collision length and collision time respectively. Incompressible MHD equations The incompressible MHD equations for constant mass density, ρ = 1, are ∂u/∂t + (u·∇)u = −∇p + (B·∇)B + ν∇²u; ∂B/∂t + (u·∇)B = (B·∇)u + η∇²B; ∇·u = 0 (with ∇·B = 0), where u represents the velocity, B represents the magnetic field, p represents the total pressure (thermal + magnetic) field, ν is the kinematic viscosity and η represents the magnetic diffusivity. The third equation is the incompressibility condition. In the above equations, the magnetic field is in Alfvén units (same as velocity units). The total magnetic field can be split into two parts: B = B₀ + b (mean + fluctuations). The above equations in terms of the Elsässer variables (z± = u ± b) are ∂z±/∂t ∓ (B₀·∇)z± + (z∓·∇)z± = −∇p + ν₊∇²z± + ν₋∇²z∓, where ν± = (ν ± η)/2. Nonlinear interactions occur between the counter-propagating Alfvénic fluctuations z+ and z−. The important nondimensional parameters for MHD are the Reynolds number Re = UL/ν, the magnetic Reynolds number Rm = UL/η, and the magnetic Prandtl number Pm = ν/η. The magnetic Prandtl number is an important property of the fluid. Liquid metals have small magnetic Prandtl numbers; for example, liquid sodium's Pm is around 10⁻⁵. But plasmas have large Pm. The Reynolds number is the ratio of the nonlinear term of the Navier–Stokes equation to the viscous term, while the magnetic Reynolds number is the ratio of the nonlinear term to the diffusive term of the induction equation. In many practical situations, the Reynolds number of the flow is quite large. For such flows typically the velocity and the magnetic fields are random. Such flows are said to exhibit MHD turbulence. Note that the magnetic Reynolds number need not be large for MHD turbulence; it does, however, play an important role in the dynamo (magnetic field generation) problem. The mean magnetic field plays an important role in MHD turbulence: for example, it can make the turbulence anisotropic, or suppress the turbulence by decreasing the energy cascade. The earlier MHD turbulence models assumed isotropy of turbulence, while the later models have studied anisotropic aspects. The following discussion summarizes these models. More discussion of MHD turbulence can be found in Biskamp, Verma, and Galtier. Isotropic models Iroshnikov and Kraichnan formulated the first phenomenological theory of MHD turbulence. They argued that in the presence of a strong mean magnetic field, the z+ and z− wavepackets travel in opposite directions with the phase velocity B₀ (the Alfvén speed), and interact weakly. The relevant time scale is the Alfvén time (k B₀)⁻¹. As a result the energy spectrum is E(k) ≈ A (Π B₀)^(1/2) k^(−3/2), where Π is the energy cascade rate and A is a constant. Later Dobrowolny et al. derived the following generalized formulas for the cascade rates of the z± variables: Π± ≈ τ±(k) k² (z+)² (z−)², where τ±(k) are the interaction time scales of the z± variables at scale 1/k. Iroshnikov and Kraichnan's phenomenology follows once we choose the Alfvén time, τ±(k) = (k B₀)⁻¹. Marsch chose the nonlinear time scale τ±(k) = (k z∓)⁻¹ as the interaction time scale for the eddies and derived a Kolmogorov-like energy spectrum for the Elsässer variables: E±(k) = K± (Π±)^(4/3) (Π∓)^(−2/3) k^(−5/3), where Π+ and Π− are the energy cascade rates of z+ and z− respectively, and K± are constants. Matthaeus and Zhou attempted to combine the above two time scales by postulating the interaction time to be the harmonic mean of the Alfvén time and the nonlinear time. The main difference between the two competing phenomenologies (−3/2 and −5/3) is the chosen time scales for the interaction time.
The main underlying assumption is that Iroshnikov and Kraichnan's phenomenology should work for a strong mean magnetic field, whereas Marsch's phenomenology should work when the fluctuations dominate the mean magnetic field (strong turbulence). However, as we will discuss below, the solar wind observations and numerical simulations tend to favour the −5/3 energy spectrum even when the mean magnetic field is stronger than the fluctuations. This issue was resolved by Verma using renormalization group analysis by showing that the Alfvénic fluctuations are affected by a scale-dependent "local mean magnetic field". The local mean magnetic field scales as B_0(k) \propto k^{-1/3}, substitution of which in Dobrowolny's equation yields Kolmogorov's energy spectrum for MHD turbulence. Renormalization group analysis has also been performed for computing the renormalized viscosity and resistivity. It was shown that these diffusive quantities scale as \nu(k), \eta(k) \propto k^{-4/3}, which again yields energy spectra consistent with the Kolmogorov-like model for MHD turbulence. The above renormalization group calculation has been performed for both zero and nonzero cross helicity. The above phenomenologies assume isotropic turbulence, which is not the case in the presence of a mean magnetic field. The mean magnetic field typically suppresses the energy cascade along the direction of the mean magnetic field. Anisotropic models The mean magnetic field makes turbulence anisotropic. This aspect has been studied in the last two decades. In the limit of fluctuations much weaker than the mean magnetic field (z^\pm \ll B_0), Galtier et al. showed using kinetic equations that E(k_\perp, k_\parallel) \propto k_\perp^{-2}, where k_\parallel and k_\perp are the components of the wavenumber parallel and perpendicular to the mean magnetic field. The above limit is called the weak turbulence limit. Under the strong turbulence limit, z^\pm \sim B_0, Goldreich and Sridhar argue that the Alfvén and nonlinear time scales balance, k_\parallel B_0 \sim k_\perp z_{k_\perp} ("critical balanced state"), which implies that k_\parallel \propto k_\perp^{2/3} and E(k_\perp) \propto k_\perp^{-5/3}. The above anisotropic turbulence phenomenology has been extended to large cross helicity MHD. Solar wind observations Solar wind plasma is in a turbulent state. Researchers have calculated the energy spectra of the solar wind plasma from the data collected from spacecraft. The kinetic and magnetic energy spectra, as well as the spectra of the Elsässer variables, are closer to k^{-5/3} than to k^{-3/2}, thus favoring the Kolmogorov-like phenomenology for MHD turbulence. The interplanetary and interstellar electron density fluctuations also provide a window for investigating MHD turbulence. Numerical simulations The theoretical models discussed above are tested using high resolution direct numerical simulation (DNS). A number of recent simulations report spectral indices closer to 5/3. Others report spectral indices near 3/2. The power-law regime typically spans less than a decade. Since 5/3 and 3/2 are quite close numerically, it is quite difficult to ascertain the validity of MHD turbulence models from the energy spectra. Energy fluxes can be more reliable quantities to validate MHD turbulence models. When one Elsässer field dominates the other (z^+ \gg z^-, a high cross helicity or imbalanced MHD flow), the energy flux predictions of the Kraichnan and Iroshnikov model are very different from those of the Kolmogorov-like model. It has been shown using DNS that the fluxes computed from the numerical simulations are in better agreement with the Kolmogorov-like model than with the Kraichnan and Iroshnikov model. Anisotropic aspects of MHD turbulence have also been studied using numerical simulations. The predictions of Goldreich and Sridhar (the critical-balance scaling k_\parallel \propto k_\perp^{2/3}) have been verified in many simulations. 
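As a rough illustration of how the −5/3 versus −3/2 question is probed in practice, the hedged Python sketch below generates a synthetic one-dimensional field with a prescribed Kolmogorov-like spectrum and then fits the spectral index from its Fourier energy spectrum, the same kind of log-log fit applied to simulation or solar-wind data. The resolution, inertial-range bounds, and target slope are illustrative assumptions, not values taken from the literature.

```python
# Hedged sketch (not from the article): fitting a spectral index from a
# one-dimensional energy spectrum, as done for simulation or solar-wind data.
import numpy as np

rng = np.random.default_rng(0)
n = 2**14
k = np.fft.rfftfreq(n, d=1.0)[1:]            # positive wavenumbers

# Build a synthetic field with a prescribed -5/3 spectrum and random phases.
amplitude = k**(-5.0 / 6.0)                  # E(k) ~ |u_k|^2 ~ k^(-5/3)
phases = rng.uniform(0, 2 * np.pi, k.size)
u_hat = np.concatenate(([0.0], amplitude * np.exp(1j * phases)))
u = np.fft.irfft(u_hat, n)

# One-dimensional energy spectrum E(k) = |u_k|^2 (up to normalization).
E = np.abs(np.fft.rfft(u)[1:])**2

# Fit the slope over an assumed inertial range.
mask = (k > 1e-3) & (k < 1e-1)
slope, _ = np.polyfit(np.log(k[mask]), np.log(E[mask]), 1)
print(f"fitted spectral index ~ {slope:.2f} (target -5/3 = {-5/3:.2f})")
```

In practice the difficulty noted in the article is visible here: over a short inertial range, fits this close to −3/2 and −5/3 are hard to distinguish from one another.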
Energy transfer Energy transfer among various scales, and between the velocity and magnetic fields, is an important problem in MHD turbulence. These quantities have been computed both theoretically and numerically. These calculations show a significant energy transfer from the large scale velocity field to the large scale magnetic field. Also, the cascade of magnetic energy is typically forward. These results have a critical bearing on the dynamo problem. There are many open challenges in this field that hopefully will be resolved in the near future with the help of numerical simulations, theoretical modelling, experiments, and observations (e.g., solar wind). See also Magnetohydrodynamics Turbulence Alfvén wave Solar dynamo Reynolds number Navier–Stokes equations Computational magnetohydrodynamics Computational fluid dynamics Solar wind Magnetic flow meter Ionic liquid References Magnetohydrodynamics Turbulence
Magnetohydrodynamic turbulence
Chemistry
1,563
64,221,731
https://en.wikipedia.org/wiki/Kirkpatrick%E2%80%93Reisch%20sort
Kirkpatrick–Reisch sorting is a fast sorting algorithm for items with limited-size integer keys. It is notable for having an asymptotic time complexity that is better than radix sort. References Sorting algorithms
Kirkpatrick–Reisch sort
Mathematics,Technology
47
74,743,071
https://en.wikipedia.org/wiki/Y%CE%94-%20and%20%CE%94Y-transformation
In graph theory, ΔY- and YΔ-transformations (also written delta-wye and wye-delta) are a pair of operations on graphs. A ΔY-transformation replaces a triangle by a vertex of degree three; and conversely, a YΔ-transformation replaces a vertex of degree three by a triangle. The names for the operations derive from the shapes of the involved subgraphs, which look respectively like the letter Y and the Greek capital letter Δ. A YΔ-transformation may create parallel edges, even if applied to a simple graph. For this reason ΔY- and YΔ-transformations are most naturally considered as operations on multigraphs. On multigraphs both operations preserve the edge count and are exact inverses of each other. In the context of simple graphs it is common to combine a YΔ-transformation with a subsequent normalization step that reduces parallel edges to a single edge. This may no longer preserve the number of edges, nor be exactly reversible via a ΔY-transformation. Formal definition Let G be a graph (potentially a multigraph). Suppose G contains a triangle Δ with vertices x1, x2, x3 and edges x1x2, x2x3, x1x3. A ΔY-transformation of G at Δ deletes the edges x1x2, x2x3, x1x3 and adds a new vertex y adjacent to each of x1, x2, x3. Conversely, if y is a vertex of degree three with neighbors x1, x2, x3, then a YΔ-transformation of G at y deletes y and adds three new edges e1, e2, e3, where ei connects the two vertices xj and xk with {i, j, k} = {1, 2, 3}. If the resulting graph is required to be a simple graph, then any resulting parallel edges are replaced by a single edge. Relevance ΔY- and YΔ-transformations are a tool both in pure graph theory as well as applications. Both operations preserve a number of natural topological properties of graphs. For example, applying a YΔ-transformation to a 3-vertex of a planar graph, or a ΔY-transformation to a triangular face of a planar graph, results again in a planar graph. This was used in the original proof of Steinitz's theorem, showing that every 3-connected planar graph is the edge graph of a polyhedron. Applying ΔY- and YΔ-transformations to a linkless graph results again in a linkless graph. This fact is used to compactly describe the forbidden minors of the associated graph classes as ΔY-families generated from a small number of graphs (see the section on ΔY-families below). A particularly relevant application exists in electrical engineering in the study of three-phase power systems (see Y-Δ transform (electrical engineering)). In this context they are also known as star-triangle transformations and are a special case of star-mesh transformations. ΔY-families The ΔY-family generated by a graph G is the smallest family of graphs that contains G and is closed under YΔ- and ΔY-transformations. Equivalently, it is constructed from G by recursively applying these transformations until no new graph is generated. If G is a finite graph it generates a finite ΔY-family, all members of which have the same edge count. The ΔY-family generated by several graphs is the smallest family that contains all these graphs and is closed under YΔ- and ΔY-transformation. Some notable families are generated in this way: the Petersen family is generated from the complete graph K6. It consists of the seven forbidden minors for the class of linkless graphs. the Heawood family is generated from K7 and K3,3,1,1. It consists of 78 graphs, each of which is a forbidden minor for the class of 4-flat graphs. 
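For readers who want to experiment with the formal definition above, here is a hedged Python sketch using the networkx library; the function and vertex names are illustrative assumptions, not part of the article. It shows that, on a multigraph, the two operations preserve the edge count and undo each other.

```python
# Hedged sketch: ΔY- and YΔ-transformations on a networkx MultiGraph.
import networkx as nx

def delta_to_y(G, x1, x2, x3, new_vertex):
    """Replace the triangle x1 x2 x3 by a new degree-3 vertex."""
    G.remove_edge(x1, x2)
    G.remove_edge(x2, x3)
    G.remove_edge(x1, x3)
    G.add_edges_from([(new_vertex, x1), (new_vertex, x2), (new_vertex, x3)])

def y_to_delta(G, y):
    """Replace a degree-3 vertex y by a triangle on its neighbours."""
    x1, x2, x3 = list(G.neighbors(y))   # assumes y has exactly 3 distinct neighbours
    G.remove_node(y)
    G.add_edges_from([(x1, x2), (x2, x3), (x1, x3)])

G = nx.MultiGraph()
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c")])   # a triangle
delta_to_y(G, "a", "b", "c", "v")
print(sorted(G.edges()))     # three edges joining v to a, b, c
y_to_delta(G, "v")
print(sorted(G.edges()))     # the triangle is restored; edge count unchanged
```

Working on a MultiGraph rather than a simple Graph reflects the point made above: a YΔ-transformation may create parallel edges, and only the multigraph versions of the operations are exact inverses.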
YΔY-reducible graphs A graph is YΔY-reducible if it can be reduced to a single vertex by a sequence of ΔY- or YΔ-transformations and the following normalization steps: removing a loop, removing a parallel edge, removing a vertex of degree one, smoothing out a vertex of degree two, i.e., replacing it by a single edge between its two former neighbors. The YΔY-reducible graphs form a minor-closed family and therefore have a forbidden minor characterization (by the Robertson–Seymour theorem). The graphs of the Petersen family constitute some (but not all) of the excluded minors. In fact, more than 68 billion excluded minors are already known. The class of YΔY-reducible graphs lies between the classes of planar graphs and linkless graphs: each planar graph is YΔY-reducible, while each YΔY-reducible graph is linkless. Both inclusions are strict: there are non-planar graphs that are YΔY-reducible, and there are linkless graphs that are not YΔY-reducible. References Graph theory Graph operations
YΔ- and ΔY-transformation
Mathematics
943
1,296,864
https://en.wikipedia.org/wiki/Heart%20rate%20monitor
A heart rate monitor (HRM) is a personal monitoring device that allows one to measure/display heart rate in real time or record the heart rate for later study. It is largely used to gather heart rate data while performing various types of physical exercise. Measuring electrical heart information is referred to as electrocardiography (ECG or EKG). Medical heart rate monitoring used in hospitals is usually wired, and multiple sensors are usually used. Portable medical units are referred to as a Holter monitor. Consumer heart rate monitors are designed for everyday use and do not use wires to connect. History Early models consisted of a monitoring box with a set of electrode leads which attached to the chest. The first wireless EKG heart rate monitor was invented in 1977 by Polar Electro as a training aid for the Finnish National Cross Country Ski team. As "intensity training" became a popular concept in athletic circles in the mid-80s, retail sales of wireless personal heart monitors started in 1983. Technologies Modern heart rate monitors commonly use one of two different methods to record heart signals (electrical and optical). Both types of signals can provide the same basic heart rate data, using fully automated algorithms to measure heart rate, such as the Pan-Tompkins algorithm. ECG (Electrocardiography) sensors measure the bio-potential generated by electrical signals that control the expansion and contraction of heart chambers, and are typically implemented in medical devices. PPG (Photoplethysmography) sensors use light-based technology to measure the blood volume controlled by the heart's pumping action. Electrical The electrical monitors consist of two elements: a monitor/transmitter, which is worn on a chest strap, and a receiver. When a heartbeat is detected, a radio signal is transmitted, which the receiver uses to display/determine the current heart rate. This signal can be a simple radio pulse or a unique coded signal from the chest strap (such as Bluetooth, ANT, or other low-power radio links). Newer technology prevents one user's receiver from using signals from other nearby transmitters (known as cross-talk interference) or eavesdropping. Note that the older Polar 5.1 kHz radio transmission technology is usable underwater. Both Bluetooth and ANT+ use the 2.4 GHz radio band, whose signals do not propagate underwater. Optical More recent devices use optics to measure heart rate by shining light from an LED through the skin and measuring how it scatters off blood vessels. In addition to measuring the heart rate, some devices using this technology are able to measure blood oxygen saturation (SpO2). Some recent optical sensors can also transmit data as mentioned above. Newer devices such as cell phones or watches can be used to display and/or collect the information. Some devices can simultaneously monitor heart rate, oxygen saturation, and other parameters. These may include sensors such as accelerometers, gyroscopes, and GPS to detect speed, location and distance. In recent years it has become common for smartwatches to include heart rate monitors, and their popularity has grown greatly. Smartwatches, smart bands and cell phones often use PPG sensors. 
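As a hedged illustration of the fully automated rate estimation mentioned above (far simpler than the Pan-Tompkins algorithm or any vendor's proprietary method), the Python sketch below counts peaks in a synthetic pulse waveform to estimate beats per minute. The sampling rate, signal shape, and thresholds are assumptions made for the example, not values from any real device.

```python
# Hedged sketch: naive peak-counting heart rate estimate from a sampled waveform.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)                 # 10 s of data
true_bpm = 72
signal = np.sin(2 * np.pi * (true_bpm / 60) * t) + 0.1 * np.random.randn(t.size)

# One beat per dominant peak; require peaks at least 0.4 s apart (< 150 bpm).
peaks, _ = find_peaks(signal, height=0.5, distance=int(0.4 * fs))
duration_min = (t[-1] - t[0]) / 60.0
print(f"estimated heart rate ~ {len(peaks) / duration_min:.0f} bpm")
```

Real monitors must additionally cope with motion artifacts, baseline wander, and irregular rhythms, which is why dedicated algorithms and per-vendor signal processing are used in practice.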
Fitness metrics Garmin (Venu Sq 2 and Lily*), Polar Electro (Polar H9, Polar H10, and Polar Verity Sense), Suunto, Samsung Galaxy Watch (Galaxy Watch 5 and Galaxy Watch 6*), Google (Pixel Watch 2*), Spade and Company, Vital Fitness Tracker**, Apple Watch (Series 7**, Series 9*, Apple Watch SE*, Apple Watch Ultra 2*), Mobvoi (TicWatch Pro 5*) and Fitbit (Versa 3** and Versa 4*) are vendors selling consumer heart rate products. Most companies use their own proprietary heart rate algorithms. Accuracy The newer, wrist based heart rate monitors have achieved almost identical levels of accuracy as their chest strap counterparts with independent tests showing up to 95% accuracy, but sometimes more than 30% error can persist for several minutes. Optical devices can be less accurate when used during vigorous activity, or when used underwater. Currently, heart rate variability is less available on optical devices. Apple introduced HRV data collection to the Apple Watch devices in 2018. Fitbit started offering HRV monitoring on their devices starting from the Fitbit Sense, released in 2020. See also Activity tracker Apple Watch E-textiles eHealth GPS watch Pedometer References External links Visualization of ECG recordings using Python program ECG-pyview A simple Python program visualizing raw data produced by chest strap based ECG sportesters. Enables to inspect ECG recordings having lengths up to days. (Open source, non-commercial use, use matplotlib.) A demo video. Biomedical engineering Diagnostic cardiology Exercise equipment Finnish inventions Medical monitoring equipment Physiological instruments
Heart rate monitor
Technology,Engineering,Biology
978
2,938,583
https://en.wikipedia.org/wiki/List%20of%20auto%20parts
This is a list of auto parts, which are manufactured components of automobiles. This list reflects both fossil-fueled cars (using internal combustion engines) and electric vehicles; the list is not exhaustive. Many of these parts are also used on other motor vehicles such as trucks and buses. Car body and main parts Body components, including trim Doors Windows Low voltage/auxiliary electrical system and electronics Audio/video devices Cameras Low voltage electrical supply system Gauges and meters Ignition system Lighting and signaling system Sensors Starting system Electrical switches Wiring harnesses Miscellaneous Interior Floor components and parts Carpet and rubber and other floor material Center console (front and rear) Other components Roll cage or Exo cage Dash Panels Car seat Arm Rest Bench seat Bucket seat Children and baby car seat Fastener Headrest Seat belt Seat bracket Seat cover Seat track Other seat components Back seat Front seat Power-train and chassis Braking system Electrified powertrain components Engine components and parts Engine cooling system Engine oil systems Exhaust system Fuel supply system Suspension and steering systems Transmission system Miscellaneous auto parts Air conditioning system (A/C) Automobile air conditioning A/C Clutch A/C Compressor A/C Condenser A/C Hose high pressure A/C Kit A/C Relay A/C Valve A/C Expansion Valve A/C Low-pressure Valve A/C Schroeder Valve A/C Inner Plate A/C Cooler A/C Evaporator A/C Suction Hose Pipe A/C Discharge Hose Pipe A/C Gas Receiver A/C Condenser Filter A/C Cabin Filter (Pollen Filter) Bearings Grooved ball bearing Needle bearing Roller bearing Sleeve bearing Wheel bearing Hose Fuel vapour hose Reinforced hose (high-pressure hose) Non-reinforced hose Radiator hose Other miscellaneous parts Logo Adhesive tape and foil Air bag Bolt cap License plate bracket Cables Speedometer cable Cotter pin Dashboard Center console Glove compartment Drag link Dynamic seal Fastener Gasket: Flat, moulded, profiled Hood and trunk release cable Horn and trumpet horn Injection-molded parts Instrument cluster Label Mirror Phone Mount Name plate Nut Flange nut Hex nut O-ring Paint Rivet Rubber (extruded and molded) Screw Shim Sun visor Washer See also 42-volt electrical system Fuel economy in automobiles Spare parts management Electric Car References Parts Auto
List of auto parts
Technology
466
1,267,900
https://en.wikipedia.org/wiki/Lift%20hill
A lift hill, or chain hill, is an upward-sloping section of track on a roller coaster on which the roller coaster train is mechanically lifted to an elevated point or peak in the track. Upon reaching the peak, the train is then propelled from the peak by gravity and is usually allowed to coast throughout the rest of the roller coaster ride's circuit on its own momentum, including most or all of the remaining uphill sections. The initial upward-sloping section of a roller coaster track is usually a lift hill, as the train typically begins a ride with little speed, though some coasters have raised stations that permit an initial drop without a lift hill. Although uncommon, some tracks also contain multiple lift hills. Lift hills usually propel the train to the top of the ride via one of two methods: a chain lift involving a long, continuous chain which trains hook on to and are carried to the top; or a drive tire system in which multiple motorized tires (known as friction wheels) push the train upwards. A typical chain lift consists of a heavy piece of metal called a chain dog, which is mounted onto the underside of one of the cars which make up the train. This is in place to line up with the chain on the lift hill. The chain travels through a steel trough, and is normally powered by one or more motors which are positioned under the lift hill. Chain dogs underneath each train are engaged by the chain and the train is pulled up the lift. Anti-rollback dogs engage a rack (ratcheted track) alongside the chain to prevent the train from descending the lift hill. At the crest of the lift, the chain wraps around a gear wheel where it begins its return to the bottom of the lift; the train is continually pulled along until gravity takes over and it accelerates downhill. The spring-loaded chain and anti-rollback dogs will disengage themselves as this occurs. Intamin cable lift The Intamin cable lift is a type of lift mechanism that was first used on Millennium Force at Cedar Point in Sandusky, Ohio. This type of lift has also been used for Kings Dominion's Pantherian, Holiday Park's Expedition GeForce, Walibi Holland's Goliath, Djurs Sommerland's Piraten (Europe's only "Mega-Lite"-model coaster by Intamin), Tokyo Dome City's Thunder Dolphin, Hersheypark's Skyrush, Flying Aces at Ferrari World, and Altair at Cinecittà World. Currently, there are only two wooden roller coasters that utilize a cable lift hill: El Toro at Six Flags Great Adventure and T Express at Everland. The cable lift utilizes a cable that is attached to a catch car that moves up and down the lift hill in a separate channel between the track rails. On several coasters the catch car rolls into the station and latches to the front cars of the train to carry it up the lift hill. This requires the lift hill to be positioned directly in front of the station. El Toro was the first coaster to incorporate a turn between the station and the cable lift hill and was the first (and so far only) of this type to engage the catch car while the train is moving. Once the train engages the catch car, the speed is increased and the train is quickly pulled to top of the lift. Because a cable is much lighter than a chain, cable lifts are much faster than chain lifts. A cable also requires far less maintenance than a chain. Another advantage to park guests is that a cable lift is very quiet, partly because the main drive winch is located directly beneath the top of the lift, a location which will normally be relatively far from guest-accessible areas. 
Ferris wheel lift The Ferris wheel lift is a type of lift based on the rotating circular design of a ferris wheel. Created by Premier Rides, it existed on 'Round About' (formerly Maximum RPM) which operated at Freestyle Music Park in Myrtle Beach, South Carolina prior to being dismantled and moved to a park in Vietnam only to never operate and was later dismantled again. It uses a Ferris Wheel like motion to lift the cars to the top, as on a Ferris Wheel. The cars are then released onto the track. Elevator lift The elevator lift is typically used on a single car or a short, double-car train. The vehicle moves into position on a piece of track that is then lifted vertically, along with the vehicle, operating very similar to a passenger elevator. Several of these systems use a single shaft and a second piece of track in the opposite position serves as the counterweight. With the single shaft the rail may curve to the left or right as the two tracks pass each other at the halfway point. The first coaster to use an elevator system with a counterweight was Batflyer at Lightwater Valley. It is believed that those same designers then founded Caripro, which then constructed nine vertical lift suspended coasters between 1997 and 2001. The Mack Rides-built Matterhorn Blitz at Europa Park was the first to use a two-track system with a single shaft. Friction wheel lift A friction wheel lift is a type of lift mechanism in which two wheels are placed in either a horizontal or a vertical position. These are commonly used for brake runs, lifts, storage and more. The train has a small vertical lip, where the two friction wheels meet at each side. The wheels pull the train up slowly, while making a jet-like noise. An anti-rollback system is not needed, as the wheels are tight against the lip. Tilt lift/thrill lift section A tilt lift is a new way to elevate coasters. The tilt lift is essentially an elevator lift, but the elevator lift rotates 90 degrees so that the train is now vertical, with the nose of the train facing the ground. This design has not been made yet; the only places where this occurs are in the video games RollerCoaster Tycoon 3, Thrillville Off the Rails and Coaster Crazy. However, there are coaster designs that use the tilting aspect of this lift already. The first operating tilt coaster in the world is Gravity Max at Lihpao Land in Taiwan. The coaster was built by Vekoma. In this coaster, after going up a chain hill, the train is held on a horizontal section of track, which then tilts forwards, to become a vertical section, which then leads into a vertical drop accelerated by gravity. The Chinese company Golden Horse has made several unofficial recreations, each featuring a less than vertical drop and significantly different track elements. Anti-rollback device The familiar "click-clack" sound that occurs as a roller coaster train ascends the lift hill is not caused by the chain itself. The cause for this noise is actually a safety device used on lift hills—the anti-rollback device. The anti-rollback device is a standard safety feature, typically consisting of a continuous, saw-toothed, section of metal, forming a linear ratchet. Roller coaster trains are fitted with anti-rollback "dogs," essentially heavy-duty pieces of metal that fall and rest in each groove of the anti-rollback device on the track as the trains ascend the lift-hill. 
This makes the "clicking" sound and allows the train to go upwards only, effectively preventing the train from rolling back down the hill should it ever encounter a power failure or broken chain. This feature was derived from the similar feature originally used on the Mauch Chunk Switchback Railway in Pennsylvania, starting in 1846. Under the power of a stationary steam engine, railway cars were drawn up two uphill planes that had two slightly different early forms of this anti-rollback device. The entire concept of the modern roller coaster was also initially inspired by this railroad. References Roller coaster elements Roller coaster technology de:Achterbahnelemente#Lifthill
Lift hill
Technology
1,607
7,164,750
https://en.wikipedia.org/wiki/Dimethyl%20carbonate
Dimethyl carbonate (DMC) is an organic compound with the formula OC(OCH3)2. It is a colourless, flammable liquid. It is classified as a carbonate ester. This compound has found use as a methylating agent and as a co-solvent in lithium-ion batteries. Notably, dimethyl carbonate is a weak methylating agent, and is not considered a carcinogen. Instead, dimethyl carbonate is often considered to be a green reagent, and it is exempt from the restrictions placed on most volatile organic compounds (VOCs) in the United States. Production World production in 1997 was estimated at 1000 barrels a day. Production of dimethyl carbonate worldwide is limited to Asia, the Middle East, and Europe. Dimethyl carbonate is traditionally prepared by the reaction of phosgene and methanol. Methyl chloroformate is produced as an intermediate: COCl2 + CH3OH → CH3OCOCl + HCl CH3OCOCl + CH3OH → CH3OCO2CH3 + HCl This synthesis route has been largely replaced by oxidative carbonylation. In this process, carbon monoxide and an oxidizer provide the equivalent of CO2+: CO + 1/2 O2 + 2 CH3OH → (CH3O)2CO + H2O It can also be produced industrially by transesterification of ethylene carbonate or propylene carbonate with methanol, which also affords respectively ethylene glycol or propylene glycol. This route is complicated by the methanol-DMC azeotrope, which requires azeotropic distillation or other techniques. Reactions and potential applications Methylating agent Dimethyl carbonate methylates anilines, carboxylic acids, and phenols, albeit usually slowly. Sometimes these reactions require the use of an autoclave. Dimethyl carbonate's main benefit over other methylating reagents such as iodomethane and dimethyl sulfate is its low toxicity. Additionally, it is biodegradable. Unfortunately, it is a relatively weak methylating agent compared to these traditional reagents. Solvent In the US, dimethyl carbonate was exempted under the definition of volatile organic compounds (VOCs) by the U.S. EPA in 2009. Due to its classification as VOC exempt, dimethyl carbonate has grown in popularity and applications as a replacement for methyl ethyl ketone (MEK) and parachlorobenzotrifluoride, as well as tert-butyl acetate until it too was exempted. Dimethyl carbonate has an ester- or alcohol-like odor, which is more favorable to users than that of most hydrocarbon solvents it replaces. Dimethyl carbonate has an evaporation rate of 3.22 (butyl acetate = 1.0), which is slightly slower than MEK (3.8) and ethyl acetate (4.1), and faster than toluene (2.0) and isopropanol (1.7). Dimethyl carbonate has a solubility profile similar to that of common glycol ethers, meaning dimethyl carbonate can dissolve most common coating resins except perhaps rubber-based resins. Its Hildebrand solubility parameter is 20.3 MPa^1/2 and its Hansen solubility parameters are: dispersion = 15.5, polar = 3.9, H bonding = 9.7 (all in MPa^1/2). Dimethyl carbonate is partially soluble in water, up to 13%; however, it is hydrolyzed in water-based systems over time to methanol and CO2 unless properly buffered. Dimethyl carbonate can freeze at the same temperatures as water; it can be thawed out with no loss of properties to itself or to coatings based on dimethyl carbonate. Intermediate in polycarbonate synthesis A large captive use of dimethyl carbonate is for the production of diphenyl carbonate through transesterification with phenol. 
Diphenyl carbonate is a widely used raw material for the synthesis of bisphenol-A-polycarbonate in a melt polycondensation process, the resulting product being recyclable by reversing the process and transesterifying the polycarbonate with phenol to yield diphenyl carbonate and bisphenol A. Alternative fuel additive There is also interest in using this compound as a fuel oxygenate additive. In lithium-ion and lithium-metal batteries Similar to ethylene carbonate, dimethyl carbonate forms an electronically-insulating Li+-conducting film at negative electrode potentials. However, the film in dry DMC solutions is not as effective in passivating the negative electrode as the film in wet solutions. For this reason dimethyl carbonate is rarely used in lithium batteries without a co-solvent. Safety DMC is a flammable liquid with a flash point of 17 °C (63 °F), which limits its use in consumer and indoor applications. DMC is still safer than acetone, methyl acetate and methyl ethyl ketone from a flammability point of view. The National Center for Sustainable Transportation recommends limiting exposure by inhalation to less than 100 ppm over an 8-hour work day, which is similar to that of a number of common industrial solvents (toluene, methyl ethyl ketone). Workers should wear protective organic vapor respirators when using DMC indoors or in other conditions where concentrations exceed the REL. DMC is metabolized by the body to methanol and carbon dioxide, so accidental ingestion should be treated in the same manner as methanol poisoning. See also Dimethyl dicarbonate References Methylating agents Carbonate esters Methyl esters
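To make the "green reagent" comparison between the two production routes described earlier concrete, here is a hedged back-of-the-envelope Python sketch computing the atom economy of each route from the balanced equations given in the Production section. The rounded molar masses and the framing in terms of atom economy are assumptions of the example, not figures from the article.

```python
# Hedged sketch: atom economy of the two DMC production routes described above.
molar_mass = {
    "COCl2": 98.92, "CH3OH": 32.04, "HCl": 36.46,   # HCl, H2O are byproducts
    "CO": 28.01, "O2": 32.00, "DMC": 90.08, "H2O": 18.02,
}

def atom_economy(product_mass, reactant_masses):
    """Mass of desired product over total mass of reactants, as a percentage."""
    return 100.0 * product_mass / sum(reactant_masses)

# Phosgene route: COCl2 + 2 CH3OH -> DMC + 2 HCl
phosgene = atom_economy(molar_mass["DMC"],
                        [molar_mass["COCl2"], 2 * molar_mass["CH3OH"]])

# Oxidative carbonylation: CO + 1/2 O2 + 2 CH3OH -> DMC + H2O
carbonylation = atom_economy(molar_mass["DMC"],
                             [molar_mass["CO"], 0.5 * molar_mass["O2"],
                              2 * molar_mass["CH3OH"]])

print(f"phosgene route:          {phosgene:.0f}% atom economy")
print(f"oxidative carbonylation: {carbonylation:.0f}% atom economy")
```

Under these assumptions the oxidative-carbonylation route wastes far less reactant mass (water rather than two equivalents of HCl as byproduct), which is one way to quantify why it has displaced the phosgene route.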
Dimethyl carbonate
Chemistry
1,202
14,472,947
https://en.wikipedia.org/wiki/Sacral%20nerve%20stimulation
Sacral nerve stimulation, also termed sacral neuromodulation, is a type of medical electrical stimulation therapy. It typically involves the implantation of a programmable stimulator subcutaneously, which delivers low amplitude electrical stimulation via a lead to the sacral nerve, usually accessed via the S3 foramen. The U.S. Food and Drug Administration has approved InterStim Therapy, by Medtronic, as a sacral nerve stimulator for treatment of urinary incontinence, high urinary frequency and urinary retention. Sacral nerve stimulation is also under investigation as treatment for other conditions, including constipation brought on by nerve damage due to surgical procedures. An experimental procedure for constipation in children is being conducted in Nationwide Children's Hospital. In the event that the nerves and the brain are no longer communicating effectively, resulting in a bowel/bladder disorder, this type of treatment is designed to imitate a signal sent via the central nervous system. One of the major nerve routes is from the brain, along the spinal cord and through the back. This is commonly referred to as the sacral area. This area controls the everyday function of the pelvic floor, urethral sphincter, bladder and bowel. By stimulating the sacral nerve (located in the lower back), a signal is sent that manipulates a contraction within the pelvic floor. Over time these contractions rebuild the strength of the organs and muscles within it. This effectively alleviates all symptoms of urinary/faecal disorders, and in many cases eliminates them completely. Medical uses Urge incontinence Many studies have been initiated using the sacral nerve stimulation (SNS) technique to treat patients that suffer with urinary problems. When applying this procedure, proper patient screening is essential, because some disorders that affect the urinary tract (like bladder calculus or carcinoma in-situ) have to be treated differently. Once the patient is selected, he receives a temporary external pulse generator connected to wire leads at S3 foramina for 1–2 weeks. If the person's symptoms improve by more than 50%, he receives the permanent wire leads and stimulator that is implanted in the hip in the subcutaneous tissue. The first follow-up happens 1–2 weeks later to check if the permanent devices are providing improvement in the user's symptoms and to program the pulse generator adequately. Bleeding, infection, pain and unwanted stimulation in the extremities are some of the complications resulting from this therapy. Currently, battery replacements are necessary 5–10 years after implementation depending upon the strength of the stimulation therapy. (The newest interstim's battery can be wirelessly recharged (roughly weekly) using a paddle placed against the skin outside the implant.) This procedure has shown long term success rate that ranges from 50% to 90%, and one study concluded that it was a good option for patients with lower urinary tract dysfunction refractive to conservative and pharmacological interventions. Fecal incontinence Fecal incontinence, the involuntary loss of stool and flatus release afflicting mainly elderly people, can also be treated with sacral nerve stimulation as long as patients have intact sphincter muscles. The FDA approved the approach for treating the fecal incontinence in March 2011. 
The etiology is not yet well understood, and neither conservative treatments (like antidiarrheal agents, special diets and biofeedback) nor surgical treatments for this disorder are regarded as ideal options. Pascual et al. (2011) reviewed the follow-up results of the first 50 people who underwent sacral nerve stimulation (SNS) to treat fecal incontinence in Madrid (Spain). The most common causes of the fecal incontinence were obstetric procedures, idiopathic origin and prior anal surgery, and all these people were refractory to conservative treatment. The procedure consisted of placing a temporary pulse generator connected to a unilateral electrode at the S3 or S4 foramen for 2–4 weeks. After it was confirmed that the SNS was decreasing the incontinence episodes, the patients received the definitive electrode and pulse generator, which were implanted in the gluteus or in the abdomen. Two patients did not show improvement in the first step and did not receive the definitive stimulator. Mean follow-up was 17.02 months and during this time the patients showed improvement in the voluntary contraction pressure and a reduction of incontinence episodes. Complications were two cases of infection, two cases of pain and one broken electrode. Therefore, although the reason the SNS is effective is unknown, this procedure had satisfactory results in these clinical cases with a low incidence of complications, and the study concluded that it was a good option for treatment of anal incontinence. Limited evidence from a Cochrane review of randomised controlled trials suggests that sacral nerve stimulation may help to reduce fecal incontinence. Method TENS (transcutaneous electrical nerve stimulation) was patented and first used in 1974 for pain relief. TENS is non-invasive; it sends electric current through electrodes placed directly on the skin. Although predominantly carried out as a percutaneous procedure, it is possible to apply sacral nerve stimulation with the use of these external electrodes. It is not known if TENS helps with chronic pain in people with fibromyalgia or neuropathic pain. There are currently no studies into the efficacy of this on an overactive bladder and other associated symptoms of urinary incontinence; however, in a study reported in Gut (an international peer-reviewed journal for health professionals and researchers in gastroenterology and hepatology) it was found that 20% of the group tested achieved complete continence. All others saw a significant reduction in the frequency of FI episodes and an improvement in the ability to defer defecation. The first percutaneous sacral nerve stimulation study was performed in 1988. By penetrating the skin, sacral nerve stimulation aims to deliver a direct and localized electric current to specific nerves in order to elicit a favored response. Today it is one of the most common neuromodulation techniques. Percutaneous procedure Patients interested in having a sacral nerve stimulator implanted because less invasive methods have failed must all go through a trial for their own safety, known as the PNE (percutaneous nerve evaluation). PNE involves inserting a temporary electrode to the left or right of the S3 posterior foramen. This electrode is connected to an external pulse generator, which generates a signal for 3–5 days. If this neuromodulation has positive results for the patient, the option of implanting a permanent electrode for permanent sacral neuromodulation is possible. 
The procedure has low level of invasiveness, as all incisions are relatively small. A pulse generator is implanted in a subcutaneous pocket in the upper, outer quadrant of the buttock or even the lower abdomen. The generator is attached to a thin lead wire with a small electrode tip which is anchored near the sacral nerve. The most common postoperative complaints are pain and lead migration. In most studies, usually 5-10% of subjects need post-operative correction to lead migration, but since leads can be anchored near the sacral nerve, subsequent operations are generally unnecessary. Mechanism Stimulation of the sacral nerve causes contraction of external sphincter and pelvic floor muscle, which in turn causes the inhibition of bladder contractions which may be involuntarily releasing urine. Researchers currently believe that the sacral neuromodulation blocks the c-afferent fibers, which are a critical part of the afferent limb of a pathological reflex arc believed to be responsible for incontinence. See also Urinary incontinence Fecal incontinence Transcutaneous electrical nerve stimulation (TENS) Electrical muscle stimulation References Bibliography External links Fecal Incontinence Neurotechnology
Sacral nerve stimulation
Biology
1,669
71,316,296
https://en.wikipedia.org/wiki/Coprinopsis%20nivea
Coprinopsis nivea is a species of mushroom-producing fungus in the family Psathyrellaceae. It is commonly known as the snowy inkcap. Taxonomy It was first described in 1801 by the German mycologist Christiaan Hendrik Persoon who classified it as Agaricus niveus. In 1838 it was reclassified as Coprinus niveus by the Swedish mycologist Elias Magnus Fries. In 2001 phylogenetic analysis restructured the genus Coprinus and it was reclassified as Coprinopsis nivea by the mycologists Scott Alan Redhead, Rytas J. Vilgalys & Jean-Marc Moncalvo. Description Coprinopsis nivea is a small inkcap mushroom which grows in wetland environments. Cap: 1.5–3 cm. Starts egg-shaped, expanding to become campanulate (bell-shaped). Covered in white powdery fragments of the veil when young. Gills: Start white before turning grey and ultimately black and deliquescing (dissolving into an ink-like black substance). Crowded and adnate or free. Stem: 3–9 cm long and 4–7 mm in diameter. White with a very slightly bulbous base which may present with white tufts similar to those of the cap. Spore print: Black. Spores: Flattened ellipsoid and smooth with a germ pore. 15–19 × 8.5–10.5 μm. Taste: Indistinct. Smell: Indistinct. Etymology The specific epithet nivea (originally niveus) is Latin for snowy or snow-covered. This is a reference to the powdery white appearance of this mushroom. Habitat and distribution Grows in small trooping or tufting groups on old dung, especially that of cows and horses, from summer through late autumn. Widespread and recorded quite regularly. Similar species Coprinopsis pseudonivea. References Psathyrellaceae Coprinopsis Fungus species
Coprinopsis nivea
Biology
402
36,464,688
https://en.wikipedia.org/wiki/Plant%20Protection%20and%20Quarantine
Plant Protection and Quarantine (PPQ) is one of six operational program units within the Animal and Plant Health Inspection Service (APHIS) of the United States Department of Agriculture (USDA). The PPQ works to safeguard agriculture and natural resources in the U.S. against the entry, establishment, and spread of animal and plant pests, and noxious weeds in order to help ensure the protection of native flora and an abundant, high-quality, and varied food supply. Plant pest program information PPQ collaborates with state departments of agriculture and other government agencies to eradicate, suppress, or contain plant pests. Such collaborations may include emergency or longer-term domestic programs to target a specific pest. Targeted pests include: insects and mites: Asian longhorned beetle (ALB), Anoplophora glabripennis cactus moth, Cactoblastis cactorum celery leaf miner, Liriomyza Trifolii cotton pests: boll weevil, Anthonomus grandis pink bollworm, Pectinophora gossypiella Cydalima perspectalis emerald ash borer, Agrilus planipennis European cherry fruit fly, Rhagoletis cerasi European grapevine moth, Lobesia botrana fruit flies of genera in the Tephritidae family, including: Anastrepha genus, especially the Mexican fruit fly Bactrocera genus, notably the Melon fly; PPQ have implemented heightened surveillance measures for the entry of B. invadens Ceratitis genus, particularly the Mediterranean fruit fly grasshoppers gypsy moth, Lymantria dispar dispar imported fire ant Japanese beetle, Popillia japonica light brown apple moth (LBAM), Epiphyas postvittana Mormon cricket, Anabrus simplex palmetto weevil, Rhynchophorus cruentatus pine shoot beetle, Tomicus piniperda pink hibiscus mealybug, Maconellicoccus hirsutus Spotted lanternfly spotted-wing drosophila, Drosophila suzukii mollusks: giant African land snails temperate terrestrial gastropods nematodes: golden nematode, Globodera rostochiensis pale cyst nematode, Globodera pallida Plant diseases: black stem rust, caused by Puccinia graminis chrysanthemum white rust, caused by Puccinia horiana Citrus diseases European larch canker Gladiolus rust, caused by Uromyces transversalis Karnal bunt, caused by Tilletia indica plum pox, a viral disease transmitted by aphids potato diseases Ralstonia, a bacterial pathogen soybean rust, caused by Phakopsora pachyrhizi and Phakopsora meibomiae sudden oak death (SOD) caused by Phytophthora ramorum thousand cankers disease, caused by Geosmithia morbida spread by the walnut twig beetle Pest detection and identification PPQ aims to support APHIS goals by early detection of pests, weeds and plant diseases harmful to the economy, to allow for an organized response before significant damage is caused. The National Identification Services (NIS) coordinates reports of plant pest identification, providing a database that may lead to quarantine actions. NIS collaborates with scientists in various specialties at institutions around the country, sending them detailed digital images of suspected pests for timely identification. Biochemical testing services are also employed. In the past about 2% of all live plant import allotments were inspected, however that has shown to be inflexible. The likelihood of detection of a problem when using the 2% rule is not homogeneous. The biggest problem is that likelihood of successful detection is correlated with size of allotment - that is to say, an inspected sample of less than 2% is good enough for a larger shipment, while 2% is not good enough for a small shipment. 
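The statistical point just made can be illustrated with a short hedged Python sketch (not an APHIS procedure): under a hypergeometric sampling model, inspecting a fixed 2% of a lot gives far lower detection probability for small lots than for large ones. The infestation rate and lot sizes below are illustrative assumptions.

```python
# Hedged sketch: detection probability of a fixed 2% inspection rule by lot size.
from math import comb

def p_detect(lot_size, infested, inspected):
    """Probability that a hypergeometric sample of `inspected` units hits
    at least one of the `infested` units in a lot of `lot_size`."""
    clean = lot_size - infested
    if inspected > clean:
        return 1.0
    return 1.0 - comb(clean, inspected) / comb(lot_size, inspected)

rate = 0.01                                  # assume 1% of units are infested
for lot in (100, 1_000, 10_000, 100_000):
    infested = max(1, round(rate * lot))
    inspected = max(1, round(0.02 * lot))    # the fixed 2% rule
    print(f"lot {lot:>7}: inspect {inspected:>5}, "
          f"P(detect) = {p_detect(lot, infested, inspected):.2f}")
```

Under these assumptions the same 2% sample detects an infestation almost surely in a very large shipment but only rarely in a small one, which is the motivation for the risk-based regimes described next.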
There are also other disparities due to the actual contents of the shipment especially between species of declared material, location of origin, and target pests. As a result, PPQ, APHIS, and phytosanitary authorities in other countries are moving towards a more adaptive inspection thoroughness regime. Center for Plant Health Science and Technology The Center for Plant Health Science and Technology (CPHST) is PPQ's scientific support division, providing research and data to make scientifically valid regulatory and policy decisions. The center also develops technology and practical tools for PPQ personnel to conduct pest detection, exclusion and management operations. The division's project areas include: Trade risk analysis and treatment – the potential impact on U.S. agriculture of pests and diseases associated with imported plant products, and treatment of these products to reduce such risks. Pest detection – through development of surveillance programs. Identification and diagnostics – developing and testing new detection technologies, and accrediting external laboratories in their use. Emergency response – providing scientific support during plant health emergencies. Harmful plant strategies – implementing existing methods and developing new technologies for the identification, exclusion, eradication, and management of invasive weeds and regulated plants. Biological control – developing technologies to allow natural enemies to effectively mitigate the impacts of invasive pests, arthropods, weeds, and plant pathogens. Plant import and export PPQ advises on regulations for international and interstate trade with the aim of preventing the introduction of foreign plant pests. This notably includes procedures on the import of live plants, fresh fruits and vegetables, and solid-wood packing material. Domestic standards are delegated by the National Plant Protection Organization (NPPO) which assumes the responsibilities for ensuring the U.S. export program meets international standards. It provides certification of commodities as a service to U.S. exporters. The North American Plant Protection Organization (NAPPO), operating between the U.S., Canada and Mexico, was created in 1976 to set Regional Standards for Phytosanitary Measures (RSPM). It depends upon regulators, scientists, producers and industry associations to collaborate in scientific standards to protect agricultural, forest, and other plant resources while facilitating trade. The International Plant Protection Convention (IPPC), established 1951, is an international plant health agreement that aims to protect cultivated and wild plants by preventing the introduction and spread of pests. This is done through International Standards for Phytosanitary Measures (ISPM). Accreditation, Certification, and Network Services The Accreditation, Certification, and Network Services (ACNS) unit manages the National Seed Health System; the U.S. Nursery Certification Program; the U.S. Greenhouse Certification Program; the State National Harmonization Program for seed potatoes; Special Foreign Inspection and Certification programs; Plants in Growing Media; Postentry Quarantine, Audit-based Certification Systems pertaining to section 10201(d)(1) of the Farm Bill; and the National Clean Plant Network pertaining to section 10202 of the Farm Bill. Identification Technology Program ITP produces images, videos, identification keys, tools, and molecular diagnostics supporting PPQ's activities. 
See also Food security Sanitary and phytosanitary measures and agreements Sources USDA APHIS | About APHIS USDA APHIS | Plant Health (PPQ) Home Page USDA APHIS | Plant Pests and Diseases Hot projects USDA APHIS | PPQ Science and Technology USDA APHIS | Plant Health (PPQ) Home Page USDA APHIS | Imports and Exports Imports exports plants manual USDA APHIS Application Access - Home to PCIT and VEHCS. Ippc Nappo USDA APHIS | Plant Health Permits ePermit: eAuthentication References External links Export and import control Regulators of biotechnology products United States Department of Agriculture agencies Foreign trade of the United States Phytosanitary authorities
Plant Protection and Quarantine
Biology
1,614
42,459,168
https://en.wikipedia.org/wiki/N3RD%20Street
N3RD Street (also N3rd Street, N3RD St, Nerd Street) is a nickname for a segment of North 3rd Street in Philadelphia, Pennsylvania, United States, between Market Street and Girard Avenue (spanning across the neighborhoods of Old City and Northern Liberties), and its surrounding community that is home to a concentration of "nerdy" companies and spaces; "N3RD" is a double entendre as both leet for "nerd" and reflecting the "N. 3rd St." of postal addresses. An official resolution recognizing N3RD Street was written by Indy Hall founder Alex Hillman and adopted by Philadelphia's city council on March 20, 2014. Starting April 10, 2014, the city installed special street signs along the corridor denoting its nickname, similar to the neighborhood-specific signs on the Avenue of the Arts, Avenue of Technology, Mummers Row, Fabric Row, French Quarter, Chinatown, and the Gayborhood. The official city naming ceremony took place on April 11, 2014, at Liberty Lands Park. External links Official N3RD Street website References Information technology places Streets in Philadelphia Culture of Philadelphia
N3RD Street
Technology
236
645,698
https://en.wikipedia.org/wiki/Arthur%20Oncken%20Lovejoy
Arthur Oncken Lovejoy (October 10, 1873 – December 30, 1962) was an American philosopher and intellectual historian, who founded the discipline known as the history of ideas with his book The Great Chain of Being (1936), on the topic of that name, which is regarded as 'probably the single most influential work in the history of ideas in the United States during the last half century'. He was elected to the American Philosophical Society in 1932. In 1940, he founded the Journal of the History of Ideas. Life Lovejoy was born in Berlin, Germany, while his father was doing medical research there. Eighteen months later, his mother, a daughter of Johann Gerhard Oncken, committed suicide, whereupon his father gave up medicine and became a clergyman. Lovejoy studied philosophy, first at the University of California at Berkeley, then at Harvard under William James and Josiah Royce. He did not earn a Ph.D. In 1901, he resigned from his first job, at Stanford University, to protest the dismissal of a colleague who had offended a trustee. The President of Harvard then vetoed hiring Lovejoy on the grounds that he was a known troublemaker. Over the subsequent decade, he taught at Washington University in St. Louis, Columbia University, and the University of Missouri. As a professor of philosophy at Johns Hopkins University from 1910 to 1938, Lovejoy founded and long presided over that university's History of Ideas Club, where many prominent and budding intellectual and social historians, as well as literary critics, gathered. In 1940 he co-founded the Journal of the History of Ideas with Philip P. Wiener. Lovejoy insisted that the history of ideas should focus on "unit ideas," single concepts (namely simple concepts sharing an abstract name with other concepts that were to be conceptually distinguished). Lovejoy was active in the public arena. He helped found the American Association of University Professors and the Maryland chapter of the American Civil Liberties Union. However, he qualified his belief in civil liberties to exclude what he considered threats to a free system. Thus, at the height of the McCarthy Era (in the February 14, 1952, edition of the Journal of Philosophy) Lovejoy stated that, since it was a "matter of empirical fact" that membership in the Communist Party contributed "to the triumph of a world-wide organization" which was opposed to "freedom of inquiry, of opinion and of teaching," membership in the party constituted grounds for dismissal from academic positions. He also published numerous opinion pieces in the Baltimore press. He died in Baltimore on December 30, 1962. Philosophy In the domain of epistemology, Lovejoy is remembered for an influential critique of the pragmatic movement, especially in the essay "The Thirteen Pragmatisms", written in 1908. Abstract nouns like 'pragmatism' 'idealism', 'rationalism' and the like were, in Lovejoy's view, constituted by distinct, analytically separate ideas, which the historian of the genealogy of ideas had to thresh out, and show how the basic unit ideas combine and recombine with each other over time. The idea has, according to Simo Knuuttila, exercised a greater attraction on literary critics than on philosophers. Lovejoy was also an opponent of Albert Einstein's theory of relativity. In 1930, he published a paper criticizing Einstein's relativistic concept of simultaneity as arbitrary. Legacy William F. 
Bynum, looking back at Lovejoy's Great Chain of Being after 40 years, describes it as "a familiar feature of the intellectual landscape", indicating its great influence and "brisk" ongoing sales. Bynum argues that much more research is needed into how the concept of the great chain of being was replaced, but he agrees that Lovejoy was right that the crucial period was the end of the 18th century when "the Enlightenment's chain of being was dismantled". Works Primitivism and Related Ideas in Antiquity, (1935). (with George Boas). Johns Hopkins U. Press. 1997 edition: The Great Chain of Being: A Study of the History of an Idea (1936). Harvard University Press. Reprinted by Harper & Row, , 2005 paperback: . Essays in the History of Ideas (1948). Johns Hopkins U. Press. The Revolt Against Dualism (1960). Open Court Publishing. The Reason, the Understanding, and Time (1961). Johns Hopkins U. Press. Reflections on Human Nature (1961). Johns Hopkins U. Press. The Thirteen Pragmatisms and Other Essays (1963). Johns Hopkins U. Press. Articles "The Entangling Alliance of Religion and History," The Hibbert Journal, Vol. V, October 1906/ July 1907. "The Desires of the Self-Conscious," The Journal of Philosophy, Psychology and Scientific Methods, Vol. 4, No. 2, Jan. 17, 1907. "The Place of Linnaeus in the History of Science," The Popular Science Monthly, Vol. LXXI, 1907. "The Origins of Ethical Inwardness in Jewish Thought," The American Journal of Theology, Vol. XI, 1907. "Kant and the English Platonists." In Essays, Philosophical and Psychological, Longmans, Green & Co., 1908. "Pragmatism and Theology," The American Journal of Theology, Vol. XII, 1908. "The Theory of a Pre-Christian Cult of Jesus," The Monist, Vol. XVIII, No. 4, October 1908. "The Thirteen Pragmatisms," The Journal of Philosophy, Psychology, and Scientific Methods, Vol. V, January/December, 1908. "The Argument for Organic Evolution Before the 'Origin of Species'," Part II, Popular Science Monthly, Vol. LXXV, July/December, 1909. "Schopenhauer as an Evolutionist," The Monist, Vol. XXI, 1911. "Kant and Evolution," Popular Science Monthly, Vol. LXXVII, 1910; Part II, Popular Science Monthly, Vol. LXXVIII, 1911. "The Problem of Time in Recent French Philosophy," Part II, Part III, The Philosophical Review, Vol. XXI, 1912. "Relativity, Reality, and Contradiction", The Journal of Philosophy, Psychology and Scientific Methods, 1914. "Pragmatism Versus the Pragmatist." In: Essays in Critical Realism. London: Macmillan & Co., 1920. "Professional Ethics and Social Progress," The North American Review, March 1924. "The Dialectical Argument Against Absolute Simultaneity", The Journal of Philosophy, 1930. "Plans for the Future," Free World, November 1943. Miscellany "Leibnitz, Gottfried Wilhelm, Freiherr Von," A Cyclopedia of Education, ed. by Paul Monroe, The Macmillan Company, 1911. "The Unity of Science," The University of Missouri Bulletin: Science Series, Vol. I, N°. 1, January 1912. Bergson & Romantic Evolutionism; Two Lectures Delivered Before the Union, September 5 & 12, 1913, University of California Press, 1914. References Further reading Campbell, James, "Arthur Lovejoy and the Progress of Philosophy,", in: Transactions of the Charles S. Peirce Society, Vol. 39, No. 4, Fall, 2003. Diggins, John P., "Arthur O. Lovejoy and the Challenge of Intellectual History,", in: Journal of the History of Ideas, Volume 67, Number 1, January 2006. Duffin, Kathleen E. "Arthur O. 
Lovejoy and the Emergence of Novelty," in: Journal of the History of Ideas, Vol. 41, No. 2, Apr./Jun., 1980. Feuer, Lewis S., "The Philosophical Method of Arthur O. Lovejoy: Critical Realism and Psychoanalytical Realism," in: Philosophy and Phenomenological Research, Vol. 23, No. 4, Jun., 1963. Feuer, Lewis S. "Arthur O. Lovejoy," in: The American Scholar, Vol. 46, No. 3, Summer 1977. Mandelbaum, Maurice. "Arthur O. Lovejoy and the Theory of Historiography," in: Journal of the History of Ideas, Vol. 9, No. 4, Oct., 1948. Moran, Seán Farrell, "A.O. Lovejoy", in: Kelly Boyd, ed., The Encyclopedia of Historians and Historical Writing, Routledge, 1999. Randall Jr., John Herman, "Arthur O. Lovejoy and the History of Ideas," in: Philosophy and Phenomenological Research"', Vol. 23, No. 4, Jun., 1963. Wilson, Daniel J., Arthur O. Lovejoy and the Quest for Intelligibility, University of North Carolina Press, 1980. External links Works by Arthur O. Lovejoy at JSTOR. Dictionary of the History of Ideas article on the Great Chain of Being. Lovejoy Papers at Johns Hopkins University. Includes a short biography. Dale Keiger, Tussling with the Idea Man "The Chinese Origin of Romanticism", in: Essays in the History of Ideas'', Johns Hopkins University Press, 1948. 1873 births 1962 deaths American historians American literary critics American expatriates in the German Empire People from Berlin Harvard University alumni University of California, Berkeley alumni People from the Province of Brandenburg Philosophers of time University of Missouri faculty Washington University in St. Louis faculty American Civil Liberties Union people Presidents of the American Association of University Professors Relativity critics American historians of philosophy Pragmatists Members of the American Philosophical Society
Arthur Oncken Lovejoy
Physics
1,986
66,439,467
https://en.wikipedia.org/wiki/Chlorine%20cycle
The chlorine cycle (Cl) is the biogeochemical cycling of chlorine through the atmosphere, hydrosphere, biosphere, and lithosphere. Chlorine is most commonly found as inorganic chloride ions, or in a number of chlorinated organic forms. Over 5,000 biologically produced chlorinated organics have been identified. The cycling of chlorine into the atmosphere and the creation of chlorine compounds by anthropogenic sources have major impacts on climate change and depletion of the ozone layer. Chlorine plays essential roles in many biological processes, including numerous roles in the human body. It also acts as an essential co-factor in enzymes involved in plant photosynthesis. Troposphere Chlorine plays a large role in atmospheric cycling and climate, including, but not limited to, chlorofluorocarbons (CFCs). The major flux of chlorine into the troposphere comes from sea salt aerosol spray. Both organic and inorganic chlorine are transferred into the troposphere from the oceans. Biomass combustion is another source of both organic and inorganic forms of chlorine to the troposphere from the terrestrial reservoir. Typically, organic chlorine forms are highly un-reactive and will be transferred to the stratosphere from the troposphere. The major flux of chlorine from the troposphere is via surface deposition into water systems. Hydrosphere Oceans are the largest source of chlorine in the Earth's hydrosphere. In the hydrosphere, chlorine exists primarily as chloride due to the high solubility of the Cl− ion. The majority of chlorine fluxes are within the hydrosphere due to chloride ions' solubility and reactivity within water systems. The cryosphere is able to retain some chlorine deposited by rainfall and snow, but the majority is eluted into oceans. Lithosphere The largest reservoir of chlorine resides in the lithosphere, where of global chlorine is found in Earth's mantle. Volcanic eruptions will sporadically release high levels of chlorine as HCl into the troposphere, but the majority of the terrestrial chlorine flux comes from seawater sources mixing with the mantle. Organically bound chlorine is as abundant as chloride ions in terrestrial soil systems, or the pedosphere. The discovery of multiple Cl-mediating genes in microorganisms and plants indicates that numerous biotic processes use chloride and produce organic chlorinated compounds, as do many abiotic processes. These chlorinated compounds can then be volatilized or leached out of soils, which makes the overall soil environment a global sink of chlorine. Multiple anaerobic prokaryotes have been found to contain genes and show activity for chlorinated organic volatilization. Biological processes Chlorine's ability to completely dissociate in water is also why it is an essential electrolyte in many biological processes. Chlorine, along with phosphorus, is the sixth most common element in organic matter. Cells utilize chloride to balance pH and maintain turgor pressure at equilibrium. The high electrical conductivity of Cl− ions is essential for neuron signalling in the brain and regulates many other essential functions in biology. Anthropogenic chlorinated compounds The depleting effects of chlorofluorocarbons (CFCs) on ozone over Antarctica have been studied extensively since the 1980s. The low reactivity of CFCs allows them to reach the upper stratosphere, where they interact with UV-C radiation and form highly reactive chloride ions that interact with methane. 
These highly reactive chlorine ions will also interact with volatile organic compounds to form other ozone depleting acids. Chlorine-36 is a radioactive isotope produced in many nuclear facilities as byproduct waste. Its half-life, mobility in the pedosphere, and ability to be taken up by organisms have made it an isotope of high concern among researchers. The high solubility and low reactivity of the isotope have also made it useful for research into the biogeochemical cycling of chlorine, as most research uses it as an isotope tracer. References Biogeochemical cycle Chlorine
Chlorine cycle
Chemistry
880
10,559,845
https://en.wikipedia.org/wiki/Test%20method
A test method is a method for a test in science or engineering, such as a physical test, chemical test, or statistical test. It is a definitive procedure that produces a test result. In order to ensure accurate and relevant test results, a test method should be "explicit, unambiguous, and experimentally feasible.", as well as effective and reproducible. A test can be considered an observation or experiment that determines one or more characteristics of a given sample, product, process, or service. The purpose of testing involves a prior determination of expected observation and a comparison of that expectation to what one actually observes. The results of testing can be qualitative (yes/no), quantitative (a measured value), or categorical and can be derived from personal observation or the output of a precision measuring instrument. Usually the test result is the dependent variable, the measured response based on the particular conditions of the test or the level of the independent variable. Some tests, however, may involve changing the independent variable to determine the level at which a certain response occurs: in this case, the test result is the independent variable. Importance In software development, engineering, science, manufacturing, and business, its developers, researchers, manufacturers, and related personnel must understand and agree upon methods of obtaining data and making measurements. It is common for a physical property to be strongly affected by the precise method of testing or measuring that property. As such, fully documenting experiments and measurements while providing needed documentation and descriptions of specifications, contracts, and test methods is vital. Using a standardized test method, perhaps published by a respected standards organization, is a good place to start. Sometimes it is more useful to modify an existing test method or to develop a new one, though such home-grown test methods should be validated and, in certain cases, demonstrate technical equivalency to primary, standardized methods. Again, documentation and full disclosure are necessary. A well-written test method is important. However, even more important is choosing a method of measuring the correct property or characteristic. Not all tests and measurements are equally useful: usually a test result is used to predict or imply suitability for a certain purpose. For example, if a manufactured item has several components, test methods may have several levels of connections: test results of a raw material should connect with tests of a component made from that material test results of a component should connect with performance testing of a complete item results of laboratory performance testing should connect with field performance These connections or correlations may be based on published literature, engineering studies, or formal programs such as quality function deployment. Validation of the suitability of the test method is often required. Content Quality management systems usually require full documentation of the procedures used in a test. The document for a test method might include: descriptive title scope over which class(es) of items, policies, etc. 
may be evaluated date of last effective revision and revision designation reference to most recent test method validation person, office, or agency responsible for questions on the test method, updates, and deviations significance or importance of the test method and its intended use terminology and definitions to clarify the meanings of the test method types of apparatus and measuring instrument (sometimes the specific device) required to conduct the test sampling procedures (how samples are to be obtained and prepared, as well as the sample size) safety precautions required calibrations and metrology systems natural environment concerns and considerations testing environment concerns and considerations detailed procedures for conducting the test calculation and analysis of data interpretation of data and test method output report format, content, data, etc. Validation Test methods are often scrutinized for their validity, applicability, and accuracy. It is very important that the scope of the test method be clearly defined, and any aspect included in the scope is shown to be accurate and repeatable through validation. Test method validations often encompass the following considerations: accuracy and precision; demonstration of accuracy may require the creation of a reference value if none is yet available repeatability and reproducibility, sometimes in the form of a Gauge R&R. range, or a continuum scale over which the test method would be considered accurate (e.g., 10 N to 100 N force test) measurement resolution, be it spatial, temporal, or otherwise curve fitting, typically for linearity, which justifies interpolation between calibrated reference points robustness, or the insensitivity to potentially subtle variables in the test environment or setup which may be difficult to control usefulness to predict end-use characteristics and performance measurement uncertainty interlaboratory or round robin tests other types of measurement systems analysis See also Certified reference materials Data analysis Design of experiments Document management system EPA Methods Integrated test facility Measurement systems analysis Measurement uncertainty Metrication Observational error Replication (statistics) Sampling (statistics) Specification (technical standard) Test management approach Verification and validation References General references, books Pyzdek, T, "Quality Engineering Handbook", 2003, Godfrey, A. B., "Juran's Quality Handbook", 1999, Kimothi, S. K., "The Uncertainty of Measurements: Physical and Chemical Metrology: Impact and Analysis", 2002, Related standards ASTM E177 Standard Practice for Use of the Terms Precision and Bias in ASTM Test Methods ASTM E691 Standard Practice for Conducting an Interlaboratory Study to Determine the Precision of a Test Method ASTM E1488 Standard Guide for Statistical Procedures to Use in Developing and Applying Test Methods ASTM E2282 Standard Guide for Defining the Test Result of a Test Method ASTM E2655 - Standard Guide for Reporting Uncertainty of Test Results and Use of the Term Measurement Uncertainty in ASTM Test Methods Metrology Measurement Quality control
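To make the accuracy, precision and repeatability items in the validation considerations above concrete, the short Python sketch below computes bias against a reference value and a repeatability standard deviation from replicate measurements. The data and acceptance limits are invented for illustration; a full validation would normally involve a formal Gauge R&R or interlaboratory study as listed above.

import statistics

# Ten replicate force measurements (N) of the same reference item; hypothetical data.
replicates = [98.7, 99.1, 98.9, 99.4, 98.6, 99.0, 99.2, 98.8, 99.1, 98.9]
reference_value = 100.0   # certified value of the reference material (N)

mean = statistics.mean(replicates)
bias = mean - reference_value                    # accuracy: systematic offset from the reference
repeatability_sd = statistics.stdev(replicates)  # precision under repeatability conditions

print(f"mean = {mean:.2f} N, bias = {bias:.2f} N, repeatability s = {repeatability_sd:.2f} N")

# A simple acceptance check against hypothetical validation limits:
assert abs(bias) < 2.0 and repeatability_sd < 0.5, "method fails the assumed validation limits"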
Test method
Physics,Mathematics
1,165
167,777
https://en.wikipedia.org/wiki/Topic%20map
A topic map is a standard for the representation and interchange of knowledge, with an emphasis on the findability of information. Topic maps were originally developed in the late 1990s as a way to represent back-of-the-book index structures so that multiple indexes from different sources could be merged. However, the developers quickly realized that with a little additional generalization, they could create a meta-model with potentially far wider application. The ISO/IEC standard is formally known as ISO/IEC 13250:2003. A topic map represents information using topics, representing any concept, from people, countries, and organizations to software modules, individual files, and events, associations, representing hypergraph relationships between topics, and occurrences, representing information resources relevant to a particular topic. Topic maps are similar to concept maps and mind maps in many respects, though only topic maps are ISO standards. Topic maps are a form of semantic web technology similar to RDF. Ontology and merging Topics, associations, and occurrences can all be typed, where the types must be defined by the one or more creators of the topic map(s). The definitions of allowed types is known as the ontology of the topic map. Topic maps explicitly support the concept of merging of identity between multiple topics or topic maps. Furthermore, because ontologies are topic maps themselves, they can also be merged thus allowing for the automated integration of information from diverse sources into a coherent new topic map. Features such as subject identifiers (URIs given to topics) and PSIs (published subject indicators) are used to control merging between differing taxonomies. Scoping on names provides a way to organise the various names given to a particular topic by different sources. Current standard The work standardizing topic maps (ISO/IEC 13250) took place under the umbrella of the ISO/IEC JTC 1/SC 34/WG 3 committee (ISO/IEC Joint Technical Committee 1, Subcommittee 34, Working Group 3 – Document description and processing languages – Information Association). However, WG3 was disbanded and maintenance of ISO/IEC 13250 was assigned to WG8. The topic maps (ISO/IEC 13250) reference model and data model standards are defined independent of any specific serialization or syntax. TMRM Topic Maps – Reference Model TMDM Topic Maps – Data Model Data format The specification is summarized in the abstract as follows: "This specification provides a model and grammar for representing the structure of information resources used to define topics, and the associations (relationships) between topics. Names, resources, and relationships are said to be characteristics of abstract subjects, which are called topics. Topics have their characteristics within scopes: i.e. the limited contexts within which the names and resources are regarded as their name, resource, and relationship characteristics. One or more interrelated documents employing this grammar is called a topic map." XML serialization formats In 2000, Topic Maps was defined in an XML syntax XTM. This is now commonly known as "XTM 1.0" and is still in fairly common use. The ISO standards committee published an updated XML syntax in 2006, XTM 2.0 which is increasingly in use today. Note that XTM 1.0 predates and therefore is not compatible with the more recent versions of the (ISO/IEC 13250) standard. 
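To make the topic–association–occurrence model and the identity-based merging described above more concrete, here is a minimal Python sketch. The classes, field names and example URIs are hypothetical illustrations and do not correspond to TMAPI or to any standardized Topic Maps serialization.

from dataclasses import dataclass, field

@dataclass
class Topic:
    subject_identifiers: set      # URIs that establish the topic's identity and drive merging
    names: set = field(default_factory=set)
    occurrences: set = field(default_factory=set)   # e.g. URLs of resources relevant to the topic

    def merge(self, other):
        # Two topics that share a subject identifier describe the same subject: collapse them.
        self.subject_identifiers |= other.subject_identifiers
        self.names |= other.names
        self.occurrences |= other.occurrences

@dataclass
class Association:
    assoc_type: str
    roles: dict                   # role name -> participating Topic; n-ary, not limited to pairs

# Two topic maps independently describe the same composer, identified by a shared URI.
a = Topic({"http://example.org/id/puccini"}, {"Giacomo Puccini"}, {"http://example.org/bio.html"})
b = Topic({"http://example.org/id/puccini"}, {"Puccini, Giacomo"})
if a.subject_identifiers & b.subject_identifiers:
    a.merge(b)                    # both name variants and all occurrences survive the merge

tosca = Topic({"http://example.org/id/tosca"}, {"Tosca"})
composed_by = Association("composed-by", {"work": tosca, "composer": a})
print(a.names, composed_by.assoc_type)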
Other formats Other proposed or standardized serialization formats include: CXTM – Canonical XML Topic Maps format (canonicalization of topic maps) CTM – a Compact Topic Maps Notation (not based on XML) GTM – a Graphical Topic Maps Notation The above standards are all recently proposed or defined as part of ISO/IEC 13250. As described below, there are also other, serialization formats such as LTM, AsTMa= that have not been put forward as standards. Linear topic map notation (LTM) serves as a kind of shorthand for writing topic maps in plain text editors. This is useful for writing short personal topic maps or exchanging partial topic maps by email. The format can be converted to XTM. There is another format called AsTMa which serves a similar purpose. When writing topic maps manually it is much more compact, but of course can be converted to XTM. Alternatively, it can be used directly with the Perl Module TM (which also supports LTM). The data formats of XTM and LTM are similar to the W3C standards for RDF/XML or the older N3 notation. Related standards Topic Maps API A de facto API standard called Common Topic Maps Application Programming Interface (TMAPI) was published in April 2004 and is supported by many Topic Maps implementations or vendors: TMAPI – Common Topic Maps Application Programming Interface TMAPI 2.0 – Topic Maps Application Programming Interface (v2.0) Query standard In normal use it is often desirable to have a way to arbitrarily query the data within a particular Topic Maps store. Many implementations provide a syntax by which this can be achieved (somewhat like 'SQL for Topic Maps') but the syntax tends to vary a lot between different implementations. With this in mind, work has gone into defining a standardized syntax for querying topic maps: ISO 18048: TMQL – Topic Maps Query Language Constraint standards It can also be desirable to define a set of constraints that can be used to guarantee or check the semantic validity of topic maps data for a particular domain. (Somewhat like database constraints for topic maps). Constraints can be used to define things like 'every document needs an author' or 'all managers must be human'. There are often implementation specific ways of achieving these goals, but work has gone into defining a standardized constraint language as follows: ISO 19756: TMCL – Topic Maps Constraint Language TMCL is functionally similar to RDF Schema with Web Ontology Language (OWL). Earlier standards The "Topic Maps" concept has existed for a long time. The HyTime standard was proposed as far back as 1992 (or earlier?). Earlier versions of ISO 13250 (than the current revision) also exist. More information about such standards can be found at the ISO Topic Maps site. RDF relationship Some work has been undertaken to provide interoperability between the W3C's RDF/OWL/SPARQL family of semantic web standards and the ISO's family of Topic Maps standards though the two have slightly different goals. The semantic expressive power of Topic Maps is, in many ways, equivalent to that of RDF, but the major differences are that Topic Maps (i) provide a higher level of semantic abstraction (providing a template of topics, associations and occurrences, while RDF only provides a template of two arguments linked by one relationship) and (hence) (ii) allow n-ary relationships (hypergraphs) between any number of nodes, while RDF is limited to triplets. 
See also Knowledge graph Semantic interoperability Topincs a commercial proprietary topic maps editor Unified Modeling Language (UML) References Further reading Lutz Maicher and Jack Park: Charting the Topic Maps Research and Applications Landscape, Springer, Jack Park and Sam Hunting: XML Topic Maps: Creating and Using Topic Maps for the Web, Addison-Wesley, (in bibMap) External links Information portal about Topic Maps An Introduction to Topic Maps at Microsoft Docs Topic Maps Lab Knowledge representation languages Technical communication ISO standards IEC standards Diagrams Semantic relations
Topic map
Technology
1,508
45,467,356
https://en.wikipedia.org/wiki/Cornelia%20Gillyard
Cornelia Denson Gillyard (born February 1, 1941) is an American organic chemist known for her work with chemicals in the environment. Early life and education The eldest of three children, Gillyard was born on February 1, 1941, in Talladega, Alabama to a steel worker and a nurse. When she was young, Gillyard was involved with the local 4-H club, singing in the school chorus, cheerleading, and taking part in science fairs. At one such fair Gillyard and her partner won a prize for a wooden replica they made of a human skeleton. While in high school, Gillyard became very interested in chemistry. After graduating as valedictorian, she received a bachelor's degree in chemistry from Talladega College. Her senior project bridged chemistry and human health as she studied nuts and their nutrient contents as related to human growth. After graduating from college she realized that she needed more money to pursue higher studies, so she took up a position at the Ohio State University's nuclear medicine laboratory. In 1964, she got a job setting up and running a new nuclear medicine laboratory at Nationwide Children's Hospital in Columbus, Ohio. In 1973, she received her master's degree in organic chemistry at Clark Atlanta University where she researched the chemistry of vitamin B12. She returned to Columbus and worked for the Battelle Memorial Institute, but returned to Atlanta in 1974 after getting married. In 1977, she began her doctoral degree in organic chemistry at Clark Atlanta University and began teaching chemistry at Spelman College. She focused on chemistry education for undergraduates. After receiving her PhD, she began teaching chemistry full-time at Spelman, where she also became the chair of the chemistry department. Career and research Gillyard's research focuses on organoarsenic chemistry as well as pollutants and toxins in the environment. One of Gillyard's research focuses was on microbes and their potential for use in cleaning up environments polluted by contaminants like arsenic. She serves on the Women Chemists Committee and the American Chemical Society's Scholars Selection Committee and Blue Ribbon Panel for Minority Affairs, and is a member of the National Organization for the Professional Advancement of Black Chemists and Chemical Engineers. She was the director of NASA's Women in Science and Engineering Scholars Program and the codirector of the National Science Foundation's Research in Chemistry for Minority Scholars Program. References 1941 births Living people 21st-century American chemists American organic chemists Spelman College faculty Talladega College alumni Clark University alumni Chemists from Alabama
Cornelia Gillyard
Chemistry
520
18,650,019
https://en.wikipedia.org/wiki/Rho3%20Arietis
Rho3 Arietis (Rho3 Ari, ρ3 Arietis, ρ3 Ari) is the Bayer designation for a star in the northern constellation of Aries. It is faintly visible to the naked eye with an apparent visual magnitude of 5.63. Based upon an annual parallax shift of 28.29 mas, this star is located at a distance of approximately from Earth. This is an astrometric binary system. The visible component is an F-type main sequence star with a stellar classification of F6 V. It is around 2.4 billion years old and has a high abundance of elements other than hydrogen and helium when compared to the Sun. Name This star, along with δ Ari, ε Ari, ζ Ari, and π Ari, were Al Bīrūnī's Al Buṭain (ألبطين), the dual of Al Baṭn, the Belly. According to the catalogue of stars in the Technical Memorandum 33-507 - A Reduced Star Catalog Containing 537 Named Stars, Al Buṭain was the title for five stars: δ Ari as Botein, π Ari as Al Buṭain I, ρ3 Ari as Al Buṭain II, ε Ari as Al Buṭain III and ζ Ari as Al Buṭain IV. References External links HR 869 in the Bright Star Catalogue Image Rho3 Arietis Aries (constellation) F-type main-sequence stars 018256 013702 Arietis, 46 Arietis, Rho03 869 Astrometric binaries Durchmusterung objects
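The distance of the star follows directly from the parallax quoted above, since distance in parsecs is the reciprocal of the parallax in arcseconds. The short Python sketch below shows this standard conversion; the rounding is illustrative.

# Distance from trigonometric parallax: d [pc] = 1 / p [arcsec].
parallax_mas = 28.29                   # annual parallax quoted above, in milliarcseconds
parallax_arcsec = parallax_mas / 1000.0
distance_pc = 1.0 / parallax_arcsec    # about 35.3 parsecs
distance_ly = distance_pc * 3.2616     # 1 parsec is about 3.2616 light-years, so roughly 115 ly
print(f"{distance_pc:.1f} pc  ~ {distance_ly:.0f} ly")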
Rho3 Arietis
Astronomy
336
34,132,621
https://en.wikipedia.org/wiki/Dewatering%20screw%20press
A dewatering screw press is a screw press that separates liquids from solids. A screw press can be used in place of a belt press, centrifuge, or filter paper. It is a simple, slow moving device that accomplishes dewatering by continuous gravitational drainage. Screw presses are often used for materials that are difficult to press, for example those that tend to pack together. The screw press squeezes the material against a screen or filter and the liquid is collected through the screen for collection and use. History An example of a dewatering press is a wine press. Dating back to Roman times, these machines worked similarly to the modern screw press but possessed some disadvantages which have been corrected and improved within modern presses. The ancient wine press only allowed for grapes to be juiced in batches and often a thick cake would form against the screen, making it difficult for the juice to flow through the screen and be collected for wine. Most modern screw presses allow for a continuous flow of material by surrounding the screw with a screen, which also helps to avoid the build up of a layer of solid material on the screen. One modern approach even removes the screen in favor of a system of fixed and moving rings, which often eliminates solids buildup entirely. The most commonly known screw press of this design is said to have been invented by famous Greek mathematician Archimedes and is known as the screw conveyor. The screw conveyor consists of a shaft, which is surrounded by a spiral steel plate, similar in design and appearance to a corkscrew. This design is used in a multitude of screw presses. There are some machines of this and also of similar design that are not screw presses at all - they do not separate solids from liquids but are instead used to fuse them together. An example of this is a mold-filling machine. Plastic pellets are inserted at one end and heat is applied, melting the pellets and discharging them into a mold. Another example is known as a cooker-extruder and is used in the production of snack foods such as pretzels and more. Design Most screw presses can have dilute materials pumped directly into the screw press, although pre-thickening sometimes improves the performance of the press. This is typically done with a static or sidehill screen, a rotating drum screen, belt press, or a gravity table. Patented in 1900, Valerius Anderson's interrupted flight design is most commonly used as opposed to the continuous flight design. Anderson, upon studying the continuous flight design, noticed that it led to co-rotation and a less efficient job being done dewatering, especially with softer materials. He solved this by putting interruptions on the flights of the screw. The interruptions allowed for the materials to stop moving forward between interruptions along the shaft and also allows for an adequate buildup of the material before it is pushed through the screw press to container that catches the material. This allowed for a better job at the dewatering and a consistent cake material being released. The interrupted flight design screw presses uses were broadened from just soft or mushy materials to include most materials screw presses were used for because unlike the continuous design screw presses the interrupted flight design did not require constant feed or consistency of material. 
If either were diminished in the continuous design so would production of the dewatered product, in order to avoid this while maintaining the continuous flight design a larger and heavier press with variable speed settings was a necessity; the press also entailed the need of an operator. The interrupted flight design eliminated the need for consistency as the compression of the screw did not change as the material did not progress through the screw until a sufficient amount of the material had formed, as described above. This also eliminates the need for changing speed and an operator. The design allows for self-correction and efficiency that is unavailable with the continuous design. It allowed for a more economically effective screw press that has been used for more than just slimy or slippery materials. After a period of time and its initial patent, resistor teeth were added to the presses where there was no flighting in order to increase the agitation of the materials adding to the limitation of the tendencies of co-rotation within the press Options The buildup of press cake moisture is controlled by a discharge door or cone. Screw presses possess different options that include perforated/slotted screens, a rotating cone, hard surfacing on the screw, and supplemental screen surface in the inlet hopper on the face of the cone. The standard construction for screw presses is of stainless steel with a carbon steel frame on the larger presses. Capacity The specific details of the design of a screw press depend on the material however. The configurations, screw speeds, screens for maximum outlet consistency, including an excellent capture rate vary per material. Most screw presses are designed to feed material that has a 40-60% water make up. The length and diameter ratio of the screw press also depends on the material. The range of the capacity of a screw press Drive Larger presses use a foot-mounted gearbox while smaller presses use a hollow-shaft gearbox. Currently, nearly all presses are driven by electric motors due to their reliable and low cost frequency drives. The electric motors replaced the previously popular hydraulic motor drives. A vertical design was popular in the 1800s through the 1950s but they are no longer made. Most screw presses are currently built with the screws in a horizontal configuration. One newer version uses an angled screw design to reduce floor footprint and press cake moisture. Compressive Mechanisms Compression is created within the screw press by increasing the inner shaft diameter of the screw. For example, if a 16" screw press has a 6" shaft at the start, the flights on the screw will be 5" tall. If this 6" shaft diameter is then increased to 12" at the discharge, the fights will be only 2" tall at this point. Thus compression is applied as the material is being pressed from a 5" opening through a 2" space. This compression can also be achieved tightening the separation of the flights of the screw. If at the inlet, the pitch is 16", the material thus will move 16" with each revolution. If it is then decreased to 8" at the point of discharge, the material will move 8" per revolution. This results in there being more volume forced into the press than there is being forced out of the press at a time. This creates the desired compression and pushes the liquid through the screen. Another way to achieve compression is to place a cone at the point of discharge. This can also be called a choke, stopper, or door. 
In many designs it is bolted into a fixed position, making a fixed, smaller opening which the material must pass through. More commonly, however, the screw press has the cone pushed into the point of discharge via a hydraulic or air cylinder. Specialized types Some other types of presses are vapor-tight presses and twin-screw presses. Vapor-tight presses are used during the production of soybean protein concentrate (SPC), citrus and apple pectin, bioresin, and xanthan gum. Twin-screw presses contain two overlapping compression screws. This is more complicated on a mechanical level because the screws must remain synchronized in order for them to work properly. These are often used for slippery materials and feature an internal shredding action. Classification There are two major kinds of screw presses of this design. One type, known as Expellers®, removes water from fibrous material, while the other removes free liquid from a material. Expellers Oil expellers are used to squeeze the fat out of soybeans, peanuts, sunflower seeds, canola (rape seeds), and other oil seeds. The expeller works by exerting extremely high pressures which convert the fat in seeds into a liquid oil. Once the oil is liquefied, the oil flows through the screen and is collected. Removal of free liquid Screw presses that are used to free liquid from material are commonly used in the pulp and paper industries, municipal biosolids, septage and grease trap sludge, food production, food waste, manure, and also within the chemical industry. Applications Pulp and paper industries Pulp and paper industries remove water within cellulose fiber. Sewage disposal Biosolids are dewatered and heated through a specific process which includes raising the pH to a level of 12. Septage and grease trap sludge are dewatered with a simple screw press of the above stated design. Nutrient management programs dewater hog and cow manure for sale and commercial use. Food processing Alcohol solutions are squeezed from foods with screw presses (such as soybeans, protein, pectin, and xanthan gum). Food processing factories use screw presses to separate water from waste streams and convert the solid into animal feeds, for example sugar beet pulp, orange peel, and spent grain. Fish and orange peel dewatering often provide maximum yield when dewatered within a press of the interrupted flight design and with the addition of steam being injected into the material. Commonly, steam injection holes are drilled into the resistor teeth of the press close to the screw's shaft. PET bottles PET (polyethylene terephthalate) is the preferred packaging for soft drinks, fizzy drinks, juice and water. This results in large waste problems. For breweries, large volumes of discharged products need to be destroyed regularly in order to eliminate the risks of the bottles being resold again. For waste collectors, handling and transport are difficult and expensive, as there is a large discrepancy between weight and volume. Manufacturers of ice-cream have a need to destroy returned goods with expired dates and faulty manufacture to prevent the products being sold by mistake. Dairies destroy returned goods such as yogurt and other dairy products with expired dates and faulty manufacture. Chemical industry Within the chemical industry screw presses are used for "ABS, sodium alginate and carrageenan, synthetic rubber, synthetic resin, hydrated polymer, naphthalene, elastomeric adhesive, color film emulsion, CmC, pharmaceuticals" and more. 
Cosmetics: For many manufacturers and brands the challenge is often that they have large amounts of discarded products, which need 100% destruction to ensure no reselling on the black market. References Liquid-solid separation
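The Compressive Mechanisms section above can be made quantitative with a few lines of Python: the volume swept forward per screw revolution is the annular area between barrel and shaft multiplied by the flight pitch, so increasing the shaft diameter from 6 to 12 inches inside a 16-inch barrel roughly halves that volume. The numbers below reuse the example diameters from that section; the pitch value is an assumption for illustration only.

import math

def volume_per_rev(barrel_d, shaft_d, pitch):
    """Open annular volume moved forward per screw revolution (cubic inches)."""
    annulus_area = math.pi / 4.0 * (barrel_d**2 - shaft_d**2)
    return annulus_area * pitch

barrel = 16.0   # screw/screen diameter from the example above, inches
pitch = 8.0     # assumed constant flight pitch, inches (illustrative)

inlet = volume_per_rev(barrel, shaft_d=6.0, pitch=pitch)     # 5-inch-tall flights at the inlet
outlet = volume_per_rev(barrel, shaft_d=12.0, pitch=pitch)   # 2-inch-tall flights at the discharge

print(f"inlet {inlet:.0f} in^3/rev, outlet {outlet:.0f} in^3/rev, "
      f"compression ratio ~ {inlet / outlet:.1f}:1")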
Dewatering screw press
Chemistry
2,110
689,427
https://en.wikipedia.org/wiki/Latent%20semantic%20analysis
Latent semantic analysis (LSA) is a technique in natural language processing, in particular distributional semantics, of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text (the distributional hypothesis). A matrix containing word counts per document (rows represent unique words and columns represent each document) is constructed from a large piece of text and a mathematical technique called singular value decomposition (SVD) is used to reduce the number of rows while preserving the similarity structure among columns. Documents are then compared by cosine similarity between any two columns. Values close to 1 represent very similar documents while values close to 0 represent very dissimilar documents. An information retrieval technique using latent semantic structure was patented in 1988 by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer, Karen Lochbaum and Lynn Streeter. In the context of its application to information retrieval, it is sometimes called latent semantic indexing (LSI). Overview Occurrence matrix LSA can use a document-term matrix which describes the occurrences of terms in documents; it is a sparse matrix whose rows correspond to terms and whose columns correspond to documents. A typical example of the weighting of the elements of the matrix is tf-idf (term frequency–inverse document frequency): the weight of an element of the matrix is proportional to the number of times the terms appear in each document, where rare terms are upweighted to reflect their relative importance. This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used. Rank lowering After the construction of the occurrence matrix, LSA finds a low-rank approximation to the term-document matrix. There could be various reasons for these approximations: The original term-document matrix is presumed too large for the computing resources; in this case, the approximated low rank matrix is interpreted as an approximation (a "least and necessary evil"). The original term-document matrix is presumed noisy: for example, anecdotal instances of terms are to be eliminated. From this point of view, the approximated matrix is interpreted as a de-noisified matrix (a better matrix than the original). The original term-document matrix is presumed overly sparse relative to the "true" term-document matrix. That is, the original matrix lists only the words actually in each document, whereas we might be interested in all words related to each document—generally a much larger set due to synonymy. The consequence of the rank lowering is that some dimensions are combined and depend on more than one term: {(car), (truck), (flower)} → {(1.3452 * car + 0.2828 * truck), (flower)} This mitigates the problem of identifying synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also partially mitigates the problem with polysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. 
Conversely, components that point in other directions tend to either simply cancel out, or, at worst, to be smaller than components in the directions corresponding to the intended sense. Derivation Let X be a matrix where element (i, j) describes the occurrence of term i in document j (this can be, for example, the frequency). A row of X is then a vector t_i corresponding to a term, giving its relation to each document, and a column of X is a vector d_j corresponding to a document, giving its relation to each term. Now the dot product t_i · t_p between two term vectors gives the correlation between the terms over the set of documents. The matrix product X X^T contains all these dot products: element (i, p) (which is equal to element (p, i)) contains the dot product t_i · t_p. Likewise, the matrix X^T X contains the dot products between all the document vectors, giving their correlation over the terms: d_j · d_q. Now, from the theory of linear algebra, there exists a decomposition of X such that U and V are orthogonal matrices and Σ is a diagonal matrix. This is called a singular value decomposition (SVD): X = U Σ V^T. The matrix products giving us the term and document correlations then become X X^T = U Σ Σ^T U^T and X^T X = V Σ^T Σ V^T. Since Σ Σ^T and Σ^T Σ are diagonal, we see that U must contain the eigenvectors of X X^T, while V must contain the eigenvectors of X^T X. Both products have the same non-zero eigenvalues, given by the non-zero entries of Σ Σ^T, or equally, by the non-zero entries of Σ^T Σ. The diagonal entries σ_1, ..., σ_l of Σ are called the singular values, and the columns u_1, ..., u_l of U and v_1, ..., v_l of V the left and right singular vectors. Notice that the only part of U that contributes to t_i is the i-th row; let this row vector be called t̂_i. Likewise, the only part of V^T that contributes to d_j is the j-th column, d̂_j. These are not the eigenvectors, but depend on all the eigenvectors. It turns out that when you select the k largest singular values, and their corresponding singular vectors from U and V, you get the rank-k approximation to X with the smallest error (Frobenius norm). But more importantly we can now treat the term and document vectors as a "semantic space". The row "term" vector t̂_i then has k entries mapping it to a lower-dimensional space. These new dimensions do not relate to any comprehensible concepts; they are a lower-dimensional approximation of the higher-dimensional space. Likewise, the "document" vector d̂_j is an approximation in this lower-dimensional space. We write this approximation as X_k = U_k Σ_k V_k^T. You can now do the following: See how related documents j and q are in the low-dimensional space by comparing the vectors Σ_k d̂_j and Σ_k d̂_q (typically by cosine similarity). Compare terms i and p by comparing the vectors Σ_k t̂_i and Σ_k t̂_p (note that t̂_i is now a column vector). Documents and term vector representations can be clustered using traditional clustering algorithms like k-means using similarity measures like cosine. Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents: d̂_j = Σ_k^-1 U_k^T d_j. Note here that the inverse of the diagonal matrix Σ_k may be found by inverting each nonzero value within the matrix. This means that if you have a query vector q, you must do the translation q̂ = Σ_k^-1 U_k^T q before you compare it with the document vectors in the low-dimensional space, as sketched in the code below. 
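A minimal NumPy sketch of the derivation above — full SVD of a toy term-document matrix, truncation to rank k, and folding a query vector into the low-dimensional space — is given below. The toy counts and the choice k = 2 are arbitrary illustrations, not data from the article.

import numpy as np

# Toy term-document count matrix X: rows = terms, columns = documents.
X = np.array([[2, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 2, 1],
              [0, 0, 1, 2]], dtype=float)

# Full SVD: X = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Keep the k largest singular values (rank-k approximation with minimal Frobenius error).
k = 2
U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

doc_hat = Vt_k                                  # column j is d_hat_j, document j in the k-dim space

# Fold a query (a mini document of term counts) into the same space: q_hat = S_k^-1 U_k^T q
q = np.array([1, 0, 1, 0], dtype=float)
q_hat = np.linalg.inv(S_k) @ U_k.T @ q

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank documents by cosine similarity to the query in the low-dimensional space.
scores = [cosine(q_hat, doc_hat[:, j]) for j in range(doc_hat.shape[1])]
print(scores)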
You can do the same for pseudo term vectors: Applications The new low-dimensional space typically can be used to: Compare the documents in the low-dimensional space (data clustering, document classification). Find similar documents across languages, after analyzing a base set of translated documents (cross-language information retrieval). Find relations between terms (synonymy and polysemy). Given a query of terms, translate it into the low-dimensional space, and find matching documents (information retrieval). Find the best similarity between small groups of terms, in a semantic way (i.e. in a context of a knowledge corpus), as for example in multi choice questions MCQ answering model. Expand the feature space of machine learning / text mining systems Analyze word association in text corpus Synonymy and polysemy are fundamental problems in natural language processing: Synonymy is the phenomenon where different words describe the same idea. Thus, a query in a search engine may fail to retrieve a relevant document that does not contain the words which appeared in the query. For example, a search for "doctors" may not return a document containing the word "physicians", even though the words have the same meaning. Polysemy is the phenomenon where the same word has multiple meanings. So a search may retrieve irrelevant documents containing the desired words in the wrong meaning. For example, a botanist and a computer scientist looking for the word "tree" probably desire different sets of documents. Commercial applications LSA has been used to assist in performing prior art searches for patents. Applications in human memory The use of Latent Semantic Analysis has been prevalent in the study of human memory, especially in areas of free recall and memory search. There is a positive correlation between the semantic similarity of two words (as measured by LSA) and the probability that the words would be recalled one after another in free recall tasks using study lists of random common nouns. They also noted that in these situations, the inter-response time between the similar words was much quicker than between dissimilar words. These findings are referred to as the Semantic Proximity Effect. When participants made mistakes in recalling studied items, these mistakes tended to be items that were more semantically related to the desired item and found in a previously studied list. These prior-list intrusions, as they have come to be called, seem to compete with items on the current list for recall. Another model, termed Word Association Spaces (WAS) is also used in memory studies by collecting free association data from a series of experiments and which includes measures of word relatedness for over 72,000 distinct word pairs. Implementation The SVD is typically computed using large matrix methods (for example, Lanczos methods) but may also be computed incrementally and with greatly reduced resources via a neural network-like approach, which does not require the large, full-rank matrix to be held in memory. A fast, incremental, low-memory, large-matrix SVD algorithm has been developed. MATLAB and Python implementations of these fast algorithms are available. Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's algorithm (2003) provides an exact solution. 
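As one concrete illustration of this pipeline and of the document-clustering application listed earlier, the sketch below chains tf-idf weighting, truncated SVD and k-means using scikit-learn, whose TruncatedSVD transformer is commonly used for LSA. The miniature corpus and parameter values are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer

# Tiny invented corpus; real applications use thousands of documents and ~100-300 dimensions.
docs = [
    "the patient was examined by a doctor",
    "physicians examined the patient in the clinic",
    "the tree has green leaves and deep roots",
    "a binary tree stores nodes for fast search",
]

# tf-idf occurrence matrix -> truncated SVD (the LSA step) -> length normalization.
lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2), Normalizer(copy=False))
doc_vectors = lsa.fit_transform(docs)          # one low-dimensional vector per document

# Cluster the documents in the semantic space (the data-clustering application above).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(doc_vectors)
print(labels)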
In recent years progress has been made to reduce the computational complexity of SVD; for instance, by using a parallel ARPACK algorithm to perform parallel eigenvalue decomposition it is possible to speed up the SVD computation cost while providing comparable prediction quality. Limitations Some of LSA's drawbacks include: The resulting dimensions might be difficult to interpret. For instance, in {(car), (truck), (flower)} ↦ {(1.3452 * car + 0.2828 * truck), (flower)} the (1.3452 * car + 0.2828 * truck) component could be interpreted as "vehicle". However, it is very likely that cases close to {(car), (bottle), (flower)} ↦ {(1.3452 * car + 0.2828 * bottle), (flower)} will occur. This leads to results which can be justified on the mathematical level, but have no immediately obvious meaning in natural language. Though, the (1.3452 * car + 0.2828 * bottle) component could be justified because both bottles and cars have transparent and opaque parts, are man made and with high probability contain logos/words on their surface; thus, in many ways these two concepts "share semantics." That is, within a language in question, there may not be a readily available word to assign and explainability becomes an analysis task as opposed to simple word/class/concept assignment task. LSA can only partially capture polysemy (i.e., multiple meanings of a word) because each occurrence of a word is treated as having the same meaning due to the word being represented as a single point in space. For example, the occurrence of "chair" in a document containing "The Chair of the Board" and in a separate document containing "the chair maker" are considered the same. The behavior results in the vector representation being an average of all the word's different meanings in the corpus, which can make it difficult for comparison. However, the effect is often lessened due to words having a predominant sense throughout a corpus (i.e. not all meanings are equally likely). Limitations of bag of words model (BOW), where a text is represented as an unordered collection of words. To address some of the limitation of bag of words model (BOW), multi-gram dictionary can be used to find direct and indirect association as well as higher-order co-occurrences among terms. The probabilistic model of LSA does not match observed data: LSA assumes that words and documents form a joint Gaussian model (ergodic hypothesis), while a Poisson distribution has been observed. Thus, a newer alternative is probabilistic latent semantic analysis, based on a multinomial model, which is reported to give better results than standard LSA. Alternative methods Semantic hashing In semantic hashing documents are mapped to memory addresses by means of a neural network in such a way that semantically similar documents are located at nearby addresses. Deep neural network essentially builds a graphical model of the word-count vectors obtained from a large set of documents. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much faster than locality sensitive hashing, which is the fastest current method. 
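The retrieval step of semantic hashing — treating documents whose binary codes differ in only a few bits as neighbours — can be sketched in a few lines of Python. The codes below are invented; producing good codes is the job of the trained network described above and is not shown.

# Hypothetical 8-bit semantic codes for three documents and a query, e.g. from a trained autoencoder.
codes = {
    "doc_a": 0b10110010,
    "doc_b": 0b10110011,   # differs from doc_a by one bit, so it is treated as semantically close
    "doc_c": 0b01001100,
    "query": 0b10110110,
}

def hamming(x: int, y: int) -> int:
    """Number of differing bits between two codes."""
    return bin(x ^ y).count("1")

# Retrieve every document whose address differs from the query's by at most 2 bits.
radius = 2
neighbours = [d for d in ("doc_a", "doc_b", "doc_c")
              if hamming(codes["query"], codes[d]) <= radius]
print(neighbours)   # doc_a and doc_b fall inside the Hamming ball; doc_c does not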
Latent semantic indexing Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts. LSI is also an application of correspondence analysis, a multivariate statistical technique developed by Jean-Paul Benzécri in the early 1970s, to a contingency table built from word counts in documents. Called "latent semantic indexing" because of its ability to correlate semantically related terms that are latent in a collection of text, it was first applied to text at Bellcore in the late 1980s. The method, also called latent semantic analysis (LSA), uncovers the underlying latent semantic structure in the usage of words in a body of text and how it can be used to extract the meaning of the text in response to user queries, commonly referred to as concept searches. Queries, or concept searches, against a set of documents that have undergone LSI will return results that are conceptually similar in meaning to the search criteria even if the results don’t share a specific word or words with the search criteria. Benefits of LSI LSI helps overcome synonymy by increasing recall, one of the most problematic constraints of Boolean keyword queries and vector space models. Synonymy is often the cause of mismatches in the vocabulary used by the authors of documents and the users of information retrieval systems. As a result, Boolean or keyword queries often return irrelevant results and miss information that is relevant. LSI is also used to perform automated document categorization. In fact, several experiments have demonstrated that there are a number of correlations between the way LSI and humans process and categorize text. Document categorization is the assignment of documents to one or more predefined categories based on their similarity to the conceptual content of the categories. LSI uses example documents to establish the conceptual basis for each category. During categorization processing, the concepts contained in the documents being categorized are compared to the concepts contained in the example items, and a category (or categories) is assigned to the documents based on the similarities between the concepts they contain and the concepts that are contained in the example documents. Dynamic clustering based on the conceptual content of documents can also be accomplished using LSI. Clustering is a way to group documents based on their conceptual similarity to each other without using example documents to establish the conceptual basis for each cluster. This is very useful when dealing with an unknown collection of unstructured text. Because it uses a strictly mathematical approach, LSI is inherently independent of language. This enables LSI to elicit the semantic content of information written in any language without requiring the use of auxiliary structures, such as dictionaries and thesauri. LSI can also perform cross-linguistic concept searching and example-based categorization. 
For example, queries can be made in one language, such as English, and conceptually similar results will be returned even if they are composed of an entirely different language or of multiple languages. LSI is not restricted to working only with words. It can also process arbitrary character strings. Any object that can be expressed as text can be represented in an LSI vector space. For example, tests with MEDLINE abstracts have shown that LSI is able to effectively classify genes based on conceptual modeling of the biological information contained in the titles and abstracts of the MEDLINE citations. LSI automatically adapts to new and changing terminology, and has been shown to be very tolerant of noise (i.e., misspelled words, typographical errors, unreadable characters, etc.). This is especially important for applications using text derived from Optical Character Recognition (OCR) and speech-to-text conversion. LSI also deals effectively with sparse, ambiguous, and contradictory data. Text does not need to be in sentence form for LSI to be effective. It can work with lists, free-form notes, email, Web-based content, etc. As long as a collection of text contains multiple terms, LSI can be used to identify patterns in the relationships between the important terms and concepts contained in the text. LSI has proven to be a useful solution to a number of conceptual matching problems. The technique has been shown to capture key relationship information, including causal, goal-oriented, and taxonomic information. LSI timeline Mid-1960s – Factor analysis technique first described and tested (H. Borko and M. Bernick) 1988 – Seminal paper on LSI technique published 1989 – Original patent granted 1992 – First use of LSI to assign articles to reviewers 1994 – Patent granted for the cross-lingual application of LSI (Landauer et al.) 1995 – First use of LSI for grading essays (Foltz, et al., Landauer et al.) 1999 – First implementation of LSI technology for intelligence community for analyzing unstructured text (SAIC). 2002 – LSI-based product offering to intelligence-based government agencies (SAIC) Mathematics of LSI LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text. In general, the process involves constructing a weighted term-document matrix, performing a Singular Value Decomposition on the matrix, and using the matrix to identify the concepts contained in the text. Term-document matrix LSI begins by constructing a term-document matrix, A, to identify the occurrences of the unique terms within a collection of documents. In a term-document matrix, each term is represented by a row, and each document is represented by a column, with each matrix cell, a_ij, initially representing the number of times the associated term i appears in the indicated document j. This matrix is usually very large and very sparse. Once a term-document matrix is constructed, local and global weighting functions can be applied to it to condition the data. The weighting functions transform each cell, a_ij of A, to be the product of a local term weight, l_ij, which describes the relative frequency of a term in a document, and a global weight, g_i, which describes the relative frequency of the term within the entire collection of documents. Some common local weighting functions are defined in the following table. Some common global weighting functions are defined in the following table. 
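One widely used combination, and the one the next sentence reports as working well in practice, is a log local weight with an entropy global weight. The NumPy sketch below computes that weighting on a toy count matrix; the formulas follow the common log-entropy definition and should be read as an illustrative assumption rather than a quotation of the article's weighting tables.

import numpy as np

# Raw term-document count matrix: rows = terms, columns = documents (toy numbers).
tf = np.array([[2, 0, 1, 0],
               [1, 1, 0, 0],
               [0, 1, 2, 1]], dtype=float)

n_docs = tf.shape[1]
gf = tf.sum(axis=1, keepdims=True)          # total count of each term over the whole collection

# Local log weight: l_ij = log(1 + tf_ij)
local = np.log1p(tf)

# Global entropy weight: g_i = 1 + sum_j (p_ij * log p_ij) / log(n_docs), with p_ij = tf_ij / gf_i
p = np.divide(tf, gf, out=np.zeros_like(tf), where=gf > 0)
with np.errstate(divide="ignore", invalid="ignore"):
    plogp = np.where(p > 0, p * np.log(p), 0.0)
entropy_global = 1.0 + plogp.sum(axis=1) / np.log(n_docs)

# Weighted matrix handed to the SVD step: a_ij = g_i * l_ij
A = entropy_global[:, np.newaxis] * local
print(np.round(A, 3))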
Empirical studies with LSI report that the Log and Entropy weighting functions work well, in practice, with many data sets. In other words, each entry a_ij of A is computed as: a_ij = g_i × l_ij. Rank-reduced singular value decomposition A rank-reduced, singular value decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI. It computes the term and document vector spaces by approximating the single term-frequency matrix, A, into three other matrices—an m by r term-concept vector matrix T, an r by r singular values matrix S, and a n by r concept-document vector matrix D, which satisfy the relation A = T S D^T. In the formula, A is the supplied m by n weighted matrix of term frequencies in a collection of text where m is the number of unique terms, and n is the number of documents. T is a computed m by r matrix of term vectors where r is the rank of A—a measure of its unique dimensions ≤ min(m,n). S is a computed r by r diagonal matrix of decreasing singular values, and D is a computed n by r matrix of document vectors. The SVD is then truncated to reduce the rank by keeping only the largest k ≪ r diagonal entries in the singular value matrix S, where k is typically on the order of 100 to 300 dimensions. This effectively reduces the term and document vector matrix sizes to m by k and n by k respectively. The SVD operation, along with this reduction, has the effect of preserving the most important semantic information in the text while reducing noise and other undesirable artifacts of the original space of A. This reduced set of matrices is often denoted with a modified formula such as: A ≈ A_k = T_k S_k D_k^T. Efficient LSI algorithms only compute the first k singular values and term and document vectors as opposed to computing a full SVD and then truncating it. Note that this rank reduction is essentially the same as doing Principal Component Analysis (PCA) on the matrix A, except that PCA subtracts off the means. PCA loses the sparseness of the A matrix, which can make it infeasible for large lexicons. Querying and augmenting LSI vector spaces The computed T_k and D_k matrices define the term and document vector spaces, which with the computed singular values, S_k, embody the conceptual information derived from the document collection. The similarity of terms or documents within these spaces is a factor of how close they are to each other in these spaces, typically computed as a function of the angle between the corresponding vectors. The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index. By a simple transformation of the A = T S D^T equation into the equivalent D = A^T T S^-1 equation, a new vector, d, for a query or for a new document can be created by computing a new column in A and then multiplying the new column by T S^-1. The new column in A is computed using the originally derived global term weights and applying the same local weighting function to the terms in the query or in the new document. A drawback to computing vectors in this way, when adding new searchable documents, is that terms that were not known during the SVD phase for the original index are ignored. These terms will have no impact on the global weights and learned correlations derived from the original collection of text. However, the computed vectors for the new text are still very relevant for similarity comparisons with all other document vectors. 
The process of augmenting the document vector spaces for an LSI index with new documents in this manner is called folding in. Although the folding-in process does not account for the new semantic content of the new text, adding a substantial number of documents in this way will still provide good results for queries as long as the terms and concepts they contain are well represented within the LSI index to which they are being added. When the terms and concepts of a new set of documents need to be included in an LSI index, either the term-document matrix, and the SVD, must be recomputed or an incremental update method (such as the one described in ) is needed. Additional uses of LSI It is generally acknowledged that the ability to work with text on a semantic basis is essential to modern information retrieval systems. As a result, the use of LSI has significantly expanded in recent years as earlier challenges in scalability and performance have been overcome. LSI is being used in a variety of information retrieval and text processing applications, although its primary application has been for concept searching and automated document categorization. Below are some other ways in which LSI is being used: Information discovery (eDiscovery, Government/Intelligence community, Publishing) Automated document classification (eDiscovery, Government/Intelligence community, Publishing) Text summarization (eDiscovery, Publishing) Relationship discovery (Government, Intelligence community, Social Networking) Automatic generation of link charts of individuals and organizations (Government, Intelligence community) Matching technical papers and grants with reviewers (Government) Online customer support (Customer Management) Determining document authorship (Education) Automatic keyword annotation of images Understanding software source code (Software Engineering) Filtering spam (System Administration) Information visualization Essay scoring (Education) Literature-based discovery Stock returns prediction Dream Content Analysis (Psychology) LSI is increasingly being used for electronic document discovery (eDiscovery) to help enterprises prepare for litigation. In eDiscovery, the ability to cluster, categorize, and search large collections of unstructured text on a conceptual basis is essential. Concept-based searching using LSI has been applied to the eDiscovery process by leading providers as early as 2003. Challenges to LSI Early challenges to LSI focused on scalability and performance. LSI requires relatively high computational performance and memory in comparison to other information retrieval techniques. However, with the implementation of modern high-speed processors and the availability of inexpensive memory, these considerations have been largely overcome. Real-world applications involving more than 30 million documents that were fully processed through the matrix and SVD computations are common in some LSI applications. A fully scalable (unlimited number of documents, online training) implementation of LSI is contained in the open source gensim software package. Another challenge to LSI has been the alleged difficulty in determining the optimal number of dimensions to use for performing the SVD. As a general rule, fewer dimensions allow for broader comparisons of the concepts contained in a collection of text, while a higher number of dimensions enable more specific (or more relevant) comparisons of concepts. 
The actual number of dimensions that can be used is limited by the number of documents in the collection. Research has demonstrated that around 300 dimensions will usually provide the best results with moderate-sized document collections (hundreds of thousands of documents) and perhaps 400 dimensions for larger document collections (millions of documents). However, recent studies indicate that 50-1000 dimensions are suitable depending on the size and nature of the document collection. Checking the proportion of variance retained, similar to PCA or factor analysis, to determine the optimal dimensionality is not suitable for LSI. Using a synonym test or prediction of missing words are two possible methods to find the correct dimensionality. When LSI topics are used as features in supervised learning methods, one can use prediction error measurements to find the ideal dimensionality. See also Coh-Metrix Compound term processing Distributional semantics Explicit semantic analysis Latent semantic mapping Latent semantic structure indexing Principal components analysis Probabilistic latent semantic analysis Spamdexing Word vector Topic model Latent Dirichlet allocation References Further reading Original article where the model was first exposed. (PDF) . Illustration of the application of LSA to document retrieval. External links Articles on LSA Latent Semantic Analysis, a scholarpedia article on LSA written by Tom Landauer, one of the creators of LSA. Talks and demonstrations LSA Overview, talk by Prof. Thomas Hofmann describing LSA, its applications in Information Retrieval, and its connections to probabilistic latent semantic analysis. Complete LSA sample code in C# for Windows. The demo code includes enumeration of text files, filtering stop words, stemming, making a document-term matrix and SVD. Implementations Due to its cross-domain applications in Information Retrieval, Natural Language Processing (NLP), Cognitive Science and Computational Linguistics, LSA has been implemented to support many different kinds of applications. Sense Clusters, an Information Retrieval-oriented perl implementation of LSA S-Space Package, a Computational Linguistics and Cognitive Science-oriented Java implementation of LSA Semantic Vectors applies Random Projection, LSA, and Reflective Random Indexing to Lucene term-document matrices Infomap Project, an NLP-oriented C implementation of LSA (superseded by semanticvectors project) Text to Matrix Generator , A MATLAB Toolbox for generating term-document matrices from text collections, with support for LSA Gensim contains a Python implementation of LSA for matrices larger than RAM. Information retrieval techniques Natural language processing Latent variable models Semantic relations
Latent semantic analysis
Technology
5,954
39,567,805
https://en.wikipedia.org/wiki/Cotoneaster%20%C3%97%20watereri
Cotoneaster × watereri, or Waterer's cotoneaster, is a large evergreen shrub belonging to the genus Cotoneaster. It is an artificial hybrid, initially of Cotoneaster frigidus, Cotoneaster henrianus and Cotoneaster salicifolius. Cotoneaster rugosus and Cotoneaster sargentii were probably also involved later. Description Cotoneaster × watereri is about 4 m tall, up to 8 m at maturity. Leaves are elliptical, dark green, up to 12 cm long and 3 cm wide. This plant bears large, attractive inflorescences of small white flowers and large, spherical, coral-red berries of about 6–9 mm. It is in flower from June to July. References Lingdi L. & Brach A.R. Cotoneaster Dickoré W.B. & Kasperek Species of Cotoneaster (Rosaceae, Maloideae) indigenous to, naturalising or commonly cultivated in Central Europe Manual of the alien plants of Belgium watereri Hybrid plants
Cotoneaster × watereri
Biology
212
40,654,575
https://en.wikipedia.org/wiki/Affinine
Affinine is a monoterpenoid indole alkaloid which can be isolated from plants of the genus Tabernaemontana. Structurally it can be considered a member of the vobasine alkaloid family and may be synthesized from tryptophan. Limited pharmacological testing has indicated that it may be an effective inhibitor of both acetylcholinesterase and butyrylcholinesterase. See also Affinisine References Acetylcholinesterase inhibitors Alkaloids found in Apocynaceae
Affinine
Chemistry
112
3,876
https://en.wikipedia.org/wiki/Binomial%20distribution
In probability theory and statistics, the binomial distribution with parameters and is the discrete probability distribution of the number of successes in a sequence of independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success (with probability ) or failure (with probability ). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., , the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the binomial test of statistical significance. The binomial distribution is frequently used to model the number of successes in a sample of size drawn with replacement from a population of size . If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for much larger than , the binomial distribution remains a good approximation, and is widely used. Definitions Probability mass function If the random variable follows the binomial distribution with parameters and , we write . The probability of getting exactly successes in independent Bernoulli trials (with the same rate ) is given by the probability mass function: for , where is the binomial coefficient. The formula can be understood as follows: is the probability of obtaining the sequence of independent Bernoulli trials in which trials are "successes" and the remaining trials result in "failure". Since the trials are independent with probabilities remaining constant between them, any sequence of trials with successes (and failures) has the same probability of being achieved (regardless of positions of successes within the sequence). There are such sequences, since the binomial coefficient counts the number of ways to choose the positions of the successes among the trials. The binomial distribution is concerned with the probability of obtaining any of these sequences, meaning the probability of obtaining one of them () must be added times, hence . In creating reference tables for binomial distribution probability, usually, the table is filled in up to values. This is because for , the probability can be calculated by its complement as Looking at the expression as a function of , there is a value that maximizes it. This value can be found by calculating and comparing it to 1. There is always an integer that satisfies is monotone increasing for and monotone decreasing for , with the exception of the case where is an integer. In this case, there are two values for which is maximal: and . is the most probable outcome (that is, the most likely, although this can still be unlikely overall) of the Bernoulli trials and is called the mode. Equivalently, . Taking the floor function, we obtain . Example Suppose a biased coin comes up heads with probability 0.3 when tossed. The probability of seeing exactly 4 heads in 6 tosses is Cumulative distribution function The cumulative distribution function can be expressed as: where is the "floor" under , i.e. the greatest integer less than or equal to . It can also be represented in terms of the regularized incomplete beta function, as follows: which is equivalent to the cumulative distribution functions of the beta distribution and of the -distribution: Some closed-form bounds for the cumulative distribution function are given below. 
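The probability mass function and the biased-coin example above can be reproduced with a short Python sketch; only the values stated in the example (p = 0.3, exactly 4 heads in 6 tosses) are taken from the text, and the helper names are illustrative.

    from math import comb

    def binom_pmf(k, n, p):
        # P(X = k) for X ~ B(n, p)
        return comb(n, k) * p**k * (1 - p)**(n - k)

    print(binom_pmf(4, 6, 0.3))   # about 0.0595 (= 15 * 0.3**4 * 0.7**2)

    # Cumulative distribution function F(k) = P(X <= k) by direct summation.
    def binom_cdf(k, n, p):
        return sum(binom_pmf(i, n, p) for i in range(k + 1))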
Properties Expected value and variance If , that is, is a binomially distributed random variable, being the total number of experiments and p the probability of each experiment yielding a successful result, then the expected value of is: This follows from the linearity of the expected value along with the fact that is the sum of identical Bernoulli random variables, each with expected value . In other words, if are identical (and independent) Bernoulli random variables with parameter , then and The variance is: This similarly follows from the fact that the variance of a sum of independent random variables is the sum of the variances. Higher moments The first 6 central moments, defined as , are given by The non-central moments satisfy and in general where are the Stirling numbers of the second kind, and is the th falling power of . A simple bound follows by bounding the Binomial moments via the higher Poisson moments: This shows that if , then is at most a constant factor away from Mode Usually the mode of a binomial distribution is equal to , where is the floor function. However, when is an integer and is neither 0 nor 1, then the distribution has two modes: and . When is equal to 0 or 1, the mode will be 0 and correspondingly. These cases can be summarized as follows: Proof: Let For only has a nonzero value with . For we find and for . This proves that the mode is 0 for and for . Let . We find . From this follows So when is an integer, then and is a mode. In the case that , then only is a mode. Median In general, there is no single formula to find the median for a binomial distribution, and it may even be non-unique. However, several special results have been established: If is an integer, then the mean, median, and mode coincide and equal . Any median must lie within the interval . A median cannot lie too far away from the mean: . The median is unique and equal to when (except for the case when and is odd). When is a rational number (with the exception of \ and odd) the median is unique. When and is odd, any number in the interval is a median of the binomial distribution. If and is even, then is the unique median. Tail bounds For , upper bounds can be derived for the lower tail of the cumulative distribution function , the probability that there are at most successes. Since , these bounds can also be seen as bounds for the upper tail of the cumulative distribution function for . Hoeffding's inequality yields the simple bound which is however not very tight. In particular, for , we have that (for fixed , with ), but Hoeffding's bound evaluates to a positive constant. A sharper bound can be obtained from the Chernoff bound: where is the relative entropy (or Kullback-Leibler divergence) between an -coin and a -coin (i.e. between the and distribution): Asymptotically, this bound is reasonably tight; see for details. One can also obtain lower bounds on the tail , known as anti-concentration bounds. By approximating the binomial coefficient with Stirling's formula it can be shown that which implies the simpler but looser bound For and for even , it is possible to make the denominator constant: Statistical inference Estimation of parameters When is known, the parameter can be estimated using the proportion of successes: This estimator is found using maximum likelihood estimator and also the method of moments. This estimator is unbiased and uniformly with minimum variance, proven using Lehmann–Scheffé theorem, since it is based on a minimal sufficient and complete statistic (i.e.: ). 
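A brief numerical sketch of several quantities discussed above, assuming the standard closed forms (mean n*p, variance n*p*(1-p), mode floor((n+1)*p) in the non-tie case, and a Chernoff-type lower-tail bound exp(-n*D(k/n || p))); the values of n, p and k below are illustrative assumptions.

    from math import floor, log, exp, comb

    n, p = 20, 0.4
    mean, var = n * p, n * p * (1 - p)
    mode = floor((n + 1) * p)

    def kl(a, q):
        # Relative entropy D(a || q) between an a-coin and a q-coin.
        return a * log(a / q) + (1 - a) * log((1 - a) / (1 - q))

    k = 3                                   # a point in the lower tail (k <= n*p)
    chernoff_bound = exp(-n * kl(k / n, p))
    exact_tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))
    print(exact_tail, chernoff_bound)       # the exact tail lies below the bound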
It is also consistent both in probability and in MSE. This statistic is asymptotically normal thanks to the central limit theorem, because it is the same as taking the mean over Bernoulli samples. It has a variance of , a property which is used in various ways, such as in Wald's confidence intervals. A closed form Bayes estimator for also exists when using the Beta distribution as a conjugate prior distribution. When using a general as a prior, the posterior mean estimator is: The Bayes estimator is asymptotically efficient and as the sample size approaches infinity (), it approaches the MLE solution. The Bayes estimator is biased (how much depends on the priors), admissible and consistent in probability. Using the Bayesian estimator with the Beta distribution can be used with Thompson sampling. For the special case of using the standard uniform distribution as a non-informative prior, , the posterior mean estimator becomes: (A posterior mode should just lead to the standard estimator.) This method is called the rule of succession, which was introduced in the 18th century by Pierre-Simon Laplace. When relying on Jeffreys prior, the prior is , which leads to the estimator: When estimating with very rare events and a small (e.g.: if ), then using the standard estimator leads to which sometimes is unrealistic and undesirable. In such cases there are various alternative estimators. One way is to use the Bayes estimator , leading to: Another method is to use the upper bound of the confidence interval obtained using the rule of three: Confidence intervals for the parameter p Even for quite large values of n, the actual distribution of the mean is significantly nonnormal. Because of this problem several methods to estimate confidence intervals have been proposed. In the equations for confidence intervals below, the variables have the following meaning: n1 is the number of successes out of n, the total number of trials is the proportion of successes is the quantile of a standard normal distribution (i.e., probit) corresponding to the target error rate . For example, for a 95% confidence level the error  = 0.05, so  = 0.975 and  = 1.96. Wald method A continuity correction of may be added. Agresti–Coull method Here the estimate of is modified to This method works well for and . See here for . For use the Wilson (score) method below. Arcsine method Wilson (score) method The notation in the formula below differs from the previous formulas in two respects: Firstly, has a slightly different interpretation in the formula below: it has its ordinary meaning of 'the th quantile of the standard normal distribution', rather than being a shorthand for 'the th quantile'. Secondly, this formula does not use a plus-minus to define the two bounds. Instead, one may use to get the lower bound, or use to get the upper bound. For example: for a 95% confidence level the error  = 0.05, so one gets the lower bound by using , and one gets the upper bound by using . Comparison The so-called "exact" (Clopper–Pearson) method is the most conservative. (Exact does not mean perfectly accurate; rather, it indicates that the estimates will not be less conservative than the true value.) The Wald method, although commonly recommended in textbooks, is the most biased. 
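As an illustration of two of the interval methods above, the following is a minimal Python sketch of the Wald and Wilson (score) intervals, using z = 1.96 for a 95% confidence level as in the text; the example counts (7 successes out of 20 trials) are illustrative assumptions.

    from math import sqrt

    def wald_ci(successes, n, z=1.96):
        phat = successes / n
        half = z * sqrt(phat * (1 - phat) / n)
        return phat - half, phat + half          # can extend past [0, 1] near the extremes

    def wilson_ci(successes, n, z=1.96):
        phat = successes / n
        denom = 1 + z**2 / n
        center = phat + z**2 / (2 * n)
        half = z * sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2))
        return (center - half) / denom, (center + half) / denom

    print(wald_ci(7, 20))
    print(wilson_ci(7, 20))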
Related distributions Sums of binomials If and are independent binomial variables with the same probability , then is again a binomial variable; its distribution is : A Binomial distributed random variable can be considered as the sum of Bernoulli distributed random variables. So the sum of two Binomial distributed random variables and is equivalent to the sum of Bernoulli distributed random variables, which means . This can also be proven directly using the addition rule. However, if and do not have the same probability , then the variance of the sum will be smaller than the variance of a binomial variable distributed as . Poisson binomial distribution The binomial distribution is a special case of the Poisson binomial distribution, which is the distribution of a sum of independent non-identical Bernoulli trials . Ratio of two binomial distributions This result was first derived by Katz and coauthors in 1978. Let and be independent. Let . Then log(T) is approximately normally distributed with mean log(p1/p2) and variance . Conditional binomials If X ~ B(n, p) and Y | X ~ B(X, q) (the conditional distribution of Y, given X), then Y is a simple binomial random variable with distribution Y ~ B(n, pq). For example, imagine throwing n balls to a basket UX and taking the balls that hit and throwing them to another basket UY. If p is the probability to hit UX then X ~ B(n, p) is the number of balls that hit UX. If q is the probability to hit UY then the number of balls that hit UY is Y ~ B(X, q) and therefore Y ~ B(n, pq). Since and , by the law of total probability, Since the equation above can be expressed as Factoring and pulling all the terms that don't depend on out of the sum now yields After substituting in the expression above, we get Notice that the sum (in the parentheses) above equals by the binomial theorem. Substituting this in finally yields and thus as desired. Bernoulli distribution The Bernoulli distribution is a special case of the binomial distribution, where . Symbolically, has the same meaning as . Conversely, any binomial distribution, , is the distribution of the sum of independent Bernoulli trials, , each with the same probability . Normal approximation If is large enough, then the skew of the distribution is not too great. In this case a reasonable approximation to is given by the normal distribution and this basic approximation can be improved in a simple way by using a suitable continuity correction. The basic approximation generally improves as increases (at least 20) and is better when is not near to 0 or 1. Various rules of thumb may be used to decide whether is large enough, and is far enough from the extremes of zero or one: One rule is that for the normal approximation is adequate if the absolute value of the skewness is strictly less than 0.3; that is, if This can be made precise using the Berry–Esseen theorem. A stronger rule states that the normal approximation is appropriate only if everything within 3 standard deviations of its mean is within the range of possible values; that is, only if This 3-standard-deviation rule is equivalent to the following conditions, which also imply the first rule above. The rule is totally equivalent to request that Moving terms around yields: Since , we can apply the square power and divide by the respective factors and , to obtain the desired conditions: Notice that these conditions automatically imply that . 
On the other hand, apply again the square root and divide by 3, Subtracting the second set of inequalities from the first one yields: and so, the desired first rule is satisfied, Another commonly used rule is that both values and must be greater than or equal to 5. However, the specific number varies from source to source, and depends on how good an approximation one wants. In particular, if one uses 9 instead of 5, the rule implies the results stated in the previous paragraphs. Assume that both values and are greater than 9. Since , we easily have that We only have to divide now by the respective factors and , to deduce the alternative form of the 3-standard-deviation rule: The following is an example of applying a continuity correction. Suppose one wishes to calculate for a binomial random variable . If has a distribution given by the normal approximation, then is approximated by . The addition of 0.5 is the continuity correction; the uncorrected normal approximation gives considerably less accurate results. This approximation, known as de Moivre–Laplace theorem, is a huge time-saver when undertaking calculations by hand (exact calculations with large are very onerous); historically, it was the first use of the normal distribution, introduced in Abraham de Moivre's book The Doctrine of Chances in 1738. Nowadays, it can be seen as a consequence of the central limit theorem since is a sum of independent, identically distributed Bernoulli variables with parameter . This fact is the basis of a hypothesis test, a "proportion z-test", for the value of using , the sample proportion and estimator of , in a common test statistic. For example, suppose one randomly samples people out of a large population and ask them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If groups of n people were sampled repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion p of agreement in the population and with standard deviation Poisson approximation The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while the product converges to a finite limit. Therefore, the Poisson distribution with parameter can be used as an approximation to of the binomial distribution if is sufficiently large and is sufficiently small. According to rules of thumb, this approximation is good if and such that , or if and such that , or if and . Concerning the accuracy of Poisson approximation, see Novak, ch. 4, and references therein. Limiting distributions Poisson limit theorem: As approaches and approaches 0 with the product held fixed, the distribution approaches the Poisson distribution with expected value . de Moivre–Laplace theorem: As approaches while remains fixed, the distribution of approaches the normal distribution with expected value 0 and variance 1. This result is sometimes loosely stated by saying that the distribution of is asymptotically normal with expected value 0 and variance 1. This result is a specific case of the central limit theorem. Beta distribution The binomial distribution and beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the PMF of successes given independent events each with a probability of success. 
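The continuity-corrected normal approximation and the Poisson approximation discussed above can be compared against the exact binomial CDF with a short sketch; the parameters n, p and x below are illustrative assumptions, not the elided values from the text's example.

    from math import comb, erf, exp, sqrt, factorial

    def binom_cdf(x, n, p):
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(x + 1))

    def normal_cdf(z):
        return 0.5 * (1 + erf(z / sqrt(2)))

    def poisson_cdf(x, lam):
        return sum(exp(-lam) * lam**i / factorial(i) for i in range(x + 1))

    n, p, x = 100, 0.1, 12
    exact = binom_cdf(x, n, p)
    normal = normal_cdf((x + 0.5 - n * p) / sqrt(n * p * (1 - p)))  # +0.5 continuity correction
    poisson = poisson_cdf(x, n * p)
    print(exact, normal, poisson)   # all three values should be close for these parameters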
Mathematically, when and , the beta distribution and the binomial distribution are related by a factor of : Beta distributions also provide a family of prior probability distributions for binomial distributions in Bayesian inference: Given a uniform prior, the posterior distribution for the probability of success given independent events with observed successes is a beta distribution. Computational methods Random number generation Methods for random number generation where the marginal distribution is a binomial distribution are well-established. One way to generate random variates samples from a binomial distribution is to use an inversion algorithm. To do so, one must calculate the probability that for all values from through . (These probabilities should sum to a value close to one, in order to encompass the entire sample space.) Then by using a pseudorandom number generator to generate samples uniformly between 0 and 1, one can transform the calculated samples into discrete numbers by using the probabilities calculated in the first step. History This distribution was derived by Jacob Bernoulli. He considered the case where where is the probability of success and and are positive integers. Blaise Pascal had earlier considered the case where , tabulating the corresponding binomial coefficients in what is now recognized as Pascal's triangle. See also Logistic regression Multinomial distribution Negative binomial distribution Beta-binomial distribution Binomial measure, an example of a multifractal measure. Statistical mechanics Piling-up lemma, the resulting probability when XOR-ing independent Boolean variables References Further reading External links Interactive graphic: Univariate Distribution Relationships Binomial distribution formula calculator Difference of two binomial variables: X-Y or |X-Y| Querying the binomial probability distribution in WolframAlpha Confidence (credible) intervals for binomial probability, p: online calculator available at causaScientia.org Discrete distributions Factorial and binomial topics Conjugate prior distributions Exponential family distributions
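A minimal sketch of the inversion algorithm for random number generation described above; the parameters n and p and the sample count are illustrative assumptions.

    import random
    from math import comb

    def binomial_inversion_sample(n, p):
        # Draw a uniform number and map it to the smallest k whose cumulative
        # probability P(X <= k) exceeds it.
        u = random.random()
        cumulative = 0.0
        for k in range(n + 1):
            cumulative += comb(n, k) * p**k * (1 - p)**(n - k)
            if u <= cumulative:
                return k
        return n   # guard against floating-point round-off

    samples = [binomial_inversion_sample(10, 0.25) for _ in range(10_000)]
    print(sum(samples) / len(samples))   # should be near n*p = 2.5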
Binomial distribution
Mathematics
4,007
53,800
https://en.wikipedia.org/wiki/TacTix
TacTix is a two-player strategy game invented by Piet Hein, a poet well known for dabbling in math and science, best known for his game Hex. TacTix is essentially a two-dimensional version of Nim; players alternate moves, removing one or more tokens in a single row or column until the last one is removed. When it was first introduced, TacTix was played on a 6x6 board, but it is now usually played on a 4x4 board. The game can be played in both its misère and non-misère forms. The strategies outlined here make the non-misère variant of the game straightforward. The game is often used as a programming exercise, and many versions are available on the web as Java applets. Game play TacTix is played on an NxN grid of squares, where N was initially 6 but is now more commonly 4. Players alternate removing pieces from a selected row or column, taking as many contiguous pieces as desired. For instance, in a 6x6 game, a player might remove pieces one through four on the first row. They cannot remove only the first and third pieces, since these are not contiguous. Players alternate doing this until the last piece is removed. The player who takes the last piece loses in the misère play convention, or wins in the non-misère version. Strategy First player, if N is odd (non-misère): the player takes the center piece and then symmetrically imitates every one of the opponent's moves. Second player, if N is even (non-misère): the player copies the opponent's moves symmetrically and will eventually take the last piece and win. Variations The hexagonal variation of the game, played on a six by six by six board, is called TacTex. TacTix can also be played on any size NxN board. A non-misère version of TacTix, where the player who makes the last move is the winner, is also playable. Analysis On the 4×4 grid originally proposed by Hein, the second player will always win with correct play (HAKMEM item #74). If the game is instead played with the normal play convention (player who takes the last piece wins), the second player can always win by symmetrically mirroring the first player's moves. (Or on an odd × odd size grid, the first player can win by choosing the center piece and subsequently mirroring.) TacTix has 65,536 reachable positions. Of the reachable positions, 57,156 are winning and 8,380 are losing. References External links TacTix applet from thinks.net TacTix at Four.com TacTix iPhone, an iPhone game available at the App Store JavaScript TacTix Mathematical games
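The mirroring strategy described above is easy to sketch in code. The following minimal Python example plays out one exchange on the 4x4 board under the normal (non-misère) convention; the coordinate scheme and helper names are illustrative assumptions.

    N = 4
    board = {(r, c) for r in range(N) for c in range(N)}   # remaining pieces

    def remove(cells):
        for cell in cells:
            board.discard(cell)

    def mirror(cells):
        # Cells rotated 180 degrees about the center of the board.
        return {(N - 1 - r, N - 1 - c) for r, c in cells}

    first_move = {(0, 0), (0, 1)}    # first player takes two contiguous pieces in row 0
    remove(first_move)
    remove(mirror(first_move))       # second player answers with the mirrored move
    print(len(board))                # 12 pieces remain after this exchange

Because every legal move has a legal mirrored reply on an even board, the mirroring player can always move and therefore takes the last piece under the normal play convention, as stated above.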
TacTix
Mathematics
586
62,819,282
https://en.wikipedia.org/wiki/Hilpda
Hypoxia inducible lipid droplet-associated (Hilpda, also known as C7orf68 and HIG-2) is a protein that in humans is encoded by the HILPDA gene. Discovery HILPDA was originally discovered in a screen to identify new genes that are activated by low oxygen pressure (hypoxia) in human cervical cancer cells. The protein consists of 63 amino acids in humans and 64 amino acids in mice. Expression HILPDA is produced by numerous cells and tissues, including cancer cells, immune cells, fat cells, and liver cells. Low oxygen pressure (hypoxia), fatty acids, and beta-adrenergic agonists stimulate HILPDA expression. Function Nearly all cells have the ability to store excess energy as fat in special structures in the cell called lipid droplets. The formation and breakdown of lipid droplets is controlled by various enzymes and lipid droplet-associated proteins. One of the lipid droplet-associated proteins is HILPDA. HILPDA acts as a regulatory signal that blocks the breakdown of the fat stores in cells when the external fat supply is high or the availability of oxygen is low. In cells, HILPDA is located in the endoplasmic reticulum and around lipid droplets. Gain and loss-of-function studies have shown that HILPDA promotes fat storage in cancer cells, macrophages and liver cells. This effect is at least partly achieved by suppressing triglyceride breakdown by inhibiting the enzyme adipose triglyceride lipase. The binding of HILPDA to adipose triglyceride lipase occurs via the conserved N-terminal portion of HILPDA, which is similar to a region in the G0S2 protein. Clinical significance The deficiency of HILPDA in mice that are prone to develop atherosclerosis led to a reduction in atherosclerotic plaques, suggesting that HILPDA may be a potential therapeutic target for atherosclerosis. In addition, HILPDA may be targeted for the treatment of non-alcoholic fatty liver disease. References Proteins Genetics
Hilpda
Chemistry
441
9,518,854
https://en.wikipedia.org/wiki/Microscopic%20traffic%20flow%20model
Microscopic traffic flow models are a class of scientific models of vehicular traffic dynamics. In contrast to macroscopic models, microscopic traffic flow models simulate single vehicle-driver units, so the dynamic variables of the models represent microscopic properties like the position and velocity of single vehicles. Car-following models Also known as time-continuous models, all car-following models have in common that they are defined by ordinary differential equations describing the complete dynamics of the vehicles' positions and velocities . It is assumed that the input stimuli of the drivers are restricted to their own velocity , the net distance (bumper-to-bumper distance) to the leading vehicle (where denotes the vehicle length), and the velocity of the leading vehicle. The equation of motion of each vehicle is characterized by an acceleration function that depends on those input stimuli: In general, the driving behavior of a single driver-vehicle unit might not merely depend on the immediate leader but also on the vehicles in front. The equation of motion in this more generalized form reads: Examples of car-following models Optimal velocity model (OVM) Velocity difference model (VDIFF) Wiedemann model (1974) Gipps' model (Gipps, 1981) Intelligent driver model (IDM, 1999) DNN based anticipatory driving model (DDS, 2021) Cellular automaton models Cellular automaton (CA) models use integer variables to describe the dynamical properties of the system. The road is divided into sections of a certain length and the time is discretized to steps of . Each road section can either be occupied by a vehicle or empty and the dynamics are given by update rules of the form: (the simulation time is measured in units of and the vehicle positions in units of ). The time scale is typically given by the reaction time of a human driver, . With fixed, the length of the road sections determines the granularity of the model. At a complete standstill, the average road length occupied by one vehicle is approximately 7.5 meters. Setting to this value leads to a model where one vehicle always occupies exactly one section of the road and a velocity of 5 corresponds to , which is then set to be the maximum velocity a driver wants to drive at. However, in such a model, the smallest possible acceleration would be which is unrealistic. Therefore, many modern CA models use a finer spatial discretization, for example , leading to a smallest possible acceleration of . Although cellular automaton models lack the accuracy of the time-continuous car-following models, they still have the ability to reproduce a wide range of traffic phenomena. Due to the simplicity of the models, they are numerically very efficient and can be used to simulate large road networks in real-time or even faster. Examples of cellular automaton models Rule 184 Biham–Middleton–Levine traffic model Nagel–Schreckenberg model (NaSch, 1992) See also Microsimulation References Road traffic management Mathematical modeling Traffic flow
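As an illustration of the cellular automaton approach, the following is a minimal Python sketch of the Nagel–Schreckenberg model listed above, on a closed ring road with parallel updates; the road length, vehicle count, maximum velocity and dawdling probability are illustrative assumptions rather than values from the text.

    import random

    def nasch_step(positions, velocities, road_length, v_max=5, p_dawdle=0.3):
        # One parallel update of the Nagel-Schreckenberg rules on a ring road.
        order = sorted(range(len(positions)), key=lambda i: positions[i])
        new_pos, new_vel = positions[:], velocities[:]
        for idx, i in enumerate(order):
            ahead = order[(idx + 1) % len(order)]
            gap = (positions[ahead] - positions[i] - 1) % road_length
            v = min(velocities[i] + 1, v_max)        # 1. accelerate
            v = min(v, gap)                          # 2. brake to avoid collision
            if v > 0 and random.random() < p_dawdle:
                v -= 1                               # 3. random dawdling
            new_vel[i] = v
            new_pos[i] = (positions[i] + v) % road_length   # 4. move forward
        return new_pos, new_vel

    road_length, cars = 100, 20
    positions = sorted(random.sample(range(road_length), cars))
    velocities = [0] * cars
    for _ in range(50):
        positions, velocities = nasch_step(positions, velocities, road_length)

The parallel update (all gaps are computed from the old positions before any vehicle moves) is what allows spontaneous jams to form even on this very simple integer lattice.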
Microscopic traffic flow model
Mathematics
610
69,726,571
https://en.wikipedia.org/wiki/%28Trimethylsilyl%29methyllithium
(Trimethylsilyl)methyllithium is classified both as an organolithium compound and an organosilicon compound. It has the empirical formula LiCH2Si(CH3)3, often abbreviated LiCH2TMS. It crystallizes as the hexagonal prismatic hexamer [LiCH2TMS]6, akin to some polymorphs of methyllithium. Many adducts have been characterized including the diethyl ether complexed cubane [Li4(μ3-CH2TMS)4(Et2O)2] and [Li2(μ-CH2TMS)2(TMEDA)2]. Preparation (Trimethylsilyl)methyllithium, which is commercially available as a THF solution, is usually prepared by treatment of (trimethylsilyl)methyl chloride with butyllithium: (CH3)3SiCH2Cl + BuLi → (CH3)3SiCH2Li + BuCl (Trimethylsilyl)methylmagnesium chloride is often functionally equivalent to (trimethylsilyl)methyllithium. It is prepared by the Grignard reaction of (trimethylsilyl)methyl chloride. Use in methylenations In one example of the Peterson olefination, (trimethylsilyl)methyllithium reacts with aldehydes and ketones to give the terminal alkene (R1 = Me, R2 & R3 = H): Metal derivatives (Trimethylsilyl)methyllithium is widely used in organotransition metal chemistry to affix (trimethylsilyl)methyl ligands. Such complexes are usually produced by salt metathesis involving metal chlorides. These compounds are often highly soluble in nonpolar organic solvents, enjoying stability due to their steric bulk and resistance to beta-hydride elimination. In these regards, (trimethylsilyl)methyl is akin to neopentyl. Bis(trimethylsilyl)methylmagnesium is used as an alternative to (trimethylsilyl)methyllithium. Related compounds bis(trimethylsilyl)methyllithium tris(trimethylsilyl)methyllithium References Organolithium compounds Trimethylsilyl compounds
(Trimethylsilyl)methyllithium
Chemistry
500
71,469
https://en.wikipedia.org/wiki/Barn%20%28unit%29
A barn (symbol: b) is a metric unit of area equal to (100 fm2). This is equivalent to a square that is (10 fm) each side, or a circle of diameter approximately (11.28 fm). Originally used in nuclear physics for expressing the cross sectional area of nuclei and nuclear reactions, today it is also used in all fields of high-energy physics to express the cross sections of any scattering process, and is best understood as a measure of the probability of interaction between small particles. A barn is approximately the cross-sectional area of a uranium nucleus. The barn is also the unit of area used in nuclear quadrupole resonance and nuclear magnetic resonance to quantify the interaction of a nucleus with an electric field gradient. While the barn never was an SI unit, the SI standards body acknowledged it in the 8th SI Brochure (superseded in 2019) due to its use in particle physics. Etymology During Manhattan Project research on the atomic bomb during World War II, American physicists Marshall Holloway and Charles P. Baker were working at Purdue University on a project using a particle accelerator to measure the cross sections of certain nuclear reactions. According to an account of theirs from a couple years later, they were dining in a cafeteria in December 1942 and discussing their work. They "lamented" that there was no name for the unit of cross section and challenged themselves to develop one. They initially tried to find the name of "some great man closely associated with the field" that they could name the unit after, but struggled to find one that was appropriate. They considered "Oppenheimer" too long (in retrospect, they considered an "Oppy" to perhaps have been allowable), and considered "Bethe" to be too easily confused with the commonly-used Greek letter beta. They then considered naming it after John Manley, another scientist associated with their work, but considered "Manley" too long and "John" too closely associated with toilets. But this latter association, combined with the "rural background" of one of the scientists, suggested to them the term "barn", which also worked because the unit was "really as big as a barn." According to the authors, the first published use of the term was in a (secret) Los Alamos report from late June 1943, on which the two originators were co-authors. Commonly used prefixed versions The unit symbol for the barn (b) is also the IEEE standard symbol for bit. In other words, 1 Mb can mean one megabarn or one megabit. Conversions Calculated cross sections are often given in terms of inverse squared gigaelectronvolts (GeV−2), via the conversion ħ2c2/GeV2 = = . In natural units (where ħ = c = 1), this simplifies to GeV−2 = = . SI units with prefix In SI, one can use units such as square femtometers (fm2). The most common SI prefixed unit for the barn is the femtobarn, which is equal to a tenth of a square zeptometer. Many scientific papers discussing high-energy physics mention quantities of fractions of femtobarn level. Inverse femtobarn The inverse femtobarn (fb−1) is the unit typically used to measure the number of particle collision events per femtobarn of target cross-section, and is the conventional unit for time-integrated luminosity. Thus if a detector has accumulated of integrated luminosity, one expects to find 100 events per femtobarn of cross-section within these data. Consider a particle accelerator where two streams of particles, with cross-sectional areas measured in femtobarns, are directed to collide over a period of time. 
The total number of collisions will be directly proportional to the luminosity of the collisions measured over this time. Therefore, the collision count can be calculated by multiplying the integrated luminosity by the sum of the cross-section for those collision processes. This count is then expressed as inverse femtobarns for the time period (e.g., 100 fb−1 in nine months). Inverse femtobarns are often quoted as an indication of particle collider productivity. Fermilab produced in the first decade of the 21st century. Fermilab's Tevatron took about 4 years to reach in 2005, while two of CERN's LHC experiments, ATLAS and CMS, reached over of proton–proton data in 2011 alone. In April 2012 the LHC achieved the collision energy of with a luminosity peak of 6760 inverse microbarns per second; by May 2012 the LHC delivered 1 inverse femtobarn of data per week to each detector collaboration. A record of over 23 fb−1 was achieved during 2012. As of November 2016, the LHC had achieved over that year, significantly exceeding the stated goal of . In total, the second run of the LHC has delivered around to both ATLAS and CMS in 2015–2018. Usage example As a simplified example, if a beamline runs for 8 hours (28 800 seconds) at an instantaneous luminosity of   , then it will gather data totaling an integrated luminosity of  =  = during this period. If this is multiplied by the cross-section, then a dimensionless number is obtained equal to the number of expected scattering events. See also "Shake", a unit of time created by the same people at the same time as the barn Orders of magnitude (area) List of unusual units of measurement List of humorous units of measurement References External links IUPAC citation for this usage of "barn" Units of area Non-SI metric units Particle physics
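The integrated-luminosity calculation described above can be sketched numerically. In the following minimal Python example, the 8-hour run time is taken from the text, while the instantaneous luminosity and the process cross-section are illustrative assumptions (the specific values in the article's example are not reproduced here).

    seconds = 8 * 3600                   # an 8-hour run, as in the text
    inst_lumi_cm2_s = 1.0e34             # assumed instantaneous luminosity, cm^-2 s^-1

    FEMTOBARN_CM2 = 1.0e-39              # 1 femtobarn expressed in cm^2
    integrated_lumi_fb_inv = inst_lumi_cm2_s * seconds * FEMTOBARN_CM2
    print(integrated_lumi_fb_inv)        # about 0.29 fb^-1 for these assumed numbers

    # Expected number of events = integrated luminosity x cross-section.
    cross_section_fb = 100.0             # assumed process cross-section, fb
    expected_events = integrated_lumi_fb_inv * cross_section_fb
    print(expected_events)               # about 29 events under these assumptions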
Barn (unit)
Physics,Mathematics
1,186
76,793,082
https://en.wikipedia.org/wiki/Amy%20Betz
Amy Rachel Betz is an American materials scientist whose research investigates the effects of water-attracting and water-repelling surfaces on heat transfer and on icing of aircraft surfaces. She is an associate professor of mechanical and nuclear engineering at Kansas State University, where she also serves as assistant dean for retention, diversity and inclusion. Education and career Betz has a 2006 bachelor's degree in mechanical engineering from George Washington University. She went to Columbia University for graduate study in mechanical engineering, earning a master's degree in 2008 and completing her Ph.D. in 2011. Her doctoral dissertation, Multiphase Microfluidics for Convective Heat Transfer and Manufacturing, was supervised by Daniel Attinger. Before she completed her studies, Betz worked in hotel management. She joined the Kansas State University faculty in 2011, She became assistant dean for retention, diversity and inclusion in the Kansas State University College of Engineering in 2019. Recognition Betz's efforts to encourage women in engineering were recognized by the K-State Office for the Advancement of Women in Science and Engineering, which gave her their KAWSE Award in 2016, and again in 2023. In 2017, the American Society of Mechanical Engineers (ASME) International Conference on Nanochannel, Microchannels, and Minichannels gave her their Outstanding Leadership Award. She was elected as an ASME Fellow in 2022. References External links Home page (not regularly updated since 2016) Q&A with Betz, Engineergirl Year of birth missing (living people) Living people American materials scientists American women engineers Women materials scientists and engineers George Washington University alumni Columbia University alumni Kansas State University faculty Fellows of the American Society of Mechanical Engineers
Amy Betz
Materials_science,Technology
341
7,464,485
https://en.wikipedia.org/wiki/List%20of%20amateur%20radio%20modes
The following is a list of the modes of radio communication used in the amateur radio hobby. Modes of communication Amateurs use a variety of voice, text, image, and data communications modes over radio. Generally new modes can be tested in the amateur radio service, although national regulations may require disclosure of a new mode to permit radio licensing authorities to monitor the transmissions. Encryption, for example, is not generally permitted in the Amateur Radio service except for the special purpose of satellite vehicle control uplinks. The following is a partial list of the modes of communication used, where the mode includes both modulation types and operating protocols. Morse code Morse code is called the original digital mode. Radio telegraphy, designed for machine-to-machine communication is the direct on / off keying of a continuous wave carrier by Morse code symbols, often called amplitude-shift keying or ASK, may be considered to be an amplitude modulated mode of communications, and is rightfully considered the first digital data mode. Although more than 140 years old, bandwidth-efficient Morse code, originally developed by Samuel Morse and Alfred Vail in the 1840s, uses techniques that were not more fully understood until much later under the modern terms of source coding or data compression. Alfred Vail intuitively understood efficient code design: The bandwidth-efficiency of Morse code arises because its encodings are variable length, and Vail assigned the shortest encodings to the most-used symbols, and the longest encodings to the least-used symbols. It was not until one hundred years later that Shannon's modern information theory (1948) described Vail's coding technique for Morse code, giving it a firm footing in a mathematically based theory. Shannon's information theory resulted in similarly efficient data encoding technologies which use bandwidth like Morse code, such as the modern Huffman, Arithmetic, and Lempel-Ziv codes. Although commercial telegraphy ended in the late 20th century, Morse code remains in use by amateur radio operators. Operators may either key the code manually using a telegraph key and decode by ear, or they may use computers to send and receive the code. Continuous wave (CW) Modulated continuous wave (MCW) is most often used by repeaters for identification. Frequency-shift keying (FSK) dots and dashes are transmitted as different frequency continuous waves, for easier reception in noisy conditions. Analog voice Decades after the advent of digital amplitude-shift keying (ASK) of radio carriers by Morse symbols, radio technology evolved several methods of analog modulating radio carriers such as: amplitude, frequency and phase modulation by analog waveforms. The first such analog modulating waveforms applied to radio carriers were human voice signals picked up by microphone sensors and applied to the carrier waveforms. The resulting analog voice modes are known today as: Amplitude modulation (AM) Double-sideband suppressed carrier (DSB-SC) Independent sideband (ISB) Single sideband (SSB) Compatible sideband transmission, also called amplitude modulation equivalent (AME) Frequency modulation (FM) Phase modulation (PM) Digital voice Digital voice modes encode speech into a data stream before transmitting it. APCO P25 - Found in repurposed public safety equipment from multiple vendors. Uses IMBE or AMBE CODEC over FSK. D-STAR - Open specification with proprietary vocoder system available from Icom, Kenwood, and FlexRadio Systems. 
Uses AMBE over GMSK with VoIP capabilities. DMR - Found in both commercial and public safety equipment from multiple vendors. Uses AMBE codec over a FSK modulation variant with TDMA. NXDN: Used primarily in commercial 2-way (particularly railroads). Equipment is available from multiple manufacturers. NXDN uses FDMA with bandwidths of 6.25 kHz common. System Fusion - Open specification with proprietary vocoder system available from Yaesu. Uses AMBE voice codec with 4FSK modulation. FreeDV - Narrow bandwidth, open source digital voice mode. Uses Codec 2 with differential or coherent PSK modulation, as well as the new experimental high-fidelity Radio Autoencoder (RADE) based on the Framewise Autoregressive GAN (FARGAN) ML vocoder. M17 - Another open source digital voice mode based on Codec 2. Uses 4FSK. Utilizes punctured convolutional coding and quadratic permutation polynomials for error control and bit stream re-ordering. Image Image modes consist of sending either video or still images. Amateur television, also known as Fast Scan television (ATV) Slow-scan television (SSTV) Facsimile Text and data Most amateur digital modes are transmitted by inserting audio into the microphone input of a radio and using an analog scheme, such as amplitude modulation (AM), frequency modulation (FM), or single-sideband modulation (SSB). Amateur teleprinting over radio (AMTOR) D-STAR (Digital Data) a high speed (128 kbit/s), data-only mode. Hellschreiber, also referred to as either Feld-Hell, or Hell a facsimile-based teleprinter Discrete multi-tone modulation modes such as Multi Tone 63 (MT63) Multiple frequency-shift keying (MFSK) modes such as FSK441, JT6M, JT65, and FT8 Olivia MFSK JS8 Packet radio (AX25) Amateur Packet Radio Network (AMPRNet) Automatic Packet Reporting System (APRS) PACTOR (AMTOR + packet radio) Phase-shift keying: 31-baud binary phase shift keying: PSK31 31-baud quadrature phase shift keying: QPSK31 63-baud binary phase shift keying: PSK63 63-baud quadrature phase shift keying: QPSK63 Frequency-shift keying: Radioteletype (RTTY) Frequency-shift keying Other modes Spread spectrum, which may be analog or digital in nature, is the spreading of a signal over a wide bandwidth. High-speed multimedia radio, networking using 802.11 protocols. Activities sometimes called 'modes' Certain procedural activities in amateur radio are also commonly referred to as 'modes', even though no one specific modulation scheme is used. AllStarLink (ASL) connects amateurs and repeaters via the internet using the Asterisk (PBX) IAX VOIP Protocol. Automatic link establishment (ALE) is a method of automatically finding a sustainable communications channel on HF. Automatically Controlled Digital Stations (ACDS) Earth-Moon-Earth (EME) uses the Moon to communicate over long distances. EchoLink connects amateurs and amateur stations via the internet. Internet Radio Linking Project (IRLP) connects repeaters via the internet. Satellite (OSCAR - Orbiting Satellite Carrying Amateur Radio) Low transmitter power (QRP) Prosigns for Morse code References Sources Modes, list of Digital amateur radio Radio modulation modes Packet radio Modes
List of amateur radio modes
Technology
1,446
4,535,333
https://en.wikipedia.org/wiki/NIH%20shift
An NIH shift is a chemical rearrangement where a hydrogen atom on an aromatic ring undergoes an intramolecular migration primarily during a hydroxylation reaction. This process is also known as a 1,2-hydride shift. These shifts are often studied and observed by isotopic labeling. An example of an NIH shift is shown below: In this example, a hydrogen atom has been isotopically labeled using deuterium (shown in red). As the hydroxylase adds a hydroxyl (the −OH group), the labeled site shifts one position around the aromatic ring relative to the stationary methyl group (−CH3). Several hydroxylase enzymes are believed to incorporate an NIH shift in their mechanism, including 4-hydroxyphenylpyruvate dioxygenase and the tetrahydrobiopterin-dependent hydroxylases. The name NIH shift arises from the US National Institutes of Health, where this transformation was first reported. References Enzymes Post-translational modification Reaction mechanisms
NIH shift
Chemistry
217
75,250,253
https://en.wikipedia.org/wiki/NGC%203786
NGC 3786 is an spiral galaxy located away in the northern constellation of Ursa Major. It was discovered by English astronomer John Herschel on April 10, 1831. This object appears to form a close pair with its peculiar neighbor to the north, NGC 3788. They show some indications of interaction, such as minor distortion of the disk or tidal features. The morphological classification of this galaxy is (R')SA(rs)a, indicating an unbarred spiral galaxy (SA) with an outer ring (R'), transitional inner ring (rs), and tightly wound spiral arms (a). The galactic plane is inclined at an angle of to the line of sight from the Earth. A mini-bar structure appears in the circumnuclear region. It is a type 1.8 Seyfert galaxy, with a detectable X-ray emission that is being partially absorbed by warm, dusty material along the line of sight. The active galactic nucleus of this galaxy is driven by a supermassive black hole with an estimated mass of . An outburst from the core was observed in 1996 and a mid-infrared flare in 2022. Type Ic supernova SN 1999bu was detected from an image taken April 16, 1999. It was magnitude 17.5 and was located at an offset west and south of the galactic nucleus of NGC 3786. A possible progenitor to this core collapse supernova event was identified in 2003 from archival images. A second supernova, SN 2004bd, was discovered April 7, 2004. This was a type Ia supernova located west and south of the nucleus. References Astronomical objects discovered in 1831 Seyfert galaxies Unbarred spiral galaxies 3786 NGC 3786 36158 Discoveries by John Herschel 294 Markarian galaxies
NGC 3786
Astronomy
367
27,302,078
https://en.wikipedia.org/wiki/Machine-tool%20dynamometer
A machine-tool dynamometer is a multi-component dynamometer that is used to measure forces during the use of the machine tool. Empirical calculations of these forces can be cross-checked and verified experimentally using these machine tool dynamometers. With advances in technology, machine-tool dynamometers are increasingly used for the accurate measurement of forces and for optimizing the machining process. These multi-component forces are measured as an individual component force in each co-ordinate, depending on the coordinate system used. The forces during machining are dependent on depth of cut, feed rate, cutting speed, tool material and geometry, material of the work piece and other factors such as use of lubrication/cooling during machining. Types Lathe Drill Milling Grinding References Machining Technology, Machine Tools and Operations Helmi A. Youssef and Hassan El-Hofy CRC Press 2008 Pages 371–390 Print eBook Machine Tool Dynamometers External links DOI.org A book containing information on Machine Tool Dynamometers by Springer Machine tools Dynamometers
Machine-tool dynamometer
Technology,Engineering
218
70,750,754
https://en.wikipedia.org/wiki/Schizothecium%20vesticola
Schizothecium vesticola is a species of coprophilous fungus in the family Lasiosphaeriaceae. In Greece, it is known to grow in the dung of goats and possibly also on that of sheep and donkeys. In Iceland, it has been reported from the dung of sheep, geese and horses. References External links Fungi described in 1972 Fungi of Greece Fungi of Iceland Sordariales Fungus species
Schizothecium vesticola
Biology
90
44,431,852
https://en.wikipedia.org/wiki/Cyclooctatetraenide%20anion
In chemistry, the cyclooctatetraenide anion or cyclooctatetraenide, more precisely cyclooctatetraenediide, is an aromatic species with a formula of [C8H8]2− and abbreviated as COT2−. It is the dianion of cyclooctatetraene. Salts of the cyclooctatetraenide anion can be stable, e.g., dipotassium cyclooctatetraenide or disodium cyclooctatetraenide. More complex coordination compounds are known as cyclooctatetraenide complexes, such as the actinocenes. The structure is a planar symmetric octagon stabilized by resonance, meaning each carbon atom bears a charge of −1/4. The length of the bond between carbon atoms is 1.432 Å. There are 10 π electrons. The structure can serve as a ligand with various metals. List of salts See also Tropylium ion Cyclopentadienyl anion References Simple aromatic rings Anions Non-benzenoid aromatic carbocycles
Cyclooctatetraenide anion
Physics,Chemistry
237
71,148,012
https://en.wikipedia.org/wiki/Land%20bridges%20of%20Japan
Due to changes in sea level, Japan has at various times been connected to the continent by , with continental Russia to the north via the Sōya Strait, Sakhalin, and the Mamiya Strait, and with the Korean Peninsula to the southwest, via the Tsushima Strait and Korea Strait. Land bridges also connected the Japanese Islands with each other. These land bridges enabled the migration of terrestrial fauna from the continent and their dispersal within Japan. Geological background Around 25 million years ago, the Sea of Japan began to open, separating Japan from the continent and giving rise to the Japanese island arc system of today. The Sea of Japan as a back-arc basin was open both to the northeast and to the southwest by 14 Ma, while marine transgression further contributed to the isolation and insulation of Japan. Due to the level of tectonic activity in the area and significant subsidence of the Japanese Islands since the Miocene, exact quantification of historic sea level changes is problematic. Northern land bridge Based on current depths, a reduction in sea level would be sufficient to connect Hokkaidō with the mainland. The and — sometimes referred to jointly as the or Sakhalin land bridge — are thus thought to have been in place during most glacial periods. Western land bridge With a minimum depth of and based in part on the appearance in Japan of Proboscidea, the and — sometimes referred to jointly as the Korean land bridge — are understood to have been in place at 1.2 Ma, 0.63 Ma, and 0.43 Ma. Kuril land bridge A has been insufficient to connect Hokkaidō with Kamchatka during the Quaternary. The southern Kuril land bridge that connected Kunashiri and the Lesser Kurils to Hokkaidō during the Early Holocene was insufficient with the rising sea level at around 6,000 BP. Seto land bridges Honshū, Shikoku, and Kyūshū are separated by shallow straits that rarely exceed in depth. Consequently, they were frequently connected together as a single land mass. Tsugaru land bridge The Tsugaru Strait, with a depth in excess of , represents a more significant faunal boundary, known as Blakiston's Line. The most recent age of the is uncertain. Ryūkyū land bridge The Ryūkyū Islands, separated by deeper straits still (the Tokara Gap), have been isolated from the main islands throughout the Quaternary. The was sufficient temporarily to connect Miyako-jima with Taiwan during the late Middle Pleistocene, allowing for the migration of the Steppe mammoth (Mammuthus trogontherii). During this period, the Miyako Strait was sufficient to prevent the land bridge reaching Okinawa Island. See also List of prehistoric mammals of Japan References Landforms of Japan Geology of Japan Historical geology Biogeography
Land bridges of Japan
Biology
570
53,661,709
https://en.wikipedia.org/wiki/Newton%20%28Paolozzi%29
Newton, sometimes known as Newton after Blake, is a 1995 work by the sculptor Eduardo Paolozzi. The large bronze sculpture is displayed on a high plinth in the piazza outside the British Library in London. The sculpture is based on William Blake's 1795 print of Newton: Personification of Man Limited by Reason, which depicts a nude Isaac Newton sitting on ledge beside a mossy rock face while measuring with a pair of compasses or dividers. The print was intended by Blake to criticise Newton's profane knowledge, usurping the sacred knowledge and power of the creator Urizen, with the scientist turning away from nature to focus on his books. Paolozzi had admired Blake since viewing a large print of Newton at the Tate Gallery in the 1940s. He was also a friend of Colin St John Wilson, the architect of the British Library, since they both participated in the This is Tomorrow exhibition at the Whitechapel Gallery in 1956. Wilson intended to site a seated sculpture at the junction of the two main axes in the piazza of his library. Paolozzi was then working on a sculpture of Newton, and he was commissioned to create the sculpture for the library. The new library was constructed from 1982 to 1999, and the sculpture was installed in 1995. The sculpture includes Paolozzi's self-portrait as the naked Newton, measuring the universe with his dividers. The eyes were copied from Michelangelo's David. It can be interpreted as symbolising a confluence of the two cultures, the arts and the sciences, and illustrating how Newton changed our view of the world to one determined by mathematical laws. The sculpture makes the body resemble a mechanical object, joined with bolts at the shoulders, elbows, knees and ankles. The sculptures shows the visible seams of Paolozzi's technique of dividing his model and reassembling the pieces, for example on the head. The final full-size sculpture stands high, and is mounted on a high plinth. The bronze was cast by the Morris Singer foundry, and funded by the Foundation for Sport and the Arts. It was included in the Grade I listing of the library, granted in 2015. A maquette was donated by the artist to the Isaac Newton Institute for Mathematical Sciences at the University of Cambridge. A bronze model cast in 1988 "from the model made to show the Library committee", has been held by the Tate Gallery since 1995. A similar sculpture by Paolozzi from 1989, Master of the Universe, is on display at Modern Two (formerly the Dean Gallery), part of the Scottish National Gallery Of Modern Art in Edinburgh; while another example, Concept of Newton, is in Kowloon Park, Hong Kong. Gallery References The British Library, piazza, boundary wall and railings to Ossulston Street, Euston Road and Midland Road, National Heritage List for England, Historic England Statue: British Library – Newton, London Remembers Paolozzi’s Newton, British Library, Tate Gallery Paolozzi's Sculpture of Isaac Newton, Isaac Newton Institute for Mathematical Sciences Eduardo Paolozzi, Master of the Universe (1989), National Galleries Scotland British Library, Euston Road NW1, Ornamental Passions Blake 2.0: William Blake in Twentieth-Century Art, Music and Culture, edited by Steve Clark, T. Connolly, Jason Whittaker The Architecture of the British Library at St. Pancras, Roger Stonehouse, Gerhard Stromberg, p. 
175 1995 sculptures Bronze sculptures in London Outdoor sculptures in London Statues in London Grade I listed buildings in the London Borough of Camden Cultural depictions of Isaac Newton British Library 1995 in England Adaptations of works by William Blake Colossal statues in the United Kingdom
Newton (Paolozzi)
Astronomy
739
72,551,201
https://en.wikipedia.org/wiki/HD%20168592
HD 168592, also designated as HR 6862 or rarely 7 G. Coronae Australis, is a solitary star located in the southern constellation Corona Australis. It is faintly visible to the naked eye as an orange-hued star with an apparent magnitude of 5.07. Gaia DR3 parallax measurements place it at a distance of 490 light years, and it is currently receding with a heliocentric radial velocity of . At its current distance, HD 168592's brightness is diminished by 0.38 magnitudes due to interstellar dust. It has an absolute magnitude of −0.76. HD 168592 has a stellar classification of K4/5 III, indicating that it is an evolved K-type star with characteristics intermediate between those of a K4 and a K5 giant star. It has a mass comparable to the Sun's, but it has expanded to 43.6 times the Sun's radius. It radiates 666 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of . HD 168592 is slightly metal-deficient with an iron abundance 26% below solar levels. The star spins slowly, as is common for giant stars, with a projected rotational velocity of . References K-type giants Corona Australis Coronae Australis, 7 CD-38 12729 168592 090037 6862
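The quoted radius and luminosity can be cross-checked with the Stefan–Boltzmann relation, L/L☉ = (R/R☉)²(T/T☉)⁴. A minimal Python sketch — the solar effective temperature is a standard reference value, and the derived figure is only an illustrative consistency check, not a measured temperature for this star:

T_SUN = 5772.0   # nominal solar effective temperature, K

def effective_temperature(lum_solar, radius_solar, t_sun=T_SUN):
    # From L/Lsun = (R/Rsun)**2 * (T/Tsun)**4, solved for T
    return t_sun * lum_solar ** 0.25 / radius_solar ** 0.5

# Values quoted above for HD 168592: L = 666 Lsun, R = 43.6 Rsun
print(round(effective_temperature(666.0, 43.6)))   # ~4441 K, typical of a mid-K giant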
HD 168592
Astronomy
287
416,954
https://en.wikipedia.org/wiki/Viral%20evolution
Viral evolution is a subfield of evolutionary biology and virology that is specifically concerned with the evolution of viruses. Viruses have short generation times, and many—in particular RNA viruses—have relatively high mutation rates (on the order of one point mutation or more per genome per round of replication). Although most viral mutations confer no benefit and often even prove deleterious to viruses, the rapid rate of viral mutation combined with natural selection allows viruses to quickly adapt to changes in their host environment. In addition, because viruses typically produce many copies in an infected host, mutated genes can be passed on to many offspring quickly. Although the chance of mutations and evolution can change depending on the type of virus (e.g., double-stranded DNA, double-stranded RNA, single-stranded DNA), viruses overall have a high chance of mutation. Viral evolution is an important aspect of the epidemiology of viral diseases such as influenza (influenza virus), AIDS (HIV), and hepatitis (e.g. HCV). The rapidity of viral mutation also causes problems in the development of successful vaccines and antiviral drugs, as resistant mutations often appear within weeks or months after the beginning of a treatment. One of the main theoretical models applied to viral evolution is the quasispecies model, which defines a viral quasispecies as a group of closely related viral strains competing within an environment. Origins Three classical hypotheses Viruses are ancient. Studies at the molecular level have revealed relationships between viruses infecting organisms from each of the three domains of life, suggesting that some viral proteins pre-date the divergence of life and thus infected the last universal common ancestor. This indicates that some viruses emerged early in the evolution of life, and that they have probably arisen multiple times. It has been suggested that new groups of viruses have repeatedly emerged at all stages of evolution, often through the displacement of ancestral structural and genome replication genes. There are three main hypotheses that aim to explain the origins of viruses: Regressive hypothesis Viruses may have once been small cells that parasitised larger cells. Over time, genes not required by their parasitism were lost. The bacteria rickettsia and chlamydia are living cells that, like viruses, can reproduce only inside host cells. They lend support to this hypothesis, as their dependence on parasitism is likely to have caused the loss of genes that enabled them to survive outside a cell. This is also called the "degeneracy hypothesis", or "reduction hypothesis". Cellular origin hypothesis Some viruses may have evolved from bits of DNA or RNA that "escaped" from the genes of a larger organism. The escaped DNA could have come from plasmids (pieces of naked DNA that can move between cells) or transposons (molecules of DNA that replicate and move around to different positions within the genes of the cell). Once called "jumping genes", transposons are examples of mobile genetic elements and could be the origin of some viruses. They were discovered in maize by Barbara McClintock in 1950. This is sometimes called the "vagrancy hypothesis", or the "escape hypothesis". Co-evolution hypothesis This is also called the "virus-first hypothesis" and proposes that viruses may have evolved from complex molecules of protein and nucleic acid at the same time that cells first appeared on Earth and would have been dependent on cellular life for billions of years.
Viroids are molecules of RNA that are not classified as viruses because they lack a protein coat. They have characteristics that are common to several viruses and are often called subviral agents. Viroids are important pathogens of plants. They do not code for proteins but interact with the host cell and use the host machinery for their replication. The hepatitis delta virus of humans has an RNA genome similar to viroids but has a protein coat derived from hepatitis B virus and cannot produce one of its own. It is, therefore, a defective virus. Although hepatitis delta virus genome may replicate independently once inside a host cell, it requires the help of hepatitis B virus to provide a protein coat so that it can be transmitted to new cells. In similar manner, the sputnik virophage is dependent on mimivirus, which infects the protozoan Acanthamoeba castellanii. These viruses, which are dependent on the presence of other virus species in the host cell, are called "satellites" and may represent evolutionary intermediates of viroids and viruses. Later hypotheses Chimeric-origins hypothesis: Based on the analyses of the evolution of the replicative and structural modules of viruses, a chimeric scenario for the origin of viruses was proposed in 2019. According to this hypothesis, the replication modules of viruses originated from the primordial genetic pool, although the long course of their subsequent evolution involved many displacements by replicative genes from their cellular hosts. By contrast, the genes encoding major structural proteins evolved from functionally diverse host proteins throughout the evolution of the virosphere. This scenario is distinct from each of the three traditional scenarios but combines features of the Virus-first and Escape hypotheses. One of the problems for studying viral origins and evolution is the high rate of viral mutation, particularly the case in RNA retroviruses like HIV/AIDS. A recent study based on comparisons of viral protein folding structures, however, is offering some new evidence. Fold Super Families (FSFs) are proteins that show similar folding structures independent of the actual sequence of amino acids, and have been found to show evidence of viral phylogeny. The proteome of a virus, the viral proteome, still contains traces of ancient evolutionary history that can be studied today. The study of protein FSFs suggests the existence of ancient cellular lineages common to both cells and viruses before the appearance of the 'last universal cellular ancestor' that gave rise to modern cells. Evolutionary pressure to reduce genome and particle size may have eventually reduced viro-cells into modern viruses, whereas other coexisting cellular lineages eventually evolved into modern cells. Furthermore, the long genetic distance between RNA and DNA FSFs suggests that the RNA world hypothesis may have new experimental evidence, with a long intermediary period in the evolution of cellular life. Definitive exclusion of a hypothesis on the origin of viruses is difficult to make on Earth given the ubiquitous interactions between viruses and cells, and the lack of availability of rocks that are old enough to reveal traces of the earliest viruses on the planet. 
From an astrobiological perspective, it has therefore been proposed that on celestial bodies such as Mars not only cells but also traces of former virions or viroids should be actively searched for: possible findings of traces of virions in the apparent absence of cells could provide support for the virus-first hypothesis. Evolution Viruses do not form fossils in the traditional sense, because they are much smaller than the finest colloidal fragments forming sedimentary rocks that fossilize plants and animals. However, the genomes of many organisms contain endogenous viral elements (EVEs). These DNA sequences are the remnants of ancient virus genes and genomes that ancestrally 'invaded' the host germline. For example, the genomes of most vertebrate species contain hundreds to thousands of sequences derived from ancient retroviruses. These sequences are a valuable source of retrospective evidence about the evolutionary history of viruses, and have given birth to the science of paleovirology. The evolutionary history of viruses can to some extent be inferred from analysis of contemporary viral genomes. The mutation rates for many viruses have been measured, and application of a molecular clock allows dates of divergence to be inferred. Viruses evolve through changes in their RNA (or DNA), some quite rapidly, and the best adapted mutants quickly outnumber their less fit counterparts. In this sense their evolution is Darwinian. The way viruses reproduce in their host cells makes them particularly susceptible to the genetic changes that help to drive their evolution. The RNA viruses are especially prone to mutations. In host cells there are mechanisms for correcting mistakes when DNA replicates and these kick in whenever cells divide. These important mechanisms prevent potentially lethal mutations from being passed on to offspring. But these mechanisms do not work for RNA and when an RNA virus replicates in its host cell, changes in their genes are occasionally introduced in error, some of which are lethal. One virus particle can produce millions of progeny viruses in just one cycle of replication, therefore the production of a few "dud" viruses is not a problem. Most mutations are "silent" and do not result in any obvious changes to the progeny viruses, but others confer advantages that increase the fitness of the viruses in the environment. These could be changes to the virus particles that disguise them so they are not identified by the cells of the immune system or changes that make antiviral drugs less effective. Both of these changes occur frequently with HIV. Many viruses (for example, influenza A virus) can "shuffle" their genes with other viruses when two similar strains infect the same cell. This phenomenon is called genetic shift, and is often the cause of new and more virulent strains appearing. Other viruses change more slowly as mutations in their genes gradually accumulate over time, a process known as antigenic drift. Through these mechanisms new viruses are constantly emerging and present a continuing challenge in attempts to control the diseases they cause. Most species of viruses are now known to have common ancestors, and although the "virus first" hypothesis has yet to gain full acceptance, there is little doubt that the thousands of species of modern viruses have evolved from less numerous ancient ones. The morbilliviruses, for example, are a group of closely related, but distinct viruses that infect a broad range of animals. 
The group includes measles virus, which infects humans and primates; canine distemper virus, which infects many animals including dogs, cats, bears, weasels and hyaenas; rinderpest, which infected cattle and buffalo; and other viruses of seals, porpoises and dolphins. Although it is not possible to prove which of these rapidly evolving viruses is the earliest, for such a closely related group of viruses to be found in such diverse hosts suggests the possibility that their common ancestor is ancient. Bacteriophage Escherichia virus T4 (phage T4) is a species of bacteriophage that infects Escherichia coli bacteria. It is a double-stranded DNA virus in the family Myoviridae. Phage T4 is an obligate intracellular parasite that reproduces within the host bacterial cell, and its progeny are released when the host is destroyed by lysis. The complete genome sequence of phage T4 encodes about 300 gene products. These virulent viruses are among the largest, most complex viruses that are known and are among the best-studied model organisms. They have played a key role in the development of virology and molecular biology. The numbers of reported genetic homologies between phage T4 and bacteria and between phage T4 and eukaryotes are similar, suggesting that phage T4 shares ancestry with both bacteria and eukaryotes and has about equal similarity to each. Phage T4 may have diverged in evolution from a common ancestor of bacteria and eukaryotes or from an early evolved member of either lineage. Most of the phage genes showing homology with bacteria and eukaryotes encode enzymes acting in the ubiquitous processes of DNA replication, DNA repair, recombination and nucleotide synthesis. These processes likely evolved very early. The adaptive features of the enzymes catalyzing these early processes may have been maintained in the phage T4, bacterial, and eukaryotic lineages because they were established, well-tested solutions to basic functional problems by the time these lineages diverged. Transmission Viruses have been able to continue their infectious existence due to evolution. Their rapid mutation rates and natural selection have given viruses the ability to continue to spread. One way that viruses have been able to spread is through the evolution of their modes of transmission. The virus can find a new host through: Droplet transmission- passed on through body fluids (sneezing on someone) An example is the influenza virus Airborne transmission- passed on through the air (brought in by breathing) An example would be how viral meningitis is passed on Vector transmission- picked up by a carrier and brought to a new host An example is viral encephalitis Waterborne transmission- leaving a host, infecting the water, and being consumed in a new host Poliovirus is an example of this Sit-and-wait transmission- the virus survives outside a host for long periods of time The smallpox virus is also an example of this Virulence, or the harm that the virus does to its host, depends on various factors. In particular, the method of transmission tends to affect how the level of virulence will change over time. Viruses that transmit through vertical transmission (transmission to the offspring of the host) will evolve to have lower levels of virulence. Viruses that transmit through horizontal transmission (transmission between members of the same species that don't have a parent-child relationship) will usually evolve to have a higher virulence.
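The molecular-clock reasoning mentioned above — measured substitution rates allowing divergence dates to be inferred — amounts to simple arithmetic. A minimal Python sketch with purely illustrative numbers (not measurements for any particular virus):

def divergence_time(distance, rate):
    # Molecular-clock estimate of the time since two lineages split.
    # distance: genetic distance between them (substitutions per site)
    # rate:     substitution rate per lineage (substitutions per site per year)
    # Each lineage accumulates rate*t substitutions, so distance = 2*rate*t.
    return distance / (2.0 * rate)

# Two strains differing at 2% of sites, each evolving at 1e-3 substitutions/site/year
print(divergence_time(0.02, 1.0e-3))   # 10.0 (years since divergence)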
See also DNA virus Earliest known life forms Evolution of the Sacbrood Virus RNA virus Viral classification Viral decay acceleration Viral phylodynamics Viral quasispecies Endothelial Cell Tropism References Bibliography Further reading External links Evolutionary biology Virology Microbial population biology
Viral evolution
Biology
2,781
1,506,069
https://en.wikipedia.org/wiki/Outline%20of%20electrical%20engineering
The following outline is provided as an overview of and topical guide to electrical engineering. Electrical engineering – field of engineering that generally deals with the study and application of electricity, electronics and electromagnetism. The field first became an identifiable occupation in the late nineteenth century after commercialization of the electric telegraph and electrical power supply. It now covers a range of subtopics including power, electronics, control systems, signal processing and telecommunications. Classification Electrical engineering can be described as all of the following: Academic discipline – branch of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part), and recognized by the academic journals in which research is published, and the learned societies and academic departments or faculties to which their practitioners belong. Branch of engineering – discipline, skill, and profession of acquiring and applying scientific, economic, social, and practical knowledge, in order to design and build structures, machines, devices, systems, materials and processes. Branches of electrical engineering Power engineering Control engineering Electronic engineering Microelectronics Signal processing Radio-frequency engineering and Radar Telecommunications engineering Instrumentation engineering Electro-Optical Engineering and Optoelectronics Computer engineering Related disciplines Biomedical engineering Engineering physics Mechanical engineering Mechatronics History of electrical engineering History of electrical engineering Timeline of electrical and electronic engineering General electrical engineering concepts Electromagnetism Electromagnetism Electricity Magnetism Electromagnetic spectrum Optical spectrum Electrostatics Electric charge Coulomb's law Electric field Gauss's law Electric potential Magnetostatics Electric current Ampère's law Magnetic field Magnetic moment Electrodynamics Lorentz force law Electromotive force Electromagnetic induction Faraday-Lenz law Displacement current Maxwell's equations Electromagnetic field Electromagnetic radiation Electrical circuits Antenna Electrical resistance Capacitance Inductance Impedance Resonant cavity Transmission line Waveguide Physical laws Physical laws Ampère's law Coulomb's law Faraday's law of induction/Faraday-Lenz law Gauss's law Kirchhoff's circuit laws Current law Voltage law Maxwell's equations Gauss's law Faraday's law of induction Ampère's law Ohm's law Control engineering Control engineering Control theory Adaptive control Control theory Digital control Nonlinear control Optimal control Intelligent control Fuzzy control Model predictive control System properties: Exponential stability Marginal stability BIBO stability Lyapunov stability (i.e., asymptotic stability) Input-to-state (ISS) stability Controllability Observability Negative feedback Positive feedback System modeling and analysis: System identification State observer First principles modeling Least squares Kalman filter Root locus Extended Kalman filter Signal-flow graph State space representation Artificial neural networks Controllers: Closed-loop controller PID controller Programmable logic controller Embedded controller Field oriented controller Direct torque controller Digital signal controller Pulse-width modulation controller Control applications: Industrial Control Systems Process Control Distributed Control System Mechatronics Motion control Supervisory control (SCADA) Electronics 
Electronics Electrical network/Circuit Circuit laws Kirchhoff's circuit laws Current law Voltage law Y-delta transform Ohm's law Electrical element/Discretes Passive elements: Capacitor Inductor Resistor Hall effect sensor Active elements: Microcontroller Operational amplifier Semiconductors: Diode Zener diode Light-emitting diode PIN diode Schottky diode Avalanche diode Laser diode DIAC Thyristor Transistor Bipolar transistor (BJT) Field effect transistor (FET) Darlington transistor IGBT TRIAC Mosfet Electronic design automation Power engineering Power engineering Generation Electrical generator Renewable electricity Hydropower Transmission Electricity pylon Transformer Transmission line Distribution Processes: Alternating current Direct current Single-phase electric power Two-phase electric power Three-phase power Power electronics / Electro-mechanical Inverter Static VAR compensator Variable-frequency drive Ward Leonard control Electric vehicles Electric vehicles Electric motor Hybrid electric vehicle Plug-in hybrid Rechargeable battery Vehicle-to-grid Smart Grid Signal processing Signal processing Analog signal processing Digital signal processing Quantization Sampling Analog-to-digital converter, Digital-to-analog converter Continuous signal, Discrete signal Down sampling Nyquist frequency Nyquist–Shannon sampling theorem Oversampling Sample and hold Sampling frequency Undersampling Upsampling Audio signal processing Audio noise reduction Speech processing Equalization (audio) Digital image processing Geometric transformation Color correction Computer vision Image noise reduction Edge detection Image editing Segmentation Data compression Lossless data compression Lossy data compression Filtering Analog filter Audio filter Digital filter Finite impulse response Infinite impulse response Electronic filter Analogue filter Filter (signal processing) Band-pass filter Band-stop filter Butterworth filter Chebyshev filter High-pass filter Kalman filter Low-pass filter Notch filter Sallen Key filter Wiener filter Transforms Advanced Z-transform Bilinear transform Continuous Fourier transform Discrete cosine transform Discrete Fourier transform, Fast Fourier transform (FFT) Discrete sine transform Fourier transform Hilbert transform Laplace transform, Two-sided Laplace transform Z-transform Instrumentation Actuator Electric motor Oscilloscope Telecommunication Telecommunication Telephone Pulse-code modulation (PCM) Main distribution frame (MDF) Carrier system Mobile phone Wireless network Optical fiber Modulation Carrier wave Communication channel Information theory Error correction and detection Digital television Digital audio broadcasting Satellite radio Satellite Electrical engineering occupations Occupations in electrical/electronics engineering Electrical Technologist Electrical engineering organizations International Electrotechnical Commission (IEC) Electrical engineering publications IEEE Spectrum IEEE series of journals Hawkins Electrical Guide Iterative Receiver Design Journal of Electrical Engineering Persons influential in electrical engineering List of electrical engineers and their contributions List of Russian electrical engineers See also Index of electrical engineering articles Outline of engineering References External links International Electrotechnical Commission (IEC) MIT OpenCourseWare in-depth look at Electrical Engineering - online courses with video lectures. 
IEEE Global History Network A wiki-based site with many resources about the history of IEEE, its members, their professions and electrical and informational technologies and sciences. Electrical engineering Electrical engineering
Outline of electrical engineering
Engineering
1,206
5,662,689
https://en.wikipedia.org/wiki/Reification%20%28information%20retrieval%29
In information retrieval and natural language processing, reification is the process by which an abstract idea about a person, place, or thing is turned into an explicit data model or other object created in a programming language, such as a feature set of demographic or psychographic attributes or both. By means of reification, something that was previously implicit, unexpressed, and possibly inexpressible is explicitly formulated and made available to conceptual (logical or computational) manipulation. The process by which a natural language statement is transformed so actions and events in it become quantifiable variables is semantic parsing. For example, "John chased the duck furiously" can be transformed into something like (Exists e)(chasing(e) & past_tense(e) & actor(e,John) & furiously(e) & patient(e,duck)). Another example would be "Sally said John is mean", which could be expressed as something like (Exists u,v)(saying(u) & past_tense(u) & actor(u,Sally) & that(u,v) & is(v) & actor(v,John) & mean(v)). Such formal meaning representations allow one to use the tools of classical first-order predicate calculus even for statements which, due to their use of tense, modality, adverbial constructions, propositional arguments (e.g. "Sally said that X"), etc., would have seemed intractable. This is an advantage because predicate calculus is better understood and simpler than the more complex alternatives (higher-order logics, modal logics, temporal logics, etc.), and there exist better automated tools (e.g. automated theorem provers and model checkers) for manipulating it. Meaning representations can be used for other purposes besides the application of first-order logic; one example is the automatic discovery of synonymous phrases. The meaning representations are sometimes called quasi-logical forms, and the existential variables are sometimes treated as Skolem constants. Not all natural language constructs admit a uniform translation to first-order logic. See donkey sentence for examples and a discussion. See also Drinker paradox Nonfirstorderizability Reification (computer science) Reification (fallacy) Reification (knowledge representation) References Computational linguistics
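The event-variable representations shown above can be built programmatically. The following Python sketch is a hand-built illustration of the first example sentence only — the helper names and the tuple encoding of atoms are invented for this example, not part of any standard semantic-parsing library:

from itertools import count

_ids = count(1)

def new_var(prefix="e"):
    # Fresh existential (event) variable: e1, e2, ...
    return f"{prefix}{next(_ids)}"

def reify_chase():
    # Reification of "John chased the duck furiously"
    e = new_var()
    atoms = [("chasing", e), ("past_tense", e), ("actor", e, "John"),
             ("patient", e, "duck"), ("furiously", e)]
    return e, atoms

def to_formula(var, atoms):
    body = " & ".join(f"{pred}({','.join(args)})" for pred, *args in atoms)
    return f"(Exists {var})({body})"

e, atoms = reify_chase()
print(to_formula(e, atoms))
# (Exists e1)(chasing(e1) & past_tense(e1) & actor(e1,John) & patient(e1,duck) & furiously(e1))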
Reification (information retrieval)
Technology
478
44,370,960
https://en.wikipedia.org/wiki/SIMPLEC%20algorithm
The SIMPLEC (Semi-Implicit Method for Pressure Linked Equations-Consistent) algorithm, a modified form of the SIMPLE algorithm, is a commonly used numerical procedure in the field of computational fluid dynamics to solve the Navier–Stokes equations. This algorithm was developed by Van Doormaal and Raithby in 1984. The algorithm follows the same steps as the SIMPLE algorithm, with the variation that the momentum equations are manipulated, allowing the SIMPLEC velocity correction equations to omit terms that are less significant than those omitted in SIMPLE. This modification attempts to minimize the effects of dropping velocity neighbor correction terms. Algorithm The steps involved are the same as in the SIMPLE algorithm, and the algorithm is iterative in nature. p*, u*, v* are the guessed pressure, x-direction velocity and y-direction velocity respectively; p', u', v' are the corresponding correction terms; p, u, v are the corrected fields; Φ is the transported property being solved for; and the d terms involve the under-relaxation factor. The steps are as follows: 1. Specify the boundary conditions and guess the initial values. 2. Determine the velocity and pressure gradients. 3. Calculate the pseudo velocities. 4. Solve the pressure equation to obtain p. 5. Set p*=p. 6. Using p*, solve the discretized momentum equation to get u* and v*. 7. Solve the pressure correction equation. 8. Get the pressure correction term and evaluate the corrected velocities to obtain p, u, v, Φ*. 9. Solve all other discretized transport equations. 10. If Φ has converged, stop; if not, set p*=p, u*=u, v*=v, Φ*=Φ and start the iteration again. Peculiar features The discretized pressure correction equation is the same as in the SIMPLE algorithm, except for the d terms used in the momentum equations. p = p* + p', which shows that the under-relaxation factor applied to the pressure correction in SIMPLE is not needed in SIMPLEC. The SIMPLEC algorithm is seen to converge 1.2–1.3 times faster than the SIMPLE algorithm. Unlike the SIMPLER algorithm, it does not solve any extra equations. The cost per iteration is the same as for SIMPLE. As in SIMPLE, a bad pressure field guess will destroy a good velocity field. See also SIMPLE algorithm SIMPLER algorithm Navier–Stokes equations References Computational fluid dynamics
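The outer iteration in steps 1–10 above can be sketched as a driver loop. The Python sketch below is structural only: the discretised-equation solvers (solve_momentum, solve_pressure_correction, correct_fields, solve_scalars, residual) are caller-supplied placeholders invented for this illustration, not functions from any particular CFD library:

def simplec_outer_iteration(p, u, v, phi,
                            solve_momentum, solve_pressure_correction,
                            correct_fields, solve_scalars, residual,
                            tol=1e-6, max_iter=500):
    # Schematic SIMPLEC driver loop; the five callables stand in for the
    # discretised-equation solvers on the chosen grid.
    for _ in range(max_iter):
        p_star = p                                              # step 5: current p is the guess p*
        u_star, v_star = solve_momentum(p_star, u, v)           # step 6: momentum equations with p*
        p_prime = solve_pressure_correction(u_star, v_star)     # step 7: pressure-correction equation
        # step 8: p = p* + p' (no under-relaxation of p' in SIMPLEC); correct u, v with p'
        p, u, v = correct_fields(p_star, p_prime, u_star, v_star)
        phi = solve_scalars(p, u, v)                            # step 9: other transport equations
        if residual(p, u, v, phi) < tol:                        # step 10: convergence check
            break
    return p, u, v, phi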
SIMPLEC algorithm
Physics,Chemistry
499
27,124,188
https://en.wikipedia.org/wiki/History%20of%20the%20oil%20industry%20in%20Saudi%20Arabia
Saudi Arabian oil was first discovered by the Americans and British in commercial quantities at Dammam oil well No. 7 in 1938 in what is now modern day Dhahran. Background On January 15, 1902, Ibn Saud took Riyadh from the Rashid tribe. In 1913, his forces captured the province of al-Hasa from the Ottoman Turks. In 1922, he completed his conquest of the Nejd, and in 1925, he conquered the Hijaz. In 1932, the Kingdom of Saudi Arabia was proclaimed with Ibn Saud as king. Without stability in the region, the search for oil would have been difficult, as evidenced by early oil exploration in neighbouring countries such as Yemen and Oman. Prior to 1938, there were three main factors that triggered the search for oil in Arabia: The discovery of oil by the Anglo-Persian Oil Company at Masjid-i-Sulaiman in the mountains of north-western Persia in 1908; but the consensus of geological opinion at the time was that there was no oil on the Arabian peninsula, although there were rumours of an oil seepage at Qatif on the eastern seaboard of Al-Ahsa, the eastern province of Arabia. The demand for oil during World War I. It became obvious that oil was going to be a crucial resource in warfare for the foreseeable future. Examples that proved this were “General Gallieni’s commandeering of the Paris taxi fleet to ferry soldiers to the front. This happened when the city seemed about to fall”. In addition to this, Germany’s shortage of oil supplies hindered their ability to produce aircraft, automobiles, and engines. The allies took advantage of this by producing thousands of vehicles to aid their war effort. The onset of the Great Depression. Prior to the depression, a major source of income for the ruler of Hijaz was the taxes paid by pilgrims on their way to the holy cities. After the depression hit, the number of pilgrimages per year fell from 100,000 to below 40,000. This hurt their economy greatly and they needed to find alternate sources of income. This caused Ibn Saud to get serious about the search for oil. Initial search In 1922, Ibn Saud met a New Zealand mining engineer, Major Frank Holmes. During World War I, Holmes had been to Gallipoli and then Ethiopia, where he first heard rumours of the oil seeps of the Persian Gulf region. He was convinced that much oil would be found throughout the region. After the war, Holmes helped to set up Eastern and General Syndicate Ltd in order, among other things, to seek oil concessions in the region. In 1923, the king signed a concession with Holmes allowing him to search for oil in eastern Saudi Arabia. Eastern and General Syndicate brought in a Swiss geologist to evaluate the land, but he claimed that searching for oil in Arabia would be “a pure gamble”. This discouraged the major banks and oil companies from investing in Arabian oil ventures. In 1925, Holmes signed a concession with the sheikh of Bahrain, allowing him to search for oil there. He then proceeded to the United States to find an oil company that might be interested in taking on the concession. He found help from Gulf Oil. In 1927, Gulf Oil took control of the concessions that Holmes made years ago. But Gulf Oil was a partner in the Iraq Petroleum Company, which was jointly owned by Royal Dutch — Shell, Anglo-Persian, the Compagnie Française des Pétroles (ancestor of French major TotalEnergies), and "the Near East Development Company", representing the interests of the American companies. 
The partners had signed up to the “Red Line Agreement”, which meant that Gulf Oil was precluded from taking up the Bahrain concession without the consent of the other partners; and they declined. Despite a promising survey in Bahrain, Gulf Oil was forced to transfer its interest to another company, Standard Oil of California (SOCAL), which was not bound by the Red Line Agreement. Meanwhile, Ibn Saud had dispatched American mining engineer Karl Twitchell to examine eastern Arabia. Twitchell found encouraging signs of oil, asphalt seeps in the vicinity of Qatif, but advised the king to await the outcome of the Bahrain No.1 well before inviting bids for a concession for Al-Ahsa. To the American engineers working in Bahrain, standing on the Jebel Dukhan and gazing across a twenty-mile (32 km) stretch of the Persian Gulf at the Arabian Peninsula in the clear light of early morning, the outline of the low Dhahran hills in the distance was an obvious oil prospect. On 31 May 1932, the SOCAL subsidiary, the Bahrain Petroleum Company (BAPCO), struck oil in Bahrain. The discovery brought fresh impetus to the search for oil on the Arabian peninsula. Negotiations for an oil concession for al-Hasa province opened at Jeddah in March 1933. Twitchell attended with lawyer Lloyd Hamilton on behalf of SOCAL. The Iraq Petroleum Company, represented by Stephen Longrigg, competed in the bidding, but SOCAL was granted the concession on 23 May 1933. Under the agreement, SOCAL was given “exploration rights to some 930,000 square kilometers of land for 60 years”. Soon after the agreement, geologists arrived in al-Hasa and the search for oil was underway. Discovery of oil SOCAL set up a subsidiary company, the California Arabian Standard Oil Company (CASOC), to develop the oil concession. SOCAL also joined forces with the Texas Oil Company when together they formed CALTEX in 1936 to take advantage of the latter's formidable marketing network in Africa and Asia. When CASOC geologists surveyed the concession area, they identified a promising site and named it Dammam No. 7, after a nearby village. Over the next three years, the drillers were unsuccessful in making a commercial strike, but chief geologist Max Steineke persevered. He urged the team to drill deeper, even when Dammam No. 7 was plagued by cave-ins, stuck drill bits and other problems, before the drillers finally struck oil on 3 March 1938. This discovery would turn out to be the first of many, eventually revealing the largest source of crude oil in the world. For the king, oil revenues became a crucial source of wealth since he no longer had to rely on receipts from pilgrimages to Mecca. This discovery would alter Middle Eastern political relations forever. Changes to the original concession In 1943, the name of the company in control in Saudi Arabia was changed to Arabian American Oil Company (ARAMCO). In addition, numerous changes were made to the original concession after the striking of oil. In 1939, the first modification gave the Arabian American Oil Company a greater area to search for oil and extended the concession until 1949, increasing the original deal by six years. In return, ARAMCO agreed to provide the Saudi Arabian government with large amounts of free kerosene and gasoline, and to make higher payments than originally stipulated. Beginning in 1950, the Saudi Arabian government began a pattern of trying to increase government shares of revenue from oil production.
In 1950, a fifty-fifty profit-sharing agreement was signed, whereby a tax was levied by the government. This tax considerably increased government revenues. The government continued this trend well into the ‘80s. By 1982, ARAMCO’s concession area was reduced to 220,000 square kilometers, down from the original 930,000 square kilometers. By 1988, ARAMCO was officially bought out by Saudi Arabia and became known as Saudi Aramco. Tapline Due to the quantity of the oil in Saudi Arabia, construction of pipelines became necessary to increase efficiency of production and transport. ARAMCO soon realized that “advantages of a pipeline to the Mediterranean Sea seemed obvious, saving about 3,200 kilometers of sea travel and the transit fees of the Suez Canal”. In 1945, the Trans-Arabian Pipeline Company (Tapline) was started and was completed in 1950. The pipeline greatly increased efficiency of oil transport, but also had its shortcomings. Issues concerning taxes and damages plagued it for years. It had to be shut down numerous times for repairs, and by 1983 was officially shut down. Yom Kippur War The Yom Kippur War was a conflict between Egypt, Syria, and their backers and Israel. Because the United States was a supporter of Israel, the Arab countries participated in an oil boycott of Canada, Japan, the Netherlands, the United Kingdom, and the United States. This boycott later included Portugal, Rhodesia, and South Africa. This was one of the major causes of the 1973 energy crisis that occurred in the United States. After the completion of the war, the price of oil increased drastically allowing Saudi Arabia to gain much wealth and power. See also Oil reserves in Saudi Arabia Energy in Saudi Arabia References External links Map of oil and gas fields in Saudi Arabia Economic history of Saudi Arabia Petroleum in Saudi Arabia History of the petroleum industry by country Petroleum industry in Saudi Arabia Petroleum
History of the oil industry in Saudi Arabia
Chemistry
1,823
2,283,276
https://en.wikipedia.org/wiki/Arame
Arame, or sea oak (Eisenia bicyclis), is a species of kelp, a brown alga, best known for its use in Japanese cuisine. Description Eisenia bicyclis is indigenous to temperate Pacific Ocean waters centered near Japan, although it is deliberately cultured elsewhere, including South Korea. It grows and reproduces seasonally. Two flattened oval fronds rise from a stiff woody stipe which can be up to about tall. The fronds are shed and new ones formed annually. The plant appears both branched and feathered. It may be harvested by divers manually or mechanically, and the dried form is available year-round. Cuisine It is one of many species of seaweed used in Asian cuisine. Usually purchased in a dried state, it is reconstituted quickly, taking about five minutes. Arame comes in dark brown strands, has a mild, semi-sweet flavor, and a firm texture. It is added to appetizers, casseroles, muffins, pilafs, soups, toasted dishes, and many other types of food. Its mild flavor makes it adaptable to many uses. Chemistry Arame is high in calcium, iodine, iron, magnesium, and vitamin A as well as being a dietary source of many other minerals. It is also harvested for alginate, fertilizer and iodide. It contains the storage polysaccharide laminarin and the tripeptide eisenin, a peptide with immunological activity. Lignan content in arame is noted by several sources. It also contains the phlorotannins phlorofucofuroeckol A, dioxinodehydroeckol, fucofuroeckol A, eckol, dieckol, triphloroethol A and 7-phloroethol. Extracts of this alga have been tested to combat MRSA staph infections. See also Edible seaweed Seafood allergy References Further reading Kristina Turner. 1996. The Self-Healing Cookbook: A Macrobiotic Primer for the Healing Body. p. 122 Iwata, Kayoko. Tagami, Keiko. Uchida, Shigeo. (16 July 2013). "Ecological Half-Lives of Radiocesium in 16 Species in Marine Biota after the TEPCO's Fukushima Daiichi Nuclear Power Plant Accident". Environmental Science and Technology. Vol. 47. Issue. 14. Web of Science Core Collection. External links AlgaeBase Profile, M.D. Guiry in Guiry, M.D. & Guiry, G.M. 2013. AlgaeBase. National University of Ireland, Galway, retrieved 8 February 2013. Lessoniaceae Edible seaweeds
Arame
Biology
559
38,743,590
https://en.wikipedia.org/wiki/Commotion%20Wireless
Commotion Wireless is an open-source wireless mesh network for electronic communication. The project was developed by the Open Technology Institute, and development included a $2 million grant from the United States Department of State in 2011 for use as a mobile ad hoc network (MANET), concomitant with the Arab Spring. It was preliminarily deployed in Detroit in late 2012, and launched generally in March 2013. The project has been called an "Internet in a Suitcase". Commotion 1.0, the first non-beta release, was launched on December 30, 2013. Commotion relies on several open source projects: OLSR, OpenWrt, OpenBTS, and Serval project. Supported hardware Ubiquiti: PicoStation M2, Release 1 & 1.1, DR2 Bullet M2/M5, Release 1 & 1.1, DR2 NanoStation M2/M5, Release 1 & 1.1, DR2 Rocket M2/M5, Release 1 & 1.1, DR2 UniFi AP, Release 1 & 1.1 UniFi Outdoor, Release 1 & 1.1 TP-Link: TL-WDR3600, Release 1.1 TL-WDR4300, Release 1.1 Mikrotik: RB411AH, Release 1.1 See also List of router and firewall distributions References External links Mesh networking
Commotion Wireless
Technology
289
78,205,862
https://en.wikipedia.org/wiki/Lqh%CE%B1IT
Alpha-Insect Toxin LqhαIT is a neurotoxic protein found in the venom of Leiurus hebraeus, commonly known as the Hebrew deathstalker scorpion. It is classified as an alpha-toxin due to its effect on insect voltage-gated sodium channels, causing prolonged neuronal firing that leads to paralysis in affected insects. This toxin has been widely studied for its unique interaction with insect nervous systems and has potential applications in neurophysiological research. Structure and Mechanism LqhαIT is part of the larger family of scorpion alpha-toxins that act specifically on insect sodium channels. The primary structure of LqhαIT consists of a polypeptide chain with several disulfide bridges, contributing to its stability and resistance to degradation. These disulfide bonds are essential for maintaining the conformation needed to bind effectively to target sodium channels in insect nerve cells. LqhαIT binds to voltage-gated sodium channels in insect neurons, causing a prolonged opening of the channels. This action prevents the neurons from returning to their resting state, leading to continuous firing and eventually paralysis. This mechanism is specific to insect sodium channels, which makes LqhαIT highly selective, with limited effects on mammalian sodium channels. Biological Function The primary function of LqhαIT is to immobilize prey, particularly insects, by inducing rapid neurotoxic effects. Upon envenomation, LqhαIT binds to the insect's sodium channels, leading to hyperexcitation and paralysis. This allows the scorpion to subdue its prey quickly and effectively. The specificity of LqhαIT for insect sodium channels also plays a role in the evolutionary adaptation of Leiurus hebraeus, helping it to target insect prey within its native desert ecosystem. Research and Applications Neurophysiological Research: LqhαIT's specificity for insect sodium channels has made it a valuable tool in neurophysiological research. Scientists use this toxin to study the role of sodium channels in neuronal function and to better understand the differences between insect and mammalian ion channel structures. LqhαIT also serves as a model for studying the structure-function relationship of neurotoxins, as it exhibits highly selective binding characteristics that are important for developing novel bioinsecticides. LqhαIT: Structure and Functional Insights As one of the most potent scorpion α-neurotoxins targeting insects, LqhαIT serves as a crucial model for understanding the structural basis of selective toxicity and biological activity among α-neurotoxins. Its structure was determined through proton two-dimensional nuclear magnetic resonance spectroscopy (2D NMR), revealing detailed conformational features and providing insights into the interactions that underlie its insecticidal potency. Apo Structure The solution structure of LqhαIT was determined using 2D NMR. The structural features include: Secondary Structure: LqhαIT consists of an α-helix and a three-stranded antiparallel β-sheet. These elements are stabilized by three type I tight turns and a five-residue turn. Hydrophobic Patch: A distinct hydrophobic patch, characteristic of scorpion neurotoxins, includes tyrosine and tryptophan residues arranged in a "herringbone" pattern. This region likely contributes to toxin stability and interaction with insect sodium channels.
Comparison with Anti-mammalian α-Toxin (AaHII) The polypeptide backbone of LqhαIT closely resembles that of AaHII, an antimammalian α-toxin from Androctonus australis Hector, sharing approximately 60% amino acid sequence similarity. However, critical structural differences exist between the two, particularly in the five-residue turn involving Lys8-Cys12, the C-terminal segment, and the relative orientation of these regions. These variations are thought to underpin LqhαIT's selectivity for insect sodium channels, whereas AaHII is more effective against mammalian targets CryoEM structure of LqhαIT bound to NavPas Scorpion α-toxin LqhαIT exerts its potent insecticidal effects by specifically binding to a unique glycan on the insect voltage-gated sodium (Nav) channel. Cryo-electron microscopy (cryo-EM) studies have elucidated the structure of LqhαIT in complex with the insect Nav channel, revealing the intricate interactions between the toxin and the glycan scaffold attached to asparagine 330 on the channel. This glycan provides a distinct epitope that facilitates selective binding of LqhαIT to insect channels, stabilizing the voltage sensor domain in an inactive "S4 down" conformation. This mechanism contrasts with similar toxins that target mammalian channels, highlighting LqhαIT's specificity and effectiveness due to its selectivity. Further studies demonstrated that LqhαIT contains an NC-domain epitope, including residues critical for binding to the glycan scaffold, enabling the toxin to maintain a stable interaction with the Nav channel. Molecular dynamics simulations confirm the stability of these interactions, including hydrogen bonds and salt bridges, which remain consistent throughout the simulations. This glycosylation binding contributes to the potency of LqhαIT and offers insights into the design of insect-specific Nav channel modulators. The structure-function relationship observed here underscores the utility of such toxins as models for developing targeted Nav channel modulators with minimal off-target effects on mammalian systems. Toxicology and Safety While LqhαIT is toxic to insects, it exhibits minimal toxicity to mammals, including humans. This specificity is due to structural differences in mammalian sodium channels, which do not interact with LqhαIT in the same way as insect channels. However, the venom of Leiurus hebraeus as a whole can still pose significant risks to humans, as it contains other potent toxins targeting various components of the nervous system. Proper safety measures are necessary when handling scorpion venom in laboratory settings to prevent accidental envenomation. See also Leiurus hebraeus Scorpion venom Voltage-gated sodium channels References Scorpion toxins Peptides
LqhαIT
Chemistry
1,296
54,316,088
https://en.wikipedia.org/wiki/Non-relativistic%20spacetime
In physics, a non-relativistic spacetime is any mathematical model that fuses n–dimensional space and m–dimensional time into a single continuum other than the (3+1) model used in relativity theory. In the sense used in this article, a spacetime is deemed "non-relativistic" if (a) it deviates from (3+1) dimensionality, even if the postulates of special or general relativity are otherwise satisfied, or if (b) it does not obey the postulates of special or general relativity, regardless of the model's dimensionality. Introduction There are many reasons why spacetimes may be studied that do not satisfy relativistic postulates and/or that deviate from the apparent (3+1) dimensionality of the known universe. Galilean/Newtonian spacetime The classic example of a non-relativistic spacetime is the spacetime of Galileo and Newton. It is the spacetime of everyday "common sense". Galilean/Newtonian spacetime assumes that space is Euclidean (i.e. "flat"), and that time has a constant rate of passage that is independent of the state of motion of an observer, or indeed of anything external. Newtonian mechanics takes place within the context of Galilean/Newtonian spacetime. For a very wide range of problems, the results of computations using Newtonian mechanics are only imperceptibly different from computations using a relativistic model. Since computations using Newtonian mechanics are considerably simpler than those using relativistic mechanics, and correspond more closely to intuition, most everyday mechanics problems are solved using Newtonian mechanics. Model systems Efforts since 1930 to develop a consistent quantum theory of gravity have not yet produced more than tentative results. The study of quantum gravity is difficult for multiple reasons. Technically, general relativity is a complex, nonlinear theory. Very few problems of significant interest admit of analytical solution, and numerical solutions in the strong-field realm can require immense amounts of supercomputer time. Conceptual issues present an even greater difficulty, since general relativity states that gravity is a consequence of the geometry of spacetime. To produce a quantum theory of gravity would therefore require quantizing the basic units of measurement themselves: space and time. A completed theory of quantum gravity would undoubtedly present a visualization of the Universe unlike any that has hitherto been imagined. One promising research approach is to explore the features of simplified models of quantum gravity that present fewer technical difficulties while retaining the fundamental conceptual features of the full-fledged model. In particular, general relativity in reduced dimensions (2+1) retains the same basic structure as the full (3+1) theory, but is technically far simpler. Multiple research groups have adopted this approach to studying quantum gravity. "New physics" theories The idea that relativistic theory could be usefully extended with the introduction of extra dimensions originated with Nordström's 1914 modification of his previous 1912 and 1913 theories of gravitation. In this modification, he added an additional dimension, resulting in a 5-dimensional vector theory. Kaluza–Klein theory (1921) was an attempt to unify relativity theory with electromagnetism. Although at first enthusiastically welcomed by physicists such as Einstein, Kaluza–Klein theory was too beset with inconsistencies to be a viable theory.
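A concrete way to see the difference between the Galilean/Newtonian kinematics described above and relativistic kinematics is to compare the two boost transformations directly. The following Python sketch uses arbitrary illustrative values; it only demonstrates that the Lorentz boost reduces to the Galilean one when v is small compared with c:

import math

C = 299_792_458.0   # speed of light, m/s

def galilean_boost(x, t, v):
    # Galilean transformation: absolute time, Euclidean space
    return x - v * t, t

def lorentz_boost(x, t, v, c=C):
    # Lorentz transformation along x; approaches the Galilean form as v/c -> 0
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / c ** 2)

# Everyday numbers (1 km, 1 s, 30 m/s): the two agree to within terms of order (v/c)**2 ~ 1e-14
print(galilean_boost(1.0e3, 1.0, 30.0))   # (970.0, 1.0)
print(lorentz_boost(1.0e3, 1.0, 30.0))    # (~970.0, ~1.0)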
Various superstring theories have effective low-energy limits that correspond to classical spacetimes with dimensionalities other than the apparent dimensionality of the observed universe. It has been argued that all but the (3+1) dimensional world represent dead worlds with no observers. Therefore, on the basis of anthropic arguments, it would be predicted that the observed universe should be one of (3+1) spacetime. Space and time may not be fundamental properties, but rather may represent emergent phenomena whose origins lie in quantum entanglement. It has occasionally been wondered whether it is possible to derive sensible laws of physics in a universe with more than one time dimension. Early attempts at constructing spacetimes with extra timelike dimensions inevitably met with issues such as causality violation and so could be immediately rejected, but it is now known that viable frameworks of such spacetimes exist that can be correlated with general relativity and the Standard Model, and which make predictions of new phenomena that are within the range of experimental access. Possible observational evidence Observed high values of the cosmological constant may imply kinematics significantly different from relativistic kinematics. A deviation from relativistic kinematics would have significant cosmological implications in regard to such puzzles as the "missing mass" problem. To date, general relativity has satisfied all experimental tests. However, proposals that may lead to a quantum theory of gravity (such as string theory and loop quantum gravity) generically predict violations of the weak equivalence principle in the 10^−13 to 10^−18 range. Currently envisioned tests of the weak equivalence principle are approaching a degree of sensitivity such that non-discovery of a violation would be just as profound a result as discovery of a violation. Non-discovery of equivalence principle violation in this range would suggest that gravity is so fundamentally different from other forces as to require a major reevaluation of current attempts to unify gravity with the other forces of nature. A positive detection, on the other hand, would provide a major guidepost towards unification. Condensed matter physics Research on condensed matter has spawned a two-way relationship between spacetime physics and condensed matter physics: On the one hand, spacetime approaches have been used to investigate certain condensed matter phenomena. For example, spacetimes with local non-relativistic symmetries that are capable of supporting massive matter fields have been investigated. This approach has been used to investigate the details of matter couplings, transport phenomena, and the thermodynamics of non-relativistic fluids. On the other hand, condensed matter systems can be used to mimic certain aspects of general relativity. Although intrinsically non-relativistic, these systems provide models of curved spacetime quantum field theory that are experimentally accessible. These include acoustical models in flowing fluids, Bose–Einstein condensate systems, or quasiparticles in moving superfluids, such as the quasiparticles and domain walls of the A-phase of superfluid helium-3. Examples of model systems Examples of "new physics" theories Examples of possible observational evidence Examples in condensed matter physics Further reading Debono, I. and G. F. Smoot. General Relativity and Cosmology: Unsolved Questions and Future Directions See also Non-relativistic gravitational fields References Theory of relativity
Non-relativistic spacetime
Physics
1,374
24,532,432
https://en.wikipedia.org/wiki/Unplaced%20in%20APG%20II
When the APG II system of plant classification was published in April 2003, fifteen genera and three families were placed incertae sedis in the angiosperms, and were listed in a section of the appendix entitled "Taxa of uncertain position". By the end of 2009, molecular phylogenetic analysis of DNA sequences had revealed the relationships of most of these taxa, and all but three of them had been placed in some group within the angiosperms. In October 2009, APG II was superseded by the APG III system. In APG III, 11 of the genera listed above were placed in families, or else became families whose position within their orders was approximately or exactly known. The family Rafflesiaceae was placed in the order Malpighiales, close to Euphorbiaceae and possibly within it. Mitrastema became a monotypic family, Mitrastemonaceae. This family and Balanophoraceae were placed incertae sedis into orders, that is, their positions within these orders remained completely unknown. Metteniusa was found to belong to a supraordinal group known as the lamiids, which has not been satisfactorily divided into orders. Cynomorium was raised to familial status as Cynomoriaceae, and along with Apodanthaceae and Gumillea, remained unplaced in APG III. Five taxa were unplaced among the angiosperms in APG III because Nicobariodendron and Petenaea were added to the list. Leptaulus There is no apparent reason for the inclusion of Leptaulus in the list of unplaced taxa, other than the time lag between submission and publication. In 2001, in a phylogenetic study based on morphological and DNA data, Leptaulus was found to belong to a group of six genera that most authors now consider to be the family Cardiopteridaceae. This was confirmed in a study of wood anatomy in 2008. The genus is placed in the Cardiopteridaceae in the APG III system of 2009. Before 2001, Leptaulus and the rest of Cardiopteridaceae had usually been placed in a broadly circumscribed Icacinaceae, which turned out to be polyphyletic. Some botanists do not recognize Cardiopteridaceae as a family of six genera. Instead, they segregate Cardiopteris into a monogeneric Cardiopteridaceae sensu stricto and place the other five genera in the family Leptaulaceae. The monophyly of Leptaulaceae has never been tested with molecular data. Pottingeria It had long been thought, at least by some, that the small Southeast Asian tree Pottingeria might belong in the order Celastrales. In a phylogenetic study of that order in 2006, Pottingeria was found to be a member of the order, but not of any of its families. It was in an unresolved pentatomy consisting of Parnassiaceae, Pottingeria, Mortonia, the pair (Quetzalia + Zinowiewia), and the other genera of Celastraceae. When the APG III system was published in October 2009, the Angiosperm Phylogeny Group expanded Celastraceae to include all members of the pentatomy mentioned above. Dipentodon Dipentodon has one species Dipentodon sinicus. It is native to southern China, Burma, and northern India. In 2009, in a molecular phylogenetic study of the order Huerteales, it was shown that Dipentodon and Perrottetia belong together as the two genera of the family Dipentodontaceae. Medusandra and Soyauxia In 2009, in a molecular phylogenetic study of Malpighiales, Kenneth Wurdack and Charles Davis sampled five genera and one family that had been unplaced in APG II. They placed some of these for the first time and confirmed the previous placement of others with strong statistical support. 
In their outgroup, they included four genera from Saxifragales. These were Daphniphyllum, Medusandra, Soyauxia, and Peridiscus. In their phylogeny, Medusandra and Soyauxia formed a strongly supported clade with Peridiscus, a member of the family Peridiscaceae, the most basal clade in Saxifragales. Wurdack and Davis recommended that Medusandra and Soyauxia both be transferred to Peridiscaceae. Thus the monogeneric family Medusandraceae is subsumed into Peridiscaceae. Soyauxia had been found to be close to Peridiscus in another study two years before. Wurdack and Davis also found that the family Rafflesiaceae and the genera Aneulophus, Centroplacus, and Trichostephanus belong in the order Malpighiales. Aneulophus Aneulophus consists of two species of woody plants from tropical West Africa. Wurdack and Davis found the traditional placement of Aneulophus in Erythroxylaceae to be correct. Its position within the family remains uncertain. Erythroxylaceae is a family of four genera. Erythroxylum has about 230 species. Nectaropetalum has eight species and Pinacopodium has two. No one has yet produced a molecular phylogeny of the family. Centroplacus Centroplacus has a single species, Centroplacus glaucinus, a tree from West Africa. It was found to be close to Bhesa, a genus that had only recently been removed from Celastrales. Bhesa was grouped with Centroplacus to become the second genus in Centroplacaceae. Bhesa consists of five species of trees from India and Malesia. Trichostephanus Trichostephanus has two species, both in tropical West Africa. It had usually been assigned to Achariaceae, but it was found to be deeply embedded in Samydaceae. Many taxonomists do not recognize Samydaceae as a separate family from Salicaceae. Rafflesiaceae Several genera have been removed from Rafflesiaceae, so that it now consists of only three genera: Sapria, Rhizanthes, and Rafflesia. All of these are holoparasites and, as discussed below, finding their relationships by molecular phylogenetics has presented special challenges. Rafflesia and its relatives were the subject of several papers from 2004 to 2009, and as the world's largest flower, Rafflesia has attracted special interest. In 2009, Wurdack and Davis confirmed earlier work in which it was found that Rafflesiaceae is nested within Euphorbiaceae sensu stricto, a circumscription of Euphorbiaceae that excludes Phyllanthaceae, Picrodendraceae, Putranjivaceae, Pandaceae, and a few other very small groups that had been included in it until the 1990s. In order to preserve Rafflesiaceae, Wurdack and Davis split Euphorbiaceae sensu stricto into Euphorbiaceae sensu strictissimo and Peraceae, a new family comprising Pera and four other genera. Parasites Four of the unplaced genera, and all three of the unplaced families of APG II consist of achlorophyllous holoparasites. In these, the chloroplast genes that are usually used in phylogenetic studies of angiosperms have become nonfunctional pseudogenes. If these evolve rapidly, they may be saturated with repeated mutations at the same site and consequently not be useful for phylogenetic reconstruction. The relationships of some parasitic taxa have been elucidated in studies of nuclear and mitochondrial DNA sequences. But these sequences sometimes produce artifactual topologies in the phylogenetic tree, because horizontal gene transfer often occurs between parasites and their hosts. 
Bdallophyton and Cytinus The parasitic genera Bdallophyton and Cytinus have been found to be closely related and have been placed together as the family Cytinaceae. On the basis of mitochondrial DNA, Cytinaceae has been placed in Malvales, as sister to Muntingiaceae. Mitrastemon The parasitic family Mitrastemonaceae has one genus, known either as Mitrastemon or Mitrastema. The genus name and the corresponding family name have been a source of much confusion. A phylogeny based on mitochondrial genes places Mitrastemon in the order Ericales, but this result had only 76% maximum likelihood bootstrap support. Hoplestigma Hoplestigma consists of two species of African trees, notable for their large leaves, up to 55 cm long and 25 cm wide. It is usually placed by itself in the family Hoplestigmataceae which is thought to be related to Boraginaceae. In 2014, a phylogeny of Boraginaceae was published in a scientific journal called Cladistics. By comparing the DNA sequences of selected genes, the authors of that study showed that Hoplestigma is related to members of Boraginaceae subfamily Cordioideae, and they recommended that Hoplestigma be placed in that subfamily. Other authors have suggested that, while Hoplestigma is the closest relative of Cordioideae, it should perhaps not be placed within it. Metteniusa Metteniusa consists of seven species of trees in Central America and northwestern South America. Ever since Hermann Karsten proposed the name Metteniusaceae in 1859, some authors have placed Metteniusa by itself, in that family. Most authors, however, placed it in Icacinaceae until that family was shown to be polyphyletic in 2001. In 2007, in a comparison of DNA sequences for three genes, it was found that Metteniusa is one of the basal clades of the lamiids. The authors recommended that the family Metteniusaceae be recognized. Nothing is yet known about relationships among the groups of basal lamiids. The groups in this polytomy include the order Garryales, the families Icacinaceae, Oncothecaceae, and Metteniusaceae, as well as some unplaced genera, including Apodytes, Emmotum, and Cassinopsis. No phylogenetic study has focused on the lamiids, but phylogenies have been inferred for the asterids, a group composed of Cornales, Ericales, the lamiids, and the campanulids. Balanophoraceae Balanophoraceae is a family of holoparasites with 44 species in 17 genera. For a long time, Cynomorium was usually included in this family, but it is now known to be unrelated. In 2005, Balanophoraceae was shown to be in the order Santalales, but its position within that order has not been determined. Two researchers in Taiwan announced on the internet in 2009 that they have results supporting the placement of Balanophoraceae in Santalales. They have yet to publish anything in a scientific journal. Cynomorium Many names have been published in Cynomorium, but there are probably only two species. It is not closely related to anything else, so it is placed in the monogeneric family Cynomoriaceae. Attempts to find its closest relatives have demonstrated with special clarity that molecular phylogenetics is not a sure-fire, problem-free method of determining systematic relationships. One study placed it in Saxifragales, but not at any particular position within that order. Doubts have been expressed about the results of this study. Another study placed Cynomorium in Rosales based on analysis of the two invert repeat regions of the chloroplast genome, which evolve at one fifth the rate of the two single copy regions. 
Gumillea Gumillea has a single species, Gumillea auriculata, and is known from only one specimen which was collected in the late 18th century in Peru. It was named by Hipólito Ruiz López and José Antonio Pavón Jiménez. George Bentham and Joseph Hooker placed it in Cunoniaceae, and this treatment was followed by Adolf Engler and most others. The last comprehensive treatment of Cunoniaceae, however, excludes it from the family. In 2009, Armen Takhtajan placed Gumillea in Simaroubaceae. A 2007 article on Simaroubaceae contains a list of the genera in the family. Gumillea is not on that list, but the authors do not provide a list or section on excluded genera. Gumillea has also been called a synonym of Picramnia, but the ultimate source of this information is obscure and it is not mentioned in either of the recent treatments of Picramnia. It is worth noting that on their plate for Gumillea, Ruiz and Pavón showed 11 ovules or immature seeds that had been extracted from a 2-locular ovary. But the ovary in Picramnia has (sometimes 2), usually 3 to 4 locules and there are always two ovules in each locule. It might be possible to determine the affinities of Gumillea if DNA could be extracted from the existing specimen. DNA has been successfully amplified from specimens of similar age. Any material used in such research, however, will never be replaced. Apodanthaceae The family Apodanthaceae comprises 22 to 30 species of endoparasitic herbs. These are distributed into three genera: Pilostyles, Apodanthes, and Berlinianche. Attempts to determine the relationships of Apodanthaceae have produced only uncertain results and they have remained enigmatic, until the family was shown to be confidently placed in Cucurbitales References External links Aneulophus Apodanthaceae Balanophoraceae Bdallophytum Centroplacus Cynomorium Cytinus Dipentodon Gumillea Hoplestigma Leptaulus Medusandra Metteniusa Mitrastema Pottingeria Rafflesiaceae Trichostephanus Mabberley's Plant-book Gumillea Flowering Plants (Takhtajan 2009) Plant taxonomy Unplaced names
Unplaced in APG II
Biology
2,971
23,826,998
https://en.wikipedia.org/wiki/Compensatory%20growth%20%28organ%29
Compensatory growth is a type of regenerative growth that can take place in a number of human organs after the organs are damaged, removed, or cease to function. Increased functional demand can also stimulate this growth in tissues and organs. The growth can result from increased cell size (compensatory hypertrophy), an increase in cell division (compensatory hyperplasia), or both. For instance, if one kidney is surgically removed, the cells of the other kidney divide at an increased rate. Eventually, the remaining kidney can grow until its mass approaches the combined mass of two kidneys. Along with the kidneys, compensatory growth has also been characterized in a number of other tissues and organs including: The adrenal glands The heart Muscles The liver The lungs The pancreas (beta cells and acinar cells) The mammary glands The spleen (after spleen injury, bone marrow and lymphatic tissue undergo compensatory hypertrophy and assume the spleen's functions) The testicles The thyroid gland The turbinates of the nose A large number of growth factors and hormones are involved in compensatory growth, but the exact mechanism is not fully understood and probably varies between different organs. Nevertheless, angiogenic growth factors, which control the growth of blood vessels, are particularly important because blood flow significantly determines the maximum growth of an organ. Compensatory growth may also refer to the accelerated growth following a period of slowed growth, particularly as a result of nutrient deprivation. See also Hyperplasia Hypertrophy Cellular adaptation References Developmental biology Healing Human anatomy Human physiology Human development
Compensatory growth (organ)
Biology
341
26,137,572
https://en.wikipedia.org/wiki/Plant%20litter
Plant litter (also leaf litter, tree litter, soil litter, litterfall or duff) is dead plant material (such as leaves, bark, needles, twigs, and cladodes) that has fallen to the ground. This detritus or dead organic material and its constituent nutrients are added to the top layer of soil, commonly known as the litter layer or O horizon ("O" for "organic"). Litter is an important factor in ecosystem dynamics, as it is indicative of ecological productivity and may be useful in predicting regional nutrient cycling and soil fertility. Characteristics and variability Litterfall is characterized as fresh, undecomposed, and easily recognizable (by species and type) plant debris. This can be anything from leaves, cones, needles, twigs, bark, seeds/nuts, logs, or reproductive organs (e.g. the stamen of flowering plants). Items larger than 2 cm in diameter are referred to as coarse litter, while anything smaller is referred to as fine litter or litter. The type of litterfall is most directly affected by ecosystem type. For example, leaf tissues account for about 70 percent of litterfall in forests, but woody litter tends to increase with forest age. In grasslands, there is very little aboveground perennial tissue, so the annual litterfall is very low and quite nearly equal to the net primary production. In soil science, soil litter is classified into three layers, which form on the surface of the O horizon: the L layer (fresh, undecomposed litter), the F layer (fragmented, partly decomposed material) and the H layer (humified material). The litter layer is quite variable in its thickness, decomposition rate and nutrient content and is affected in part by seasonality, plant species, climate, soil fertility, elevation, and latitude. The most extreme variability of litterfall is seen as a function of seasonality; each individual species of plant has seasonal losses of certain parts of its body, which can be determined by the collection and classification of plant litterfall throughout the year, and this in turn affects the thickness of the litter layer. In tropical environments, the largest amount of debris falls in the latter part of the dry season and early in the wet season. As a result of this seasonal variability, the decomposition rate for any given area will also be variable. Latitude also has a strong effect on litterfall rates and thickness. Specifically, litterfall declines with increasing latitude. In tropical rainforests, there is a thin litter layer due to the rapid decomposition, while in boreal forests, the rate of decomposition is slower and leads to the accumulation of a thick litter layer, also known as a mor. Net primary production works inversely to this trend, suggesting that the accumulation of organic matter is mainly a result of the decomposition rate. Surface detritus facilitates the capture and infiltration of rainwater into lower soil layers. The surface detritus also protects soil from excess drying and warming. Soil litter protects soil aggregates from raindrop impact, preventing the release of clay and silt particles that would otherwise plug soil pores. The release of clay and silt particles reduces the capacity of the soil to absorb water and increases cross-surface flow, accelerating soil erosion. In addition, soil litter reduces wind erosion by preventing the soil from losing moisture and by providing cover that prevents soil transport. Organic matter accumulation also helps protect soils from wildfire damage. Soil litter can be completely removed depending on the intensity and severity of wildfires and the season.
Regions with high frequency wildfires have reduced vegetation density and reduced soil litter accumulation. Climate also influences the depth of plant litter. Typically humid tropical and sub-tropical climates have reduced organic matter layers and horizons due to year-round decomposition and high vegetation density and growth. In temperate and cold climates, litter tends to accumulate and decompose slower due to a shorter growing season. Net primary productivity Net primary production and litterfall are intimately connected. In every terrestrial ecosystem, the largest fraction of all net primary production is lost to herbivores and litter fall. Due to their interconnectedness, global patterns of litterfall are similar to global patterns of net primary productivity. Plant litter, which can be made up of fallen leaves, twigs, seeds, flowers, and other woody debris, makes up a large portion of above ground net primary production of all terrestrial ecosystems. Fungus plays a large role in cycling the nutrients from the plant litter back into the ecosystem. Habitat and food Litter provides habitat for a variety of organisms. Plants Certain plants are specially adapted for germinating and thriving in the litter layers. For example, bluebell (Hyacinthoides non-scripta) shoots puncture the layer to emerge in spring. Some plants with rhizomes, such as common wood sorrel (Oxalis acetosella) do well in this habitat. Detritivores and other decomposers Many organisms that live on the forest floor are decomposers, such as fungi. Organisms whose diet consists of plant detritus, such as earthworms, are termed detritivores. The community of decomposers in the litter layer also includes bacteria, amoeba, nematodes, rotifer, tardigrades, springtails, cryptostigmata, potworms, insect larvae, mollusks, oribatid mites, woodlice, and millipedes. Even some species of microcrustaceans, especially copepods (for instance Bryocyclops spp., Graeteriella spp.,Olmeccyclops hondo, Moraria spp.,Bryocamptus spp., Atheyella spp.) live in moist leaf litter habitats and play an important role as predators and decomposers. The consumption of the litterfall by decomposers results in the breakdown of simple carbon compounds into carbon dioxide (CO2) and water (H2O), and releases inorganic ions (like nitrogen and phosphorus) into the soil where the surrounding plants can then reabsorb the nutrients that were shed as litterfall. In this way, litterfall becomes an important part of the nutrient cycle that sustains forest environments. As litter decomposes, nutrients are released into the environment. The portion of the litter that is not readily decomposable is known as humus. Litter aids in soil moisture retention by cooling the ground surface and holding moisture in decaying organic matter. The flora and fauna working to decompose soil litter also aid in soil respiration. A litter layer of decomposing biomass provides a continuous energy source for macro- and micro-organisms. Larger animals Numerous reptiles, amphibians, birds, and even some mammals rely on litter for shelter and forage. Amphibians such as salamanders and caecilians inhabit the damp microclimate underneath fallen leaves for part or all of their life cycle. This makes them difficult to observe. A BBC film crew captured footage of a female caecilian with young for the first time in a documentary that aired in 2008. Some species of birds, such as the ovenbird of eastern North America for example, require leaf litter for both foraging and material for nests. 
Sometimes litterfall even provides energy to much larger mammals, such as in boreal forests where lichen litterfall is one of the main constituents of wintering deer and elk diets. Nutrient cycle During leaf senescence, a portion of the plant's nutrients are reabsorbed from the leaves. The nutrient concentrations in litterfall differ from the nutrient concentrations in the mature foliage by the reabsorption of constituents during leaf senescence. Plants that grow in areas with low nutrient availability tend to produce litter with low nutrient concentrations, as a larger proportion of the available nutrients is reabsorbed. After senescence, the nutrient-enriched leaves become litterfall and settle on the soil below. Litterfall is the dominant pathway for nutrient return to the soil, especially for nitrogen (N) and phosphorus (P). The accumulation of these nutrients in the top layer of soil is known as soil immobilization. Once the litterfall has settled, decomposition of the litter layer, accomplished through the leaching of nutrients by rainfall and throughfall and by the efforts of detritivores, releases the breakdown products into the soil below and therefore contributes to the cation exchange capacity of the soil. This holds especially true for highly weathered tropical soils. Decomposition rate is tied to the type of litterfall present. Leaching is the process by which cations such as iron (Fe) and aluminum (Al), as well as organic matter are removed from the litterfall and transported downward into the soil below. This process is known as podzolization and is particularly intense in boreal and cool temperate forests that are mainly constituted by coniferous pines whose litterfall is rich in phenolic compounds and fulvic acid. By the process of biological decomposition by microfauna, bacteria, and fungi, CO2 and H2O, nutrient elements, and a decomposition-resistant organic substance called humus are released. Humus composes the bulk of organic matter in the lower soil profile. The decline of nutrient ratios is also a function of decomposition of litterfall (i.e. as litterfall decomposes, more nutrients enter the soil below and the litter will have a lower nutrient ratio). Litterfall containing high nutrient concentrations will decompose more rapidly and asymptote as those nutrients decrease. Knowing this, ecologists have been able to use nutrient concentrations as measured by remote sensing as an index of a potential rate of decomposition for any given area. Globally, data from various forest ecosystems shows an inverse relationship in the decline in nutrient ratios to the apparent nutrition availability of the forest. Once nutrients have re-entered the soil, the plants can then reabsorb them through their roots. Therefore, nutrient reabsorption during senescence presents an opportunity for a plant's future net primary production use. A relationship between nutrient stores can also be defined as: annual storage of nutrients in plant tissues + replacement of losses from litterfall and leaching = the amount of uptake in an ecosystem Non-terrestrial Litterfall Non-terrestrial litterfall follows a very different path. Litter is produced both inland by terrestrial plants and moved to the coast by fluvial processes, and by mangrove ecosystems. From the coast Robertson & Daniel 1989 found it is then removed by the tide, crabs and microbes. They also noticed that which of those three is most significant depends on the tidal regime. Nordhaus et al. 
2011 found that crabs forage for leaves at low tide and that, if their detritivory is the predominant disposal route, they can take 80% of the leaf material. Bakkar et al. 2017 studied the chemical contribution of the resulting crab defecation. They found that crabs pass a noticeable amount of undegraded lignin to both the sediments and the water. They also found that the exact carbonaceous contribution of each plant species can be traced from the plant, through the crab, to its final disposition in sediment or water in this way. Crabs are usually the only significant macrofauna in this process; however, Raw et al. 2017 found that Terebralia palustris competes with crabs unusually vigorously in Southeast Asia. Collection and analysis The main objectives of litterfall sampling and analysis are to quantify litterfall production and chemical composition over time in order to assess the variation in litterfall quantities, and hence its role in nutrient cycling across an environmental gradient of climate (moisture and temperature) and soil conditions. Ecologists employ a simple approach to the collection of litterfall, most of which centers around one piece of equipment, known as a litterbag. A litterbag is simply any type of container that can be set out in any given area for a specified amount of time to collect the plant litter that falls from the canopy above. Litterbags are generally set in random locations within a given area, marked with GPS or local coordinates, and then monitored at specific time intervals. Once the samples have been collected, they are usually classified by type, size and species (if possible) and recorded on a spreadsheet. When measuring bulk litterfall for an area, ecologists will weigh the dry contents of the litterbag. By this method litterfall flux can be defined as: litterfall (kg m−2 yr−1) = total litter mass (kg) / litterbag area (m2) The litterbag may also be used to study decomposition of the litter layer. By confining fresh litter in the mesh bags and placing them on the ground, an ecologist can monitor and collect the decay measurements of that litter. An exponential decay pattern has been produced by this type of experiment: X = X0·e^(−kt), where X0 is the initial leaf litter and k is a constant fraction of detrital mass. The mass-balance approach is also utilized in these experiments and suggests that the decomposition for a given amount of time should equal the input of litterfall for that same amount of time: litterfall = k(detrital mass) To study the various groups of edaphic fauna, litterbags with different mesh sizes are needed. Issues Change due to invasive earthworms In some regions of glaciated North America, earthworms have been introduced where they are not native. Non-native earthworms have led to environmental changes by accelerating the rate of decomposition of litter. These changes are being studied, but may have negative impacts on some inhabitants such as salamanders. Forest litter raking Leaf litter accumulation depends on factors like wind, decomposition rate and species composition of the forest. The quantity, depth and humidity of leaf litter vary in different habitats. The leaf litter found in primary forests is more abundant, deeper and holds more humidity than in secondary forests. This condition also allows for a more stable leaf litter quantity throughout the year. This thin, delicate layer of organic material can be easily affected by humans.
For instance, forest litter raking as a replacement for straw in husbandry is an old non-timber practice in forest management that has been widespread in Europe since the seventeenth century. In 1853, an estimated 50 Tg of dry litter per year was raked in European forests, when the practice reached its peak. This human disturbance, if not combined with other degradation factors, could promote podzolisation; if managed properly (for example, by burying litter removed after its use in animal husbandry), even the repeated removal of forest biomass may not have negative effects on pedogenesis. See also Coarse woody debris Detritus Forest floor Leaf litter sieve Leaf mold (a type of compost) Soil horizon References External links forestresearch.gov.uk Biology terminology Ecological restoration Ecology terminology Ecology Environmental terminology Habitat Soil improvers
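The litterbag calculations described under "Collection and analysis" above are simple arithmetic, and the short Python sketch below shows how they could be applied in practice. The numbers are hypothetical illustrations, not measurements from any study cited in the article, and the variable names are invented for this example.

# Hypothetical litterbag numbers, used only to illustrate the formulas above.
import math

# Litterfall flux: dry litter mass collected over one year divided by trap area.
litter_mass_kg = 0.135        # dry mass collected in one litterbag over a year (kg)
litterbag_area_m2 = 0.25      # collecting area of the litterbag (m^2)
flux = litter_mass_kg / litterbag_area_m2
print(f"litterfall flux: {flux:.2f} kg m-2 yr-1")

# Decay constant k from the exponential model X = X0*e^(-kt),
# estimated from the initial mass and the mass remaining after time t.
X0 = 10.0                     # initial dry mass of confined litter (g)
X_t = 6.1                     # dry mass remaining after t years (g)
t = 1.0                       # elapsed time (years)
k = -math.log(X_t / X0) / t
print(f"decay constant k: {k:.2f} per year")

# Mass-balance check at steady state: litterfall input = k * (detrital mass).
detrital_mass = flux / k      # standing detrital mass implied by the balance (kg m-2)
print(f"steady-state detrital mass: {detrital_mass:.2f} kg m-2")

With these example values the flux is 0.54 kg m-2 yr-1, k is about 0.49 per year, and the implied steady-state detrital mass is roughly 1.1 kg m-2.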
Plant litter
Chemistry,Engineering,Biology
2,963
49,606,123
https://en.wikipedia.org/wiki/Zinc%20finger%20protein%20695
Zinc finger protein 695 is a protein that in humans is encoded by the ZNF695 gene. See also ZNF692 References Further reading Human proteins
Zinc finger protein 695
Chemistry
38
5,050,196
https://en.wikipedia.org/wiki/Cointet-element
The Cointet-element, also known as a Belgian Gate or C-element, was a heavy steel fence about wide and high, typically mounted on concrete rollers, used as a mobile anti-tank obstacle during World War II. Each individual fence element weighed about and was movable (e.g. with two horses) through the use of two fixed and one rotating roller. Its invention is attributed to a French colonel (later general), Léon-Edmond de Cointet de Fillain who came up with the idea in 1933 to be used in the Maginot Line. Besides their use as barricades to the entrances of forts, bridges and roads, the heavy fences were used in the Belgian "Iron Wall" of the Koningshooikt–Wavre Line (also known as "Dyle Line") and were re-used as beach obstacles on the Atlantic Wall defending Normandy from Allied invasion. History The Cointet-element formed the main barricade of the Belgian K-W Line, a tank barricade that was built between September 1939 and May 1940. Following tests, the Belgian Army accepted the Cointet-elements in 1936 after slightly altering the design by the addition of eight vertical beams in the front frame to stop infantry moving through them. On 13 February 1939 and 24 July 1939 the first tenders were called for ten groups of five hundred Cointets each. A total of 77,000 pieces were ordered by the Belgian Ministry of Defence and produced by twenty-eight Belgian companies with 73,600 pieces delivered. Thousands of Cointets were installed on the K-W Line between the village of Koningshooikt and the city of Wavre to act as the main line of defence against a possible German armoured invasion through the heartland of Belgium, forming a long iron wall. The Cointet-elements were placed next to each other in a zig-zag and connected with steel cables. Near main roads they were fixed to heavy concrete pillars set into the ground to allow local traffic passage. By May 1940 however, due to a relocation programme, the elements did not form a continuous line and thus were easily bypassed by the 3rd and 4th Panzer Divisions. The Cointet elements were also used as an anti-tank line in a side branch of the K-W Line, which was meant to defend the southern approaches to Brussels. This line branched off the main line in Wavre and ran from there to Halle and on to Ninove, where it ended on the banks of the Dender. After the German victory in Belgium on 28 May 1940, the Belgian Gates were reallocated across Europe to serve as barricade elements on roads, bridges and beaches. The Germans gave it the name C-element. Large numbers of gates were brought to Normandy during the construction of the Atlantikwall to be used with the other varieties of beach obstacles. Instead of connecting them, the Germans used them singly next to other items, especially at the low tide line. They were also put on the dikes next to bunkers. Notes from 1944 cite the placement of 23,408 Cointets over of coastline. With many more still present in Belgium after D-Day, the Allies had great difficulty passing them in the last months of the war. See also Cheval de frise Czech hedgehog Dragon's teeth References External links Obstacles on the Normandy battlefields kwlinie.be – A Belgian inventarisation project of the KW line set up in 2009 Fortification (obstacles) Anti-tank obstacles Military history of Belgium during World War II Maginot Line Military equipment introduced in the 1930s Fortification (architectural elements) Area denial weapons
Cointet-element
Engineering
744
10,747,879
https://en.wikipedia.org/wiki/Lazy%20learning
(Not to be confused with the lazy learning regime, see Neural tangent kernel). In machine learning, lazy learning is a learning method in which generalization of the training data is, in theory, delayed until a query is made to the system, as opposed to eager learning, where the system tries to generalize the training data before receiving queries. The primary motivation for employing lazy learning, as in the K-nearest neighbors algorithm, used by online recommendation systems ("people who viewed/purchased/listened to this movie/item/tune also ...") is that the data set is continuously updated with new entries (e.g., new items for sale at Amazon, new movies to view at Netflix, new clips at YouTube, new music at Spotify or Pandora). Because of the continuous update, the "training data" would be rendered obsolete in a relatively short time especially in areas like books and movies, where new best-sellers or hit movies/music are published/released continuously. Therefore, one cannot really talk of a "training phase". Lazy classifiers are most useful for large, continuously changing datasets with few attributes that are commonly queried. Specifically, even if a large set of attributes exist - for example, books have a year of publication, author/s, publisher, title, edition, ISBN, selling price, etc. - recommendation queries rely on far fewer attributes - e.g., purchase or viewing co-occurrence data, and user ratings of items purchased/viewed. Advantages The main advantage gained in employing a lazy learning method is that the target function will be approximated locally, such as in the k-nearest neighbor algorithm. Because the target function is approximated locally for each query to the system, lazy learning systems can simultaneously solve multiple problems and deal successfully with changes in the problem domain. At the same time they can reuse a lot of theoretical and applied results from linear regression modelling (notably PRESS statistic) and control. It is said that the advantage of this system is achieved if the predictions using a single training set are only developed for few objects. This can be demonstrated in the case of the k-NN technique, which is instance-based and function is only estimated locally. Disadvantages Theoretical disadvantages with lazy learning include: The large space requirement to store the entire training dataset. In practice, this is not an issue because of advances in hardware and the relatively small number of attributes (e.g., as co-occurrence frequency) that need to be stored. Particularly noisy training data increases the case base unnecessarily, because no abstraction is made during the training phase. In practice, as stated earlier, lazy learning is applied to situations where any learning performed in advance soon becomes obsolete because of changes in the data. Also, for the problems for which lazy learning is optimal, "noisy" data does not really occur - the purchaser of a book has either bought another book or hasn't. Lazy learning methods are usually slower to evaluate. In practice, for very large databases with high concurrency loads, the queries are not postponed until actual query time, but recomputed in advance on a periodic basis - e.g., nightly, in anticipation of future queries, and the answers stored. 
This way, the next time new queries are asked about existing entries in the database, the answers are merely looked up rapidly instead of having to be computed on the fly, which would almost certainly bring a high-concurrency multi-user system to its knees. Larger training data also entail increased cost. Particularly, there is the fixed amount of computational cost, where a processor can only process a limited amount of training data points. There are standard techniques to improve re-computation efficiency so that a particular answer is not recomputed unless the data that impact this answer has changed (e.g., new items, new purchases, new views). In other words, the stored answers are updated incrementally. This approach, used by large e-commerce or media sites, has long been used in the Entrez portal of the National Center for Biotechnology Information (NCBI) to precompute similarities between the different items in its large datasets: biological sequences, 3-D protein structures, published-article abstracts, etc. Because "find similar" queries are asked so frequently, the NCBI uses highly parallel hardware to perform nightly recomputation. The recomputation is performed only for new entries in the datasets against each other and against existing entries: the similarity between two existing entries need not be recomputed. Examples of Lazy Learning Methods K-nearest neighbors, which is a special case of instance-based learning. Local regression. Lazy naive Bayes rules, which are extensively used in commercial spam detection software. Here, the spammers keep getting smarter and revising their spamming strategies, and therefore the learning rules must also be continually updated. References Further reading lazy: Lazy Learning for Local Regression, R package with reference manual Webb G.I. (2011) Lazy Learning. In: Sammut C., Webb G.I. (eds) Encyclopedia of Machine Learning. Springer, Boston, MA David W. Aha: Lazy learning. Kluwer Academic Publishers, Norwell 1997, ISBN 0-7923-4584-3. Bontempi, Birattari, Bersini, Hugues Bersini, Iridia: Lazy Learning for Local Modeling and Control Design. 1997. Machine learning
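As a concrete illustration of the lazy approach described above, the sketch below implements a minimal k-nearest-neighbour classifier in Python: "training" only stores the examples, and all generalization is deferred until a query arrives, so new items can be added at any time without a retraining step. The toy data and parameter values are invented for illustration and are not taken from any system mentioned in the article.

# Minimal lazy (instance-based) k-NN classifier: training is just storage;
# generalization happens only when predict() is called with a query point.
from collections import Counter
import math

class LazyKNN:
    def __init__(self, k=3):
        self.k = k
        self.points = []   # stored training examples (feature tuples)
        self.labels = []   # corresponding labels

    def add(self, point, label):
        # "Training" is only storage, so the dataset can grow continuously.
        self.points.append(point)
        self.labels.append(label)

    def predict(self, query):
        # All work happens at query time: distances to every stored example
        # are computed, the k nearest are selected, and the majority label wins.
        dists = sorted((math.dist(query, p), lbl)
                       for p, lbl in zip(self.points, self.labels))
        nearest = [lbl for _, lbl in dists[:self.k]]
        return Counter(nearest).most_common(1)[0][0]

# Toy usage: items keep arriving, and each query reflects the latest data.
knn = LazyKNN(k=3)
for point, label in [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
                     ((8, 8), "B"), ((9, 8), "B")]:
    knn.add(point, label)
print(knn.predict((1.5, 1.5)))   # -> "A"
knn.add((8, 9), "B")             # the dataset changes; no retraining step is needed

Because nothing is precomputed, the cost of predict() grows with the size of the stored dataset, which is exactly the trade-off discussed in the advantages and disadvantages above.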
Lazy learning
Engineering
1,131
3,186,107
https://en.wikipedia.org/wiki/Race%3A%20The%20Reality%20of%20Human%20Difference
Race: The Reality of Human Differences is an anthropology book, in which authors Vincent M. Sarich, Emeritus Professor of Anthropology at the University of California, Berkeley, and Frank Miele, senior editor of Skeptic Magazine, argue for the reality of race. The book was published by Basic Books in 2004. It disputes the statements of the PBS documentary Race: The Power of an Illusion aired in 2003. After arguing that human races exist, the authors put forth three different political systems that take race into account in the final chapter, "Learning to Live with Race." These are "Meritocracy in the Global Marketplace", "Affirmative Action and Race Norming", and "Resegregation and the Emergence of Ethno-States." Sarich and Miele list the advantages and disadvantages of each system and advocate Global Meritocracy as the best of the three options. The authors then discuss "the horrific prospect of ethnically targeted weapons," which they view as technically feasible but not very likely to be used. References External links Website of the video Race: The Power of an Illusion Books about race and ethnicity Biology books Race and intelligence controversy
Race: The Reality of Human Difference
Biology
239
70,195,362
https://en.wikipedia.org/wiki/Naematelia%20aurantialba
Naematelia aurantialba (synonym Tremella aurantialba) is a species of fungus producing yellow, frondose, gelatinous basidiocarps (fruit bodies) parasitic on fruit bodies of another fungus, Stereum hirsutum, on broadleaf trees. In China, where it is called jīn'ěr (金耳; literally "golden ear"), it is cultivated for both food and medical purposes. References External links Tremella aurantialba page (Chinese) Tremella aurantialba page (Chinese) Tremella aurantialba page (Chinese) Tremellomycetes Fungi of Asia Chinese edible mushrooms Fungi in cultivation Fungi described in 1990 Fungus species Taxa named by Mu Zang
Naematelia aurantialba
Biology
153
2,644,660
https://en.wikipedia.org/wiki/Curtius%20rearrangement
The Curtius rearrangement (or Curtius reaction or Curtius degradation), first defined by Theodor Curtius in 1885, is the thermal decomposition of an acyl azide to an isocyanate with loss of nitrogen gas. The isocyanate then undergoes attack by a variety of nucleophiles such as water, alcohols and amines, to yield a primary amine, carbamate or urea derivative, respectively. Several reviews have been published. Preparation of acyl azide The acyl azide is usually made from the reaction of acid chlorides or anhydrides with sodium azide or trimethylsilyl azide. Acyl azides are also obtained by treating acylhydrazines with nitrous acid. Alternatively, the acyl azide can be formed by the direct reaction of a carboxylic acid with diphenylphosphoryl azide (DPPA). Reaction mechanism It was believed that the Curtius rearrangement was a two-step process, with the loss of nitrogen gas forming an acyl nitrene, followed by migration of the R-group to give the isocyanate. However, recent research has indicated that the thermal decomposition is a concerted process, with both steps happening together, because no nitrene insertion or addition byproducts have been observed or isolated in the reaction. Thermodynamic calculations also support a concerted mechanism. The migration occurs with full retention of configuration at the R-group. The migratory aptitude of the R-group is roughly tertiary > secondary ~ aryl > primary. The isocyanate formed can then be hydrolyzed to give a primary amine, or undergo nucleophilic attack by alcohols and amines to form carbamates and urea derivatives, respectively. Modifications Research has shown that the Curtius rearrangement is catalyzed by both Brønsted and Lewis acids, via protonation of, or coordination to, the acyl oxygen atom, respectively. For example, Fahr and Neumann have shown that the use of a boron trifluoride or boron trichloride catalyst reduces the decomposition temperature needed for rearrangement by about 100 °C and increases the yield of the isocyanate significantly. Photochemical rearrangement Photochemical decomposition of the acyl azide is also possible. However, photochemical rearrangement is not concerted and instead occurs by a nitrene intermediate, formed by the cleavage of the weak N–N bond and the loss of nitrogen gas. The highly reactive nitrene can undergo a variety of nitrene reactions, such as nitrene insertion and addition, giving unwanted side products. In the example below, the nitrene intermediate inserts into one of the C–H bonds of the cyclohexane solvent to form N-cyclohexylbenzamide as a side product. Variations Darapsky degradation In one variation called the Darapsky degradation, or Darapsky synthesis, a Curtius rearrangement takes place as one of the steps in the conversion of an α-cyanoester to an amino acid. Hydrazine is used to convert the ester to an acylhydrazine, which is reacted with nitrous acid to give the acyl azide. Heating the azide in ethanol yields the ethyl carbamate via the Curtius rearrangement. Acid hydrolysis yields the amine from the carbamate and the carboxylic acid from the nitrile simultaneously, giving the product amino acid. Harger reaction The photochemical Curtius-like migration and rearrangement of a phosphinic azide forms a metaphosphonimidate in what is also known as the Harger reaction (named after Dr Martin Harger from the University of Leicester). This is followed by hydrolysis, in the example below with methanol, to give a phosphonamidate.
Unlike the Curtius rearrangement, there is a choice of R-groups on the phosphinic azide which can migrate. Harger has found that the alkyl groups migrate preferentially to aryl groups, and this preference increases in the order methyl < primary < secondary < tertiary. This is probably due to steric and conformational factors, as the bulkier the R-group, the less favorable the conformation for phenyl migration. Synthetic applications The Curtius rearrangement is tolerant of a large variety of functional groups, and has significant synthetic utility, as many different groups can be incorporated depending on the choice of nucleophile used to attack the isocyanate. For example, when carried out in the presence of tert-butanol, the reaction generates Boc-protected amines, useful intermediates in organic synthesis. Likewise, when the Curtius reaction is performed in the presence of benzyl alcohol, Cbz-protected amines are formed. Triquinacene R. B. Woodward et al. used the Curtius rearrangement as one of the steps in the total synthesis of the polyquinane triquinacene in 1964. Following hydrolysis of the ester in the intermediate (1), a Curtius rearrangement was effected to convert the carboxylic acid groups in (2) to the methyl carbamate groups (3) with 84% yield. Further steps then gave triquinacene (4). Oseltamivir In their synthesis of the antiviral drug oseltamivir, also known as Tamiflu, Ishikawa et al. used the Curtius rearrangement in one of the key steps in converting the acyl azide to the amide group in the target molecule. In this case, the isocyanate formed by the rearrangement is attacked by a carboxylic acid to form the amide. Subsequent reactions could all be carried out in the same reaction vessel to give the final product with 57% overall yield. An important benefit of the Curtius reaction highlighted by the authors was that it could be carried out at room temperature, minimizing the hazard from heating. The scheme overall was highly efficient, requiring only three “one-pot” operations to produce this important and valuable drug used for the treatment of avian influenza. Dievodiamine Dievodiamine is a natural product from the plant Euodia ruticarpa, which is widely used in traditional Chinese medicine. Unsworth et al.’s protecting group-free total synthesis of dievodiamine utilizes the Curtius rearrangement in the first step of the synthesis, catalyzed by boron trifluoride. The activated isocyanate then quickly reacts with the indole ring in an electrophilic aromatic substitution reaction to give the amide in 94% yield, and subsequent steps give dievodamine. See also Beckmann rearrangement Bergmann degradation Hofmann rearrangement Lossen rearrangement Schmidt reaction Tiemann rearrangement Neber rearrangement Wolff rearrangement References External links Rearrangement reactions Name reactions
Curtius rearrangement
Chemistry
1,470
53,676,219
https://en.wikipedia.org/wiki/Gymnopus%20fusipes
Gymnopus fusipes (formerly often called Collybia fusipes) is a parasitic species of gilled mushroom which is quite common in Europe and often grows in large clumps. It is variable but easy to recognize because the stipe soon becomes distinctively tough, bloated and ridged. Naming This species was originally described by Bulliard in his 1793 "Herbier de la France" as Agaricus fusipes at a time when all gilled mushrooms were assigned to genus Agaricus. Then in 1821 Samuel Frederick Gray published his "Natural Arrangement of British Plants" (including fungi) in which he allocated the species to the already existing genus Gymnopus. However Gray's book was not very popular and in 1872 Lucien Quélet put this mushroom in genus Collybia, giving it the name Collybia fusipes by which it was generally known for many years. In much later work culminating in 1997, Antonín and Noordeloos found that the genus Collybia as defined at that time was unsatisfactory due to being polyphyletic and they proposed a fundamental rearrangement. They resurrected the genus Gymnopus for some species including fusipes, and after subsequent DNA studies, this has been accepted by modern authorities including Species Fungorum and the Global Biodiversity Information Facility, and so its current name has reverted to Gray's combination, Gymnopus fusipes. There was also an alternative move to reclassify it under Rhodocollybia, but that has not generally been accepted. Gymnopus fusipes is the type species of the genus Gymnopus. The species name fusipes indicates that the stem is spindle-shaped (from the Latin fusus meaning "spindle" and pes meaning "foot"). The English name "Spindle Shank" has been given to this species. Earlier in 1821 Gray had already given it the English name "Spindle naked-foot", but that suggestion never gained much popularity. Description This mushroom is very variable, though it is easy to recognize on close examination, at least when not young, due to the distinctive tough stem. The following sections use the given references throughout. General The cap, growing from about 3 cm to 10 cm, is smooth and dark red-brown, or may be paler, sometimes with dark spots. There is no ring or other veil remnant. The red-brownish stem is often lighter at the top and can grow to about 15 cm long, sometimes rooting. At least when older the stem typically becomes inflated and deeply furrowed and also develops a distinctive tough consistency. Sometimes a new clump of these mushrooms grows from the stem bases left from the previous year. The usually well-spaced gills are whitish and may be flecked with spots. Microscopic characteristics The ellipsoidal spores are around 4.5-6 μm by 3–4.5 μm. Clamp connections are present in all parts of the fungus. Distribution, habitat & ecology This mushroom grows in often large clumps at the base of trees, or on roots or stumps. It is always associated with wood, which may however be buried and not immediately visible. Its main host is oak, but sometimes it is also found on beech. This mushroom is saprobic on dead wood and it is also a serious parasite. Appearing from summer to autumn, it is distributed throughout Europe, where it varies locally between quite common and quite rare. Also the fungus is spreading as a disease to North America, particularly on Northern Red Oak. Human impact Most authors do not consider this species worthwhile for the table, but although this mushroom soon becomes tough, the caps (only) are said to be edible and good when young. 
Note that with its resistant texture G. fusipes can often appear collectable after several months of growth, but due to the normal development of organisms of putrescence during that time, such specimens could cause gastro-enteritis. Any rancid smell is a sign that the mushrooms are too old. It is a serious parasite of oak trees, causing a root rot. References External links Fungal tree pathogens and diseases Omphalotaceae Edible fungi Fungi of Europe Taxa named by Jean Baptiste François Pierre Bulliard Fungi described in 1793 Fungus species
Gymnopus fusipes
Biology
878
5,642,519
https://en.wikipedia.org/wiki/Soil%20zoology
Soil zoology or pedozoology is the study of animals living fully or partially in the soil (soil fauna). The field of study was developed in the 1940s by Mercury Ghilarov in Russia. Ghilarov noted inverse relationships between size and numbers of soil organisms. He also suggested that soil included water, air and solid phases and that soil may have provided the transitional environment between aquatic and terrestrial life. The phrase was apparently first used in the English speaking world at a conference of soil zoologists presenting their research at the University of Nottingham, UK, in 1955. See also Biogeochemical cycle Soil ecology Zoology References Bibliography Safwat H. Shakir Hanna, ed, 2004, Soil Zoology For Sustainable Development In The 21st century: A Festschrift in Honour of Prof. Samir I. Ghabbour on the Occasion of His 70th Birthday, Cairo, . External links D. Keith McE. Kevan, Ethnoentomologist, Cultural Entomology Digest 3 Soil biology Edaphology Soil science
Soil zoology
Biology
211
50,197,812
https://en.wikipedia.org/wiki/Thomas%20Pearson%20Moody
Thomas Pearson Moody (14 April 1841 – 14 November 1917) was a mining engineer in Australia and New Zealand. Early life Moody was born in Killingworth, Westmoor, Newcastle-upon-Tyne, the son of John Moody, and was educated at Swansea. His father was a colliery manager at Cyfarthfa, Merthyr Tydfil, where Moody worked early in his career. His brother, William Moody, was also in coal mining, in northeastern Pennsylvania. Career Moody left Wales in 1863, shortly after the deadly Gethin Pit Disaster; he had worked at the Gethin Pit, and his father was charged with manslaughter in the inquest that followed. He became general manager, clerk, and surveyor of the colliery at Waratah, New South Wales, Australia; he left that position in 1869. Next he was superintendent of a sheep station at Darling Downs, Queensland. In 1875, he was named manager and engineer of the Australasian Coal Company. He was also the first chairman of the New Castle Australasian Steamship Company. Moody moved to New Zealand in 1878 to run the Bay of Islands Coal Company, which helped to open the Hikurangi coal fields. He retired from his work at Hikurangi in 1908. He was a member of the British Institute of Mining Engineers, the South Wales Institute of Mining Engineers, the North of England Institute of Mining and Mechanical Engineers, and the British Geographical Society, among many other professional associations. Personal life Moody married Minnie Snowdon. They had six daughters and three sons. One son, Robert H. E. Moody, died in 1916, as a private in New Zealand's army in World War I. Of his Welsh nationality, Moody declared, "By birth I am a Northumbrian, by sympathy a Welshman. I am now an Australasian and I suppose a cosmopolite....Yet I languish for my old home, 'Yr Hen Wlad.'" Moody died in late 1917; his gravesite is at Kamo Public Cemetery in New Zealand.
Thomas Pearson Moody
Engineering
450
47,013,574
https://en.wikipedia.org/wiki/Mupapillomavirus
Mupapillomavirus is a genus of viruses in the family Papillomaviridae. Humans serve as natural hosts. There are three species in this genus. Diseases associated with this genus include palmoplantar warts. Taxonomy The following three species are assigned to the genus: Mupapillomavirus 1 Mupapillomavirus 2 Mupapillomavirus 3 Structure Viruses in Mupapillomavirus are non-enveloped, with icosahedral geometry and T=7 symmetry. The diameter is around 52–55 nm. Genomes are circular, around 8 kb in length. Life cycle Viral replication is nuclear. Entry into the host cell is achieved by attachment of the viral proteins to host receptors, which mediates endocytosis. Replication follows the dsDNA bidirectional replication model. Transcription is DNA-templated, with some alternative splicing. The virus exits the host cell by nuclear envelope breakdown. Humans serve as the natural host, and transmission is by contact. References External links ICTV Report Papillomaviridae Viralzone: Mupapillomavirus Papillomavirus Virus genera
Mupapillomavirus
Biology
245
362,649
https://en.wikipedia.org/wiki/Stanis%C5%82aw%20Saks
Stanisław Saks (30 December 1897 – 23 November 1942) was a Polish mathematician and university tutor, a member of the Lwów School of Mathematics, known primarily for his membership in the Scottish Café circle, an extensive monograph on the theory of integrals, his works on measure theory and the Vitali–Hahn–Saks theorem. Life and work Stanisław Saks was born on 30 December 1897 in Kalisz, Congress Poland, to an assimilated Polish-Jewish family. In 1915 he graduated from a local gymnasium and joined the newly recreated Warsaw University. In 1922 he received a doctorate of his alma mater with a prestigious distinction maxima cum laude. Soon afterwards he also passed his habilitation and received the Rockefeller fellowship, which allowed him to travel to the United States. Around that time he started publishing articles in various mathematical journals, mostly the Fundamenta Mathematicae, but also in the Transactions of the American Mathematical Society. He participated in the Silesian Uprisings and was awarded the Cross of the Valorous and the Medal of Independence for his bravery. Following the end of the uprising he returned to Warsaw and resumed his academic career. For most of it he studied the theories of functions and functionals in particular. In 1930 he published his most notable book, the Zarys teorii całki (Sketch on the Theory of the Integral), which later got expanded and translated into several languages, including English (Theory of the Integral), French (Théorie de l'Intégrale) and Russian (Teoriya Integrala). Despite his successes, Saks was never awarded the title of professor and remained an ordinary tutor, initially at his alma mater and the Warsaw University of Technology, and later at the Lwów University and Wilno University. He was also an active socialist and a journalist at the Robotnik weekly (1919–1926) and later a collaborator of the Association of Socialist Youth. Saks wrote a mathematics book with Antoni Zygmund, Analytic Functions, in 1933. It was translated into English in 1952 by E. J. Scott. In the preface to the English edition, Zygmund writes: Stanislaw Saks was a man of moral as well as physical courage, of rare intelligence and wit. To his colleagues and pupils he was an inspiration not only as a mathematician but as a human being. In the period between the two world wars he exerted great influence upon a whole generation of Polish mathematicians in Warsaw and Lwów. In November 1942, at the age of 45, Saks died in a Warsaw prison, victim of a policy of extermination. After the outbreak of World War II and the occupation of Poland by Germany, Saks joined the Polish underground. Arrested in November 1942, he was executed on 23 November 1942 by the German Gestapo in Warsaw. Publications . English translation by Laurence Chisholm Young, with two additional notes by Stefan Banach. See also Lwów School of Mathematics Notes References Functional analysts Measure theorists 1897 births 1942 deaths Lwów School of Mathematics Mathematical analysts University of Warsaw alumni People from Kalisz Academic staff of Vilnius University Polish Jews who died in the Holocaust Warsaw School of Mathematics
Stanisław Saks
Mathematics
646
44,794,995
https://en.wikipedia.org/wiki/Annals%20of%20Nuclear%20Energy
Annals of Nuclear Energy is a monthly peer-reviewed scientific journal covering research on nuclear energy and nuclear science. It was established in 1975 and is published by Elsevier. The current editors-in-chief are Lynn E. Weaver (Florida Institute of Technology), S. Mostafa Ghiaasiaan (Georgia Institute of Technology) and Imre Pázsit (Chalmers University of Technology). Abstracting and indexing The journal is abstracted and indexed in: Chemical Abstracts Service Index Medicus/MEDLINE/PubMed Science Citation Index Expanded Current Contents/Engineering, Computing & Technology Scopus According to the Journal Citation Reports, the journal has a 2013 impact factor of 1.020. Former titles history Annals of Nuclear Energy is derived from the following former titles: Journal of Nuclear Energy (1954-1959) Journal of Nuclear Energy. Part A. Reactor Science (1959-1961) Journal of Nuclear Energy. Part B. Reactor Technology (1959) Journal of Nuclear Energy. Parts A/B. Reactor Science and Technology (1961-1966) Journal of Nuclear Energy (1967-1973) Annals of Nuclear Science and Engineering (1974) Annals of Nuclear Energy (1975–present) Notes References External links Energy and fuel journals Elsevier academic journals English-language journals Monthly journals Academic journals established in 1975
Annals of Nuclear Energy
Environmental_science
262
63,663,102
https://en.wikipedia.org/wiki/Insertion%20symbol
The term insertion symbol has more than one meaning. When using a cursor (user interface), it is (usually) a vertical bar indicating where text being typed will be inserted. A caret (proofreading) is a V-shaped grapheme, usually inverted and sometimes extended, used to indicate that additional material needs to be inserted at this point in the text. See also Caret (computing) Typographical symbols
Insertion symbol
Mathematics
91
14,739
https://en.wikipedia.org/wiki/IEEE%20802.11
IEEE 802.11 is part of the IEEE 802 set of local area network (LAN) technical standards, and specifies the set of medium access control (MAC) and physical layer (PHY) protocols for implementing wireless local area network (WLAN) computer communication. The standard and amendments provide the basis for wireless network products using the Wi-Fi brand and are the world's most widely used wireless computer networking standards. IEEE 802.11 is used in most home and office networks to allow laptops, printers, smartphones, and other devices to communicate with each other and access the Internet without connecting wires. IEEE 802.11 is also a basis for vehicle-based communication networks with IEEE 802.11p. The standards are created and maintained by the Institute of Electrical and Electronics Engineers (IEEE) LAN/MAN Standards Committee (IEEE 802). The base version of the standard was released in 1997 and has had subsequent amendments. While each amendment is officially revoked when it is incorporated in the latest version of the standard, the corporate world tends to market to the revisions because they concisely denote the capabilities of their products. As a result, in the marketplace, each revision tends to become its own standard. 802.11x is a shorthand for "any version of 802.11", to avoid confusion with "802.11" used specifically for the original 1997 version. IEEE 802.11 uses various frequencies including, but not limited to, 2.4 GHz, 5 GHz, 6 GHz, and 60 GHz frequency bands. Although IEEE 802.11 specifications list channels that might be used, the allowed radio frequency spectrum availability varies significantly by regulatory domain. The protocols are typically used in conjunction with IEEE 802.2, and are designed to interwork seamlessly with Ethernet, and are very often used to carry Internet Protocol traffic. General description The 802.11 family consists of a series of half-duplex over-the-air modulation techniques that use the same basic protocol. The 802.11 protocol family employs carrier-sense multiple access with collision avoidance (CSMA/CA) whereby equipment listens to a channel for other users (including non 802.11 users) before transmitting each frame (some use the term "packet", which may be ambiguous: "frame" is more technically correct). 802.11-1997 was the first wireless networking standard in the family, but 802.11b was the first widely accepted one, followed by 802.11a, 802.11g, 802.11n, 802.11ac, and 802.11ax. Other standards in the family (c–f, h, j) are service amendments that are used to extend the current scope of the existing standard, which amendments may also include corrections to a previous specification. 802.11b and 802.11g use the 2.4-GHz ISM band, operating in the United States under Part 15 of the U.S. Federal Communications Commission Rules and Regulations. 802.11n can also use that 2.4-GHz band. Because of this choice of frequency band, 802.11b/g/n equipment may occasionally suffer interference in the 2.4-GHz band from microwave ovens, cordless telephones, and Bluetooth devices. 802.11b and 802.11g control their interference and susceptibility to interference by using direct-sequence spread spectrum (DSSS) and orthogonal frequency-division multiplexing (OFDM) signaling methods, respectively. 802.11a uses the 5 GHz U-NII band which, for much of the world, offers at least 23 non-overlapping, 20-MHz-wide channels. 
This is an advantage over the 2.4-GHz, ISM-frequency band, which offers only three non-overlapping, 20-MHz-wide channels where other adjacent channels overlap (see: list of WLAN channels). Better or worse performance with higher or lower frequencies (channels) may be realized, depending on the environment. 802.11n and 802.11ax can use either the 2.4 GHz or 5 GHz band; 802.11ac uses only the 5 GHz band. The segment of the radio frequency spectrum used by 802.11 varies between countries. In the US, 802.11a and 802.11g devices may be operated without a license, as allowed in Part 15 of the FCC Rules and Regulations. Frequencies used by channels one through six of 802.11b and 802.11g fall within the 2.4 GHz amateur radio band. Licensed amateur radio operators may operate 802.11b/g devices under Part 97 of the FCC Rules and Regulations, allowing increased power output but not commercial content or encryption. Generations In 2018, the Wi-Fi Alliance began using a consumer-friendly generation numbering scheme for the publicly used 802.11 protocols. Wi-Fi generations 1–8 use the 802.11b, 802.11a, 802.11g, 802.11n, 802.11ac, 802.11ax, 802.11be and 802.11bn protocols, in that order. History 802.11 technology has its origins in a 1985 ruling by the U.S. Federal Communications Commission that released the ISM band for unlicensed use. In 1991 NCR Corporation/AT&T (now Nokia Labs and LSI Corporation) invented a precursor to 802.11 in Nieuwegein, the Netherlands. The inventors initially intended to use the technology for cashier systems. The first wireless products were brought to the market under the name WaveLAN with raw data rates of 1 Mbit/s and 2 Mbit/s. Vic Hayes, who held the chair of IEEE 802.11 for 10 years, and has been called the "father of Wi-Fi", was involved in designing the initial 802.11b and 802.11a standards within the IEEE. He, along with Bell Labs Engineer Bruce Tuch, approached IEEE to create a standard. In 1999, the Wi-Fi Alliance was formed as a trade association to hold the Wi-Fi trademark under which most products are sold. The major commercial breakthrough came with Apple's adoption of Wi-Fi for their iBook series of laptops in 1999. It was the first mass consumer product to offer Wi-Fi network connectivity, which was then branded by Apple as AirPort. One year later IBM followed with its ThinkPad 1300 series in 2000. Protocol 802.11-1997 (802.11 legacy) The original version of the standard IEEE 802.11 was released in 1997 and clarified in 1999, but is now obsolete. It specified two net bit rates of 1 or 2 megabits per second (Mbit/s), plus forward error correction code. It specified three alternative physical layer technologies: diffuse infrared operating at 1 Mbit/s; frequency-hopping spread spectrum operating at 1 Mbit/s or 2 Mbit/s; and direct-sequence spread spectrum operating at 1 Mbit/s or 2 Mbit/s. The latter two radio technologies used microwave transmission over the Industrial Scientific Medical frequency band at 2.4 GHz. Some earlier WLAN technologies used lower frequencies, such as the U.S. 900 MHz ISM band. Legacy 802.11 with direct-sequence spread spectrum was rapidly supplanted and popularized by 802.11b. 802.11a (OFDM waveform) 802.11a, published in 1999, uses the same data link layer protocol and frame format as the original standard, but an OFDM based air interface (physical layer) was added. It operates in the 5 GHz band with a maximum net data rate of 54 Mbit/s, plus error correction code, which yields realistic net achievable throughput in the mid-20 Mbit/s. 
It has seen widespread worldwide implementation, particularly within the corporate workspace. Since the 2.4 GHz band is heavily used to the point of being crowded, using the relatively unused 5 GHz band gives 802.11a a significant advantage. However, this high carrier frequency also brings a disadvantage: the effective overall range of 802.11a is less than that of 802.11b/g. In theory, 802.11a signals are absorbed more readily by walls and other solid objects in their path due to their smaller wavelength, and, as a result, cannot penetrate as far as those of 802.11b. In practice, 802.11b typically has a higher range at low speeds (802.11b will reduce speed to 5.5 Mbit/s or even 1 Mbit/s at low signal strengths). 802.11a also suffers from interference, but locally there may be fewer signals to interfere with, resulting in less interference and better throughput. 802.11b The 802.11b standard has a maximum raw data rate of 11 Mbit/s (Megabits per second) and uses the same media access method defined in the original standard. 802.11b products appeared on the market in early 2000, since 802.11b is a direct extension of the modulation technique defined in the original standard. The dramatic increase in throughput of 802.11b (compared to the original standard) along with simultaneous substantial price reductions led to the rapid acceptance of 802.11b as the definitive wireless LAN technology. Devices using 802.11b experience interference from other products operating in the 2.4 GHz band. Devices operating in the 2.4 GHz range include microwave ovens, Bluetooth devices, baby monitors, cordless telephones, and some amateur radio equipment. As unlicensed intentional radiators in this ISM band, they must not interfere with and must tolerate interference from primary or secondary allocations (users) of this band, such as amateur radio. 802.11g In June 2003, a third modulation standard was ratified: 802.11g. This works in the 2.4 GHz band (like 802.11b), but uses the same OFDM based transmission scheme as 802.11a. It operates at a maximum physical layer bit rate of 54 Mbit/s exclusive of forward error correction codes, or about 22 Mbit/s average throughput. 802.11g hardware is fully backward compatible with 802.11b hardware, and therefore is encumbered with legacy issues that reduce throughput by ~21% when compared to 802.11a. The then-proposed 802.11g standard was rapidly adopted in the market starting in January 2003, well before ratification, due to the desire for higher data rates as well as reductions in manufacturing costs. By summer 2003, most dual-band 802.11a/b products became dual-band/tri-mode, supporting a and b/g in a single mobile adapter card or access point. Details of making b and g work well together occupied much of the lingering technical process; in an 802.11g network, however, the activity of an 802.11b participant will reduce the data rate of the overall 802.11g network. Like 802.11b, 802.11g devices also suffer interference from other products operating in the 2.4 GHz band, for example, wireless keyboards. 802.11-2007 In 2003, task group TGma was authorized to "roll up" many of the amendments to the 1999 version of the 802.11 standard. REVma or 802.11ma, as it was called, created a single document that merged 8 amendments (802.11a, b, d, e, g, h, i, j) with the base standard. Upon approval on 8 March 2007, 802.11REVma was renamed to the then-current base standard IEEE 802.11-2007. 
802.11n 802.11n is an amendment that improves upon the previous 802.11 standards; its first draft of certification was published in 2006. The 802.11n standard was retroactively labelled as Wi-Fi 4 by the Wi-Fi Alliance. The standard added support for multiple-input multiple-output antennas (MIMO). 802.11n operates on both the 2.4 GHz and the 5 GHz bands. Support for 5 GHz bands is optional. Its net data rate ranges from 54 Mbit/s to 600 Mbit/s. The IEEE has approved the amendment, and it was published in October 2009. Prior to the final ratification, enterprises were already migrating to 802.11n networks based on the Wi-Fi Alliance's certification of products conforming to a 2007 draft of the 802.11n proposal. Early Intel WiFi cards were not compatible with the final standard. Many rival access points and cards also did not support 5 GHz at all. 802.11-2012 In May 2007, task group TGmb was authorized to "roll up" many of the amendments to the 2007 version of the 802.11 standard. REVmb or 802.11mb, as it was called, created a single document that merged ten amendments (802.11k, r, y, n, w, p, z, v, u, s) with the 2007 base standard. In addition much cleanup was done, including a reordering of many of the clauses. Upon publication on 29 March 2012, the new standard was referred to as IEEE 802.11-2012. 802.11ac IEEE 802.11ac-2013 is an amendment to IEEE 802.11, published in December 2013, that builds on 802.11n. The 802.11ac standard was retroactively labelled as Wi-Fi 5 by the Wi-Fi Alliance. Changes compared to 802.11n include wider channels (80 or 160 MHz versus 40 MHz) in the 5 GHz band, more spatial streams (up to eight versus four), higher-order modulation (up to 256-QAM vs. 64-QAM), and the addition of Multi-user MIMO (MU-MIMO). The Wi-Fi Alliance separated the introduction of ac wireless products into two phases ("waves"), named "Wave 1" and "Wave 2". From mid-2013, the alliance started certifying Wave 1 802.11ac products shipped by manufacturers, based on the IEEE 802.11ac Draft 3.0 (the IEEE standard was not finalized until later that year). In 2016 Wi-Fi Alliance introduced the Wave 2 certification, to provide higher bandwidth and capacity than Wave 1 products. Wave 2 products include additional features like MU-MIMO, 160 MHz channel width support, support for more 5 GHz channels, and four spatial streams (with four antennas; compared to three in Wave 1 and 802.11n, and eight in IEEE's 802.11ax specification). 802.11ad IEEE 802.11ad is an amendment that defines a new physical layer for 802.11 networks to operate in the 60 GHz millimeter wave spectrum. This frequency band has significantly different propagation characteristics than the 2.4 GHz and 5 GHz bands where Wi-Fi networks operate. Products implementing the 802.11ad standard are sold under the WiGig brand name, with a certification program developed by the Wi-Fi Alliance. The peak transmission rate of 802.11ad is 7 Gbit/s. IEEE 802.11ad is a protocol used for very high data rates (about 8 Gbit/s) and for short range communication (about 1–10 meters). TP-Link announced the world's first 802.11ad router in January 2016. The WiGig standard as of 2021 has been published after being announced in 2009 and added to the IEEE 802.11 family in December 2012. 802.11af IEEE 802.11af, also referred to as "White-Fi" and "Super Wi-Fi", is an amendment, approved in February 2014, that allows WLAN operation in TV white space spectrum in the VHF and UHF bands between 54 and 790 MHz. 
It uses cognitive radio technology to transmit on unused TV channels, with the standard taking measures to limit interference for primary users, such as analog TV, digital TV, and wireless microphones. Access points and stations determine their position using a satellite positioning system such as GPS, and use the Internet to query a geolocation database (GDB) provided by a regional regulatory agency to discover what frequency channels are available for use at a given time and position. The physical layer uses OFDM and is based on 802.11ac. The propagation path loss as well as the attenuation by materials such as brick and concrete is lower in the UHF and VHF bands than in the 2.4 GHz and 5 GHz bands, which increases the possible range. The frequency channels are 6 to 8 MHz wide, depending on the regulatory domain. Up to four channels may be bonded in either one or two contiguous blocks. MIMO operation is possible with up to four streams used for either space–time block code (STBC) or multi-user (MU) operation. The achievable data rate per spatial stream is 26.7 Mbit/s for 6 and 7 MHz channels, and 35.6 Mbit/s for 8 MHz channels. With four spatial streams and four bonded channels, the maximum data rate is 426.7 Mbit/s for 6 and 7 MHz channels and 568.9 Mbit/s for 8 MHz channels. 802.11-2016 IEEE 802.11-2016, which was known as IEEE 802.11 REVmc, is a revision based on IEEE 802.11-2012, incorporating 5 amendments (11ae, 11aa, 11ad, 11ac, 11af). In addition, existing MAC and PHY functions have been enhanced and obsolete features were removed or marked for removal. Some clauses and annexes have been renumbered. 802.11ah IEEE 802.11ah, published in 2017, defines a WLAN system operating at sub-1 GHz license-exempt bands. Due to the favorable propagation characteristics of the low-frequency spectra, 802.11ah can provide improved transmission range compared with the conventional 802.11 WLANs operating in the 2.4 GHz and 5 GHz bands. 802.11ah can be used for various purposes including large-scale sensor networks, extended-range hotspots, and outdoor Wi-Fi for cellular WAN carrier traffic offloading, although the available bandwidth is relatively narrow. The protocol is intended to have power consumption competitive with low-power Bluetooth, at a much wider range. 802.11ai IEEE 802.11ai is an amendment to the 802.11 standard that added new mechanisms for a faster initial link setup time. 802.11aj IEEE 802.11aj is a derivative of 802.11ad for use in the 45 GHz unlicensed spectrum available in some regions of the world (specifically China); it also provides additional capabilities for use in the 60 GHz band. It is alternatively known as China Millimeter Wave (CMMW). 802.11aq IEEE 802.11aq is an amendment to the 802.11 standard that will enable pre-association discovery of services. This extends some of the mechanisms in 802.11u that enabled device discovery to further discover the services running on a device, or provided by a network. 802.11-2020 IEEE 802.11-2020, which was known as IEEE 802.11 REVmd, is a revision based on IEEE 802.11-2016 incorporating 5 amendments (11ai, 11ah, 11aj, 11ak, 11aq). In addition, existing MAC and PHY functions have been enhanced and obsolete features were removed or marked for removal. Some clauses and annexes have been added. 802.11ax IEEE 802.11ax is the successor to 802.11ac, marketed as Wi-Fi 6 (2.4 GHz and 5 GHz) and Wi-Fi 6E (6 GHz) by the Wi-Fi Alliance. It is also known as High Efficiency Wi-Fi, for the overall improvements to clients in dense environments. 
For an individual client, the maximum improvement in data rate (PHY speed) against the predecessor (802.11ac) is only 39% (for comparison, this improvement was nearly 500% for the predecessors). Yet, even with this comparatively minor 39% figure, the goal was to provide 4 times the throughput-per-area of 802.11ac (hence High Efficiency). The motivation behind this goal was the deployment of WLAN in dense environments such as corporate offices, shopping malls and dense residential apartments. This is achieved by means of a technique called OFDMA, which is basically multiplexing in the frequency domain (as opposed to spatial multiplexing, as in 802.11ac). This is equivalent to cellular technology applied to Wi-Fi. The IEEE 802.11ax-2021 standard was approved on February 9, 2021. 802.11ay IEEE 802.11ay is a standard that is being developed, also called EDMG: Enhanced Directional MultiGigabit PHY. It is an amendment that defines a new physical layer for 802.11 networks to operate in the 60 GHz millimeter wave spectrum. It will be an extension of the existing 11ad, aiming to extend the throughput, range, and use-cases. The main use-cases include indoor operation and short-range communications due to atmospheric oxygen absorption and inability to penetrate walls. The peak transmission rate of 802.11ay is 40 Gbit/s. The main extensions include: channel bonding (2, 3 and 4), MIMO (up to 4 streams) and higher modulation schemes. The expected range is 300–500 m. 802.11ba IEEE 802.11ba Wake-up Radio (WUR) Operation is an amendment to the IEEE 802.11 standard that enables energy-efficient operation for data reception without increasing latency. The target active power consumption to receive a WUR packet is less than 1 milliwatt, with supported data rates of 62.5 kbit/s and 250 kbit/s. The WUR PHY uses MC-OOK (multicarrier OOK) to achieve extremely low power consumption. 802.11bb IEEE 802.11bb is a networking protocol standard in the IEEE 802.11 set of protocols that uses infrared light for communications. 802.11be IEEE 802.11be Extremely High Throughput (EHT) is the potential next amendment to the 802.11 IEEE standard, and will likely be designated as Wi-Fi 7. It will build upon 802.11ax, focusing on WLAN indoor and outdoor operation with stationary and pedestrian speeds in the 2.4 GHz, 5 GHz, and 6 GHz frequency bands. Common misunderstandings about achievable throughput Across all variations of 802.11, maximum achievable throughputs are given either based on measurements under ideal conditions or in the layer-2 data rates. However, this does not apply to typical deployments in which data is being transferred between two endpoints, of which at least one is typically connected to a wired infrastructure and the other endpoint is connected to an infrastructure via a wireless link. This means that, typically, data frames pass over an 802.11 (WLAN) medium and are converted to 802.3 (Ethernet) or vice versa. Due to the difference in the frame (header) lengths of these two media, the application's packet size determines the speed of the data transfer. This means applications that use small packets (e.g., VoIP) create dataflows with high-overhead traffic (i.e., a low goodput). Other factors that contribute to the overall application data rate are the speed with which the application transmits the packets (i.e., the data rate) and, of course, the energy with which the wireless signal is received. The latter is determined by distance and by the configured output power of the communicating devices. 
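To illustrate the header-overhead effect described above, the following sketch compares the fraction of transmitted bytes that carry application payload for a small and a large packet. The header sizes used (an assumed 28-byte 802.11 data MAC header plus an 8-byte LLC/SNAP encapsulation versus a 14-byte Ethernet header) are simplified assumptions for illustration only; real overhead also includes PHY preambles, acknowledgement frames, and inter-frame spacing, so actual goodput is lower still.

```python
# Rough illustration of the header-overhead effect: the fraction of bytes on
# the air that carry application payload, for a small and a large packet.
# Header sizes are simplified assumptions; PHY preambles, ACK frames and
# inter-frame spacing add further overhead in a real 802.11 exchange.

WLAN_OVERHEAD_BYTES = 28 + 8   # assumed 802.11 data MAC header + LLC/SNAP encapsulation
ETHERNET_OVERHEAD_BYTES = 14   # assumed Ethernet II header

def payload_efficiency(payload_bytes: int, overhead_bytes: int) -> float:
    """Fraction of transmitted bytes that are application payload."""
    return payload_bytes / (payload_bytes + overhead_bytes)

for payload in (160, 1460):    # e.g. a small VoIP packet vs. a near-full TCP segment
    wlan = payload_efficiency(payload, WLAN_OVERHEAD_BYTES)
    eth = payload_efficiency(payload, ETHERNET_OVERHEAD_BYTES)
    print(f"{payload:5d}-byte payload: ~{wlan:.0%} efficient over 802.11, ~{eth:.0%} over Ethernet")
```

With these assumed header sizes, a small packet spends roughly a fifth of its bytes on link-layer headers before any PHY-level overhead is counted, which is why small-packet applications such as VoIP see a much lower goodput than the nominal data rate suggests.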
The same references apply to the attached graphs that show measurements of UDP throughput. Each represents an average (UDP) throughput (please note that the error bars are there but barely visible due to the small variation) of 25 measurements. Each is with a specific packet size (small or large) and with a specific data rate (10 kbit/s – 100 Mbit/s). Markers for traffic profiles of common applications are included as well. These figures assume there are no packet errors, which, if occurring, will lower the transmission rate further. Channels and frequencies 802.11b, 802.11g, and 802.11n-2.4 utilize the 2.4 GHz spectrum, one of the ISM bands. 802.11a, 802.11n, and 802.11ac use the more heavily regulated 5 GHz band. These are commonly referred to as the "2.4 GHz and 5 GHz bands" in most sales literature. Each spectrum is sub-divided into channels with a center frequency and bandwidth, analogous to how radio and TV broadcast bands are sub-divided. The 2.4 GHz band is divided into 14 channels spaced 5 MHz apart, beginning with channel 1, which is centered on 2.412 GHz. The latter channels have additional restrictions or are unavailable for use in some regulatory domains. The channel numbering of the 5 GHz spectrum is less intuitive due to the differences in regulations between countries. These are discussed in greater detail on the list of WLAN channels. Channel spacing within the 2.4 GHz band In addition to specifying the channel center frequency, 802.11 also specifies (in Clause 17) a spectral mask defining the permitted power distribution across each channel. The mask requires the signal to be attenuated a minimum of 20 dB from its peak amplitude at ±11 MHz from the center frequency, the point at which a channel is effectively 22 MHz wide. One consequence is that stations can use only every fourth or fifth channel without overlap. Availability of channels is regulated by country, constrained in part by how each country allocates radio spectrum to various services. At one extreme, Japan permits the use of all 14 channels for 802.11b, and channels 1 through 13 for 802.11g/n-2.4. Other countries such as Spain initially allowed only channels 10 and 11, and France allowed only 10, 11, 12, and 13; however, Europe now allows channels 1 through 13. North America and some Central and South American countries allow only channels 1 through 11. Since the spectral mask defines only power output restrictions up to ±11 MHz from the center frequency to be attenuated by −50 dBr, it is often assumed that the energy of the channel extends no further than these limits. It is more correct to say that the overlapping signal on any channel should be sufficiently attenuated to interfere only minimally with a transmitter on any other channel, given the separation between channels. Due to the near–far problem a transmitter can impact (desensitize) a receiver on a "non-overlapping" channel, but only if it is close to the victim receiver (within a meter) or operating above allowed power levels. Conversely, a sufficiently distant transmitter on an overlapping channel can have little to no significant effect. Confusion often arises over the amount of channel separation required between transmitting devices. 802.11b was based on direct-sequence spread spectrum (DSSS) modulation and utilized a channel bandwidth of 22 MHz, resulting in three "non-overlapping" channels (1, 6, and 11). 802.11g was based on OFDM modulation and utilized a channel bandwidth of 20 MHz. This occasionally leads to the belief that four "non-overlapping" channels (1, 5, 9, and 13) exist under 802.11g. 
However, this is not the case as per 17.4.6.3 Channel Numbering of operating channels of the IEEE Std 802.11 (2012), which states, "In a multiple cell network topology, overlapping and/or adjacent cells using different channels can operate simultaneously without interference if the distance between the center frequencies is at least 25 MHz." and section 18.3.9.3 and Figure 18-13. The technical overlap of the channels does not, however, mean that overlapping channel plans should not be used. The amount of inter-channel interference seen on a configuration using channels 1, 5, 9, and 13 (which is permitted in Europe, but not in North America) is barely different from a three-channel configuration, but with an entire extra channel. However, overlap between channels with more narrow spacing (e.g. 1, 4, 7, 11 in North America) may cause unacceptable degradation of signal quality and throughput, particularly when users transmit near the boundaries of AP cells. Regulatory domains and legal compliance IEEE uses the phrase regdomain to refer to a legal regulatory region. Different countries define different levels of allowable transmitter power, time that a channel can be occupied, and different available channels. Domain codes are specified for the United States, Canada, ETSI (Europe), Spain, France, Japan, and China. Most Wi-Fi certified devices default to regdomain 0, which means least common denominator settings, i.e., the device will not transmit at a power above the allowable power in any nation, nor will it use frequencies that are not permitted in any nation. The regdomain setting is often made difficult or impossible to change so that the end-users do not conflict with local regulatory agencies such as the United States' Federal Communications Commission. Layer 2 – Datagrams The datagrams are called frames. Current 802.11 standards specify frame types for use in the transmission of data as well as management and control of wireless links. Frames are divided into very specific and standardized sections. Each frame consists of a MAC header, payload, and frame check sequence (FCS). Some frames do not have payloads. The first two bytes of the MAC header form a frame control field specifying the form and function of the frame. This frame control field is subdivided into the following sub-fields: Protocol Version: Two bits representing the protocol version. The currently used protocol version is zero. Other values are reserved for future use. Type: Two bits identifying the type of WLAN frame. Control, Data, and Management are various frame types defined in IEEE 802.11. Subtype: Four bits providing additional discrimination between frames. Type and Subtype are used together to identify the exact frame. ToDS and FromDS: Each is one bit in size. They indicate whether a data frame is headed for the distribution system or coming out of it. Control and management frames set these values to zero. Data frames that pass through an access point have one or both of these bits set; frames exchanged directly between stations set both to zero. ToDS = 0 and FromDS = 0 Communication within a basic service set or an independent basic service set (IBSS) network. ToDS = 0 and FromDS = 1 A frame leaving the distribution system for a station (sent by an access point). ToDS = 1 and FromDS = 0 A frame sent by a station and directed to an access point, to be forwarded into the distribution system. ToDS = 1 and FromDS = 1 The only kind of data frame that uses all four MAC address fields; it is used when a frame is relayed between access points over the wireless medium (a wireless distribution system). Address 1: address of the access point through which the frame exits the distribution system. 
Address 2: access point entrance to the distribution system (AP to which the source station is connected). Address 3: final station address. Address 4: address of the source station. More Fragments: The More Fragments bit is set when a packet is divided into multiple frames for transmission. Every frame except the last frame of a packet will have this bit set. Retry: Sometimes frames require retransmission, and for this, there is a Retry bit that is set to one when a frame is resent. This aids in the elimination of duplicate frames. Power Management: This bit indicates the power management state of the sender after the completion of a frame exchange. Access points are required to manage the connection and will never set the power-saver bit. More Data: The More Data bit is used to buffer frames received in a distributed system. The access point uses this bit to facilitate stations in power-saver mode. It indicates that at least one frame is available and addresses all stations connected. Protected Frame: The Protected Frame bit is set to the value of one if the frame body is encrypted by a protection mechanism such as Wired Equivalent Privacy (WEP), Wi-Fi Protected Access (WPA), or Wi-Fi Protected Access II (WPA2). Order: This bit is set only when the "strict ordering" delivery method is employed. Frames and fragments are not always sent in order as it causes a transmission performance penalty. The next two bytes are reserved for the Duration ID field, indicating how long the field's transmission will take so other devices know when the channel will be available again. This field can take one of three forms: Duration, Contention-Free Period (CFP), and Association ID (AID). An 802.11 frame can have up to four address fields. Each field can carry a MAC address. Address 1 is the receiver, Address 2 is the transmitter, Address 3 is used for filtering purposes by the receiver. Address 4 is only present in data frames transmitted between access points in an Extended Service Set or between intermediate nodes in a mesh network. The remaining fields of the header are: The Sequence Control field is a two-byte section used to identify message order and eliminate duplicate frames. The first 4 bits are used for the fragmentation number, and the last 12 bits are the sequence number. An optional two-byte Quality of Service control field, present in QoS Data frames; it was added with 802.11e. The payload or frame body field is variable in size, from 0 to 2304 bytes plus any overhead from security encapsulation, and contains information from higher layers. The Frame Check Sequence (FCS) is the last four bytes in the standard 802.11 frame. Often referred to as the Cyclic Redundancy Check (CRC), it allows for integrity checks of retrieved frames. As frames are about to be sent, the FCS is calculated and appended. When a station receives a frame, it can calculate the FCS of the frame and compare it to the one received. If they match, it is assumed that the frame was not distorted during transmission. Management frames Management frames are not always authenticated, and allow for the maintenance, or discontinuance, of communication. Some common 802.11 subtypes include: Authentication frame: 802.11 authentication begins with the wireless network interface controller (WNIC) sending an authentication frame to the access point containing its identity. 
When open system authentication is being used, the WNIC sends only a single authentication frame, and the access point responds with an authentication frame of its own indicating acceptance or rejection. When shared key authentication is being used, the WNIC sends an initial authentication request, and the access point responds with an authentication frame containing challenge text. The WNIC then sends an authentication frame containing the encrypted version of the challenge text to the access point. The access point confirms the text was encrypted with the correct key by decrypting it with its own key. The result of this process determines the WNIC's authentication status. Association request frame: Sent from a station, it enables the access point to allocate resources and synchronize. The frame carries information about the WNIC, including supported data rates and the SSID of the network the station wishes to associate with. If the request is accepted, the access point reserves memory and establishes an association ID for the WNIC. Association response frame: Sent from an access point to a station containing the acceptance or rejection to an association request. If it is an acceptance, the frame will contain information such as an association ID and supported data rates. Beacon frame: Sent periodically from an access point to announce its presence and provide the SSID and other parameters for WNICs within range. Deauthentication frame: Sent from a station wishing to terminate the connection with another station. Disassociation frame: Sent from a station wishing to terminate the connection. It is an elegant way to allow the access point to relinquish memory allocation and remove the WNIC from the association table. Probe request frame: Sent from a station when it requires information from another station. Probe response frame: Sent from an access point containing capability information, supported data rates, etc., after receiving a probe request frame. Reassociation request frame: A WNIC sends a reassociation request when it drops from the currently associated access point range and finds another access point with a stronger signal. The new access point coordinates the forwarding of any information that may still be contained in the buffer of the previous access point. Reassociation response frame: Sent from an access point containing the acceptance or rejection to a WNIC reassociation request frame. The frame includes information required for association such as the association ID and supported data rates. Action frame: Extends the management frame to control a certain action. Some of the action categories are QoS, Block Ack, Public, Radio Measurement, Fast BSS Transition, Mesh Peering Management, etc. These frames are sent by a station when it needs to tell its peer to take a certain action. For example, a station can tell another station to set up a block acknowledgement by sending an ADDBA Request action frame. The other station would then respond with an ADDBA Response action frame. The body of a management frame consists of frame-subtype-dependent fixed fields followed by a sequence of information elements (IEs). An IE has a common structure: a one-byte Element ID, a one-byte Length field, and a variable-length, element-specific payload. Control frames Control frames facilitate the exchange of data frames between stations. Some common 802.11 control frames include: Acknowledgement (ACK) frame: After receiving a data frame, the receiving station will send an ACK frame to the sending station if no errors are found. 
If the sending station does not receive an ACK frame within a predetermined period of time, the sending station will resend the frame. Request to Send (RTS) frame: The RTS and CTS frames provide an optional collision reduction scheme for access points with hidden stations. A station sends an RTS frame as the first step in a two-way handshake required before sending data frames. Clear to Send (CTS) frame: A station responds to an RTS frame with a CTS frame. It provides clearance for the requesting station to send a data frame. The CTS provides collision control management by including a time value for which all other stations are to hold off transmission while the requesting station transmits. Data frames Data frames carry packets from web pages, files, etc. within the body. The body begins with an IEEE 802.2 header, with the Destination Service Access Point (DSAP) specifying the protocol, followed by a Subnetwork Access Protocol (SNAP) header if the DSAP is hex AA, with the organizationally unique identifier (OUI) and protocol ID (PID) fields specifying the protocol. If the OUI is all zeroes, the protocol ID field is an EtherType value. Almost all 802.11 data frames use 802.2 and SNAP headers, and most use an OUI of 00:00:00 and an EtherType value. Similar to TCP congestion control on the internet, frame loss is built into the operation of 802.11. To select the correct transmission speed or Modulation and Coding Scheme, a rate control algorithm may test different speeds. The actual packet loss rate of Access points varies widely for different link conditions. There are variations in the loss rate experienced on production Access points, between 10% and 80%, with 30% being a common average. It is important to be aware that the link layer should recover these lost frames. If the sender does not receive an Acknowledgement (ACK) frame, then it will be resent. Standards and amendments Within the IEEE 802.11 Working Group, the following IEEE Standards Association Standard and Amendments exist: IEEE 802.11-1997: The WLAN standard was originally 1 Mbit/s and 2 Mbit/s, 2.4 GHz RF and infrared (IR) standard (1997), all the others listed below are Amendments to this standard, except for Recommended Practices 802.11F and 802.11T. IEEE 802.11a: 54 Mbit/s, 5 GHz standard (1999, shipping products in 2001) IEEE 802.11b: 5.5 Mbit/s and 11 Mbit/s, 2.4 GHz standard (1999) IEEE 802.11c: Bridge operation procedures; included in the IEEE 802.1D standard (2001) IEEE 802.11d: International (country-to-country) roaming extensions (2001) IEEE 802.11e: Enhancements: QoS, including packet bursting (2005) IEEE 802.11F: Inter-Access Point Protocol (2003) Withdrawn February 2006 IEEE 802.11g: 54 Mbit/s, 2.4 GHz standard (backwards compatible with b) (2003) IEEE 802.11h: Spectrum Managed 802.11a (5 GHz) for European compatibility (2004) IEEE 802.11i: Enhanced security (2004) IEEE 802.11j: Extensions for Japan (4.9-5.0 GHz) (2004) IEEE 802.11-2007: A new release of the standard that includes amendments a, b, d, e, g, h, i, and j. 
(July 2007) IEEE 802.11k: Radio resource measurement enhancements (2008) IEEE 802.11n: Higher Throughput WLAN at 2.4 and 5 GHz; 20 and 40 MHz channels; introduces MIMO to (September 2009) IEEE 802.11p: WAVE—Wireless Access for the Vehicular Environment (such as ambulances and passenger cars) (July 2010) IEEE 802.11r: Fast BSS transition (FT) (2008) IEEE 802.11s: Mesh Networking, Extended Service Set (ESS) (July 2011) IEEE 802.11T: Wireless Performance Prediction (WPP)—test methods and metrics Recommendation cancelled IEEE 802.11u: Improvements related to HotSpots and 3rd-party authorization of clients, e.g., cellular network offload (February 2011) IEEE 802.11v: Wireless network management (February 2011) IEEE 802.11w: Protected Management Frames (September 2009) IEEE 802.11y: 3650–3700 MHz Operation in the U.S. (2008) IEEE 802.11z: Extensions to Direct Link Setup (DLS) (September 2010) IEEE 802.11-2012: A new release of the standard that includes amendments k, n, p, r, s, u, v, w, y, and z (March 2012) IEEE 802.11aa: Robust streaming of Audio Video Transport Streams (June 2012) - see Stream Reservation Protocol IEEE 802.11ac: Very High Throughput WLAN at 5 GHz; wider channels (80 and 160 MHz); Multi-user MIMO (down-link only) (December 2013) IEEE 802.11ad: Very High Throughput 60 GHz (December 2012) — see also WiGig IEEE 802.11ae: Prioritization of Management Frames (March 2012) IEEE 802.11af: TV Whitespace (February 2014) IEEE 802.11-2016: A new release of the standard that includes amendments aa, ac, ad, ae, and af (December 2016) IEEE 802.11ah: Sub-1 GHz license exempt operation (e.g., sensor network, smart metering) (December 2016) IEEE 802.11ai: Fast Initial Link Setup (December 2016) IEEE 802.11aj: China Millimeter Wave (February 2018) IEEE 802.11ak: Transit Links within Bridged Networks (June 2018) IEEE 802.11aq: Pre-association Discovery (July 2018) IEEE 802.11-2020: A new release of the standard that includes amendments ah, ai, aj, ak, and aq (December 2020) IEEE 802.11ax: High Efficiency WLAN at 2.4, 5 and 6 GHz; introduces OFDMA to (February 2021) IEEE 802.11ay: Enhancements for Ultra High Throughput in and around the 60 GHz Band (March 2021) IEEE 802.11az: Next Generation Positioning (March 2023) IEEE 802.11ba: Wake Up Radio (March 2021) IEEE 802.11bb: Light Communications (November 2023) IEEE 802.11bc: Enhanced Broadcast Service (February 2024) IEEE 802.11bd: Enhancements for Next Generation V2X (see also IEEE 802.11p) (March 2023) In process IEEE 802.11be: Extremely High Throughput (see also IEEE 802.11ax) (May 2024) IEEE 802.11bf: WLAN Sensing IEEE 802.11bh: Randomized and Changing MAC Addresses IEEE 802.11bi: Enhanced Data Privacy IEEE 802.11bk: 320 MHz Positioning IEEE 802.11bn: Ultra High Reliability IEEE 802.11bp: Ambient Power Communication IEEE 802.11me: 802.11 Accumulated Maintenance Changes IEEE 802.11mf: 802.11 Accumulated Maintenance Changes 802.11F and 802.11T are recommended practices rather than standards and are capitalized as such. 802.11m is used for standard maintenance. 802.11ma was completed for 802.11-2007, 802.11mb for 802.11-2012, 802.11mc for 802.11-2016, and 802.11md for 802.11-2020. Standard vs. amendment Both the terms "standard" and "amendment" are used when referring to the different variants of IEEE standards. As far as the IEEE Standards Association is concerned, there is only one current standard; it is denoted by IEEE 802.11 followed by the date published. 
IEEE 802.11-2020 is the only version currently in publication, superseding previous releases. The standard is updated by means of amendments. Amendments are created by task groups (TG). Both the task group and their finished document are denoted by 802.11 followed by one or two lower case letters, for example, IEEE 802.11a or IEEE 802.11ax. Updating 802.11 is the responsibility of task group m. In order to create a new version, TGm combines the previous version of the standard and all published amendments. TGm also provides clarification and interpretation to industry on published documents. New versions of the IEEE 802.11 were published in 1999, 2007, 2012, 2016, and 2020. Nomenclature Various terms in 802.11 are used to specify aspects of wireless local-area networking operation and may be unfamiliar to some readers. For example, time unit (usually abbreviated TU) is used to indicate a unit of time equal to 1024 microseconds. Numerous time constants are defined in terms of TU (rather than the nearly equal millisecond). Also, the term portal is used to describe an entity that is similar to an 802.1H bridge. A portal provides access to the WLAN by non-802.11 LAN STAs. Security In 2001, a group from the University of California, Berkeley presented a paper describing weaknesses in the 802.11 Wired Equivalent Privacy (WEP) security mechanism defined in the original standard; they were followed by Fluhrer, Mantin, and Shamir's paper titled "Weaknesses in the Key Scheduling Algorithm of RC4". Not long after, Adam Stubblefield and AT&T publicly announced the first verification of the attack. In the attack, they were able to intercept transmissions and gain unauthorized access to wireless networks. The IEEE set up a dedicated task group to create a replacement security solution, 802.11i (previously, this work was handled as part of a broader 802.11e effort to enhance the MAC layer). The Wi-Fi Alliance announced an interim specification called Wi-Fi Protected Access (WPA) based on a subset of the then-current IEEE 802.11i draft. These started to appear in products in mid-2003. IEEE 802.11i (also known as WPA2) itself was ratified in June 2004, and uses the Advanced Encryption Standard (AES), instead of RC4, which was used in WEP. The modern recommended encryption for the home/consumer space is WPA2 (AES Pre-Shared Key), and for the enterprise space is WPA2 along with a RADIUS authentication server (or another type of authentication server) and a strong authentication method such as EAP-TLS. In January 2005, the IEEE set up yet another task group "w" to protect management and broadcast frames, which previously were sent unsecured. Its standard was published in 2009. In December 2011, a security flaw was revealed that affects some wireless routers with a specific implementation of the optional Wi-Fi Protected Setup (WPS) feature. While WPS is not a part of 802.11, the flaw allows an attacker within the range of the wireless router to recover the WPS PIN and, with it, the router's 802.11i password in a few hours. In late 2014, Apple announced that its iOS 8 mobile operating system would scramble MAC addresses during the pre-association stage to thwart retail footfall tracking made possible by the regular transmission of uniquely identifiable probe requests. Android 8.0 "Oreo" introduced a similar feature, named "MAC randomization". Wi-Fi users may be subjected to a Wi-Fi deauthentication attack to eavesdrop, attack passwords, or force the use of another, usually more expensive access point. 
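The MAC-address randomization mentioned above can be illustrated with a short sketch. Setting the locally administered bit and clearing the multicast bit of the first octet follows the general IEEE MAC-address conventions; the snippet is an illustrative sketch only, not the scheme used by iOS, Android, or any other particular implementation.

```python
import secrets

def random_locally_administered_mac() -> str:
    """Return a random unicast, locally administered MAC address string.

    Illustrative sketch of the idea behind probe-request MAC randomization;
    real operating systems apply additional policies (per-network addresses,
    rotation intervals, vendor-specific rules).
    """
    octets = bytearray(secrets.token_bytes(6))
    octets[0] &= 0xFE  # clear bit 0 (I/G): unicast, not multicast
    octets[0] |= 0x02  # set bit 1 (U/L): locally administered, not globally unique
    return ":".join(f"{b:02x}" for b in octets)

print(random_locally_administered_mac())  # e.g. "7e:4a:...", different on every call
```

Because the address is marked as locally administered, it cannot collide with a vendor-assigned (globally unique) address, which is what allows a device to present a throwaway identity in probe requests before it associates.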
See also 802.11 frame types Comparison of wireless data standards Fujitsu Ltd. v. Netgear Inc. Gi-Fi, a term used by some trade press to refer to faster versions of the IEEE 802.11 standards LTE-WLAN Aggregation OFDM system comparison table Passive Wi-Fi Reference Broadcast Infrastructure Synchronization TU (time unit) TV White Space Database Ultra-wideband White spaces (radio) Wi-Fi operating system support Wibree or Bluetooth low energy WiGig Wireless USB – another wireless protocol primarily designed for shorter-range applications Notes Footnotes References External links IEEE 802.11 working group Official timelines of 802.11 standards from IEEE List of all Wi-Fi Chipset Vendors – Including historical timeline of mergers and acquisitions Computer-related introductions in 1997 Wireless networking standards Local area networks
IEEE 802.11
Technology
10,339
414,421
https://en.wikipedia.org/wiki/John%20Kendrew
Sir John Cowdery Kendrew, (24 March 1917 – 23 August 1997) was an English biochemist, crystallographer, and science administrator. Kendrew shared the 1962 Nobel Prize in Chemistry with Max Perutz, for their work at the Cavendish Laboratory to investigate the structure of haem-containing proteins. Education and early life Kendrew was born in Oxford, son of Wilfrid George Kendrew, reader in climatology in the University of Oxford, and Evelyn May Graham Sandburg, art historian. After preparatory school at the Dragon School in Oxford, he was educated at Clifton College in Bristol, 1930–1936. He attended Trinity College, Cambridge in 1936, as a Major Scholar, graduating in chemistry in 1939. He spent the early months of World War II doing research on reaction kinetics, and then became a member of the Air Ministry Research Establishment, working on radar. In 1940 he became engaged in operational research at the Royal Air Force headquarters; commissioned a squadron leader on 17 September 1941, he was appointed an honorary wing commander on 8 June 1944, and relinquished his commission on 5 June 1945. He was awarded his PhD after the war in 1949. Research and career During the war years, he became increasingly interested in biochemical problems, and decided to work on the structure of proteins. Crystallography In 1945 he approached Max Perutz in the Cavendish Laboratory in Cambridge. Joseph Barcroft, a respiratory physiologist, suggested he might make a comparative protein crystallographic study of adult and fetal sheep haemoglobin, and he started that work. In 1947 he became a Fellow of Peterhouse; and the Medical Research Council (MRC) agreed to create a research unit for the study of the molecular structure of biological systems, under the direction of Sir Lawrence Bragg. In 1954 he became a Reader at the Davy-Faraday Laboratory of the Royal Institution in London. Crystal structure of myoglobin Kendrew shared the 1962 Nobel Prize for chemistry with Max Perutz for determining the first atomic structures of proteins using X-ray crystallography. Their work was done at what is now the MRC Laboratory of Molecular Biology in Cambridge. Kendrew determined the structure of the protein myoglobin, which stores oxygen in muscle cells. In 1947 the MRC agreed to make a research unit for the Study of the Molecular Structure of Biological Systems. The original studies were on the structure of sheep haemoglobin, but when this work had progressed as far as was possible using the resources then available, Kendrew embarked on the study of myoglobin, a molecule only a quarter the size of the haemoglobin molecule. His initial source of raw material was horse heart, but the crystals thus obtained were too small for X-ray analysis. Kendrew realized that the oxygen-conserving tissue of diving mammals could offer a better prospect, and a chance encounter led to his acquiring a large chunk of whale meat from Peru. Whale myoglobin did give large crystals with clean X-ray diffraction patterns. However, the problem still remained insurmountable, until in 1953 Max Perutz discovered that the phase problem in analysis of the diffraction patterns could be solved by multiple isomorphous replacement — comparison of patterns from several crystals; one from the native protein, and others that had been soaked in solutions of heavy metals and had metal ions introduced in different well-defined positions. 
An electron density map at 6 angstrom (0.6 nanometre) resolution was obtained by 1957, and by 1959 an atomic model could be built at 2 angstrom (0.2 nm) resolution. Later career In 1963, Kendrew became one of the founders of the European Molecular Biology Organization; he also founded the Journal of Molecular Biology and was for many years its editor-in-chief. He became Fellow of the American Society of Biological Chemists in 1967 and honorary member of the International Academy of Science, Munich. In 1974, he succeeded in persuading governments to establish the European Molecular Biology Laboratory (EMBL) in Heidelberg and became its first director. He was knighted in 1974. From 1974 to 1979, he was a Trustee of the British Museum, and from 1974 to 1988 he was successively Secretary General, Vice-President, and President of the International Council of Scientific Unions. After his retirement from EMBL, Kendrew became President of St John's College at the University of Oxford, a post he held from 1981 to 1987. In his will, he designated his bequest to St John's College for studentships in science and in music, for students from developing countries. The Kendrew Quadrangle at St John's College in Oxford, officially opened on 16 October 2010, is named after him. Kendrew was married to the former Elizabeth Jarvie (née Gorvin) from 1948 to 1956. Their marriage ended in divorce. Kendrew was subsequently partners with the artist Ruth Harris. He had no surviving children. A biography of Kendrew, entitled A Place in History: The Biography of John C. Kendrew, by Paul M. Wassarman was published by Oxford University Press in 2020. Selected publications References Further reading John Finch; 'A Nobel Fellow on Every Floor', Medical Research Council 2008, 381 pp, ; this book is all about the MRC Laboratory of Molecular Biology, Cambridge. Oxford University Press, page on Paul M. Wassarman, A Place in History, , 2020 External links 1917 births 1997 deaths Alumni of Trinity College, Cambridge Commanders of the Order of the British Empire British crystallographers English biologists English biophysicists English molecular biologists English Nobel laureates Fellows of the Royal Society Structural biologists Foreign associates of the National Academy of Sciences Knights Bachelor Members of the European Molecular Biology Organization Nobel laureates in Chemistry People educated at Clifton College People educated at The Dragon School Scientists from Oxford Presidents of St John's College, Oxford Presidents of the British Science Association Royal Medal winners Trustees of the British Museum X-ray crystallography 20th-century British biologists Royal Air Force personnel of World War II Royal Air Force Volunteer Reserve personnel of World War II Royal Air Force wing commanders
John Kendrew
Chemistry,Materials_science
1,267
2,212,195
https://en.wikipedia.org/wiki/Beta%20Draconis
Beta Draconis (β Draconis, abbreviated Beta Dra, β Dra) is a binary star system and the third-brightest star in the northern circumpolar constellation of Draco. The two components are designated Beta Draconis A (officially named Rastaban, the traditional name of the system) and B respectively. With a combined apparent visual magnitude of 2.79, it is bright enough to be easily seen with the naked eye. Based upon parallax measurements from the Hipparcos astrometry satellite, it lies at a distance of about from the Sun. The system is drifting closer with a radial velocity of −21 km/s. The binary system consists of a bright giant orbited by a dwarf companion once every four millennia or so. The companion is about 11 magnitudes fainter than the primary star, and the two are separated by . The spectrum of the primary, Beta Draconis A, matches a stellar classification of G2Ib-IIa, showing mixed features of a bright giant and a supergiant star, and is listed as a standard star for that spectral class. It is about 65 million years old and is currently undergoing its first convective dredge-up. Compared to the Sun, Beta Draconis A is an enormous star with six times the mass and roughly 40 times the radius. At this size, it is emitting about 950 times the luminosity of the Sun from its outer envelope at an effective temperature of 5,160 K, giving it the yellow hue of a G-type star. The star has a particularly strong chromospheric emission that is generating X-ray and far-UV radiation. There is a detectable magnetic field with a longitudinal field strength of . Beta Draconis lies on or near the Cepheid instability strip, yet only appears to be a microvariable with a range of about 1/100 of a magnitude. It was confirmed as a variable star with this small range by Gabriel Cristian Neagu, using data from the TESS and Hipparcos missions, and the variability was reported to the AAVSO (American Association of Variable Star Observers) Variable Star Index. Nomenclature β Draconis (Latinised to Beta Draconis) is the system's Bayer designation. The designations of the two components as Beta Draconis A and B derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU). It bore the traditional name Rastaban, which has also been used for Gamma Draconis. This name, less commonly written Rastaben, derives from the Arabic phrase ra's ath-thu'ban "head of the serpent/dragon". It was also known as Asuia and Alwaid, the latter from the Arabic al-ʽawāʼidh "the old mother camels". In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Rastaban for the component Beta Draconis A on 21 August 2016 and it is now so included in the List of IAU-approved Star Names. Beta Draconis is part of the asterism of the Mother Camels (Arabic al'awa'id), along with Gamma Draconis (Eltanin), Mu Draconis (Erakis), Nu Draconis (Kuma) and Xi Draconis (Grumium), which was later known as the Quinque Dromedarii. In Chinese, (), meaning Celestial Flail, refers to an asterism consisting of Beta Draconis, Xi Draconis, Nu Draconis, Gamma Draconis and Iota Herculis. Consequently, the Chinese name for Beta Draconis itself is known as (, ). References External links G-type bright giants G-type supergiants Binary stars Draco (constellation) Draconis, Beta BD+52 2065 Draconis, 23 159181 085670 6536 Rastaban
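As a rough consistency check of the figures quoted above (an editorial illustration; the inputs are the rounded values given in the article and an assumed solar effective temperature of 5,772 K), the Stefan–Boltzmann relation L/L☉ = (R/R☉)²(T/T☉)⁴ ties the quoted radius and temperature to a luminosity of the same order as the roughly 950 solar luminosities stated:

```python
# Stefan-Boltzmann sanity check: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4.
# Inputs are the rounded values quoted above; Tsun = 5772 K is an assumed value.

radius_solar = 40        # roughly 40 times the Sun's radius (rounded)
t_eff = 5160             # quoted effective temperature, kelvin
t_sun = 5772             # assumed solar effective temperature, kelvin

luminosity_solar = radius_solar**2 * (t_eff / t_sun)**4
print(f"~{luminosity_solar:.0f} solar luminosities")  # ~1000, same order as the ~950 quoted
```

The small difference from the quoted value simply reflects the rounding of the input radius and temperature.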
Beta Draconis
Astronomy
874
29,000,170
https://en.wikipedia.org/wiki/Trash%20Inc%3A%20The%20Secret%20Life%20of%20Garbage
Trash Inc: The Secret Life of Garbage is a one-hour television documentary film that aired on CNBC on September 29, 2010 about trash/garbage, what happens to it when it's "thrown away", and its impact on the world. The film is hosted by CNBC Squawk Box co-anchor Carl Quintanilla as he reports from various landfills (such as the largest in the United States, the Apex Landfill in Clark County, Nevada), business, and other locations in the United States (New York, New Jersey, Hawaii, South Carolina) and China (mostly Beijing). The idea for Trash, Inc was born of the 2008 recession and the relative stability of publicly traded waste management companies. References External links CNBC original programming American documentary television films Waste 2010 in the environment 2010 television films 2010 films Documentary films about environmental issues 2010s English-language films 2010s American films English-language documentary films
Trash Inc: The Secret Life of Garbage
Physics
189
51,183,962
https://en.wikipedia.org/wiki/Semi-abelian%20category
In mathematics, specifically in category theory, a semi-abelian category is a pre-abelian category in which, for every morphism f, the induced morphism coim f → im f from the coimage to the image of f is a bimorphism, i.e., both a monomorphism and an epimorphism. The history of the notion is intertwined with that of a quasi-abelian category, as, for a while, it was not known whether the two notions are distinct (see quasi-abelian category#History). Properties The two properties used in the definition can be characterized by several equivalent conditions. Every semi-abelian category has a maximal exact structure. If a semi-abelian category is not quasi-abelian, then the class of all kernel-cokernel pairs does not form an exact structure. Examples Every quasi-abelian category is semi-abelian. In particular, every abelian category is semi-abelian. Non-quasi-abelian examples are the following. The category of (possibly non-Hausdorff) bornological spaces is semi-abelian. For a suitable quiver and base field, the category of finitely generated projective modules over the associated quiver algebra is semi-abelian. Left and right semi-abelian categories By splitting the two conditions on the induced map in the definition, one can define left semi-abelian categories by requiring that coim f → im f is a monomorphism for each morphism f. Accordingly, right semi-abelian categories are pre-abelian categories such that coim f → im f is an epimorphism for each morphism f. If a category is left semi-abelian and right quasi-abelian, then it is already quasi-abelian. The same holds if the category is right semi-abelian and left quasi-abelian. Citations References José Bonet, Susanne Dierolf, The pullback for bornological and ultrabornological spaces. Note Mat. 25(1), 63–67 (2005/2006). Yaroslav Kopylov and Sven-Ake Wegner, On the notion of a semi-abelian category in the sense of Palamodov, Appl. Categ. Structures 20(5), 531–541 (2012). Wolfgang Rump, A counterexample to Raikov's conjecture, Bull. London Math. Soc. 40, 985–994 (2008). Wolfgang Rump, Almost abelian categories, Cahiers Topologie Géom. Différentielle Catég. 42(3), 163–225 (2001). Wolfgang Rump, Analysis of a problem of Raikov with applications to barreled and bornological spaces, J. Pure Appl. Algebra 215, 44–52 (2011). Dennis Sieg and Sven-Ake Wegner, Maximal exact structures on additive categories, Math. Nachr. 284, 2093–2100 (2011). Additive categories
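To make the definition concrete, the canonical factorization it refers to can be written out explicitly. The display below is an illustrative sketch in standard pre-abelian notation (LaTeX), not text taken from the article; f, X and Y denote an arbitrary morphism and its domain and codomain.

% Illustrative sketch, standard pre-abelian notation (not from the article).
% For a morphism f : X -> Y, set coim f = coker(ker f) and im f = ker(coker f).
% Then f factors canonically as
\[
  X \twoheadrightarrow \operatorname{coim} f
    \;\xrightarrow{\;\overline{f}\;}\;
    \operatorname{im} f \rightarrowtail Y .
\]
% The category is semi-abelian when every \overline{f} is a bimorphism,
% left semi-abelian when every \overline{f} is a monomorphism,
% and right semi-abelian when every \overline{f} is an epimorphism.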
Semi-abelian category
Mathematics
598
35,043,850
https://en.wikipedia.org/wiki/Telugu%20years
In India, the Telugu year is the calendar year of the Telugu speaking people of Andhra Pradesh, Telangana, and the enclave Yanam. Each Yuga (era) has a cycle of 60 years. Each year of Ugadi year has a specific name in Panchangam (astronomical calendar) based on astrological influences and the name of the year; this denotes the overall character of that year. The calendar includes 60 year names. Every 60 years, one name cycle completes, repeat in the next omnibus cycle. For example, the Telugu name for 1954 is "Jaya", and it first repeated in 2014. Ugadi is the Telugu new year festival in spring (usually March or April). These years always change on Ugadi. In Telugu mythology, the names of the years are those of Naradha Maharshi's children's names. To teach a lesson to Naradha, Lord Vishnu presented an illusion to Naradha of a lady, who eventually gave birth birth to 60 children – all of whom were to die in a war. After this denouement, and Naradha having learned his lesson, Vishnu offered boon to Naradha that his children's names would be the names of the cyclic, and that their specific characteristics would carry over to those that years. E.g., 2024 is a Krodhi year. Years The sixty Ugadi year names are as follows: (1867, 1927, 1987, 2047) Prabhava ప్రభవ (యజ్ఞములు అధికంగా జరుగుతాయి) (1868, 1928, 1988, 2048) Vibhava విభవ (సుఖంగా జీవిస్తారు) (1869, 1929, 1989, 2049) Śukla శుక్ల (సమృద్దిగా పంటలు పండుతాయి) (1870, 1930, 1990, 2050) Pramōdyuta ప్రమోద్యూత (అందరికి ఆనందాన్ని ఇస్తుంది) (1871, 1931, 1991, 2051) Prajōtpatti ప్రజోత్పత్తి (అన్నింటిలోను అభివృద్ధి ఉంటుంది) (1872, 1932, 1992, 2052) Āṅgīrasa ఆంగీరస (భోగాలు కలుగుతాయి) (1873, 1933, 1993, 2053) Śrīmukha శ్రీముఖ (వనరులు సమృద్దిగా ఉంటాయి) (1874, 1934, 1994, 2054) Bhava భవ (ఉన్నత భావాలు కలిగి ఉంటారు) (1875, 1935, 1995, 2055) Yuva యువ (వర్షాలు కురిపించి పంటలు సమృద్ధిగా చేతికి అందుతాయి) (1876, 1936, 1996, 2056) Dhāta ధాత (అనారోగ్య బాధలు తగ్గుతాయి) (1877, 1937, 1997, 2057) Īśvara ఈశ్వర (క్షేమం, ఆరోగ్యాన్ని సూచిస్తుంది) (1878, 1938, 1998, 2058) Bahudhānya బహుధాన్య (దేశం సుభిక్షంగా, సంతోషంగా ఉండాలని సూచిస్తుంది) (1879, 1939, 1999, 2059) Pramādhi ప్రమాధి (వర్షాలు మధ్యస్థంగా ఉంటాయి‌) (1880, 1940, 2000, 2060) Vikrama విక్రమ (పంటలు బాగా పండి రైతన్నలు సంతోషిస్తారు, విజయాలు సాధిస్తారు) (1881, 1941, 2001, 2061) Vr̥ṣa వృష (వర్షాలు సమృద్ధిగా కురుస్తాయి) (1882, 1942, 2002, 2062) Citrabhānu చిత్రభాను (అద్భుతమైన ఫలితాలు పొందుతారు) (1883, 1943, 2003, 2063) Svabhānu స్వభాను (క్షేమము, ఆరోగ్యం) (1884, 1944, 2004, 2064) Tāraṇa తారణ (మేఘాలు సరైన సమయంలో వర్షించి సమృద్ధిగా వర్షాలు కురుస్తాయి) (1885, 1945, 2005, 2065) Pārthiva పార్థివ (ఐశ్వర్యం, సంపద పెరుగుతాయి) (1886, 1946, 2006, 2066) Vyaya వ్యయ (అతివృష్టి, అనవసర ఖర్చులు) (1887, 1947, 2007, 2067) Sarvajittu సర్వజిత్తు (సంతోషకరంగా చాలా వర్షాలు కురుస్తాయి) (1888, 1948, 2008, 2068) Sarvadhāri సర్వధారి (సుభిక్షంగా ఉంటారు) (1889, 1949, 2009, 2069) Virōdhi విరోధి (వర్షాలు లేకుండా ఇబ్బందులు పడే సమయం) (1890, 1950, 2010, 2070) Vikr̥ti వికృతి (ఈ సమయం భయంకరంగా ఉంటుంది) (1891, 1951, 2011, 2071) Khara ఖర (పరిస్థితులు సాధారణంగా ఉంటాయి) (1892, 1952, 2012, 2072) Nandana ‌నందన (ప్రజలకు ఆనందం కలుగుతుంది) (1893, 1953, 2013, 2073) Vijaya విజయ (శత్రువులను జయిస్తారు) (1894, 1954, 2014, 2074) Jaya జయ (లాభాలు, విజయం సాధిస్తారు) (1895, 1955, 2015, 2075) Manmadha మన్మధ (జ్వరాది బాధలు తొలగిపోతాయి) (1896, 1956, 2016, 2076) Durmukhi దుర్ముఖి (ఇబ్బందులు ఉన్న క్షేమంగానే ఉంటారు) (1897, 1957, 2017, 2077) Hēvaḷambi హేవళంబి (ప్రజలు సంతోషంగా ఉంటారు) (1898, 1958, 2018, 2078) Viḷambi విళంబి (సుభిక్షంగా ఉంటారు) (1899, 1959, 2019, 2079) Vikāri వికారి 
(అనారోగ్యాన్ని కలిగిస్తుంది, శత్రువులకు చాలా కోపం తీసుకొస్తుంది) (1900, 1960, 2020, 2080) Śārvari శార్వరి (చీకటి) (1901, 1961, 2021, 2081) Plava ప్లవ (ఒడ్డుకు చేర్చునది) (1902, 1962, 2022, 2082) Śubhakr̥ttu శుభకృతు (శుభములు కలిగించేది) (1903, 1963, 2023, 2083) Śōbhakr̥ttu శోభకృతు (లాభములు కలిగించేది) (1904, 1964, 2024, 2084) Krōdhi క్రోధి (కోపం కలిగించేది) (1905, 1965, 2025, 2085) Viśvāvasu విశ్వావసు (ధనం సమృద్ధిగా ఉంటుంది) (1906, 1966, 2026, 2086) Parābhava పరాభవ (ప్రజల పరాభవాలకు గురవుతారు) (1907, 1967, 2027, 2087) Plavaṅga ప్లవంగ (నీరు సమృద్ధిగా ఉంటుంది) (1908, 1968, 2028, 2088) Kīlaka కీలక (పంటలు బాగా పండుతాయి) (1909, 1969, 2029, 2089) Saumya సౌమ్య (శుభ ఫలితాలు అధికం) (1910, 1970, 2030, 2090) Sādhāraṇa సాధారణ (సాధారణ పరిస్థితులు ఉంటాయి) (1911, 1971, 2031, 2091) Virōdhikr̥ttu విరోధికృతు (ప్రజల్లో విరోధం ఏర్పడుతుంది) (1912, 1972, 2032, 2092) Paridhāvi పరిధావి (ప్రజల్లో భయం ఎక్కువగా ఉంటుంది) (1913, 1973, 2033, 2093) Pramādīca ప్రమాదీచ (ప్రమాదాలు ఎక్కువ) (1914, 1974, 2034, 2094) Ānanda ఆనంద (ఆనందంగా ఉంటారు) (1915, 1975, 2035, 2095) Rākṣasa రాక్షస (కఠిన హృదయం కలిగి ఉంటారు) (1916, 1976, 2036, 2096) Nala నల (పంటలు బాగా పండుతాయి) (1917, 1977, 2037, 2097) Piṅgaḷa పింగళ (సామాన్య ఫలితాలు కలుగుతాయి) (1918, 1978, 2038, 2098) Kāḷayukti కాళయుక్తి (కాలానికి అనుకూలమైన ఫలితాలు లభిస్తాయి) (1919, 1979, 2039, 2099) Siddhārthi సిద్ధార్ది (కార్య సిద్ధి) (1920, 1980, 2040, 2100) Raudri రౌద్రి (ప్రజలకు చిన్నపాటి బాధలు ఉంటాయి) (1921, 1981, 2041, 2101) Durmati దుర్మతి (వర్షాలు సామాన్యంగా ఉంటాయి) (1922, 1982, 2042, 2102) Dundubhi దుందుభి (క్షేమం, ధ్యానం) (1923, 1983, 2043, 2103) Rudhirōdgāri రుధిరోద్గారి (ప్రమాదాలు ఎక్కువ) (1924, 1984, 2044, 2104) Raktākṣi రక్తాక్షి (అశుభాలు కలుగుతాయి) (1925, 1985, 2045, 2105) Krōdhana క్రోధన (విజయాలు సిద్ధిస్తాయి) (1926, 1986, 2046, 2106) Akṣaya అక్షయ (తరగని సంపద) Significance In ancient days Yogis (saints) interact directly with God, according to that, they have given information related to our Indian Kalachakra (time-cycle) in relation to Lord Brahma. Below is the Indian Kalachakra (time-cycle): 60 years = Shashti Poorthi 432,000 years = Kali Yuga 864,000 years = Dvapara Yuga 1,296,000 years = Treta Yuga 1,728,000 years = Satya Yuga 4,320,000 years = Chatur Yuga (Total 4 Yuga) 71 Chatur Yuga = Manvantara 1,000 Chatur Yuga = Kalpa Brahma day (= 14 Manvantara + 15 Manvantara-sandhya) Kalpa + Pralaya = Brahma Day (day + night) 30 Brahma Days = Brahma month 12 Brahma months = Brahma year 100 Brahma years = Brahma lifespan Maha-kalpa of 311.04 trillion years (followed by Maha-pralaya of equal length) References Calendars Telugu language Names of units of time
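Because the names recur on a fixed 60-year cycle, the position of a Gregorian year in the cycle can be found by simple modular arithmetic. The sketch below is illustrative and not part of the article: it uses 1867 (Prabhava) from the list above as the anchor, ignores the fact that the Telugu year actually begins at Ugadi in March or April (dates earlier in the Gregorian year belong to the previous name), and also re-derives the 311.04-trillion-year figure from the unit table in the Significance section.

# Illustrative sketch (assumptions noted above).
def telugu_cycle_position(year: int) -> int:
    """1-based position (1-60) in the name cycle, anchored at 1867 = Prabhava."""
    return (year - 1867) % 60 + 1

assert telugu_cycle_position(1867) == 1    # Prabhava
assert telugu_cycle_position(1954) == 28   # Jaya
assert telugu_cycle_position(2014) == 28   # Jaya repeats 60 years later
assert telugu_cycle_position(2024) == 38   # Krodhi

# Re-deriving the Maha-kalpa length from the unit table above:
chatur_yuga = 4_320_000                   # years
kalpa = 1_000 * chatur_yuga               # one Brahma day
brahma_day_and_night = 2 * kalpa          # Kalpa + Pralaya
brahma_year = 12 * 30 * brahma_day_and_night
brahma_lifespan = 100 * brahma_year
assert brahma_lifespan == 311_040_000_000_000   # 311.04 trillion years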
Telugu years
Physics
1,707
60,591,217
https://en.wikipedia.org/wiki/Plunge%20saw
A plunge saw or plunge-cut saw is a type of hand-held circular saw which differs from a regular circular saw in that it can plunge into the material to a predetermined depth during the cut. In other words, the depth-of-cut is not fixed and often can be adjusted to be just slightly over the thickness of the board being cut. This property also allows a plunge saw to cut shallow grooves into the workpiece, if necessary. Compared to traditional hand-held circular saws, plunge saws are said to increase operator safety, as well as allowing for reduced splintering and tear-out. Plunge saws are an essential power tool for joiners, carpenters, kitchen fitters and anyone who works with laminates, insulation or needs to make lots of cuts in small work pieces. History The German power tool manufacturer Festool introduced the first guide rail in 1962, and patented and released the first plunge-cut saw in 1980. Rail systems A track is used to guide the plunge saw. Compatibility The original FS track system of Festool is also used by many other manufacturers, such as Makita and Milwaukee, which means that tracks, saw and other equipment can be used across different manufacturers. An alternative standard which is not compatible with the Festool system is the FSN system from Bosch, which is also used by Mafell. There is some debate as to which of the rail systems is the best, and both have their supporters, but their functional differences are small in practice. Lengths and connection Rails are available in different lengths, and should be robust and rigid. A shorter rail (for example 80 cm) can be handy for smaller work, while lengths such as 140 cm, 210 cm or 310 cm can be useful for sawing larger boards. Multiple rails can often be joined with extension pieces to achieve a longer length. If a shorter rail of a particular length is needed one can modify an existing rail by cutting it. It is not possible to interchange rails between different systems (Festool versus Bosch). Stability The underside of the rail can be finished with a non-slip material, while the upper side can have smooth plastic finish. A clamp can be used to make the rail lie still if precision cutting is needed. When using a rail for the first time, the rubber strip on the side used for sighting must be cut to fit. There should then be nothing underneath, and the cut depth should not be set too deeply. Other uses Some rails can also be used for jigsaws and handheld routers by using an adapter, and some of these adapters can be used on multiple rail systems (Festool and Bosch). Compared with other track saws Plunge saws usually come with a track system which lets them slide on a guide rail during operation, allowing the operator to perform long and accurate cuts, and for this reason plunge saws are sometimes called "track saws". However, the term track saw can be ambiguous, since some normal handheld circular saws without a plunge-cut feature also can be fitted with a track or guide rail. Compared to a conventional circular saw, a plunge saw can be safer and more and precise tool for woodworking and carpentry. See also Wall chaser References Cutting machines Metalworking cutting tools Saws Woodworking hand-held power tools Woodworking machines
Plunge saw
Physics,Technology
680
1,576,787
https://en.wikipedia.org/wiki/Allyl%20isothiocyanate
Allyl isothiocyanate (AITC) is a naturally occurring unsaturated isothiocyanate. The colorless oil is responsible for the pungent taste of cruciferous vegetables such as mustard, radish, horseradish, and wasabi. This pungency and the lachrymatory effect of AITC are mediated through the TRPA1 and TRPV1 ion channels. It is slightly soluble in water, but more soluble in most organic solvents. Biosynthesis and biological functions Allyl isothiocyanate can be obtained from the seeds of black mustard (Rhamphospermum nigrum) or brown Indian mustard (Brassica juncea). When these mustard seeds are broken, the enzyme myrosinase is released and acts on a glucosinolate known as sinigrin to give allyl isothiocyanate. This serves the plant as a defense against herbivores; since it is harmful to the plant itself, it is stored in the harmless form of the glucosinolate, separate from the myrosinase enzyme. When an animal chews the plant, the allyl isothiocyanate is released, repelling the animal. Human appreciation of the pungency is learned. The compound has been shown to strongly repel fire ants (Solenopsis invicta). AITC vapor is also used as an antimicrobial and shelf-life extender in food packaging. Production and applications Allyl isothiocyanate is produced commercially by the reaction of allyl chloride and potassium thiocyanate: CH2=CHCH2Cl + KSCN → CH2=CHCH2NCS + KCl The product obtained in this fashion is sometimes known as synthetic mustard oil. Allyl thiocyanate isomerizes to the isothiocyanate: CH2=CHCH2SCN → CH2=CHCH2NCS Allyl isothiocyanate can also be liberated by dry distillation of the seeds. The product obtained in this fashion is known as volatile oil of mustard. It is used principally as a flavoring agent in foods. Synthetic allyl isothiocyanate is used as an insecticide, as an anti-mold agent, bactericide, and nematicide, and is used in certain cases for crop protection. It is also used in fire alarms for the deaf. Hydrolysis of allyl isothiocyanate gives allylamine. Safety Allyl isothiocyanate has an LD50 of 151 mg/kg and is a lachrymator (similar to tear gas or mace). Oncology Based on in vitro experiments and animal models, allyl isothiocyanate exhibits many of the desirable attributes of a cancer chemopreventive agent. See also Mustard plaster, traditional home remedy Piperine, the piquant chemical in black pepper Capsaicin, the piquant chemical in chili peppers Allicin, the piquant flavor chemical in raw garlic References Antibiotics Insecticides Isothiocyanates Pungent flavors Nematicides Allyl compounds Lachrymatory agents Transient receptor potential channel modulators
Allyl isothiocyanate
Chemistry,Biology
645
24,542,773
https://en.wikipedia.org/wiki/Left%20corner%20parser
In computer science, a left corner parser is a type of chart parser used for parsing context-free grammars. It combines the top-down and bottom-up approaches of parsing. The name derives from the use of the left corner of the grammar's production rules. An early description of a left corner parser is "A Syntax-Oriented Translator" by Peter Zilahy Ingerman. References Specific Parsing algorithms
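The "left corner" of a production A → X1 … Xn is its first right-hand-side symbol X1; a left-corner parser links a constituent recognized bottom-up to a top-down goal category through the reflexive-transitive closure of this relation. The sketch below is illustrative only: the grammar, symbol names and function are invented for the example and are not taken from the article or from any particular parser implementation.

# Illustrative sketch: computing the left-corner relation of a small CFG.
from collections import defaultdict

def left_corner_relation(rules):
    """rules: iterable of (lhs, rhs_tuple) productions.
    Returns {symbol: set of symbols reachable as a left corner}, i.e. the
    reflexive-transitive closure of the immediate left-corner relation."""
    direct = defaultdict(set)
    symbols = set()
    for lhs, rhs in rules:
        symbols.add(lhs)
        symbols.update(rhs)
        if rhs:                       # ignore empty productions in this sketch
            direct[lhs].add(rhs[0])   # immediate left corner
    closure = {s: {s} for s in symbols}          # reflexive part
    changed = True
    while changed:                                # naive fixed-point iteration
        changed = False
        for a in symbols:
            for b in set(closure[a]):
                new = direct[b] - closure[a]
                if new:
                    closure[a] |= new
                    changed = True
    return closure

# Toy grammar: S -> NP VP, NP -> Det N, VP -> V NP
rules = [("S", ("NP", "VP")), ("NP", ("Det", "N")), ("VP", ("V", "NP"))]
lc = left_corner_relation(rules)
assert "Det" in lc["S"]   # a determiner can begin an S, via NP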
Left corner parser
Technology
91
26,196,497
https://en.wikipedia.org/wiki/Elephant%27s%20toothpaste
Elephant's toothpaste is a foamy substance caused by the quick decomposition of hydrogen peroxide (H2O2) using potassium iodide (KI) or yeast and warm water as a catalyst. How rapidly the reaction proceeds will depend on the concentration of hydrogen peroxide. Because it requires only a small number of ingredients and makes a "volcano of foam", it is a popular experiment for children to perform in school or at parties. Explanation Description About 50 ml of concentrated (>12%) hydrogen peroxide is first mixed with liquid soap or dishwashing detergent. Then, a catalyst, often around 10 ml of potassium iodide solution or catalase from baker's yeast, is added to make the hydrogen peroxide decompose very quickly. Hydrogen peroxide breaks down into oxygen and water. As a small amount of hydrogen peroxide generates a large volume of oxygen, the oxygen quickly pushes out of the container. The soapy water traps the oxygen, creating bubbles, and turns into foam. About 5-10 drops of food coloring can also be added before the catalyst to dramatize the effect. How rapidly the reaction occurs will depend on the concentration of hydrogen peroxide used. Chemical explanation This experiment shows the catalyzed decomposition of hydrogen peroxide. Hydrogen peroxide (H2O2) decomposes into water and oxygen gas, which is trapped by the soap as foam, but normally the reaction is too slow to be easily perceived or measured: 2 H2O2 → 2 H2O + O2↑ Under normal conditions, this reaction takes place very slowly; therefore, a catalyst is added to speed up the reaction, which results in rapid formation of foam. The iodide ion from potassium iodide acts as a catalyst and speeds up the reaction while remaining chemically unchanged in the reaction process. The iodide ion changes the mechanism by which the reaction occurs, proceeding through two steps: H2O2 + I− → H2O + OI−, followed by H2O2 + OI− → H2O + O2↑ + I−, which regenerates the iodide. The reaction is exothermic; the foam produced is hot (about 75°C or 167°F). A glowing splint can be used to show that the gas produced is oxygen. The rate of foam formation, measured in volume per unit time, has a positive correlation with the peroxide concentration (v/V%), which means that more foam will be generated per unit time when a more concentrated peroxide solution is used. Variations YouTube science entertainer Mark Rober has created a variation of the experiment, named "Devil's Toothpaste", which has a far more pronounced reaction than the version usually performed in classroom settings. The ingredients to create the devil's toothpaste reaction are the same as the regular elephant's toothpaste reaction, the only difference being the use of 50% H2O2 instead of the usual 35%. See also Black snake (firework) Carbon snake Soda geyser References External links The Elephant's Toothpaste Experiment sciencebob.com Chemistry classroom experiments Articles containing video clips
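The stoichiometry above also gives a rough sense of how much gas the quantities in the Description section release. The estimate below is an illustrative sketch, not data from the article: it assumes the 12% figure is weight/volume, takes 34 g/mol for H2O2, and treats the oxygen as an ideal gas near 25 °C, so the result is only an order-of-magnitude figure.

# Illustrative estimate (assumptions noted above).
volume_ml = 50.0                  # ~50 ml of peroxide solution (from the article)
concentration_w_v = 0.12          # assumed 12 g of H2O2 per 100 ml
molar_mass_h2o2 = 34.0            # g/mol

grams_h2o2 = volume_ml * concentration_w_v      # ~6 g
mol_h2o2 = grams_h2o2 / molar_mass_h2o2         # ~0.18 mol
mol_o2 = mol_h2o2 / 2                           # 2 H2O2 -> 2 H2O + O2

molar_volume_25c = 24.5                         # L/mol for an ideal gas near 25 °C
litres_o2 = mol_o2 * molar_volume_25c
print(f"~{litres_o2:.1f} L of O2")              # roughly 2 L of gas before foaming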
Elephant's toothpaste
Chemistry
600
1,323,035
https://en.wikipedia.org/wiki/Griefer
A griefer or bad-faith player is a player in a multiplayer video game who deliberately annoys, disrupts, or trolls others in ways that are not part of the intended gameplay. Griefing is often accomplished by killing players for sheer fun, destroying player-built structures, or stealing items. A griefer derives pleasure from the act of annoying other users, and as such, is a nuisance in online gaming communities. History The term "griefing" was applied to online multiplayer video games by the year 2000 or earlier, as illustrated by postings to the rec.games.computer.ultima.online USENET group. The player is said to cause "grief" in the sense of "giving someone grief". The term "griefing" dates to the late 1990s, when it was used to describe the willfully antisocial behaviors seen in early massively multiplayer online games like Ultima Online, and later, in the 2000s, first-person shooters such as Counter-Strike. Even before it had a name, griefer-like behavior was familiar in the virtual worlds of text-based Multi-User Domains (MUDs), where joyriding invaders inflicted "virtual rape" and similar offenses on the local populace. Julian Dibbell's 1993 article "A Rape in Cyberspace" analyzed the griefing events in a particular MUD, LambdaMOO, and the staff's response. In the culture of massively multiplayer online role-playing games (MMORPGs) in Taiwan, such as Lineage, griefers are known as "white-eyed"—a metaphor meaning that their eyes have no pupils and so they look without seeing. Behaviors other than griefing that can cause players to be stigmatized as "white-eyed" include cursing, cheating, stealing, or unreasonable killing. Methods Methods of griefing differ from game to game. What might be considered griefing in one area of a game, may even be an intended function or mechanic in another area. Common methods may include but are not limited to: Intentional friendly fire, or deliberately performing actions detrimental to other team members' game performance in primarily shooter games. Wasting or destroying key game elements Colluding with opponents Giving false information Giving information about your team's whereabouts to an enemy team Faking extreme incompetence with the intent of hurting teammates, or failing an in-game objective Deliberately blocking shots from a player's own team, or blocking a player's view by standing in front of them, so they cannot damage the enemy Trapping teammates in inescapable locations by using physics props, special abilities, or teleportation Intentionally killing oneself Actions undertaken to waste other players' time. Playing as slowly as possible Trapping and imprisoning players for extended periods of time to deny their ability to play on a server. Hiding from an enemy when there is no tactical benefit in doing so If a game interface element has no time limit, leaving their computer (going "AFK"), potentially forcing the other players to leave the game (which may incur a penalty for leaving), like Among Us. Constantly pausing the game, or lowering its speed as much as possible, in the hopes that their target quits in frustration Standing on top of important NPCS to block other players from interacting with them. A powerful player entering an area intended for lower-level or less experienced players and using up or hogging otherwise available limited resources, as can be sometimes seen in MMORPGs or grinding-based games Causing a player disproportionate loss or reversing their progress. 
Destroying or vandalizing other players' creations without permission in sandbox games like Minecraft and Terraria Driving vehicles backward around lapped courses in multiplayer racing games, often done with the intent of crashing head-on into whoever is in first place Using exploits (taking advantage of bugs in a game). Illegally exiting a map's boundaries to prevent the enemy team from winning In a co-op or multiplayer game, destroying or otherwise denying access to items, which without, other players cannot finish the game Purposeful violation of server rules or guidelines. Impersonation of administrators or other players through similar screen names Written or verbal insults, including false accusations of cheating or griefing Abusing the in game reporting system with mass reports to trigger a bot to automatically ban another player. Spamming a voice or text chat channel to inconvenience, harass, or annoy other players. Uploading offensive or explicit images to profile pictures, in-game sprays, or game skins. Kill stealing, denying another player the satisfaction or gain of killing a target that should have been theirs. Camping at a corpse or spawn area to repeatedly kill players as they respawn (when players have no method of recourse to prevent getting killed), preventing them from being able to play. Camping can also refer to continuously waiting in a tactically advantageous position for others to come to them; this is sometimes considered griefing because if all players do it, the game stalls, but this is now more commonly considered a game design issue, and in games where you need to defeat someone, like a juggernaut, it is more likely for that juggernaut to camp. Acting out-of-character in a role-play setting to disrupt the serious gameplay of others. Luring many monsters or a single larger monster to chase the griefer, before moving to where other players are. The line of monsters in pursuit looks like a train, and hence this is sometimes called "training" or "aggroing". Blocking other players so they cannot move to or from a particular area, or access an in-game resource (such as a non-player character); the game Tom Clancy's The Division was found to have a serious problem with this at launch, where griefers could stand in the doorway out of the starting area, trapping players in the spawn room. Intentionally attempting to crash a server through lag or other means (such as spawning large amounts of resource-demanding objects), in order to cause interference among players. Smurfing, the process of creating extra accounts and deliberately losing games to enter a lower skill rank than is appropriate, before playing at full skill against lower-ranked opponents, thus defeating them easily. High-skill players deliberately losing in matches against low-skill players (usually due to shortage of players), causing the low-skill player's skill rating to artificially rise so that they will be routinely pitted against opponents they have no chance of winning against in the future. Impersonating an enemy to trick someone into attacking the griefer, so that a player is flagged as having attacked the griefer. A notable example of this is early on in Ultima Online, where players had a scroll that could change their appearance to that of a monster, with the only way to tell the difference between them and a real monster is to click on them and read their name. Attacking a monster disguised griefer would flag the player as a murderer, causing the town guard to kill the player. 
Starting a vote to kick someone in hopes of others blindly agreeing in doing so, so that the griefer stays in the server with no consequences while tricking others to get rid of the innocent person. The term is sometimes applied more generally to refer to a person who uses the internet to cause distress to others as a prank, or to intentionally inflict harm, as when it was used to describe an incident in March 2008, when malicious users posted seizure-inducing animations on epilepsy forums. Industry response Many subscription-based games actively oppose griefers, since their behavior can drive away business. It is common for developers to release server-side upgrades and patches to annul griefing methods. Many online games employ gamemasters that reprimand offenders. Some use a crowdsourcing approach, where players can report griefing. Malicious players are then red-flagged, and are then dealt with at a gamemaster's discretion. As many as 25% of customer support calls to companies operating online games deal specifically with griefing. Blizzard Entertainment has enacted software components to combat griefing. To prevent non-consensual attacks between players, some games such as Ultima Online have created separate realms for those who wish to be able to attack anyone at any time, and for those who do not. Others implemented separate servers. When EverQuest was released, Sony included a PvP switch where people could fight each other only if they had enabled that option. This was done in order to prevent the player-killing that was driving people away from Ultima Online, which at that time had no protection on any of its servers. Second Life bans players for harassment (defined as being rude or threatening, making unwelcome sexual advances, or performing activities likely to annoy or alarm somebody) and assault (shooting, pushing, or shoving in a safe area, or creating scripted objects that target another user and hinder their enjoyment of the game) in its community standards. Sanctions include warnings, suspension from Second Life, or being banned altogether. Eve Online has incorporated activities typically considered griefing into the gameplay mechanisms. Corporate spying, theft, scams, gate-camping, and PvP on non-PvP players are all part of the gaming experience. This does not mean that the developers are indifferent to the negative effect that these activities may have on players, it is simply their choice with regards to the culture and atmosphere that they intended for the game. Players are advised to approach unfamiliar situations in the game with an appropriate level of caution, develop strategies to deal with the presence of these elements, and take personal responsibility for their in-game actions. Certain activities are allowed by the developers, but are still considered illegal in the game itself and result in in-game consequences, such as the unavoidable loss of the attacker's ship when engaging in combat with a non-allowed target in high-security space. Shooters such as Counter Strike: Global Offensive have implemented peer review systems, where if a player is reported too many times, multiple higher ranked players are allowed to review the player and determine if the reports are valid, and apply a temporary ban to the player's account if necessary. The player's name is omitted during the replay, as well as those of the other 9 players in the game. In October 2016, Valve implemented a change that will permanently ban a player if they receive two penalties for griefing. 
Many Minecraft servers have rules against griefing. In Minecraft freebuild servers, griefing is often the destruction of another player's build, and in other servers the definition ranges, but almost all servers recognize griefing as harassment. Most servers use temporary bans for minor and/or first-time incidents, and indefinite bans from the server for more serious and/or repeat offences. While many servers try to fight this, other servers, like 2b2t, allow griefing as part of the gameplay. By the early 2020s, Grand Theft Auto Online has experienced a drastic increase in griefing, due in part to the emergence of bugs and better money-making opportunities. Common griefing techniques within the game abuse passive mode and trivially accessible weaponized vehicles. Developer Rockstar has implemented measures such as a longer cool-down on passive mode, patching invincibility glitches, and removing passive mode from weaponized vehicles in recent updates. In addition, the game also features a reputation system that, in effect, after excessive "bad sport point" accumulation, will mark players as "bad sports", allowing them to only play in lobbies with other "bad sports". Such points are either accumulated over time or gained within a certain time frame and are acquired by actions such as destroying another player's personal vehicle, or quitting jobs early. This is one of the more controversial features of the game, as some point out flaws such as the game not considering if destruction of a vehicle was self-defense. Bethesda Softworks Games, a division of ZeniMax Media Inc., has a clear code of conduct that does not allow griefing, as indicated in section 3.2. Whether this has any effect is debatable, with numerous forum posts about ongoing griefing behaviour. Because the boilerplate response to generating a ticket about such a player, contains the clause "Please note, to protect individual privacy, we do not disclose the outcome of our investigation.", there is unfortunately no transparency to indicate whether violations to the code of conduct (by griefing) are taken seriously by Bethesda/ZeniMax. Fallout 76 attempted to discourage players from griefing by marking them as wanted criminals, which one can get a reward for killing. Wanted players cannot see any other players on the world map, and must rely on their normal player view. However, this has instead become another mechanism to engage in griefing, by luring other players into PvP, in which they largely have no chance to survive because of the perk loadout and weapons used by the griefer. An example of this is by breaking resource locks in a player camp, which will make the griefer wanted, with the hope that the camp owner will find them to retaliate, and thereby initiate PvP with the griefer. See also 2b2t Aimbot Anti-social behaviour Cyberbullying Dark triad Glossary of video game terms Internet troll Leeroy Jenkins Lulz Online harassment Schadenfreude Spamming Video game exploit Wikipedia:Griefing References External links Globe and Mail: "Frontier justice: Can virtual worlds be civilized?" "Ready, set, game: Learn how to keep video gaming safe and fun." Documented incident of griefing during a virtual interview, see also Anshe Chung Research paper on griefing. To view this PDF paper, the host website requires a subscription to the digital library. "Feature: The Griefer Within", GamePro. 
"Mutilated Furries, Flying Phalluses: Put the Blame on Griefers", WIRED MAGAZINE: ISSUE 16.02 "Griefer Madness: Terrorizing Virtual Worlds" Can you grief it? - feature article at VideoGamer.com Internet trolling MUD terminology Video game terminology Video game culture
Griefer
Technology
2,920
55,232,088
https://en.wikipedia.org/wiki/NGC%204633
NGC 4633 is a spiral galaxy located about 70 million light-years away in the constellation of Coma Berenices. It is interacting with the nearby galaxy NGC 4634. NGC 4633 was discovered by astronomer Edward D. Swift on April 27, 1887. It was rediscovered on November 23, 1900, by astronomer Arnold Schwassmann and was later listed as IC 3688. NGC 4633 is a member of the Virgo Cluster. See also List of NGC objects (4001–5000) Arp 116 References External links Intermediate spiral galaxies Interacting galaxies Dwarf spiral galaxies Magellanic spiral galaxies Coma Berenices 4633 IC objects 42699 7874 Astronomical objects discovered in 1887 Virgo Cluster Discoveries by Edward Swift
NGC 4633
Astronomy
150
21,330,156
https://en.wikipedia.org/wiki/Graphane
Graphane is a two-dimensional polymer of carbon and hydrogen with the formula unit (CH)n where n is large. Partial hydrogenation results in hydrogenated graphene, which was reported by Elias et al. in 2009; their TEM study described it as "direct evidence for a new graphene-based derivative". The authors viewed the result as opening up "a whole range of new two-dimensional crystals with designed electronic and other properties", with band gaps ranging from 0 to 0.8 eV. Synthesis Its preparation was reported in 2009. Graphane can be formed by electrolytic hydrogenation of graphene, few-layer graphene or highly oriented pyrolytic graphite. In the last case, mechanical exfoliation of the hydrogenated top layers can be used. Structure The first theoretical description of graphane was reported in 2003. The structure was found, using a cluster expansion method, to be the most stable of all the possible hydrogenation ratios of graphene. In 2007, researchers found that the compound is more stable than other compounds containing carbon and hydrogen, such as benzene, cyclohexane and polyethylene. This group named the predicted compound graphane, because it is the fully saturated version of graphene. Graphane is effectively made up of cyclohexane units, and, as with cyclohexane, the most stable structural conformation is not planar but an out-of-plane structure, including the chair and boat conformers, which minimizes ring strain and allows for the ideal tetrahedral bond angle of 109.5° for sp3-bonded atoms. However, in contrast to cyclohexane, graphane cannot interconvert between these different conformers because not only are they topologically different, but they are also different structural isomers with different configurations. The chair conformer has the hydrogens alternating above or below the plane from carbon to neighboring carbon, while the boat conformer has the hydrogen atoms alternating in pairs above and below the plane. There are also other possible conformational isomers, including the twist-boat and twist-boat-chair. As with cyclohexane, the most stable conformer for graphane is the chair, followed by the twist-boat structure. While the buckling of the chair conformer would imply lattice shrinkage, calculations show the lattice actually expands by approximately 30% due to the opposing effect on the lattice spacing of the longer carbon-carbon (C-C) bonds: the sp3 bonding of graphane yields longer C-C bonds of 1.52 Å, compared to the sp2 bonding of graphene, which yields shorter C-C bonds of 1.42 Å. Thus, if graphane were perfect and everywhere in its stable chair conformer, the lattice would expand; however, domains where the locally stable twist-boat conformer dominates "contribute to the experimentally observed lattice contraction." When experimentalists have characterized graphane, they have found a distribution of lattice spacings, corresponding to different domains exhibiting different conformers. Any disorder in hydrogenation conformation tends to contract the lattice constant by about 2.0%. Graphane is an insulator. Chemical functionalization of graphene with hydrogen may be a suitable method to open a band gap in graphene. p-Doped graphane is proposed to be a high-temperature BCS theory superconductor with a Tc above 90 K. Variants Partial hydrogenation leads to hydrogenated graphene rather than (fully hydrogenated) graphane. Such compounds are usually described as "graphane-like" structures.
Graphane and graphane-like structures can be formed by electrolytic hydrogenation of graphene, few-layer graphene, or highly oriented pyrolytic graphite. In the last case, mechanical exfoliation of the hydrogenated top layers can be used. Hydrogenation of graphene on a substrate affects only one side, preserving hexagonal symmetry. One-sided hydrogenation of graphene is possible due to the existence of ripples in the sheet. Because these ripples are distributed randomly, the resulting material is disordered, in contrast to two-sided graphane. Annealing allows the hydrogen to disperse, reverting the material to graphene. Simulations have revealed the underlying kinetic mechanism. Potential applications p-Doped graphane is postulated to be a high-temperature BCS theory superconductor with a Tc above 90 K. Graphane has been proposed for hydrogen storage. Hydrogenation decreases the dependence of the lattice constant on temperature, which indicates a possible application in precision instruments. References External links Sep 14, 2010 Hydrogen vacancies induce stable ferromagnetism in graphane May 25, 2010 Graphane yields new potential May 02 2010 Doped Graphane Should Superconduct at 90K Two-dimensional nanomaterials Polymers Superconductors Hydrocarbons
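The 109.5° bond angle quoted in the Structure section is simply the ideal tetrahedral angle for sp3-bonded carbon, which follows from elementary vector geometry as arccos(−1/3). The one-line check below is illustrative only and not from the article.

# Illustrative check of the ideal sp3 (tetrahedral) bond angle.
import math
print(math.degrees(math.acos(-1.0 / 3.0)))   # ~109.47 degrees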
Graphane
Chemistry,Materials_science
1,004
11,361,777
https://en.wikipedia.org/wiki/Social%20television
Social television is the union of television and social media. Millions of people now share their TV experience with other viewers on social media such as Twitter and Facebook using smartphones and tablets. TV networks and rights holders are increasingly sharing video clips on social platforms to monetise engagement and drive tune-in. The social TV market covers the technologies that support communication and social interaction around TV as well as companies that study television-related social behavior and measure social media activities tied to specific TV broadcasts – many of which have attracted significant investment from established media and technology companies. The market is also seeing numerous tie-ups between broadcasters and social networking players such as Twitter and Facebook. The market is expected to be worth $256bn by 2017. Social TV was named one of the 10 most important emerging technologies by the MIT Technology Review on Social TV in 2010. And in 2011, David Rowan, the editor of Wired magazine, named Social TV at number three of six in his peek into 2011 and what tech trends to expect to get traction. Ynon Kreiz, CEO of the Endemol Group told the audience at the Digital Life Design (DLD) conference in January 2011: "Everyone says that social television will be big. I think it's not going to be big—it's going to be huge". Much of the investment in the earlier years of social TV went into standalone social TV apps. The industry believed these apps would provide an appealing and complimentary consumer experience which could then be monetized with ads. These apps featured TV listings, check-ins, stickers and synchronised second-screen content but struggled to attract users away from Twitter and Facebook. Most of these companies have since gone out of business or been acquired amid a wave of consolidation and the market has instead focused on the activities of the social media channels themselves – such as Twitter Amplify, Facebook Suggested Videos and Snapchat Discover – and the technologies that support them. Twitter Twitter and Facebook are both helping users connect around media, which can provoke strong debate and engagement. Both social platforms want to be the 'digital watercooler' and host conversation around TV because the engagement and data about what media people consume can then be used to generate advertising revenue. As an open platform, conversation on Twitter is closely aligned with real-time events. In May 2013, it launched Twitter Amplify – an advertising product for media and consumer brands. With Amplify, Twitter runs video highlights from major live broadcasts, with advertisers' names and messages playing before the clip. By February 2014, all four major U.S. TV networks had signed up to the Amplify program, bringing a variety of premium TV content onto the social platform in the form of in-tweet real-time video clips. In June 2014, Twitter acquired its Twitter Amplify partner in the U.S. SnappyTV, a company that was helping broadcasters and rights holders to share video content both organically across social and via Twitter's Amplify program. Twitter continues to rely on Grabyo, which has also struck numerous deals with some of the largest broadcasters and rights holders in Europe and North America to share video content across Facebook and Twitter. Facebook Facebook made significant changes to its platform in 2014 including updates to its algorithm to enhance how it serves video in users' feeds. 
It also launched video autoplay to get users to watch the videos in their feeds. It rapidly surpassed Twitter and by the end of 2014 it was enjoying three billion video views a day on its platform and had announced a partnership with the NFL, one of Twitter's most active Twitter Amplify partners. In April 2015, at its F8 Developer Conference, it revealed it was working with Grabyo among other technology partners to bring video onto its platform. Then in July it announced it would be launching Facebook Suggested Videos, bringing related videos and ads to anyone that clicks on a video – a move that not only competed with Twitter's commercial video offering but also put it in direct competition with YouTube. TV Time TV Time is a television dedicated social network that allows users to keep track of the television series they watch, as well as films. It also allows them to express their reaction to the media they have seen with episode specific voting for favorite characters and emotional reaction to episodes, as well as commenting in episode restrictive pages. This way users are able to avoid spoilers while also finding a precise audience and community for each of their interactions, as opposed to bigger, non-television dedicated social medias such as Facebook and Twitter where the likelihood of unintentionally reading spoilers is much higher. TV Time offers an analytics service called "TVLytics" where the votes and reactions collected from users can be studied for research and television production purposes. Advertising According to Businessinsider.com, there are variety of applications for social TV, including support for TV ad sales, optimizing TV ad buys, making ad buys more efficient, as a complement to audience measurement, and eventually, audience forecasting and real-time optimization. Social TV data can ease access to focus groups and may create a positive feedback loop for generating ultra-sticky TV programming and multi-screen ad campaigns. In numbers Viewers share their TV experience on social media in real-time as events unfold: between 88-100m Facebook users login to the platform during the primetime hours of 8pm – 11pm in the US. The volume of social media engagement in TV is also rising – according to Nielsen SocialGuide, there was a 38% increase in tweets about TV in 2013 to 263m. For the 2014 Super Bowl, Twitter reported that a record 24.9 million tweets about the game were sent during the telecast, peaking at 381,605 tweets per minute. Facebook reported that 50 million people discussed the Super Bowl, generating 185 million interactions. The 2014 Oscars generated 5m tweets, viewed by an audience of 37m unique Twitter users and delivering 3.3bn impressions globally as conversation and key moments were shared virally across the platform. In 2014 the All England Lawn Tennis Club (AELTC), hosts of Wimbledon, used Grabyo to share video content across social. The videos were viewed 3.5 million times across Facebook and Twitter. In partnered with Grabyo again in 2015 and the videos generated over 48 million views across Facebook and Twitter. Television shows with social integration Here are some examples of how TV executives are integrating social elements with TV shows: C-SPAN streamed tweets from US Senators and Representatives during the quorum call The Voice had the judges of the program tweet during the show and the posts scrolls on the bottom of the screen. The use of Twitter also led to an increase in viewers. 
"Glee" Entertainment Weekly created a second screen viewing platform for the Glee season 3 premiere. Related publications Erika Jonietz. "Making TV Social, Virtually" MIT Technology Review. (January 11, 2010) AmigoTV (Alcatel-Lucent; Coppens et al.) – 2004 www.ist-ipmedianet.org/Alcatel_EuroiTV2004_AmigoTV_short_paper_S4-2.pdf Nextream (MIT Media Lab, Martin et al.) – 2010 Social Interactive Television: Immersive Shared Experiences and Perspectives (P. Cesar, D. Geerts, and K. Chorianopoulos (eds.)) – 2009 Social TV and the Emergence of Interactive TV – Multimedia Research Group – November 2010 Interactive Social TV on Service Oriented Environments: Challenges and Enablers (May 2011) Systems Boxee – acquired by Samsung GetGlue – acquired by i.TV Grabyo KIT digital Miso TV Tank Top TV WiO Xbox Live See also Interactive television Personal broadcasting Smart TV Social aspects of television Social media and television Social Network Service Social news website Social software References Streaming television Social networks Social media Television technology Television terminology
Social television
Technology
1,617
4,436,145
https://en.wikipedia.org/wiki/Apex%20%28radio%20band%29
Apex radio stations (also known as skyscraper and pinnacle) was the name commonly given to a short-lived group of United States broadcasting stations, which were used to evaluate transmitting on frequencies that were much higher than the ones used by standard amplitude modulation (AM) and shortwave stations. Their name came from the tall height of their transmitter antennas, which were needed because coverage was primarily limited to local line-of-sight distances. These stations were assigned to what at the time were described as "ultra-high shortwave" frequencies, between roughly 25 and 44 MHz. They employed amplitude modulation (AM) transmissions, although in most cases using a wider bandwidth than standard broadcast band AM stations, in order to provide high fidelity sound with less static and distortion. In 1937 the Federal Communications Commission (FCC) formally allocated an Apex station band, consisting of 75 transmitting frequencies running from 41.02 to 43.98 MHz. These stations were never given permission to operate commercially, although they were allowed to retransmit programming from standard AM stations. Most operated under experimental licenses, however this band was the first to include a formal "non-commercial educational" station classification. The FCC eventually concluded that frequency modulation (FM) transmissions were superior, and the Apex band was eliminated effective January 1, 1941, in order to make way for the creation of the original FM band, assigned to 42 to 50 MHz. Initial development During the 1920s and 1930s, radio engineers and government regulators investigated the characteristics of transmitting frequencies higher than those currently in use. In the United States, by 1930 the original AM broadcasting band consisted of 96 frequencies from 550 to 1500 kHz, with a 10 kHz spacing between adjacent assignments. On this band, a station's coverage during the daytime consisted exclusively of its groundwave signal, which for the most powerful stations might exceed 200 miles (320 kilometers), although it was significantly less for the average station. However, during the nighttime, changes in the ionosphere resulted in additional long distance skywave signals, that were commonly reflected for up to hundreds of kilometers. Over time, technology was developed to transmit on progressively higher frequencies. (Although initially these were in general called "ultra-high shortwave" frequencies, radio spectrum nomenclature was later standardized, with 3 to 30 MHz transmissions becoming known as "High Frequency" (HF), 30 to 300 MHz called "Very High Frequency" (VHF), and 300 to 3,000 MHz called "Ultra High Frequency" (UHF)). It soon became apparent that there were significant differences in the propagation characteristics of various frequency ranges. Signals from shortwave stations, operating roughly in the range from 5 MHz to 20 MHz, were found to be readily reflected by the ionosphere during both the day and at night, resulting in stations that sometimes could transmit halfway around the world. Investigations of increasingly higher frequencies found that, above around 20 MHz, signal propagation by both groundwave and skywave generally became minimal, which meant that station coverage now began to be limited to just line-of-sight distances from the transmitting antenna. 
This was considered to be a valuable characteristic by the FCC, because it would allow the establishment of broadcasting stations with limited but consistent day and night coverage, that could only be received by their local communities. It also meant that multiple stations could operate on the same frequency throughout the country without interfering with each other. Because the standard AM broadcast band was considered to be too full to allow any meaningful increase in the number of stations, the FCC began to issue licenses to parties interested in testing the suitability of higher frequencies. Most Apex stations operated under experimental licenses, and were commonly affiliated with and subsidized by a commercially licensed AM station. Until the late 1930s, commercially made radio receivers did not cover these high frequencies, so early Apex station listeners constructed their own receivers, or built converters for existing models. On March 18, 1934, W8XH in Buffalo, New York, a companion station to AM station WBEN, became the first Apex station to air a regular schedule. Although most of these stations merely retransmitted the programs of their AM station partners, in a few cases efforts were made to provide original programming. In 1936, The Milwaukee Journal's W9XAZ, which initially had relayed the programming of WTMJ, became the first Apex station to originate its own programming on a regular basis. While monitoring the first group of stations, it was soon realized that, due to the strengthening of the ionosphere during periods of high solar activity, at times the lower end of the VHF frequencies would produce strong, and undesirable, skywave signals. (The December 1937 issue of All-Wave Radio reported that W6XKG in Los Angeles, transmitting on 25.95 MHz, had been heard in both Asia and Europe, while W9XAZ, 26.4 MHz in Milwaukee, Wisconsin had "a strong signal in Australia", and W8XAI, 31.6 MHz in Rochester, New York, "is another station that is often heard in Australia.") This most commonly occurred during the summer months, and during peaks in the 11-year sunspot cycle. This determination led to the FCC moving the developing broadcasting service stations, which by now began to include experimental FM radio and TV stations, to higher frequencies that were less affected by solar influences. Apex band establishment (1937) In October 1937, the FCC announced a sweeping allocation of frequency assignments for the various competing services, including television, relay, and public service, which covered 10 kHz to 300 MHz. Included was a band of Apex stations, consisting of 75 channels with 40 kHz separations, and spanning from 41.02 to 43.98 MHz. The 40 kHz spacing between adjacent frequencies was four times as much as the 10 kHz spacing on the standard AM broadcast band, which reduced adjacent-frequency interference, and provided more bandwidth for high-fidelity programming. At the time it was estimated that there were about 50 Apex-style stations currently in operation, although transmitting on a variety of frequencies. In January 1938 the band's first 25 channels, from 41.02 to 41.98 MHz, were reserved for non-commercial educational stations, with the Cleveland City Board of Education's WBOE in Cleveland, Ohio, the first station to begin operation within this group. 
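The 1937 channel plan described above (75 channels, 40 kHz apart, from 41.02 to 43.98 MHz, with the first 25 later reserved for education) can be reproduced with a few lines of arithmetic. The snippet below is an illustrative check, not part of the article.

# Illustrative check of the 1937 Apex band plan.
channels = [41.02 + 0.04 * n for n in range(75)]   # 40 kHz spacing, in MHz

assert len(channels) == 75
assert abs(channels[0] - 41.02) < 1e-9    # lowest channel
assert abs(channels[-1] - 43.98) < 1e-9   # highest channel
assert abs(channels[24] - 41.98) < 1e-9   # top of the 25 educational channels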
Apex band assignments (1937–1941) Conversion to FM (1941) At the time the Apex band was established, the FCC noted that "The Commission at an early date will consider carefully the needs and requirements for high-frequency broadcast stations using both conventional [AM] modulation and frequency modulation". As of January 15, 1940, only 2 non-commercial and 14 experimental stations held Apex band licenses, all of which were assigned operating frequencies in the bottom half of the band. (A similar number of experimental stations held grants for frequencies in the 25–26 MHz region.) In addition, at this same time 20 experimental FM stations had been assigned slots within the top half of the Apex band frequencies. The commission's studies soon found significant advantages to FM transmissions over the Apex AM signals. Sound quality, and especially resistance to interference from static, including from lightning, was found to be far superior for FM. Although FM assignments required five times the bandwidth of Apex stations (200 kHz vs. 40 kHz), the "capture effect" allowed FM stations operating on the same frequency to be spaced closer together than Apex stations. By 1939 the FCC began encouraging Apex stations to consider changing to the technically superior FM transmissions. In May 1940, the FCC decided to authorize a commercial FM band effective January 1, 1941, operating on 40 channels spanning 42–50 MHz. (This was later changed to 88–106 MHz, and still later to 88–108 MHz, which increased the number of channels to 100.) This new assignment also resulted in the elimination of the Apex band, and the Apex stations were informed that they needed to either go silent or convert to FM. With this change, a few of the original Apex stations were converted into some of the earliest FM stations. The three educational stations were allowed some leeway in making the conversion to FM, with WBOE switching over in February 1941, WNYE receiving permission to continue as an Apex station until June 29, 1941, and WBKY receiving a series of authorizations to continue using its AM transmitter until May 1, 1944. Currently, the frequencies that had been used by the Apex band are allocated for land mobile communication. There would be at least one attempt to revive the Apex band concept. Beginning in May 1946, consulting radio engineer Sarkes Tarzian operated a 200-watt experimental AM station, W9XHZ, on 87.75 MHz in Bloomington, Indiana. After two years of successful operation of what he referred to as his "HIFAM" station, in 1948 he proposed that the FCC allocate a small high-frequency broadcast band, 400 kHz wide with 10 kHz spacing between frequency assignments. Tarzian promoted this as a low-cost alternative to expensive FM transmitters and receivers, saying that a $5.95 converter could be added to existing AM radios that would allow them to pick up the HIFAM stations. He continued to operate his experimental station, which eventually became KS2XAP, until 1950, although by then its transmitting hours were greatly restricted, as the FCC required the station to remain off the air whenever nearby WFBM-TV in Indianapolis was broadcasting. This was due to the fact that the TV station's audio transmitter used the same frequency as Tarzian's station. Moreover, after his station's final license expired on June 1, 1950, the FCC denied Tarzian any further renewals, concluding it would not reverse its earlier determination that there was no need for a second AM broadcast band. 
Notes References External links America's Apex Broadcasting Stations of the 1930s by John Schneider, Monitoring Times Magazine, December 2010. (theradiohistorian.org) A Detroit Apex Station in 1936 (W8XWJ) by John Schneider, September 17, 2013. (radioworld.com) Pre-History: Detroit's Experimental Amplitude Modulation (AM) "Apex" Station, W8XWJ (michiguide.com) FCC History Cards for W8XWJ (covering 1936–1941) Apex Radio in Milwaukee (W9XAZ) (jeff560.tripod.com) Apex and FM chronology (jeff560.tripod.com) "High Frequency Broadcast Stations in the United States" (Licensed by FCC as of January 1, 1937), Broadcasting Yearbook (1937 edition), page 331. "High Frequency (Apex) Broadcast Stations in the United States" (authorized by FCC as of January 1, 1938), Broadcasting Yearbook (1938 edition), page 290. "High Frequency (Apex) Broadcast Stations in the United States" (authorized by FCC as of January 1, 1939), Broadcasting Yearbook (1939 edition), page 369. (jeff560.tripod.com) "High Frequency Broadcasting Stations in the United States" (Authorized by FCC as of January 15, 1940), Broadcasting Yearbook (1940 edition), page 374. High Frequency and FM broadcast stations in the U.S. in 1942 Radio Annual (jeff560.tripod.com) "Sarkes Tarzian and His HiFAM Experiment" by Andrew Mitz, Radio Age, July 2004. Radio technology History of radio in the United States 1930s in American music Telecommunications-related introductions in 1934 Bandplans Broadcast engineering
Apex (radio band)
Technology,Engineering
2,353
4,916,053
https://en.wikipedia.org/wiki/Henry%20Rzepa
Henry Stephen Rzepa (born 1950) is a chemist and Emeritus Professor of Computational Chemistry at Imperial College London. Education Rzepa was born in 1950 and was educated at Wandsworth Comprehensive School in London. He then entered the chemistry department at Imperial College London, where he graduated in 1971. He stayed to do a Ph.D. on the physical organic chemistry of indoles supervised by Brian Challis. Career and research After spending three years doing postdoctoral research at the University of Texas at Austin with Michael Dewar in the emerging field of computational chemistry, he returned to Imperial College and was eventually appointed as Professor of the college in 2003. He is now Emeritus Professor of Computational Chemistry. His research interests are directed towards combining different types of chemical information tools for solving structural, mechanistic and stereochemical problems in organic, bioorganic, organometallic chemistry and catalysis, using techniques such as semiempirical molecular orbital methods (the MNDO family), Nuclear Magnetic Resonance (NMR) spectroscopy, X-ray crystallography and ab initio quantum theories. Aware of the complex semantic issues involved in converging different areas of chemistry to address modern multidisciplinary problems, he started investigating the use of the Internet as an information and integrating medium around 1987, focusing in 1994 on the World Wide Web as having the most potential. He and Peter Murray-Rust first introduced Chemical Markup Language (CML) in 1995 as a rich carrier of semantic chemical information and data, and they coined the term Datument as a portmanteau word to better express the evolution from the documents produced by traditional academic publishing methods to the Semantic Web ideals expressed by Tim Berners-Lee. His contributions to chemistry include the exploration of Möbius aromaticity, highlighted by the theoretical discovery of relatively stable forms of cyclic conjugated molecules which exhibit two and higher half-twists in the topology rather than just the single twist associated with Möbius systems. He is responsible for unraveling the mechanistic origins of stereocontrol in a variety of catalytic polymerisation reactions, including that of lactide to polylactide, a new generation of bio-sustainable polymer not dependent on oil. He is also known for the integration of chemistry (in the form of CML) with emergent Internet technologies and trends such as RSS and podcasting, for the introduction of the Chemical MIME types in 1994, and for organizing the ECTOC online conferences in organic chemistry, which ran from 1995 to 1998. Awards and honours Rzepa was awarded the Herman Skolnik Award in 2012 by the American Chemical Society. References Academics of Imperial College London Living people British chemists Theoretical chemists 1950 births Computational chemists Alumni of Imperial College London
Henry Rzepa
Chemistry
559
48,756,720
https://en.wikipedia.org/wiki/Tachyaerobic
Tachyaerobic is a term used in biology to describe the muscles of large animals and birds that are able to maintain high levels of physical activity because their hearts make up at least 0.5–0.6 percent of their body mass and maintain high blood pressures. A reptile of the same size as a tachyaerobic mammal does not have the same capabilities. Tachyaerobic animals' hearts beat more quickly, deliver more oxygen, and distribute blood at a quicker rate than those of reptiles. The use of tachyaerobic muscles is important to animals such as giraffes that need blood circulated quickly through a large body. See also Bradyaerobic References Muscular system Thermoregulation
Tachyaerobic
Biology
148
9,546,237
https://en.wikipedia.org/wiki/Irone
Irones are a group of methylionone odorants used in perfumery, derived from iris oil (e.g., from orris root). The most commercially important of these are (-)-cis-γ-irone and (-)-cis-α-irone. Irones form through slow oxidation of triterpenoids in dried rhizomes of the iris species, Iris pallida. Irones typically have a sweet floral, iris, woody, ionone odor. See also Ionone References External links Structure - Odor Relationships Perfume ingredients Ketones
Irone
Chemistry
120
73,019,349
https://en.wikipedia.org/wiki/Fendt%20700%20Vario
Fendt 700 Vario is a series of tractors made by the agricultural machinery manufacturer Fendt. Since the introduction of the first 700 Vario model in 1998, seven generations of the series have been released. The 700 Vario is the manufacturer's best-selling tractor series. The latest generation, the Fendt 700 Vario Gen7, has been available on the market since the summer of 2022. Series The 700 Vario series has been available since 1998. Positioned between the larger Fendt models (800 Vario to 1100 Vario MT) and the compact machines of the 200 Vario, 300 Vario and 500 Vario series, the 700 Vario is considered particularly versatile. The series is Fendt's best-selling model and the most popular tractor in Germany. Since its official presentation in 1998, the drive, technology and equipment of the Fendt 700 Vario have been consistently further developed. In total, there are seven generations of the series. Each generation includes four to six different models, which mainly differ in performance and horsepower. Currently, the 700 Vario Gen6 models and the tractors of the latest generation, the 700 Vario Gen7, are still being produced. Due to their durability and long lifespan, older Fendt 700 generations are still available as used machines on the market. With the 700 Vario series, Fendt aims to combine comfortable, user-friendly operation with a versatile range of functions. The innovative Variotronic operating system, introduced with the first 700 Vario generation in 1998, has received international awards. Since 2011, all tractors of the fourth generation have been equipped with Fendt Efficient Technology (FET) and the VisioPlus cab for safe and comfortable work. In 2020, the FendtOne operating system was introduced with the sixth generation. In response to global demand, the available track widths of the tractor models have been adjusted. Also, all Fendt 700 Vario models currently built comply with the European exhaust emission standard Stage V. On July 25, 2024, the 100,000th Fendt 700 Vario rolled off the production line in Marktoberdorf, demonstrating the success of the tractor series. Generations and Models The latest generation The seventh generation series includes five models (720, 722, 724, 726 and 728 Vario) with a power range of 149–208 kW (203–303 hp). As with previous generations, the focus of the Gen7 is a low power-to-weight ratio and a high performance range; however, the new models have been visually and technically overhauled. For the first time, Fendt did not use a Deutz engine; instead, the Gen7 is powered by a 6-cylinder AGCO Power engine with 7.5 litres of displacement and 1,220 Nm of torque. The fans, transmission and drive have also been renewed in the Gen7 models. The Fendt VarioDrive transmission for automatic shifting of the driving ranges, adopted from the 1000 Vario series, has now been integrated into the new 700 Vario models. The compact cooling unit with the Concentric Air System (CAS) cooling concept is a new feature. The slim design allows for a large steering angle and makes the Fendt 700 Vario Gen7 particularly manoeuvrable. All models also have integrated hood and rear cameras as well as the integrated safety concept Fendt Stability Control, which reduces side inclination and swaying even with heavy loads. In addition, there is an integrated VarioGrip tire pressure control system. Since late 2023, the Fendt 700 Vario Gen7 has optionally been available with a new trailer brake assistant. External links Fendt 700 Vario on the homepage of the manufacturer References Tractors
Fendt 700 Vario
Engineering
753
47,200
https://en.wikipedia.org/wiki/4%20Vesta
Vesta (minor-planet designation: 4 Vesta) is one of the largest objects in the asteroid belt, with a mean diameter of . It was discovered by the German astronomer Heinrich Wilhelm Matthias Olbers on 29 March 1807 and is named after Vesta, the virgin goddess of home and hearth from Roman mythology. Vesta is thought to be the second-largest asteroid, both by mass and by volume, after the dwarf planet Ceres. Measurements give it a nominal volume only slightly larger than that of Pallas (about 5% greater), but it is 25% to 30% more massive. It constitutes an estimated 9% of the mass of the asteroid belt. Vesta is the only known remaining rocky protoplanet (with a differentiated interior) of the kind that formed the terrestrial planets. Numerous fragments of Vesta were ejected by collisions one and two billion years ago that left two enormous craters occupying much of Vesta's southern hemisphere. Debris from these events has fallen to Earth as howardite–eucrite–diogenite (HED) meteorites, which have been a rich source of information about Vesta. Vesta is the brightest asteroid visible from Earth. It is regularly as bright as magnitude 5.1, at which times it is faintly visible to the naked eye. Its maximum distance from the Sun is slightly greater than the minimum distance of Ceres from the Sun, although its orbit lies entirely within that of Ceres. NASA's Dawn spacecraft entered orbit around Vesta on 16 July 2011 for a one-year exploration and left the orbit of Vesta on 5 September 2012 en route to its final destination, Ceres. Researchers continue to examine data collected by Dawn for additional insights into the formation and history of Vesta. History Discovery Heinrich Olbers discovered Pallas in 1802, the year after the discovery of Ceres. He proposed that the two objects were the remnants of a destroyed planet. He sent a letter with his proposal to the British astronomer William Herschel, suggesting that a search near the locations where the orbits of Ceres and Pallas intersected might reveal more fragments. These orbital intersections were located in the constellations of Cetus and Virgo. Olbers commenced his search in 1802, and on 29 March 1807 he discovered Vesta in the constellation Virgo—a coincidence, because Ceres, Pallas, and Vesta are not fragments of a larger body. Because the asteroid Juno had been discovered in 1804, this made Vesta the fourth object to be identified in the region that is now known as the asteroid belt. The discovery was announced in a letter addressed to German astronomer Johann H. Schröter dated 31 March. Because Olbers already had credit for discovering a planet (Pallas; at the time, the asteroids were considered to be planets), he gave the honor of naming his new discovery to German mathematician Carl Friedrich Gauss, whose orbital calculations had enabled astronomers to confirm the existence of Ceres, the first asteroid, and who had computed the orbit of the new planet in the remarkably short time of 10 hours. Gauss decided on the Roman virgin goddess of home and hearth, Vesta. Name and symbol Vesta was the fourth asteroid to be discovered, hence the number 4 in its formal designation. The name Vesta, or national variants thereof, is in international use with two exceptions: Greece and China. In Greek, the name adopted was the Hellenic equivalent of Vesta, Hestia in English, that name is used for (Greeks use the name "Hestia" for both, with the minor-planet numbers used for disambiguation). 
In Chinese, Vesta is called the 'hearth-god(dess) star', , naming the asteroid for Vesta's role, similar to the Chinese names of Uranus, Neptune, and Pluto. Upon its discovery, Vesta was, like Ceres, Pallas, and Juno before it, classified as a planet and given a planetary symbol. The symbol represented the altar of Vesta with its sacred fire and was designed by Gauss. In Gauss's conception, now obsolete, this was drawn . His form is in the pipeline for Unicode 17.0 as U+1F777 . The asteroid symbols were gradually retired from astronomical use after 1852, but the symbols for the first four asteroids were resurrected for astrology in the 1970s. The abbreviated modern astrological variant of the Vesta symbol is . After the discovery of Vesta, no further objects were discovered for 38 years, and during this time the Solar System was thought to have eleven planets. However, in 1845, new asteroids started being discovered at a rapid pace, and by 1851 there were fifteen, each with its own symbol, in addition to the eight major planets (Neptune had been discovered in 1846). It soon became clear that it would be impractical to continue inventing new planetary symbols indefinitely, and some of the existing ones proved difficult to draw quickly. That year, the problem was addressed by Benjamin Apthorp Gould, who suggested numbering asteroids in their order of discovery, and placing this number in a disk (circle) as the generic symbol of an asteroid. Thus, the fourth asteroid, Vesta, acquired the generic symbol . This was soon coupled with the name into an official number–name designation, as the number of minor planets increased. By 1858, the circle had been simplified to parentheses, which were easier to typeset. Other punctuation, such as and was also briefly used, but had more or less completely died out by 1949. Early measurements Photometric observations of Vesta were made at the Harvard College Observatory in 1880–1882 and at the Observatoire de Toulouse in 1909. These and other observations allowed the rotation rate of Vesta to be determined by the 1950s. However, the early estimates of the rotation rate came into question because the light curve included variations in both shape and albedo. Early estimates of the diameter of Vesta ranged from in 1825, to . E.C. Pickering produced an estimated diameter of in 1879, which is close to the modern value for the mean diameter, but the subsequent estimates ranged from a low of up to a high of during the next century. The measured estimates were based on photometry. In 1989, speckle interferometry was used to measure a dimension that varied between during the rotational period. In 1991, an occultation of the star SAO 93228 by Vesta was observed from multiple locations in the eastern United States and Canada. Based on observations from 14 different sites, the best fit to the data was an elliptical profile with dimensions of about . Dawn confirmed this measurement. These measurements will help determine the thermal history, size of the core, role of water in asteroid evolution and what meteorites found on Earth come from these bodies, with the ultimate goal of understanding the conditions and processes present at the solar system's earliest epoch and the role of water content and size in planetary evolution. Vesta became the first asteroid to have its mass determined. Every 18 years, the asteroid 197 Arete approaches within of Vesta. In 1966, based upon observations of Vesta's gravitational perturbations of Arete, Hans G. 
Hertz estimated the mass of Vesta at (solar masses). More refined estimates followed, and in 2001 the perturbations of 17 Thetis were used to calculate the mass of Vesta to be . Dawn determined it to be . Orbit Vesta orbits the Sun between Mars and Jupiter, within the asteroid belt, with a period of 3.6 Earth years, specifically in the inner asteroid belt, interior to the Kirkwood gap at 2.50 AU. Its orbit is moderately inclined (i = 7.1°, compared to 7° for Mercury and 17° for Pluto) and moderately eccentric (e = 0.09, about the same as for Mars). True orbital resonances between asteroids are considered unlikely. Because of their small masses relative to their large separations, such relationships should be very rare. Nevertheless, Vesta is able to capture other asteroids into temporary 1:1 resonant orbital relationships (for periods up to 2 million years or more) and about forty such objects have been identified. Decameter-sized objects detected in the vicinity of Vesta by Dawn may be such quasi-satellites rather than proper satellites. Rotation Vesta's rotation is relatively fast for an asteroid (5.342 h) and prograde, with the north pole pointing in the direction of right ascension 20 h 32 min, declination +48° (in the constellation Cygnus) with an uncertainty of about 10°. This gives an axial tilt of 29°. Coordinate systems Two longitudinal coordinate systems are used for Vesta, with prime meridians separated by 150°. The IAU established a coordinate system in 1997 based on Hubble photos, with the prime meridian running through the center of Olbers Regio, a dark feature 200 km across. When Dawn arrived at Vesta, mission scientists found that the location of the pole assumed by the IAU was off by 10°, so that the IAU coordinate system drifted across the surface of Vesta at 0.06° per year, and also that Olbers Regio was not discernible from up close, and so was not adequate to define the prime meridian with the precision they needed. They corrected the pole, but also established a new prime meridian 4° from the center of Claudia, a sharply defined crater 700 meters across, which they say results in a more logical set of mapping quadrangles. All NASA publications, including images and maps of Vesta, use the Claudian meridian, which is unacceptable to the IAU. The IAU Working Group on Cartographic Coordinates and Rotational Elements recommended a coordinate system, correcting the pole but rotating the Claudian longitude by 150° to coincide with Olbers Regio. It was accepted by the IAU, although it disrupts the maps prepared by the Dawn team, which had been positioned so they would not bisect any major surface features. Physical characteristics Vesta is the second most massive body in the asteroid belt, although it is only 28% as massive as Ceres, the most massive body. Vesta is however the most massive body that formed in the asteroid belt, as Ceres is believed to have formed between Jupiter and Saturn. Vesta's density is lower than those of the four terrestrial planets but is higher than those of most asteroids, as well as all of the moons in the Solar System except Io. Vesta's surface area is about the same as the land area of Pakistan, Venezuela, Tanzania, or Nigeria; slightly under . It has a differentiated interior. Vesta is only slightly larger () than 2 Pallas () in mean diameter, but is about 25% more massive. 
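As a rough consistency check of the figures quoted earlier in the article (this is an illustrative calculation from those approximate percentages, not an independently measured value): a volume about 5% greater than that of Pallas combined with a mass about 25–30% greater implies a density ratio of roughly 1.25/1.05 ≈ 1.19 to 1.30/1.05 ≈ 1.24, i.e. Vesta is on the order of 20% denser than Pallas.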
Vesta's shape is close to a gravitationally relaxed oblate spheroid, but the large concavity and protrusion at the southern pole (see 'Surface features' below) combined with a mass less than precluded Vesta from automatically being considered a dwarf planet under International Astronomical Union (IAU) Resolution XXVI 5. A 2012 analysis of Vesta's shape and gravity field using data gathered by the Dawn spacecraft has shown that Vesta is currently not in hydrostatic equilibrium. Temperatures on the surface have been estimated to lie between about with the Sun overhead, dropping to about at the winter pole. Typical daytime and nighttime temperatures are and , respectively. This estimate is for 6 May 1996, very close to perihelion, although details vary somewhat with the seasons. Surface features Before the arrival of the Dawn spacecraft, some Vestan surface features had already been resolved using the Hubble Space Telescope and ground-based telescopes (e.g., the Keck Observatory). The arrival of Dawn in July 2011 revealed the complex surface of Vesta in detail. Rheasilvia and Veneneia The most prominent of these surface features are two enormous impact basins, the -wide Rheasilvia, centered near the south pole; and the wide Veneneia. The Rheasilvia impact basin is younger and overlies the Veneneia. The Dawn science team named the younger, more prominent crater Rheasilvia, after the mother of Romulus and Remus and a mythical vestal virgin. Its width is 95% of the mean diameter of Vesta. The crater is about deep. A central peak rises above the lowest measured part of the crater floor, and the highest measured part of the crater rim is above the crater floor low point. It is estimated that the impact responsible excavated about 1% of the volume of Vesta, and it is likely that the Vesta family and V-type asteroids are the products of this collision. If this is the case, then the fact that fragments have survived bombardment until the present indicates that the crater is at most only about 1 billion years old. It would also be the site of origin of the HED meteorites. All the known V-type asteroids taken together account for only about 6% of the ejected volume, with the rest presumably either in small fragments, ejected by approaching the 3:1 Kirkwood gap, or perturbed away by the Yarkovsky effect or radiation pressure. Spectroscopic analyses of the Hubble images have shown that this crater has penetrated deep through several distinct layers of the crust, and possibly into the mantle, as indicated by spectral signatures of olivine. The large peak at the center of Rheasilvia is high and wide, and is possibly a result of a planetary-scale impact. Other craters Several old, degraded craters approach Rheasilvia and Veneneia in size, although none are quite so large. They include Feralia Planitia, which is across. More recent, sharper craters range up to Varronilla and Postumia. Dust fills up some craters, creating so-called dust ponds, a phenomenon in which pockets of dust are seen on celestial bodies without a significant atmosphere. These are smooth deposits of dust accumulated in depressions on the surface of the body (such as craters), contrasting with the rocky terrain around them. On the surface of Vesta, both type 1 (formed from impact melt) and type 2 (electrostatically formed) dust ponds have been identified within 0°–30° N/S, that is, in the equatorial region. Ten craters have been identified with such formations.
"Snowman craters" The "snowman craters" are a group of three adjacent craters in Vesta's northern hemisphere. Their official names, from largest to smallest (west to east), are Marcia, Calpurnia, and Minucia. Marcia is the youngest and cross-cuts Calpurnia. Minucia is the oldest. Troughs The majority of the equatorial region of Vesta is sculpted by a series of parallel troughs designated Divalia Fossae; its longest trough is wide and long. Despite the fact that Vesta is a one-seventh the size of the Moon, Divalia Fossae dwarfs the Grand Canyon. A second series, inclined to the equator, is found further north. This northern trough system is named Saturnalia Fossae, with its largest trough being roughly 40 km wide and over 370 km long. These troughs are thought to be large-scale graben resulting from the impacts that created Rheasilvia and Veneneia craters, respectively. They are some of the longest chasms in the Solar System, nearly as long as Ithaca Chasma on Tethys. The troughs may be graben that formed after another asteroid collided with Vesta, a process that can happen only in a body that, like Vesta, is differentiated. Vesta's differentiation is one of the reasons why scientists consider it a protoplanet. Alternatively, it is proposed that the troughs may be radial sculptures created by secondary cratering from Rheasilvia. Surface composition Compositional information from the visible and infrared spectrometer (VIR), gamma-ray and neutron detector (GRaND), and framing camera (FC), all indicate that the majority of the surface composition of Vesta is consistent with the composition of the howardite, eucrite, and diogenite meteorites. The Rheasilvia region is richest in diogenite, consistent with the Rheasilvia-forming impact excavating material from deeper within Vesta. The presence of olivine within the Rheasilvia region would also be consistent with excavation of mantle material. However, olivine has only been detected in localized regions of the northern hemisphere, not within Rheasilvia. The origin of this olivine is currently unclear. Though olivine was expected by astronomers to have originated from Vesta's mantle prior to the arrival of the Dawn orbiter, the lack of olivine within the Rheasilvia and Veneneia impact basins complicates this view. Both impact basins excavated Vestian material down to 60–100 km, far deeper than the expected thickness of ~30–40 km for Vesta's crust. Vesta's crust may be far thicker than expected or the violent impact events that created Rheasilvia and Veneneia may have mixed material enough to obscure olivine from observations. Alternatively, Dawn observations of olivine could instead be due to delivery by olivine-rich impactors, unrelated to Vesta's internal structure. Features associated with volatiles Pitted terrain has been observed in four craters on Vesta: Marcia, Cornelia, Numisia and Licinia. The formation of the pitted terrain is proposed to be degassing of impact-heated volatile-bearing material. Along with the pitted terrain, curvilinear gullies are found in Marcia and Cornelia craters. The curvilinear gullies end in lobate deposits, which are sometimes covered by pitted terrain, and are proposed to form by the transient flow of liquid water after buried deposits of ice were melted by the heat of the impacts. Hydrated materials have also been detected, many of which are associated with areas of dark material. 
Consequently, dark material is thought to be largely composed of carbonaceous chondrite, which was deposited on the surface by impacts. Carbonaceous chondrites are comparatively rich in mineralogically bound OH. Geology A large collection of potential samples from Vesta is accessible to scientists, in the form of over 1200 HED meteorites (Vestan achondrites), giving insight into Vesta's geologic history and structure. NASA Infrared Telescope Facility (NASA IRTF) studies of asteroid suggest that it originated from deeper within Vesta than the HED meteorites. Vesta is thought to consist of a metallic iron–nickel core 214–226 km in diameter, an overlying rocky olivine mantle, with a surface crust. From the first appearance of calcium–aluminium-rich inclusions (the first solid matter in the Solar System, forming about 4.567 billion years ago), a likely time line is as follows: Vesta is the only known intact asteroid that has been resurfaced in this manner. Because of this, some scientists refer to Vesta as a protoplanet. However, the presence of iron meteorites and achondritic meteorite classes without identified parent bodies indicates that there once were other differentiated planetesimals with igneous histories, which have since been shattered by impacts. On the basis of the sizes of V-type asteroids (thought to be pieces of Vesta's crust ejected during large impacts), and the depth of Rheasilvia crater (see below), the crust is thought to be roughly thick. Findings from the Dawn spacecraft have found evidence that the troughs that wrap around Vesta could be graben formed by impact-induced faulting (see Troughs section above), meaning that Vesta has more complex geology than other asteroids. Vesta's differentiated interior implies that it was in hydrostatic equilibrium and thus a dwarf planet in the past, but it is not today. The impacts that created the Rheasilvia and Veneneia craters occurred when Vesta was no longer warm and plastic enough to return to an equilibrium shape, distorting its once rounded shape and prohibiting it from being classified as a dwarf planet today. Regolith Vesta's surface is covered by regolith distinct from that found on the Moon or asteroids such as Itokawa. This is because space weathering acts differently. Vesta's surface shows no significant trace of nanophase iron because the impact speeds on Vesta are too low to make rock melting and vaporization an appreciable process. Instead, regolith evolution is dominated by brecciation and subsequent mixing of bright and dark components. The dark component is probably due to the infall of carbonaceous material, whereas the bright component is the original Vesta basaltic soil. Fragments Some small Solar System bodies are suspected to be fragments of Vesta caused by impacts. The Vestian asteroids and HED meteorites are examples. The V-type asteroid 1929 Kollaa has been determined to have a composition akin to cumulate eucrite meteorites, indicating its origin deep within Vesta's crust. Vesta is currently one of only eight identified Solar System bodies of which we have physical samples, coming from a number of meteorites suspected to be Vestan fragments. It is estimated that 1 out of 16 meteorites originated from Vesta. The other identified Solar System samples are from Earth itself, meteorites from Mars, meteorites from the Moon, and samples returned from the Moon, the comet Wild 2, and the asteroids 25143 Itokawa, 162173 Ryugu, and 101955 Bennu. 
Exploration In 1981, a proposal for an asteroid mission was submitted to the European Space Agency (ESA). Named the Asteroidal Gravity Optical and Radar Analysis (AGORA), this spacecraft was to launch some time in 1990–1994 and perform two flybys of large asteroids. The preferred target for this mission was Vesta. AGORA would reach the asteroid belt either by a gravitational slingshot trajectory past Mars or by means of a small ion engine. However, the proposal was refused by the ESA. A joint NASA–ESA asteroid mission was then drawn up for a Multiple Asteroid Orbiter with Solar Electric Propulsion (MAOSEP), with one of the mission profiles including an orbit of Vesta. NASA indicated they were not interested in an asteroid mission. Instead, the ESA set up a technological study of a spacecraft with an ion drive. Other missions to the asteroid belt were proposed in the 1980s by France, Germany, Italy and the United States, but none were approved. Exploration of Vesta by fly-by and impacting penetrator was the second main target of the first plan of the multi-aimed Soviet Vesta mission, developed in cooperation with European countries for realisation in 1991–1994 but canceled due to the dissolution of the Soviet Union. In the early 1990s, NASA initiated the Discovery Program, which was intended to be a series of low-cost scientific missions. In 1996, the program's study team recommended a mission to explore the asteroid belt using a spacecraft with an ion engine as a high priority. Funding for this program remained problematic for several years, but by 2004 the Dawn vehicle had passed its critical design review and construction proceeded. It launched on 27 September 2007 as the first space mission to Vesta. On 3 May 2011, Dawn acquired its first targeting image 1.2 million kilometers from Vesta. On 16 July 2011, NASA confirmed that it received telemetry from Dawn indicating that the spacecraft successfully entered Vesta's orbit. It was scheduled to orbit Vesta for one year, until July 2012. Dawn arrival coincided with late summer in the southern hemisphere of Vesta, with the large crater at Vesta's south pole (Rheasilvia) in sunlight. Because a season on Vesta lasts eleven months, the northern hemisphere, including anticipated compression fractures opposite the crater, would become visible to Dawn cameras before it left orbit. Dawn left orbit around Vesta on 4 September 2012 to travel to Ceres. NASA/DLR released imagery and summary information from a survey orbit, two high-altitude orbits (60–70 m/pixel) and a low-altitude mapping orbit (20 m/pixel), including digital terrain models, videos and atlases. Scientists also used Dawn to calculate Vesta's precise mass and gravity field. The subsequent determination of the J2 component yielded a core diameter estimate of about 220 km assuming a crustal density similar to that of the HED. Dawn data can be accessed by the public at the UCLA website. Observations from Earth orbit Observations from Dawn Vesta comes into view as the Dawn spacecraft approaches and enters orbit: True-color images Detailed images retrieved during the high-altitude (60–70 m/pixel) and low-altitude (~20 m/pixel) mapping orbits are available on the Dawn Mission website of JPL/NASA. Visibility Its size and unusually bright surface make Vesta the brightest asteroid, and it is occasionally visible to the naked eye from dark skies (without light pollution). In May and June 2007, Vesta reached a peak magnitude of +5.4, the brightest since 1989. 
At that time, opposition and perihelion were only a few weeks apart. It was brighter still at its 22 June 2018 opposition, reaching a magnitude of +5.3. Less favorable oppositions during late autumn 2008 in the Northern Hemisphere still had Vesta at magnitudes from +6.5 to +7.3. Even when in conjunction with the Sun, Vesta will have a magnitude around +8.5; thus from a pollution-free sky it can be observed with binoculars even at elongations much smaller than near opposition. 2010–2011 In 2010, Vesta reached opposition in the constellation of Leo on the night of 17–18 February, at about magnitude 6.1, a brightness that makes it visible in binocular range but generally not to the naked eye. Under perfect dark sky conditions where all light pollution is absent, it might be visible to an experienced observer without the use of a telescope or binoculars. Vesta came to opposition again on 5 August 2011, in the constellation of Capricornus at about magnitude 5.6. 2012–2013 Vesta was at opposition again on 9 December 2012. According to Sky and Telescope magazine, Vesta came within about 6 degrees of 1 Ceres during the winter of 2012 and spring of 2013. Vesta orbits the Sun in 3.63 years and Ceres in 4.6 years, so every 17.4 years Vesta overtakes Ceres (the previous overtaking was in April 1996). On 1 December 2012, Vesta had a magnitude of 6.6, but it had decreased to 8.4 by 1 May 2013. 2014 Ceres and Vesta came within one degree of each other in the night sky in July 2014. See also 3103 Eger 3551 Verenia 3908 Nyx 4055 Magellan Asteroids in fiction Diogenite Eucrite List of former planets Howardite Vesta family (vestoids) List of tallest mountains in the Solar System Notes References Bibliography The Dawn Mission to Minor Planets 4 Vesta and 1 Ceres, Christopher T. Russell and Carol A. Raymond (Editors), Springer (2011), Keil, K.; Geological History of Asteroid 4 Vesta: The Smallest Terrestrial Planet in Asteroids III, William Bottke, Alberto Cellino, Paolo Paolicchi, and Richard P. Binzel, (Editors), University of Arizona Press (2002), External links Interactive 3D gravity simulation of the Dawn spacecraft in orbit around Vesta Vesta Trek – An integrated map browser of datasets and maps for 4 Vesta JPL Ephemeris Views of the Solar System: Vesta HubbleSite: Hubble Maps the Asteroid Vesta Encyclopædia Britannica, Vesta – full article HubbleSite: short movie composed from Hubble Space Telescope images from November 1994. Adaptive optics views of Vesta from Keck Observatory 4 Vesta images at ESA/Hubble Dawn at Vesta (NASA press kit on Dawn's operations at Vesta) NASA video Vesta atlas Vesta Vesta 20110716 Former dwarf planets Former dwarf planet candidates Articles containing video clips V-type asteroids (Tholen) V-type asteroids (SMASS) 18070329 18070329 Vesta (mythology) Solar System
4 Vesta
Astronomy
5,833
33,147,401
https://en.wikipedia.org/wiki/WISEPA%20J184124.74%2B700038.0
WISEPA J184124.74+700038.0 (designation is abbreviated to WISE 1841+7000) is a binary system of brown dwarfs of spectral classes T5 + T5, located in the constellation Draco at approximately 131 light-years from Earth. It is notable for being one of the first known binary brown dwarf systems. Discovery WISE 1841+7000 was discovered in 2011 from data collected by the Wide-field Infrared Survey Explorer (WISE), a NASA infrared-wavelength 40 cm (16 in) Earth-orbiting space telescope whose mission lasted from December 2009 to February 2011. WISE 1841+7000A has two discovery papers: Gelino et al. (2011) and Kirkpatrick et al. (2011). Gelino et al. examined nine brown dwarfs for binarity using the Laser Guide Star Adaptive Optics system (LGS-AO) on the Keck II telescope on Mauna Kea; seven of these nine brown dwarfs, including WISE 1841+7000, were themselves newly found. These observations indicated that two of the nine brown dwarfs, including WISE 1841+7000, are binary. Kirkpatrick et al. presented the discovery of 98 brown dwarf systems newly found by WISE, with components of spectral types M, L, T and Y, among which was also WISE 1841+7000. Discovery of companion Component B of the system was discovered in 2011 by Gelino et al. with the Laser Guide Star Adaptive Optics system (LGS-AO) on the Keck II telescope. It was presented in the same article as component A. Distance The trigonometric parallax of WISE 1841+7000 has not yet been measured. Therefore, there are only distance estimates of this object, obtained by indirect (spectrophotometric) means (see table). WISE 1841+7000 distance estimates Non-trigonometric distance estimates are marked in italic. The best estimate is marked in bold. See also The other eight objects checked for binarity by Gelino et al. (2011) on Keck II: binarity found: WISE 0458+6434 (T8.5 + T9.5, component A discovered before by Mainzer et al. (2011)) binarity not found: WISE 0750+2725 (T8.5, newfound) WISE 1322-2340 (T8, newfound) WISE 1614+1739 (T9, newfound) WISE 1617+1807 (T8, discovered before by Burgasser et al. (2011)) WISE 1627+3255 (T6, newfound) WISE 1653+4444 (T8, newfound) WISE 1741+2553 (T9, newfound) Notes References Binary stars Brown dwarfs T-type brown dwarfs Draco (constellation) WISE objects
WISEPA J184124.74+700038.0
Astronomy
571